May 22, 2015

Google taking the SMIL out of SVG.

Google has recently announced their intention to drop SMIL support in Blink, the rendering engine for Chrome. SMIL is a way to animate SVG’s in a declarative way. Google’s argument is that SMIL animation has not become hugely popular and that Web Animations will provide the same functionality. As a result of this announcement, the SVG working group decided to move SMIL from SVG 2 and into its own specification. One could say that SMIL is on life support at the moment.

SMIL’s lack of use is most likely due to its lack of support in IE. Microsoft has declared they will not implement SMIL in IE but they have hinted in the past that they are open to a native JS implementation built on top of Web Animations.

So why would losing SMIL be a great loss?

  1. SMIL declarative animations are easier to write compared to JavaScript or CSS/Web Animations.
  2. SMIL animations are in general more performant.
  3. With SMIL animations one can independently animate different attributes and properties.
  4. JavaScript is not allowed to run inside SVGs in many situations due to security issues so it is not a viable alternative in many cases.
  5. Web Animations don’t replace all the functionality of SMIL. For example, one cannot animate attributes including paths. In particular you won’t be able to do this:
Morphing Batman logos.

A variety of Batman logos, animated with SMIL.

Ironically, YouTube is planning on using SMIL to animate buttons.

As usual, if you are reading this in a blog aggregator and the images don’t display correctly, try viewing on my blog website. Aggregators don’t play well with SVG.

(For more on animating paths, see my blog post on path animations.)

You can read about Google’s intention and the debate that is going at the chromium.org Google group. If you use SMIL or plan to, let Google know that it is important to you.


A figure just to have a nice image in Google+ (which doesn’t do SVG… another reason to frown):

Frown face.

Second stretchgoal reached and new builds!

We’ve to got our second stretchgoal through both Kickstarter and the Paypal donations! We hope we can get many more so that you, our users, get to choose more ways for us to improve Krita. And we have got half a third stretch goal actually implemented: modifier keys for selections!

Oh — and check out Wolthera’s updated brush packs! There are brush packs for inking, painting, filters (with a new heal brush!), washes, flow-normal maps, doodle brushes, experimental brushes and the awesome lace brush in the SFX brush pack!

We’ve had a really busy week. We already gave you an idea of our latest test-build on Monday, but we had to hold back because of the revived crash file recovery wizard on windows… that liked to crash. But it’s fixed now, and we’ve got new builds for you!

So what is exactly new in this build? Especially interesting are all the improvements to PSD import/export support. Yesterday we learned that Katarzyna uses PSD as her working format when working with Krita – we still don’t recommend that, but it’s easier now!

Check the pass-through switch in the group layer entry in the layerbox!

Check the pass-through switch in the group layer entry in the layerbox!

  • Dmitry implemented Pass-Through mode for group layers. Note: filter, transform and transparency masks and pass-through mode don’t work together yet, but loading and saving groups from and to PSD now does! Pass-through is not a fake blending mode as in Photoshop: it is a switch on the group layer. See the screenshot!
  • We now can load and save layerstyles, with patterns from PSD files! Get out your dusty PSDs for testing!
  • Use the right Krita blending mode when a PSD image contains Color Burn.
  • Add Lighter Color and Darker Color blending modes and load them from PSD.
  • When using Krita with a translation active on windows, the delay on starting a stroke is a bit less, but we’re still working on eliminating that delay completely.
  • The color picker cursor now shows the currently picked and previous color.
  • Layer styles can now be used with inherit-alpha
  • Fix some issues with finding templates.
  • Work around an issue in the oxygen widget style on Linux that would crash the OpenGL-based canvas due to double initialization
  • Don’t toggle the layer options when right-clicking on a layer icon to get the context menu (patch by Victor Wåhlström)
  • Update the Window menu when a subwindow closes
  • Load newer Photoshop-generated JPG files correctly by reading the resolution information from the TIFF tags as well. (Yes, JPG resolution is marked in the exiv metadata using TFF tags if you save from Photoshop…)
  • Show the image name in the window menu if it hasn’t been saved yet.
  • Don’t crash when trying to apply isolate-layer on a transform mask
  • Add webp support (at least on Linux, untested on Windows)
  • Add a shortcut to edit/paste into a new image. Patch by Tiffany!
  • Fix the autosave recovery dialog on Windows for unnamed autosaves!
  • Added a warning for intel users who may still be dealing with the broken driver. If Krita works fine for you, just click okay. If not, update your drivers!

New builds for Linux are being created at the moment and will be available through the usual channels.

Linux:

Windows:

From Vista and up, Windows 7 and up is recommended. There is no Windows XP build. If you have a 64 bits version of Windows, don’t use the 32 bits build! The zip files do not need installing, just unpacking, but do not come with the Visual Studio C runtime that is included in the msi installer.

OSX:

(Please keep in mind that these builds are unstable and experimental. Stuff is expected not to work. We make them so we know we’re not introducting build problems and to invite hackers to help us with Krita on OSX.)

CloseHolesRTool

A closing holes tool for the re-topology room, with similar flexibility and power as the LiveClay closing holes tools.


May 21, 2015

Krita comes to Discworld!

We found out that the German Discworld covers were made with Krita, and had the privilege to ask the artist to talk about her work!

(Don’t forget to check out our 2015 Kickstarter campaign as well!)
Color of Magic z napisami Katarzyna Oleska

Hi. My name is Katarzyna Oleska and I am an Illustrator working for publishers, magazines and private clients. A couple of months ago, I came across a free program for painters called Krita. My experience of free programs in the past wasn’t great, but to my surprise, Krita was different. At first I was overwhelmed with the number of dockers and settings, but soon found that they were there for a reason. I fell in love with Krita so much that I left my old Corel Painter and started using Krita to paint my commissions. Most of my recent Terry Pratchett covers were painted in Krita.

How did you get into illustration/book cover painting in the first place?

I started painting covers back in 2003 when I was still studying architecture. I’d always liked to draw and paint and wanted to see my works in print. So one day I took a chance and e-mailed one of the Publishers I wanted to work for. I attached a couple of samples of my works and I got my first job straight away. Pretty lucky. Back then I was still working traditionally but as time went by I bought a tablet and started working digitally.

Pyramids z napisami Katarzyna Oleska

How do you find jobs?

It really depends. Some of the commissions come to me and some I have to chase. If the commission comes to me it’s usually through word of mouth or because the client saw my works online. But I also approach new publishers, send them my work samples, my portfolio etcetera.

Can you choose which books you illustrate, or do you just do what a publisher throws you?

Unfortunately I don’t have the comfort of choosing what I want to illustrate. I can refuse politely if I think I can’t deliver a good illustration, for example when I feel my style wouldn’t fit the story. But publishers usually know what I am good at, they know my portfolio and I have never really refused any cover yet.

How do you determine which scene/character(s) to put on the cover?

The best decision of which scene or characters to put on the cover can only be made if I know the story so whenever I have a chance to read a book I take it. Being a fan of reading myself, I know how important it is for the cover to reflect the story inside; especially with a series like the Terry Pratchett Discworld novels. I was already a huge Terry Pratchett fan, so that wasn’t a problem.

When choosing a scene to paint I usually try to analyse where the main focus of the story is. Very often I am tempted to paint a scene that would look amazing on the cover but I catch myself in time and remind myself that this particular scene, though amazing, wouldn’t really sell the story. So I choose the one that will do it better and will also resonate with the title. For example with “Guards, Guards” the only reasonable choice was to paint the Guards running away from a dragon they were trying to track down. Nothing else would really fit.

Guards Guards z napisami Katarzyna Oleska

Sometimes, however, it’s impossible to read a book because of a tight deadline or the language it was written in. When that happens I try to make sure I find out as much as possible about the book from the publisher.

What sets Krita apart from other tools you’ve used?

The first and most obvious thing is that it’s free. I love than young artists will now have access to such a great tool without spending lots of money. But I would never recommend a program based solely on the price. I have used some free programs and never liked them. They would last a very short time on my computer. With Krita it’s different – I think it’s already a strong competitor to the best known programs on the market.

For me, Krita feels very natural to use. I have worked both in Photoshop and Painter before and although I like them, I’ve always been hoping to find a program that sits somewhere in between those two. As an illustrator I am mostly interested in paint tools. Photoshop has always seemed too technical and not so intuitive. Painter, while trying to deliver the painterly feel, wasn’t really delivering it. With Krita I feel almost like I’m painting. The number of settings for brushes can be overwhelming at first, but it helps to create brushes that are customized specifically for me. I especially like how Krita manages patterns in brushes.

Sorcery z napisami Katarzyna Oleska

What does Krita already do better, and what could make it better still?

As well as the brushes, I also love the vector tools in Krita. I have never before seen a program where tools would change their characteristics depending on what kind of layer we use them on (paint/vector).

I also love that I can pick a color with ctrl and dynamically change the size of the brush by holding shift and dragging my pen. I often only have to use my pinky to control these two.

Rotating the canvas is easy (space-shift) and I am addicted to the mirror tool as I use it to verify the proportions in my paintings (mirroring the image helps spot mistakes). I love that when I’m using two windows for one file the mirror tool only affects one of the windows. The warping tool is also great. I don’t use it much, but I tried it out and I love the way it works. Multiple Brushes and Wrap Around Mode are great too, they make creating patterns so easy. But one of my favourite things is that I can choose my own Advanced Color Setting Shape and Type and that there are so many options that come with it.

Things that could be improved: when I overwrite a brush preset I cannot keep the old icon I created. Perhaps an option to keep the old icon could be added. Seems like a small problem but when using many brushes I get used to the icon and when it’s gone I have to search for my brush. The other improvement would be the ability to merge multiple layers together.

Can you give a quick overview of your workflow?

Sure. I actually prepared a short video that shows how I work. It’s a sketch for Terry Pratchett’s “Wyrd Sisters”. I used the older version of Krita back then but the workflow remains the same.

Do you work closely with the publisher for a book cover, or do you only deliver a painting so you don’t see the result until it’s published?

Very often, before I even start sketching, the publisher will send me a draft of the cover’s layout so that I am aware of how much space I have to work with. Sometimes however, when the publisher doesn’t know the final layout, they give me some directions and let me decide how much space I want to leave for the lettering. Usually after I’ve handed them the initial sketch they can correct me, and ask to change the composition a bit. When it comes to the finished illustration I have full control over it until I’ve e-mailed it to the publisher. Once they have approved it, how it looks when it is published is out of my hands. Sometimes they will send me the final version of the cover, so that I know what it will look like in print and I can make some last minute suggestions but I don’t have real control over the cover itself.

What are the special requirements (colour, resolution, file format) and challenges when you work for print?

I like to work with bigger formats. I think a painting looks better when painted big and then shrunk to the size of the cover rather than when it’s painted with only small size in mind. A big size forces me to be more precise in details so in the end the image looks more crisp and the quality is better. Besides, the client may in the future want to use the painting for a poster and then I know the painting will look great.

I usually work with psd files. I use many layers and this is the best file type for me. When I send out the final image I flatten the image and save it as a tiff file. It may be heavier than jpg but there is no loss in quality. Also I work in an RGB mode but I always switch to CMYK in the end to see if I like how it’s going to look in print (CMYK has fewer colors). If necessary I correct any mistakes I see.

To see more of Katarzyna’s work, visit her site: www.katarzynaoleska.com

Publishing House: Piper –www.piper.de
Lettering: Guter Punkt – www.guter-punkt.de

Twenty years of Qt!

I first encountered Qt in Linux Journal in 1996. Back then, I wasn't much of programmer: I had written a GPL'ed mail and Usenet client in Visual Basic and was playing around with Linux. I wanted to write a word processor, because that was really missing, back then.

I tried to use xforms, but that didn't have open source (it was binary only, can you believe it?), and besides, horrible. Since I didn't particularly care about having a GUI, I tried to use curses, which was worse. I had taken a look at Motif years before, so I didn't look again. Sun's OPEN LOOK had a toolkit that looked nice, but wasn't. For work, I had used Delphi and MFC, and had had to make sense of Oracle 2000. None of those were useful for writing a word processor for Linux.

And then, suddenly, out of the blue (though I remembered some of the names involved with Qt as being involved with my favorite ZX Spectrum emulator), appeared Qt. Qt made sense: the classes were helpfully named, the api's were clear and sensible, the documentation was good, the look and feel worked fine. It had everything you'd need to write a real application. Awesome!

So I got started and discovered that, in the first place, I didn't know what makes a word processor tick, and in the second place, I didn't know C++... So my project foundered, the way projects tend to do, if you're trying to do stuff all by your lonesome.

Then I changed jobs, stopped working on a broken-by-design Visual Basic + Oracle laboratory automation system for Touw in Deventer, started working with Tryllian, building Java-based virtual agent systems. Fun! I learned a lot at that job, it's basically where I Was taught programming properly. I discovered Python, too, and loved it! Pity about that weird tkInter toolkit! And I started using KDE as soon as I had a computer with more than 4 megabytes of ram, and KDE used Qt.

Qt still made sense to me, but I still didn't know C++, though it looked to me that Qt made C++ almost as easy as Java, maybe easier, because there were seriously dumb bits to Java.

Then PyQt arrived. I cannot figure out anymore when that was: Wikipedia doesn't even tell me when it was first released! But I threw myself into it! Started writing my first tutorials in 1999 and followed up writing a whole book on PyQt. My main project back then was Kura, an alternative to SIL's Shoebox, a linguistic database application that later got handed over to the Ludwig-Maximilians University in Munich.

I never could make sense out of Java's GUI toolkit: Swing didn't swing it for me. But that was work stuff, and from 2003, I worked on Krita. I started doing my own painting application in PyQt, about the time Martin Renold started MyPaint. I quickly decided that I couldn't do a painting application on my own, just like I couldn't do a word processor on my own. By that time I had taken a good hard look at GTK as well, and concluded that anyone who'd propose to a customer to base a project on GTK should be sued for malpractice. Qt just made so much more sense to me...

So I found Krita, learned bits of C++, and since then there haven't been many days that I haven't written Qt-based code. And without Qt, I probably would have started a second-hand bookshop or something. Qt not just lets me code with pleasure and profit, it is what keeps me sane, coding!

May 20, 2015

Nostalgic Blend(er) of the past II

Here’s the second batch of recovered videos from Vimeo











May 18, 2015

First Target Reached!

On Sunday, we made the base target for our Kickstarter! Unless too many backers decide to cancel their pledges, funding for making Krita really, really fast and animation support is secure! Now, of course, is not the time to fold the hands and lean back: it would be a pity if we don’t manage to reach a handful or even more stretch goals!

But having reached this milestone, it’s time to make it easy to back the project through paypal:

You can choose your reward level in the comment, and from 15 euros you’ll get your stretch goal voting rights, of course!

Talking about stretch goals… Michael Abrahams surprised us all by submitting a patch on reviewboard that implemented most of the selection tools improvement stretch goal! Shift, alt, shift all, ctrl have been implemented for the polygonal, elliptical and rectangular selection tools. The rest is still todo, so it’s not ready for a build yet.

We’ve been busy working on fixing other issues as well:

  • Dmitry implemented Pass-Through mode for group layers (note: filter, transform and transparency masks and pass-through mode don’t work together yet, but loading and saving from and to PSD does!
  • When using Krita with a translation active on windows, the delay on starting a stroke is a bit less, but we’re still working on eliminating that delay completely
  • The color picker cursor now shows the currently picked and previous color.
  • We now can load layerstyles (with some limitations) from PSD files. Saving is coming next!
  • Layer styles can now be used with inherit-alpha
  • Fix some issues with finding templates
  • Work around an issue in the oxygen widget style that would crash the OpenGL-based canvas due to double initialization
  • Don’t toggle the layer options when right-clicking on a layer icon to get the context menu (patch by Victor Wåhlström)
  • Update the Window menu when a subwindow closes
  • Load newer PSD-generated JPG files correctly by reading the resolution information from the TIFF tags as well. (Yes, JPG resolution is marked in the exiv metadata using TFF tags…)
  • Show the image name in the window menu if it hasn’t been saved yet.
  • Don’t crash when trying to apply isolate-layer on a transform mask
  • Add webp support (at least on Linux, untested on Windows)
  • Use the right Krita blending mode when a PSD image contains Color Burn.
  • Add Lighter Color and Darker Color blending modes and load them from PSD.
  • Add a shortcut to edit/paste into a new image. Patch by Tiffany!
  • Fix the autosave recovery dialog on Windows for unnamed autosaves

Unfortunately, all this has left our codebase in a slightly unstable state…  We tried to make new builds, but they just aren’t good enough yet to share! Working on that, though, and hopefully we’ll get there by Wednesday!

 

Interview with Evgeniy Krivoshekov

frog-rider800

Could you tell us something about yourself?

Hi! My name is Evgeniy Krivoshekov, 27 years old, I’m from the Far East of Russia, Khabarovsk. I’m an engineer but have worked as sales manager, storekeeper and web programmer. Now I’m a 3d-modeller! I like to draw, read books, comics and manga, to watch fantastic movies and cartoons and to ride my bicycle.

Do you paint professionally, as a hobby artist, or both?

I’m not a pro-artist yet. Drawing is my hobby now but I really want to become a full-time professional artist. I take commissions for drawings occasionally, but not all the time.

What genre(s) do you work in?

Fantasy, still life.

Whose work inspires you most — who are your role models as an artist?

Wah! So many artists who inspire me!

I think that I love not the artists but their works. For example: Peter Han’s drawings in traditional technique; Ilya Kuvshinov’s work in photoshop and with anime style; Dave Rapoza, who is an awesome artist who draws in traditional and digital technique with his own style and very detailed; Pascal Campion – his work is full of mood and motion and life! And all those artists who inspire me a little. I like many kinds of art: movies, cartoon, anime, manga and comics, music and all kinds of art inspire me,

How and when did you get to try digital painting for the first time?

Hmmmm… I’m not sure but I think that was in 2007 when my father bought our (my family’s) first computer for learning and studying. I was a student, my sister too, and we needed a computer. My first digital tablet was Genius, and the software was Adobe Photoshop CS2.

What makes you choose digital over traditional painting?

I don’t choose between digital and traditional drawing – I draw with digital and traditional techniques. I’ve been doing traditional drawing since childhood but digital drawing I think I’m just starting to learn.

How did you find out about Krita?

I think it was when I started using Linux about 3-4 years ago. Or when I found out about the artist David Revoy and read about Krita on his website.

What was your first impression?

Ow – it was realy cool! Krita’s GUI is like Photoshop but the brushes are like brushes in Sai, wonderful smudge brushes! It was a very fast program and it was made for Linux. I was so happy!

What do you love about Krita?

Surprisingly freely configurable interface. I used to draw in MyPaint or GIMP, but it was not so easy and comfortable as in Krita. Awesome smudge brushes, dark theme, Russian support by programmer Dmitriy Kazakov. The wheel with brushes and the color wheel on right-click of the mouse – what a nice idea! The system of dockers.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Managing very high resolution files, the stability and especially ANIMATION! I want to do cartoons, that’s why I want an animation toolkit in Krita. It will be so cool to draw cartoons in Krita as in TV Paint. But Krita is so powerful and free.

What sets Krita apart from the other tools that you use?

I use Blender, MyPaint, GIMP and Krita but I rarely mix them. MyPaint and GIMP I rarely use, only when I really need them. Blender and Krita are my favourite software. I think that I will soon start to combine them for mix-art: 3d-art+hand-drawing.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think Frog-rider, Sunny, detailed work with an interesting plot about the merchant on the frog. Funny and simple – everything I like.

What techniques and brushes did you use in it?

I used airbrush and circle standard brushes, basic wet brush, fill block and fill circle brushes, ink brush for sketching, my own texture brush and move tool. That’s all I need for drawing. As regards techniques… sometimes I draw by value, sometimes from a sketch with lines, sometimes black and white with colors underneath (layer blending mode) or with colors without shading – it depends on my mood of the moment,

Where can people see more of your work?

My daily traditional and digital pieces on Instagram. Some photos, but many more drawings. More art at DeviantArt and Artstation.

Anything else you’d like to share?

I just want to say that anyone can draw, it’s all a matter of practice!

May 17, 2015

Nostalgic Blend(er) of the past I

Hi all :)

Recently found my forgotten account in Vimeo where I posted several years ago (seems like yesterday) my experiments and tinkers. Since I will no longer use or maintain that account I’m uploading them to my active Youtube channel to avoid loosing them in case Vimeo servers drop inactive accounts.

OMG what a lot of memories they bring me, they still float around in this blog as I started it in 2007 when I called it: True Volumetrics (for Blender) and it has witness my personal growth from a curious kid to …. an older curious kid XD.

I will never forget at that time my huge lack of resources, it was just a year before, that I could finally bought my first Pentium 4 computer (256 MB RAM and 32 MB Video it was a dream come true at that time in Cuba), with little to none computer programming experience in the past, but a strong love for math and physics since I was studying Nuclear Physics and with a CG passion eroding me inside out for not choosing my perfect career. So, at the third year of Nuclear Physics (2 years before getting my degree!) I took the leap of faith and changed college: I remember it was a drama for me,  Directors, professors and classmates didn’t wanted me to leave as I was showing good promises in the field … making it more difficult for me to make the choice: -what if I’m wrong? what if I’m dropping a unique opportunity career for a mediocre future? cheers to them wherever they are now!

I think I will never know, and every time I face a life restart I have the very same fears, but at least the chosen path gave me the opportunity to realize my 2  biggest dreams: – Working at the CG industry and become free from the illness of my society.

Blender was my dream scene and my playground, due to my ingenuity I could not finish many projects there but I acted as a catalyst for them to happen. Instead of asking in forums for features I drafted them, showed them what can be achieved and that it was really possible, so experienced devs could turn them into life. The community opened me their arms and their heart and I still carry them with me, I guess that being open source, I can go back anytime in the future. Blender have long evolve those points and I see with pleasure how is turning into something bigger that the creators, become self sustainable. It has surpassed the critical mass for open source projects to continue live on forever and not be forgotten in the springs of short lived similar projects.

In those videos also appear my roots at 3DCoat, where I have sprout my first white hairs and I found a nest too and I pushed myself to horizons never expected before :) We can’t have everything we want in life and our lifetime budget is limited :( we get consolation in the hope that, someday in the future we will see again our friends, relatives, lovers and things will be the same but is not. Our life threads diverge and we turn complete strangers again.

All that remains is nostalgia. 

Raul


May 15, 2015

Of file modes, umasks and fmasks, and mounting FAT devices

I have a bunch of devices that use VFAT filesystems. MP3 players, camera SD cards, SD cards in my Android tablet. I mount them through /etc/fstab, and the files always look executable, so when I ls -f them, they all have asterisks after their names. I don't generally execute files on these devices; I'd prefer the files to have a mode that doesn't make them look executable.

I'd like the files to be mode 644 (or 0644 in most programming languages, since it's an octal, or base 8, number). 644 in binary is 110 100 100, or as the Unix ls command puts it, rw-r--r--.

There's a directive, fmask, that you can put in fstab entries to control the mode of files when the device is mounted. (Here's Wikipedia's long umask article.) But how do you get from the mode you want the files to be, 644, to the mask?

The mask (which corresponds to the umask command) represent the bits you don't want to have set. So, for instance, if you don't want the world-execute bit (1) set, you'd put 1 in the mask. If you don't want the world-write bit (2) set, as you likely don't, put 2 in the mask. So that's already a clue that I'm going to want the rightmost byte to be 3: I don't want files mounted from my MP3 player to be either world writable or executable.

But I also don't want to have to puzzle out the details of all nine bits every time I set an fmask. Isn't there some way I can take the mode I want the files to be -- 644 -- and turn them into the mask I'd need to put in /etc/fstab or set as a umask?

Fortunately, there is. It seemed like it ought to be straightforward, but it took a little fiddling to get it into a one-line command I can type. I made it a shell function in my .zshrc:

# What's the complement of a number, e.g. the fmask in fstab to get
# a given file mode for vfat files? Sample usage: invertmask 755
invertmask() {
    python -c "print '0%o' % (~(0777 & 0$1) & 0777)"
}

This takes whatever argument I give to it -- $1 -- and takes only the three rightmost bytes from it, (0777 & 0$1). It takes the bitwise NOT of that, ~. But the result of that is a negative number, and we only want the three rightmost bytes of the result, (result) & 0777, expressed as an octal number -- which we can do in python by printing it as %o. Whew!

Here's a shorter, cleaner looking alias that does the same thing, though it's not as clear about what it's doing:

invertmask1() {
    python -c "print '0%o' % (0777 - 0$1)"
}

So now, for my MP3 player I can put this in /etc/fstab:

UUID=0000-009E /mp3 vfat user,noauto,exec,fmask=133,shortname=lower 0 0

How to open .pdn files? or: Things I wish I'd known earlier.

Paint.net is a graphics program that uses its own binary file format .pdn that almost no other program can open. Paint.net has a large community and many plugins are available including a third part plugin that adds support for OpenRaster. Paint.net is written in C# and requires the Microsoft .Net runtime, meaning current versions work only on Windows Vista or later.

If you need to open PDN files without using Paint.net there is an answer! Lazpaint can open .pdn files and also natively supports OpenRaster. Laz paint is available on Windows, Linux and MacOS X.

In hindsight using Lazpaint would have been easier than taking a flat image and editing it to recreate the layer information I wanted. Although I respect the work done by Paint.net it is yet another example of time wasted and hassle caused by proprietary file formats and vendor lock-in.

May 14, 2015

Result of the work on GCompris is ready..

As I finished the work for the time allocated by this campaign, here is a video showing the result:

Of course being only 15% funded, I couldn’t complete the relooking for everything. But at least, I could update all the core components, all the main menu with all activities icons, and a good bunch of activities.

Thanks again to everyone who helped making this possible; more updates about it later…

May 13, 2015

designing interaction for creative pros /2

Part two of my LGM 2015 lecture (here is part one). It is a tale of cars. For many years I have had these images in my head and used them in my design practice. Let’s check them out.

freude am fahren

First up is the family car:

a catalog shot of a family car source: netcarshow.com

It stands for general software. It is comfortable, safe and general‐purpose. All you need to use it is a minimum of skills, familiarity and practice—in the case of cars this is covered by qualifying for a driving licence.

In the case of software, we are talking casual and enthusiast use. A good example is web browsers. One can start using them with a minimum of skills and practice. After gaining some experience one can comfortably drive use a browser on a daily basis. If a pro web browser exists, then it has escaped my radar.

(It would make a very interesting project, a pro web browser. But first a product maker would have to stand up with a solid vision of pro web browsing; its user groups; and some big innovation that is valuable for these users.

vroooom

When I think of creative pro interfaces, I think of this:

a rally car blasting around a corner on a rallystage in nature source: imgbuddy.com

The rally car. It is still a car, but… different. It is defined by performance. And from that, we can learn a couple of things.

speed, baby

First, creative pros work fast. They ‘wield the knife’ without doubt. A telltale sign of mastery is the speed of execution. I have this in mind all the time when designing for creative pros.

I vividly remember one of the earliest LGMs, Andy Fitzsimon went on stage and demonstrated combining pixel and vector in one image. The pace was impressive, Andy was performing nearly two operations per second.

Bam bam bam bam. At a tempo of 120 beats per minute; the solid tempo of marching bands and disco. That is the rhythm I aim to support, when designing for creative pros.

command and control

Second, creative pros really know their material, the medium they work with. They can, and need to, work with this material as direct and intimate as possible, in order to fulfil creative or commecial goals. This all can be technology‐assisted, as it is with software, but the technology has to stay out of the way, so that it does not break the bond between master and material.

The material I am talking about is that of film, graphics, music, animation, garments, et cetera. These can be digital, yes. However data and code of the software‐in‐use are not part of a creative pro’s material. Developers are always shocked, angry, then sad to learn this.

Thus Metapolator, has been designed for font designers and typographers who know what makes a font and what makes it tick. They know the role of the strokes, curves, points, the black and the white, and of the spacing. They are experienced in shaping these to get results. It is this material that—by design—Metapolator users access, just that it is organised such that they can work ten times faster.

dog eat dog

Third, it’s a competitive world. Creatives pros are not just in business. Also in zero‐budget circles there are real fun and/or prestigious projects where exactly those with proven creative performance, and ability to deliver, get asked.

Tools and software are in constant competition, also in the world of F/LOSS. It is a constant tussle: which ones provide next‐generation workflows with more speed and/or more room for creativity? Only competitive tools make masters competitive.

the point

Now that we got the picture, here is the conflict. The rules—the law and industry mores—that make good family cars may be a bad idea to apply to rally cars. And what makes rally cars competitive, may simply be illegal for family cars.

Every serious software platform has its HIG (human interface guidelines). It is the law, a spiritual guide and a huge source of security for developers. That is, for general software. It is only partly authoritative for software for creative pros. Because truly sticking to the HIG, while done all in good faith, will render creative pro software non‐competitive.

vorsprung durch technik

Rally cars contain custom parts, handmade from high performance materials like aluminium, titanium, carbon, etc. This is expensive and done because nothing off‐the‐shelf is sufficient.

Similarly creative pro software contains custom widgets, handmade at great expense—in design and development. For a decade I have witnessed that it is a force of nature to end up in that situation. Not for the sake of being cool or different, but all in the name of performance.

tough cookie

So, with loose laws and a natural tendency for custom widgets, can you do just what you like when you make creative pro software? Well no. It is tough, you still have to do the right thing. If this situation makes you feel rather lost, without guidance, then reach out and find yourself an interaction designer who really knows this type of material. Make them your compass.

picture show

To illustrate all this, let’s look at some of my designs for Metapolator.

of a glyph—surrounded by two others—all the points that make up its     skeleton are connected by outward radiating lines to big circular handles

Speed, baby! Big handles to select and move individual points on the skeleton of a glyph (i.e. direct control of the material). During a brainstorm session with Metapolator product visionary Simon Egli, he noticed how the points could be connected by rigid sticks to big handles.

I worked out the design with big (fast) handles available for furious working, but out of the way of the glyph, so it can be continuously evaluated (unbroken workflow).

four sliders for mixing fonts, one is reversed and has its thumb aligned     with another slider

This is a custom slider set for freely mixing master fonts—metapolation—to make new fonts. In this case four fonts, but it has been designed to easily scale up to nine or more; a Metapolator strength (vis‑à‐vis the competition).

One of the sliders—‘Alternate’—is in an “illegal” configuration; it is reversed. This is done to implement the rule that the mix of fonts has to always add up to 100%. There is special coupled behaviour between the sliders to ensure that.

The design of this part included a generous amount of exploration and several major revisions. Standard widgets and following the HIG would not deliver that every sliders setting maps to one unique font mix. Apart from a consistency goal, that is also about maximising input resolution. So I broke some rules and went custom.

a crossing 2-D axies system coupled to a single axis, with at least     3 fonts on each axis, with a font family and a single font instance placed     on them

This is also a metapolation control. In this case a three‐dimensional one involving eight master fonts. Working with that many fonts is really a pro thing; you have to know what you are doing and have the experience to set up, identify and pick the ‘good font’ results.

The long blue arrow is a font family, with nine or so fonts as members. The whole family can be manipulated as one entity (i.e. placed and spanned in this 3D space) as can each member font individually.

glyphs a, b and c set in 3 different fonts, with point selections across them

Final example: complex selections. Across three different fonts and three different glyphs, several points have been selected. Now they can be manipulated at the same time. That is definitely not consumer‑grade.

If that looks easy, I say ‘you’re welcome.’ It takes serious planning ahead in the design to allow this interaction; for the three fonts to appear, editable, at the same time; for deep selections within several glyphs to be possible and manageable—the big handles‑on‐sticks help also here.

vroom, vroom

In short: if there is one thing that I want you to take away from this blog post, then it is that image of the rally car. How different its construction, deployment and handling are. Making software for creative pros means making a product that is definitely not consumer‑grade.

That’s it for part two. Stay tuned for part three: 50–50, equal opportunities.

May 12, 2015

designing interaction for creative pros /1

Last week at LGM 2015 I did a lecture on one of my fields of specialisation: designing interaction for creatives. There were four sections and I will cover each of them in a separate blog post. Here is part one.

The lecture coincided with the launch of the demo of Metapolator, a project I have been working on since LGM 2014. All the practical examples will be from that project and my designs for it.

see what I mean?

‘So what’s Metapolator?’ you might ask. Well, there is a definition for that:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.

‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.

‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.

‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

theme time

That is the product vision of Metapolator, which I helped to define the moment I got involved with the project. You can read all about that in the making‑of.

One of the key questions answered in a product vision is: who is this for? And with that, I have arrived at what this blog post is about:

Products need a clear, narrow definition of their target users groups. Software for creatives needs a clear definition whether it is for professionals, or not.

Checking the vision, we see that Metapolator user groups are well defined. They are ‘“pro” font designers’ and ‘typographers.’ The former are pro by definition and the latter come with their own full set of baggage; they are pro by implication.

define it like a pro

But what does pro actually mean? And why is it in quotes in the Metapolator vision? Well, the rather down‐to‐earth definition of professional—earning money with an occupation—is not helping us here. There are many making‐the‐rent professionals who are terrible hacks at what they do.

Instead it is useful to think of pros as those who have mastered a craft—a creative craft in our case. Examples of these are drawing, painting; photographing, filming, writing, animating, and editing these; sewing, the list goes on and on.

Making software for creative pros means making it for those who have worked at least 10.000 hours in that field, honing their craft. And also making it for for the apprentices and journeymen who are working to get there. These two groups do not need special ‘training wheels’ modes; they just need to get their hands dirty with the real thing.

the point

The real world just called and left a message:

making it for pros comes at a price.

First of all, it is very demanding—I will cover this in the follow‑up posts. Second, it puts some real limits on who else you can make it for. Making it for…

pros
is perfectly focussed, to meet those demanding needs.
pros + enthusiasts
(the latter also known as prosumers.) This compromises how good one can make it for pros; better keep in check how sprawling that enthusiast faction is allowed to be.
pros + enthusiasts + casual users
forget it, because pros and casual have diametrically opposite needs. There is no room in the UI for both, and with room I mean screen real estate and communication bandwidth.
pros + casual users
for the same reasons one can royally forget about this one too. Enough said.

the fall‐out

You might think: ‘duh, that speaks for itself, just make the right choice and roll with it.’ If it was only that easy. My experience has been that projects really do not like to commit here, especially when they know the consequences outlined above. And when they did make a choice, I have seen the natural tendency to worm out of it later.

I guess that having clear goals is scary for quite a few folks. Having focussed user groups means saying ‘we don’t care about you’ to vast groups of people. Only the visionary think of that as positive.

Furthermore, clear goals are a fast and effective tool to weed out bad ideas, on an industrial scale. That’s good for the product, but upsets the people who came up with these ideas. So they renegotiate on the clear goals, attacking the root of the rejection.

no fudging!

In short: define it; is your software for creatives made for pros, or not? Then compile a set of coherent user groups. In the case of Metapolator the ‘pro’ font designers and typographers fit together beautifully. Once defined, stick with it.

That’s it for part one. Here is part two: a tale of cars.

[editor’s note: Gee Peter, this post contains a lot of talk about pros, but where is the creative angle?] True, the gist this post is valid for all professionals. The upcoming parts will feature more ‘creative’ content, more Metapolator, and illustrations.

git: Moving partial changes between commits

Now and then I face the fact that I’ve added changes to a commit I’d like to have moved into a different commit. Here is what you do:

What’s there

We have two commits. For illustration purposes I’ve trimmed the log output down:

$ git log --stat
commit 19c698a9ee91a5f03f1c3240fc957e6b328931f5

    WIP: adding tests

 parts/tests/functional/conftest.py       |  4 ++--
 parts/tests/functional/test_frobfrob.py  | 43 ++++++++++
 frobfrob.py                              | 14 +++++++++++++-

commit c7ef6c3014ca9d049dea46fbed44010acf53ae79

    prepare frob frob schemas

 parts/tests/functional/conftest.py           | 31 +++++++++++++
 frobfrob/models.py                           | 32 +++++++++++++

commit 5b30d351f51fda40d37d2f7dc25d2367bd37845a
[...]

Now I want to move the changes made to conftest.py from commit c7ef6c3014ca9d049dea46fbed44010acf53ae79 into commit 19c698a9ee91a5f03f1c3240fc957e6b328931f5 (or HEAD).

Pluck out the commit

In order to pluck out the changes to conftest.py, we’ll reset the file against the previous commit 5b30d351f51fda40d37d2f7dc25d2367bd37845a (you could also use HEAD~3).

$ git reset 5b30d351f51fda40d37d2f7dc25d2367bd37845a parts/tests/functional/conftest.py
Unstaged changes after reset:
M       parts/tests/functional/conftest.py

$ git status -s
MM parts/tests/functional/conftest.py

As you can see, we will have staged changes and unstaged changes. The staged changes remove the additions to the conftest.py file and the unstaged changes add our code to conftest.py

Remove and Add

We now create two commits:

  1. Use the staged changes for a new commit which we’ll squash with c7ef6c3014ca9d049dea46fbed44010acf53ae79.
  2. Stage the unstaged changes and create another commit which we’ll squash with 19c698a9ee91a5f03f1c3240fc957e6b328931f5 or HEAD.
# 1. commit Message is something like: squash: removes changes to conftest.py
$ git commit

# 2. commit
# stage changes
$ git add -p

# commit, message will be something like: squash: adds changes to conftest.py
$ git commit

# we end up with two additional commits
$ git log --oneline
492ff22 Adds changes to conftest
8485946 removes conftest files
19c698a WIP: adding tests
c7ef6c3 prepare frob frob schemas
Interactive rebase put’s it all together

Now use an interactive rebase to squash the changes with the right commits:

$ git rebase -i HEAD~5

May 11, 2015

Yet new Close Holes algorithm

Untitled

Here’s new closing holes algorithm in action, based on the FillHoles research on convex contour splitting and advancing front triangulation, greatly improving existing tools.


May 10, 2015

Sat 2015/May/09

  • Decay

    A beautiful, large, old house in downtown Xalapa, where patina is turning into ruin.

    Decayed door Decayed roof Floor tiles Door latch Decayed door

May 09, 2015

Interview with Amelia Hamrick

duelling with a demon 800px

Could you tell us something about yourself?

My name is Amelia Hamrick, and I’m a junior music and fine arts double major at Oklahoma Christian University. I’m actually working towards a masters in library science, but I’d like to get into illustration, concept art and webcomics on the side!

Do you paint professionally, as a hobby artist, or both?

I’m still an art student but I hope to work on a professional level. I definitely draw a lot for fun outside of classwork though, so both!

What genre(s) do you work in?

Fantasy and sci-fi mostly, though I dabble in other genres!

Whose work inspires you most — who are your role models as an
artist?

Hmm… In no particular order, Hiromu Arakawa, Mike Mignola, Hayao Miyazaki, Maurice Noble, Bill Watterson, Roy Lichtenstein, Alphonse Mucha… There’s too many to list!

How and when did you get to try digital painting for the first time?

I started out trying to color ink drawings in GIMP with a mouse! That never turned out very well, haha. I talked my parents into getting me a little wacom bamboo tablet for Christmas when I was in… 9th grade, I think?

What makes you choose digital over traditional painting?

Layers, ctrl-Z, and transform tools, hahaha! I still do a lot of traditional work (I’ve really enjoyed working with gouache) but most of my not-schoolwork art is done digitally now.

How did you find out about Krita?

I used to go on occasional Google search sprees for all the latest drawing applications, and I found Krita during one of these about 3 or 4 years ago. My old computer couldn’t handle the windows build, though, and I hadn’t really gotten into Linux yet… I run ubuntuGNOME on my school-provided Macbook now, so I tried Krita again last year when it was just entering version 2.8, and I’ve used it ever since!

What was your first impression?

When I tried it for the very first time on an old computer it looked really impressive, but it was just too much for that poor old box, haha. Krita’s performance has improved tremendously!

What do you love about Krita?

It combines just about all the features of Photoshop that I’d use with a more streamlined interface, a MUCH better brush engine (loving the color blending and pressing E for erasing with any tool,) stroke smoothing and canvas mirroring like Paint Tool SAI… The feature I take most for granted, though, is the right-click preset palette!

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would really love to see a more stable OS X build, as I often try to convince my fellow art students to give Krita a shot but they mostly use OS X and are wary of trying experimental builds. Other than that, the software in its current state is fantastic and I’m really excited for the new features the Kickstarter will bring (especially the animation tools and gradient map!)

What sets Krita apart from the other tools that you use?

I guess I’ve already answered this, but it combines features from several different painting programs I’ve tried into one killer app! It’s the only program where I haven’t felt like I’m missing anything from my workflow. And being open-source is the cherry on top!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

This poster that I drew for a friend of a scene from her Pathfinder character’s backstory is the most ambitious project I’ve completed in Krita! She won her magical flying-V guitar in a shredding duel with a punk rock demon… how can I pass that mental image up?

What techniques and brushes did you use in it?

I used primarily the ink pen with the tilt function (ink_tilt_20) and the hairy brush (bristles_hairy). I sketched and inked first of course, then painted everything in greyscale on one layer and colored on top of that with an overlay layer! I also used the perspective guides quite liberally (though I hadn’t yet figured out how to properly use the ruler assistants, haha)

Where can people see more of your work?

I have an art blog on tumblr, and I’m working on getting a proper portfolio website set up!

Anything else you’d like to share?

Thank you so much to everyone involved in developing Krita! I wish I could help code… maybe I can volunteer with updating the user manual!

I’m also currently doing sketch commissions to help fund the Krita kickstarter!

May 08, 2015

Improved CloseHoles tool

Hi

Recent developments into QuadFill tool, proving its bulletproof  robustness I found a must to also port a similar algorithm into our closing holes arsenal. The result is quite preomising and equally robust. Our previous tools for the task where quite good, but there was still some remaining “difficult” cases to handle, like the ones I show here. Well, soon that will be the past ;)

Previous closing holes algorithm where non optimal for this case

Previous closing holes algorithm where non optimal for this case

Convex contour partitioning based

Convex contour partitioning based

Yet better using flat interpolation for this case.

Yet better using flat interpolation for this case.

 

Cheers

 

 


PIXLS.US Now Live!

I checked the first post I had made on PIXLS.US while I was building it, and it appears it was around the end of August, 2014. I had probably been working on it for at least a few weeks before that. Basically, it's almost been about 10 months since I started this crazy idea.



Finally, we are "officially" launched and live. Phew!

Help!


I don't normally ask for things from folks who read what I write here. I'm going to make an exception this time. I spent a lot of time building the infrastructure for what I hope will be an awesome community for free-software photographers.

So naturally, I want to see it succeed. If you have a moment and don't mind, please consider sharing news of the launch! The more people that know about it, the better for everyone! We can't build a community if folks don't know it's there! :) (Of course, come by and join us yourselves as well!).

I'll be porting more of my old tutorials over as well as writing new material there (and hopefully getting other talented folks to write as well).

Thank You!

Also, I want to take a moment to recognize and thank all of you who either donated or clicked on an ad. Those funds are what helped me pay for the server space to host the site as well as the forums, and will keep ads off the site. I'm basically just rolling any donations back into hosting the site and hopefully finding a way to pay folks for writing in the future. Thank you all!

May 07, 2015

Time to kick the tires on the new Fedora websites in staging!

So a couple of weeks ago I mentioned the work robyduck and the Fedora websites team have been putting in on the new websites for Fedora, primarily, spins.fedoraproject.org and labs.fedoraproject.org. Here’s the handy little diagram I put together back then to explain:

diagram showing four different fedora sites

This week, robyduck got the new site designs into staging, which means you can try out the new work-in-progress sites right now and provide us your helpful feedback, suggestions (and dare I suggest it) content contributions to make the sites even better. :)

labs.fedoraproject.org

Click below to visit the staging site:
Screenshot from 2015-05-07 17:02:39

spins.fedoraproject.org

Click below to visit the staging site:
Screenshot from 2015-05-07 17:02:29

You may notice as you peruse through the Fedora Labs staging site and the Fedora Spins staging site you’re going to see some bogus stuff. For example, the Robotics Suite page highlights Gimp and Inkscape as included applications. :) This is because a lot of the content is filler content and we need help from the users of these spins and experts in the individual technologies of what we should be featuring and how we should be describing these spins.

So this is sort of a continuation of our earlier call for help, but this one is really mostly focused on content – we really need your help.

help-1

With the staging sites for spins.fedoraproject.org and labs.fedoraproject.org up and running, we are hoping this will make it easier for folks to understand where we are lacking content and could use some help figuring out what to say about each spin. It helps to see it all in context for every spin.

This is a good way to contribute to an open source project if you enjoy writing or documentation – we will handle all the details of getting the content into the pages, you would simply need to email us or blog comment (or whatever is easiest for you) the content you are contributing.

If you are interested in helping us out or even have a particular interest in one of the following spins that is in most need of help, can you get in touch with us and we’ll help you get started?

  • Robotics Suite – needs list of featured applications with short descriptions.
  • Fedora Jam – needs list of featured applications with short descriptions. Could use an updated top-level description (the 2 paragraphs up top) as well.
  • Security Lab – needs list of featured applications with short descriptions.
  • Sugar on a Stick – needs list of featured applications with short descriptions.

We’d appreciate any help you can provide. Get in touch in the comments to this post!

Using the Transform Masks

screencast_azaleas

 


(This video has subtitles in english)

 

Okay, so I’ve wanted to do a tutorial for transform masks for a while now, and this is sorta ending up to be a flower-drawing tutorial. Do note that this tutorial requires you to use Krita 2.9.4 at MINIMUM. It has a certain speed-up that allows you to work with transform masks reliably!

I like drawing flowers because they are a bit of an unappreciated subject, yet allow for a lot of practice in terms of rendering. Also, you can explore cool tricks in Krita with them.

Today’s flower is the Azalea flower. These flowers are usually pink to red and appear in clusters, the clusters allow me to exercise with transform masks!

I got an image from Wikipedia for reference, mostly because it’s public domain, and as an artist I find it important to respect other artists. You can copy it and, if you already have a canvas, edit->paste as new image or new->from clipboard.

Then, if you didn’t have a new canvas make one. I made an a5 300dpi canvas. This is not very big, but we’re only practicing. I also have the background colour set to a yellow-greyish colour (#CAC5B3), partly because it reminds me of paper, and partly because bright screen white can strain the eyes and make it difficult to focus on values and colours while painting. Also, due to the lack of strain on the eyes, you’ll find yourself soothed a bit. Other artists use #c0c0c0, or even more different values.

So, if you go to window->tile, you will find that now your reference image and your working canvas are side by side. The reason I am using this instead of the docker is because I am lazy and don’t feel like saving the wikipedia image. We’re not going to touch the image much.
azelea_01_trunk

Let’s get to drawing!

 

First we make a bunch of branches. I picked a slightly darker colour here than usual, because I know that I’ll be painting over these branches with the lighter colours later on. Look at the reference how branches are formed.

azelea_02_drawing flowers

Then we make an approximation of a single flower on a layer. We make a few of these, all on separate layers.
We also do not colour pick the red, but we guess at it. This is good practice, so we can learn to analyse a colour as well as how to use our colour selector. If we’d only pick colours, it would be difficult to understand the relationship between them, so it’s best to attempt matching them by eye.

 

azelea_03_filling flowers

I chose to make the flower shape opaque quickly by using the ‘behind’ blending mode. This’ll mean Krita is painting the new pixels behind the old ones. Very useful for quickly filling up shapes, just don’t forget to go back to ‘normal’ once you’re done.

azelea_04_finished setup

Now, we’ll put the flowers in the upper left corner, and group them. You can group by making a group layer, and selecting the flower layers in your docker with ctrl+click and dragging them into the group.
The reason why we’re putting them in the upper left corner is because we’ll be selecting them a lot, and Krita allows you to select layers with ‘R’+Click on the canvas quickly. Just hold ‘R’ and click the pixels belonging to the layer you want, and Krita will select the layer in the layer docker.

azelea_05_clonelayer

Clone Layers

Now, we will make clusters.
What we’ll be doing is that we select a given flower and then make a new clone layer. A clone layer is a layer that is literally a clone of the original. They can’t be edited themselves, but edit the original and the clone layer will follow suit. Clone Layers, and File layers, are our greatest friends when it comes to transform masks, and you’ll see why in a moment.

azelea_06_transformmask

You’ll quickly notice that our flowers are not good enough for a cluster: we need far more angles on the profile for example. if only there was a way to transform them… but we can’t do that with clone layers. Or can we?

Enter Transform Masks!

Transform Masks are a really powerful feature introduced in 2.9. They are in fact so powerful, that when you first use them, you can’t even begin to grasp where to use them.

Transform masks allow us to do a transform operation onto a layer, any given layer, and have it be completely dynamic! This includes our clone layer flowers!

How to use them:

Right click the layer you want to do the transform on, and add a ‘transform mask’.

A transform mask should now have been added. You can recognise them by the little ‘scissor’ icon.

Now, with the transform mask selected, select the transform tool, and rotate our clone layer. Apply the transform.
You know you’re successful when you can hide the transform mask, and the layer goes back to its original state!

You can even go and edit your transform! Just activate the transform tool again while on a transform mask, and you will see the original transform so you can edit it. If you go to a different transform operation however, you will reset the transform completely, so watch out.

 

azelea_07_clusters

We’ll be only using affine transformations in this tutorial (which are the regular and perspective transform), but this can also be done with warp, cage and liquify, which’ll have a bit of a delay (3 seconds to be precise). This is to prevent your computer from being over-occupied with these more complex transforms, so you can keep on painting.

We continue on making our clusters till we have a nice arrangement.

 

azelea_08_leaves

Now do the same thing for the leaves.

azelea_09_paintingoriginals

Now, if you select the original paint layers and draw on them, you can see that all clone masks are immediately updated!

Above you can see there’s been a new view added so we can focus on painting the flower and at the same time see how it’ll look. You can make a new view by going window->new view and selecting the name of your current canvas (save first!). Views can be rotated and mirrored differently.

Now continue painting the original flowers and leaves, and we’ll move over to adding extra shadow to make it seem more lifelike!

azelea_10_alphainheritance_1

Alpha Inheritance

We’re now going to use Alpha Inheritance. Alpha inheritance is an ill-understood concept, because a lot of programs use ‘clipping masks’ instead, which clip the layer’s alpha using only the alpha of the first next layer.

Alpha inheritance, however, uses all layers in a stack, so all the layers in the group that haven’t got alpha inheritance active themselves, or all the layers in the stack when the layer isn’t in a group. Because most people have an opaque layer at the bottom of their layer stack, alpha inheritance doesn’t seem to do much.

But for us, alpha inheritance is useful, because we can use all clone-layers in a cluster (if you grouped them), transformed or not, for clipping. Just draw a light blue square over all the flowers in a given cluster.

azelea_11_alphainheritance_2

Then press the last icon in the layer stack, the alpha-inherit button, to activate alpha-inheritance.

azelea_12_alphainheritance_3

Set the layer to multiply then, so it’ll look like everything’s darker blue.

azelea_13_alphainheritance_4

Then, with multiply and alpha inheritance on, use an eraser to remove the areas where there should be no shadow.

azelea_14_alphainheritance_5

For the highlights use exactly the same method, AND exactly the same colour, but instead set the layer to ‘Divide’ (you can find this amongst the ‘Arithmetic’ blending modes). Using Divide has exactly the opposite effect as using multiply with the same colour. The benefit of this is that you can easily set up a complementary harmony in your shadows and highlights using these two.

azelea_15_alphainheritance_6

Do this with all clusters and leaves, and maybe on the whole plant (you will first need to stick it into a group layer given the background is opaque) and you’re done!

Transform masks can be used on paint layers, vector layers, group layers, clone layers and even file layers. I hope this tutorial has given you a nice idea on how to use them, and hope to see much more use of the transform masks in the future!

You can get the file I made here to examine it further! (Caution: It will freeze up Krita if your version is below 2.9.4. The speed-ups in 2.9.4 are due to this file.)

May 06, 2015

Tips for passing Google's "Mobile Friendly" tests

I saw on Slashdot that Google is going to start down-rating sites that don't meet its criteria of "mobile-friendly": Are you ready for Google's 'Mobilegeddon' on Tuesday?. And from the the Slashdot discussion, it was pretty clear that Google's definition included some arbitrary hoops to jump through.

So I headed over to Google's Mobile-friendly test to check out some of my pages.

Now, most of my website seemed to me like it ought to be pretty mobile friendly. It's size agnostic: I don't specify any arbitrary page widths in pixels, so most of my pages can resize down as far as necessary (I was under the impression that was what "responsive design" meant for websites, though I've been doing it for many years and it seems now that "responsive design" includes a whole lot of phone-specific tweaks and elaborate CSS for moving things around based on size.) I also don't set font sizes that might make the page less accessible to someone with vision problems -- or to someone on a small screen with high pixel density. So I was pretty confident.

[Google's mobile-friendly test page] I shouldn't have been. Basically all of my pages failed. And in chasing down some of the problems I've learned a bit about Google's mobile rules, as well as about some weird quirks in how current mobile browsers render websites.

Basically, all of my pages failed with the same three errors:

  • Text too small to read
  • Links too close together
  • Mobile viewport not set

What? I wasn't specifying text size at all -- if the text is too small to read with the default font, surely that's a bug in the mobile browser, not a bug in my website. Same with links too close together, when I'm using the browser's default line spacing.

But it turned out that the first two points were meaningless. They were just a side effect of that third error: the mobile viewport.

The mandatory meta viewport tag

It turns out that any page that doesn't add a new meta tag, called "viewport", will automatically fail Google's mobile friendly test and be downranked accordingly. What's that all about?

Apparently it's originally Apple's fault. iPhones, by default, pretend their screen is 980 pixels wide instead of the actual 320 or 640, and render content accordingly, and so they shrink everything down by a factor of 3 (980/320). They do this assuming that most website designers will set a hard limit of 980 pixels (which I've always considered to be bad design) ... and further assuming that their users care more about seeing the beautiful layout of a website than about reading the website's text.

And Google apparently felt, at some point during the Android development process, that they should copy Apple in this silly behavior. I'm not sure when Android started doing this; my Android 2.3 Samsung doesn't do it, so it must have happened later than that.

Anyway, after implementing this, Apple then introduced a meta tag you can add to an HTML file to tell iPhone browsers not to do this scaling, and to display the text at normal text size. There are various forms for this tag, but the most common is:

<meta name="viewport" content="width=device-width, initial-scale=1">
(A lot of examples I found on the web at first suggested this: <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"> but don't do that -- it prevents people from zooming in to see more detail, and hurts the accessibility of the page, since people who need to zoom in won't be able to. Here's more on that: Stop using the viewport meta tag (until you know how to use it.)

Just to be clear, Google is telling us that in order not to have our pages downgraded, we have to add a new tag to every page on the web to tell mobile browsers not to do something silly that they shouldn't have been doing in the first place, and which Google implemented to copy a crazy thing Apple was doing.

How width and initial-scale relate

Documentation on how width and initial-scale relate to each other, and which takes precedence, are scant. Apple's documentation on the meta viewport tag says that setting initial-scale=1 automatically sets width=device-width. That implies that the two are basically equivalent: that they're only different if you want to do something else, like set a page width in pixels (use width=) or set the width to some ratio of the device width other than 1 (use initial-scale=.

That means that using initial-scale=1 should imply width=device-width -- yet nearly everyone on the web seems to use both. So I'm doing that, too. Apparently there was once a point to it: some older iPhones had a bug involving switching orientation to landscape mode, and specifying both initial-scale=1 and width=device-width helped, but supposedly that's long since been fixed.

initial-scale=2, by the way, sets the viewport to half what it would have been otherwise; so if the width would have been 320, it sets it to 160, so you'll see half as much. Why you'd want to set initial-scale to anything besides 1 in a web page, I don't know.

If the width specified by initial-scale conflicts with that specified by width, supposedly iOS browsers will take the larger of the two, while Android won't accept a width directive less than 320, according to Quirks mode: testing Meta viewport.

It would be lovely to be able to test this stuff; but my only Android device is running Android 2.3, which doesn't do all this silly zooming out. It does what a sensible small-screen device should do: it shows text at normal, readable size by default, and lets you zoom in or out if you need to.

(Only marginally related, but interesting if you're doing elaborate stylesheets that take device resolution into account, is A List Apart's discussion, A Pixel Identity Crisis.)

Control width of images

[Image with max-width 100%] Once I added meta viewport tags, most of my pages passed the test. But I was seeing something else on some of my photo pages, as well as blog pages where I have inline images:

  • Content wider than screen
  • Links too close together

Image pages are all about showing an image. Many of my images are wider than 320 pixels ... and thus get flagged as too wide for the screen. Note the scrollbars, and how you can only see a fraction of the image.

There's a simple way to fix this, and unlike the meta viewport thing, it actually makes sense. The solution is to force images to be no wider than the screen with this little piece of CSS:

<style type="text/css">
  img { max-width: 100%; height: auto; }
</style>

[Image with max-width 100%] I've been using similar CSS in my RSS reader for several months, and I know how much better it made the web, on news sites that insist on using 1600 pixel wide images inline in stories. So I'm happy to add it to my photo pages. If someone on a mobile browser wants to view every hair in a squirrel's tail, they can still zoom in to the page, or long-press on the image to view it at full resolution. Or rotate to landscape mode.

The CSS rule works for those wide page banners too. Or you can use overflow: hidden if the right side of your banner isn't all that important.

Anyway, that takes care of the "page too wide" problem. As for the "Links too close together" even after I added the meta viewport tag, that was just plain bad HTML and CSS, showing that I don't do enough testing on different window sizes. I fixed it so the buttons lay out better and don't draw on top of each other on super narrow screens, which I should have done long ago. Likewise for some layout problems I found on my blog.

So despite my annoyance with the whole viewport thing, Google's mandate did make me re-examine some pages that really needed fixing, and should have improved my website quite a bit for anyone looking at it on a small screen. I'm glad of that.

It'll be a while before I have all my pages converted, especially that business of adding the meta tag to all of them. But readers, if you see usability problems with my site, whether on mobile devices or otherwise, please tell me about them!

Krita 2.9.4 released!

We’re not just keeping an eye on the kickstarter campaign (three days and almost at 50%! but go ahead and support us by all means, we’re not there yet!), we’re also working hard on Krita itself. Dmitry is busy with improving the performance of clone layers, adding PSD file support to the Layer Styles feature and fixing loading and saving masks to PSD files (we implemented that in October, but broke it subsequently…), and we’ve got a brand new release for you today.

Well, I made packages for Windows available already on Sunday, but here’s scoop — what’s in, what not! Layer styles, startup speed improvements, memory consumption improvements, bug fixes!

Big New Things

And we mean big. This is the first release with the layer styles feature sponsored by last year’s kickstarter!

  • Implement Photoshop layer styles. Note: this is the first version. Some features are not implemented and we load and save only to Krita’s native file format and ASL style library files (not PSD files yet). There is also still a bug with masks and layer styles
  • make start up faster by not waiting for the presets to be loaded (startup times are now 30-50% faster )
  • Big speed improvement when using transform masks and filters. The move tool is about 20% faster.
  • Reduced the  download size of Krita for Windows by 33% (145MB to 97MB). This is the result of cleaning up unused files and fixing translations

And then there are the bug fixes…

  • Fix the patch count of the color history
  • Lots of fixes to the layout of docker panels, dialogs and other parts of Krita
  • Lots of fixes for special widgets when using the Plastique style
  • Fix issues with resizing the icon size in resource selectors
  • Fix usability issues in the crop tool (reset size settings after doing cropping)
  • Add a function to hide docker titlebars
  • Fix issues with the default settings button
  • Save memory by not loading or saving texture information for brush presets that don’t use textures
  • Automatically add a tag based on the filename for all brush tips from Adobe ABR brush collections
  • Make Export and Save as default to the folder the original file came from
  • Make it possible to switch off compression for layers in kra files (bigger files, but faster saving)
  • Disable opening 32 bit float grayscale TIFF files: we don’t support that yet
  • Fix memory leak when using gradients
  • Fix color serialization from user interface to GMIC (bug 345639)
  • Fix crash when toggling GMIC preview checkbox (bug 344058)
  • Make it possible to re-enable the splash screen
  • Show the label for the sliders inside the slide, to save space.
  • Fix HSV options for the grid and spray brush
  • Don’t show the zoom on-canvas notification while loading an image
  • Fix many memory leaks
  • Fix the specific color selector docker so it doesn’t grow too big
  • Allow the breeze theme to be used on platforms other than KDE
  • Don’t crash when creating a pattern-filled shape if no pattern is selected (bug 346990)
  • Fix loading floating point TIFF files (bug 344334)
  • Fix loading tags for resources from installed bundles
  • Make it possible to ship default tags for our default resources (bug 338134 — needs more work to create a good default definition)
  • Remember the last chosen tag in the resource selectors (bug 346703)
  • Fix bug 346355: don’t say “All presets” in the brush tip selector’s filter

Downloads

OSX:

(Please keep in mind that these builds are unstable and experimental. Stuff is expected not to work. We make them so we know we’re not introducting build problems and to invite hackers to help us with Krita on OSX.)

May 05, 2015

Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 5 May 2015

This is a very, very quick summary post:

Highlights

  • We had three tickets that opened up today – ticket 371 for a sticker design that mleonova picked up, ticket 369 for the flock 2015 t-shirt which riecatnor picked up, and 210 for an updated map graphics which bronwynmowens grabbed.
  • We had some nice “Going to FUDcon” artwork designed by jurankdankkal and gnokki in ticket 359 – good work!!
  • Bronwyn finished her the artwork for her first ticket, ticket 347 which is a series of banners to advertise Fedora on stackexchange.
  • We gave threebean some advice on fedmenu (ticket 374) and I think we came up with some interesting ideas. I am going to copy/pasta the discussion into ticket 374 and we may end up having another discussion about it.
  • kirkB has been making progress on ticket 364 (to update the Design team wiki page) and posted his draft asking for comments / feedback. If you are a design team member please take a look and let him know what you think in the ticket! :)

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

Telling the user about firmware updates

A common use-case that has appeared over the last week is that some vendors just want to notify people there is updated firmware available, but they don’t want fwupd to apply it automatically. This might be because an external programmer is required to update, the flashing tool is non-free, or other manual steps are required.

If anyone is interested in doing this for their device, there are just two USB string descriptors to add, and then it all just magically works once AppStream metadata is supplied. The device doesn’t have to be OpenHardware, so there’s no real excuse.

May 04, 2015

Site upgrade

Hi all

Domain

I’m starting to see the benefits of moving to U.S , chains are falling one by one.My blog is upgraded to farsthary.com , That’s it! plain and easy ;)

Accessing from previous address will be redirected here. Will be worth the upgrade? what experiences do you have with custom domains?

Cheers


Performance and Animation (and more): Join Krita’s 2015 Kickstarter Project

Let’s make Krita as fast, or faster than Photoshop! That’s this year’s Krita Kickstarter theme — to begin with.

Last year’s kickstarter was a big success and all the support resulted in the biggest, best Krita release ever, Krita 2.9, with a huge number of exciting features. In fact, this week we’ll be releasing Krita 2.9.4, the first version of Krita with the Photoshop-type layer styles feature included! (As well as speed-ups and dozens of bug fixes…)

This year, we’re going for two ambitious goals. The first is raw, interactive performance. Painting on a big canvas, with a big brush, with textures and gradients. Krita should become as fast as (dare we say it?) Photoshop! We’ve already gotten a proof of concept working, but it needs a lot of work deep down in the core of Krita’s code. As a result, Krita should also become much more memory-efficient.

We also learned our lesson from the previous three animation plugin projects: if we want Krita to support traditional hand-drawn animation, we need to put animation right at the core. Not so coincidentally, that’s exactly the same place where we need to work to make Krita’s painting performance outstanding.

And that’s this year’s big topic: it’s a lot of really hard work, and it needs to be done — and we need your help for that! With your backing, Krita 3.1 will be even better, even faster, even more fun to use.

If we go over target (and last year we did go over target!), then every 1500 euros will unlock a stretch goal, and our backers will get to vote on the stretch goals, just like last year!

Check out our kickstarter page here:

https://www.kickstarter.com/projects/krita/krita-free-paint-app-lets-make-it-faster-than-phot

Quadrangulation test

A quick test of the quadrangulation algorithm for arbitrary contours, I have remapped the functionality to the close holes tools just for development purposes, so don’t take the Interface as is, also the results are not turned into actual geometry for now. Is just to test how well and robust the algorithm can optimally fill any given contour (the holes) with quads. By no means a trivial task. ;)

What actual tools would you think can benefit from this?

Cheers


Bullet 2.83 released and upcoming SIGGRAPH 2015 course

bullet2.83

The new Bullet Physics SDK 2.83 is available from github. The biggest change is the new example browser using OpenGL 3+. For more changes and features, see the docs/BulletQuickstart.pdf as part of the release. For more information and download link, see http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=18&t=10527

Also, our proposal for a course on Bullet got accepted for the upcoming SIGGRAPH 2015 conference in Los Angeles. Hope to see you there!

May 03, 2015

Usability and Playability

I could be programming but instead today I am playing games and watching television and films. I have always been a fan of Tetris which is a classic, but I am continuing to play an annoyingly difficult game, that to be honest, I am not sure I even enjoy all that much, but it is strangely compelling. My interest in usability coincides with my interest in playability. Each area has their own jargon but are very similar, the biggest difference is that games will intentionally make things difficult. Better games go to great lengths to make the difficulties challenging without being frustrating, gradually increasing the difficulty as they progress, and engaging the user without punishing them for mistakes. (Providing save points in a game game is similar to providing an undo system in an application, both make the system more forgiving and the users allow users to recover from mistakes, rather than punishing and them and forcing them to do things all over again.)

There is a great presentation about making games more juicy (short article including video) which I think most developers will find interesting. Essentially the presentation explains that a game can be improved significantly without adding any core features. The game functionality remains simple but the usability and playability is improved, providing a fuller more immersive experience. The animation added to the game is not merely about showing off, but provides a great level of feedback and interactivity. Theme music and sound effects also add to the experience, and again provide greater feedback to the user. The difference between the game at the start and at the end of the presentation is striking, stunning even.

I am not suggesting that flashy animation or theme music is a good idea for every application but (if the toolkit and infrastructure already provided is good enough) it is worth reconsidering that a small bit of "juice" like animations or sounds effect could be useful, not just in games, in any program. There are annoying bad examples too but when done correctly it is all about providing more feedback for users, and helping make applications feel more interactive and responsive.
For a very simple example I have seen a many users accidentally switch from Insert to Overwrite mode and not know how not get out of it, and unfortunately many things must be learned by trial and error. Abiword changes the shape and colour of the cursor (from a vertical line to a red block) and it could potentially also provide a sound effect when switching modes. Food for thought (alternative video link at Youtube).

Interview with Wolthera

firebird800

Could you tell us something about yourself?

My name is Wolthera, I am 25, studied Game Design and currently studying Humanities, because I want to become a better game designer, and I hope to make games in the future as a job. I also draw comics, though nothing has been published yet.

I am also part of the Krita team as a volunteer, and aside from fixing bugs and adding simple features, I do a lot of non-programming things like helpdesking, maintaining the manual, writing tutorials, providing resources, making demonstration videos, etcetera. The answers to this artist interview will thus be from a slightly different angle.

Do you paint professionally, as a hobby artist, or both?

I mainly paint as a hobby artist, and I am trying to get into professional drawing as either concept artist or game artist, but strangely enough the only gigs I’ve had until now are logo design and UI design.

What genre(s) do you work in?

Mostly fantasy, I enjoy the strange things I can do with that.

Whose work inspires you most — who are your role models as an artist?

Ah… It’s a bit of a hotchpotch. I really enjoy the Art Nouveau and Romantic artists for their use of colour and linework. In particular art nouveau interests me in the way how they deal with presenting detail to the audience. I enjoy this type of thing also in comics, with a particular interest toward Manga, but I was raised with Franco-Belgian comics which are really impressive in this regard as well.

I also really like the compositions coming from Impressionist painters and the colour palettes from Expressionist ones, and how you can see both in the more abstract modern arts. Of contemporary art I most enjoy the architectural, and how it deals with space, and that’s largely because of my affinity with game design.

How did you get to try digital painting for the first time?

I think, officially this was DR. HALO on DOS, but I don’t recall which version exactly. I do recall making gameboard with it, drawing castle plans and the like. I continued to do the same thing with MS Paint. I later played around with ulead, which was a simple photo editing software, and then my older sister installed Paintshop Pro, and recommended me to get a tablet (which I did, and my graphire 3 is still working)… And then I spent a while colouring in my own scanned drawings. Then I moved over to Photoshop for a few years, after that Paint Tool Sai, I did use GIMP around this time but I never painted with it: the brush engines were not good enough. After that I just tried a lot of different programs, so Illustudio/Manga Studio, Open Canvas, I never enjoyed using Painter, Artrage and Azpainter, so I never really explored those programs. Then my sister showed me MyPaint, which I didn’t think impressive for a long time, until I really tried it out one day and enjoyed the effect I got from Deevad and Ramon’s brushpacks. I was soon making my own brushes then.

What makes you choose digital over traditional painting?

Initially it was the ease of using strong colours and having fun without wasting materials. Nowadays it’s about saving space: I don’t have a desk suitable for traditional drawing any more.

How did you find out about Krita?

After I played a lot with MyPaint, I heard from people that Krita 2.4 was the shit. When I went to the website at the time (which is the one before the one before the current) it just looked alien and strange, and worse: there was no Windows version, so I couldn’t even try it out. So I spent a few more years having fun with MyPaint alone, but eventually I got tired of its brush engine and wanted to try something more rough. When I checked Krita again, it had two things: a new, considerably more coherent website (the one before this one) and a Windows build. Around that time it was still super unstable and it didn’t work with my tablet. But MyPaint also had tablet problems, so I had no qualms about dual booting to Linux and trying it out there.

What was your first impression?

Oh my god, all these cool features! The brush engines still weren’t as high quality as those of the proprietary ones (they are now), and I missed MyPaint’s HSY’ selector(guess who is responsible for adding that to Krita? ;)) , but the brush engines were definitely super fun, and rough, and I could try out new things with it.

What do you love about Krita?

One of the things that was in Krita 2.7 was a bug with the Colour Selector: its cursor would automatically turn grey in certain shapes, and that annoyed me. So, in an attempt to find a fix, I tried building Krita using Deevad’s Cat guide. Turns out it wasn’t fixed in master either.

Then I realised ‘hey, I did programming before, I can read code’ and asked Boudewijn for how to make debug messages in QT, and initially I thought I would just track down the reason the bug happened, but it turned out Krita was comprehensible enough for me to actually fix it, despite me having no prior C++, let alone QT experience.

And that’s sorta what I really like about it: it’s a tool I can fix, improve and enhance for workflows. And that gave me a lot of agency, which leads to things like the vanishing point assistant, which could not have existed without me having a ton of experience drawing cityscapes and thus knowing the particular problems one faces when making these, and how a computer can complicate that. Other programs’ perspective tools were too simple for me. They assume that you only want to draw one type of perspective with definite sets of vanishing points, while in reality you will need a multitude of points, and this is because the people designing that feature only looked at an art book that barely explains how linear perspective works, without actually going through the workflow and finding the issues. But also, it would not have worked if I wasn’t supported so well by the senior developers in the Krita IRC, who always had time to check my code or to answer my questions, and I may seem overly praiseful of the Krita team, but it’s this support that has led me to spend most of my free time on Krita doing helpdesking, programming, tutorial writing, etc.

What do you think needs improvement in Krita? Is there anything that really annoys you?

It’s sometimes really annoying when a bug creeps in that I don’t have the knowhow to fix, but as a user I have no qualms with Krita. Granted, my annoyance gets tempered by the fact I can go in and change things, which is something that relates to how frustration is usually about not being able to change things that annoy you.

At the other end, I always feel very sad when someone has a problem but I can’t reproduce it or don’t know how to fix it for the user. You want people to just enjoy doing their thing, and it can be frustrating to know a program so well and yet not being able to make a difference. And there’s still things I want to do with the assistants, and I also am sorta adopting the colour management code which could use a couple of improvement, work is never done.

What sets Krita apart from the other tools that you use?

I can customise it to an extent I can’t with other programs.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Firebird! It was a drawing I did to test some brushes, and it just ended up really cool.

What techniques and brushes did you use in it?

A combination of textures and mix and smudge brushes as well as splatter brushes. I bundled them with my painting pack.

Where can people see more of your work?

Most of my 2010 onward work is here: http://wolthera.info/. That was about the time I started to avoid proprietary software. There’s still a couple of Paint Tool Sai and Adobe Illustrator images in there, but it’s mostly traditional or done with some combination of Inkscape, Gimp, MyPaint and Krita. I unfortunately don’t draw enough these days, due to school and the work I do for Krita, but I still open up Krita for things other than testing.

Anything else you’d like to share?

Yeah, I have a tutorial blog, which may be interesting to people trying to figure out workflows in Open Source Graphics Software.

Other than that I hope this was a fun read.

April 30, 2015

Do you know robotics?

Hi :)

Plea for help here :)

rg1024-tripulated-robot

The websites team and I would like to feature a photo of some real robots that have been programmed and/or built using Fedora as the main banner image for the Fedora Robotics spin – but we don’t know of any specific Fedora robots. We’d even be happy with a picture of a non-Fedora robot at this point.

If you know someone who is knowledgable about robotics and/or Fedora robotics, and who may have a picture they’d be willing to let us use, can you please get in touch?

Thanks :)

Stile style

On a hike a few weeks ago, we encountered an unusual, and amusing, stile across the trail.

[Normal stile] It isn't uncommon to see stiles along trails. There are lots of different designs, but their purpose is to allow humans, on foot, an easy way to cross a fence, while making it difficult for vehicles and livestock like cattle to pass through. A common design looks like this, with a break in the fence and "wings" so that anything small enough to make the sharp turn can pass through.

On a recent hike starting near Buckman, on the Rio Grande, we passed a few stiles with the "wings" design; but one of the stiles we came to had a rather less common design:

[Wrongly-built stile]

It was set up so that nothing could pass without climbing over the fence -- and one of the posts which was supposed to hold fence rails was just sitting by itself, with nothing attached to it. [Pathological stile]

I suspect someone gave a diagram to a welder, and the welder, not being an outdoor person and having no idea of the purpose of a stile, welded it up without giving it much thought. Not very functional ... and not very stilish, either!

I'm curious whether the error was in the spec, or in the welder's interpretation of it. But alas, I suspect I'll never learn the story behind the stile.

Giggling, we climbed over the fence and proceeded on our hike up to the very scenic Otowi Peak.

5 Humanitarian FOSS projects

Over on opensource.com, I just posted an article on 5 humanitarian FOSS projects to watch, another instalment in the humanitarian FOSS series the site is running. The article covers worthy projects Literacy Bridge, Sahana, HOT, HRDAG and FrontlineSMS.

A few months ago, we profiled open source projects working to make the world a better place. In this new installment, we present some more humanitarian open source projects to inspire you.

Read more

QuadFill tool

Hi

Finally iron out the remaining issues and is done! QuadFill is a tool aimed for quadrangulating an arbitrary 3D contours,  for testing purposes I used holes boundaries which I can easily create and shape in arbitrary forms so i can quickly test and debug the tool. But it will be used in places other than filling holes for which we ave an extensive set of algorithms. So will be more suited for retopo, autopo, and the like. The good thing is that is quite robust and flexible, given any vertex contours it will be quad dominant and produce at most, 1 triangle and the rest will be quads. Of course , that remaining triangle can be easily avoided if in a preprocess step the contour is made of even number of vertices, without loosing generality.

In previous iterations of this tool I have not addressed the complex contour case, like T-shapes, X-shapes, C, L and the like, non convex forms, causing it to fail at those:

fail1 mal2Now I have solved the general contour partition, avoiding uneven splitting, interpolating and recursive filling of each convex area. Eventually merging and smoothing/beautifying  the output mesh. Underlying steps are optimal or quasi optimal for initial contour constraints considering edge vertex are unmovable.

1 2 3

Cheers


April 29, 2015

Updating OpenHardware Firmware 2

After quite a bit of peer review, it turns out my idea to use the unused serial field wasn’t so awesome. Thanks mostly to Stefan, we have a new proposal.

For generic USB devices you can use a firmware version extension that is now used by ColorHug, and I hope other projects too in the future. With this the fwupd daemon can obtain the firmware version without claiming the interface on the device and preventing other software from using it straight away.

To implement the firmware version extension just create an interface descriptor with class code 0xff, subclass code 0x46 and protocol 0x57 pointing to a string descriptor with the firmware version.
An example commit to the ColorHug2 firmware can be found here. It costs just 21 bytes of ROM which means it’s suitable even for resource-constrained devices like the ColorHugALS. Comments welcome.

April 28, 2015

Updating OpenHardware Firmware

EDIT: Don’t implement this. See the follow-up post.

One of the use-cases I’ve got for fwupd is for updating firmware on small OpenHardware projects. It doesn’t make sense for each of the projects to write a GUI firmware flash program when most of them are using a simple HID or DFU bootloader to do basically the same thing. We can abstract out the details, and just require the upstream project to provide metadata about what is fixed in each update that we can all share.

The devil, as they say, is in the details. When enumerating devices, fwupd needs to know the device GUID (usually, just a hardcoded mapping table from USB VID/PID). This certainly could be in a udev rule that can be dropped into the right place when developing a new device, as I don’t want people to have to build a fwupd from git just to update the new shiny device that’s just arrived.

There are two other things fwupd needs to know. The most important is the current firmware version for a device. There is no specification for this as far as I can tell. ColorHug has a HID command GET_VERSION which returns 3 uint16M numbers for the major, minor and micro versions and other device firmwares have other similarly obvious but different ways of doing it.

The other is how to switch the device in firmware mode back into bootloader mode so that it can flash a new version. For ColorHug there’s a RESET command, but on other hardware it’s either a custom command sequence, or doing something physical like pressing a secret button with a paperclip or shorting two pins on a PCB.

I think it would be useful to notify the user that there in an update available, even if we can’t actually do the upgrade without doing some manual step. For this we need to get the current firmware version, ideally without open()ing the device as this will prevent other software from using it straight away. What we can get from the device for free is the device descriptors.

What I’m going to do for ColorHug is to change the unused device serial string descriptor to “FW:1.2.3″. I’ll also support in fwupd devices changing the product string from “Widget” to “Widget FW:1.2.3″ i.e. we look in the various strings for a token with a “FW:” prefix and use that.

If that isn’t specified then we can fall back to opening the device and doing a custom command, but when you can ask friendly upstream firmware vendors to make a super small change, it makes things much easier for everyone. Comments welcome.

released darktable 1.6.6

We are happy to announce that darktable 1.6.6 has been released. Please note that the 1.6.5 release was broken so 1.6.6 was directly pushed out. Just pretend 1.6.5 had been skipped.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.6
Please only use our provided packages ("darktable-1.6.6.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:
https://github.com/darktable-org/darktable/releases/download/release-1.6.6/darktable-1.6.6.tar.xz
https://github.com/darktable-org/darktable/releases/download/release-1.6.6/darktable-1.6.6.dmg

this is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.6.tar.xz
f85e4b8219677eba34f5a41e1a0784cc6ec06576326a99f04e460a4f41fd21a5
sha256sum darktable-1.6.6.dmg
bce9a792ee362c47769839ec3e49973c07663dbdf6533ef5a987c93301358607

Improvements

  • fix the Olympus E330 support (which was accidentally broken in 1.6.4)
  • fix white balance reading for the Canon Powershot SX50 HS
  • white balance presets for RICOH GR
  • minor assorted bug fixes (masks, lens correction, profiled denoise, etc)

Google Summer of Code 2015

The list of students accepted to the 2015 edition of Google’s Summer of code has just been published. This year, we’ve got two students working on Krita: Jouni and Wolthera. Wolthera has been a Krita developer for quite some time, working on color selectors perspective assistants and more, while Jouni has contributed with bug fixes for 2.9.

Wolthera is working on an experimental brush engine: a tangent normal map brush engine. A surface normal is a type of vector used to determine how light bounces off a surface. 3d graphics have has a way to encode these in normal-maps. To the human eye, this encoding looks like a colour. This brush engine takes the tilt sensors of a tablet stylus, and treats it like a surface normal, having it output the correct color. This would be a worthwhile asset to Krita because of the interest in hand-painted textures. There’s already quite a bit of progress in her branch! Check out the forum thread, too.

tangent

Jouni will build on the lessons learned from last year’s animation project and integrate animation into Krita’s core. All three of our previous animation plugins were hampered by being designed as a plugin. This time, animation is going right into the deepest layers of Krita. Krita’s native file format will start supporting animations as well. Jouni had already started on animation support before Summer of Code was announced, in fact, and he has got a proof of concept up and running already.

animation

In fact, two weeks ago, we had our first sprint in Deventer, sponsored by the Krita Foundation, with a hopeful Jouni and Wolthera and prospective mentors Dmitry and Boudewijn, to thresh out the designs for both features and do some pair programming.

2015-sprint

Congrats to Jouni and Wolthera and let’s look forward to an awesome Summer of Code!

April 25, 2015

Stellarium 0.13.3 has been released!

Stellarium 0.13.3 is a stable version that introduces some new features and closes 33 bug and wishlist reports. A lot of work has been done on adding few new DSO catalogs: Barnard (B), Van den Bergh (VdB), Sharpless (Sh 2), H-α emission regions in the Southern Milky Way (RCW), Lynds' Catalogue of Bright Nebulae (LBN), Lynds' Catalogue of Dark Nebulae (LDN), Collinder (Cr) and Melotte (Mel). You can find two new plugins also and one of those plugins introduces the support the new very nice feature - 3D landscapes.

A huge thanks to the people who helped us a lot by reporting bugs!

We updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we strongly recommend to reset the settings after you install the new version (in the Installer).

Full list of changes:
- Added Scenery3D plugin: enable support 3D landscapes
- Added ArchaeoLines plugin: a tool for archaeo-/ethnoastronomical alignment studies
- Added new DSO catalogs: Barnard (B), Van den Bergh (VdB), Sharpless (Sh 2), H-α emission regions in the Southern Milky Way (RCW), Lynds' Catalogue of Bright Nebulae (LBN), Lynds' Catalogue of Dark Nebulae (LDN), Collinder (Cr) and Melotte (Mel)
- Added tiny visual improvements for info about comets and minor planets
- Added Hungarian translation for Aztec skyculture
- Added Russian translation for Western: H.A. Rey skyculture
- Added support of meteors for Windows/MSVC packages
- Added patch for multiscreen setups
- Added tui/tui_font_color option for change color of text user interface (LP: #1421998)
- Added new version of GCVS
- Added implementation of polinomial approximation of time period 1620-2013 for DeltaT by M. Khalid, Mariam Sultana and Faheem Zaidi (2014); http://dx.doi.org/10.1155/2014/480964
- Added new line on the celestial sphere - opposition/conjunction longitude (LP: #1377606)
- Added texture for Ceres (LP: #1271380)
- Extended list of proper names for deep-sky objects
- Fixed position of Great Red Spot (LP: #490019)
- Fixed bug in planet shadow shasder (LP: #1405353)
- Fixed flickering Moon problem (LP: #1411958)
- Fixed jittering moons (LP: #1416824)
- Fixed visibility objects on distance over 50 AU. (LP: #1413381)
- Updated the Spherical projection to be HiDPI aware. (LP: #1385367)
- Restore unit testing for refraction and extinction. Fixed issue in unit test for DeltaT and fixed small issue for DeltaT.
- Fixed issue in core.setObserverLocation (to a new Planet travels always 1 second) (LP: #1414463)
- Fixed issue for cursor, who points to a wrong position after orbit update (LP: #1414824)
- Fixed ignore the Enter key for online search dialog within Solar System Editor plugin (LP: #1414814)
- Fixed package issue for Windows XP (LP: #1414233)
- Fixed crash when trying to select a satellite with invalid orbit (LP: #1307357)
- Check of updates has been removed (LP: #1414451)
- Fixed issues in Search Tool (LP: #1416830)
- Fixed code to update ConfigurationDialog size (LP: #995107)
- Behavior of JD/MJD has been refactored for Date and Time Dialog (LP: #1417619)
- Avoid delaying for the telescope control slew commands over TCP (LP: #1418375)
- Avoid crash when clicking 'Lookup locations on network' checkbox in Locations window (Debian: #779046)
- Updated bookmarks for Solar System editor plug-in (LP: #1425626)
- Allowing display the scope marker below the landscape (LP: #1426441)
- Allow fading of the landscapes without atmosphere (LP: #1420741)
- Allow change thickness of lines of the constellations (LP: #1028432)

Cross Platformity

Several years ago we started porting Calligra to Windows. Supported by NLNet. Some time later, Intel also supported KO to create two Windows versions of Krita, one for tablets, one for convertible ultrabooks and later still, a new version of Calligra Words, Stage and Sheets for convertible ultrabooks. Meanwhile, the Krita Foundation has been publishing Krita on Windows and OSX for some time now. That's a fair amount of experience in puslishing software originally written on Linux on other platforms.

Let's take a look at the starting position.

Calligra, or rather KOffice, as it was called when it started out, was created to be a native KDE application. Integrated with the KDE desktop environment, using all the features that KDE offers, ranging from inter-process communication to system-tray notifications, shared plugin loading, mimetype handling, and other resource icon locating. These applications are KDE applications through-and-through.

There are also some hidden assumptions in the way KDE works on Linux: for instance, that reading thousands of small files on startup isn't a big deal. We Linux users are fortunate in having a choice of really good, really fast file systems, compared to Windows or OSX users.

So, on Linux, it's not that big a deal if people aren't running the KDE Plasma Desktop, since installing the libraries, the icons, maybe some system settings kcms will mean Krita will run just as well in Gnome as in KDE. It's a pity that some distributions don't make Krita depend on the oxygen icon set, and that others think that installing Krita means users need marble, but that's the way Linux packaging works, or doesn't work.

Now, there are two reasons for bringing a Linux application to another platform, two ways of going about it and two ways of implementing the port.

You can port an application to Windows or OSX because you want to use it yourself, as a Linux user in exile, or you can port an application to Windows because you want to gain a user-base on Windows.

And you can port an application to Windows or OSX by bringing the whole Linux and KDE environment with you, or by making the application as native as possible.

And finally, you can build on the target platform, with the target platform's native compilers, or you can cross-compile from Linux.

If you're porting for Linux exiles, the first approach is fine. It's what on Windows is done by cygwin, msys, KDE's emerge tool or KDE's windows installer. It's what FINK or MacPorts provide on OSX: a package manager, all the tools you're used to, all the platform services the application depends on. It's a big undertaking to maintain ports of so many components, and the porting system will often have temporary failures. I'm not saying that it's wasted work: I use KDE's windows installer to get Kate on Windows myself.

But it's not going to work when you want to build a user base for your application on Windows or OSX. The package managers, the installers are too complicated, too alien and drag in too many unfamiliar things. Even using something as Emerge to build, and then package up the built bits into an installer is problematical, because those bits will still expect the full KDE environment to be available.

And a Windows user is going to freak out when starting an application starts daemons that keep running. No kded, therefore. Their "protection" software (snake oil, but scary snake oil) will yammer when there's weird inter-process network communication. Bye-bye dbus. For a single application like Krita, a mimetype database is overkill. Loading icons by the hundred-weight, individually gives a bad hit on startup time. And yes, we're getting nastygrams about the number of files we package with Krita versus the number of files Photoshop packs.

It's really clear to keep in mind what the goal is. Because if you're working together with someone else, and they're of the linuxer-in-exile persuasion, and you're of the native-user persuasion, conflicts will happen all the time. The first crowd won't mind using glib in update-mime-database because, what's the problem? While you'll hate that requirement because building glib on windows and osx ain't easy, and it's a big dependency with a lot of other dependencies that come with it, and all just to be able to figure out that a .jpg file is a jpeg file.

Most 'native' Windows applications that use 3rd party libraries includes those libraries in their own source tree, either as dll's or as source code. It's easy, you never have to deal with newer versions of libraries breaking your code, you can build everything in one go. For some libraries, it's even the only way: you cannot build the breakpad library outside the source tree of the application that uses it. But for a cross-platform application that also targets Linux distributions, this cannot be done. You cannot include all dependencies in your source tree.

So, we need to find a way to build all dependencies without dragging in a fake Linux or fake KDE environment. Without dragging in a package manager. One that creates applications that behave as natively as possible.

What we're currently trying to do is this: build a cmake system that builds Krita's dependencies and then builds krita. CMake's external project system works pretty well for this. It's not as complicated as the Emerge system, though there are similarities, and we're even re-using patches from Emerge for our dependencies.

Here's the list of all dependencies we currently build:

  • automoc: needed for kdelibs-stripped
  • boost: needed for Krita
  • bzip2: needed for kdelibs-stripped
  • eigen3: needed for Krita
  • exiv2: needed for kdelibs-stripped and Krita
  • expat: needed for exiv2
  • ffi: needed for glib (only on OSX)
  • gettext: needed for glib (only on OSX)
  • giflib: needed for kdelibs-stripped and qt
  • glew: needed for Krita
  • glib: needed for shared_mime_info (only on OSX)
  • gsl: needed for Krita
  • iconv: needed for gettext and exiv2
  • ilmbase: needed for openexr
  • intltool: for glib (only on OSX)
  • jpeg: needed for kdelibs-stripped, qt and krita
  • kdelibs-stripped: specially hacked up version of kdelibs4 that doesn't need openssl, dbus and a whole lot of other things.
  • lcms2: needed for krita
  • libxml2: needed for kdelibs-stripped
  • libxslt2: needed for kdelibs-stripped
  • opencolorio: needed for krita, has it's own 3rd party external projects: tinyxml, yaml
  • openexr: needed for Krita
  • patch: needed on windows only, creates a myptch.exe because patch.exe is banned on Windows...
  • pcre: needed for shared-mime-info
  • perl: needed on Windows to build Qt
  • pkgconfig: needed on OSX to build glib
  • png: needed for Krita and Qt
  • qt: we need to build this ourselves to strip out openssl, qtscript, the database stuff and more, and because we build with a version of Visual C++ that Qt4 doesn't support out of the box
  • shared_mime_info: needed for kdelibs-stripped
  • tiff: needed for Qt and Krita
  • vc: needed for Krita
  • krita: what we finally want...

And this is what's still missing:

  • kdcraw: would provide camera RAW import
  • poppler: would provide PDF import and export
  • openjpeg: would provide jpeg2000 import and export.

All this is described here for OSX and here for Windows. My goal is to make a CMake based project that can build everything on OSX and Windows, and on Linux, for Windows in one go. It needs much more work, in fact more time than I actually have... Anything that can slim the number of dependencies would be very welcome!

And now it's coming full-circle: I actually would like to have installers for Krita that work on Linux, too. Just like Blender or Qt Creator have, a 32 and 64 bits installer that people can download, run and get the latest build of Krita, without weird extra dependencies, without waiting for distributions to pick up the next version of Krita. The system we're building should be able to provide that...

April 24, 2015

こんにちは日本! Krita Launches Japanese Site

We are happy to announce that we have launched a Krita site in Japanese! Over the coming months, this website will be evolving to help the Japanese community stay current with all things Krita. Currently, almost everything online is in English, so it can be difficult for other countries to learn about and use Krita. With this Japanese site, we can provide specialized instructions and resources to help them get started. We are still finishing up translations, but are far enough along that we want to release it in the wild.

Check it out
https://jp.krita.org

April 23, 2015

Redesign of spins.fedoraproject.org; Help make your spin rock!

Robyduck and I have been working on a total revamp of spins.fedoraproject.org. Behold, what spins.fpo looks like today:

spins.fedoraproject.org front page screenshot

spins.fedoraproject.org front page screenshot

Design Suite spin details page screenshot from the current spins.fedoraproject.org.

Design Suite spin details page screenshot from the current spins.fedoraproject.org.

Different kinds of spins

So one issue we have with spins is that there are different *kinds* of spins:

  • Desktop Spins There are desktop spins that each feature a different desktop environment on top of Fedora. While you can install multiple desktop environments by default, most people stick to one most of the time, and you certainly can’t use more than one in a given session. These spins are much more about the environment you use Fedora in rather than applications layered on top.
  • Functional Spins There are functional spins that consist of application bundles and configuration that you could honestly package-group install on any desktop and be able to use productively – Games, Design Suite, Robotics, etc. They are more purpose-directed than the desktop spins, which are more for general computing environments.
  • ARM builds (These aren’t actually spins but fell into the fray as they needed a home too!) Now that ARM is a top-level / supported architecture, we have ARM builds for many versions of Fedora. These images are solely architecture-based and cater to a very specific community and very specific use cases / hardware beyond traditional servers, workstations, and laptops.

We made a decision to split the desktop spins away from the functional spins. Functional spins will be housed at a new site catered specifically for them: labs.fedoraproject.org. ARM builds will also have their own one-page site with references to important documentation and the Fedora ARM community as well.

Here’s a rough diagram to illustrate:

diagram showing four different fedora sites

Desktop Spins (spins.fpo)

(These mockups are huge by the way. Sorry :) )

For the front page of the spins.fedoraproject.org site – since they are all desktops, we thought a full-width, large-size screenshot of what the desktop looks like in that spin would help folks figure out which ones they wanted to explore. (Note this mockup does not include Sugar but Sugar will be included in the final design:)

spins.fpo mockup

spins.fpo mockup

Rather than the tabbed approach we use today, we decided to consolidate all of the information on each spin’s individual details page to a one-pager. Spin SIGs can provide as much or as little data about the spin as they like. There are section they can use to highlight specific apps or features of their desktop, or they can opt to not use that display and instead just focus on the description and support content.

Desktop spin details page

Desktop spin details page

Functional Spins (labs.fpo)

The functional spins are more domain / goal-oriented than the desktop spins, so the previews on the front page are smaller and don’t necessarily feature desktop screenshots.

labs.fpo front page mockup

labs.fpo front page mockup

Similar to how the desktop spins’ details pages work, the lab spins details pages are one-pagers as compared to their current multi-tab incarnations. Again, the SIGs in charge can get as detailed or as brief as they’d like. Here’s the Design Suite’s details page:

labs.fpo details page mockup

labs.fpo details page mockup

ARM

So I’m still sorting out some issues with the usage of the ARM trademark here, so none of this is super-final in terms of the graphic design / lack of trademark notices / disclaimers / etc. This is strictly an incomplete work-in-progress. That being said, I’ve been picking the #fedora-arm folks’ brains a lot lately to understand our ARM offerings, their use cases, and ideal ways to represent them for the target audience. My thinking here is a simple one-pager that lists out all of the options by the major usage categories – headless and desktop computing – and just making it really easy to find the version you need across all of the many available options. The other thing here that the #fedora-devel and #fedora-arm folks suggested are links to the ARM wiki documentation for installing these images as well as references to the Fedora ARM mailing list and IRC channels to provide some support for folks using the page.

Anyway here’s where that mockup is at now:

Fedora builds for ARM mockup

Fedora builds for ARM mockup

Whoah cool. You guys have it all under control then?

NOOOOOOOO!!!!11 NOT TRUE!

help-1

We need your help. There are so many spins and versions of Fedora we’re juggling here, and neither of us is an expert in all or even most of them. We’re trying to adapt / convert the existing content / assets for these spins/versions of Fedora, but we really need help from the folks who maintain / use these spins to fully develop the content we need for their pages to come out looking great.

Robyduck sent out a bunch of messages to the various SIGs involved but we haven’t gotten a great response yet. Time is short to get these pages built in time for F22, so we really need all the help we can get. If you are an owner or even interested user in any of the spins featured in these mockups (or ARM,) can you get in touch and help us perfect the content for your spin / ARM build of interest?

Thanks :)

By the way, you can follow the process of the mockups here in the git repo for them. Robyduck is coding them up about as fast as I can crank designs out but I am sure he would appreciate some coding assistance too!!

April 22, 2015

Haskell: From N00b to Beginner

I gave a talk about my experience learning Haskell:

The slides can be found here: http://redhat.slides.com/rjoost/deck-2


Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 21 April 2015

Summary

Announcements/News

  • Libre Graphics Meeting: decause asked if any team members are planning to go to Libre Graphics Meeting 2015 next week. He and riecatnor were thinking about going. Nobody else in the meeting had plans to go.
  • Flock talk proposal deadline extension: The deadline for Flock talk proposals has been extended to May 2, so we reminded folks to get their design team talk proposals in.
  • Fedora websites update: Mo noted that she and robyduck have made a lot of progress on the spins.fedoraproject.org redesign and the new labs.fpo website. If you are interested in the work it’s in github: https://github.com/fedoradesign/fedora-spins.

Tickets Needing Feedback

Tickets Needing Updates

Tickets Needing Triage

  • Update Design Team Wiki – Kirk and Yogi decided to take this one on as a team and have already made some progress!

Completed Tickets

LinuxFest NorthWest Ad

FEDORA-quarter-page_2015_lfnw_quarter-inch-NObleed-v5

Maria did a beautiful job on this design. We closed the ticket since it’s all complete!

Tickets Open For You to Take!

We triaged this ticket in the meeting and it’s all ready for a designer to pick it up and work on it! Could that be you? :)

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

Interview with Alexandru Sabo

lini800

Could you tell us something about yourself?

My name is Alexandru Sabo, and I am a freelance digital illustrator and 2D/concept artist. In the last 15 years I worked both as an employee and a freelancer for different companies, and various clients at the same time. A few to mention: Highlander Studio, Crytek, Fantasy Flight Games and recently start to work for Paizo Publishing.

Do you paint professionally, as a hobby artist, or both?

Professionally.

What genre(s) do you work in?

Mostly fantasy.

Whose work inspires you most — who are your role models as an artist?

When I started with the fantasy, I learned a lot from Warhammer Army Books. That means Adrian Smith, and after that I was corrupted further by Jim Murray’s work. It was closer to me (more comics/conceptual style).

When did you try digital painting for the first time?

In 2001, with my first children’s book.

What makes you choose digital over traditional painting?

I don’t know if it’s a matter of choice. From 2000 on I worked in graphic design for ten years, and when I encountered my first Wacom tablet in 2001, it was a natural thing to use with Photoshop. But I still believe in traditional painting: my sketches or drawings are still made mostly with traditional techniques.

How did you find out about Krita?

In April 2014 I was thinking about starting a new small illustration and graphic art studio. I was looking for an option to have between 2-4 computers with open source stuff (low cost). Gimp was not an option, and after I searched more I found David Revoy’s article “My hardware and software for digital painting”. Then I realised that Krita was the first viable option for me. A main thing was also that Krita could manage CMYK files, a big issue for an illustrator who works for print!

What was your first impression?

Awesome ;). It was very simple and friendly to use. Its GUI and workflow were very close to Photoshop, which was something important to me at that moment.

What do you love about Krita?

Hmm… funny question, because I do not love software :) . As a professional computer-graphics artist it’s more a matter of “can this software satisfy my needs or not?” To mention some very useful and maybe unique features: the easy and very fast way to make a mirror view, and the simple move tool as brush (this is kind of liquify tool/filter in Photoshop). But overall, it is so natural to start to work and use Krita.

What do you think needs improvement in Krita? Is there anything that really annoys you?

To be honest, the stability –it is the most important thing for me– and managing very high resolution files. Also improvements in the layer/layer group managing system, but I understand that there are already plans to fix those soon.

What sets Krita apart from the other tools that you use?

Free! :)  For freelancers, this can be a blessing, even if the best OS to run it on is Linux so far. And as I mentioned before, it’s natural to work with and very user friendly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I can’t tell you right now, because it’s still under NDA! :D But I like the illustration with Lini — iconic character from the Pathfinder roleplaying world.

What techniques and brushes did you use in it?

My technique is very simple and common, basic brushes and a few extra, nothing that can’t be replicated in other software. And other common layer filters also. Actually nothing special.

Where can people see more of your work?

My best up-to-date place is my official site with a blog section as well, where I post my latest works: http://alexandrusabo.ro/

Anything else you’d like to share?

Maybe the new Patreon campaign I started a few weeks ago, where I will do all my personal projects, and try to keep fans or people who like my work in the same place. Of course there are some possibilities to support me to do more…

I try to offer some nice rewards there as well, so everybody is welcome!

April 21, 2015

Finding orphaned files on websites

I recently took over a website that's been neglected for quite a while. As well as some bad links, I noticed a lot of old files, files that didn't seem to be referenced by any of the site's pages. Orphaned files.

So I went searching for a link checker that also finds orphans. I figured that would be easy. It's something every web site maintainer needs, right? I've gotten by without one for my own website, but I know there are some bad links and orphans there and I've often wanted a way to find them.

An intensive search turned up only one possibility: linklint, which has a -orphan flag. Great! But, well, not really: after a few hours of fiddling with options, I couldn't find any way to make it actually find orphans. Either you run it on a http:// URL, and it says it's searching for orphans but didn't find any (because it ignors any local directory you specify); or you can run it just on a local directory, in which case it finds a gazillion orphans that aren't actually orphans, because they're referenced by files generated with PHP or other web technology. Plus it flags all the bad links in all those supposed orphans, which get in the way of finding the real bad links you need to worry about.

I tried asking on a couple of technical mailing lists and IRC channels. I found a few people who had managed to use linklint, but only by spidering an entire website to local files (thus getting rid of any server side dependencies like PHP, CGI or SSI) and then running linklint on the local directory. I'm sure I could do that one time, for one website. But if it's that much hassle, there's not much chance I'll keep using to to keep websites maintained.

What I needed was a program that could look at a website and local directory at the same time, and compare them, flagging any file that isn't referenced by anything on the website. That sounded like it would be such a simple thing to write.

So, of course, I had to try it. This is a tool that needs to exist -- and if for some bizarre reason it doesn't exist already, I was going to remedy that.

Naturally, I found out that it wasn't quite as easy to write as it sounded. Reconciling a URL like "http://mysite.com/foo/bar.html" or "../asdf.html" with the corresponding path on disk turned out to have a lot of twists and turns.

But in the end I prevailed. I ended up with a script called weborphans (on github). Give it both a local directory for the files making up your website, and the URL of that website, for instance:

$ weborphans /var/www/ http://localhost/

It's still a little raw, certainly not perfect. But it's good enough that I was able to find the 10 bad links and 606 orphaned files on this website I inherited.

Evolving KDE

Paul and Lydia have blogged about how KDE should and could evolve. KDE as a whole is a big, diverse, sprawling thing. It's a house of many rooms, built on the idea that free software is important. By many, KDE is still seen as being in competition with Gnome, but Gnome still focuses on creating a desktop environment with supporting applications.

KDE has a desktop project, and has projects for supporting applications, but also projects for education, projects for providing useful libraries to other applications and projects to provide tools for creative professionals and much, much more. For over a decade, as we've tried to provide an alternative to proprietary systems and applications, KDE has grown and grown. I wouldn't be able, anymore, to characterize KDE in any sort of unified way. Well, maybe "like Apache, but for end-users, not developers."

So I can only really speak about my own project and how it has evolved. Krita, unlike a project like Blender, started out to provide a free software alternative to a proprietary solution that was integrated with the KDE desktop and meant to be used by people for whom having free software was the most important thing. Blender started out to become the tool of choice for professionals, no matter what, and was open sourced later on. It's an important distinction.

Krita's evolution has gone from being a weaker, but free-as-in-freedom alternative to a proprietary application to an application that aspires to be the tool of choice, even for people who don't give a fig about free software. Even for people who feel that free software must be inferior because it's free software. When one artist says to another at, for instance, Spectrum "What, you're not using Krita? You're crazy!", we'll have succeeded.

That is a much harder goal than we originally had, because our audience ceases to be in the same subculture that we are. They are no longer forgiving because they're free software enthusiasts and we're free software enthusiasts who try really hard, they're not even much forgiving because they get the tool gratis.

But when the question is: what should a KDE project evolve into, my answer would always be: stop being a free software alternative, start becoming a competitor, no matter what, no matter where. For the hard of reading: that doesn't mean that a KDE project should stop being free-as-in-freedom software, it means that we should aim really high. Users should select a KDE application over others because it gives a better experience, makes them more productive, makes them feel smart for having chosen the obviously superior solution.

And that's where the blog Paul linked to comes in. We will need a change in mentality if we want to become a provider of the software-of-choice in the categories where we compete.

It means getting rid of the "you got it for free, if you don't like it, fuck off or send a patch" mentality. We'd all love to believe that nobody thinks like that anymore in KDE, but that's not true.

I know, because that's something I experienced in the reactions to my previous blog. One of the reactions I got a couple of times was "if you've got so much trouble porting, why are you porting? If Qt4 and KDE 4 work for you, why don't you stay with it?" I was so naive, I took the question seriously.

Of course Krita needs to be ported to Qt5 and Kf5. That's what Qt5 and Kf5 are for. If those libraries are not suitable for an application like Krita, those libraries have failed in their purpose and have no reason for existence. Just like Krita has no reason for existence if people can't paint with it. And of course I wasn't claiming in my blog that Qt5 and Kf5 were not suitable: I was claiming that the process of porting was made unnecessarily difficult by bad documentation, by gratuitous API changes in some places and in other places by a disregard for the amount of work a notional library or build-system 'clean-up' causes for complex real-world projects.

It took me days to realize that asking me "why port at all" is in essence nothing but telling me "if you don't like it, fuck off or send a patch". I am pretty sure that some of the people who asked me that question didn't realize that either -- but that doesn't make it any better. It's, in a way, worse: we're sending fuck-off messages without realizing it!

Well, you can't write software that users love if you tell them to fuck off when they have a problem.

If KDE wants to evolve, wants to stay relevant, wants to compete, not just with other free software projects that provide equivalents to what KDE offers, that mentality needs to go. Either we're writing software for the fun of it, or we're writing software that we want people to choose to use (and I've got another post coming up elaborating on that distinction).

And if KDE wants to be relevant in five years, just writing software for the fun of it isn't going to cut it.

An Update about G'MIC on OpenSource.graphics

David Tschumperlé has a blog over at OpenSource.graphics and it appears that after releasing G'MIC 1.6.2.0 he had some time to write down and share some thoughts about the last 10 months of working on G'MIC.

He covers a lot of ground in this post (as you can imagine for not having reported anything in a long time while working hard on the project). He talks about some neat new functionality and filters added like color curves in others colorspaces, comics colorization, color transfer (from one image to another), website for film emulation (yay!), foreground extraction, engrave, triangulation, and much more.


Interactive Foreground Extraction


Engrave Filter

A short table of contents for the post:

  1. The G’MIC Project : Context and Presentation
  2. New G’MIC features for color processing
  3. An algorithm for foreground/background extraction
  4. Some new artistic filters
  5. A quick view of the other improvements
  6. Perspectives and Conclusions

David may not write as often as I think he should but when he does - he certainly does! :) Head over and check out the latest news on an awesome image processing framework!

Next Kickstarter Date and Layer Styles Update

kickstarter-logo

While we continue to work on bugs for the next release (2.9.3), we have also been planning and working on the next Kickstarter!

We have been gathering your feedback across the forum, social media, and our chat room (IRC). We want to make the next feature release (3.1, planned for end of this year)  the best possible. We are planning on launching the next Kickstarter on May 4. Two weeks! We have two big projects in mind – as well as some exciting stretch goals. The first project is performance improvements. This includes speeding up the application and painting with seriously large brushes. Creating and working with large canvas sizes will be much more responsive.

The second big goal is adding an animation system. This will help artists create sprite sheets for their game jams, animatics for story boarding, and potentially even produce an entire animated film! While this new system won’t be as feature rich as dedicated animation software, it will be substantially more powerful than Photoshop’s animation tools. It will include things like onion skinning and tweenable properties. We will provide more details in the coming weeks.

Our target goal for this Kickstarter is going to be €20,000 (about $21,000). With everyone’s help, we think this is attainable. Like the last Kickstarter, the money will cover a developer’s salary. Every 1,500 (about $1,600) we go over the goal, we will add a stretch goal.

Some of the stretch goals will be further animation features, others will be workflow improvements, new features for the  brushes and more. There are too many stretch goals to list here! Like the last Kickstarter, what gets included will be voted on by the kickstarter backers.

We will let you know when the Kickstarter is launched to get all of the details. You can always sign up for the mailing list (at the bottom of this post) to stay up to date with all the news.

Layer Styles Update

Adding layer styles to Krita is a really BIG task. It turned out to be much more work than we planned for. This is why it hasn’t made it into the Krita 2.9 release yet. There is still work to be done, but we really want to get this into your hands for you to start playing around with. Starting with the next release (2.9.3), we will be including layer styles into Krita. While there are some features that we are still working on, we want you to play around with what we have. We will be continuing to work on it, so rest assured that the bugs and kinks will be ironed out in the future. Until then, check out this teaser video Wolthera made:

April 20, 2015

My latest ten months working on G’MIC

A few days ago, I released a new version of G’MIC (GREYC’s Magic for Image Computing, numbered 1.6.2.0). This is a free and generic image processing framework I’ve been developing since 2008. For this particular occasion, I thought it would be good (for me) to write a quick summary of what features I’ve been working on these last ten months (since the last release of G’MIC in the 1.5.x.x branch). Hopefully, you’ll be interested too ! This project takes a large amount of my free time (basically every week-end and evenings, except Wednesday, table tennis time :) ), so it’s probably good for me to take a little break and analyze what has been done recently around the G’MIC project.

I will summarize the important features added to G’MIC recently, which are: Various color-related filters, a foreground extraction algorithm, new artistic filters, and other (more technical) cool features. As you will see this quick summary has ended up a not-so-quick summary, so you are warned. Just before I start, I’d like to thank Patrick David again a thousand times for his proof reading. Kudos Pat! And now, let’s start !

1. The G’MIC Project : Context and Presentation

The G’MIC project was born in August 2008, in the IMAGE Team of the GREYC laboratory (a public research unit, located in Caen / France, affiliated to the CNRS, the largest public research institute in France), a team to which I belong. Since then, I’ve developed it apace. G’MIC is an open-source software, distributed under the CeCILL license (a French GPL-compatible license).

gmic_logo2

Fig.1.1 Mascot and logo of the G’MIC project, an open-source framework for image processing.

What G’MIC proposes is basically several different user interfaces to manipulate generic image data (sequences or still 2D/3D images, with an arbitrary number of channels, float-valued , etc., so definitely not only 2D color images). To date, the most popular G’MIC interface is its plug-in for GIMP.

gmic_gimp_overview2

Fig.1.2. Overview of the G’MIC plug-in for GIMP.

But G’MIC also provides a command-line interface (similar to what ImageMagick has, but more flexible, perhaps?), and a web interface G’MIC OnlineOther ways of using G’MIC exist, such as ZArt, a qt-based image stream processor, or a nice plug-in for Krita (developed by Lukas Tvrdy), or through some features implemented in the recent (and nice!) software Photoflow. These latter interfaces are still a bit more confidential, but not for long I guess. Actually, all of these interfaces are based on libgmic. This is the C++, portable, thread-safe and multi-threaded library (in particular through the use of OpenMP) which implements all of the G’MIC core image processing routines and structures. It also embeds its own script language interpreter, so that advanced users are allowed to write and integrate their customized image processing functions into G’MIC easily. Today, G’MIC already understands more than 900 different commands, all configurable, for a libgmic library file that takes a bit less than 5 MB. The available commands cover a wide range of the image processing field, with algorithms for the manipulation of image geometry and colors, for image filtering (denoising, sharpening with spectral, variational or non-local methods, …), for image registration and motion estimation, for the drawing of graphical primitives, for image segmentation and contour extraction, for 3D object rendering, artistic filters, and so on…

The set of tools G’MIC provides is really useful, for both converting, visualizing and exploring image data, and for creating, and applying complex image processing pipelines. In my opinion, this is one piece of free software that image processing savvy folks should have tried at least once. At least for me, there is an everyday use. :)

2. New G’MIC features for color processing

So, here is some of the stuff I’ve added to G’MIC recently, related to the manipulation (creation of modification) of the colors in an image.

2.1. Color curves in an arbitrary color space

The concept of color curves is well known for any artist of photograph who wish to modify the colors of their images. Basically, a color curve tool allows you to define and apply on each R,G,B component of your image, a 1d continuous function f:[0,255] -> [0,255]. Each original red, green or blue value x of a pixel (assumed to be in the range 0..255) is then mapped into a new value f(x) (in 0..255 too). To define these transfer functions (one for each of the R,G,B components), the user classically defines a set of key-points that are interpolated by spline functions.

But what happens if you want to define and apply color curves in a color space different than RGB ? Well… mostly nothing, as most of the image retouching tools (GIMP included) only propose to color curves for RGB, unfortunately.

So, one of the things I’ve done recently in G’MIC has been to implement an interactive color curves filter that works in other color spaces than RGB, that is, CMY, CMYK, HSI, HSL, HSV, Lab, Lch and YCbCr. The new command -x_color_curves is in charge of doing that (using the CLI interface). Plug-in users may use it via the filter Colors / Curves [interactive]. Here, using the plug-in is even nicer because you can easily save your favorite curves, as new Favorite filters, and apply the same color curves again and again on hundred of images afterwards. Here is how it looks:

Fig.2.1. Interactive definition of color curves in the Lab colorspace and application on a color image. On the left, the three user-defined curves for each of the components L (lightness), a and b (chrominances). On the left, the corresponding color transformation.

Fig.2.1. Interactive definition of color curves in the Lab color space and application on a color image. On the left, the three user-defined curves for each of the components L (lightness), a and b (chrominances). On the right, the corresponding color transformation it creates on the color image.

I’ve also recorded a quick video to show the action of this filter live, using it from the G’MIC plug-in for GIMP:

2.2. Comics colorization

I also had the chance to talk with David Revoy, a talented French illustrator who is the man behind the nice Pepper & Carrot webcomics (among other great things). He told me that the process of comics colorization is a really boring and tedious task, even using a computer, as he explains it on his blog. He also told me that there were some helpers for colorizing comics, but all proprietary software. Nice! There was obviously something worth trying for open-source graphics here.

So I took my keyboard and started coding a color interpolation algorithm that I hoped to be smart enough for this task. The idea is very simple: Instead of forcing the artist to do all the colorization job by himself, we just ask him to put some colored key-points here and here, inside the different image regions to fill-in. Then, the algorithm tries to guess a probable colorization of the drawing, by analyzing the contours in the image and by interpolating the given colored key-points with respect to these contours. Revoy and I also discussed with Timothée Giet (on irc, channel #krita :)), another French artist, about what could be a nice “minimal” interface to make the process of putting these colored key-points comfortable enough for the artist. And finally, the result of these discussion is the new G’MIC command -x_colorize and the corresponding filter Black & White / Colorize [interactive] in the G’MIC plug-in. Apparently, this seemed to be fair enough to do the job!

From what I’ve seen, this has raised quite a bit of interest among the users of Krita (not so surprising as there are many comics creators among them!). Maybe I’m wrong, but I feel this is one of the reasons the Krita team decided to put some additional efforts on their own G’MIC plug-in (which is something to become an awesome piece of code IMHO, probably even better than the current G’MIC plug-in for GIMP I’ve done).

Here is a quick illustration on how this works for real. The images below are borrowed from the David Revoy‘s web site. He has already written very nice articles about this particular feature, so consider reading them for more details about it.

Using G'MIC for comics colorization. Step 1 : Open your lineart image.

Fig.2.2. Using G’MIC for Comics colorization. Step 1 : Open your lineart image (image borrowed from the David Revoy web site).

Fig.2.3.

Fig.2.3. Using G’MIC for Comics colorization. Step 2 : Place some colored key-points and let the colorization algorithm interpolate the colors for you (image borrowed from the David Revoy web site)

I believe Revoy is using this colorizing algorithm in a regular manner for his creations (in particular for Pepper & Carrot), as illustrated in Fig.2.2-2.3. One can clearly see the different colored key-points put on the image and the outcome of the algorithm, all this being done under the G’MIC plug-in for Krita. After this first colorization step, the artist may add shadows and lights on the flat color regions generated by the algorithm. The nice thing is he can easily work on each color region separately because it is easy to select exactly one single flat color in the output color layer. And for those who prefer to work on more layers, the algorithm is able to output multiple layers too, one for each single reconstructed color region.

So, again, thanks a lot to David Revoy and Timothée Giet for the fruitful discussions and their feedback. We have now a very cool Comics colorization tool in a free software. This is the kind of academic / artistic collaboration I’m fond of.

Below you can see a small video I’ve made that shows how this filter runs inside the G’MIC plug-in for GIMP :

2.3. B&W picture colorization

But could this also work for old B&W photographs ? The answer is Yes! By slightly modifying the colorization algorithm, I’ve been able to propose a way to reconstruct the color chrominance of a B&W image, exactly the same way, from user-defined colored key-points. This is illustrated in the figure below, with the colorization of an old B&W portrait. The pixel-by-pixel content of a photograph being obviously more detailed than a Comics, we often need to put more key-points to get an acceptable result. So, I’m not sure this speeds up the colorization process so much in this case, but still, this is cool stuff. These two colorization techniques are available from the same filter, namely Black & white / Colorize [interactive].

Fig.2.4. Colorisation d'une vieille photo Noir & Blanc avec l'algorithme de colorisation de G'MIC._ (Ugh, vieil indien, excuse moi pour la couleur improbable de tes cheveux, qui te fait ressembler à [Kim Kardashian](http://media.melty.fr/article-2637254-ajust_930-f205979/kim-a-t-elle-le-meme-sex-appeal-en-blonde.jpg)...)

Fig.2.4. Colorization of an old B&W photograh with the G’MIC colorization algorithm.
(Ugh, old man, sorry for the improbable color of you hairs, which makes you look like Kim Kardashian…)

2.4. Color transfer

Here, what I call « color transfer » is the process of modifying the colors of an image A by replacing them by the colors of another image B (reference image) so that the modified image A’ gets the same “color ambiance” than the reference image B, and this is being done in a fully automatic way. This is an ill-posed problem, quite complex to solve, and there is already a lot of scientific papers published on this topic (like this one for instance). The main issue consists in generating a new image A’ that keeps a “natural” aspect, without creating synthetic-looking flat color areas, or on the contrary high color discontinuities that are not initially present in the original image A. In short, this is far from being trivial to solve.

I’ve worked on this topic recently (with one of my colleagues Julien Rabin), and we’ve designed a pretty cool algorithm for transferring colors from an image to another. It’s not perfect of course, but this is really a good start. I’ve implemented the algorithm and put it in G’MIC, available from the plug-in filter Colors / Transfer colors [advanced]. This filter requires several input layers, one being the reference image B containing the colors to transfer to the other layers. Here is how it looks when run from the G’MIC plug-in for GIMP, with the original color image (on the left in the preview window), the reference image (bottom-left) and the filter outcome (on the right).

_Fig.2.5. Aperçu du filtre de transfert de couleurs dans le greffon G'MIC pour GIMP._

Fig.2.5. Overview of the new Color transfer filter in the G’MIC plug-in for GIMP.

Sometimes, this filter gives awesome results! Two color transfer examples are illustrated below. Of course, don’t expect it to work nicely with pathological cases (as transferring a colorful image to a monochrome one for instance). But it’s not lying to say it works already quite well. And the best of all, it is quite fast to render.

gmic_transfer6

_Fig.2.6. Deux exemples de résultats de transfert de couleurs d'une photo de référence (image du milieu) vers une photo source (image de gauche). Les images de droite sont les résultats générés par l'algorithme de transfert de G'MIC._

Fig.2.6. Two different color transfer examples from a reference picture (middle) to a color image (left). The images on the right give the outcome of the G’MIC color transfer algorithm.

I’ve also recorded a quick video tutorial that shows how this kind of results can be obtained easily from the G’MIC plug-in for GIMP. We can also think of other usages for this interesting new algorithm, like the homogenization of colors between successive frames of a video, or ensuring the color stability of a stereoscopic pair for instance. Here is the video :

2.5. Website for analog film emulation

I have the pleasure of being a friend with Pat David, an american photograph who is very active in the libre graphics community, sharing his experience and tutorials on his must-read blog. I met him at LGM’2014 in Leipzig, and honestly, many useful G’MIC filters have been suggested or improved by him. Particularly, all the filters in the Film emulation category could not have been done without his work on the design of color transfer functions (a.k.a. CLUTs). Below is a figure that shows a small subset of the 300+ color transformations concocted by Pat that has been made available in G’MIC.

_Fig.2.7. Aperçu de quelques résultats de simulation de films argentiques, disponibles dans G'MIC._ (Cette image provient du site web de Pat David : http://blog.patdavid.net/2013/08/film-emulation-presets-in-gmic-gimp.html)

Fig.2.7. Overview of some analog film emulation outcomes, available in G’MIC.
(This image comes from the Pat David‘s blog)

Once added (two years ago), these filters raised much interest in the photo retouching community, and have been re-implemented in RawTherapee. To make these filters even more accessible, we have recently set up a web page dedicated to Film Emulation, where you can see/compare all the available presets and download the corresponding CLUT files. Note that you can also find the G’MIC Film Emulation filters in the G’MIC Online web site, and try them live directly in your web browser. Here again, analog film emulation was somehow a missing feature in the libre software world and the G’MIC infrastructure made it easy to fill this gap (who needs DXO FilmPack anymore now ? ;)).

3. An algorithm for foreground/background extraction.

In photo retouching, it is not uncommon to differentiate the processing of foreground and background objects. So one often needs to select the foreground object first (using the “lasso” tool for instance) before doing anything else. For objects with a complex geometry, this is a tedious task. G’MIC now integrates a user-guided segmentation algorithm to speed up this foreground extraction work. The G’MIC command -x_segment (using the CLI interface) and the plug-in filter Contours / Extract foreground [interactive] let the user play with this algorithm.

This works exactly the same way as the Comics colorization filter, except that the colored key-points are replaced by labelled key-points, with labels being either “foreground” or “background”. Then, the extraction algorithm interpolates those labels all inside the image, taking care of the contours, and deduce a foreground/background binary map. Thus, the filter is able to decompose an image into two layers, one with the foreground pixels only, and the other with the remaining pixels (background). The figure below shows a typical use of this filter : Starting from a color image of a flower (top-left), the user put few keypoints on it (top-right) and the extraction algorithm is able to generate the two background / foreground layers on the last row, from this very sparse set of points.

Doing this takes only few seconds, while the same operation made manually would have been much much longer to perform (the flower’s contour being not simple to cut).

_Fig.3.1. Décomposition avant-plan / arrière plan avec le filtre « Contours / Extract foreground » de G'MIC._

Fig.3.1. Foreground / background user-guided decomposition of an image, with the filter « Contours / Extract foreground » of G’MIC.

After that, it is quite easy to process the background and foreground separately. Here for instance, I’ve just changed the hue and saturation of the foreground layer a little bit, in order to get a more purple flower without changing the colors of the background. Selecting objects in images is a common operation in photo retouching, I’ll let you imagine in how many cases this filter can be useful.

_Fig.3.2. Après modification de la teinte et de la saturation des couleurs de l'avant-plan uniquement._

Fig.3.2. The same image, after changes on the hue and saturation applied on the extracted foreground only.

I’ve done another G’MIC video tutorial  to illustrate the whole process in real-time, using the G’MIC plug-in for GIMP:

4. Some new artistic filters.

G’MIC has always been a source of dozens of artistic filters. Below is an overview of some of the newest additions.

4.1. Engrave effect.

The filter Black & White / Engrave tries to transform an image into an etching. The high number of parameters lets you precisely tune the rendering done by the algorithm, and many different effects may be obtained with this single filter.

Fig.4.1. Overview of the « Engrave » filter, available in the G’MIC plug-in for GIMP.

What I find particularly interesting with this filter is the ability it has (with properly chosen parameters) to convert photographs into comics-like renderings. This is illustrated with the two images below.

_Fig.4.2. Conversion de photo (en haut à gauche) sous forme de dessin façon Comics, avec le filtre "Engrave" de G'MIC._

Fig.4.2. Photo to Comics conversion, using the Engrave filter from G’MIC.

_Fig.4.3. Conversion d'une photo de chimpanzé (à gauche) sous forme de dessin façon Comics (à droite), avec le filtre "Engrave" de G'MIC._

Fig.4.3. Another example of photo to Comics conversion, with the Engrave filter from G’MIC.

I’ve also recorded a video tutorial to show the different steps used to generate this kind of results. It takes only few seconds to achieve (maybe minutes for the slowest of you!).

4.2. Delaunay triangulation.

The algorithm of Delaunay triangulation has been added to G’MIC (through the -delaunay3d command). A new filter, namely Artistic / Polygonize [delaunay] inside the plug-in for GIMP uses it to transform color images into nice geometric abstraction. Each generated triangle color can be random, constant or related to the pixels under the triangle. The triangulation tries to stick to the image contours (more or less, depending on the chosen parameters).

_Fig.4.4. Aperçu du filtre "Polygone [delaunay]" et application sur une image pour un rendu de type "vitrail"._

Fig.4.4. Overview of the Polygone [delaunay] filter, and application on a color image to simulate a stained glass effect.

Applying this filter on image sequences gives also nice outcomes. I wonder if importing these triangulated images into Blender could be of any interest (even better, a full G’MIC plug-in for Blender, maybe ? :p )

_Fig.4.5. Application de la triangulation de Delaunay sur une séquence d'images._

Fig.4.5. Applying the Delaunay triangulation on an image sequence.

4.3. Other artistic filters.

As you can see, G’MIC is a very active project, and the number of available artistic filters increases every day. Thus, I can’t describe in details all the new filters added these ten last months, and because a picture is worth a thousand words, you will find below a quick overview of some of them.

_Fig.4.6. Aperçu du filtre "Arrays & Tiles / Grid [hexagonal]", potentiellement utile pour les créateurs de cartes des jeux de type Wargames ?_

Fig.4.6. Overview of the filter Arrays & Tiles / Grid [hexagonal], maybe useful for wargames map makers ?

_Fig.4.7. Aperçu du filtre "Patterns / Crystal" qui transforme vos images en cristaux multicolores._

Fig.4.7. Overview of the filter Patterns / Crystal which convert your images into colorful crystals.

_Fig.4.8. Aperçu du filtre "Rendering / Lightning" qui dessine un éclair dans une image._

Fig.4.8. Overview of the filter Rendering / Lightning which draws lightnings in your images.

_Fig.4.9. Aperçu du filtre "Arrays & Tiles / Ministeck" qui transforme vos images en représentation de type [Ministeck](http://de.wikipedia.org/wiki/Ministeck) (jeu pour enfants)._

Fig.4.9. Overview of the filter Arrays & Tiles / Ministeck which transforms your images into Ministeck games (child games).

_Fig.4.10. Aperçu du filtre "Sequences / Spatial transition", qui prend en entrée plusieurs calques et qui génère une séquence d'images de transition entre chaque calque consécutif, avec des motifs de transition réglables._

Fig.4.10. Overview of the filter Sequences / Spatial transition, which takes several layers as input, and generates an image sequence corresponding to a custom transition between each consecutive layers.

5. A quick view of the other improvements

Of course, the few new filters I’ve presented above are only a small part of the overall work that has been done on G’MIC (the most visible part). Here is a list of other notable (mostly technical) improvements:

5.1. Global improvements and the libgmic library.

  • A lot of work has been done to clean and optimize the entire source code. The size of the libgmic library has been drastically reduced (currently less than 5 MB), with an improved C++ API. Using some of the C++11 features (rvalue-references) allows me to avoid temporary buffer copies which is really nice in the context of image processing as the allocated memory may become huge quickly. With the help of Lukas Tvrdyone of the Krita developers, we have been able to improve the compilation of the project a little bit on Windows (with Visual Studio), and run various code analysis tools to detect and fix memory leaks and strange behavior. Many sessions of valgrind, gprof, g++ with the -fsanitize=address option, PVS Studio has successfully led to effective code improvements. Not very visible or exciting for me, but probably very satisfying for the final user :) As a consequence, I’ve decided the code was clean enough to regularly propose G’MIC pre-releases that are considered as stable enough while getting the latest features developed. The method of releasing G’MIC is now closer to a Rolling Release scheme.
  • The compilation of the default G’MIC binaries on Windows has been improved. It now uses the latest g++-4.9.2 / MinGW as the default compiler, and new installers for Windows 32 bits / 64 bits are now available.
  • G’MIC has new commands to compress/uncompress arbitrary data on fly (through the use of the zlib). So now, the embedded G’MIC commands are stored in a compressed form and the Internet updates are now faster.
  • The G’MIC project got its own domain name http://gmic.eu, independent from Sourceforge. We have improved the web pages a lot, particularly thanks to the wonderful tutorials on the use of the G’MIC command-line tool, written by Garry Osgood (again, thanks Garry!). This is clearly a page to bookmark for people who slowly want to learn the basics of G’MIC.

5.2. New possibilities for image filtering.

A bunch of new commands dedicated to image filtering, and their associated filters in the G’MIC plug-in for GIMP have been added, for image sharpening (Mighty Details), deconvolution by an arbitrary kernel (Richardson-Lucy algorithm), guided filtering, Fast Non-Local Means, Perona-Malik anisotropic diffusion, box filtering, DCT transforms, and so on. Honestly, G’MIC now implements so many different image filtering techniques that it is a must-have for any image processing person (even only for the denoising techniques it provides). For the users of the plug-in, this means potentially more diverse filters in the future. The figure below shows an application of the Mighty Details filter on a portrait, to simulate a kind of Dragan effect.

_Fig.5.1 Application du filtre « Mighty Details » sur un portrait pour le rehaussement de détails._

Fig.5.1 Application of the Mighty Details filter on a portrait to enhance details.

5.3. Improvements of the G’MIC plug-in for GIMP.

The plug-in for GIMP being the most used interface of the G’MIC project, it was necessary to make a lot of improvements:

  • The plug-in now has an automatic update system which ensures you always get the latest new filters and bug fixes. At the date of the 1.6.2.0 release, the plug-in counts 430 different filters, plus 209 filters still considered in development (in the Testing/ category). For a plug-in taking 5.5 Mb on the disk, let us say this makes an awesome ratio of number of filters / disk space. A complete list of the filters available in the plug-in for GIMP can be seen here.
  • During the execution of a filter that takes a bit of time, the plug-in now displays information about the elapsed time and the used memory in the progress bar.
_Fig.5.2. Affichage des ressources prises par un filtre dans le greffon G'MIC pour GIMP._

Fig.5.2. Display of the used resources in the progress bar when running a G’MIC filter in GIMP.

  • The plug-in GUI has been slightly improved: the preview window is more accurate when dealing with multi-layer filters. The filters of the plug-in can be interactive now: they can open their own display window, and handle user events. A filter can also decide to modify the value of its parameters. All of this allowed me to code the new interactive filters presented below (as the one for the Comics colorization, for the color curves in an arbitrary color space, or for the interactive foreground extraction). This is something that was not possible before, except when using the command-line interface of G’MIC.
  • G’MIC filter now has knowledge about all the input layer properties, such as their position, their opacities and their blending mode. A filter can modify these properties for the output layers too.
  • A new filter About / User satisfaction survey has been added. You can give some anonymous information about your use of G’MIC. The result of this survey is visible here (and updated in almost real-time). Of course, this result image is generated by a G’MIC command itself :).

5.4. G’MIC without any limits : Using the command-line interface.

gmic is the command-line interface of G’MIC, usable from a shell. That is undoubtly the most powerful G’MIC interface, without any limitations inherent to the input/output constraints of the other interfaces (like being limited to 2D images, 8bits / channels, 4 channels max on GIMP, or a single input image in the G’MIC Online site, etc.). With the CLI interface, you can load/save/manipulate sequences of 3D volumetric images, with an arbitrary number of channels, float-valued, without limits other than the available memory. The CLI interface of G’MIC has also gotten a lot of improvements:

  • It now uses by default OpenCV to load/save image sequences. We can then apply image processing algorithms frame by frame on video files (command -apply_video), on webcam streams (command -apply_camera), or generate video files from still images for instance. New commands -video2files and -files2video have been added to easily decompose/recompose video files into/from several frames. Processing video files is almost childs play with G’MIC now. The G’MIC manual (command -help) has been also improved, with a colored output, proposed corrections in case of typos, links to tutorials when available, etc..
_Fig.5.3. Aperçu du fonctionnement de l'aide en ligne de commande._

Fig.5.3. Overview of the help command with the command line interface gmic.

  • The invokation of gmic on the command line without any arguments makes it enter a demo mode, where one can select between a lot of different small interactive animations from a menu, to get an idea of what G’MIC is capable of. A good occasion to play Pacman, Tetris or the minesweeper on the pretext to test a new serious image processing software!
_Fig.5.4. Aperçu de G'MIC, lancé en mode démo !_

Fig.5.4. Overview of the demo mode of G’MIC.

5.5. Other G’MIC interfaces being developed.

  • ZArt is another available G’MIC interface initially developed as a demonstration platform, for processing images from the webcam. We use it classically for public exhibitions like the “Science Festival” we have once a year in France. It is very convenient to illustrate what image processing are, and what they can do, for the general public. ZArt has been significantly improved. First, it is now able to import any kind of video files (instead of supporting only webcam streams). Second, most of the filters from the GIMP plug-in has been imported to ZArt. This video shows these new possibilities, for instance with real-time emulation of analog films of a video stream. This could be really interesting to get a G’MIC plug-in working for video editing software, as the brand new Natron for instance. I’m already in touch with some of the Natron developers, so maybe this is something doable in the future. Note also that the code of ZArt have been upgraded to be compatible with the newest Qt 5.
_Fig.5.5. Aperçu du logiciel ZArt traitant une vidéo en temps réel._

Fig.5.5. Overview of ZArt, a G’MIC-based video processing software.

  • It has been a long time now since Lukas Tvrdy, member of the Krita devs, has started to develop a G’MIC plug-in for Krita. This plug-in is definitely promising. It contains all of the elements of the current G’MIC plug-in for GIMP, and it already works quite well on GNU/Linux. We are discussing to try solving some problems remaining for the compilation of the plug-in on Windows. Using the Krita plug-in is really nice because it is able to generate 16bits/channel images, or even more (32bits float-valued), whereas the plug-in for GIMP is currently limited to 8bits/channel for the plug-in API. This may change in the future with GEGL, but right now, there is not enough documentation available to ease the support of GEGL for the G’MIC plug-in. So now, if you want to process 16bits/channel images, you can do this only through the command-line tool gmic, or the G’MIC plug-in for Krita.
_Fig.5.6. Aperçu du greffon G'MIC pour Krita en action._ (Cette image provient du site web de Krita : https://krita.org/wp-content/uploads/2014/11/gmic-preview.png)

Fig.5.6. Overview of the G’MIC plug-in running on Krita.
(This image has been borrowed from the Krita website : https://krita.org/wp-content/uploads/2014/11/gmic-preview.png)

  • The integration of some of the G’MIC image processing algorithms has also begun in PhotoFlow. This is a quite recent and promising software which focuses on the development of RAW images into JPEG, as well as on photo retouching. This is a project done by Carmelo DrRaw, check out this project, it is really nice.
_Fig.5.7. Aperçu du filtre "Dream Smoothing" de G'MIC inclus dans PhotoFlow._ (Cette image provient du site web de PhotoFlow : http://photoflowblog.blogspot.fr/2014/12/news-gmic-dream-smoothing-filter-added.html)

Fig.5.7. Overview of the Dream Smoothing filter from G’MIC, included in PhotoFlow.
(This image has been borrowed from the PhotoFlow website : http://photoflowblog.blogspot.fr/2014/12/news-gmic-dream-smoothing-filter-added.html)

6. Perspectives and Conclusions.

Here we are! That was a quite complete summary of what happened the ten last months around the G’MIC project. All this has been made possible also thanks to many different contributors (coders, artists, and users in general), whose number is increasing everyday. Thanks again to them!

In the future, I still dream of more G’MIC integration in other open-source software. A plug-in for Natron could be really great to explore the possibilities of video processing with G’MIC. A plug-in for Blender would be really great too! Having a G’MIC-based GEGL node is again a great idea, for the upcoming GIMP 3.0, but we have probably a few decades to decide :) If you feel you can help in these topics, feel free to contact me. One thing I’m sure: I won’t be bored by G’MIC in the next few years!

Having said that, I think it’s time to go back working on G’MIC again. See you next time, for other exciting news about the G’MIC project! (and if you appreciate what we are doing or just the time spent, you are free to send us a nice postcard from your place, or give a few bucks for the hot chocolates I need to keep my mind clear working on G’MIC, through the donation page). Thank you!

April 18, 2015

Arbitrary 3D contour convex partitioning heuristic

Hi all

before
Some time ago I’ve developed an automatic quad fill routine to tessellate an arbitrary 3D contour into quads, as even as possible. That algorithm is quite good indeed but suboptimal for L-shapes, T-shapes and in general for complex concave contours.
So these days I’m quite busy trying to figure out an algorithm for spatial splitting the contour. After squeezing my brain finally found a very nice heuristic to split the contour at corner feature points. I’m exited because is very powerful and works in any arbitrary 3D spatial shapes. This algorithm will serve beyond the QuadFill tool and Im figuring out few interesting new geometric tools for it!
Here are some screenshots of the intermediate process with visual debugging.

1

1

2

2

Getting betterGot it


The chore of tuning PIDs

Tuning PIDs is one of those things you really don’t want to do, but can’t avoid it in the acrobatic quad space. Flying camera operators don’t usually have to deal with this, but the power/weight ratio is so varied in the world of acro flying you’ll have hard time avoiding it there. Having a multirotor “locked in” for doing fast spins is a must. Milliseconds count.

FPV Weekend

So what is PID tuning? The flight controller’s job is to maintain a certain position of the craft. It has sensors to tell it how the craft is angled and how it’s accellerating, and there’s external forces acting on the quad. Gravity, wind. Then there’s a human giving it RC orders to change its state. All this happens in a PID loop. The FC either wants to maintain its position or is given an updated position. That’s the target. All the sensors give it the actual current state. Magic happens here, as the controller gives orders to individual ESCs to spin the motors so we get to there. Then we look at what the sensors say again. Rinse and repeat.

PID loop is actually a common process you can find in all sorts of computer controllers. Even something as simple as a thermostat does this. You have a temperature sensor and you drive a heater or an air conditioner to reach and maintain a target state.

The trick to a solid control is to apply just the right amount of action to get to our target state. If there is difference between where we are and where we want to be, we need to apply some force. If this difference is smaller, only a small force is required. If it’s big, a powerful force is needed. This is essentially what the P means, proprotional. In most cases, as a controller, you are truly unhappy if you are elsewhere to where you were told to be. You want to correct this difference fast, so you provide a high proportional value/force. However, in the case of a miniquad, the momentum will continue pulling you when you reached your target point and don’t apply any force anymore. At this point the difference occurs again and the controller will start correcting the craft pulling it back in the opposite direction. This results in an unstable state as the controller will be bouncing the quad back and forth, never reaching the target state of “not having to do anything”. The P is too big. So what you need is a value that’s high enough to correct the difference fast, but not as much so the momentum gets you oscillating around the target.

So if we found our P value, why do we need to bother with anything else? Well sadly pushing air around with props is a complicated way to remain stationary. The difference between where you are and where you want to be isn’t just determined by the aircraft itself. There are external forces that are in play and those change. We can get a gust of wind. So what we do is we correct that P value based on the changed conditions. Suddenly we don’t have a fixed P contoller, we have one that has variable P. Let’s move on how P is dynamically corrected.

The integral part of the controller corrects the difference that suddenly appears due to the new external forces coming into play. I would probably do a better job explaining this if I enjoyed maths, but don’t hate me, I’m a graphics designer. Magic maths corrects this offset. Having just the proprotional and integral part of the corrective measure is enough to form a capable controller perfectly able to provide a stable system.

However for something as dynamic as an acrobatic flight controller, you want to improve on the final stage of the correction where you are close to reaching your target after a fast dramatic correction. Typically what a PI controller would get you is a bit of a wobble at the end. To correct it, we have the derivative part of the correction. It’s a sort of a predictive measure to lower the P as you’re getting close to the target state. D gives you the nice smooth “locked in” feeling, despite having high P and I values, giving you really fast corrective ability.

There are three major control motions of a quad that the FC needs to worry about. Pitch for forward motion is controlled by spinning the back motors faster than the front two motors thus angling the quad forward. Roll motion is achieved exactly the same way, but with the two motors on one side spinning faster than the other two. The last motion is spinning in the Z axis, the yaw. That is achieved by torgue and the fact than the propellers and motors spin in different directions. Typically the front left and back right motor are clockwise spinning and the front right and back left motor are spinning counter clockwise. Thus spinning up/accellerating the front left and back right motors will turn the whole craft counter clockwise (counter motion).

I prepared a little cheat sheet on how to go about tuning PIDs on the NAZE32 board. Before you start though, make sure you set the PID looptime as low as your ESC allow. Usually ESC send the pulses 400 times a second which is equivalent to a looptime of 2500. The more expensive ESC can do 600Hz and some, such as the miniscule KISS ESCs, can go as low as 1200.

ESC refresh rate NAZE32 Looptime
286Hz 3500
333Hz 3000
400Hz 2500
500Hz 2000
600Hz 1600

You do this in the CLI tab of baseflight:

set looptime=2500
save

Hope this has been helpful for some as it was for me :).

Quick Guide on PID tuning

April 16, 2015

I Love Small Town Papers

I've always loved small-town newspapers. Now I have one as a local paper (though more often, I read the online Los Alamos Daily Post. The front page of the Los Alamos Monitor yesterday particularly caught my eye:

[Los Alamos Monitor front page]

I'm not sure how they decide when to include national news along with the local news; often there are no national stories, but yesterday I guess this story was important enough to make the cut. And judging by font sizes, it was considered more important than the high school debate team's bake sale, but of the same importance as the Youth Leadership group's day for kids to meet fire and police reps and do arts and crafts. (Why this is called "Wild Day" is not explained in the article.)

Meanwhile, here are a few images from a hike at Bandelier National Monument: first, a view of the Tyuonyi Pueblo ruins from above (click for a larger version):

[View of Tyuonyi Pueblo ruins from above]

[Petroglyphs on the rim of Alamo Canyon] Some petroglyphs on the wall of Alamo Canyon. We initially called them spirals but they're actually all concentric circles, plus one handprint.

[Unusually artistic cairn in Lummis Canyon] And finally, a cairn guarding the bottom of Lummis Canyon. All the cairns along this trail were fairly elaborate and artistic, but this one was definitely the winner.

April 14, 2015

Minis and FPV

FPV

I’ve got some time into the hobby to actually share some experiences that could perhaps help someone who is just starting.

Cheap parts

I like cheap parts just like the next guy, but in the case of electronics, avoid it. Frame is one thing. Get the ZMR250. Yes it won’t be near as tough as the original Blackout, but it will do the job just fine for a few crashes. Rebuilding aside, you can get about 4 for the price of the original. Then the plates give. But electronics is a whole new category. If you buy cheap ESCs they will work fine. Until they smoke mid flight. They will claim to deal with 4S voltage fine. Until you actually attach a 4S and blue smoke makes its appearance. Or you get a random motor/ESC sync issue. And for FPV, when a component dies mid flight, it’s the end of the story if it’s the drive (motor/esc) or the VTX or a board cam.

No need to go straight to T-motor, which usually means paying twice as much of a comparable competitor. But avoid the really cheap sub $10 motors like RCX, RCTimer (although they make some decent bigger motors), generic chinese ebay stuff. In case of motors, paying $20 for a motor means it’s going to be balanced and the pain of vibration aleviated. Vibrations for minis don’t just ruin the footage due to rolling shutter. They actually mess up the IMU in the FC considerably. I like Sunnysky x2204s 2300kv for a 3S setup and the Cobra 2204 1960kv for a 4S. Also rather cheap DYS 1806 seem really well balanced.

Embrace the rate

Rate mode is giving up the auto-leveling of the flight controller and doing it yourself. I can’t imagine flying line of sight (LOS) on rate, but for first person view (FPV) there is no other way. NAZE32 has a cool mode called HORI that allows you to do flips and rolls really easily as it will rebalance it for you, but flying HORI will never get you the floaty smoothness that makes you feel like a bird. The footage will always have this jerky quality to it. On rate a tiny little 220 quad will feel like a 2 meter glider, but will fit inbetween those trees. I was flying hori when doing park proximity, but it was a time wasted. Go rate straight from the ground, you will have way more fun.

Receiver woes

For the flying camera kites, it’s usually fine to keep stuff dangling. Not for minis. Anything that could, will get chopped off by the mighty blades. These things are spinning so fast than antennas have no chance and if your VTX gets loose, it will get seriously messed up as well. You would not believe what a piece of plastic can do when it’s spinning 26 thousand times a minute. On the other hand you can’t bury your receiver antenna on the frame. Carbon fibre is super strong, but also super RF insulating. So you have to bring it outside as much as possible. Those two don’t quite go together, but the best practice I found was taping one of the antennas to the bottom of the craft and have the other stick out sideways on top. The cheapest and best way I found was using a zip tie to hold the angle and heatshrink the antenna onto it. Looks decent and holds way better than a straw os somesuch.

Next time we’ll dive into PID tuning, the most annoying part of the hobby (apart from looking for a crashed bird ;).

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don’t see the problem at all, and use any computer to log in to personal accounts. I actually found myself not being able to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services in some cases at least.

Example

You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document for security reasons. You’re not too bothered about your personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I have every now and then, and I’m sure there many other similar situations where you have to log in to a service to get some kind of file, but you don’t really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or have any combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser, there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on lockee.me. The source is also available if you’d like to run your own instance or contribute.

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option or you prefer not to use them, specially if you’re a Free and Open Source project and/or Linux distribution.

Here are some tips to improve your project’s download page flow. It’s based on confusing things I frequently run into when trying to download a FOSS project and think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are: “-gui”, “-qt”, “gtk-”, “py-”, they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why you should care. This sounds like stating the obvious, but this information is often buried in other less important information, like which programming language and/or free software license is used. Make this section prominent on the website and keep it down on the buzzwords.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to specify to distinguish between 32-bit and 64-bit systems, however they do a bad job at this. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.

Timestamps

Timestamps are a good way to find out if a project is actively maintained, you can’t (usually) tell from a version number when the software was released. Use human friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort by time and clearly mark which is the latest version.

File sizes

Again, keep it human readable. I’ve seen instances where the file size are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t ditch strange random character strings on pages. Educate, or get out of the way.

Keep in mind search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signatures checks rather pointless, and could even give a false sense of security.

Compression formats

Again something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, specially on Linux, we’re presented with a choice of compression formats that hardly matter in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here though is to not put people up with irrelevant choices and clutter the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time required picking one is probably going to outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

April 13, 2015

Discourse Forum on PIXLS.US

After a bunch of hard work by someone not me (darix), there's finally a neat solution for commenting on PIXLS.US. An awesome side effect is that we get a great forum along with it.

Discourse

On the advice from the same guy that convinced me to build PIXLS using a static site generator (see above), I ended up looking into, and finally going with, a Discourse forum.


The actual forum location will be at:



What is extra neat about the way this forum works, though, is the embedding. For every post on the pixls.us blog (or an article), the forum will pickup on the post and will automatically create topics on the forum that coincide with the posts. Some small embedding code on the website allows these topic replies to show up at the end of a post similar to comments.

For instance, see the end of this blog post to see the embedding in action!

Come on by!

I personally really like this new forum software, both for the easy embedding, but also the fact that we own the data ourselves and are not having to farm it out through a third party service. I have enabled third party oauth logins if anyone is ok with using them (but are not required to - normal registration with email through us is fine of course).

I like the idea of being able to lower the barrier to participating in a community/forum, and the ability to auth against google or twitter for creating an account significantly lowers that friction I think.

Some Thanks are in Order

It's important to me to point out that being able to host PIXLS.US and now the forum is entirely due to the generosity of folks visiting my blog here. All those cool froods that take a minute to click an ad help offset the server costs, and the ridiculously generous folks that donate money (you know who you are) are amazing.

As such, their generosity means I can afford to bootstrap the site and forums for a little while (without having to dip into the wife goodwill fund...).

What does this mean to the average user? Thanks to the folks that follow ads here or donate, PIXLS.US and the forum is ad-free. Woohoo!

Paths: Stroking and Offsetting

Path stroking and offsetting are two intertwined topics; stroking is often implemented by path offsetting. This post explores some of the problems encountered with these path operations.

Stroking: It’s not as easy as it looks.

What could be easier that stroking a path? It’s a fundamental concept in all graphics libraries. You construct a path:

in PostScript:

newpath
100 100 moveto
150 100 lineto
10 setlinewidth
stroke

in SVG:

<path d="M 100,100 150,100" stroke-width="10"/>

and voila, you have a horizontal path, 50 pixels long, that is 10 pixels wide.

Hmm, if only it were that easy. It turns out that stroking an arbitrary path can be quite complicated. Different graphics libraries can give quite different results.

A simple Bezier path segment with high curvature at one end.

A Bezier path segment with high curvature at the end. Web browsers differ on the rendering. (SVG)

Firefox's rendering of the circle. It appears solid.
Chome's rendering of the circle. It appears like a donut.

Rendering of above path: Firefox (left/top), Chrome (right/bottom). (PNG)

There are two different ways to stroke a path. The first method is to pass a line segment of length ‘stroke-width’, centered on and perpendicular to the path, from one end to the other. Any pixels the line crosses are part of the stroke. This seems to be what Firefox does. (An equivalent method is to pass a circle of diameter ‘stroke-width’ centered on the path and then clip the semi-circles at the ends.) The second method is to construct two paths, offset by half the ‘stroke-width’ on each side of the original path and then fill the area between the two paths. This seems to be what Chrome does.

A simple Bezier path segment with high curvature at one end.

A Bezier path segment with high curvature at the end. Stroke constructed by offsetting path. Red: original path, blue: offset paths. (SVG)

Rendering engines appear to fall into one of these two camps:

Sweep a line:
Firefox, Adobe Reader
Offset paths:
Chrome, Inkscape (Cairo), Opera (Presto), Evince, Batik, rsvg

The difference can be also be seen in circular paths.

Two circular paths with strokes of different widths.

Two same size circular paths with different stroke widths. When one-half the stroke width exceeds the circle radius (right circle), web browsers differ in their rendering. (SVG)

Firefox's rendering of the circle. It appears solid.
Chome's rendering of the circle. It appears like a doughnut.

Rendering of a circular path when one-half the stroke width is greater than the radius in: Firefox (left/top), Chrome (right/bottom). (PNG)

When using the Offset paths method, an inner path is always created. As the direction of this path is the same regardless of the stroke width, one cannot differentiate between the case where the stroke width is less than one-half the radius and the case where it is not. This can be seen in the animation below:

Two circular paths with strokes of different widths. The drawing of the stroke is animated.

Stroking the path. The arrows indicate the direction of the offset paths. If the drawing is not animated, view the image by itself. (SVG)

Interestingly, some renderers draw a filled circle when one-half the ‘stroke-width’ is greater than the radius for an SVG <circle> (i.e. not a circular <path>) while others still draw a doughnut. However, for the SVG <rect> element, the rectangles are always drawn filled if the ‘stroke-width’ is greater than either the ‘width’ or ‘height’ (at least in the renderers I tested).

So what does the SVG specification say about how to stroke a path? Nothing…! One can look to PostScript and PDF on which SVG is partially based for a hint on what it should say. The PostScript and PDF specifications say the same thing. From the PDF 1.7 reference:

The S operator paints a line along the current
path. The stroked line follows each straight or curved segment
in the path, centered on the segment with sides parallel to
it. Each of the path’s subpaths is treated separately…

This seems to indicate that the sweeping the line technique is what is expected and indeed, Adobe’s own product, Adobe Reader, appears to do just that.

Stroke Alignment

Designers often want more control over how a stroke is positioned: only on the inside, only on the outside, or some arbitrary ratio of the two. The new SVG ‘stroke-alignment‘ property offers this control. For a closed path, it is relatively easy to figure out how this property should behave:

A figure eight path showing various methods for offsetting.

Top: the original path. Middle: left: stroke inside; right: stroke outside. Bottom: left: stroke to left; right: stroke to right.

For an open path, it is not quite so easy. What is inside, what is outside? One can define the terms by looking at what is filled: inside is in the fill, outside is not in the fill. With this definition, a single straight line segment would render nothing for an ‘inside’ stroke and a stroke on both sides for an ‘outside’ stroke. The SVG specification has a slightly different definition for ‘outside’ (see figure). For an open path it may make more sense to talk about left/right rather than inside/outside.

A figure eight path showing various methods for offsetting.

Top to bottom: Default stroke. Fill area (in gray). Inside (according to SVG specification?). Outside (implemented here by masking). Inside (another interpretation). Outside (according to SVG specification?). Stroke on left (round end cap in pink).

Handling line joins is fairly straight forward. End caps, at least ’round’ ones, are another matter. Does one draw half an end cap? Or does the radius of the end cap match the width of the (shifted) stroke?

Left: straight lines, right: curved lines.

Round end caps. Top to bottom: Default stroke. Stroke alignment ‘outside’, end-cap radius doubled. Stroke alignment ‘outside’, end-cap radius same as normal.

The ‘stroke-alignment’ property was recently removed from the SVG 2 specification draft and moved into a separate SVG Strokes module, partly due to the difficulty in specifying exactly how it should behave.

The ‘stroke-alignment’ ‘inside’/’outside’ values can be simulated via other methods. The new ‘paint-order‘ property allows one to paint the stroke before the fill and thus simulating stroking only the outside of the path (this only works for opaque fill). A mask can also be used to simulate stroking the outside of path. A clip path can be used to simulate stroking the inside of a path.

Offset Paths

We’ve seen that offsetting a path can be used for constructing strokes. What about offsetting a path for the purpose of creating a new path? This is quite useful in mapping. For example you might want to show multiple bus routes going along a road with different offsets for each route. More stylistically, you could produce the shadowing seen around land masses in older, hand-drawn maps.

Section of map showing lines ringing a group of islands.

An excerpt from a submarine cable map showing the use of offset paths to shade around land masses. Note also the use of inside strokes to define country boundaries.

Offsetting paths is in practice extremely tricky! Here are a few of the problems:

  1. Offsets of Bezier segments are not Beziers; in fact they are 10th-order polynomials. In practice, one can do a pretty good job of estimating the offset by breaking up a Bezier path into smaller segments.
  2. Offset paths can have loops at cusps.
  3. Offset paths may require breaking apart left and right offset paths and recombining to form outset and inset paths. It can be difficult to get this right.

Entire scientific papers are written on this topic.[1]

Here is a simple example path with offsets both inside and outside:

Path with a series of offsets.

Left: insets, right: outsets. Red path is original.

In this case, the outsets correspond to the outer edge of a stroked path with appropriate width when the ‘stroke-linejoin’ type is ’round’. The insets correspond to the inner edge of such strokes. Taking a closer look at the offset paths shows a number of cusp loops:

Complex path with offsets.

The same original path as in the above figure. Left: the light blue region is created by stroking the original path. As can be seen it matches the corresponding outset (blue) and inset (green) paths. Right: The raw offset paths used to construct the visible outset and inset paths. In this case, the outset path is constructed from the raw outset path (blue) and the inset path is constructed from the raw inset path (green). Cusp loops and overlaps have been removed.

Determining what is outset or inset becomes more difficult as a path loops back on itself. Both the outset and inset paths can consist of parts of both the right-offset and left-offset paths as shown below:

A path that loops back on itself three times.

Left: The left-offset path (blue) and the right-offset path (green), relative to the path’s direction (clock-wise). Right: The resulting outset path (blue) and inset path (green).

Here’s an example where Inkscape’s Linked Offset function gets it wrong:

A circular path segment on top of a figure eight segment.

The resulting outset path (blue) and inset path (green) as found by Inkscape’s Linked Offset function.

The previous examples assumed that the line joins for outside joins are rounded. It would be desirable to be able to specify the type of join to use. This can maintain the feel of the original path.

A triangle path with 's' shaped sides with various offsets.

Left: Outset path with three different types of joins: ‘bevel’, ’round’, and ‘arcs’. Right: Outset paths with various offsets and with the ‘arcs’ line join. Note: the ‘arcs’ line join fails for the outer most path as the generated arcs do not intersect; this results in falling back to a ‘miter’ line join.

Allowing more freedom to define stroke position and being able to offset strokes are highly desirable features for designers, but as this post shows, they are not so simple to implement. Before we can add such features to SVG, we need to define robust algorithms for generating proper offset paths.

References

  1. An offset algorithm for polyline curves Xu-Zheng Liu, Jun-Hai Yong, Guo-Qin Zheng, Jia-Guang Sun.

An image for the sole purpose of having a good PNG image to show in Google+ which doesn’t support SVG images, bad Google+.

Complex path with offsets.

San Francisco Impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

April 12, 2015

OpenRaster with JPEG and SVG

OpenRaster is a file format for layered images, essentially each layer is a PNG file, there is some XML glue and it is all contained in a Zip file.

In addition to PNG some programs allow layers in other formats. MyPaint is able to import JPG and SVG layers. Drawpile has also added SVG import.

After a small change to the OpenRaster plugin for The GNU Image Manipulation Program, it will also allow non-PNG layers. The code had to be changed in any case, it needed to at least give a warning that non-PNG layers were not being loaded, instead of quietly dropping them. Allowing other layer types was more useful and easier too.
(This change only means that other file types with be imported, they will not be passed through and will be stored as PNG when the file is exported.)

April 10, 2015

Updating device firmware on Linux

If you’re a hardware vendor reading this, I’d really like to help you get your firmware updates working on Linux. Using fwupd we can already update device firmware using UEFI capsules, and also update the various ColorHug devices. I’d love to increase the types of devices supported, so if you’re interested please let me know. Thanks!

April 09, 2015

Simplifying things

social-network-links

Hi all
Don’t know if is something age related, but I’m striving now for simplicity in everything. That’s one of the reasons I’ve recently decided to perform some contacts clearance in my social networks. Is NOTHING PERSONAL, is just a bit stressful to have floating in every social network account same contacts again and again like it was not enough to have them on your email, email chats, Facebook, your phone, Google+, whatsapp, Skype, you name it…
And more importantly, 90% of those contacts are just sitting idle, or friends of friends, contacts that never actually reach you. I will always favor physical friends, or at least often contact as a selection criteria. But hey, is not big deal, there’s plenty of way to reach each other even if we are not facebook friends!


April 08, 2015

Hughes Hypnobirthing

If you’re having a baby in London, you might want to ask my wife for some help. She’s started offering one to one HypnoBirthing classes for mothers to be and birth partners designed to bring about an easier, more comfortable birthing experience.

It worked for us, and I’m super proud she’s done all the training so other people can have such a wonderful birthing experience.

released darktable 1.6.4

We are happy to announce that darktable 1.6.4 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.4
Please only use our provided packages ("darktable-1.6.4.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here's the direct link to tar.xz and dmg:
https://github.com/darktable-org/darktable/releases/download/release-1.6.4/darktable-1.6.4.tar.xz
https://github.com/darktable-org/darktable/releases/download/release-1.6.4/darktable-1.6.4.dmg

this is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.4.tar.xz
 c5f705e8164c014acf0dac2ffc5b730362068c2864622121ca6fa9f330368d2a
sha256sum darktable-1.6.4.dmg
 e5bbf00fefcf116aec0e66d1d0cf2e2396cb0b19107402d2ef70d1fa0ab375f6

General improvements:

  • major rawspeed update
  • facebook exporter update (first authentication usability should be much better now)
  • first run opencl benchmark to prevent opencl auto-activation if GPU is obviously slower than CPU
  • lensfun cornercase fixes
  • some mask cornercase fixes
  • zonesystem now updates it's gui when number of zones changes
  • spots iop updates
  • ui_last/gui_language should work more reliably now
  • internal lua updated from 5.2.3 to 5.2.4 (distro typically use their own version of lua)
  • gcc 5 should build now

Camera support:

  • Canon Digital Rebel (non-european 300D)
  • Nikon D5500 (experimental)
  • Olympus E-M5 Mark II (experimental)
  • Samsung NX500 (experimental)

White balance presets:

  • Sony A77 II
  • Fujifilm X-E2
  • Olympus E-M5 Mark II

Noise profiles:

  • Canon 7D Mark II

updated translations:

  • German
  • French
  • Russian
  • Danish
  • Catalan
  • Japanese
  • Dutch

April 07, 2015

Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 7 April 2015

Summary

I don’t actually have time today to post a full summary, so just a few bullet points:

  • Bronwyn, my new intern, started today so we welcomed her at the meeting and she took on her first ticket, which she’s working on right now.
  • We walked through tickets needing attention and tickets needing triage.
  • We talked about the F22 Beta readiness meeting – I will attend to represent the team this Thursday.
  • We talked about Flock and discussed more details about the topics we’d like to propose.

April 06, 2015

OpenRaster Paths (or Vectors)

Summary: plugin updated to allow round-trip of paths.

The MyPaint team are doing great work, making progress towards MyPaint 1.2, I encourage you to give it a try, build it from source or check out the nightly builds. (Recent windows build Note: the filename mypaint-1.1.1a.7z may stay the same but the date of build does change.)
The Vector Layers feature in MyPaint is particularly interesting. One downside though is that the resulting OpenRaster files with vector layers are incompatible with most existing programs. MyPaint 1.0 was one of the few programs that managed to open the file at all, presenting an error message only for the layer it was not able to import. The other programs I tested, failed to import the file at all. It would be great if OpenRaster could be extended to include vector layers and more features but it will take some careful thought and planning.

It can be challenging enough to create a new and useful feature, planning ahead or trying to keep backwards compatibility makes matters even more complicated. With that in mind I wanted to add some support for vectors to the OpenRaster plugin. Similar to my previous work to round-trip metadata in OpenRaster I found a way to round-trip Paths/Vectors that is "good enough" and that I hope will benefit users. The GNU Image Manipulation Program already allows paths to be exported in Scalable Vector Graphics (SVG) format. All paths are exported to a single file, paths.svg and are imported back from that same file. It is not ideal, but it is simple and it works.

Users can get the updated plugin immediately from the OpenRaster plugin gitorious project page. There is lots more that could be done behind the scenes, but for ordinary users I do not expect any changes as noticeable as these for a while.


Back to the code. I considered (and implemented) a more complicated approach that included changes to stack.xml, where raster layers were stored as one group, and
paths (vectors layers) as another group. This approach was better for exporting information that was compatible with MyPaint but as previously mentioned, the files were not compatible with any other existing programs.

To ensure OpenRaster files that are back compatibility it might be better to always include a PNG file as the source for every layer, and to find another way to link to other types of content, such as text or vectors, or at some distant point in the future even video. A more complicated fallback system might be useful in the long run. For example the EPUB format reuses the Open Packaging Framework (OPF) standard, any pages can be stored in multiple formats, so long as it includes a fallback to another format, ending with a fallback to a few standard baseline formats (i.e. XHTML). The OpenRaster standard has an elegant simplicity, but there is so much more it could do.

Krita 3.0

Krita 3.0 is going to be the first Qt 5.x-based Krita. April 13th, 2006, we ported Krita to Qt 4. Seventeen days after porting started, I could publish Krita 2.0 runs!":

Back then, I was young and naive and deluded enough to think that porting Krita to a new version of Qt would automatically make Krita better. Porting itself was an exciting adventure, and using new technology was fun all by itself.

But porting to Qt 4 was a complete and utter disaster and it took us many years to finally get to a Qt 4-based version of Krita that was ready for users: Krita 2.4, released April 11th, 2012. There were reasons for that beyond the mere porting, of course, including the fact that, like fools, we couldn't resist doing a complete refactoring of the basic KOffice libraries.

This time, we're not doing that. But that's not to say that I'm at all confident that we'll have a Krita 3.0 that is as good for end users as 2.9. We started porting March 6th. Right now, Krita starts, but you cannot load or save an image and you cannot use any tool. Our input handling is broken because of changes in the event filter handling in Qt. Also, ugly big fonts and no icons. Really simple things, like the list of brush engines, are broken.

I know that we have to port Krita to Qt 5, because Qt 4 is not going to be maintained for much longer, because Linux distributions want to move on, because Qt 5 is being actively developed (except for the parts, like QtWebkit that are being actively deprecated). But it's taking a lot of effort away from what really counts: making Krita better for end users.

It's like this...

One can develop software for any number of reasons: because it's fun to write code, because you get paid for it or because your software makes users happy. I'm going for the last reason. I want to make users happy to use Krita. I want users to have fun using Krita, I want it to be an efficient tool, a pleasure to use.

In order to do that, code-wise, I have to do three things: implement cool new features (including workflow improvements), fix bugs and improve Krita's performance.

Similarly, I expect people working on libraries or build tools to have as their goal making it possible for me to reach my goals: after all, I'd hope they are writing software for me to use, otherwise I'd better use something else that does help me reach my goals.

Updating our build system and struggling with that because the new way of specifying the list of directories where the compiler looks for header files isn't compatible with third party software that uses cmake, well, that does not contribute to my goal. This problem has taken four or five people each over four hours of time looking into and it hasn't been solved yet! Now realize that Calligra has 609 CMakeLists.txt files and about 300 plugins. There are over seventy libraries in Calligra.

Likewise, having to rewrite existing code because someone found a new, better way of handling boolean values in a C library for reading and writing a particular file format is not helping. Your library might be much better quality, now, code-wise, but I, as your user don't give a damn. Just like my users don't give a damn what my code looks like; it only needs to work. I only care about whether your software helps me to deliver features to my users. Don't tell me the new way is cleaner -- that's an illusion anyway. Don't insult me by telling me the new way will make my maintenance-burden smaller, because you've just added a load to it right away.

In general, any change in a library or in a build system that makes work for me without materially improving Krita for Krita's users is a waste of my time. Doubly so if it's badly documented. I resent that waste. I don't have enough time already.

Let's take this tiny example... QModelIndex::internalId() no longer returns a qint64, but a kind of a pointer abstraction, a signed integer. Well, we have some code that compared that internalId() to -1. This code was written in 2011 by someone who is no longer around. The Qt documentation used to say

"Returns a qint64 used by the model to associate the index with the internal data structure."

Now it says

"Returns a quintptr used by the model to associate the index with the internal data structure."

It might just be me... But this change isn't mentioned in C++ API changes -- so, what do we do now? And once we've done it, is the style selector for the text tool really better?

Likewise, what do I care that "QCoreApplication::setEventFilter() and QApplication::x11EventFilter/macEventFilter/qwsEventFilter/winEventFilter are replaced with QCoreApplication::installNativeEventFilter() and QCoreApplication::removeNativeEventFilter() for an API much closer to QEvent filtering." Nothing. Nada. Zilch. I just don't want to port our event filter to a new api; it worked fine, let it go on working!

I could enumerate examples until dawn, and we're only a month into porting. We've disabled all deprecation warnings, even, because they were so numerous they obscured the real errors.

So, to conclude, I suspect that it'll take at least six months before the Qt 5 port of Krita is usable by end users, and that we'll be adding new features and fixing bugs in Krita 2.9 for at least a year to come. Because if there's one thing that I desperately want to avoid, it's losing our userbase just when it's nicely growing because we spend months doing stuff none of our users gives a damn about.

OpenRaster Metadata

Summary: plugin updated to allow round-trip of metadata.

OpenRaster does not yet make any suggestions on how to store metadata. My preference is for OpenRaster to continue to borrow from OpenDocument and use the same format meta.xml file, but that can be complicated. Rather than taking the time to write a whole lot of code and waiting do metadata the best way, I found another way that is good enough, and expedient. I think ordinary users will find it useful -- which is the most important thing -- to be able to round-trip metadata in the OpenRaster format, so despite my reservations about creating code that might discourage developers (myself included) from doing things a better way in future I am choosing the easy option. (In my previous post I mentioned my concern about maintainability, this is what I was alluding to.)

A lot of work has been done over the years to make the The GNU Image Manupilation Program (GIMP) work with existing standards. One of those standards is XMP, the eXtensible Metadata Platform originally created by Adobe Systems, which used the existing Dublin Core metadata standard to create XML packets that can be inserted inside (or alongside) an image file. The existing code creates an XMP packet, let's call it packet.xmp and include it in the OpenRaster file. There's a little more code to the load the information back in and users should be able to go to menu File, Properties and in Properties dialog go to the tab labelled Advanced to view (or set) metadata.

This approach may not be particularly useful to users who want to get their information out into other applications such as MyPaint or Krita (or Drawpile or Lazpaint) but it at least allows them not to lose metadata information when they use OpenRaster. (In the long run other programs will probably want to implement code to read XMP anyway, so I think this is a reasonable compromise, even though I want OpenRaster to stay close to OpenDocument and benefit from being part of that very large community.)

You can get the updated plugin immediately from the OpenRaster plugin gitorious project page.

If you are a developer and want to modify or reuse the code, it is published under the ISC License.

Quickly seeing bird sightings maps on eBird

The local bird community has gotten me using eBird. It's sort of social networking for birders -- you can report sightings, keep track of what birds you've seen where, and see what other people are seeing in your area.

The only problem is the user interface for that last part. The data is all there, but asking a question like "Where in this county have people seen broad-tailed hummingbirds so far this spring?" is a lengthy process, involving clicking through many screens and typing the county name (not even a zip code -- you have to type the name). If you want some region smaller than the county, good luck.

I found myself wanting that so often that I wrote an entry page for it.

My Bird Maps page is meant to be used as a smart bookmark (also known as bookmarklets or keyword bookmarks), so you can type birdmap hummingbird or birdmap golden eagle in your location bar as a quick way of searching for a species. It reads the bird you've typed in, and looks through a list of species, and if there's only one bird that matches, it takes you straight to the eBird map to show you where people have reported the bird so far this year.

If there's more than one match -- for instance, for birdmap hummingbird or birdmap sparrow -- it will show you a list of possible matches, and you can click on one to go to the map.

Like every Javascript project, it was both fun and annoying to write. Though the hardest part wasn't programming; it was getting a list of the nonstandard 4-letter bird codes eBird uses. I had to scrape one of their HTML pages for that. But it was worth it: I'm finding the page quite useful.

How to make a smart bookmark

I think all the major browsers offer smart bookmarks now, but I can only give details for Firefox. But here's a page about using them in Chrome.

Firefox has made it increasingly difficult with every release to make smart bookmarks. There are a few extensions, such as "Add Bookmark Here", which make it a little easier. But without any extensions installed, here's how you do it in Firefox 36:

[Firefox bookmarks dialog] First, go to the birdmap page (or whatever page you want to smart-bookmark) and click on the * button that makes a bookmark. Then click on the = next to the *, and in the menu, choose Show all bookmarks. In the dialog that comes up, find the bookmark you just made (maybe in Unsorted bookmarks?) and click on it.

Click the More button at the bottom of the dialog.
(Click on the image at right for a full-sized screenshot.)
[Firefox bookmarks dialog showing keyword]

Now you should see a Keyword entry under the Tags entry in the lower right of that dialog.

Change the Location to http://shallowsky.com/birdmap.html?bird=%s.

Then give it a Keyword of birdmap (or anything else you want to call it).

Close the dialog.

Now, you should be able to go to your location bar and type:
birdmap common raven or birdmap sparrow and it will take you to my birdmap page. If the bird name specifies just one bird, like common raven, you'll go straight from there to the eBird map. If there are lots of possible matches, as with sparrow, you'll stay on the birdmap page so you can choose which sparrow you want.

How to change the default location

If you're not in Los Alamos, you probably want a way to set your own coordinates. Fortunately, you can; but first you have to get those coordinates.

Here's the fastest way I've found to get coordinates for a region on eBird:

  • Click "Explore a Region"
  • Type in your region and hit Enter
  • Click on the map in the upper right

Then look at the URL: a part of it should look something like this: env.minX=-122.202087&env.minY=36.89291&env.maxX=-121.208778&env.maxY=37.484802 If the map isn't right where you want it, try editing the URL, hitting Enter for each change, and watch the map reload until it points where you want it to. Then copy the four parameters and add them to your smart bookmark, like this: http://shallowsky.com/birdmap.html?bird=%s&minX=-122.202087&minY=36.89291&maxX=-121.208778&maxY=37.484802

Note that all of the the "env." have been removed.

The only catch is that I got my list of 4-letter eBird codes from an eBird page for New Mexico. I haven't found any way of getting the list for the entire US. So if you want a bird that doesn't occur in New Mexico, my page might not find it. If you like birdmap but want to use it in a different state, contact me and tell me which state you need, and I'll add those birds.

April 05, 2015

Life reset

houston-835-1920x1080

My time has come, finally I’m free from all limitations I had in Cuba: I’m in United States of America, and I’m here to stay. It was a long dream and a hard road to get here, even traumatic to the point I will not share, but I’ve made it!

Is literally a life reset, I have to start from scratch in a new world full of strange things to me but also full of opportunities. I really cant express what I’m feeling now, is a mixture of everything: happiness for the dream come true and sadness for the loved ones I left behind, impressed by all the beauty and fear for my future, hope for opportunities and uncertainty for my new path and I can go on.

But after all there’s a subtle feeling ever present, women better know: is like a void, after you give birth, when you reach your hardest goal with nothing planned after that, preventing you from fully enjoy your victory and after all, feeling loneliness.

But all of this is normal, is the same countless people have felt when they leave the nest, and more importantly, is the price we have to pay for finally live our lives as we should.

Thank you everyone!


April 03, 2015

What DAW (or other music software) is the right for me?

Dear lazyweb,

Is Ableton Live the right DAW for me?

I am not a musician or "producer". I don't know how to play any instrument. I don't really know musical notation (and have little desire to learn). But I do have some basic understanding of musical theory, an open mind, and would enjoy making experimental music. The kind I like to listen to. (Or, for the matter, why not more traditional electronic music, even dancey stuff, too.)

(I do listen to more "normal" music, too, not just experimental bleeps and drones;)

Among my favourite composers / artists are the usual suspects like Scelsi, Ligeti, Reich, Glass, Eno, Fripp, Kraftwerk, and contemporary ones like Max Richter, Anna Thorvaldsdottir, Nils Frahm, and of course acts like Circlesquare, Monolake, Plastikman etc. My favourite radio show is WNYC's New Sounds .

The software I have been ogling most is Ableton Live. What I like about it is that it is popular, cool, and has a thriving development, apparently, with relatively frequent updates etc. It seems not likely to go away suddenly. I love the total (?) lack of skeuomorphism in the user interface. (I strongly dislike the trend of faux real hardware look and feel in 3rd-party plugins.)

I certainly have no wish to use the "live" aspect of Live. And, as one of the main points, as I understand it, of Live is to make it easy to launch clips that will be automatically properly aligned in live performance scenarios, will Live be suitable for stuff like having multiple loops or patterns playing simultaneously *without* being synchronised? You know, like emulating Frippertronics, or patterns being slightly out of phase in the style of Reich, or Eno.

And what about microtonal aspects?

So, will Live have too many limitations? Should I look somewhere else? Maybe the generative music scene is what I should be looking into, like Intermorphic's Noatikl? Although with that I would definitely be afraid of the proprietary-software-maker-goes-belly-up scenario.

Maybe even some Open Source software? Csound? Pd? Note that I wouldn't want to get lured into hacking (programming) on some Open Source software, for once I want to be "just a user"...

Oh, and I use a Mac as my desktop environment, so that is also a limiting factor.

Or am I silly to even think I could create something interesting (to myself, that is) without starting by learning how to do stuff "by the book" first?

--tml

April 02, 2015

High Contrast Refresh

One of the major visual updates of the 3.16 release is the high contrast accessible theme. Both the shell and the toolkit have received attention in the HC department. One noteworthy aspect of the theme is the icons. To guarantee some decent amount of contrast of an icon against any background, back in GNOME 2 days, we solved it by “double stroking” every shape. The term double stroke comes from a special case, when a shape that was open, having only an outline, would get an additional inverted color outline. Most of the time it was a white outline of a black silhouette though.

Fuzzy doublestroke PNGs of the old HC theme

In the new world, we actually treat icons the same way we treat text. We can adjust the best contrast by controlling the color at runtime. We do this the same way we’ve done it for symbolic icons, using and embedded CSS stylesheet inside SVG icons. And in fact we are using the very same symbolic icons for the HC variant. You would be right arguing that there are specific needs for high contrast, but in reality majority of the double stroked icons in HC have already been direct conversions of their symbolic counterparts.

Crisp recolorable SVGs of the post 3.16 world

While centralized theme that overrides all application never seemed like a good idea, as the application icon is part of its identity and should be distributed and maintained alongside the actual app, the process to create a high contrast variant of an icon was extremely cumbersome and required quite a bit of effort. With the changes in place for both the toolkit and the shell, it’s far more reasonable to mandate applications to include a symbolic/high contrast variant of its app icon now. I’ll be spending my time transforming the existing double stroke assets into symbolic, but if you are an application author, please look into providing a scalable stencil variant of your app icon as well. Thank you!

Audi Quattro

Winter is definitelly losing its battle and last weekend we had some fun filming with my new folding Xu Gong v2 quad.

Audi Quattro from jimmac on Vimeo.

Making of GNOME 3.14

The release of GNOME 3.14 is slowly approaching, so I stole some time from actual design work and created this little promo to show what goes into a release that probably isn’t immediately obvious (and a large portion of it doesn’t even make it in).

Watch on Youtube

I’d like to thank all the usual suspects that make the wheels spinning, Matthias, Benjamin and Allan in particular. The crown goes to Lapo Calamandrei though, because the amount of work he’s done on Adwaita this cycle will really benefit us in the next couple of releases. Thanks everyone, 3.14 will be a great release*!

* I keep saying that every release, but you simply feel it when you’re forced to log in to your “old” GNOME session rather than jhbuild.

Open Flight Controllers

In my last multirotor themed entry I gave an insight into the magical world of flying cameras. I also gave a bit of a promise to write about the open source flight controllers that are out there. Here’s a few that I had the luck laying my hands on. We’ll start with some acro FCs, with a very differt purpose to the proprietary NAZA I started on. These are meant for fast and acrobatic flying, not for flying your expensive cameras on a stabilized gimbal. Keep in mind, I’m still fairly inexperienced so I don’t want to go into specifics and provide my settings just yet.

Blackout: Potsdam from jimmac on Vimeo.

CC3D

The best thing to be said about CC3D is that while being aimed at acro pilots, it’s relatively newbie friendly. The software is fairly straight forward. Getting the QT app built, set up the radio, tune motors and tweak gains is not going to make your eyes roll in the same way APM’s ground station would (more on that in a future post, maybe). The defaults are reasonable and help you achieve a maiden flight rather than a maiden crash. Updating to the latest firmware over the air is seamless.

Large number of receivers and connection methods is supported. Not only the classic PWM, or the more reasonable “one cable” CPPM method, but even Futaba proprietary SBUS can be used with CC3D. I’ve flown it with Futaba 8J, 14SG and even the Phantom radio (I actually quite like the compact receiver and the sticks on the TX feel good. Maybe it’s just that it’s something I’ve started on). As you’re gonna be flying proximity mostly, the range is not an issue, unless you’re dealing with external interference where a more robust frequency hopping radio would be safer. Without a GPS “break” or even a barometer, losing signal for even a second is fatal. It’s extremely nasty to get a perfect 5.8 video of your unresponsive quad plumetting to the ground :)

Overall a great board and software, and with so much competition, the board price has come down considerably recently. You can get non-genuine boards for around EUR20-25 on ebay. You can learn more about CC3D on openpilot website

Naze32

Sounding very similar to the popular DJI flight controller, this open board is built around the 32-bit STM32 processor. Theoretically it could be used to fly a bit larger kites with features like GPS hold. You’re not limited to the popular quad or hexa setups with it either, you can go really custom with defining your own motor mix. But you’d be stepping in the realm of only a few and I don’t think I’d trust my camera equipment to a platform that hasn’t been so extensively tested.

Initially I didn’t manage to get the cheap acro variant ideal for the minis, so I got the ‘bells & whistles’ edition, only missing the GPS module. The mag compass and air pressure barometer is already on the board, even though I found no use for altitude hold (BARO). You’ll still going to worry about momentum and wind so reaching for those goggles mid flight is still not going to be any less difficult than just having it stabilized.

If you don’t count some youtube videos, there’s not a lot of handholding for the naze32. People assume you have prior experience with similar FCs. There are multiple choices of configuration tools, but I went for the most straight forward one — a Google Chrome/Chromium Baseflight app. No compiling necessary. It’s quite bare bones, which I liked a lot. Reasonably styled few aligned boxes and CLI is way easier to navigate than the non-searchable table with bubblegum styling than what APM provides for example.

One advanced technique that caught my eye, as the typical process is super flimsy and tedious, is ESC calibration. To set the full range of speeds based on your radio, you usually need to make sure to provide power to the RX, and setting the top and bottom throttle leves to each esc. With this FC, you can actually set the throttle levels from the CLI, calibrating all ESCs at the same time. Very clever and super useful.

Another great feature is that you can have up to three setting profiles, depending on the load, wind conditions and the style you’re going for. Typically when flying proximity, between trees and under park benches, you want very responsive controls at the expense of fluid movement. On the other hand if you plan on going up and fast and pretend to be a plane (or a bird), you really need to have that fluid non-jittery movement. It’s not a setting you change mid-flight, using up a channel, but rather something you choose before arming.

To do it, you hold throttle down and yaw to the left and with the elevator/aileron stick you choose the mode. Left is for preset1, up is for preset 2 and right is for preset 3. Going down with the pitch will recalibrate the IMU. It’s good to solder on a buzzer that will help you find a lost craft when you trigger it with a spare channel (it can beep on low voltage too). The same buzzer will beep for selecting profiles as well.

As for actual flying characteristics, the raw rate mode, which is a little tricky to master (and I still have trouble flying 3rd person with it), is very solid. It feels like a lot larger craft, very stable. There’s also quite a feat in the form of HORI mode, where you get a stabilized flight (kite levels itself when you don’t provide controls), but no limit on the angle, so you’re still free to do flips. I can’t say I’ve masted PID tuning to really get the kind of control over the aircraft I would want. Regardless of tweaking the control characteristics, you won’t get a nice fluid video flying HORI or ANGLE mode, as the self leveling will always do a little jitter to compensate for wind or inaccurate gyro readings which seems to not be there when flying rate. Stabilizing the footage in post gets rid of it mostly, but not perfectly:

Minihquad in Deutschland

You can get the plain acro version for about EUR30 which is an incredible value for a solid FC like this. I have a lot of practice ahead to truly get to that fluid fast plane-like flight that drew me into these miniquads. Check some of these masters below:

APM and Sparky next time. Or perhaps you’d be more interested in the video link instead first? Let me know in the comments.

Update: Turns out NAZE32 supports many serial protocols apart form CPPM, such as Futaba SBUS and Graupner SUMD.

Almost half a million downloads per month

In a regular month, without a release, Blender.org serves 430,000 downloads directly from the download page. This doesn’t count all the people who get the sources, doesn’t include release candidates, it doesn’t include the other websites that offer Blender.

The image below can also be viewed as a pdf here.

Also, added below the full 1 year stat of registered downloads!

Screen Shot 2015-04-02 at 17.56.14

Screen Shot 2015-04-03 at 10.24.48

JdLL 2015

Presentation and conferencing

Last week-end, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, the space graciously offered by Ubuntu-FR and Fedora being a tad bit small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features, and hated bugs, and how to report them. A very pleasant experience all-in-all.



On Sunday afternoon, I did a small presentation about GNOME's 15 years. Talking about the upheaval, dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16 new features and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

  • Removed the 17" screen, and replaced it with a 21" widescreen one with speakers builtin. This is useful when we can't setup the projector because of the lack of walls.
  • Upgraded machine to 1GB of RAM, thanks to my hoarding of old parts.
  • Bought a French keyboard and removed the German one (with missing keys), cleaned up the UK one (which still uses IR wireless).
  • Threw away GNOME 3.0 CDs (but kept the sleeves that don't mention the minor version). You'll need to take a sharpie to the small print on the back of the sleeve if you don't fill it with an OpenSUSE CD (we used Fedora 21 DVDs during this event).
  • Triaged the batteries. Office managers, get this cheap tester!
  • The machine's Wi-Fi was unstable, causing hardlocks (please test again if you use a newer version of the kernel/distributions). We tried to get onto the conference network through the wireless router, and installed DD-WRT on it as the vendor firmware didn't allow that.
  • The Nokia N810 and N800 tablets will going to kernel developers that are working on Nokia's old Linux devices and upstreaming drivers.
The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years-old (nearly 8!) and only had 512MB of RAM, after the 1GB upgrade, the machine was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than a brand new one stuttering because of a driver bug, for example).

This makes you wonder what the use for "lightweight" desktop environments is, when a lot of the features are either punted to helpers that GNOME doesn't need or not implemented at all (old CPU and no 3D driver is pretty much the only use case for those).

I'll be putting it in a small SSD into the demo machine, to give it another speed boost. We'll also be needing a new padlock, after an emergency metal saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you'll need some content to demo applications. This is my first attempt at uniting GNOME's demo content for release notes screenshots, with some additional content that's free to re-distribute. The repository will eventually move to gnome.org, obviously.

Thanks

The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

Krita 2.9.2 Released

It’s April! We’ve got another bug-fix and polish release of Krita! Here are the improvements:

  • Make the eraser end of the stylus erase by default
  • Make krita remember the presets chosen for each stylus and stylus end
  • Don’t show the zoom level on-canvas message while loading
  • Fix initialization of the multi-brush axis
  • Add some more kickstarter backers to the about box
  • Fix memory leak when loading presets (and a bunch more memory leaks)
  • Fix crashes related to progress reporting when running a g’mic action
  • Add a toggle button to hide/show the filter selection tree in the filter dialog
  • Fix a focus bug that made it hard to edit e.g. layer names when activating the editor in the docker with a tablet stylus
  • Fix geometry of the toolbox on startup in some cases
  • Fix lock proportions in the free transform tool when locking aspect ratio
  • Add an option to hide the docker titlebars
  • Update the resource manager lists after loading a resource bundle
  • Make the resource manager look for bundles by default
  • Make Krita start faster by only loading images for the references docker when the references docker is visible
  • Fix a crash in the g’mic docker when there’s no preview widget selected
  • On switching images, show the selected layer in the layer box, not the bottom one
  • Show the selected monitor profile in the color management settings page instead of the default one
  • Make the Image Split dialog select the right export file type.
  • Fix saving and loading masks for file layers
  • Make the default MDI background darker
  • Fix loading some older .kra files that contained an image name with a number after a /
  • Don’t crash if the linux colord colormanager cannot find a color-managed output device
  • Clean the code following a number of PVS studio code analyzer warnings
  • Add tooltips to the presets in the popup palette
  • Fix a problem where brush presets in the popup palette were sometimes misaligned
  • Fix loading most types of images in the reference docker on Windows

Most work in the past month has gone into the Qt 5 port (Krita now starts, yay! But it doesn’t work yet…) and most of all the Photoshop-style Layerstyle feature. We’ve got most of the effects implemented, most of the dialog box, too — only the contour selector, the style library selector and the blending mode page is still missing. Loading and saving is still to be done.

But here’s a teaser screenshot of of a layer style on a vector layer (layer styles work on all types of layers, paint layers, vector, group, clone, file…)

 

layerstyle

We hope the loading and saving will be ready in time for 2.9.3!

Note on G’Mic on Windows: Lukas, David and Boudewijn are trying to figure out how to make G’Mic stable on Windows. The 32 bits 2.9.1 Windows build doesn’t include G’Mic at all. The 64 bits build does, and on a large enough system, most of the filters are stable. We’re currently trying different compilers because it seems that most problems are causes by Microsoft Visual Studio 2012 generating buggy code. We’re working like crazy to figure out how to fix this, but please, for now, on 64 bits Windows treat G’Mic as entirely experimental. We still haven’t managed to find a combination of compilers that will let us build Krita and G’Mic and make it work reliable.

If you’ve got experience cross-compiling from Linux to Windows and want to help out: that’s about the last thing we haven’t done. I’ve tried to create a cross-compilation setup, but got stuck on making Qt build with OpenGL support for Windows on Linux. If you can help, please join us on #krita.

Note for Windows users with an Intel GPU: If krita shows a black screen after opening an image, you need to update your Intel graphics drivers. This isn’t a bug in Krita, but in Intel’s OpenGL support. Update to 10.18.14 or later. Most Ultrabooks and the Cintiq Companion can suffer from outdated drivers.

Note on OSX: Krita on OSX is still experimental and not suitable for real work. Many features are still missing. Krita will only work on Mavericks and up.

Downloads

OSX:

Fun with a Surface Pro 3

Micosoft's Surface Pro 3 is, or should be, pretty much a perfect sketchbook device. Nice aspect ratio, high resolution, pen included. Of course, the screen is glossy and the pen isn't a Wacom. But when it was cheap with the keyboard cover thrown in for free I got one anyway -- as a test device for Krita on Windows.

In the end, hardware-wise, it's nice and light, it just looks good, the keyboard doesn't feel too bad, actually, and it's got home, end, page-up and page-down keys, a trick that Dell hasn't managed. The kickstand is rather meh -- it's sharp, hard to open and doesn't feel secure on my lap. Still, not a big problem.

Windows

I can handle Windows these days, even Windows 8. It's OSX that I truly despise to work with. However, when I got home and switched it on, I got my first shock... After the usual Windows setup sequence, not only did the device refuse to install the 70 or so critical updates, any and all https traffic was broken!

Turns out that the setup sequence picked the wrong timezone, and that in turn broke everything! Now, I might be a bit of an idiot, but I knew for sure that I had chosen Netherlands as location, US English as language, US English as keyboard. And besides, if your average Linux distribution can automatically set the right timezone, Windows should be able to, too! Since then, I've used the restore option a couple of times, and it seems to be really hit and miss whether the setup sequence understands this set of choices!

N-Trig

So, I restored to blank and started over. This time it did install everything. So, I installed the N-Trig wintab driver, Krita X64 2.9.0 and gave it a whirl. Everything worked perfectly. Cool!

Then I tried some other software, and when we released 2.9.1, I restored to blank, installed 86 updates and Krita 2.9.1. Now the 64 bits version of Krita didn't work properly with the pen anymore: no pressure sensitivity. That's something others have reported as well: the 32 bits version worked fine... Now N-Trig releases two versions of their drivers: 32 bits and 64 bits, and they claim that you need to use the x86 driver with x86 apps and the x64 driver with x64 apps, but... No matter which driver is installed, x86 Krita and x86 photoshop CS2 work fine. I don't have any other x64 based Wacom-based drawing application to test with, but all of this sounds very suspicious.

Especially when I noticed that on their other wintab driver download page, they will send you to a driver that fits your OS: 32 bit driver for a 32 bit Windows, 64 bit driver for a 64 bit Windows. I'm not totally sure that the N-Trig people actually know what they're doing.

And then I tested Krita 2.9.1 on another Windows 8.1 device with a more modern (1024 levels of pressure) N-Trig pen. With the 64 bit driver both x64 and x86 versions of Krita had pressure sensitivity. But... The driver reports the wrong number of sensitivity levels, in one place it claims 256, like the old pens, in one 1024, so Krita gets confused about that, unless we hack around it.

Still, this must be an issue with the drivers installed on the Surface Pro 3, and I haven't yet managed to make it work again, despite a couple of other wipes and reinstalls. Copying the drivers from the Intel N-Trig laptop to the Surface Pro also doesn't make a difference. Installing the Surface Pro 3 app doesn't make a difference. I guess I'd best mail the N-Trig developers.

For now, it's irritating as heck that I can't run the 64 bits version of Krita on the Surface Pro 3.

Linux

OpenSUSE 13.2 boots on it, and the pen even moves the cursor around. No idea about pressure sensitivity because while the trackpad is fine, the keyboard doesn't work yet, so I couldn't actually install stuff. Google suggests that kernel patches might help there, but it's the pen that I care about, and nobody mentions that.

As a drawing tablet

A good artist can do great stuff with pretty much everything. People can do wonders with an ipad and a capacitive stylus with no pressure sensitivity. I'm not much of an artist. I used to believe I could draw, but that was twenty-five years ago, and is another story besides. But I spent an evening with Krita 2.9.1 and the Surface, to get a feel for how it differs from, e.g. the Wacom Hybrid Companion.

The screen is glossy and very smooth. That's not as nice as the Companion's matte screen, but the high resolution makes up for it. It's really hard to see pixels. But... Every time you touch the screen with the pen, it deforms a little bit and becomes a little bit lighter. That's pretty distracting!

The pen also lags. Not just in Krita, not just when painting, but when hovering over buttons or menus, the cursor is always a bit behind.

There's no parallax, which is really irriting on the Cintiq. There's also no calibration needed, the pen is accurate into the deepest corners. That is pretty awesome, especially since it lets me use the zoom slider with the pen. The pen really feels very accurate. In Krita at least: in Photoshop CS2, it's very hit and miss whether a quick stroke will register.

The tablet itself is nice and light, the form factor pretty much perfect. I can hold it in one hand, draw with the other one, in portrait mode, for quite some time. Try that with a Companion!

The hybrid companion doesn't get hot, though I heard that the Windows companion does. The Surface certainly does get hot! But the heat is located in one place, not the place where I held the tablet, it was only noticeable because I rest my hand on the screen while drawing. Note for other Surface users: disabling flicks in the control center makes life much easier!

For Krita

I have had reports that the virtual keyboard didn't work from a Cintiq Companion user, but it works just fine for me on the Surface. I can enter values in all the dockers, sliders and dialogs in desktop mode.

As for the N-Trig pen, we support pressure sensitivity, but the two buttons are weird things and I don't know yet how to work with them. Which means, no quick-access palette, no quick panning yet.

In any case, we need to do a lot of work on the Sketch gui to make it as usable as, e.g., Art Flow on Android. Heck, we need to do a lot of work on Sketch to bring it up to 2.9! But a good tablet-mode gui is something I really want to work on.

One-antlered stags

[mule deer stag with one antler] This fellow stopped by one evening a few weeks ago. He'd lost one of his antlers (I'd love to find it in the yard, but no luck so far). He wasn't hungry; just wandering, maybe looking for a place to bed down. He didn't seem to mind posing for the camera.

Eventually he wandered down the hill a bit, and a friend joined him. I guess losing one antler at a time isn't all that uncommon for mule deer, though it was the first time I'd seen it. I wonder if their heads feel unbalanced.
[two mule deer stags with one antler each]

Meanwhile, spring has really sprung -- I put a hummingbird feeder out yesterday, and today we got our first customer, a male broad-tailed hummer who seemed quite happy with the fare here. I hope he stays around!

March 31, 2015

Introducing the darktable app store

Today we are happy to announce a big new feature that we will not only ship with the big 2.0 release later this year but also with our next point release, 1.6.4, which is due in about a week: even more darkroom modules!

One of the big strengths of darktable has always been its varied selection of modules to tweak your image. However, that has also been one of the main points of criticism: too much, too many and too complicated to grasp. To make it easier for the user to deal with the flood of tools darktable has had the “more modules” list since many years. It changed its appearance a few times, we added module categories, allowed to select favorite modules, and all of that has proven to be useful. Thus there have always been people that approached us with great new ideas for new modules, especially since we moved to GitHub a while ago with its powerful Pull Request system, yet we couldn't accept many of them. Some were not that great codewise, some didn't really fit our product vision – and then there were some that looked nice and certainly benefited some users, but we felt it wasn't generic enough to justify polluting our module list even more. Of course this was a bad situation, after all these people invested quite some time into providing us with a new feature that we turned down. No one likes to waste their time.

In the default state the new dialog doesn't clutter the gui

In the default state the new dialog doesn't clutter the gui

After initial discussions about this topic at last year's LGM in Leipzig we started working on a solution later that year and now we feel confident to present you the new module store. Think of it as an in-game app store (for all you gamers out there), or Firefox' Add-On system. Instead of bloating the list of modules shipped with darktable you can now easily browse for exciting new features from within the darkroom GUI, installing new modules on the fly (or uninstall them if you don't like the result), and even see previews of the module's effect on the currently opened image. We are certain that you will like this.

Who will also like it are module developers. Writing new image modules has always been quite easy: clone the darktable sources, create one new C file and add it to the CMake system. But that was only the first part, after all you wanted to allow people to use your work in the end. So you either had to convince us to include your module into the official darktable release (with the problems outlined above), or provide a patched version of darktable for people to compile themselves. In theory you could also have provided a binary package or just the module compiled into a shared library for people to just copy to their install directory, but we have never seen anyone taking that route. With our new module system this will become easier. Instead of creating a patched version of darktable you can now make use of our Module Developers Package which contains all the required header files and a CMake stub to quickly compile your module into a shared library that can be used with a stock installation of darktable. And since we will release these files under a LGPL license you could even write non-free modules. Once you are happy you can submit your code for us to review (this step is still manual to prevent malicious code being shipped) and once we approved it everyone can install the module.

Currently there is just the one module in store. More to come!

Currently there is just the one module in store. More to come!

All of the things described until now are implemented already and will be part of the next releases. Once it has proven to be working reliably we also plan to allow developers to make some money with their work as an incentive to attract more and better developers to our community. We are currently evaluating what payment models would work best, at the moment PayPal looks like a strong contender, but we are open for suggestions.

In case you are curious how it's implemented, it is based on the GHNS system that is already used by KDE and others, and might eventually also be merged with the styles you can find on http://dtstyle.net/. On the server side there is a continuous integration system (Jenkins in our case) that recompiles everything whenever something got changed, taking care of the different target architectures and operating systems with their dependencies. And if you don't want to wait for the release just try a development build, the code is merged and ready to be tested. As a first example we moved the new “color reconstruction” module from the regular darktable installation to the store.

PS: Thou shalt not believe what got posted on the Internet on April 1st.

March 30, 2015

Interview with Odysseas Stamoglou

scifi-girl-800

Could you tell us something about yourself?

My name is Odysseas Stamoglou, I am an artist born in Athens, Greece. I am currently based in Vienna, working as a freelancer illustrator for board games and historical covers and comics.

I am also a musician, giving concerts and composing music for film and documentaries.

Do you paint professionally, as a hobby artist, or both?

Professionally, and… both! To me, making pictures is an important way of communication and understanding. I take in my real world experience, and these images come out. So I would say that I became a professional artist because of my need to stay in touch and evolve this essential communication skill.

What genre(s) do you work in?

My favourite themes usually involve things that cannot be photographed. So, science fiction, fantasy and history are the fields I mostly work on.

Whose work inspires you most — who are your role models as an artist?

There are so many amazing artists out there that I would have to make a long list. I am absolutely fascinated by the classic renaissance and romantic painters, and I greatly appreciate the work of many modern illustrators and industrial designers. However, my great inspiration and teacher is real life experience, as this is the source where we all draw from.

What made you try digital painting for the first time?

My illustration teacher introduced me to digital painting, back when I was studying in 2004.

What makes you choose digital over traditional painting?

Speed and no material restrictions would be the obvious answer for me. I have worked a lot with acrylics and watercolors, inks and oils. Although computers are still lacking the freedom, and the organic randomness and the “dirtyness” of natural media, digital painting is the obvious choice for a professional illustrator.

You don’t need to “wait for the paint to dry”, and you can explore your designs, colour combinations and even different versions of your painting for ever. Plus, computers are like a big melting pot where you can throw paintings, drawings, photographs… anything you like to enhance your image. However, more often than not I find myself working traditionally up to a certain point and then finishing the image digitally. Each medium has its strengths and weaknesses, but in the end they are just tools.

PHOOW-Rome

How did you find out about Krita?

I found Krita a couple of years ago, when I was randomly looking what’s new in the open source world.

What was your first impression?

I believe the first version I got my hands on was 2.4 or 2.5. Well, it was obvious that I couldn’t use this for my work, but it looked promising and from that point on I started to keep an eye on the progress of Krita.

What do you love about Krita?

I love the fact that it is open source. The ability to directly communicate with the creators of my painting tool and actively participate in a great and positive community dedicated to making it better, is invaluable.

The enthusiasm and the energy the developers are constantly putting in it, and also the frequency in which they are releasing updates and fixes is simply amazing.

What do you think needs improvement in Krita? Also, anything that you really hate?

Krita has come a long way since I started working with it around two years ago. All the main features are great and constantly refined and improved.

Since last year I am confident enough with this software and I trust it enough to use it for 90% of my professional painting work.

I would like Krita to be more stable (occasional crashes, overal performance) and less distractive (it seems like Krita still thinks that the dockers and the canvas belong to separate worlds. One has to click on the canvas every time in order to trigger any canvas-related  actions.)

But hate? No, I don’t think there is room for hate here.

In your opinion, what sets Krita apart from the other tools that you use?

Well, there are programs out there that emulate watercolors better, or have a better overal performance. But Krita really feels like home to me. I can customize it to my heart’s delight and the toolset of Krita combines many great ideas in one place. To name a few of my favourites:

– The brush engine
– The amazing ruler assistant tool, you guys brought a much needed feature the digital world was lacking until now.
– The file layers are simply ingenious.
– The favourites popup

If you had to pick one favourite of all your work done in Krita so far, what would it be? Why this particular picture?

It would be the SciFi girl! I guess, because it was a quick, spontaneous, simple, funny and meaningful picture that I made during a random evening last year and which I enjoyed very much.

What techniques and brushes did you use in it?

The simplest available in the market, really..A pencil-like brush for a quick sketch and most of the rest was done using a custom made pastel brush. In some minor places I used some selections and airbrushing.

Where can people see more of your work?

They can visit my website: www.odysseus-art.com, and get updates on twitter and facebook.

Anything else you’d like to share?

Thank you for inviting me to this interview. I am very enthusiastic about Krita, and proud to call it my favourite painting tool. You guys rock!

First Krita Book Published in Japan

Today we got mail from Kayoko Matsumoto that the first book about Krita has gone into print!

krita-book-japan

You can get the book from Amazon or Mynavi. Kayoko has promised to send the Krita Foundation a copy, too, and we’re waiting with bated breath for it to arrive. And you can download a sample chapter, too.

March 29, 2015

An API is only as good as its documentation.

Your APIs are only as good as the documentation that comes with them. Invest time in getting docs right. — @rubenv on Twitter

If you are in the business of shipping software, chances are high that you’ll be offering an API to third-party developers. When you do, it’s important to realize that APIs are hard: they don’t have a visible user interface and you can’t know how to use an API just by looking at it.

For an API, it’s all about the documentation. If an API feature is missing from the documentation, it might as well not exist.

Sadly, very few developers enjoy the tedious work of writing documentation. We generally need a nudge to remind us about it.

At Ticketmatic, we promise that anything you can do through the user interface is also available via the API. Ticketing software rarely stands alone: it’s usually integrated with e.g. the website or some planning software. The API is as important as our user interface.

To make sure we consistently document our API properly, we’ve introduced tooling.

Similar to unit tests, you should measure the coverage of your documentation.

After every change, each bit of API endpoint (a method, a parameter, a result field, …) is checked and cross-referenced with the documentation, to make sure a proper description and instructions are present.

The end result is a big documentation coverage report which we consider as important as our unit test results.

Constantly measure and improve the documentation coverage metric.

More than just filling fields

A very important things was pointed out while circulating these thoughts on Twitter.

Shaun McCance (of GNOME documentation fame) correctly remarked:

@rubenv I’ve seen APIs that are 100% documented but still have terrible docs. Coverage is no good if it’s covered in crap. — @shaunm on Twitter

Which is 100% correct. No amount of metrics or tooling will guarantee the quality of the end-result. Keeping quality up is a moral obligation shared by anyone in the team and that can never be replaced with software.

Nevertheless, getting a slight nudge to remind you of your documentation duties never hurts.


Comments | @rubenv on Twitter

March 27, 2015

The World is Wrong!!!

"No shit, the world is wrong! It ain't got a clue! But here, in this one minute video, I will explain what's wrong and how I have discovered the right way."

As soon as you encounter something that can be reduced to the above, it's a pretty fair sign that the author doesn't know what he's talking about. Anyone who thinks they're so unique that they can come up with something nobody else has thought before is, with likelihood bordering on certainty deluded. The world is full of smart people who have encountered the problem before. Any problem.

That means that a video that's been doing the rounds on how "Computer Color is Broken" is a case in point. The author brings us his amazing discovery that linear rgb is better than non-linear, except that everyone who's been working on computer graphics has known all of that, for ages. It's textbook stuff. It's not amazing, it's just the way maths work. The same with the guy who some years ago proved that all graphics applications scale the WRONG way! "And how much did you pay for your expensive graphics software?", he asked. "Eh? You sucker, you got suckered", he effectively said, "but fortunately, here's me to put you right!" It's even the same thing, actually.

Whether it's about color, graphics, or finding the Final Synthesis between Aristotle and Plato, this is my rule of thumb: people who think everone else in the world is wrong, are certainly wrong. (Also, Basque really is not the mother of all languages.)

And when it comes to color blending or image scaling: with Krita you got the choice. Use 16 bit rgb with a linear color profile, and you won't see the artefacts, or don't use it, and get the artefacts you probably were already used to, and were counting on for the effect you're trying to achieve. We've had support for that for a decade now.

Note: I won't link to any of these kookisms. They get enough attention already.

Hide Google's begging (or any other web content) via a Firefox userContent trick

Lately, Google is wasting space at the top of every search with a begging plea to be my default search engine.

[Google begging: Switch your default search engine to Google] Google already is my default search engine -- that's how I got to that page. But if you don't have persistent Google cookies set, you have to see this begging every time you do a search. (Why they think pestering users is the way to get people to switch to them is beyond me.)

Fortunately, in Firefox you can hide the begging with a userContent trick. Find the chrome directory inside your Firefox profile, and edit userContent.css in that directory. (Create a new file with that name if you don't already have one.) Then add this:

#taw { display: none !important; }

Restart Firefox, do a Google search and the begs should be gone.

In case you have any similar pages where there's pointless content getting in the way and you want to hide it: what I did was to right-click inside the begging box and choose Inspect element. That brings up Firefox's DOM inspector. Mouse over various lines in the inspector and watch what gets highlighted in the browser window. Find the element that highlights everything you want to remove -- in this case, it's a div with id="taw". Then you can write CSS to address that: hide it, change its style or whatever you're trying to do.

You can even use Inspect element to remove elements immediately. That won't help you prevent them from showing up later, but it can be wonderful if you need to use a page that has an annoying blinking ad on it, or a mis-designed page that has images covering the content you're trying to read.

March 26, 2015

Skin Retouching with Wavelets on PIXLS.US

Anyone who has been reading here for a little bit knows that I tend to spend most of my skin retouching time with wavelet scales. I've written about it originally here, then revisited it as part of an Open Source Portrait tutorial, and even touched upon the theme one more time (sorry about that - I couldn't resist the “touching” pun).

Because I haven’t possibly beat this horse dead enough yet, I have now also compiled all of those thoughts into a new post over on PIXLS.US that is now pretty much done:

PIXLS.US: Skin Retouching with Wavelet Decompose

Of course, we get another view of the always lovely Mairi before & after (from an older tutorial that some may recognize):


As well as the lovely Nikki before & after:


Even if you've read the other material before this might be worth re-visiting.

Don't forget, Ian Hex has an awesome tutorial on using luminosity masks in darktable, as well as the port of my old digital B&W article! These can all be found at the moment on the Articles page of the site.

The Other Blog

Don't forget that I also have a blog I'm starting up over on PIXLS.US that documents what I'm up to as I build the site and news about new articles and posts as they get published! You can follow the blog on the site here:


There's also an RSS feed there if you use a feed reader (RSS).

Write For PIXLS

I am also entertaining ideas from folks who might like to publish a tutorial or article for the site. If you might be interested feel free to contact me with your idea! Spread the love! :)

Call for Content Blenderart Magazine #47

We are ready to start gathering up tutorials, making of articles and images for Issue # 47 of Blenderart Magazine.

The theme for this issue is What’s your Passion?

Everyone has an area or topic that motivates them to try harder, work longer and push beyond their comfort zone. What is your singular creative joy? Is there an area of Blender you love to explore? A project you want to start and complete? Have you already completed an amazing new project this year? Any challenges you have started or completed? Any projects you completed or started last year that you want to explore further or have led you to new areas of exploration in your art?

What are you working on? We would love to hear about it and cheer you on. We are looking for articles on:

    • Challenges (30 day, year long, organized or personal)

    • New or on going projects

    • Areas of Blender that you want to or are currently exploring

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half completed models and a galley filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P

Articles

Send in your articles to sandra
Subject: “Article submission Issue # 47 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue. The theme of this issue is “New Beginnings”. Please note if the entry does not match with the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue # 47″

Note: Image size should be of 1024x (width) at max.

Last date of submissions May 5, 2015.

Good luck!
Blenderart Team

Blenderart Mag Issue #46 now available

Welcome to Issue #46, “FANtastic FANart

In this issue, we pay tribute to the creative geniuses that inspire us to attempt creative masterpieces of our own. The “FANtastic Fanart” gathered within is sure to inspire you to practice your skills. So settle in with your favorite beverage and check out all the fun goodies we have gathered for you.

Table of Contents:

Modeling Clay Characters

  • Final Inspection
  • Making of DJ. Boyie
  • Back to the 80’s
  • Tribute to Pierre Gilhodes
  • Minas Tirith

And Lot More…

March 25, 2015

Summary of Enabling New Contributors Brainstorm Session

Photo of Video Chat

So today we had a pretty successful brainstorm about enabling new contributors in Fedora! Thank you to everyone who responded my call for volunteers yesterday – we were at max capacity within an hour or two of the post! :) It just goes to show this is a topic a lot of folks are passionate about!

Here is a quick run-down of how it went down:

Video Conference Dance

We tried to use OpenTokRTC but had some technical issues (we were hitting an upper limit and people were getting booted, and some folks could see/hear some but not others. So we moved onto the backup plan – BlueJeans – and that worked decently.

Roleplay Exercise: Pretend You’re A Newbie!

Watch this part of the session starting here!

For about the first 30 minutes, we brainstormed using a technique called Understanding Chain to roleplay as if we were new contributors trying to get started in Fedora and noting all of the issues we would run into. We started thinking about how would we even begin to contribute, and then we started thinking about what barriers we might run up against as we continued on. Each idea / thought / concept got its own “sticky note” (thanks to Ryan Lerch for grabbing some paper and making some large scale stickies,) I would write the note out, Ryan would tack it up, and Stephen would transcribe it into the meeting piratepad.

Photo of the whiteboard with all of the sticky notes taped to it.

Walkthrough of the Design Hubs Concept Thus Far

Watch this part of the session starting here!

Next, I walked everyone through the design hubs concept and full set of mockups. You can read up more on the idea at the original blog post explaining the idea from last year. (Or poke through the mockups on your own.)

Screenshot of video chat: Mo explaining the Design Hubs Concept

Comparing Newbie Issues to Fedora Hubs Offering

Watch this part of the session starting here!

We spent the remainder of our time wakling through the list of newbie issues we’d generated during the first exercise and comparing them to the Fedora Hubs concept. For each issue, we asked these sorts of questions:

  • Is this issue addressed by the Fedora Hubs design? How?
  • Are there enhancements / new features / modifications we could make to the Fedora Hubs design to better address this issue?
  • Does Fedora Hubs relate to this issue at all?

We came up with so many awesome ideas during this part of the discussion. We had ideas inline with the issues that we’d come up with during the first exercise, and we also had random ideas come up that we put in their own little section on the piratepad (the “Idea Parking Lot.”)

Here’s a little sampling of ideas we had:

  • Fedorans with the most cookies are widely helpful figures within Fedora, so maybe their profiles in hubs could be marked with some special thing (a “cookie monster” emblem???) so that new users can find folks with a track record of being helpful more easily. (A problem we’d discussed was new contributors having a hard time tracking down folks to help them.)
  • User hub profiles can serve as the centralized, canonical user profile for them across Fedora. No more outdated info on wiki user pages. No more having to log into FAS to look up information on someone. (A problem we’d discussed was multiple sources for the same info and sometimes irrelvant / outdated information.)
  • The web IRC client we could build into hubs could have a neat affordance of letting you map an IRC nick to a real life name / email address with a hover tool tip thingy. (A problem we’d discussed was difficulty in finding people / meeting people.)
  • Posts to a particular hub on Fedora hubs are really just content aggregated from many different data sources / feeds. If a piece of data goes by that proves to be particularly helpful, the hub admins can “pin” it to a special “Resources” area attached to the hub. So if there’s great tutorials or howtos or general information that is good for group members to know, they can access it on the team resource page. (A problem we’d discussed was bootstrapping newbies and giving them helpful and curated content to get started.)
  • Static information posted to the hub (e.g. basic team metadata, etc.) could have a set “best by” date and some kind of automation could email the hub admins every so often (every 6 months?) and ask them to re-read the info and verify if it’s still good or update it if not. (The problem we’d discussed here was out-of-date wiki pages.)
  • Having a brief ‘intake questionnaire’ for folks creating a new FAS account to get an idea of their interests and to be able to suggest / recommend hubs they might want to follow. (Problem-to-solve: a lot of new contributors join ambassadors and aren’t aware of what other teams exist that could be a good place for them.)

There’s a lot more – you can read through the full piratepad log to see everything we came up with.

Screenshot of video chat discussion

Next Steps

Watch this part of the session starting here!

Here’s the next steps we talked about at the end of the meeting. If you have ideas for others or would like to claim some of these items to work on, please let me know in the comments!

  1. We’re going to have an in-person meetup / hackfest in early June in the Red Hat Westford office. (mizmo will plan agenda, could use help)
  2. We need a prioritized requirements list of all of the features. (mizmo will work on this, but could use help if anybody is interested!)
  3. The Fedora apps team will go through the prioritized requirements list when it’s ready and give items an implementation difficult rating.
  4. We should do some resarch on the OpenSuSE Connect system and how it works, and Elgg, the system they are using for the site. (needs a volunteer!)
  5. We should take a look at the profile design updates to StackExchange and see if there’s any lessons to be learned there for hubs. (mizmo will do this but would love other opinions on it.)
  6. We talked about potentially doing another video chat like this in late April or early May, before the hackfest in June.
  7. MOAR mockups! (mizmo will do, but would love help :))

How to Get Involved / Resources

So we have a few todos listed above that could use a volunteer or that I could use help with. Here’s the places to hang out / the things to read to learn more about this project and to get involved:

Please let us know what you think in the comments! :)

GNOME 3.16 is out!

Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer are now upstream. Work is on-going for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're up to a state where we know what the problems are, not being buried in Bugzilla.

Foundation building

A couple of projects got started that didn't reached maturation yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read Comic books. ePub support is coming!



Grilo saw plenty of activity. The oft requested "properties" page in Totem is closer than ever, so is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.



I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on co-new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

March 24, 2015

How to turn the Chromebook Pixel into a proper developer laptop

Recently I spent about a day installing Fedora 22 + jhbuild on a Chromebook and left it unplugged overnight. The next day I turned it on with a flat battery, grabbed the charger, and the coreboot bios would not let me do the usual ctrl+L boot-to-SeaBIOS trick. I had to download the ChromeOS image to an SD card, reflash the ChromeOS image and thet left me without any of my Fedora workstation I’d so lovingly created the day before. This turned a $1500 laptop with a gorgeous screen into a liability that I couldn’t take anywhere for fear of losing all my work, again. The need to do CTRL+L every time I rebooted was just crazy.

I didn’t give up that easily; I need to test various bits of GNOME on a proper HiDPI screen and having a loan machine sitting in a bag wasn’t going to help anyone. So I reflashed the BIOS, and now have a machine that boots straight into Fedora 22 without any of the other Chrome stuff getting in the way.

Reflashing a BIOS on a Chromebook Pixel isn’t for the feignt of heart, but this is the list of materials you’ll need:

  • Set of watchmakers screwdrivers
  • Thin plastic shim (optional)
  • At least 1Gb USB flash drive
  • An original Chromebook Pixel
  • A BIOS from here for the Pixel
  • A great big dollop of courage

This does involve deleting the entire contents of your Pixel, so back anything up you care about before you start, unless it’s hosted online. I’m also not going to help you if you brick your machine, cateat emptor and all that. So, lets get cracking:

  • Boot chromebook into Recovery Mode (escape+refresh at startup) then do Control+D, then Enter, wait for ~5 mins while the Pixel reflashes itself
  • Power down the machine, remove AC power
  • Remove the rubber pads from the underside of the Pixel, remove all 4 screws
  • Gently remove the adhesive from around the edges, and use the smallest shim or screwdriver you have to release the 4 metal catches from the front and sides. You can leave the glue on the rear as this will form a hinge you can use. Hint: The tabs have to be released inwards, although do be aware there are 4 nice lithium batteries that might kinda explode if you slip and stab them hard with a screwdriver.
  • Remove the BIOS write protect screw AND the copper washer that sits between the USB drives and the power connector. Put it somewhere safe.
  • Gently close the bottom panel, but not enough for the clips to pop in. Turn over the machine and boot it.
  • Do enough of the registration so you can logon. Then logout.
  • Do the CTRL+ALT+[->] (really F2) trick to get to a proper shell and login as the chromos user (no password required). If you try to do it while logged in via the GUI it will not work.
  • On a different computer, format the USB drive as EXT4 and copy the squashfs.img, vmlinuz and initrd.img files there from your nearest Fedora mirror.
  • Also copy the correct firmware file from johnlewis.ie
  • Unmount the USB drive and remove
  • Insert the USB drive in the Pixel and mount it to /mnt
  • Make a backup of the firmware using /usr/sbin/flashrom -r /mnt/backup.rom
  • Flash the new firmware using /usr/sbin/flashrom -w /mnt/the_name_of_firmware.rom
  • IMPORTANT: If there are any warnings or errors you should reflash with the backup; if you reboot now you’ll have a $1500 brick. If you want to go back to the backup copy just use /usr/sbin/flashrom -w /mnt/backup.rom, but lets just assume it went well for now.
  • /sbin/shutdown -h now, then remove power again
  • Re-open the bottom panel, which should be a lot easier this time, and re-insert the BIOS write washer and screw, but don’t over-tighten.
  • Close the bottom panel and insert the clips carefully
  • Insert the 4 screws and tighten carefully, then convince the sticky feet to get back into the holes. You can use a small screwdriver to convince them a little more.
    Power the machine back on and it will automatically boot to the BIOS. Woo! But not done yet.
  • It will by default boot into JELTKA which is “just enough Linux to kexec another”.
  • When it looks like it’s hung, enter “root” then enter and it’ll log into a root prompt.
  • Mount the USB drive into /mnt again
  • Do something like kexec -l /mnt/vmlinuz --initrd=/mnt/initrd.img --append=stage2=hd:/dev/sdb1:/squashfs.img
  • Wait for the Fedora installer to start, then configure a network mirror where you can download packages. You’ll have to set up Wifi before you can download package lists.

This was all done from memory, so feel free to comment if you try it and I’ll fix things up as needed.

Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 24 March 2015

Completed Tickets

Ticket 361: Fedora Reflective Bracelet

This ticket involved a simple design for a reflective bracelet for bike riders to help them be more visible at night. The imprint area was quite small and the ink only one color, so this was fairly simple.

Tickets Open For You to Take!

One of the things we required to join the design team is that you take and complete a ticket. We have one ticket currently open and awaiting you to claim it and contribute some design work for Fedora :):

Discussion

Fedora 22 Supplemental Wallpapers Vote Closes Tomorrow!

Tomorrow (Wednesday, March 25) is the last day to get in your votes for Fedora 22’s supplemental wallpapers! Vote now! (All Fedora contributors are eligible to vote.)

(Oh yeah, don’t forget – You’ll get a special Fedora badge just for voting!)

Fedora 22 Default Wallpaper Plan

A question came up what our plan was with the Fedora 22 wallpaper – Ryan Lerch created the mockups that we shipped / will ship in the alpha and beta and the feedback we’ve got on these is positive thus far so we’ll likely not change direction for Fedora 22’s default wallpaper. The pattern is based on the pattern Ryan designed for the Fedora.next product artwork featured on getfedora.org.

However, it is never too early to think about F23 wallpaper. If you have some ideas to share, please share them on the design team list!

2015 Flock Call for Papers is Open!

Flock is going to be at the Hyatt Regency in Rochester, New York. The dates are August 12 to August 15.

Gnokii proposed that we figure out which design team members are intending to go, and perhaps we could plan out different sessions for a design track. Some of the sessions we talked about:

  • Design Clinic – bring your UI or artwork or unfiled design team ticket to an open “office hours” session with design team members and get feedback / critique / help.
  • Wallpaper Hunt – design team members with cameras could plan a group photoshoot to get nice pictures that could make good wallpapers for F23 (rietcatnor suggested Highland Park as a good potential place to go.
  • Badge Design Workshop – riecatnor is going to propose this talk!

I started a basic wiki page to track the Design Team Flock 2015 presence – add your name if you’re intending to go and your ideas for talk proposals so we can coordinate!

(I will message the design-team list with this idea too!)

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

Enabling New Contributors Brainstorm Session

You (probably don’t, but) may remember an idea I posted about a while back when we were just starting to plan out how to reconfigure Fedora’s websites for Fedora.next. I called the idea “Fedora Hubs.”

Some Backstory

The point behind the idea was to provide a space specifically for Fedora contributors that was separate from the user space, and to make it easier for folks who are non-packager contributors to Fedora to collaborate by providing them explicit tools to do that. Tools for folks working in docs, marketing, design, ambassadors, etc., to help enable those teams and also make it easier for them to bring new contributors on-board. (I’ve onboarded 3 or 4 people in the past 3 months and it still ain’t easy! It’s easy for contributors to forget how convoluted it can be since we all did it once and likely a long time ago.)

Well, anyway, that hubs idea blog post was actually almost a year ago, and while we have a new Fedora project website, we still don’t have a super-solid plan for building out the Fedora hub site, which is meant to be a central place for Fedora contributors to work together:

The elevator pitch is that it’s kind of like a cross between Reddit and Facebook/G+ for Fedora contributors to keep on top of the various projects and teams they’re involved with in Fedora.

There are some initial mockups that you can look through here, and a design team repo with the mockups and sources, but that’s about it, and there hasn’t been a wide or significant amount of discussion about the idea or mockups thus far. Some of the thinking behind what would drive the site is that we could pull in a lot of the data from fedmsg, and for the account-specific stuff we’d make API calls to FAS.

Let’s make it happen?

"Unicorn - 1551"  by j4p4n on openclipart.org. Public Domain.

“Unicorn – 1551″ by j4p4n on openclipart.org. Public Domain.

Soooo…. Hubs isn’t going to magically happen like unicorns, so we probably need to figure out if this is a good approach for enabling new contributors and if so how is it going to work, who is going to work on it, what kind of timeline are we looking at – etc. etc. So I’m thinking we could do a bit of a design thinking / brainstorm session to figure this out. I want to bring together representatives of different teams within Fedora – particularly those teams who could really use a tool like this to collaboate and bring new contributors on board – and have them in this session.

For various reasons, logistically I think Wednesday, March 25 is the best day to do this, so I’m going to send out invites to the following Fedora teams and ask them to send someone to participate. (I realize this is tomorrow – ugh – let’s try anyway.) Let me know if I forgot your team or if you want to participate:

  • Each of the three working groups (for development representation)
  • Infrastructure
  • Websites
  • Marketing
  • Ambassadors
  • Docs
  • Design

I would like to use OpenTokRTC for the meeting, as it’s a FLOSS video chat tool that I’ve used to chat with other Fedorans in the past and it worked pretty well. I think we should have an etherpad too to track the discussion. I’m going to pick a couple of structured brainstorming games (likely from gamestorming.com) to help guide the discussion. It should be fun!

The driving question for this brainstorm session is going to be:

How can we lower the bar for new Fedora contributors to get up and running?

Let me know if this question haunts you too. :)

This is the time we’re going to do this:

  • Wednesday March 25 (tomorrow!) from 14:00-16:00 GMT (10-12 AM US Eastern.)

Since this is short-notice, I am going to run around today and try to personally invite folks to join and try to build a team for this event. If you are interested let me know ASAP!

(‘Wait, what’s the rush?’ you might ask. I’m trying to have a session while Ryan Lerch is still in the US Eastern timezone. We may well end up trying another session for after he’s in the Australian timezone.)


Update

I think we’re just about at the limit of folks we can handle from both the video conferencing pov and the effectiveness of the brainstorm games I have planned. I have one or two open invites I’m hoping to hear back from but otherwise we have full representation here including the Join SIG so we are in good shape :) Thanks Fedora friends for your quick responses!

March 23, 2015

OpenRaster Python Plugin

Thanks to developers Martin Renold and Jon Nordby who generously agreed to relicense the OpenRaster plugin under the Internet Software Consortium (ISC) license (it is a permissive license, it is the license preferred by the OpenBSD project, and also the license used by brushlib from MyPaint). Hopefully other applications will be encouraged to take another look at implementing OpenRaster.

[Edit: The code might possibly also be useful to anyone interested in writing a plugin for other file formats that use ZIP containers or XML. For example XML Paper Specification (XPS) JavaFX (.fxz and the unzipped .fxd), or even OpenDocument]

The code has been tidied to conform to the PEP8 style guide, with only 4 warnings remaining, and they are all concerning long lines of more than 80 characters (E501).

The OpenRaster files are also far tidier. For some bizarre reason the Python developers choose to make things ugly by default, and neglected to include any line breaks in the XML. Thanks to Fredrik Lundh and Effbot.org for the very helpful pretty-printing code. The code has also been changed so that many optional tags are included if and only if they are needed, so if you ever do need to read the raw XML it should be a lot easier.

There isn't much for normal users unfortunately. The currently selected layer is marked in the OpenRaster file, and also if a layer is edit locked. If you are sending files to MyPaint it will correctly select the active layer, and recognize which layers were locked. (No import back yet though.) Unfortunately edit locking (or "Lock pixels") does require version 2.8 so if there is anyone out there stuck on version 2.6 or earlier I'd be interested to learn more, and I will try to adjust the code if I get any feedback. [Edit: No feedback but I fixed it anyway, to keep the plugin compatible with version 2.6.]
I've a few other changes that are almost ready but I'm concerned about compatibility and maintainability so I'm going to take a bit more time before releasing those changes.

The latest code is available from the OpenRaster plugin gitorious project page. [Edit: ... but only until May 2015.]