March 26, 2015

Skin Retouching with Wavelets on PIXLS.US

Anyone who has been reading here for a while knows that I tend to spend most of my skin retouching time with wavelet scales. I originally wrote about it here, then revisited it as part of an Open Source Portrait tutorial, and even touched upon the theme one more time (sorry about that - I couldn't resist the “touching” pun).

Because I possibly haven’t beaten this horse to death enough yet, I have now also compiled all of those thoughts into a new post over on PIXLS.US that is now pretty much done:

PIXLS.US: Skin Retouching with Wavelet Decompose

Of course, we get another view of the always lovely Mairi before & after (from an older tutorial that some may recognize):


As well as the lovely Nikki before & after:


Even if you've read the other material before, this might be worth revisiting.

Don't forget, Ian Hex has an awesome tutorial on using luminosity masks in darktable, as well as the port of my old digital B&W article! These can all be found at the moment on the Articles page of the site.

The Other Blog

Don't forget that I also have a blog over on PIXLS.US that documents what I'm up to as I build the site, along with news about new articles and posts as they get published! You can follow the blog on the site here:


There's also an RSS feed there if you use a feed reader.

Write For PIXLS

I am also entertaining ideas from folks who might like to publish a tutorial or article for the site. If you might be interested feel free to contact me with your idea! Spread the love! :)

Call for Content Blenderart Magazine #47

We are ready to start gathering up tutorials, making-of articles, and images for Issue #47 of Blenderart Magazine.

The theme for this issue is What’s your Passion?

Everyone has an area or topic that motivates them to try harder, work longer, and push beyond their comfort zone. What is your singular creative joy? Is there an area of Blender you love to explore? A project you want to start and complete? Have you already completed an amazing new project this year? Any challenges you have started or completed? Any projects you completed or started last year that you want to explore further, or that have led you to new areas of exploration in your art?

What are you working on? We would love to hear about it and cheer you on. We are looking for articles on:

    • Challenges (30 day, year long, organized or personal)

    • New or on going projects

    • Areas of Blender that you want to or are currently exploring

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P

Articles

Send in your articles to sandra
Subject: “Article submission Issue # 47 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue, “What’s your Passion?”. Please note that if an entry does not match the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue #47”

Note: Images should be at most 1024px wide.

Last date for submissions: May 5, 2015.

Good luck!
Blenderart Team

Blenderart Mag Issue #46 now available

Welcome to Issue #46, “FANtastic FANart”

In this issue, we pay tribute to the creative geniuses that inspire us to attempt creative masterpieces of our own. The “FANtastic Fanart” gathered within is sure to inspire you to practice your skills. So settle in with your favorite beverage and check out all the fun goodies we have gathered for you.

Table of Contents:

  • Modeling Clay Characters
  • Final Inspection
  • Making of DJ. Boyie
  • Back to the 80’s
  • Tribute to Pierre Gilhodes
  • Minas Tirith

And Lot More…

March 25, 2015

Summary of Enabling New Contributors Brainstorm Session

Photo of Video Chat

So today we had a pretty successful brainstorm about enabling new contributors in Fedora! Thank you to everyone who responded to my call for volunteers yesterday – we were at max capacity within an hour or two of the post! :) It just goes to show this is a topic a lot of folks are passionate about!

Here is a quick run-down of how it went:

Video Conference Dance

We tried to use OpenTokRTC but had some technical issues (we were hitting an upper limit and people were getting booted, and some folks could see/hear some participants but not others). So we moved on to the backup plan – BlueJeans – and that worked decently.

Roleplay Exercise: Pretend You’re A Newbie!

Watch this part of the session starting here!

For about the first 30 minutes, we brainstormed using a technique called Understanding Chain to roleplay as if we were new contributors trying to get started in Fedora, noting all of the issues we would run into. We started thinking about how we would even begin to contribute, and then about what barriers we might run up against as we continued on. Each idea / thought / concept got its own “sticky note” (thanks to Ryan Lerch for grabbing some paper and making some large-scale stickies). I would write the note out, Ryan would tack it up, and Stephen would transcribe it into the meeting piratepad.

Photo of the whiteboard with all of the sticky notes taped to it.

Walkthrough of the Design Hubs Concept Thus Far

Watch this part of the session starting here!

Next, I walked everyone through the design hubs concept and full set of mockups. You can read up more on the idea at the original blog post explaining the idea from last year. (Or poke through the mockups on your own.)

Screenshot of video chat: Mo explaining the Design Hubs Concept

Comparing Newbie Issues to Fedora Hubs Offering

Watch this part of the session starting here!

We spent the remainder of our time walking through the list of newbie issues we’d generated during the first exercise and comparing them to the Fedora Hubs concept. For each issue, we asked these sorts of questions:

  • Is this issue addressed by the Fedora Hubs design? How?
  • Are there enhancements / new features / modifications we could make to the Fedora Hubs design to better address this issue?
  • Does Fedora Hubs relate to this issue at all?

We came up with so many awesome ideas during this part of the discussion. We had ideas inline with the issues that we’d come up with during the first exercise, and we also had random ideas come up that we put in their own little section on the piratepad (the “Idea Parking Lot.”)

Here’s a little sampling of ideas we had:

  • Fedorans with the most cookies are widely helpful figures within Fedora, so maybe their profiles in hubs could be marked with some special thing (a “cookie monster” emblem???) so that new users can find folks with a track record of being helpful more easily. (A problem we’d discussed was new contributors having a hard time tracking down folks to help them.)
  • User hub profiles can serve as the centralized, canonical user profile for them across Fedora. No more outdated info on wiki user pages. No more having to log into FAS to look up information on someone. (A problem we’d discussed was multiple sources for the same info and sometimes irrelevant / outdated information.)
  • The web IRC client we could build into hubs could have a neat affordance of letting you map an IRC nick to a real life name / email address with a hover tool tip thingy. (A problem we’d discussed was difficulty in finding people / meeting people.)
  • Posts to a particular hub on Fedora hubs are really just content aggregated from many different data sources / feeds. If a piece of data goes by that proves to be particularly helpful, the hub admins can “pin” it to a special “Resources” area attached to the hub. So if there’s great tutorials or howtos or general information that is good for group members to know, they can access it on the team resource page. (A problem we’d discussed was bootstrapping newbies and giving them helpful and curated content to get started.)
  • Static information posted to the hub (e.g. basic team metadata, etc.) could have a set “best by” date and some kind of automation could email the hub admins every so often (every 6 months?) and ask them to re-read the info and verify if it’s still good or update it if not. (The problem we’d discussed here was out-of-date wiki pages.)
  • Having a brief ‘intake questionnaire’ for folks creating a new FAS account to get an idea of their interests and to be able to suggest / recommend hubs they might want to follow. (Problem-to-solve: a lot of new contributors join ambassadors and aren’t aware of what other teams exist that could be a good place for them.)

There’s a lot more – you can read through the full piratepad log to see everything we came up with.

Screenshot of video chat discussion

Next Steps

Watch this part of the session starting here!

Here are the next steps we talked about at the end of the meeting. If you have ideas for others or would like to claim some of these items to work on, please let me know in the comments!

  1. We’re going to have an in-person meetup / hackfest in early June in the Red Hat Westford office. (mizmo will plan agenda, could use help)
  2. We need a prioritized requirements list of all of the features. (mizmo will work on this, but could use help if anybody is interested!)
  3. The Fedora apps team will go through the prioritized requirements list when it’s ready and give items an implementation difficulty rating.
  4. We should do some research on the OpenSuSE Connect system and how it works, and on Elgg, the system they are using for the site. (needs a volunteer!)
  5. We should take a look at the profile design updates to StackExchange and see if there’s any lessons to be learned there for hubs. (mizmo will do this but would love other opinions on it.)
  6. We talked about potentially doing another video chat like this in late April or early May, before the hackfest in June.
  7. MOAR mockups! (mizmo will do, but would love help :))

How to Get Involved / Resources

So we have a few todos listed above that could use a volunteer or that I could use help with. Here are the places to hang out and the things to read to learn more about this project and to get involved:

Please let us know what you think in the comments! :)

GNOME 3.16 is out!

Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer is now upstream. Work is ongoing for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, and the second got plenty of outdated bugs closed and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're at a state where we know what the problems are, rather than being buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!



Grilo saw plenty of activity. The oft-requested "properties" page in Totem is closer than ever, as is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.



I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on a new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

March 24, 2015

How to turn the Chromebook Pixel into a proper developer laptop

Recently I spent about a day installing Fedora 22 + jhbuild on a Chromebook and left it unplugged overnight. The next day I turned it on with a flat battery, grabbed the charger, and the coreboot BIOS would not let me do the usual Ctrl+L boot-to-SeaBIOS trick. I had to download the ChromeOS image to an SD card and reflash the ChromeOS image, and that left me without any of the Fedora workstation I’d so lovingly created the day before. This turned a $1500 laptop with a gorgeous screen into a liability that I couldn’t take anywhere for fear of losing all my work, again. The need to do Ctrl+L every time I rebooted was just crazy.

I didn’t give up that easily; I need to test various bits of GNOME on a proper HiDPI screen and having a loan machine sitting in a bag wasn’t going to help anyone. So I reflashed the BIOS, and now have a machine that boots straight into Fedora 22 without any of the other Chrome stuff getting in the way.

Reflashing the BIOS on a Chromebook Pixel isn’t for the faint of heart, but this is the list of materials you’ll need:

  • Set of watchmakers screwdrivers
  • Thin plastic shim (optional)
  • At least a 1 GB USB flash drive
  • An original Chromebook Pixel
  • A BIOS from here for the Pixel
  • A great big dollop of courage

This does involve deleting the entire contents of your Pixel, so back up anything you care about before you start, unless it’s hosted online. I’m also not going to help you if you brick your machine; caveat emptor and all that. So, let’s get cracking:

  • Boot the Chromebook into Recovery Mode (Escape+Refresh at startup), then do Ctrl+D, then Enter, and wait ~5 minutes while the Pixel reflashes itself
  • Power down the machine, remove AC power
  • Remove the rubber pads from the underside of the Pixel, remove all 4 screws
  • Gently remove the adhesive from around the edges, and use the smallest shim or screwdriver you have to release the 4 metal catches from the front and sides. You can leave the glue on the rear as this will form a hinge you can use. Hint: The tabs have to be released inwards, although do be aware there are 4 nice lithium batteries that might kinda explode if you slip and stab them hard with a screwdriver.
  • Remove the BIOS write protect screw AND the copper washer that sits between the USB drives and the power connector. Put it somewhere safe.
  • Gently close the bottom panel, but not enough for the clips to pop in. Turn over the machine and boot it.
  • Do enough of the registration so you can logon. Then logout.
  • Do the Ctrl+Alt+[->] (really F2) trick to get to a proper shell and log in as the chronos user (no password required). If you try to do it while logged in via the GUI it will not work.
  • On a different computer, format the USB drive as EXT4 and copy the squashfs.img, vmlinuz and initrd.img files there from your nearest Fedora mirror.
  • Also copy the correct firmware file from johnlewis.ie
  • Unmount the USB drive and remove
  • Insert the USB drive in the Pixel and mount it to /mnt
  • Make a backup of the firmware using /usr/sbin/flashrom -r /mnt/backup.rom
  • Flash the new firmware using /usr/sbin/flashrom -w /mnt/the_name_of_firmware.rom
  • IMPORTANT: If there are any warnings or errors you should reflash with the backup; if you reboot now you’ll have a $1500 brick. If you want to go back to the backup copy just use /usr/sbin/flashrom -w /mnt/backup.rom, but lets just assume it went well for now.
  • /sbin/shutdown -h now, then remove power again
  • Re-open the bottom panel, which should be a lot easier this time, and re-insert the BIOS write-protect washer and screw, but don’t over-tighten.
  • Close the bottom panel and insert the clips carefully
  • Insert the 4 screws and tighten carefully, then convince the sticky feet to get back into the holes. You can use a small screwdriver to convince them a little more.
  • Power the machine back on and it will automatically boot to the BIOS. Woo! But not done yet.
  • It will by default boot into JELTKA which is “just enough Linux to kexec another”.
  • When it looks like it’s hung, type “root” then press Enter and it’ll log into a root prompt.
  • Mount the USB drive into /mnt again
  • Do something like kexec -l /mnt/vmlinuz --initrd=/mnt/initrd.img --append=stage2=hd:/dev/sdb1:/squashfs.img
  • Wait for the Fedora installer to start, then configure a network mirror where you can download packages. You’ll have to set up Wi-Fi before you can download package lists.

This was all done from memory, so feel free to comment if you try it and I’ll fix things up as needed.

Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 24 March 2015

Completed Tickets

Ticket 361: Fedora Reflective Bracelet

This ticket involved a simple design for a reflective bracelet for bike riders to help them be more visible at night. The imprint area was quite small and the ink only one color, so this was fairly simple.

Tickets Open For You to Take!

One of the things we require to join the design team is that you take and complete a ticket. We have one ticket currently open and waiting for you to claim it and contribute some design work for Fedora :):

Discussion

Fedora 22 Supplemental Wallpapers Vote Closes Tomorrow!

Tomorrow (Wednesday, March 25) is the last day to get in your votes for Fedora 22’s supplemental wallpapers! Vote now! (All Fedora contributors are eligible to vote.)

(Oh yeah, don’t forget – You’ll get a special Fedora badge just for voting!)

Fedora 22 Default Wallpaper Plan

A question came up about what our plan was for the Fedora 22 wallpaper – Ryan Lerch created the mockups that we shipped / will ship in the alpha and beta, and the feedback we’ve gotten on these is positive thus far, so we’ll likely not change direction for Fedora 22’s default wallpaper. The pattern is based on the one Ryan designed for the Fedora.next product artwork featured on getfedora.org.

However, it is never too early to think about F23 wallpaper. If you have some ideas to share, please share them on the design team list!

2015 Flock Call for Papers is Open!

Flock is going to be at the Hyatt Regency in Rochester, New York. The dates are August 12 to August 15.

Gnokii proposed that we figure out which design team members are intending to go, and perhaps we could plan out different sessions for a design track. Some of the sessions we talked about:

  • Design Clinic – bring your UI or artwork or unfiled design team ticket to an open “office hours” session with design team members and get feedback / critique / help.
  • Wallpaper Hunt – design team members with cameras could plan a group photoshoot to get nice pictures that could make good wallpapers for F23 (riecatnor suggested Highland Park as a good potential place to go).
  • Badge Design Workshop – riecatnor is going to propose this talk!

I started a basic wiki page to track the Design Team Flock 2015 presence – add your name if you’re intending to go and your ideas for talk proposals so we can coordinate!

(I will message the design-team list with this idea too!)

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

Enabling New Contributors Brainstorm Session

You (probably don’t, but) may remember an idea I posted about a while back when we were just starting to plan out how to reconfigure Fedora’s websites for Fedora.next. I called the idea “Fedora Hubs.”

Some Backstory

The point behind the idea was to provide a space specifically for Fedora contributors that was separate from the user space, and to make it easier for folks who are non-packager contributors to Fedora to collaborate by providing them explicit tools to do that. Tools for folks working in docs, marketing, design, ambassadors, etc., to help enable those teams and also make it easier for them to bring new contributors on board. (I’ve onboarded 3 or 4 people in the past 3 months and it still ain’t easy! It’s easy for contributors to forget how convoluted it can be, since we all did it once and likely a long time ago.)

Well, anyway, that hubs idea blog post was actually almost a year ago, and while we have a new Fedora project website, we still don’t have a super-solid plan for building out the Fedora hub site, which is meant to be a central place for Fedora contributors to work together:

The elevator pitch is that it’s kind of like a cross between Reddit and Facebook/G+ for Fedora contributors to keep on top of the various projects and teams they’re involved with in Fedora.

There are some initial mockups that you can look through here, and a design team repo with the mockups and sources, but that’s about it, and there hasn’t been a wide or significant amount of discussion about the idea or mockups thus far. Some of the thinking behind what would drive the site is that we could pull in a lot of the data from fedmsg, and for the account-specific stuff we’d make API calls to FAS.

Let’s make it happen?

“Unicorn – 1551” by j4p4n on openclipart.org. Public Domain.

Soooo…. Hubs isn’t going to magically happen like unicorns, so we probably need to figure out if this is a good approach for enabling new contributors and, if so, how it is going to work, who is going to work on it, what kind of timeline we are looking at – etc. etc. So I’m thinking we could do a bit of a design thinking / brainstorm session to figure this out. I want to bring together representatives of different teams within Fedora – particularly those teams who could really use a tool like this to collaborate and bring new contributors on board – and have them in this session.

For various reasons, logistically I think Wednesday, March 25 is the best day to do this, so I’m going to send out invites to the following Fedora teams and ask them to send someone to participate. (I realize this is tomorrow – ugh – let’s try anyway.) Let me know if I forgot your team or if you want to participate:

  • Each of the three working groups (for development representation)
  • Infrastructure
  • Websites
  • Marketing
  • Ambassadors
  • Docs
  • Design

I would like to use OpenTokRTC for the meeting, as it’s a FLOSS video chat tool that I’ve used to chat with other Fedorans in the past and it worked pretty well. I think we should have an etherpad too to track the discussion. I’m going to pick a couple of structured brainstorming games (likely from gamestorming.com) to help guide the discussion. It should be fun!

The driving question for this brainstorm session is going to be:

How can we lower the bar for new Fedora contributors to get up and running?

Let me know if this question haunts you too. :)

This is the time we’re going to do this:

  • Wednesday March 25 (tomorrow!) from 14:00-16:00 GMT (10 AM-12 PM US Eastern)

Since this is short-notice, I am going to run around today and try to personally invite folks to join and try to build a team for this event. If you are interested let me know ASAP!

(‘Wait, what’s the rush?’ you might ask. I’m trying to have a session while Ryan Lerch is still in the US Eastern timezone. We may well end up trying another session for after he’s in the Australian timezone.)


Update

I think we’re just about at the limit of folks we can handle from both the video conferencing pov and the effectiveness of the brainstorm games I have planned. I have one or two open invites I’m hoping to hear back from but otherwise we have full representation here including the Join SIG so we are in good shape :) Thanks Fedora friends for your quick responses!

High Contrast Refresh

One of the major visual updates of the 3.16 release is the high contrast accessible theme. Both the shell and the toolkit have received attention in the HC department. One noteworthy aspect of the theme is the icons. To guarantee a decent amount of contrast for an icon against any background, back in the GNOME 2 days we solved it by “double stroking” every shape. The term double stroke comes from a special case, when a shape that was open, having only an outline, would get an additional inverted-color outline. Most of the time it was a white outline of a black silhouette, though.

Fuzzy doublestroke PNGs of the old HC theme

In the new world, we actually treat icons the same way we treat text. We can adjust for the best contrast by controlling the color at runtime. We do this the same way we’ve done it for symbolic icons, using an embedded CSS stylesheet inside the SVG icons. And in fact we are using the very same symbolic icons for the HC variant. You would be right to argue that there are specific needs for high contrast, but in reality the majority of the double-stroked icons in HC were already direct conversions of their symbolic counterparts.

Crisp recolorable SVGs of the post 3.16 world

While a centralized theme that overrides all applications never seemed like a good idea, as the application icon is part of its identity and should be distributed and maintained alongside the actual app, the process to create a high contrast variant of an icon was extremely cumbersome and required quite a bit of effort. With the changes in place for both the toolkit and the shell, it’s far more reasonable to ask applications to include a symbolic/high contrast variant of their app icon now. I’ll be spending my time transforming the existing double-stroke assets into symbolic, but if you are an application author, please look into providing a scalable stencil variant of your app icon as well. Thank you!

March 23, 2015

OpenRaster Python Plugin

Thanks to developers Martin Renold and Jon Nordby, who generously agreed to relicense the OpenRaster plugin under the Internet Software Consortium (ISC) license (a permissive license, the license preferred by the OpenBSD project, and also the license used by brushlib from MyPaint). Hopefully other applications will be encouraged to take another look at implementing OpenRaster.

The code has been tidied to conform to the PEP8 style guide, with only 4 warnings remaining, all concerning long lines of more than 80 characters (E501).
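For anyone curious what an E501 check amounts to, it is easy to reproduce by hand; a minimal sketch (not the plugin's actual tooling, which would be pycodestyle/pep8):

```python
def long_lines(source, limit=79):
    """Return (line_number, length) pairs for lines longer than `limit`,
    mirroring what the PEP8 checker reports as E501."""
    return [(n, len(line))
            for n, line in enumerate(source.splitlines(), start=1)
            if len(line) > limit]

code = "short line\n" + "x" * 100 + "\nanother short line\n"
print(long_lines(code))
# → [(2, 100)]
```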

The OpenRaster files are also far tidier. For some bizarre reason the Python developers chose to make things ugly by default and neglected to include any line breaks in the XML. Thanks to Fredrik Lundh and Effbot.org for the very helpful pretty-printing code. The code has also been changed so that many optional tags are included if and only if they are needed, so if you ever do need to read the raw XML it should be a lot easier.
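The pretty-printing fix is essentially the classic Effbot `indent()` recipe; a self-contained sketch of the idea using Python's standard ElementTree (a simplified illustration, not the plugin's exact code), shown on a made-up OpenRaster-style stack:

```python
import xml.etree.ElementTree as ET

def indent(elem, level=0):
    """Recursively add line breaks and indentation to an ElementTree,
    in the spirit of the Effbot pretty-printing recipe."""
    pad = "\n" + "  " * level
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = pad + "  "
        for child in elem:
            indent(child, level + 1)
            child.tail = pad + "  "
        elem[-1].tail = pad  # last child closes back at the parent's level
    elif level and (not elem.tail or not elem.tail.strip()):
        elem.tail = pad

image = ET.fromstring(
    '<image w="64" h="64"><stack><layer name="a"/><layer name="b"/></stack></image>')
indent(image)
print(ET.tostring(image, encoding="unicode"))
```

(Python 3.9 later gained a built-in `xml.etree.ElementTree.indent()` that does the same job, but it did not exist in 2015.)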

There isn't much for normal users, unfortunately. The currently selected layer is marked in the OpenRaster file, and so is whether a layer is edit-locked. If you are sending files to MyPaint it will correctly select the active layer and recognize which layers were locked. (No import back yet, though.) Unfortunately edit locking (or "Lock pixels") does require version 2.8, so if there is anyone out there stuck on version 2.6 or earlier I'd be interested to learn more, and I will try to adjust the code if I get any feedback.
I've a few other changes that are almost ready, but I'm concerned about compatibility and maintainability so I'm going to take a bit more time before releasing those changes.

The latest code is available from the OpenRaster plugin gitorious project page.

WebKitGTK+ 2.8.0

We are excited and proud to announce WebKitGTK+ 2.8.0: your favorite web rendering engine, now faster, even more stable, and with a bunch of new features and improvements.

Gestures

Touch support is one of the most important features missing since WebKitGTK+ 2.0.0. Thanks to the GTK+ gestures API, it’s now more pleasant to use a WebKitWebView on a touch screen. For now only the basic gestures are implemented: pan (for scrolling by dragging from any point of the WebView), tap (handling clicks with the finger), and zoom (for zooming in/out with two fingers). We plan to add more touch enhancements like kinetic scrolling, overshoot feedback animation, text selections, long press, etc. in future versions.

HTML5 Notifications

notifications

Notifications are transparently supported by WebKitGTK+ now, using libnotify by default. The default implementation can be overridden by applications to use their own notifications system, or simply to disable notifications.

WebView background color

There’s new API to set the base background color of a WebKitWebView. The given color is used to fill the web view before the actual contents are rendered. This will not have any visible effect if the web page contents set a background color, of course. If the web view’s parent window has an RGBA visual, we can even have transparent colors.

webkitgtk-2.8-bgcolor

A new WebKitSnapshotOptions flag has also been added to make it possible to take web view snapshots over a transparent surface, instead of filling the surface with the default background color (opaque white).

User script messages

The communication between the UI process and the Web Extensions is something that we have always left to the users, so that everybody can use their own IPC mechanism. Epiphany and most of the apps use D-Bus for this, and it works perfectly. However, D-Bus is often too much for simple cases where there are only a few messages sent from the Web Extension to the UI process. User script messages make these cases a lot easier to implement and can be used from JavaScript code or using the GObject DOM bindings.

Let’s see how it works with a very simple example:

In the UI process, we register a script message handler using the WebKitUserContentManager and connect to the “script-message-received” signal for the given handler:

webkit_user_content_manager_register_script_message_handler (user_content, 
                                                             "foo");
g_signal_connect (user_content, "script-message-received::foo",
                  G_CALLBACK (foo_message_received_cb), NULL);

Script messages are received in the UI process as a WebKitJavascriptResult:

static void
foo_message_received_cb (WebKitUserContentManager *manager,
                         WebKitJavascriptResult *message,
                         gpointer user_data)
{
        char *message_str;

        message_str = get_js_result_as_string (message);
        g_print ("Script message received for handler foo: %s\n", message_str);
        g_free (message_str);
}

Sending a message from the web process to the UI process using JavaScript is very easy:

window.webkit.messageHandlers.foo.postMessage("bar");

That will send the message “bar” to the registered foo script message handler. It’s not limited to strings: we can pass any JavaScript value to postMessage() that can be serialized. There’s also a convenient API to send script messages in the GObject DOM bindings API:

webkit_dom_dom_window_webkit_message_handlers_post_message (dom_window, 
                                                            "foo", "bar");


Who is playing audio?

WebKitWebView now has a boolean read-only property, is-playing-audio, that is set to TRUE when the web view is playing audio (even if it’s a video) and to FALSE when the audio is stopped. Browsers can use this to provide visual feedback about which tab is playing audio; Epiphany already does that :-)

ephy-is-playing-audio

HTML5 color input

The color input element is now supported by default, so instead of rendering a text field where the color has to be typed manually as a hexadecimal color code, WebKit now renders a color button that, when clicked, shows a GTK color chooser dialog. As usual, the public API allows you to override the default implementation to use your own color chooser. MiniBrowser uses a popover, for example.

mb-color-input-popover
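Overriding the default color chooser is done by handling the new WebKitWebView::run-color-chooser signal. A sketch assuming the 2.8 API (the actual chooser UI is omitted; here a fixed color is returned immediately):

```c
#include <webkit2/webkit2.h>

/* Claim the color chooser request and resolve it with a fixed color.
 * A real handler would show a custom chooser (e.g. a popover) and call
 * these two functions once the user has picked a color. */
static gboolean
run_color_chooser_cb (WebKitWebView             *web_view,
                      WebKitColorChooserRequest *request,
                      gpointer                   user_data)
{
        GdkRGBA rgba = { 1.0, 0.0, 0.0, 1.0 };

        webkit_color_chooser_request_set_rgba (request, &rgba);
        webkit_color_chooser_request_finish (request);

        return TRUE; /* TRUE means we handled the request ourselves */
}

/* ... at setup time: */
g_signal_connect (web_view, "run-color-chooser",
                  G_CALLBACK (run_color_chooser_cb), NULL);
```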

APNG

APNG (Animated PNG) is a PNG extension for creating animated PNGs, similar to GIF but much better, supporting 24-bit images and transparency. Since 2.8, WebKitGTK+ can render APNG files. You can check how it works with the Mozilla demos.

webkitgtk-2.8-apng

SSL

The POODLE vulnerability fix introduced compatibility problems with some websites when establishing the SSL connection. Those problems were actually server-side issues, servers incorrectly rejecting SSL 3.0 record packet versions, but they could be worked around in WebKitGTK+.

WebKitGTK+ already provided a WebKitWebView signal to notify about TLS errors when loading, but only for the connection of the main resource in the main frame. However, it’s still possible that subresources fail due to TLS errors when using a connection different from the main resource’s. WebKitGTK+ 2.8 gained the WebKitWebResource::failed-with-tls-errors signal, emitted when a subresource load fails because of an invalid certificate.

Ciphersuites based on RC4 are now disallowed when performing the TLS negotiation, because RC4 is no longer considered secure.

Performance: bmalloc and concurrent JIT

bmalloc is a new memory allocator added to WebKit to replace TCMalloc. Apple had already been using it in the Mac and iOS ports for some time with very good results, but it needed some tweaks to work on Linux. WebKitGTK+ 2.8 now also uses bmalloc, which drastically improved overall performance.

Concurrent JIT was not enabled in the GTK (and EFL) ports for no apparent reason. Enabling it also had an amazing impact on performance.

Both performance improvements were very noticeable in the performance bot:

webkitgtk-2.8-perf

 

The first jump on 11th Feb corresponds to the bmalloc switch, while the other jump on 25th Feb is when concurrent JIT was enabled.

Plans for 2.10

WebKitGTK+ 2.8 is an awesome release, but the plans for 2.10 are quite promising.

  • More security: mixed content will be blocked by default for most resource types, and new API will be provided for managing mixed content.
  • Sandboxing: seccomp filters will be used in the different secondary processes.
  • More performance: FTL will be enabled in JavaScriptCore by default.
  • Even more performance: this time on the graphics side, by using the threaded compositor.
  • Plugin blocking API: new API to provide full control over the plugin loading process, allowing plugins to be blocked/unblocked individually.
  • Implementation of the Database process: to bring back IndexedDB support.
  • Editing API: a full editing API to allow using a WebView in editable mode with all editing capabilities.

March 22, 2015

My FreeCAD talk at FOSDEM 2015

This is a video recording of the talk I gave at FOSDEM this year. The PDF slides are here. Enjoy! FreeCAD talk at FOSDEM 2015 from Yorik van Havre on Vimeo

March 21, 2015

Tumblr showcase blog: Made with MyPaint

Check out our showcase blog on Tumblr, Made with MyPaint! It reblogs all the best new art on Tumblr where our little program was used.

Screengrab of the made-with-mypaint blog on tumblr

Go follow it now! It’s curated by one of our own, and we’d love suggestions for awesome SFW art you’d like to include: just ask via the blog.

March 20, 2015

"GNOME à 15 ans" at the JdLL in Lyon



Next weekend, I'll be giving a short presentation on GNOME's fifteen years at the JdLL.

If the delivery gods are merciful, GNOME should also have a presence in the associations village.

Call for submissions: Libre Graphics magazine 2.4

cfs_lgmag24

Issue 2.4: Capture

Data capture sounds like a thoroughly dispassionate topic. We collect information from peripherals attached to computers, turning keystrokes into characters, turning clicks into actions, collecting video, audio and images of varying quality and fidelity. Capture in this sense is a young word, devised in the latter half of the twentieth century. For the four hundred years previous, the word suggested something with far higher stakes, something more passionate and visceral. To capture was to seize, to take, like the capture of a criminal or of a treasure trove. Computation has rendered capture routine and safe.

But capture is neither simply an act of forcible collection nor one of technical routine. The sense of capture we would like to approach in this issue is gentler, more evocative. Issue 2.4 of Libre Graphics magazine, the last in volume 2, looks at capture as the act of encompassing, emulating and encapsulating difficult things, subtle qualities. Routinely, we capture with keyboards, mice, cameras, audio recorders, scanners, browsing histories, keyloggers. We might capture a fleeting expression in a photo, or a personal history in an audio recording. Our methods of data capture, though they may seem commonplace at first glance, offer opportunities to catch moments.

We’re looking for work, both visual and textual, exploring the concept of capture, as it relates to or is done with F/LOSS art and design. All kinds of capture, metaphorical or literal, are welcome. Whether it’s a treatise on the politics of photo capture in public places, a series of photos taken using novel F/LOSS methods, or documentation of a homebrew 3D scanner, any riff on the idea of capture is invited. We encourage submissions for articles, showcases, interviews and anything else you might suggest. Proposals for submissions (no need to send us the completed work right away) can be sent to submissions@libregraphicsmag.com. The deadline for submissions is May 11th, 2015.

Capture is the fourth and final issue in volume two of Libre Graphics magazine. Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses.

March 19, 2015

Hints on migrating Google Code to GitHub

Google Code is shutting down. They've sent out notices to all project owners suggesting they migrate projects to other hosting services.

I moved all my personal projects to GitHub years ago, back when Google Code still didn't support git. But I'm co-owner on another project that was still hosted there, and I volunteered to migrate it. I remembered that being very easy back when I moved my personal projects: GitHub had a one-click option to import from Google Code. I assumed (I'm sure you know what that stands for) that it would be just as easy now.

Nope. Turns out GitHub no longer has any way to import from Google Code: it tells you it can't find a repository there when you give it the address to Google's SVN repository.

Google's announcement said they were providing an exporter to GitHub. So I tried that next. I had the new repository ready on GitHub -- under the owner's account, not mine -- and I expected Google's exporter to ask me for the repository.

Not so. As soon as I gave it my OAuth credentials, it immediately created a new repository on GitHub under my name, using the name we had used on Google Code (not the right name, since Google Code project names have to be globally unique while GitHub names don't).

So I had to wait for the export to finish; then, on GitHub, I went to our real repository, and did an import there from the new repository Google had created under my name. I have no idea how long that took: GitHub's importer said it would email me when the import was finished, but it didn't, so I waited several hours and decided it was probably finished. Then I deleted the intermediate repository.

That worked fine, despite being a bit circuitous, and we're up and running on GitHub now.

If you want to move your Google Code repository to GitHub without the intermediate step of making a temporary repository, or if you don't want to give Google OAuth access to your GitHub account, here are some instructions (which I haven't tested) on how to do the import via a local copy of the repo on your own machine, rather than going directly from Google to GitHub: krishnanand's steps for migrating Google Code to GitHub
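For reference, such a local-copy migration typically boils down to something like the following (untested, with placeholder project and account names; this mirrors the general shape of the linked instructions):

```shell
# Mirror the Google Code SVN history into a local git repository,
# then push it to an empty repository created on GitHub beforehand.
git svn clone http://yourproject.googlecode.com/svn/trunk yourproject
cd yourproject
git remote add origin git@github.com:youraccount/yourproject.git
git push -u origin master
```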

Help Making a Krita Master Class Possible!

The Belgium Blender User Group is currently holding a crowdfunding campaign to make it possible to organize four master classes about 3D and digital art in Brussels. Four internationally renowned artists, David Revoy, Sarah Laufer, François Gastaldo and François Grassard, will teach in depth about creating art using free graphics software: Krita and Blender.

David Revoy will be teaching Krita, with a focus on concept art and the challenges of digital painting — and he’ll introduce the new features we just released with Krita 2.9! Sarah Laufer founded her own animation studio, regularly gives Blender courses in San Jose, and is now, of course, in the Netherlands for Project Gooseberry. She will focus on animating characters. François Gastaldo is an Open Shading Language expert and that’s the topic of his master class, while François Grassard from University Paris-8 has led the transition to free tools: Krita, Blender, Natron. He will talk about his experiences, but also about camera tracking, 3D integration and particle systems.

The organizers are committed to publishing videos afterwards. The Master Classes will be given in French, but the intention is to add subtitles in English.

The funding is meant to defray the travel expenses of the four speakers: if the campaign goes over budget, the surplus will be divided between the Krita Foundation and the Blender Foundation.

March 17, 2015

Announce: Entangle “Charm” release 0.7.0 – an app for tethered camera control & capture

I am pleased to announce a new release 0.7.0 of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

The main features introduced in this release are a brand new logo, a plugin for automated capture of image sequences and the start of a help manual. The full set of changes is:

  • Require GLib >= 2.36
  • Import new logo design
  • Switch to using zanata.org for translations
  • Set default window icon
  • Introduce an initial help manual via yelp
  • Use shared library for core engine to ensure all symbols are exported to plugins
  • Add framework for scripting capture operations
  • Workaround camera busy problems with Nikon cameras
  • Add a plugin for repeated capture sequences
  • Replace progress bar with spinner icon

The Entangle project has a bit of a quantum physics theme in its application name and release code names, so the primary inspiration for the new logo was the interference patterns formed by (electromagnetic) waves. As well as being an alternate representation of an interference pattern, the connecting filaments can also be seen as representing the (USB) cable connecting camera & computer. The screenshot of the about dialog shows the new logo used in the application:

Logo

Introducing ColorHug ALS

Ambient light sensors let us change the laptop panel brightness so that you can still see your screen when it’s sunny outside, but we can dim it when the ambient room light level is lower to save power.

colorhug-als1-large

I’ve spent a bit of time over the last few months designing a small OpenHardware USB device that acts as an ambient light sensor. It’s basically an uncalibrated ColorHug1 design with a less powerful processor, but it speaks a subset of the same protocol, so all the firmware update and test tools just work out of the box.

colorhug-als2-large

The sensor itself is a very small (12x22mm) printed circuit board that inserts directly into a spare USB socket. It only sticks out about 9mm from the edge of the laptop as most of the PCB actually gets pushed into the USB slot.

colorhug-als-pcb-large

ColorHugALS can currently control the backlight when running the colorhug-backlight utility. The Up/Down header buttons do the same as the hardware BrightnessUp and BrightnessDown keys. You can still set the absolute backlight level, so you’re in control of the absolute level right now, with the ALS modifying the level either side of what you just set over the coming minutes. The brightness is modified using an exponential moving average, which makes the brightness changes smooth and unnoticeable on hardware with enough brightness levels.
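The exponential moving average can be sketched in a few lines (a generic illustration, not the actual colorhug-backlight code; the smoothing factor alpha is an arbitrary choice for this example):

```python
def smooth_brightness(samples, current, alpha=0.2):
    """Exponential moving average over ambient-light readings.

    A smaller alpha makes the backlight follow the sensor more
    slowly and smoothly.
    """
    level = current
    for sample in samples:
        level = alpha * sample + (1 - alpha) * level
    return level

# A sudden jump in ambient light is approached gradually:
# 50 -> 60.0 -> 68.0 -> 74.4
print(round(smooth_brightness([100, 100, 100], 50), 2))
```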

colorhug-backlight-large

We also use the brightness value at startup as what you consider “normal”, so the algorithm tries to stay out of the way. When we’ve got some defaults that work well and have been tested, the aim is to push this into gnome-control-center and gnome-settings-daemon for GNOME 3.18 so that no additional software is required.

I’ve got 42 devices in stock now. Buy one here!

March 16, 2015

Krita 2.9.1 Released

The first bugfix release for Krita 2.9 is out! There are now builds for Windows, OSX and CentOS 6 available. While bug fixing is going on unabated, Dmitry has started working on the Photoshop layer style Kickstarter feature, too: drop shadows already work, and the rest of the layer styles are coming. The goal is to have this feature done for Krita 2.9.2, which should be out next month. And we’re working on a new Kickstarter project!

  • Fix the outline cursor on CentOS 6.5
  • Update G’Mic to the very latest version (but the problems on Windows are still not resolved)
  • Improve the layout of the filter layer/mask dialog’s filter selector
  • Fix the layout of the pattern selector for fill layers
  • Remove the dependency on QtUiTools, so the layer metadata editors work even when the installed Qt version differs from the one Krita was built against
  • Fix a bug that happened when switching between workspaces
  • Fix bug 339357: the time dynamic didn’t start reliably
  • Fix bug 344862: a crash when opening a new view with a tablet stylus
  • Fix bug 344884: a crash when selecting too small a scale for a brush texture
  • Fix bug 344790: don’t crash when resizing a brush while drawing
  • Fix setting the toolbox to only one icon wide
  • Fix bug 344478: random crash when using liquify
  • Fix bug 344346: Fix artefacts in fill layers when too many parallel updates happened
  • Fix bug 184746: merging two vector layers now creates a vector layer instead of rendering the vectors to pixels
  • Add an option to disable the on-canvas notification messages (for some people, they slow down drawing)
  • Fix bug 344243: make the preset editor visible in all circumstances

Note on G’Mic on Windows: Lukas, David and Boudewijn are trying to figure out how to make G’Mic stable on Windows. The 32-bit 2.9.1 Windows build doesn’t include G’Mic at all. The 64-bit build does, and on a large enough system, most of the filters are stable. We’re currently trying different compilers because it seems that most problems are caused by Microsoft Visual Studio 2012 generating buggy code. We’re working like crazy to figure out how to fix this, but please, for now, on 64-bit Windows treat G’Mic as entirely experimental.

Note for Windows users with an Intel graphics board: If krita shows a black screen after opening an image, you need to update your Intel graphics drivers. This isn’t a bug in Krita, but in Intel’s OpenGL support. Update to 10.18.14 or later. Most Ultrabooks and the Cintiq Companion can suffer from outdated drivers.

Note on OSX: Krita on OSX is still experimental and not suitable for real work. Many features are still missing. Krita will only work on Mavericks and up.

Downloads

OSX:

Interview with Abbigail Ward

Mountains by Abbigail Ward

Would you like to tell us something about yourself?

Hi, my name is Abbigail Ward. I am a published illustrator and fine art student.

Do you paint professionally or as a hobby artist?

I have been drawing as a hobby since I was little but just started doing professional art work in the past couple of years. My first published book Monster Parade, written by Gregory Moss, came out in January, so that’s exciting! Besides that I have been doing small projects like character portraits and album covers.

When and how did you end up trying digital painting for the first time?

I first started digital painting around 9 years ago when my mother bought me a small Wacom tablet for Christmas. I loved that tablet, still have it in fact! Though I use a different tablet now. If I remember correctly, I believe it was digital art on elfwood.com that first made me want to try digital painting. I’ve always loved fantasy art.

What is it that makes you choose digital over traditional painting?

I wouldn’t say that I choose digital over traditional. I am actually going to college for fine arts at the moment and I love both. I will say that I find digital to be a better fit for my illustration work. It’s cheaper and faster and tends to translate better for printing. I also like the experimenting that I can do painting digitally. If I learn how to do something traditionally, I like to see if I can imitate that same technique on the computer. For instance I made a tutorial on how to make a drawing that looks like a pencil drawing. That was fun.

How did you first find out about open source communities? What is your opinion about them?

I can’t say exactly how I first heard of them. But I think they are wonderful! While I have used some FOSS for a few years now, I am still new to learning about the communities themselves. I wish I could do more to contribute, myself, but I don’t know what I could do.

Have you worked for any FOSS project or contributed in some way?

I haven’t contributed to any FOSS projects though I do try to spread the word about the programs I use and have started trying to make tutorials on how I make my art using them.

How did you find out about Krita?

I found out about Krita through David Revoy and looking for alternatives to Corel Painter.

What was your first impression?

Since it was a few years ago it was slow for me since I was using Krita on an old Windows laptop. Even then I loved it.

What do you love about Krita?

I feel like Krita is the closest fit to what I want in a program. It lets me imitate traditional media or have a more digital look without having to switch programs. A lot of the new features released in 2.9 make it even more efficient.

What do you think needs improvement in Krita? Also, anything that you really hate?

Hrm, most of what I would want to see improved has already been listed as future goals. The main improvement I would want to see would probably be the text box feature. That way I could edit the text without having to go to a different program.

In your opinion, what sets Krita apart from the other tools that you use?

Krita really is more focused on creating images from scratch, without being chained to imitating traditional media as closely as possible. I’ve tried quite a few programs, but Krita is the program that works the best for me, maybe because I like both traditional and digital media.

If you had to pick one favourite of all your work done in Krita so far, what would it be?

I can’t really pick a favourite picture. But I have plenty of pictures made with Krita in my gallery. I think if I had to choose one picture to share it might be “Mountains”, since it’s my newest finished work.

What brushes did you use in it?

For that one I used mostly the bristle, knife/flat brushes, and added a canvas texture.

Would you like to share it with our site visitors?

Anyone is free to use it since I uploaded it with a Creative Commons license.

Anything else you’d like to share?

Thanks for being awesome!

March 14, 2015

Making a customized Firefox search plug-in

It's getting so that I dread Firefox's roughly weekly "There's a new version -- do you want to upgrade?" With every new upgrade, another new crucial feature I use every day disappears and I have to spend hours looking for a workaround.

Last week, upgrading to Firefox 36.0.1, it was keyword search: the feature where, if I type something in the location bar that isn't a URL, Firefox would instead search using the search URL specified in the "keyword.URL" preference.

In my case, I use Google but I try to turn off the autocomplete feature, which I find distracting and unhelpful when typing new search terms. (I say "try to" because complete=0 only works sporadically.) I also add the prefix allintext: to tell Google that I only want to see pages that contain my search term. (Why that isn't the default is anybody's guess.) So I set keyword.URL to: http://www.google.com/search?complete=0&q=allintext%3A+ (%3A is URL code for the colon character).

But after "up"grading to 36.0.1, search terms I typed in the location bar took me to Yahoo search. I guess Yahoo is paying Mozilla more than Google is now.

Now, Firefox has a Search tab under Edit->Preferences -- but that just gives you a list of standard search engines' default searches. It would let me use Google, but not with my preferred options.

If you follow the long discussions in bugzilla, there are a lot of people patting each other on the back about how much easier the preferences window is, with no discussion of how to specify custom searches except vague references to "search plugins". So how do these search plugins work, and how do you make one?

Fortunately a friend had a plugin installed, acquired from who knows where. It turns out that what you need is an XML file inside a directory called searchplugins in your profile directory. (If you're not sure where your profile lives, see Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it should lead you to your profile.)

Once you have one plugin installed, it's easy to edit it and modify it to do anything you want. The XML file looks roughly like this:

<SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
<os:ShortName>MySearchPlugin</os:ShortName>
<os:Description>The search engine I prefer to use</os:Description>
<os:InputEncoding>UTF-8</os:InputEncoding>
<os:Image width="16" height="16">data:image/x-icon;base64,ICON GOES HERE</os:Image>
<SearchForm>http://www.google.com/</SearchForm>
<os:Url type="text/html" method="GET" template="https://www.google.com/search">
  <os:Param name="complete" value="0"/>
  <os:Param name="q" value="allintext: {searchTerms}"/>
  <!--os:Param name="hl" value="en"/-->
</os:Url>
</SearchPlugin>

There are four things you'll want to modify. First, and most important, os:Url and os:Param control the base URL of the search engine and the list of parameters it takes. {searchTerms} in one of those Param arguments will be replaced by whatever terms you're searching for. So <os:Param name="q" value="allintext: {searchTerms}"/> gives me that allintext: parameter I wanted.

(The other parameter I'm specifying, <os:Param name="complete" value="0"/>, used to make Google stop the irritating autocomplete every time you try to modify your search terms. Unfortunately, this has somehow stopped working at exactly the same time that I upgraded Firefox. I don't see how Firefox could be causing it, but the timing is suspicious. I haven't been able to figure out another way of getting rid of the autocomplete.)

Next, you'll want to give your plugin a ShortName and Description so you'll be able to recognize it and choose it in the preferences window.

Finally, you may want to modify the icon: I'll tell you how to do that in a moment.

Using your new search plugin

[Firefox search prefs]

You've made all your modifications and saved the file to something inside the searchplugins folder in your Firefox profile. How do you make it your default?

I restarted Firefox to make sure it saw the new plugin, though that may not have been necessary. Then Edit->Preferences and click on the Search icon at the top. The menu near the top, under Default search engine, is what you want: your new plugin should show up there.

Modifying the icon

Finally, what about that icon?

In the plugin XML file I was copying, the icon line looked like:

<os:Image width="16"
height="16">data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAAAAAAA
... many more lines like this then ... ==</os:Image>

So how do I take that and make an image I can customize in GIMP?

I tried copying everything after "base64," and pasting it into a file, then opening it in GIMP. No luck. I tried base64 decoding it (you do this with base64 -d filename >outfilename) and reading it in with GIMP. Still no luck: "Unknown file type".

The method I found is roundabout, but works:

  1. Copy everything inside the tag: data:image/x-icon;base64,AA ... ==
  2. Paste that into Firefox's location bar and hit return. You'll see the icon from the search plugin you're modifying.
  3. Right-click on the image and choose Save image as...
  4. Save it to a file with the extension .ico -- GIMP won't open it without that extension.
  5. Open it in GIMP -- a 16x16 image -- and edit to your heart's content.
  6. File->Export as...
  7. Use the type "Microsoft Windows icon (*.ico)"
  8. Base64 encode the file you just saved, like this: base64 yourfile.ico >newfile
  9. Copy the contents of newfile and paste that into your os:Image line, replacing everything after data:image/x-icon;base64, and before </os:Image>

Whew! Lots of steps, but none of them are difficult. (Though if you're not on Linux and don't have the base64 command, you'll have to find some other way of encoding and decoding base64.)
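Steps 8 and 9 boil down to a simple encode/decode round trip, which you can sanity-check from the command line before pasting anything into the XML (the file names here are just placeholders):

```shell
# Round-trip check: encode a file, decode it back, compare the two.
printf 'fake-icon-bytes' > icon.ico
base64 icon.ico > icon.b64          # this is what goes inside <os:Image>
base64 -d icon.b64 > roundtrip.ico
cmp icon.ico roundtrip.ico && echo "round trip OK"
```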

But if you don't want to go through all the steps, you can download mine, with its lame yellow smiley icon, as a starting point: Google-clean plug-in.

Happy searching! See you when Firefox 36.0.2 comes out and they break some other important feature.

The 2:1 Form Factor

At KO GmbH, we did several projects to show off the 2:1 convertible laptop form factor. Krita Gemini and Calligra Gemini are applications that automatically switch from laptop to tablet gui mode when you switch your device. Of course, one doesn't get that to work without some extensive testing, so here's a little collection of devices showing off all existing (I believe) ways of making a laptop convertible:

There's rip'n'flip, as exemplified by the Lenovo Helix, and arguably by the Surface Pro 3 (with its own twist, the kickstand). There's the bend-over-and-over-and-over model pioneered by the Thinkpad Yoga (but this is an Intel SDP, not a Yoga; all the Yogas are with other ex-KO colleagues) and finally the screen tumbler of the Dell XPS 12.

Every model on the table has its own foibles.

The Helix actually doesn't do the 2:1 automatic switch trick, but that's because it's also the oldest model we've got around. The Helix basically has only one angle between the screen and keyboard, and it's fairly upright. The keyboard is pretty good, and the trackpoint is great, of course. In contrast to all the other devices, it also runs Linux quite well. The power button is sort of recessed and really hard to press: it's hard to switch the device on. The built-in Wacom pen is impossible to calibrate correctly, and it just won't track correctly at the screen edges. As a tablet, it's nice and light, but the rip'n'flip thing is flawed: it doesn't always re-attach correctly.

The Dell XPS 12 is one of the nicest devices of the four. The screen rotation mechanism looks scary at first, but it works very, very well. It's a nice screen, too. The keyboard is ghastly, though, missing a bunch of essential keys like separate home, end, page-up and page-down. The power button is placed on the left side, and it's just the right thing to play with, mindlessly, while thinking. This leads to the laptop suspending, of course! The device is heavy, too: too heavy to comfortably use as a tablet. As a laptop, except for the keyboard, it's pretty good. Linux compatibility is weird: you can either have the trackpad properly supported, or the wifi adapter, but not both. Not even with the latest Kubuntu! There's no pen, which is a pity...

I cannot talk about what's in the Intel SDP system, because that's under NDA. It's got a pen, just like the Surface Pro 3, and it's nice, light and a good harbinger of things to come. The form factor works fine for me. Some people are bothered by feeling the keys at the back of the device when it's in tablet mode, but I don't care. Tent mode is nice for watching movies, and presentation mode sort of makes it a nice drawing tablet.

The Surface Pro 3 is rather new. I got it as a test system for Krita on Windows. It's the lowest spec model, because that makes the best test, right? I can use the thing on my lap, with the kickstand, but only if I put my feet up on the table and make long legs... The N-trig pen is sort of fine... It's accurate, there's no parallax as with the Cintiq or Helix, and the pressure levels are fine, too, for my usage that is. But because it's a bluetooth device, there's a noticeable delay. It's a bit as if you're painting with too-thick oil paint. I never remove the keyboard cover, which, btw, is perfectly fine to type on. It feels a bit tacky folded back, but that's not a big problem.

So that's it... Four convertible laptops; three have trouble running Linux at the moment, but then, I should do more testing of Krita on Windows anyway. It's where 90% of Krita's user base is, it seems. I'd like the Dell way of converting best, if only the device weren't so heavy as a consequence. The Helix convertible never gets turned into a pure tablet in practice; that seems to go with the rip'n'flip design, because the same holds for the Surface Pro 3. The back-bendy type of convertible is just fine as well...

March 12, 2015

Memories

I first encountered Terry Pratchett's work in 1986, when Fergus McNeill's Quilled adventure game adaptation of The Colour of Magic was released for the ZX Spectrum. Back then, Fergus was a bigger name in my mind than Terry Pratchett. I enjoyed the game a lot, but couldn't get the book anywhere -- this was 1986, the Netherlands, no Internet, Oosterhout, so no bookshop carrying any fantasy books in English beyond Lord of the Rings.

When I was in my first year in Leiden, eighteen years old, studying Sinology, a friend of mine and I went to London on a book-buying expedition. Forget about the Tower, the V&A or the National Portrait Gallery. We went for Foyles, The Fantasy Book Center and the British Library. I acquired the full set of Fritz Leiber's "Fafhrd and the Gray Mouser" series, Frank got Lord Dunsany's autobiography, and I got Clark Ashton Smith's collected short stories and Lord Dunsany's Gods of Pegana (straight, apparently, from the rare books locker of the University of Buffalo).

I also bought Mort.

That was the first Terry Pratchett novel I read, and I was hooked. I read and re-read it a dozen times that week.

When I first met Irina, we had an overlapping taste, but very few books in common... The first book I foisted upon her was Equal Rites. I think, I'm not so sure anymore, I recognize books by their colour, and all my Terry Pratchett paperbacks have vaguely white splotchy spines by now.

If you look at our fantasy shelves, it's easy to see when I got my first job. That was 1994, when I bought my first Terry Pratchett hardcover. Since then, I've bought all his books in hardcover when they were released.

I fondly remember the Terry Pratchett and discworld Usenet newsgroups, back when Usenet was fun. alt.books.Pratchett, alt.fan.Pratchett. The annotated FAQ. L-Space. Pterry.

Deciding that, well, sure, I couldn't wait for the paperback, and would get the hardback, no matter what. Seeing the books' spines go all skewed with re-reading.

Were all his books awesome? No, of course not. Though I guess nobody will agree with me on which ones were less awesome. And I sometimes got fed up with his particular brand of moralizing, even.

But, in my mind, Terry Pratchett falls in the same slot as Wodehouse and Diana Wynne Jones. Wodehouse had about thirty years more of productive life; and Wodehouse's sense of language was, honestly, better. But Terry Pratchett's work showed much more versatility, though there, Diana Wynne Jones surely was the greater master. But there are books, like Feet of Clay, that I read, re-read and will keep re-reading.

An author of a body of work that will last a long time.

Audi Quattro

Winter is definitely losing its battle, and last weekend we had some fun filming with my new folding Xu Gong v2 quad.

Audi Quattro from jimmac on Vimeo.

Cat Splash Fever

Blender 2.74 is nearly out (in fact, you can test out Release Candidate 1 already!) and as with previous releases, there was a contest held with the community for the splash image that appears when Blender first launches. The theme for this release? Cats!

The rules were simple (as posted in a thread on blenderartists.org):

  • All cat renders will be fine, but preferred are the hairy fluffy Cycles rendered ones.
  • One cat, many cats, cartoon cats, crazy cats, angry cats, happy cats. All is fine.
  • Has to be a single F12 render+composite (no post process in other editors)
  • The selected artist should be willing to share the .blend with textures with everyone under a CC-BY-SA or CC-BY.
  • Deadline Saturday March 7.

And we got a winner! This excellent image by Manu Järvinen (maxon) is what you’ll see in Blender 2.74’s splash:

maxon

You can download a .zip with the splash and .blend (public domain!), or watch the making-of video.

But wait! That’s not all. There were many, many fantastic entries that were submitted. It would be a shame not to share them all. Check out this gallery of the top 10 runner-up submissions (in no particular order):

Agent_AL Jyri Unt (cgstrive) Davide Maimone (dawidh) kubo ^^NOva Robert J. Tiess (RobertT) Stan.1 StevenvdVeen Derek Barker (LordOdin - Theory Animation) Julian Perez (julperado)

March 11, 2015

Film Emulation in RawTherapee

This is old news but I just realized that I hadn't really addressed it before.

The previous work I had done on Film Emulation with G'MIC in GIMP (here and here) is also now available in RawTherapee directly! You'll want to visit this page on the RawTherapee wiki to see how it works, and to download the film emulation collection to use.


This is handy for those who work purely in RawTherapee or who don't want to jump into GIMP just to do some color toning. It's a pretty big collection of emulations, so hopefully you'll be able to find something that you like. Here's the list of what I think is in the package (there may be more there now):

  • Fuji Neopan 1600
  • Fuji Superia 100/400/800/1600
  • Ilford Delta 3200
  • Kodak Portra 160 NC/VC
  • Kodak Portra 400 NC/UC/VC

  • Fuji 160C
  • Fuji 400H
  • Fuji 800Z
  • Ilford HP5
  • Kodak Portra 160/400/800
  • Kodak TMax 3200
  • Kodak Tri-X 400

  • Polaroid PX-70
  • Polaroid PX100UV
  • Polaroid PX-680
  • Polaroid Time Zero (Expired)

  • Fuji FP-100c
  • Fuji FP-3000b
  • Polaroid 665
  • Polaroid 669
  • Polaroid 690

  • Agfa Precisa 100
  • Fuji Astia 100F
  • Fuji FP 100C
  • Fuji Provia 100F
  • Fuji Provia 400F
  • Fuji Provia 400X
  • Fuji Sensia 100
  • Fuji Superia 200 XPRO
  • Fuji Velvia 50
  • Generic Fuji Astia 100
  • Generic Fuji Provia 100
  • Generic Fuji Velvia 100
  • Generic Kodachrome 64
  • Generic Kodak Ektachrome 100 VS
  • Kodak E-100 GX Ektachrome 100
  • Kodak Ektachrome 100 VS
  • Kodak Elite Chrome 200
  • Kodak Elite Chrome 400
  • Kodak Elite ExtraColor 100
  • Kodak Kodachrome 200
  • Kodak Kodachrome 25
  • Kodak Kodachrome 64
  • Lomography X-Pro Slide 200
  • Polaroid 669
  • Polaroid 690
  • Polaroid Polachrome

  • Agfa Ultra Color 100
  • Agfa Vista 200
  • Fuji Superia 200
  • Fuji Superia HG 1600
  • Fuji Superia Reala 100
  • Fuji Superia X-Tra 800
  • Kodak Elite 100 XPRO
  • Kodak Elite Color 200
  • Kodak Elite Color 400
  • Kodak Portra 160 NC
  • Kodak Portra 160 VC
  • Lomography Redscale 100

  • Agfa APX 100
  • Agfa APX 25
  • Fuji Neopan 1600
  • Fuji Neopan Acros 100
  • Ilford Delta 100
  • Ilford Delta 3200
  • Ilford Delta 400
  • Ilford FP4 Plus 125
  • Ilford HP5 Plus 400
  • Ilford HPS 800
  • Ilford Pan F Plus 50
  • Ilford XP2
  • Kodak BW 400 CN
  • Kodak HIE (HS Infra)
  • Kodak T-Max 100
  • Kodak T-Max 3200
  • Kodak T-Max 400
  • Kodak Tri-X 400
  • Polaroid 664
  • Polaroid 667
  • Polaroid 672
  • Rollei IR 400
  • Rollei Ortho 25
  • Rollei Retro 100 Tonal
  • Rollei Retro 80s

Have fun with these, and don't forget to show off your results if you get a chance! It's always neat to see what folks do with these! :)

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

Stellarium in SOCIS 2015

Are you a student looking for an exciting summer job? Get paid this summer to work on Stellarium!

We were selected to be a mentoring organization for the ESA Summer of Code in Space 2015: a program funding European students to work on astronomy-related open source projects. Please review our ideas page and submit your application at http://sophia.estec.esa.int/socis/

March 10, 2015

Fedora Design Team Update

Fedora Design Team Logo

One of the things the Fedora Design Team decided to do following the Design Team Fedora Activity Day(s) we had back in January was to meet more regularly. We’ve started fortnightly meetings; we just had our second one.

During the FAD, we figured out a basic process for handling incoming design team tickets and Chris Roberts and Paul Frields wrote the SQL we needed to generate ticket reports for us to be able to triage and monitor our tickets more efficiently. From our whiteboard:

1

Anyhow, with those ticket reports in place and some new policies (if a ticket is older than 4 weeks with no response from the reporter, we’ll close it; if a ticket hasn’t had any updates in 2 weeks and the designer who took the ticket is unresponsive, we open it up for others) we went through a massive ticket cleanout during the FAD. We’ve been maintaining that cleanliness at our fortnightly meetings: we have only 16 open tickets now!

If you join one of our meetings, you'll note we spend a lot of time triaging tickets together and getting updates on ticket progress; we also have an open floor for announcements and for designers to get critique on things they are working on.

Here’s a report from our latest meeting. I don’t know if I’ll have time to do this style of summary after every meeting, but I’ll try to do them after particularly interesting or full meetings. When I don’t post one of these posts, I will post the meetbot links to the design-team mailing list, so that is the best place to follow along.

Fedora Design Team Meeting 10 March 2015

Completed Tickets

FUDCon APAC 2015 Call for Proposals Poster

Shatadru designed a poster for FUDCon APAC in ticket 353; we closed the ticket since the CFP was closed.

353-3-1-compressed

LiveUSB Creator Icons

Gnokii took on a new ticket to design some icons for the new LiveUSB creator UI.

FUDCon Pune Logo design

logo date

Suchakra and Yogi together created the logo for FUDCon Pune, and we closed the ticket as the work was all done and accepted.

Standee Banner Design for Events

banner-czech2

Gnokii gave us a print-ready CMYK tiff for this banner design ticket; we updated it with a link to the file and asked for feedback from the reporter (siddesh.)

Fedora Magazine favicon

Ryan Lerch created a favicon for Fedora Magazine, so we closed the ticket seeing as it was done. :)

Tickets In Progress

Tickets Open For You to Take!

One of the things we require to join the design team is that you take and complete a ticket. We opened up 3 tickets for folks to take – this could be you! Let me know if you want to work on any of these!

Discussion

Fedora 22 Supplemental Wallpapers Submission Window Closing Soon!

Gnokii pointed out that we really need more submissions for Fedora 22 supplemental wallpapers; the deadline is March 19. If you have some nice photography you’d like to submit or have a friend who has openly licensed photography you think would be a good match for Fedora, please submit it! All of the details are on gnokii’s blog post, and you can submit them directly in Nuancier, our wallpaper submission & voting app.

1/4 Page Ad for LinuxFest Northwest

One of our newest members, mleonova, put together some mockups for an ad for Fedora to go in the LinuxFest Northwest program. We gave her some critiques on her work and she is going to work on a final draft now.

New look for Fedora Magazine

Screenshot from 2015-03-10 14:38:11

Ryan Lerch put together a new design for Fedora Magazine on a test server and shared it with us for feedback; overall the feedback was overwhelmingly positive and we only had a couple of suggestions/ideas to add.

Ask.fedoraproject.org Redesign

Suchakra, Banas, and Sadin worked on a redesign of ask.fedoraproject.org during the Design Team FAD for ticket 199 and Suchakra showed us some things he’d been working on for that ticket. So far the work looks great, and it’s now listed as a possible summer project for a Fedora intern in GSoC.

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

March 08, 2015

Portable Float Map with 16-bit Half

Recently we saw some lively discussions about support of Half within the TIFF image format on the OpenEXR mailing list. That made me aware of the corresponding oyHALF code paths inside Oyranos. In order to test easily, Oyranos uses the KISS format PPM, which consists of a three-line ASCII header followed by the uncompressed pixel data. I wanted to create some RGB images containing 16-bit floating point half channels, but that PFM format variant is not yet defined. So here comes an RFC.

A portable float map (PFM) starts with the first-line identifier “Pf” or “PF” and contains 32-bit IEEE floating point data. The 16-bit IEEE/Nvidia/OpenEXR floating point variant starts with a “Ph” or “PH” magic in the first line, analogous to PFM: “Ph” stands for grayscale with one sample, while “PH” is used for RGB with three samples.

That’s it. Oyranos supports the format in git and maybe in the next 0.9.6 release.

GIMP: Turn black to another color with Screen mode

[20x20 icon, magnified 8 times] I needed to turn some small black-on-white icons to blue-on-white. Simple task, right? Except, not really. If there are intermediate colors that are not pure white or pure black -- which you can see if you magnify the image a lot, like this 800% view of a 20x20 icon -- it gets trickier.

[Bucket fill doesn't work for this] You can't use anything like Color to Alpha or Bucket Fill, because all those grey antialiased pixels will stay grey, as you see in the image at left.

And the Hue-Saturation dialog, so handy for changing the hue of a sky, a car or a dress, does nothing at all -- because changing hue has no effect when saturation is zero, as for black, grey or white. So what can you do?

I fiddled with several options, but the best way I've found is the Screen layer mode. It works like this:

[Make a new layer] In the Layers dialog, click the New Layer button and accept the defaults. You'll get a new, empty layer.

[Set the foreground color] Set the foreground color to your chosen color.

[Set the foreground color] Drag the foreground color into the image, or do Edit->Fill with FG Color.

Now it looks like your whole image is the new color. But don't panic!

[Use screen mode] Use the menu at the top of the Layers dialog to change the top layer's mode to Screen.

Layer modes specify how to combine two layers. (For a lot more information, see my book, Beginning GIMP). Multiply mode, for example, multiplies each pixel in the two layers, which makes light colors a lot more intense while not changing dark colors very much. Screen mode is sort of the opposite of Multiply mode: GIMP inverts each of the layers, multiplies them together, then inverts them again. All those white pixels in the image, when inverted, are black (a value of zero), so multiplying them doesn't change anything. They'll still be white when they're inverted back. But black pixels, in Screen mode, take on the color of the other layer -- exactly what I needed here.
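In formula terms, with channel values normalized to the 0..1 range, Screen mode computes 1 - (1 - a)(1 - b). A quick sketch of the arithmetic (plain Python, just to illustrate why white survives and black takes the fill color):

```python
def screen(bottom, top):
    """Screen blend: invert both layers, multiply, invert again."""
    return 1.0 - (1.0 - bottom) * (1.0 - top)

# With a blue fill layer over a black-on-white icon:
blue = (0.2, 0.4, 1.0)
on_white = tuple(screen(1.0, c) for c in blue)  # white stays white
on_black = tuple(screen(0.0, c) for c in blue)  # black takes the fill color
grey = screen(0.5, 0.2)  # anti-aliased pixels land proportionally in between
```

The grey anti-aliased pixels are exactly what made Bucket Fill fail earlier: Screen handles them for free.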

Intensify the effect with contrast

[Mars sketch, colorized orange] One place I use this Screen mode trick is with pencil sketches. For example, I've made a lot of sketches of Mars over the years, like this sketch of Lacus Solis, the "Eye of Mars". But it's always a little frustrating: Mars is all shades of reddish orange and brown, not grey like a graphite pencil.

Adding an orange layer in Screen mode helps, but it has another problem: it washes out the image. What I need is to intensify the image underneath: increase the contrast, make the lights lighter and the darks darker.

[Colorized Mars sketch, enhanced  with brightness/contrast] Fortunately, all you need to do is bump up the contrast of the sketch layer -- and you can do that while keeping the orange Screen layer in place.

Just click on the sketch layer in the Layers dialog, then run Colors->Brightness/Contrast...

This sketch needed the brightness reduced a lot, plus a little more contrast, but every image will be different. Experiment!

March 07, 2015

Color Reconstruction

If you overexpose a photo with your digital camera you are in trouble. That’s what most photography related textbooks tell you – and it’s true. So you better pay close attention to your camera’s metering while shooting. However, what to do when the “bad thing” happened and you got this one non-repeatable shot, which is so absolutely brilliant, but unfortunately has some ugly signs of overexposure?

In this blog article I’d like to summarize how darktable can help you to repair overexposed images as much as possible. I’ll cover modules which have been part of darktable for a long time but also touch the brand new module “color reconstruction”.

Why are overexposed highlights a problem?

The sensor cells of a digital camera translate the amount of light that falls onto them into a digital reading. They can do so up to a certain sensor specific level – called the clipping value. If even more light falls onto the sensor it does not lead to any higher reading, the clipping value is the maximum. Think of a sensor cell as a water bucket; you can fill the bucket with liquid until it’s full but you cannot fill in more than its maximum volume.

For a digital camera to sense the color of light three color channels are required: red, green and blue. A camera sensor achieves color sensitivity by organizing sensor cells carrying color filters in a certain pattern, most frequently a Bayer pattern.

colorreconstruction_bayer_matrix

Combining this fact with the phenomenon of overexposure we can differentiate three cases:

  1. All three color channels have valid readings below the clipping value

  2. At least one color channel is clipped and at least one color channel has a valid reading

  3. All three color channels are clipped

Case (1) does not need to concern us in this context: all is good and we get all tonal and color information of the affected pixels.

Case (3) is the worst situation: neither tonal nor color information is available from the pixels in question. The best we can say about these pixels is that they must represent really bright highlights at or above the clipping value of the camera.

In case (2) we do not have correct color information as this would require valid readings of all three color channels. As it’s often the green channel that clips first, pixels affected by this case of overexposure typically show a strong magenta color cast if we do not take further action. The good news: at least one of the channels has stayed below the clipping value, so we may use this one to restore the tonal information of the affected pixels, alas, without color.
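The three cases above amount to a simple per-pixel classification. Here is a sketch of it; the clip value is a stand-in for the camera-specific level, not a darktable constant:

```python
def clipping_case(r, g, b, clip):
    """Classify a raw RGB reading against the sensor clipping value:
    case 1: all channels valid (full tonal and color information),
    case 2: partly clipped (tonal info recoverable, color is not),
    case 3: fully clipped (only "very bright" is known)."""
    clipped = sum(channel >= clip for channel in (r, g, b))
    if clipped == 0:
        return 1
    if clipped == 3:
        return 3
    return 2
```

With a hypothetical 12-bit sensor clipping at 4095, a reading of (4095, 4095, 3200) falls into case 2: the blue channel still carries usable luminance.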

Dealing with overexposure in darktable

darktable has a modular structure. Therefore more than one module is typically involved when working on overexposed images. This is different from other applications where all the functionality may be part of a single general exposure correction panel. It is in the philosophy of darktable to not hide from the user the order in which modifications are made to an image.

Just in order to manage expectations: a heavily overexposed image or one with a fully blown-out sky is beyond repair. Only if at least some level of information is left in the highlights, and if highlights make up only a limited part of the image, is there a realistic chance of a convincing result.

Correcting overall image exposure

Logically one of the basic modifications you need to consider for an overexposed image is an exposure correction. A negative exposure correction in the “exposure” module is frequently indispensable in order to bring brightest highlights into a more reasonable tonal range.

colorreconstruction_scr_1

Additionally you should take into account that the curve defined in the “base curve” module has a strong effect on highlights as well. You may try out different alternatives as offered in the module's presets to find the one that best fits to your expectations. A base curve with a more continuous increase that slowly reaches the top right corner (right example) is often better suited for images with overexposed areas than one that already reaches the top at a moderately high input luminance level (left example).

colorreconstruction_scr_2

colorreconstruction_scr_3

Bringing back detail into highlights

The “highlight reconstruction” module comes early in the pixel pipeline, acting on raw data. This is the central module that deals with the different cases of overexposure described above. By default the module uses the “clip highlights” method: it makes sure that pixels with all or part of their RGB channels clipped (cases 2 and 3) are converted to neutral white highlights instead of showing some kind of color cast. This is the minimum you want to do with highlights, which is why this method is activated by default for all raw input images.

colorreconstruction_scr_4

As an alternative the “highlight reconstruction” module offers the method “reconstruct in LCh”. This method can effectively deal with case (2) as described above: the luminance of partly clipped pixels is reconstructed, so the pixels get back their tonal information, but they end up as a colorless neutral gray.

A third method offered by the “highlight reconstruction” module is called “reconstruct color”.

At first the idea of reconstructing color in highlights may sound surprising. As you know from what has been said above, overexposed highlights always lack color information (cases 2 and 3) and may even miss any luminance information as well (case 3). How can we then expect to reconstruct colors in these cases?

Now, the method that is used here is called “inpainting”. The algorithm assumes that an overexposed area is surrounded by non-overexposed pixels with the same color that the overexposed area had originally. The algorithm extrapolates those valid colors into the clipped highlights. This works remarkably well for homogeneous overexposed areas like skin tones.

Often it works perfectly, but sometimes it might struggle to successfully fill all the highlights. In some cases it might produce moiré-like patterns as an artifact, especially if the overexposed area is overlaid by sharp structures. Since you can identify limitations and potential problems immediately, this method is always worth a try.

Bringing colors into highlights

The existing limitations of the “highlight reconstruction” module when it comes to colors have led to the development of a new module called “color reconstruction”. This module is currently part of the master development branch and will be part of darktable with the next feature release.

As we have discussed above there is no way to know the “true” color of a clipped highlight, we can only make an estimate.

The basic idea of the module is as follows: pixels which exhibit a luminance value above a user selectable threshold are assumed to have invalid colors. All pixels whose luminance value is below the threshold are assumed to have valid colors. The module now replaces invalid colors by valid ones based on proximity in the image’s x and y scale and in the luminance scale.
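A toy one-dimensional sketch of this replacement idea follows. It is illustrative only (spatial proximity alone, with a Gaussian falloff); darktable's actual implementation works on the full image via a bilateral grid, and the threshold and sigma here are hypothetical parameters:

```python
import numpy as np

def reconstruct_colors(luma, chroma, threshold, sigma=2.0):
    """Toy 1-D illustration: pixels whose luminance exceeds
    `threshold` are treated as having invalid color and get chroma
    borrowed from valid pixels, weighted by spatial distance.
    Luminance values remain untouched."""
    luma = np.asarray(luma, dtype=float)
    chroma = np.asarray(chroma, dtype=float)
    valid = luma <= threshold
    out = chroma.copy()
    if not valid.any():
        return out  # nothing to borrow from
    positions = np.arange(len(luma))
    for i in np.where(~valid)[0]:
        # Gaussian falloff: nearby valid pixels dominate the estimate.
        w = np.exp(-((positions[valid] - i) ** 2) / (2.0 * sigma ** 2))
        out[i] = np.sum(w * chroma[valid]) / np.sum(w)
    return out
```

An overexposed pixel surrounded by, say, gold-colored neighbors thus inherits a gold chroma while keeping its own (bright) luminance.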

Let us assume we have an area of overexposed highlights, e.g. a reflection on a glossy surface. The reflection has no color information and is displayed as pure white if the “highlight reconstruction” module is active. If this overexposed area is very close to or surrounded by non-overexposed pixels the new module transfers the color of the non-overexposed area to the uncolored highlights. The luminance values of the highlight pixels remain unchanged.

Example 1

The following image is a typical case.

colorreconstruction_ex1_1

The fountain statue has a glossy gold-plated surface. Even with proper metering there is virtually no chance to get a photo of this object on a sunny day without overexposed highlights – there is always a reflection of the sun somewhere on the statue, unless I had gone for an exact back-lit composition (which would have had its own problems). In this case we see overexposed highlights on the left shoulder and arm and partly on the head of the figure – distracting, as the highlights are pure white and present a strong contrast to the warm colors of the statue.

With the “color reconstruction” module activated I only needed to adjust the “luma threshold” to get the desired result.

colorreconstruction_scr_6

The highlights are converted into a gold-yellow cast which nicely blends with the surrounding color of the statue.

colorreconstruction_ex1_2

The “luma threshold” parameter is key for the effect. When you decrease it, you tell darktable to regard an ever growing portion of the pixels as having invalid colors which darktable needs to replace. At the same time the number of pixels regarded as having valid colors decreases. darktable only replaces an invalid color if a “good” fit can be found – good meaning that a source color is available within a certain distance in terms of image coordinates and luminance relative to the target. Therefore, when you shift the slider too far to the left, at some point the results get worse again because too few valid colors are available – the slider typically shows a marked “sweet spot” where results are best. The sweet spot depends on the specifics of your image and you need to find it by trial and error.

The “color reconstruction” module uses a so-called “bilateral grid” for fast color look-up (for further reading see [1]). Two parameters, “spatial blur” and “range blur”, control the details of the bilateral grid. With a low setting of “spatial blur” darktable will only consider valid colors that are found geometrically close to the pixels that need replacement. With higher settings colors get more and more averaged over a broader area of the image, which delivers replacement colors that are more generic and less defined. This may or may not improve the visual quality of your image – you need to find out by trial and error. The same is true for “range blur”, which acts on the luminance axis of the bilateral grid. It controls how strongly pixels with luminance values different from the target pixel contribute to the color replacement.
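The combined effect of the two sliders can be pictured as a bilateral weight: each candidate source pixel contributes according to both its spatial distance and its luminance difference. A sketch of that weighting (illustrative only, not darktable's actual grid code; the parameter names mirror the sliders):

```python
import math

def bilateral_weight(dx, dy, dluma, spatial_blur, range_blur):
    """Weight of a candidate source pixel for color replacement.
    "spatial blur" widens the Gaussian over image distance;
    "range blur" widens the Gaussian over luminance difference."""
    spatial = math.exp(-(dx * dx + dy * dy) / (2.0 * spatial_blur ** 2))
    tonal = math.exp(-(dluma * dluma) / (2.0 * range_blur ** 2))
    return spatial * tonal
```

With a small spatial blur, only geometrically close colors get meaningful weight; raising range blur lets pixels of rather different luminance contribute as well.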

Example 2

Here is a further example (photo supplied by Roman Lebedev).

colorreconstruction_ex2_1

The image shows an evening scene with a sky typical of the time shortly after sunset. As a starting point a basic set of corrections has already been applied: “highlight reconstruction” has been used with the “reconstruct in LCh” method. Had we used the “clip highlights” method, the small cloud behind the flag post would have been lost. In addition we applied a negative exposure compensation of -1.5 EV in the “exposure” module, we used the “lens correction” module mainly to fix vignetting, and we used the “contrast brightness saturation” module for some boosting effect on contrast and saturation.

Obviously the sky is overexposed and lacks good rendition of colors – visible in the arch-like area with wrong colors. With the “color reconstruction” module and some tweaking of the parameters I got the following result, with a much more credible evening sky:

colorreconstruction_ex2_2

These are the settings I used:

colorreconstruction_scr_7

If you zoom into the image in darktable you will immediately see that reconstructed colors change with every zoom step. This is an unwanted side effect of the way darktable's pixel pipeline deals with zoomed-in images: as only the visible part of the image is processed for speed reasons, the “color reconstruction” module “sees” different surroundings depending on the zoom level, which leads to different colors in the visible area. It is therefore recommended to adjust the “color reconstruction” parameters while viewing the full image in the darkroom. We'll try to fix this behavior in future versions of the module [ see below for an update ].

Example 3

As a final example let's look at this photo of the colorful window of a Spanish cathedral. Although this image is not heavily overexposed in the first place, the rose window clearly lacks color saturation; especially the centers of the lighter glass tiles look washed out, which is mostly due to a too aggressive base curve. As an exercise let's see how to fix this with “color reconstruction”.

colorreconstruction_ex3_1

This time I needed to make sure that the highlights did not get colored in some homogeneous orange-brownish hue, which we would get by averaging all the various colors of the window tiles. Instead we need to take care that each tile retains its individual color. Therefore, replacement colors need to be looked for in close geometric proximity to the highlights. This requires a low setting of the “spatial blur” parameter. Here are the details:

colorreconstruction_scr_8

And here is the resulting image with some additional adjustment in the “shadows and highlights” module. The mood of the scene, which has been dominated by the rich and intensive primary colors, is nicely reconstructed.

colorreconstruction_ex3_2

One final word on authenticity. It should be obvious by now that the “color reconstruction” module only makes an estimate of the colors that have been lost in the highlights. By no means can these colors be regarded as “authoritative”. You should be aware that “color reconstruction” is merely an interpretation rather than a faithful reproduction of reality. So if you strive for documentary photography, you should not rely on this technique but rather go for a correct exposure in the first place. :)

Update

The behavior of this module on zoomed-in image views has been improved recently. In most cases you should now get a rendition of colors that is independent of the zoom level. There are a few known exceptions:

  • If you have highlight areas which are adjacent to high-contrast edges, you may observe a slight magenta shift when zooming in.
  • If you combine this module with the “reconstruct color” method of the “highlight reconstruction” module, highlights may be rendered colorless or in a wrong color when zooming in.

These artifacts only influence the image display – the final output remains unaffected. Still, we recommend fine-tuning the parameters of this module while viewing the full image.

[1] Chen J., Paris S., and Durand F. 2007. Real-time Edge-Aware Image Processing with the Bilateral Grid. In Proceedings of the ACM SIGGRAPH conference. (http://groups.csail.mit.edu/graphics/bilagrid/bilagrid_web.pdf)

Blender 2.74 Test Build

The next Blender release is coming soon! Here are a couple of useful links.

 

CC “Open Business Models”

The Creative Commons launched a new initiative to support “Open Business Models”.

creativecommons.org/weblog/entry/45022

Here’s the reply I posted there, still waiting to be approved (probably timezone issue :)

The Netherlands-based Blender Institute has been doing business with CC-BY animation films and training since 2007. We’re very well known for pioneering in making a living selling CC media and using free/open source software exclusively. Based on my experience I have to make a couple of remarks though.

For me a definition of ‘open business model’ implies that a business is transparent and accessible for clients and customers – open about how costs work, how internal processes work, including sharing the revenue figures etc. This is even a new trend now. It’s the counter movement to answer to the financial crisis – and one of the positive outcomes of the “occupy” movement.

Calling “Making a living by selling your work under CC” an Open Business Model is just confusing people and potentially misleading. People who do business with free/open source software also don’t call their work “open business”. I even know corporations (Autodesk) who share training under CC-ND-NC, a license no sane person would call an “open business model”; nor would I consider Autodesk to be interested in sharing and openness at all.

Let me state it stronger – doing bizz with CC should not be explicitly branded as such a special thing. The CC is there to stay and is one of the valuable choices artists can make (should be legally allowed to make!) when doing business. But it’s not a religion, it’s not exclusive. Leave the choice to artists themselves what to do. Sometimes CC works great, sometimes not.

What I always liked about CC so much is that they found an elegant solution to name something that’s related to essential user freedom, sharing and openness. All three aspects are relevant together.

-Ton-

For fun:

The Creative Commons official sharing site: photoshop files, clumsy graphics and they love Autodesk!
https://layervault.com/creative-commons/carousel

 

March 06, 2015

MyPaint wiki to close down in approx. 2 weeks

The MyPaint wiki will be closed down shortly: our hosting provider will be closing down the server it runs on.

That’s not necessarily a bad thing: it encourages us to migrate the content somewhere a little more central, and make hard-nosed decisions about what to keep and what to ignore that we’ve been putting off since forever (there has been so. much. spam.!)

So, we’ll be migrating at least the user manual and the brushpacks page to our home on Github so that they can still be maintained. If you think that anything else should be retained, please go to

http://wiki.mypaint.info/

and have a dig around. If you see an area which should be copied, please link to it on our tracking issue for this migration,

https://github.com/mypaint/mypaint/issues/242

Thank you.

Special thanks to Techmight for hosting our site and our DNS for many years! And thanks to all previous contributors to the wiki too. We will try to retain your content, or give you fair warning to move it elsewhere.

March 04, 2015

Getting Around in GIMP - Luminosity Masks Revisited


Brorfelde landscape by Stig Nygaard (cb)
After adding an aggressive curve along with a mid-tone luminosity mask.

I had previously written about adapting Tony Kuyper’s Luminosity Masks for GIMP. I won’t re-hash all of the details and theory here (just head back over to that post and brush up on them there), but rather I’d like to re-visit them using channels. Specifically to have another look at using the mid-tones mask to give a little pop to images.

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP
Original tutorial on Luminosity Masks:
Getting Around in GIMP - Luminosity Masks
Luminosity Masking in darktable:
PIXLS.US - Luminosity Masking in darktable





Let’s Build Some Luminosity Masks!

The way I approached building the luminosity masks previously was to create them using layer blending modes. In this re-visit, I'd like to build them from selection sets in the Channels tab of GIMP.

For the Impatient:
I’ve also written a Script-Fu that automates the creation of these channels mimicking the steps below.

Download from: Google Drive

Download from: GIMP Registry (registry.gimp.org)

Once installed, you’ll find it under:
Filters → Generic → Luminosity Masks (patdavid)
[Update]
Yet another reason to love open-source - Saul Goode over at this post on GimpChat updated my script to run faster and cleaner.
You can get a copy of his version at the same Registry link above.
(Saul’s a bit of a Script-Fu guru, so it’s always worth seeing what he’s up to!)


We’ll start off in a similar way as we did previously.

Duplicate your base image

Either through the menus, or by Right-Clicking on the layer in the Layer Dialog:
Layer → Duplicate Layer
Pat David GIMP Luminosity Mask Tutorial Duplicate Layer

Desaturate the Duplicated Layer

Now desaturate the duplicated layer. I use Luminosity to desaturate:
Colors → Desaturate…

Pat David GIMP Luminosity Mask Tutorial Desaturate Layer

This desaturated copy of your color image represents the “Lights” channel. What we want to do is to create a new channel based on this layer.

Create a New Channel “Lights”

The easiest way to do this is to go to your Channels Dialog.

If you don’t see it, you can open it by going to:
Windows → Dockable Dialogs → Channels

Pat David GIMP Luminosity Mask Tutorial Channels Dialog
The Channels dialog

On the top half of this window you'll see an entry for each channel in your image (Red, Green, Blue, and Alpha). On the bottom will be a list of any channels you have previously defined.

To create a new channel that will become your “Lights” channel, drag any one of the RGB channels down to the lower window (it doesn’t matter which - they all have the same data due to the desaturation operation).

Now rename this channel to something meaningful (like “L” for instance!), by double-clicking on its name (in my case it's called “Blue Channel Copy”) and entering a new one.

This now gives us our “Lights” channel, L :

Pat David GIMP Luminosity Mask Tutorial L Channel

Now that we have the “Lights” channel created, we can use it to create its inverse, the “Darks” channel...

Create a New Channel “Darks”

To create the “Darks” channel, it helps to realize that it should be the inverse of the “Lights” channel. We can get this selection through a few simple operations.

Basically, we are going to select the entire image, then subtract the “Lights” channel from it. What is left becomes our new “Darks” channel.

Select the Entire Image

First, have the entire image selected:
Select → All

Remember, you should be seeing the “marching ants” around your selection - in this case the entire image.

Subtract the “Lights” Channel

With the entire image selected, now we just have to subtract the “Lights” channel. In the Channels dialog, just Right-Click on the “Lights” channel, and choose “Subtract from Selection”:

Pat David GIMP Luminosity Mask Tutorial L Channel Subtract

You’ll now see a new selection on your image. This selection represents the inverse of the “Lights” channel...

Create a New “Darks” Channel from the Selection

Now we just need to save the current selection to a new channel (which we’ll call... Darks!). To save the current selection to a channel, we can just use:
Select → Save to Channel

This will create a new channel in the Channel dialog (probably named “Selection Mask copy”). To give it a better name, just Double-Click on the name to rename it. Let’s choose something exciting, like “D”!

More Darker!

At this point, you’ll have a “Lights” and a “Darks” channel. If you wanted to create some channels that target darker and darker regions of the image, you can subtract the “Lights” channel again (this time from the current selection, “Darks”, as opposed to the entire image).

Once you’ve subtracted the “Lights” channel again, don’t forget to save the selection to a new channel (and name it appropriately - I like to name subsequent masks things like, “DD”, in this case - if I subtracted again, I’d call the next one “DDD” and so on…).
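In channel arithmetic terms, the steps above amount to a per-pixel clamped subtraction. A minimal sketch in plain Python (values normalized to 0..1; treating “Subtract from Selection” as a subtraction clamped at zero is the assumption here, not GIMP's actual code):

```python
def subtract(sel, channel):
    # "Subtract from Selection", per pixel: subtract and clamp at zero
    return max(sel - channel, 0.0)

L = 0.2                  # a fairly dark pixel's value in the Lights channel
D = subtract(1.0, L)     # Select All minus Lights      -> ~0.8
DD = subtract(D, L)      # subtract Lights once more    -> ~0.6
DDD = subtract(DD, L)    # and again                    -> ~0.4
# A bright pixel (L = 0.9) would hit zero after a single subtraction,
# which is why the darker masks target progressively darker regions.
```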

I’ll usually make 3 levels of “Darks” channels, D, DD, and DDD:

Pat David GIMP Luminosity Mask Tutorial Darks Channels
Three levels of Dark masks created.

Here's what the final three darks channels look like:

Pat David GIMP Luminosity Mask Tutorial All Darks Channels
The D, DD, and DDD channels

Lighter Lights

At this point we have one “Lights” channel, and three “Darks” channels. Now we can go ahead and create two more “Lights” channels, to target lighter and lighter tones.

The process is identical to creating the darker channels, just in reverse.

Lights Channel to Selection

To get started, activate the “Lights” channel as a selection:

Pat David GIMP Luminosity Mask Tutorial L Channel Activate

With the “Lights” channel as a selection, now all we have to do is subtract the “Darks” channel from it, then save that selection as a new channel (which will become our “LL” channel), and so on…

Pat David GIMP Luminosity Mask Tutorial Subtract D Channel
Subtracting the D channel from the L selection

To get an even lighter channel, you can subtract D one more time from that selection as well.

Here's what the three channels look like, from L up to LLL:

Pat David GIMP Luminosity Mask Tutorial All Lights Channels
The L, LL, and LLL channels

Mid Tones Channels

By this point, we've got six new channels: three each for light and dark tones:

Pat David GIMP Luminosity Mask Tutorial L+D Channels

Now we can generate our mid-tone channels from these.

The concept behind generating the mid-tones is relatively simple - we just intersect a dark and a light channel, and what's left is the mid-tones.

Intersecting Channels for Midtones

To get started, first select the “L” channel and set it as the current selection (just like above): Right-Click → Channel to Selection.

Then, Right-Click on the “D” channel, and choose “Intersect with Selection”.

You likely won’t see any selection active on your image, but it’s there, I promise. Now as before, just save the selection to a channel:
Select → Save to Channel

Give it a neat name. Sayyy, “M”? :)

You can repeat for each of the other levels, creating an MM and MMM if you’d like.
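Per pixel, intersecting two selections keeps the smaller of the two values, which is why the result peaks in the mid-tones. A minimal sketch in plain Python (treating “Intersect with Selection” as a per-pixel minimum is the assumption; values 0..1):

```python
def intersect(sel_a, sel_b):
    # "Intersect with Selection", per pixel: keep the smaller value
    return min(sel_a, sel_b)

for L in (0.1, 0.5, 0.9):   # a dark, a mid, and a light pixel
    D = 1.0 - L
    M = intersect(L, D)
    print(L, M)             # M is largest for the mid-tone pixel
```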

Now remember, the mid-tones channels are intended to isolate mid values as a mask, so they can look a little strange at first glance. Here’s what the basic mid-tones mask looks like:

Pat David GIMP Luminosity Mask Tutorial Mid Channel
Basic Mid-tones channel

Remember: black tones in this mask make the associated layer fully transparent there (the layer below shows through), while white tones make it fully opaque.
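In layer-mask terms this is just a linear blend per pixel. A minimal sketch in plain Python (values normalized to 0..1; illustrative only, not GIMP's actual compositing code):

```python
def composite(top, bottom, mask):
    # mask 1.0 (white) -> top layer fully opaque,
    # mask 0.0 (black) -> top layer fully transparent
    return mask * top + (1.0 - mask) * bottom

# A pixel where the mask is mostly white takes most of its value
# from the adjusted (top) layer:
print(composite(0.3, 0.1, 0.8))
```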


Using the Masks

The basic idea behind creating these channels is that you can now mask particular tonal ranges in your images, and the mask will be self-feathering (due to how we created them). So we can now isolate specific tones in the image for manipulation.

Previously, I had shown how this could be used to do some simple split-toning of an image. In that case I worked on a B&W image, and tinted it. Here I’ll do the same with our image we’ve been working on so far...

Split Toning

Using the image I’ve been working through so far, we have the base layer to start with:

Pat David GIMP Luminosity Mask Tutorial Split Tone Base

Create Duplicates

We are going to want two duplicates of this base layer. One to tone the lighter values, and another to tone the darker ones. We’ll start by considering the dark tones first. Duplicate the base layer:
Layer → Duplicate Layer

Then rename the copy something descriptive. In my example, I’ll call this layer “Dark” (original, I know):

Pat David GIMP Luminosity Mask Tutorial Split Tone Darks

Add a Mask

Now we can add a layer mask to this layer. You can either Right-Click the layer, and choose “Add Layer Mask”, or you can go through the menus:
Layer → Mask → Add Layer Mask

You’ll then be presented with options about how to initialize the mask. You’ll want to Initialize Layer Mask to: “Channel”, then choose one of your luminosity masks from the drop-down. In my case, I’ll use the DD mask we previously made:

Pat David GIMP Luminosity Mask Tutorial Add Layer Mask Split Tone

Adjust the Layer

Pat David GIMP Luminosity Mask Tutorial Split Tone Activate DD Mask
Now you’ll have a Dark layer with a DD mask that will restrict any modification you do to this layer to only apply to the darker tones.

Make sure you select the layer, and not its mask, by clicking on it (you'll see a white outline around the active layer). Otherwise any operations you do may accidentally get applied to the mask instead of the layer.


At this point, we want to modify the colors of this layer in some way. There are nearly endless ways to approach this, bounded only by your creativity and imagination. For this example, we are going to tone the image with a cool teal/blue color (just like before), which, combined with the DD layer mask, will restrict the modification to the darker tones.

So I’ll use the Colorize option to tone the entire layer a new color:
Colors → Colorize

To get a Teal-ish color, I’ll pull the Hue slider over to about 200:

Pat David GIMP Luminosity Mask Tutorial Split Tone Colorize

Now, pay attention to what’s happening on your image canvas at this point. Drag the Hue slider around and see how it changes the colors in your image. Especially note that the color shifts will be restricted to the darker tones thanks to the DD mask being used!

To illustrate, mouseover the different hue values in the caption of the image below to change the Hue, and see how it affects the image with the DD mask active:


Mouseover to change Hue to: 0 - 90 - 180 - 270

So after I choose a new Hue of 200 for my layer, I should be seeing this:

Pat David GIMP Luminosity Mask Tutorial Split Tone Dark Tinted
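For the curious, Colorize essentially keeps each pixel's lightness and imposes a single hue and saturation on the whole layer. A rough stand-in in plain Python (the luma weights and the HLS model here are illustrative; GIMP's exact formula may differ):

```python
import colorsys

def colorize(r, g, b, hue_deg=200.0, sat=0.5):
    # Keep the pixel's lightness, impose one hue/saturation
    lightness = 0.21 * r + 0.72 * g + 0.07 * b  # rough luma weights (illustrative)
    return colorsys.hls_to_rgb(hue_deg / 360.0, lightness, sat)

r, g, b = colorize(0.5, 0.5, 0.5)
print(r < g < b)  # a mid grey shifts toward teal/blue: True
```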

Repeat for Light Tones

Now just repeat the above steps, but this time for the light tones. So duplicate the base layer again, and add a layer mask, but this time try using the LL channel as a mask.

For the lighter tones, I chose a Hue of around 25 instead (more orange-ish than blue):

Pat David GIMP Luminosity Mask Tutorial Split Tone Light Tinted

In the end, here are the results that I achieved:

Pat David GIMP Luminosity Mask Tutorial Split Tone Result
After a quick split-tone (mouseover to compare to original)

The real power here comes from experimentation. I encourage you to try using a different mask to restrict the changes to different areas (try the LLL for instance). You can also adjust the opacity of the layers to modify how strongly the color tones affect those areas. Play!

Mid-Tones Masks

The mid-tone masks were very interesting to me. In Tony’s original article, he mentioned how much he loved using them to provide a nice boost to contrast and saturation in the image. Well, he’s right. It certainly does do that! (He also feels that it’s similar to shooting the image on Velvia).

Pat David GIMP Luminosity Mask Tutorial Mid Tones Mask
Let’s have a look.

I’ve deleted the layers from my split-toning exercise above, and am back to just the base image layer again.

To try out the mid-tones mask, we only need to duplicate the base layer, and apply a layer mask to it.

This time I’ll choose the basic mid-tones mask M.


What's interesting about using this mask is that you can apply pretty aggressive curve modifications and still keep the image from blowing out. We are only targeting the mid-tones.

To illustrate, I’m going to apply a fairly aggressive compression to the curves by using Adjust Color Curves:
Colors → Curves

When I say aggressive, here is what I’m referring to:

Pat David GIMP Luminosity Mask Tutorial Aggresive Curve Mid Tone Mask

Here is the effect it has on the image when using the M mid-tones mask:


Aggressive curve with Mid-Tone layer mask
(mouseover to compare to original)

As you can see, there is an increase in contrast across the image, as well as a nice little boost to saturation. You don't need to worry about blowing out highlights or losing shadow detail, because the mask will not allow you to modify those values.

More Samples of the Mid-Tone Mask in Use

Pat David GIMP Luminosity Mask Tutorial
Pat David GIMP Luminosity Mask Tutorial
The lede image again, with another aggressive curve applied to a mid-tone masked layer
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Red Tailed Black Cockatoo at f/4 by Debi Dalio on Flickr (used with permission)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscape Ballon by Lennart Tange on Flickr (cb)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscapes by Tom Hannigan on Flickr (cb)
(mouseover to compare to original)



Mixing Films

This is something that I’ve found myself doing quite often. It’s a very powerful method for combining color toning that you may like from different film emulations. Consider what we just walked through.

These masks allow you to target modifications of layers to specific tones of an image. So if you like the saturation of, say, Fuji Velvia in the shadows, but like the upper tones to look similar to Polaroid Polachrome, then these luminosity masks are just what you’re looking for!

Just a little food for thought and experimentation... :)

Stay tuned later in the week where I’ll investigate this idea in a little more depth.

In Conclusion

This is just another tool in our mental toolbox of image manipulation, but it’s a very powerful tool indeed. When considering your images, you can now look at them as a function of luminosity - with a neat and powerful way to isolate and target specific tones for modification.

As always, I encourage you to experiment and play. I'm willing to bet this method finds its way into at least a few people's workflows in some fashion.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don't, it's not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


Monthly Drawing Challenge

(by jmf)

The monthly drawing challenge on the Krita forums is now really taking off! The first run in February was mainly a test run. After that a lot of people said they were interested, so I decided to keep going.

stranger_by_tharindad-d8j6s4d

Last month’s winner: “Stranger” by tharindad.

The idea came when I was browsing the Krita forums in search of a drawing challenge and the only thing that came up was on Facebook. Not everybody has or wants Facebook, so we’ll have this challenge on the forum.

It's not about competition! It's mostly a way to get rid of the “blank canvas syndrome”, to try something new and get new inspiration. If you want to draw but aren't inspired, or want to step out of your comfort zone, this is for you!

This month’s topic is “Unusual Dinner”.

To enter, post your picture in this thread. The deadline is March 24, 2015. The winner is decided by a vote on the forums and gets the privilege of choosing next month's topic.

released darktable 1.6.3

We are happy to announce that darktable 1.6.3 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.3
Please only use our provided packages (“darktable-1.6.3.*” tar.xz and dmg), not the auto-created tarballs from GitHub (“Source code”, zip and tar.gz). The latter are just git snapshots and will not work! Here's the direct link to the tar.xz:
https://github.com/darktable-org/darktable/releases/download/release-1.6.3/darktable-1.6.3.tar.xz
and the DMG:
https://github.com/darktable-org/darktable/releases/download/release-1.6.3/darktable-1.6.3.dmg

This is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.3.tar.xz
 852bb3d307b0e2b579d14cc162b347ba1193f7bc9809bb283f0485dfd22ff28d
sha256sum darktable-1.6.3.dmg
 be568ad20bfb75aed703e2e4d0287b27464dfed1e70ef2c17418de7cc631510f
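To check a download against the published sum, a minimal Python sketch (equivalent to running sha256sum on the file, reading in chunks so large tarballs don't need to fit in memory):

```python
import hashlib

# Published checksum for darktable-1.6.3.tar.xz (from the release notes above)
EXPECTED = "852bb3d307b0e2b579d14cc162b347ba1193f7bc9809bb283f0485dfd22ff28d"

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# After downloading: sha256_of("darktable-1.6.3.tar.xz") == EXPECTED
```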

Changes

  • Make camera import window transient
  • Allow soft limits on radius
  • Fix soft boundaries for black in exposure
  • Change order of the profile/intent combo in export dialog
  • Support read/write of chromaticities in EXR
  • Allow to default to :memory: db in config
  • Add mime handler for non-raw image file formats
  • Improved lens model name detection for Sony SAL lenses

Bug fixes

  • Fix buffer overrun in SSE clipping loop for highlight handling
  • Prevent exporting when an invalid export/storage is selected
  • Hopefully last fix for aspect ratios in crop and rotate (#9942)
  • No tooltip when dragging in monochrome (#10319)

RAW support

  • Panasonic LX100 (missing non-standard aspect ratio modes)
  • Panasonic TZ60
  • Panasonic FZ1000
  • KODAK EASYSHARE Z1015 IS
  • Canon 1DX (missing sRAW modes)
  • Canon A630 and SX110IS (CHDK RAW)

white balance presets

  • Panasonic FZ1000
  • Panasonic TZ60
  • Panasonic LX100

standard matrix

  • Canon Rebel T3 (non-european 1100D)

enhanced matrix

  • nikon d750

noise profiles

  • Canon EOS 1DX

March 03, 2015

Updating Firmware on Linux

A few weeks ago Christian asked me to help with the firmware update task that a couple of people at Red Hat have been working on for the last few months. Peter has got fwupdate to the point where we can “upload” sample .cap files onto the flash chips, but this isn’t particularly safe, or easy to do. What we want for Fedora and RHEL is to be able to either install a .rpm file for a BIOS update (if the firmware is re-distributable), or to get notified about it in GNOME Software where it can be downloaded from the upstream vendor. If we’re showing it in a UI, we also want some well written update descriptions, telling the user about what’s fixed in the firmware update and why they should update. Above all else, we want to be able to update firmware safely offline without causing any damage to the system.

So, let's back up a bit. What do we actually need? A binary firmware blob isn't so useful on its own, and so Microsoft have decided we should all package it up in a .cab file (a bit like a .zip file) along with a .inf file that describes the update in more detail. Parsing .inf files isn't so hard on Linux, as we can fix them up to be valid and open them as a standard key file. The .inf file gives us the hardware ID the firmware applies to, as well as a vendor and a short (!) update description. So far the update descriptions have been less than awesome (“update firmware”), so we also need some way of fixing up the update descriptions to be suitable to show the user.
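Since a fixed-up .inf file is just an INI-style key file, extracting fields from one is straightforward; a minimal sketch with invented sample contents (the key values below are illustrative, not from a real vendor .inf):

```python
import configparser

# Invented sample .inf contents, for illustration only
SAMPLE_INF = """\
[Version]
Class=Firmware
DriverVer=03/03/2015,1.2.3
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE_INF)
print(cfg["Version"]["DriverVer"])  # prints: 03/03/2015,1.2.3
```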

AppStream, again, to the rescue. I'm going to ask nice upstreams like Intel and the weird guy who does ColorHug to start shipping a MetaInfo file alongside the .inf file in the firmware .cab file. This means we can have fully localized update descriptions, along with all the usual things you'd expect from an update, e.g. the upstream vendor, the licensing information, etc. Of course, a lot of vendors are not going to care about good descriptions, and won't be interested in shipping another 16k file in the update just for Linux users. For that, we can actually “inject” a replacement MetaInfo file when we curate the AppStream metadata. This allows us to download all the .cab files we care about, but are not allowed to redistribute, run appstream-builder on them, then package up just the XML metadata, which can be consumed by pretty much any distribution. Ideally vendors would do this long term, but you need git master versions of basically everything to generate the file, so it's somewhat of a big ask at the moment.

So, we’ve now got a big blob of metadata we can read in GNOME Software, and show to Fedora users. We can show it in the updates panel, just like a normal update, we just can’t do anything with it. We also don’t know if the firmware update we know about is valid for the hardware we’re running on. These are both solved by the new fwupd project that I’ve been hacking on for a few days. This exit-on-idle daemon allows normal users to apply firmware to devices (with appropriate PolicyKit checks, typically the root password) in a safe way. We check the .cab file is valid, is for the right hardware, and then apply the update to be flashed on next reboot.

A lot of people don’t have UEFI hardware that’s capable of using capsule firmware updates, so I’ve also added a ColorHug provider, which predictably also lets you update the firmware on your ColorHug device. It’s a lot lower risk testing all this super-new code with a £20 EEPROM device than your nice shiny expensive prototype hardware from Intel.

At the moment there's not a lot to test; we still need to connect up the low-level fwupdate code with the fwupd provider, but that will be a lot easier when we get all the prerequisites into Fedora. What's left to do now is to write a plugin for GNOME Software so it can communicate with fwupd, and to write the required hooks so we can get the firmware upgrade status as a notification for boot+2. I'm also happy to accept patches for other hardware that supports updates, although the internal API isn't 100% stable yet. This is probably quite interesting for phones and tablets, so I'd be really happy if this gets used in other non-Fedora or non-desktop use cases.

Comments welcome. No screenshots yet, but coming soon.

Tue 2015/Mar/03

  • An inlaid GNOME logo, part 3

    Esta parte en español

    (Parts 1, 2)

    The next step is to make a little rice glue for the template. Thoroughly overcook a little rice, with too much water (I think I used something like 1:8 rice:water), and put it in the blender until it is a soft, even goop.

    Rice glue in the blender

    Spread the glue on the wood surfaces. I used a spatula; one can also use a brush.

    Spreading the glue

    I glued the shield onto the dark wood, and the GNOME foot onto the light wood. I put the toes closer to the sole of the foot so that all the pieces would fit. When they are cut, I'll spread the toes again.

    Shield, glued Foot, glued

March 02, 2015

Luminosity Masking in darktable (Ian Hex)

Photographer Ian Hex was kind enough to be a guest writer over on PIXLS.US with a fantastic tutorial on creating and using Luminosity Masks in the raw processing software darktable.


You can find the new tutorial over on PIXLS.US:



I had previously looked at a couple of amazing shots from Ian over on the PIXLS.US blog, when I introduced him as a guest writer. I thought it might be nice to re-post some of his work here...


The Reverence of St. Peter by Ian Hex (cc-by-sa-nc)


Fire of Whitby Abbey by Ian Hex (cc-by-sa-nc)


Wonder of Variety by Ian Hex (cc-by-sa-nc)

Ian has many more amazing images from Britain of breathtaking beauty over on his site, Lightsweep. Be sure to check them out!

PIXLS.US Update

I have also written an update on the status of the site over on the PIXLS.US blog. TL;DR: It's still coming along! :)

Interview with Igor Leskov

apes800

Would you like to tell us something about yourself?

I like cinema and I like to draw motion pictures. I do not like very much to draw static pictures but I can. I studied traditional painting for eight years in the art school and after that I’ve continued to do it myself for 36 years. I like to learn painting even more than to paint.

Do you paint professionally or as a hobby artist?

I work in the small animation studio as a 2D-3D artist. I draw storyboards and backgrounds in 2D. I make the full 3D film work: modelling, texturing, lighting, rigging and animation. I have very little time to paint personal works, unfortunately.

When and how did you end up trying digital painting for the first time?

It was terrific! I was scanning the black ink drawings on the paper and colouring them in Photoshop in 1996. It was my black-and-white comics for the regional newspaper.

What is it that makes you choose digital over traditional painting?

The choice is simple. No need to buy oil paints and squirrel brushes, it is so lazy. Laziness is the engine of technological progress.

evolve800

How did you first find out about open source communities? What is your opinion about them?

When I found out about Krita I wrote to Boudewijn Rempt and he answered! It was cool!

Have you worked for any FOSS project or contributed in some way?

I have no such experience yet and I have no ability to do that at the present day, but I would like to do it in the future.

How did you find out about Krita?

My favourite artists are Titian and Moebius (Jean Giraud). When the developers dedicated another release of Krita to Moebius, I got interested and took a look at Krita.

What was your first impression?

I liked it.

What do you love about Krita?

Krita is my favourite 2D package and I would like to do something for its development.

What do you think needs improvement in Krita? Also, anything that you really hate?

There is nothing to hate in Krita. I hate myself because I can't convince Boud to do what I want and not what he wants :)

volcano800

In your opinion, what sets Krita apart from the other tools that you use?

I like to write to Mr. Rempt and to Mr. Kazakov and I like how they answer.

If you had to pick one favourite of all your work done in Krita so far, what would it be?

I don’t have any favourites yet.

What is it that you like about it? What brushes did you use in it?

I like Smooth Zoom Tool, Wrap Around Mode and Mirror View. I use the standard brushes: Ink_brush_25, Airbrush_linear, Block_tilt, Basic_circle, Bristles_hairy, Basic_mix_soft. I make animated texture brushes and rotate them during painting manually.

Anything else you’d like to share?

Unfortunately I cannot share all my professional work publicly, simply because it is owned by the customers of Forsight, a small animation studio in Irkutsk. I can share a bit on some sites: http://megayustas.deviantart.com, http://ascomix.narod.ru and http://igor-leskov.livejournal.com.

February 27, 2015

Reddit IAmA today, 2-4pm EST

DeathKillCycleAMA

Proof? Here's your proof.

Head on over here between 2 and 4pm EST today, Friday February 27.

UPDATE: Reddit is not allowing me to post. On my own IAMA. Granted, this IAMA was set up by someone else, who said he had duly submitted my handle (Nina_Paley, created a week ago) to the mods. But it didn’t work. I was on the calendar, but I can’t respond to questions. I am not happy about this but mods aren’t responding, so I give up. You can AMA on Twitter instead.

UPDATE 2: after half an hour the problem was corrected, and I went back and answered questions.

 

 


Updated Windows Builds

 

We prepared new Windows builds today. They contain the following updates:

  • Improved brush presets. The existing presets were not optimized for Windows systems, so Scott Petrovic took a look at all of them and optimized where possible
  • The brush editor now opens in the right place even if the screen is too small
  • You can now disable the on-canvas message that pops up when zooming, rotating etc. This might solve some performance issues for some people
  • We increased the amount of memory available for G’Mic even more. This may mean that on big, beefy Windows machines you can now use G’Mic, but on other, less beefy machines filters might still crash. G’Mic is an awesome tool, but keep in mind that it’s a research project and that its Windows support is experimental.

We’ll move the new builds to the official download location as soon as possible, but in the meantime, here are the downloads:

 

February 26, 2015

Another fake flash story

I recently purchased a 64GB mini SD card to slot into my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough and, at 25€ including shipping, seemed good value.


Hmm, no mention of the SD card size?

The packaging looked rather bare, and with no mention of the card's size. I opened up the packaging, and looked over the card.

Made in Taiwan?

What made it weirder is that it says "made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card with its firmware modified to report 67GB (67GB!). The device (identified through its serial number) is apparently well known in swindler circles.

Buyer beware, do not buy from "carte sd" on Amazon.fr, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account

The Second Plague (Frogs) – rough

Music is from “Frogs” by DJ Zeph featuring Azeem, from the album “Sunset Scavenger.” It’s from 2004, making it the most contemporary song in the film. I almost used Taylor Swift’s 2014 “Bad Blood” for Blood, but I ended up deciding Josh White’s 1933 “Blood Red River Blues” was simply a better song. It wasn’t due to fear of lawsuits; I decided long ago not to allow copyright to determine my artistic choices. If you don’t know my stance on Intellectual Disobedience, you can learn about it here:
youtube.com/watch?v=dfGWQnj6RNA
and here:
blog.ninapaley.com/2013/12/07/make-art-not-law-2/

I’m curious what frogs DJ Zeph and Azeem were originally referring to. Here, of course, the frogs are these:

“3 And the river shall bring forth frogs abundantly, which shall go up and come into thine house, and into thy bedchamber, and upon thy bed, and into the house of thy servants, and upon thy people, and into thine ovens, and into thy kneadingtroughs:

“4 And the frogs shall come up both on thee, and upon thy people, and upon all thy servants.” -Exodus 8, King James Version


February 25, 2015

Krita 2.9

Congratulations to Krita on releasing version 2.9, and on a very positive write-up by Bruce Byfield for Linux Pro Magazine.

I'm amused by his comment comparing Krita to "the cockpit of a fighter jet", and although there are some things I'd like to see done differently* I think Krita is remarkably clear for a program as complex as it is and does a good job of balancing depth and breadth. (* As just one example: I'm never going to use "File, Mail...", so it's just there waiting for me to hit it accidentally, but as far as I know I cannot disable or hide it.)

Unfortunately Byfield writes about Krita "versus" other software. I do not accept that premise. Different software does different things, users can mix and match (and if they can't that is a different and bigger problem). Krita is another weapon in the arsenal. Enjoy Krita 2.9.

Mairi Trois


Readers who've been here for a little while might recognize my friend Mairi, who has modeled for me before. This time I had a brief opportunity for her to sit for me again for a few shots before she jet-setted her way over to Italy for a while.

I was specifically looking to produce the lede image you see above, Mairi Troisième. In particular, I was chasing some chiaroscuro portrait lighting that I had in mind for a while and I was quite happy with the final result!

Of course, I also had a large new light modifier, so bigger shots were fun to play with as well:


Mairi Color (in Black)
ƒ/6.3 1/200s ISO200


Mairi B&W
ƒ/8.0 1/200s ISO200

Those two shots were done using a big Photek Softlighter II [amazon] that I treated myself to late last year. (I believe the speedlight was firing @3/4 power for these shots).

It wasn't all serious, there were some funny moments as well...


My Eyes Are Up Here
ƒ/7.1 1/200s ISO200

Of course, I like to work up close to a subject personally. I think it gives a nicer sense of intimacy to an image.


More Mairi Experiments
ƒ/11.0 1/200s ISO200


Mairi Trois
ƒ/8.0 1/200s ISO200

Culminating at one of my favorites from the shoot, this nice chiaroscuro image up close:


Mairi (Closer)
ƒ/10.0 1/200s ISO200

It's always a pleasure to get a chance to shoot with Mairi. She's a natural in front of the camera, and has these huge expressive eyes that are always a draw.

Later this week, an update on PIXLS.US!

Krita 2.9.0: the Kickstarter Release

The culmination of over eight months of work, Krita 2.9 is the biggest Krita release yet! It’s so big we can’t fit the release announcement on just one page; we’ve had to split it up into separate pages! Last year, 2014, was a huge year for Krita. We published Krita on Steam, we showed off Krita at SIGGRAPH, we got Krita reviewed in ImagineFX, gaining the Artist’s Choice accolade, and we got our first Kickstarter campaign more than funded, too! This meant that more work went into Krita than ever before.

And it shows: here are the results. Dozens of new features, improved functions, fixed bugs, spit and polish all over the place.  The initial port to OSX. Some of the new features took more than two years to implement, and others are a direct result of your support!

Eleven of the twelve Kickstarter-funded features are in, and we’ll be doing the last one for the next 2.9 release, 2.9.1. Krita can now open more than one image in a window, and show an image in more than one view or window. Great perspective drawing assistants. Creative painting in HDR mode is now not just possible, it’s fun. Lots and lots of workflow improvements. So, be prepared… Wolthera and Scott have prepared a big, big overview of all the changes for you:

Overview of New Features and Release Notes
https://krita.org/krita-2-9-the-kickstarter-release/

Without all your support, whether through direct donations, Steam sales, the Kickstarter campaign, the work of testers and bug reporters and documentation, tutorial, translation and code contributors, Krita would never have gotten this far! So, thanks and hugs all around!

Enjoy!

 


Kiki in Spring Time, by Tyson Tan

February 24, 2015

Announcing issue 2.3 of Libre Graphics magazine


We’re very pleased to announce the long-awaited release of Libre Graphics magazine issue 2.3. This issue is guest-edited by Manuel Schmalstieg and addresses a theme we’ve been wanting to tackle for some time: type design. From specimen design to international fonts, constraint-based type to foundry building, this issue shows off the many faces of libre type design.

With the usual cast of columnists, stunning showcases and intriguing features, issue 2.3, The Type Issue, gives an entrée into what’s now and next in F/LOSS fonts.

The Type Issue is the third issue in volume two of Libre Graphics magazine. Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses.

The theory of everything

The life of Stephen Hawking, based on his ex-wife’s biography. The movie is attractive and romantic, yet not exaggerated or over-dramatized. Instead of focusing on Hawking’s life tragedy or listing his contributions to physics, the movie takes a personal angle. The amazing cinematography, clean script and brilliant performances make the movie impressive.

Tips for developing on a web host that offers only FTP

Generally, when I work on a website, I maintain a local copy of all the files. Ideally, I use version control (git, svn or whatever), but failing that, I use rsync over ssh to keep my files in sync with the web server's files.

But I'm helping with a local nonprofit's website, and the cheap web hosting plan they chose doesn't offer ssh, just ftp.

While I have to question the wisdom of an ISP that insists that its customers use insecure ftp rather than a secure encrypted protocol, that's their problem. My problem is how to keep my files in sync with theirs. And the other folks working on the website aren't developers and are very resistant to the idea of using any version control system, so I have to be careful to check for changed files before modifying anything.

In web searches, I haven't found much written about reasonable workflows on an ftp-only web host. I struggled a lot with scripts calling ncftp or lftp. But then I discovered curlftpfs, which makes things much easier.

I put a line in /etc/fstab like this:

curlftpfs#user:password@example.com/ /servername fuse rw,allow_other,noauto,user 0 0

Then all I have to do is type mount /servername and the ftp connection is made automagically. From then on, I can treat it like a (very slow and somewhat limited) filesystem.

For instance, if I want to rsync, I can

rsync -avn --size-only /servername/subdir/ ~/servername/subdir/
for any particular subdirectory I want to check. A few things to know about this:
  1. I have to use --size-only because timestamps aren't reliable. I'm not sure whether this is a problem with the ftp protocol, or whether this particular ISP's server has problems with its dates. I suspect it's a problem inherent in ftp, because if I ls -l, I see things like this:
    -rw-rw---- 1 root root 7651 Feb 23  2015 guide-geo.php
    -rw-rw---- 1 root root 1801 Feb 14 17:16 guide-header.php
    -rw-rw---- 1 root root 8738 Feb 23  2015 guide-table.php
    
    Note that a file modified a week ago shows a modification time, but files modified today show only a day and year, not a time. I'm not sure what to make of this.
  2. Note the -n flag. I don't automatically rsync from the server to my local directory, because if I have any local changes newer than what's on the server they'd be overwritten. So I check the diffs by hand with tkdiff or meld before copying.
  3. It's important to rsync only the specific directories you're working on. You really don't want to see how long it takes to get the full file tree of a web server recursively over ftp.

How do you change and update files? It is possible to edit the files on the curlftpfs filesystem directly. But at least with emacs, it's incredibly slow: emacs likes to check file modification dates whenever you change anything, and that requires an ftp round-trip so it could be ten or twenty seconds before anything you type actually makes it into the file, with even longer delays any time you save.

So instead, I edit my local copy, and when I'm ready to push to the server, I cp filename /servername/path/to/filename.

Of course, I have aliases and shell functions to make all of this easier to type, especially the long pathnames: I can't rely on autocompletion like I usually would, because autocompleting a file or directory name on /servername requires an ftp round-trip to ls the remote directory.
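For illustration, such helpers might look something like this (the function names and paths here are made up, not the exact aliases I use):

```shell
# Hypothetical helpers: ~/servername is the local copy,
# /servername is the curlftpfs mount point.
push() {
    # copy one locally edited file to the same relative path on the server
    cp "$HOME/servername/$1" "/servername/$1"
}
check() {
    # dry-run rsync of one subdirectory, comparing sizes only
    # (ftp timestamps are unreliable)
    rsync -avn --size-only "/servername/$1/" "$HOME/servername/$1/"
}
```

That way a round-trip-heavy path only has to be typed once, in the function definition.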

Oh, and version control? I use a local git repository. Just because the other people working on the website don't want version control is no reason I can't have a record of my own changes.

None of this is as satisfactory as a nice git or svn repository and a good ssh connection. But it's a lot better than struggling with ftp clients every time you need to test a file.

February 23, 2015

Boyhood

Boyhood is stunning. Just like Linklater’s Trilogy, the movie deals with the most sophisticated human emotions through a simple, micro storyline. This time Linklater’s narration follows a boy and his life for 12 years (and it took 12 years in the making). The brilliant making, interesting storytelling and indirect representation of time make it awesome. […]

February 22, 2015

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option, or you prefer not to use them, especially if you’re a Free and Open Source project and/or Linux distribution.

Here are some tips to improve your project’s download page flow. They’re based on confusing things I frequently run into when trying to download a FOSS project and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are “-gui”, “-qt”, “gtk-”, and “py-”; they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why the visitor should care. This sounds like stating the obvious, but this information is often buried under other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website and keep the buzzwords to a minimum.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 32-bit and 64-bit systems, but they do a bad job of it. AMD is no longer the only company making 64-bit processors, and “x86” doesn’t even mention “32-bit”.

Timestamps

Timestamps are a good way to find out if a project is actively maintained; you can’t (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort by time and clearly mark which is the latest version.

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers, or use thousands separators when numbers are large, to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.
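If the page is generated by a script, the rounding doesn’t have to be done by hand; GNU coreutils ships a numfmt tool for exactly this (a sketch, assuming numfmt is available):

```shell
# 209715200 bytes is exactly 200 * 1024^2, so the binary-prefix
# conversion comes out to a round "200M":
numfmt --to=iec 209715200
# SI prefixes (powers of 1000) instead:
numfmt --to=si 209715200
```

numfmt also goes the other way (`--from=iec`), which is handy when a build script needs to compare human-readable sizes.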

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strings of random characters on the page. Educate, or get out of the way.

Keep in mind that search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again, something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we’re presented with a choice of compression formats that hardly differ in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is to not burden people with irrelevant choices and clutter the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time spent picking one will probably outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

February 20, 2015

SVG Working Group Meeting Report — Sydney

The SVG Working Group had a four day face-to-face meeting in Sydney this month. The first day was a joint meeting with the CSS Working Group.

I would like to thank the Inkscape board for funding my travel. This was an expensive trip as I was traveling from Paris and Sydney is an expensive city… but I think it was well worth it as the SVG WG (and CSS WG, where appropriate) approved all of my proposals and worked through all of the issues I raised. Unfortunately, due to the high cost of this trip, I have exhausted the budgeted funding from Inkscape for SVG WG travel this year and will probably miss the two other planned meetings, one in Sweden in June and one in Japan in October. We target the Sweden meeting for moving the SVG 2 specification from Working Draft to Candidate Recommendation so it would be especially good to be there. If anyone has ideas for alternative funding, please let me know.

Highlights:

A summary of selected topics, grouped by day, follows:

Joint CSS and SVG Meeting

Minutes

  • SVG sizing in HTML.

    We spent some time discussing how SVG should be sized in HTML. For corner cases, the browsers disagree on how large an SVG should be displayed. There is going to be a lot of work required to get this nailed down.

  • CSS Filter Effects:

    We spent a lot of time going through and resolving the remaining issues in the CSS Filter Effects specification. (This is basically SVG 1.1 filters repackaged for use by HTML with some extra syntax sugar coating.) We then agreed to publish the specification as a Candidate Recommendation.

  • CSS Blending:

    We discussed publishing the CSS Blending specification as a Recommendation, the final step in creating a specification. I raised the point that most of the tests assumed HTML content. It was requested that more SVG-specific tests be created. (Part of the requirement for Recommendation status is that there be a test suite and that two independently developed renderers pass each test in the suite.)

  • SVG in OpenType, Color Palettes:

    The new OpenType specification allows for multi-colored SVG glyphs. It would be nice to set those colors through CSS. We discussed several methods for doing so and decided on one method. It will be added to the CSS Fonts Level 4 specification.

  • Text Rendering:

    The ‘text-rendering‘ property gives renderers a hint on what speed/precision trade-offs should be made. It was pointed out that the layout of text flowed into a box will change as one zooms in and out on a page in Firefox due to font-hinting, font-size rounding, etc. The Google docs people would like to prevent this. It was decided that the ‘geometricPrecision’ value should require that font-metrics and text-measurement be independent of device resolution and zoom level. (Note: this property is defined in SVG but both Firefox and Chrome support it on HTML content.)

  • Text Properties:

    Text in SVG 2 relies heavily on CSS specifications that are in various states of readiness. I asked the CSS/SVG groups what is the policy for referencing these specs. In particular, SVG 2 needs to reference the CSS Shapes Level 2 specification in order to implement text wrapping inside of SVG shapes. The CSS WG agreed to publish CSS Shapes Level 2 as a Working Draft so we can reference it. We also discussed various technical issues in defining how text wraps around excluded areas and in flowing text into more than one shape.

SVG Day 1

Minutes

  • CamelCase Names

    The SVG WG decided some time ago to avoid new CamelCase names like ‘LinearGradient’ which cause problems with integration in HTML (HTML is case insensitive and CamelCase SVG names must be added by hand to HTML parsers). We went through the list of new CamelCase names in SVG 2 and decided which ones could be changed, weighing arguments for consistency against the desire to not introduce new CamelCase names. It was decided that <meshGradient> should be changed to <mesh>. This was mostly motivated by the ability to use a mesh as a standalone entity (and not only as a paint server). Other changes include: <hatchPath> to <hatchpath>, <solidColor> to <solidcolor>, …

  • Requiring <foreignObject> HTML to be rendered.

    There was a proposal to require any HTML content in a <foreignObject> element to be rendered. I pointed out that not all SVG renderers are HTML renderers (Inkscape as an example). It was decided to have separate conformance classes, one requiring HTML content to be rendered and one not.

  • Requiring Style Sheets Support:

    It was decided to require style sheet support. We discussed what kind of style sheets to require. We decided to require basic style sheet support at the CSS 1 or CSS 2.1 level (that part of the discussion was not minuted).

  • Open Issues:

    We spent considerable time going through the specification chapter by chapter looking at open issues that would block publishing the specification as a Candidate Recommendation. This was a long multi-day process.

SVG Day 2

Minutes

Note: Day 2 and Day 3 minutes are merged.

  • Superpaths:

    Superpaths is the name for the ability to reuse path segment data. This is useful, for example, to define the boundary between two shapes just once, reusing the path segment for both shapes. SVG renderers might be able to exploit this information to provide better anti-aliasing between two shapes knowing they share a common border. The SVG WG endorses this proposal but it probably won’t be ready in time for SVG 2. Instead, it will be developed in a separate Path enhancement module.

  • Line-Join: Miter Clipped:

    It was proposed on the SVG mailing list that there be a new behavior for the miter ‘line-join’ value in regards to the ‘miter-limit’ property. At the moment, if a miter produces a line cap that extends farther than the ‘miter-limit’ value then the miter type is changed to bevel. This causes abrupt jumps when the angle between the joined lines changes such that the miter length crosses over the ‘miter-limit’ value (see demo). A better solution is to clip the line join at the ‘miter-limit’. This is done by some rendering libraries including the one used on Windows. We decided to create a new value for ‘line-join’ with this behavior.

  • Auto-Path Closing:

    The ‘z’ path command closes paths by drawing a line segment to the first point in the path. This is fine if the path is made up of straight lines but becomes problematic if the path is made up of curves. For example, it can cause rendering problems for markers as there will be an extra line segment between the start and end of the path. If the last point is exactly on top of the first point, one can remove this closing line segment but this isn’t always possible, especially if one is using the relative path commands with rounding errors. A more detailed discussion can be found here. We decided to allow a ‘z’ command to fill in missing point data using the first point in the path. For example in: d=”m 100,125 c 0,-75 100,-75 100,0 c 0,75 -100,75 z” the missing point of the second Bezier curve is filled in by the first point in the path.

  • Text on a Shape:

    An Inkscape developer has been working on putting text on a shape by converting shapes to paths while storing the original shape in the <defs> section. It would be much easier if SVG just allowed text on a shape. I proposed that we include this in SVG 2. This is actually quite easy to specify as we have already defined how shapes are converted to paths (needed by markers on shapes and putting dash patterns on shapes). A couple minor points needed to be decided: Do we allow negative path offsets? (Yes) How do we decide which side of a path the text should be put? (A new attribute) The SVG WG approved adding text on a shape to SVG 2.

  • Marker knockouts, mid-markers, etc:

    A number of new marker features still need some work. To facilitate finishing SVG 2 we decided to move them to a separate specification. There is some hesitation to do so, as there is fear that once removed from the main SVG specification they will be forgotten about. This will be a trial of how well separating parts of SVG 2 into separate specifications works. The marker knockout feature, very useful for arrowheads, is one feature moved into the new specification. On day 3 we approved publishing the new Markers Level 1 specification as a First Public Working Draft.

  • Text properties:

    With our new reliance on CSS for text layout, just what CSS properties should SVG 2 support? We don’t want to necessarily list them all in the SVG 2 specification as the list could change as CSS adds new properties. We decided that we should support all paragraph level properties (‘text-indent’, ‘text-justification’, etc.). We’ll ask the CSS working group to create a definition for CSS paragraph properties that we can then reference.

  • Text ‘dx’, ‘dy’, and ‘rotate’ attributes:

    SVG 1.1 has the ‘dx’, ‘dy’, and ‘rotate’ attributes, which allow individual glyphs to be shifted and rotated. While not difficult to support on auto-wrapped text (they would be applied after CSS text layout), we decided that they weren’t really needed. They can still be used on SVG 1.1 style text (which is still part of SVG 2).

SVG Day 3

Minutes

Note: Day 3 minutes are at end of Day 2 minutes.

  • Stroking Enhancements:

    As part of trying to push SVG 2 quickly, we decided to move some of the stroking enhancements that still need work into a separate specification. This includes better dashing algorithms (such as controlling dash position at intersections) and variable width strokes. We agreed to the publication of SVG Strokes as a First Public Working Draft.

  • Smoothing in Mesh Gradients:

    Coons-Patch mesh gradients have one problem: the color profile at the boundary between patches is not always smooth. This leads to visible artifacts which are enhanced by Mach Banding. I’ve discussed this in more detail here. I proposed to the SVG WG that we include the option of auto-smoothing meshes using monotonic-bicubic interpolation. (There is an experimental implementation in Inkscape trunk which I demonstrated to the group.) The SVG WG accepted my proposal.

  • Motion Path:

    SVG has the ability to animate a graphical object along a path. This ability is desired for HTML. The SVG and CSS working groups have produced a new specification, Motion Path Module Level 1, for this purpose. We agreed to publish the specification as a First Public Working Draft.

February 19, 2015

Finding core dump files

Someone on the SVLUG list posted about a shell script he'd written to find core dumps.

It sounded like a simple task -- just locate core | grep -w core, right? I mean, any sensible packager avoids naming files or directories "core" for just that reason, don't they?

But not so: turns out in the modern world, insane numbers of software projects include directories called "core", including projects that are developed primarily on Linux so you'd think they would avoid it ... even the kernel. On my system, locate core | grep -w core | wc -l returned 13641 filenames.

Okay, so clearly that isn't working. I had to agree with the SVLUG poster that using "file" to find out which files were actual core dumps is now the only reliable way to do it. The output looks like this:

$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (375)

The poster was using a shell script, but I was fairly sure it could be done in a single shell pipeline. Let's see: you need to run locate to find any files with "core" in the name.

Then you pipe it through grep to make sure the filename is actually core: since locate gives you a full pathname, like /lib/modules/3.14-2-686-pae/kernel/drivers/edac/edac_core.ko or /lib/modules/3.14-2-686-pae/kernel/drivers/memstick/core, you want lines where only the final component is core -- so core has a slash before it and an end-of-line (in grep that's denoted by a dollar sign, $) after it. So grep '/core$' should do it.

Then take the output of that locate | grep and run file on it, and pipe the output of that file command through grep to find the lines that include the phrase 'core file'.

That gives you lines like

/home/akkana/geology/NorCal/pinnaclesGIS/core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (523)

But those lines are long and all you really need are the filenames; so pass it through sed to get rid of anything to the right of "core" followed by a colon.

Here's the final command:

file `locate core | grep '/core$'` | grep 'core file' | sed 's/core:.*//'

On my system that gave me 11 files, and they were all really core dumps. I deleted them all.
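For what it's worth, a variant of the same pipeline survives spaces in pathnames by keeping everything NUL-separated (this assumes GNU locate, grep and xargs, which all understand NUL-delimited data):

```shell
# NUL-separate the locate output so xargs doesn't split on spaces,
# then keep only real core dumps and strip file's ": ..." suffix.
# Guarded so it's a no-op on systems without a locate database.
if command -v locate >/dev/null 2>&1; then
    locate -0 core | grep -z '/core$' \
        | xargs -0 -r file | grep 'core file' | sed 's/: .*//'
fi
```

The backquoted version works fine as long as none of the matched paths contain whitespace; this one just removes that caveat.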

February 18, 2015

OpenRaster Python Plugin


Early in 2014, version 0.0.2 of the OpenRaster specification added a requirement that each file should include a full size pre-rendered image (mergedimage.png) so that other programs could more easily view OpenRaster files. [Developers: if your program can open a zip file and show a PNG you could add support for viewing OpenRaster files.*]

The GNU Image Manipulation Program includes a Python plugin for OpenRaster support, but it did not yet include mergedimage.png, so I made the changes myself. You do not need to wait for the next release, or for your distribution to eventually package that release: you can benefit from this change immediately. If you are using the GNU Image Manipulation Program version 2.6 you will need to make sure you have support for Python plugins included in your version (if you are using Windows, you won't), and if you are using version 2.8 it should already be included.

It was only a small change, but working with Python and not having to wait for code to compile made it so much easier.

* Although it would probably be best if viewer support was added at the toolkit level, so that many applications could benefit.
[Edit: Updated link]

Wed 2015/Feb/18

  • Integer overflow in librsvg

    Another bug that showed up through fuzz-testing in librsvg was due to an overflow during integer multiplication.

    SVG supports using a convolution matrix for its pixel-based filters. Within the feConvolveMatrix element, one can use the order attribute to specify the size of the convolution matrix. This is usually a small value, like 3 or 5. But what did fuzz-testing generate?

    <feConvolveMatrix order="65536">

    That would be an evil, slow convolution matrix in itself, but in librsvg it caused trouble not because of its size, but because C sucks.

    The code had something like this:

    struct _RsvgFilterPrimitiveConvolveMatrix {
        ...
        double *KernelMatrix;
        ...
        gint orderx, ordery;
        ...
    };
    	      

    The values for the convolution matrix are stored in KernelMatrix, which is just a flattened rectangular array of orderx × ordery elements.

    The code tries to be careful in ensuring that the array with the convolution matrix is of the correct size. In the code below, filter->orderx and filter->ordery have both been set to the dimensions of the array, in this case, both 65536:

    guint listlen = 0;
    
    ...
    
    if ((value = rsvg_property_bag_lookup (atts, "kernelMatrix")))
        filter->KernelMatrix = rsvg_css_parse_number_list (value, &listlen);
    
    ...
    
    if ((gint) listlen != filter->orderx * filter->ordery)
        filter->orderx = filter->ordery = 0;
    	    

    Here, the code first parses the kernelMatrix number list and stores its length in listlen. Later, it compares listlen to orderx * ordery to see if the KernelMatrix array has the correct length. Both filter->orderx and filter->ordery are of type int. Later, the code iterates through the values in filter->KernelMatrix when doing the convolution, and doesn't touch anything if orderx or ordery are zero. Effectively, when those values are zero it means that the array is not to be touched at all — maybe because the SVG is invalid, as in this case.

    But in the bug, orderx and ordery are not being sanitized to zero; they remain at 65536, and the KernelMatrix gets accessed incorrectly as a result. Let's see what happens when you multiply 65536 by itself with ints.

    (gdb) p (int) 65536 * (int) 65536
    $1 = 0
    	    

    Well, of course — the result doesn't fit in 32-bit ints. Let's use 64-bit ints instead:

    (gdb) p (long long) 65536 * 65536
    $2 = 4294967296
    	    

    Which is what one expects.

    What is happening with C? We'll go back to the faulty code and get a disassembly (I recompiled this without optimizations so the code is easy to follow):

    $ objdump --disassemble --source .libs/librsvg_2_la-rsvg-filter.o
    ...
        if ((gint) listlen != filter->orderx * filter->ordery)
        4018:       8b 45 cc                mov    -0x34(%rbp),%eax    
        401b:       89 c2                   mov    %eax,%edx           %edx = listlen
        401d:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4021:       8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx     %ecx = filter->orderx
        4027:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        402b:       8b 80 ac 00 00 00       mov    0xac(%rax),%eax     %eax = filter->ordery
        4031:       0f af c1                imul   %ecx,%eax
        4034:       39 c2                   cmp    %eax,%edx
        4036:       74 22                   je     405a <rsvg_filter_primitive_convolve_matrix_set_atts+0x4c6>
            filter->orderx = filter->ordery = 0;
        4038:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        403c:       c7 80 ac 00 00 00 00    movl   $0x0,0xac(%rax)
        4043:       00 00 00 
        4046:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        404a:       8b 90 ac 00 00 00       mov    0xac(%rax),%edx
        4050:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4054:       89 90 a8 00 00 00       mov    %edx,0xa8(%rax)
    	    

    The highlighted lines do the multiplication of filter->orderx * filter->ordery and the comparison against listlen. The imul operation overflows and gives us 0 as a result, which is of course wrong.

    Let's look at the overflow in slow motion. We'll set a breakpoint in the offending line, disassemble, and look at each instruction.

    Breakpoint 3, rsvg_filter_primitive_convolve_matrix_set_atts (self=0x69dc50, ctx=0x7b80d0, atts=0x83f980) at rsvg-filter.c:1276
    1276        if ((gint) listlen != filter->orderx * filter->ordery)
    (gdb) set disassemble-next-line 1
    (gdb) stepi
    
    ...
    
    (gdb) stepi
    0x00007ffff7baf055      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
    => 0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
       0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x10000  65536
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    ...
    eflags         0x206    [ PF IF ]
    	    

    Okay! So, right there, the code is about to do the multiplication. Both eax and ecx, which are 32-bit registers, have 65536 in them — you can see the 64-bit "big" registers that contain them in rax and rcx.

    Type "stepi" and the multiplication gets executed:

    (gdb) stepi
    0x00007ffff7baf058      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
       0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
    => 0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x0      0
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    eflags         0xa07    [ CF PF IF OF ]
    	    

    Kaboom. The register eax (inside rax) now is 0, which is the (wrong) result of the multiplication. But look at the flags! There is a big fat OF flag, the overflow flag! The processor knows! And it tries to tell us... with a single bit... that the C language doesn't bother to check!

    (The solution in the code, at least for now, is simple enough — use gint64 for the actual operations so the values fit. It should probably set a reasonable limit for the size of convolution matrices, too.)

    So, could anything do better?

    Scheme uses exact arithmetic if possible, so (* MAXLONG MAXLONG) doesn't overflow, but gives you a bignum without you doing anything special. Subsequent code may go into the slow case for bignums when it happens to use that value, but at least you won't get garbage.

    I think Python does the same, at least for integer values (Scheme goes further and uses exact arithmetic for all rational numbers, not just integers).

    C# lets you use checked operations, which will throw an exception if something overflows. This is not the default — the default is "everything gets clipped to the operand size", like in C. I'm not sure if this is a mistake or not. The rest of the language has very nice safety properties, and it lets you "go fast" if you know what you are doing. Operations that overflow by default, with opt-in safety, seem contrary to this philosophy. On the other hand, the language will protect you if you try to do something stupid like accessing an array element with a negative index (... that you got from an overflowed operation), so maybe it's not that bad in the end.

February 17, 2015

Reanimation of MacBook Air

For some months our MacBook Air was broken. Finally a good time to replace it, I thought. On the other hand, the old notebook had been quite useful even six years after purchase. Coding on the road, web surfing, SVG/PDF presentations and so on worked fine on the Core2Duo device from 2008. The first symptoms of breakage were video errors on a DVI-connected WUXGA/HDTV+ sized display. The error looked like unstable frequency handling, with the upper scan lines visually OK and the lower ones wobbling to the right. A black desktop background with a small window was sometimes a workaround. This notebook type uses an Nvidia 9400M on the logic board. Another, non-portable computer of mine with Nvidia 9300 Go on-board graphics runs without such issues, so I saw no reason to worry about the type of graphics chip. Later on, the notebook stopped completely, even without an attached external display. It gave the well-known one beep every 5 seconds during startup. On MacBook Pros/Airs this symptom usually means broken RAM.

The RAM is soldered directly onto the logic board, and replacing it at Apple appeared prohibitively expensive. As I began looking around to sell the broken hardware to hobbyists, I found an article about these early MacBook Airs. This specific one is a 2.1 rev A 2.13 GHz. It mentioned that early devices suffered from lead-free solder, which is somewhat less ductile than traditional solder. As a result, many of these devices developed electrical disconnections in their circuitry over the course of warming and cooling and the related thermal expansion and contraction, and showed the one-beep symptom on startup without booting. An Apple engineer was unofficially cited as suggesting that putting the logic board in an oven at around 100° Celsius for a few minutes might suffice to solve the issue. That sounded worth a try to me. As I love to open up devices to look inside and eventually repair them, taking my time to dismount the logic board myself rather than bringing it to a repair service was fine for me. But be warned: doing so can be difficult for beginners. I placed the board on some wool in the oven at 120 °C, and after 10 minutes of baking plus some more for reassembly, the laptop started to work again. I am not sure whether the soldering issue is really solved or the symptoms will come back; I guess some memory chips on the board were simply reset and stopped reporting that the RAM was broken. So my device works again and will keep us happy for a while – I hope.

February 16, 2015

Old projects, new images

We often make 3D images of old projects for some of our clients, to give their websites a bit of a refresh, and yet we don't do it for ourselves? No sir, no more! Here is a bit of a revamp of two oldies but goodies among our projects, Casa GL and the PACE ONG. ...

KMZ Zorki 4 (Soviet Rangefinder)

The Leica rangefinder

Rangefinder-type cameras predate modern single lens reflex cameras. People still use them; it’s just a different way of shooting. Since they’re no longer a mainstream camera type, most manufacturers stopped making them a long time ago. Except Leica: Leica still makes digital and film rangefinders, and as you might guess, they come at significant cost. Even old Leica film rangefinders easily cost upwards of € 1000. While Leica certainly wasn’t the only brand to manufacture rangefinders throughout photographic history, it was (and still is) certainly the most iconic rangefinder brand.

The Zorki rangefinder

Now the Soviets essentially tried to copy Leica’s cameras, and the result, the Zorki series of cameras, was produced at KMZ. Many different versions exist, and with nearly 2 million cameras produced across more than 15 years, the Zorki-4 was without a doubt its most popular incarnation. Many consider the Zorki-4 to be the one where the Soviets got it (mostly) right.

That said, the Zorki-4 vaguely looks like a Leica M with its single coupled viewfinder/rangefinder window. In most other ways it’s more like a pre-M Leica, with its 39mm LTM screw lens mount. Earlier Zorki-4s have a body finished with vulcanite, which is tough as nails but very difficult to fix or replace if damaged. Later Zorki-4s have a body finished with relatively cheap leatherette, which is much more easily damaged and commonly starting to peel off, but should be relatively easy to make better than new. Most Zorkis come with either a Jupiter-8 50mm f/2.0 lens (a Zeiss Sonnar inspired design) or an Industar-50 50mm f/3.5 (a Zeiss Tessar inspired design). I’d highly recommend getting a Zorki-4 with a Jupiter-8 if you can find one.

Buying a Zorki rangefinder with a Jupiter lens

If you’re looking to buy a Zorki there are a few things to be aware of. Zorkis were produced during the fifties, sixties and seventies in Soviet Russia, often favoring quantity over quality, presumably to be able to meet quotas. The same is likely true for most Soviet optics as well. So they are both old and may not have met the highest quality standards to begin with. When buying a Zorki you need to keep in mind that it might need repairs and a CLA (clean, lube, adjust). My particular Zorki had a dim viewfinder because of dirt both inside and out, the shutter speed dial was completely stuck at 1/60th of a second, and the film take-up spool was missing. I sent my Zorki-4 and Jupiter-8 to Oleg Khalyavin for repairs, shutter curtain replacement and a CLA. Oleg was also able to provide me with a replacement film take-up spool or two. All in all, having work done on your Zorki will easily set you back about € 100 including significant shipping expenses. Keep this in mind before buying. And even if you get your Zorki in a usable state, you’ll probably have to have it serviced at some point. You may very well want to have it serviced sooner rather than later, allowing yourself the benefit of enjoying a newly serviced camera.

Complementary accessories

Zorkis usually come without a lens hood, and the Jupiter-8’s glass elements are said to be only single coated, so a lens hood isn’t exactly a luxury. A suitable aftermarket lens hood isn’t hard to find though.

While my Zorki did come with its original clumsy (and in my case stinky) leather carrying case, it doesn’t come with a regular camera strap. Matin’s Deneb-12LN leather strap can be an affordable but stylish companion to the Zorki. The strap is relatively short, but long enough to wear around your neck or arm. It’s also fairly stiff when brand new, but it will loosen up after a few days of use. The strap does seem to show signs of wear fairly quickly, though.

To some it might seem as if the Zorki has a hot shoe, but it doesn’t: it’s actually a cold shoe, merely intended as an accessory mount. Since it’s all metal, a flash connected via PC sync is likely to be permanently shorted on it. To mount a regular hot shoe flash you will need a hot shoe adapter, both for isolation and for PC sync connectivity.

Choosing a film stock

So now you have a nice Zorki-4 waiting for film to be loaded into it. As of this writing (2015) there is a smörgåsbord of film available. I like shooting black & white, and I often shoot Ilford XP2 Super 400. Ilford’s XP2 is the only B&W film left that’s meant to be processed along with color print film in regular C41 chemicals (so it can be processed by a one-hour photo service, if you’re lucky enough to still have one of those around). Like most color print film, XP2 has a big exposure latitude, remaining usable between ISO 50 and 800, which isn’t a luxury since the Zorki-4 is not equipped with a built-in light meter. While Ilford recommends shooting it at ISO 400, I’d suggest shooting it as if it were ISO 200 film, giving you two stops of both underexposure and overexposure leeway.

Duckies

With regard to color print film, I’ve only shot Kodak Gold 200 color print film thus far with pretty decent results. Kodak New Portra 400 quickly comes to mind as another good option. An inexpensive alternative could possibly be Fuji Superia X-TRA 400, which can be found very cheaply as most store-brand 400 speed color print film.

Shooting with a Zorki rangefinder

Once you have a Zorki, there are still some caveats you need to be aware of… Most importantly, don’t change shutter speeds while the shutter isn’t cocked (cocking the shutter is done by advancing the film); not heeding this warning may damage the camera’s internal mechanisms. Other notable issues of lesser importance are minding the viewfinder’s parallax error (particularly when shooting at short distances) and making sure you load the film straight; I’ve managed to load film at a slight angle a couple of times already.

As I’ve mentioned, the Zorki-4 does not have a built-in light meter, which means the camera won’t help you get the exposure right: you are on your own. You could use a pricey dedicated light meter (or a less pricey smartphone app, which may or may not work well on your particular phone), either of which is fairly cumbersome. XP2’s wide exposure latitude, however, makes an educated-guesswork approach feasible. There’s a rule of thumb called Sunny 16 for making educated guesstimates of exposure in outdoor environments. Sunny 16 states that if you set your shutter speed to the closest reciprocal of your film speed, bright sunny daylight requires an aperture of f/16 to get a decent exposure. Other weather conditions require opening up the aperture according to this table:


Sunny: f/16
Slightly overcast: f/11
Overcast: f/8
Heavy overcast: f/5.6
Open shade: f/4

If you have doubts when classifying shooting conditions, you may want to err on the side of overexposure, as color print film tends to handle overexposure better than underexposure. If you’re shooting slide film you should probably avoid Sunny 16 altogether, as slide film can be very unforgiving when improperly exposed. Additionally, you can manually read a film canister’s DX CAS code to see what a film’s minimum exposure tolerance is.

Quick example: when shooting XP2 on an overcast day, assuming an alternate base ISO of 200 (as suggested earlier), the shutter speed should be set to 1/250th of a second and the aperture to f/8, giving a fairly large depth of field. Now if we want to reduce the depth of field we can trade +2 aperture stops for -2 stops of shutter speed, ending up shooting at 1/1000th of a second at f/4.

Having film processed

After shooting a roll of XP2 (or any roll of color print film) you need to take it to a local photo shop, chemist or supermarket to have it processed, scanned and printed. Usually you’ll be able to have your film processed in C41 chemicals, scanned to CD and printed as a set of small prints for about € 15 or so. Keep in mind that most shops, left to their own devices, will cut your film roll into strips of 4, 5 or 6 negatives, depending on the type of protective sleeves they use. Some shops might not offer scanning services without ordering prints, since scanning may be considered a byproduct of the printmaking process. The resulting JPEG scans are usually about 2 megapixels (1800×1200), or sometimes slightly less (1536×1024). A particular note on XP2: since it’s processed as if it were color print film, it’s usually also scanned as if it were color print film, so the resulting should-be-monochrome scans (and prints, for that matter) can often have a slight color cast. This color cast varies; my local lab usually does a fairly decent job, with a subtle cast that isn’t too unpleasant, but I’ve heard about nastier, heavier color casts as well. Regardless, keep in mind that you might need to convert the scans to proper monochrome manually, which can easily be done with any photo editing software in a heartbeat. The same goes for rotating the images: aside from the usual 90 degree turns, occasionally I get my images scanned upside down, needing 180 or 270 degree turns, and you’ll likely have to do that yourself as well.

Post-processing the scans

Generally speaking, I like to preprocess my scanned images using some scripted command line tools before importing them into an image management application such as Shotwell.

First I remove all useless data from the source JPEG and, particularly for black and white film like XP2, remove the JPEG’s chroma channels to losslessly remove any color cast (avoiding generational loss):

$ jpegtran -copy none -grayscale -optimize -perfect ORIGINAL.JPG > OUTPUT.JPG

Using the clean image we previously created as a base, we can then add basic EXIF metadata:

$ exiv2 \
   -M"set Exif.Image.Artist John Doe" \
   -M"set Exif.Image.Make KMZ" \
   -M"set Exif.Image.Model Zorki-4" \
   -M"set Exif.Image.ImageNumber \
      $(echo ORIGINAL.JPG | tr -cd '0-9' | sed 's#^0*##g')" \
   -M"set Exif.Image.Orientation 1" \
   -M"set Exif.Image.XResolution 300/1" \
   -M"set Exif.Image.YResolution 300/1" \
   -M"set Exif.Image.ResolutionUnit 2" \
   -M"set Exif.Photo.DateTimeDigitized \
      $(stat --format="%y" ORIGINAL.JPG | awk -F '.' '{print $1}' | tr '-' ':')" \
   -M"set Exif.Photo.UserComment Ilford XP2 Super" \
   -M"set Exif.Photo.ExposureProgram 1" \
   -M"set Exif.Photo.ISOSpeedRatings 400" \
   -M"set Exif.Photo.FocalLength 50/1" \
   -M"set Exif.Image.MaxApertureValue 20/10" \
   -M"set Exif.Photo.LensMake KMZ" \
   -M"set Exif.Photo.LensModel Jupiter-8" \
   -M"set Exif.Photo.FileSource 1" \
   -M"set Exif.Photo.ColorSpace 1" \
   OUTPUT.JPG

As I previously mentioned, I tend to get my scans back upside down, which is why I usually set the Orientation tag to 3 (180 degree turn). Other useful values are 1 (do nothing), 6 (rotate 90 degrees clockwise) and 8 (rotate 270 degrees clockwise).

Keeping track

When you’re going to shoot a lot of film, it can become a bit of a challenge to keep track of the various rolls at any arbitrary point in your workflow. FilmTrackr has you covered.

Manual

You can find a scanned manual for the Zorki-4 rangefinder camera on Mike Butkus’ website.

Moar

If you want to read more about film photography you may want to consider adding Film Is Not Dead and Hot Shots to your bookshelf. You may also want to browse through istillshootfilm.org which seems to be a pretty good resource as well. And for your viewing pleasure, the [FRAMED] Film Show on YouTube.

Interview with Chris Jones

exogenesis by Chris Jones
Would you like to tell us something about yourself?

I live in Melbourne, Australia, and have worked as an illustrator, concept artist, matte painter and 3D artist on a variety of print, game, film and TV projects. I’m probably best known for my short animated film The Passenger, and my on-going 3D human project.

Do you paint professionally or as a hobby artist?

Mostly professionally, but I’m hoping to work up some more personal pieces soon.

When and how did you end up trying digital painting for the first time?

I dabbled with Logo and Mouse Paint when I was a kid in the 1980s, but it wasn’t until 1996 that I was able to properly migrate my drawing and painting skills to the digital domain when I bought a Wacom tablet and Painter 4. I’ve barely touched a pencil or paintbrush ever since.

What is it that makes you choose digital over traditional painting?

Undo, redo and being able to revert to earlier versions; the freedom to experiment as much as I want without wasting expensive art materials; being able to use and create tools that don’t exist in reality; not needing any physical storage space (other than a hard drive); being able to back-up the originals without any loss of quality; no waiting for paint to dry, dealing with a clogged airbrush, wrestling with Frisket and getting paint fumes up my nose … need I go on? :)

I must admit though, I do miss perusing all the nice tools and materials in the art shop.

How did you first find out about open source communities? What is your opinion about them?

I don’t remember how I first found out about them, but it must have been sometime soon after I started using the internet in 1996. I was a bit puzzled as to what would possess people to give their commercially viable software away for free, with no strings attached.

Now that I’m using such software, I find that I have a more direct influence on the shape and direction of the tools I use, which provides me with incentive to contribute, and probably helps explain some of the driving force behind these communities.

Have you worked for any FOSS project or contributed in some way?

Krita is the only one I’ve been involved with in any way so far, other than Blender, which I’ve only skimmed the surface of.

How did you find out about Krita?

I first came across it a few years ago when I was looking for a replacement for my aging copy of Painter 8, but at the time it was either too uncooked or simply unavailable on Windows. In early 2013 I saw it mentioned in a forum discussion about Photoshop alternatives, so I thought I’d take another look.

What was your first impression?

It was still early on in its Windows development at the time so it was full of bugs and highly unstable, but despite this I was pleasantly surprised to find that feature-wise it compared favourably with Painter 8 (which itself was pretty buggy anyway), and even gave Painter 12 a run for its money. It was like a version of Painter with all the bloat stripped out, and some long-standing fundamental issues and omissions finally addressed.

What do you love about Krita?

The pop-up menu; flexible UI; transform and assistant tools; a plethora of colour blending modes; mirror modes; being able to flip the image instantaneously using the “m” key; being able to convert the currently selected brush into an eraser using the “e” key; undo history; layers that behave predictably; responsive developers who engage frequently and openly with users; the rapid pace of development; and of course an ongoing stream of free upgrades!

What do you think needs improvement in Krita? Also, anything that you really hate?

Nothing I’m particularly hateful about – mainly I’d like to see speed improvements, particularly when using large brushes and large images. I think I heard some murmurings about progress in that area though. Changing the layer order can also be quite sluggish, amongst other things. Stability is getting pretty good now, although there’s still room for improvement.

I’ve accumulated a list of niggles and requests that I’ll get around to verifying/reporting one of these days…

In your opinion, what sets Krita apart from the other tools that you use?

Most apps feel like they’re designed for someone else, and I have to try and adapt to their workflow. Krita feels more like it was built with me in mind, and whenever I feel something should behave differently, someone is usually already on the case before I even make mention of it. As far as 2D software goes, Krita fits my needs better than any of the alternatives.

Anything else you’d like to share?

Krita has infiltrated my 3D work as well (which can be found at www.chrisj.com.au), and it’s proven to be well suited to editing textures, as well as painting them from scratch. I look forward to using it more extensively in this area.

February 14, 2015

The Sangre de Cristos wish you a Happy Valentine's Day

[Snow hearts on the Sangre de Cristo mountains]

The snow is melting fast in the lovely sunny weather we've been having; but there's still enough snow on the Sangre de Cristos to see the dual snow hearts on the slopes of Thompson Peak above Santa Fe, wishing everyone for miles around a happy Valentine's Day.

Dave and I are celebrating for a different reason: yesterday was our 1-year anniversary of moving to New Mexico. No regrets yet! Even after a tough dirty work session clearing dead sage from the yard.

So Happy Valentine's Day, everyone! Even if you don't put much stock in commercial Hallmark holidays. As I heard someone say yesterday, "Valentine's day is coming up, and you know what that means. That's right: absolutely nothing!"

But never mind what you may think about the holiday -- you just go ahead and have a happy day anyway, y'hear? Look at whatever pretty scenery you have near you; and be sure to enjoy some good chocolate.



Dear lazyweb,

I am now using a very silent MacBook Air (with external monitor, keyboard and trackpad) as my desktop, and connect to remote boxes for (CPU-intensive) software builds. (One of those "remote" boxes is actually a relatively low-noise (i.e. big fan) Dell workstation under my desk.)

Will the noise become unbearable if I get a 27in iMac (SSD-only) and fully load its CPU a significant part of the day right in front of my eyes and ears?

February 13, 2015

Last Beta release for Krita 2.9

We’re getting so close to the release now! (Check the count-down counter on krita.org!) Sure, there are still a bunch of bugs to fix, but we’re down to very nearly no release blockers now. And we fixed an awful lot of bugs since the last beta release, too!

The 2.9.0 release is scheduled for February 26th, with monthly bug fix releases planned until we release Krita 3.1.

Here are the fixes:

  • New splash screen by Tyson Tan!
  • Fix noisy complaints from libpng about nothing
  • Hide the next/previous blending mode, snap-to-grid and reload-file actions because they don’t work
  • Fix the shortcuts for setting brush opacity
  • Fix inverted softness (bug 342747)
  • Fix ghost pixels on group layers with no children (bug 331554)
  • Fix opacity setting for pattern fill layers
  • G’Mic: Add progress reporting for small previews
  • G’Mic: Cancel now stops execution of slow filters
  • G’Mic: Don’t crash when closing the G’Mic dialog after doing nothing
  • G’Mic: fix URL for updates
  • Fix issues with Genius Tablets (bug 342641)
  • Fix painting on selection masks
  • Make the palettes docker follow the general background color setting
  • Add a temporary dialog to fix issues when the desktop resolution and the wintab resolution don’t match up
  • Fix crash when using the color transfer filter (bug 342287)
  • Fix KToolLine to handle end and cancel requests (bug 336959)
  • Fix a bunch of menu options to only be active at the right moment
  • Add unit of measurement to offset image dialog
  • Fix initialization of the crop tool (bug 342842)
  • Improve default values for the crop tool (bug 242844)
  • Fix crash in shaped gradient with shaped smaller than 3px wide (bug 342942)
  • Show open/save buttons in the ruler assistant tool on Windows (bug 342348)
  • Add auto-leveling to the adjust/levels filter (Aleksander Demko’s first patch!)
  • Don’t push the Copy action on the undo stack (bug 343328)
  • Remember the constrain proportions settings in the canvas size dialog (bug 343282)
  • Fix spacing of rotating brushes (bug 329026)
  • Fix crash when selecting the texture option for the pixel brush engine (bug 342749)
  • Make the “on hover” layer thumbnail configurable (bug 342168)
  • Fix an issue where you’d have to press cancel multiple times to close Krita (bug 343070)
  • Notify the user when copying an empty selection (bug 343092)
  • Expand the color picker tool to be able to use a radius up to 900 pixels (bug 337406)
  • Don’t crash if Krita’s settings has an active preset that no longer exists (bug 340229)
  • Don’t crash when closing Krita while the reference image docker is still loading thumbnails (bug 342896)
  • Switch to an appropriate tool when switching between pixel and vector layers (bug 335092)
  • Support indirect painting mode for masks (bug 318882)
  • Don’t crash when merging selected layers (bug 343540)
  • Bring back the undo docker’s preview thumbnails (bug 277884)
  • Don’t crash when undoing points in polyline stroke or selection (bug 342921)
  • Make shift-z undo points in all poly tools (bug 342921)
  • Make the shortcut for undoing poly tool points configurable (bug 342919)
  • Disable the arrow keys for panning the canvas by default (bug 342023)
  • Update the minimum zoom level after scaling the image (bug 342709)
  • Rearrange the settings menu to be more logical (bug 342068)
  • Only lock the tools if the layer is invisible, but allow moving and deleting of invisible layers (bug 337912)
  • Fix issues when working with the color selectors in HDR mode (bug 343531)
  • Don’t save the crop tool’s force ratio setting (bug 343287)
  • Fix cyclic updates of the currently selected color (bug 343531)
  • G’Mic: Don’t crash when enabling the small preview in interactive colorize (bug 343616)
  • Fix a crash on loading an image (bug 340752)
  • G’Mic: increase the stack size to ridiculous proportions on Windows so the parser doesn’t crash
  • G’Mic: start supporting interactive colorize on Windows (still not done, needs adding support for pthreads)
  • Follow the settings for visibility of the scrollbars (bug 342217)
  • Fix shaped gradients for selections with holes (bug 343187)
  • Fix floodfill for 16 bits integer/channel RGB images (bug 343365)
  • Fix the zoom level of the scratchpad
  • Fix a big slowdown in the layer properties dialog with big layers and big images (bug 343685)
  • Fix file layer position resetting
  • Add the recent documents to the list in the new images dialog (bug 340949)
  • Fix support for Wacom Airbrush devices. Patch by Arturg. (bug 343545)
  • Make the text brush load and display values from the current brush
  • Fix the text brush when rotation is set to drawing angle (bug 330185)
  • Fix recognizing the bamboo stylus (bug 343545)
  • Don’t deadlock when loading a fill layer (bug 343734)
  • Don’t assert when checking the texture option (bug 343837)
  • Make it possible again to select an image in the color transfer filter on Windows (bug 343706)
  • G’Mic: fix a crash when browsing through the filters
  • Update the layer thumbnails in the layerbox after every stroke, instead of when hovering over the layerbox (bug 343699)
  • Fix a crash when applying changes with the style manager.
  • Fix crash when using drop caps in the multiline text object (bug 342185)
  • Remove inline objects from manager when Delete command is executed. (bug 303492)
  • Make text flow around shapes with shadow. (bug 335784)
  • Fix wrongly placed SVG files. (bug 322377)
  • File dialogs: Fix all-supported formats on Gnome and Windows.
  • File dialogs: Restore the All Formats option on Gnome and Windows.

Downloads

There are still 157 open bugs at the moment, but quite a lot of them are rather minor. We fixed nearly 200 bugs. Lots and lots of thanks to all the beta testers who have been sending in report after report: we got over 100 new reports, too!

For Linux users, Krita Lime has been updated. Remember that launchpad is very strict about the versions of Ubuntu it supports. So the update is only available for 14.04 and up.

OpenSUSE users can use the new OBS repositories created by Leinir:

Windows users can choose between an installer and the zip file. You can unzip the zip file anywhere and start Krita by executing bin/krita.exe. The Surface Pro 3 tablet offset issue has been fixed! We only have 64 bits Windows builds at the moment, we’re working on fixing a problem with the 32 bits build.

Right now, we’re trying to fix a bug in the OSX builds where icons aren’t loaded, so we don’t have OSX builds for Beta 3 yet.

16F1454 RA4 input only

To save someone else a wasted evening: RA4 on the Microchip PIC 16F1454 is an input-only pin, not I/O as stated in the datasheet. In other news, I’ve prototyped the ColorHug ALS on a breadboard (which, it turns out, was a good idea!) and the PCB is now even smaller. 12x19mm is about as small as I can go…

OpenRaster and OpenDocument: Metadata

OpenRaster is a file format for the exchange of layered images, and is loosely based on the OpenDocument standard. I previously wrote about how a little extra XML can make a file that is both OpenRaster and OpenDocument compatible. The OpenRaster specification is small and relatively simple, but it does not do everything, so what happens if a developer wants to do something not covered by the standard? What if you want to include metadata?

How about doing it the same way as OpenDocument? It does not have to be complicated. OpenDocument already cleverly reused the existing Dublin Core (dc) standard for metadata, and includes a file called meta.xml in the zip container. That is a good idea worth borrowing; a simplified example file follows:

Sample OpenDocument Metadata[Pastebin]

(if you can't see the XML here directly, see the link to Pastebin instead.)

I extended the OpenRaster code in Pinta to support metadata in this way. That is the easy part; it gets more complicated if you want to do more than import and export within the same program. As before, the resulting file can be renamed from .ora to .odg and opened using OpenOffice*, allowing you to view the image and the metadata too. The code, in Pinta's OraFormat.cs, is freely available on GitHub under the same license (MIT X11) as Pinta; the relevant sections are "ReadMeta" and "GetMeta". A Properties dialog and other code were also added, and I've edited a screenshot of Pinta to show both the menu and the dialog at the same time:

[* OpenOffice 3 is quite generous, and opens the file without complaint. LibreOffice 4 is far less forgiving and gives an error unless I specifically choose "ODF Drawing (.odg)" as the file type in the Open dialog]
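The meta.xml approach described above can be sketched in Python: the snippet below appends a minimal Dublin Core meta.xml to an existing OpenRaster (zip) file. The element names follow OpenDocument conventions, but the field values and the `add_meta` helper are illustrative, not Pinta's actual code.

```python
import zipfile

# Minimal Dublin Core metadata in the style of OpenDocument's meta.xml.
# Element names follow ODF conventions; the values are placeholders.
META_XML = """<?xml version="1.0" encoding="UTF-8"?>
<office:document-meta
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <office:meta>
    <dc:title>Example image</dc:title>
    <dc:creator>Jane Doe</dc:creator>
    <dc:date>2015-02-12T00:00:00</dc:date>
  </office:meta>
</office:document-meta>
"""

def add_meta(ora_path):
    """Append a meta.xml entry to an existing OpenRaster (zip) file."""
    with zipfile.ZipFile(ora_path, "a") as z:
        z.writestr("meta.xml", META_XML)
```

Since OpenRaster files are plain zip containers, adding the entry is all it takes; a reader that ignores unknown entries (as the spec allows) is unaffected.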

February 12, 2015

On Lens Detection and Correction

darktable (like some other projects, for example ufraw) doesn't do any real lens detection or correction by itself. We depend on two libraries, which in most cases are provided by the Linux distribution you're using.

Lens Detection

Many image files contain metadata about how the image was created. For digital camera images a standard called Exif is used; it allows a camera to record many details about how an image was taken. However, Exif is not a single well-defined specification: there is a common part that is well defined, and then there are the so-called MakerNotes, the parts of Exif that each vendor gets to do with whatever they like. The MakerNotes are typically completely undocumented and have to be reverse-engineered to be handled at all. For most vendors this reverse engineering has been done to some degree, and at least parts of the MakerNotes can be deciphered most of the time.

It's in these undocumented MakerNotes that camera vendors tend to encode a lens id, which typically is just a number for which the vendors provide no reference. Without a reference lookup table such a number is quite useless, so open source tools end up having to crowd-source and collate lens id – lens name pairs to be able to identify lenses. darktable, like many others, uses the Exiv2 library for this.

You can use Exiv2's command line tool to search for lens related tags in your own raw files like so:

# exiv2 -pt IMG_1234.CR2 | grep -ai lens

And if you have a lens that's already been reported to the Exiv2 project, you'll see something along these lines:

Exif.CanonCs.LensType       Short       1  Canon EF-S 24mm f/2.8 STM

However if you have a fairly new lens, chances are it hasn't been reported yet, and you'll get something like this:

Exif.CanonCs.LensType       Short       1  (4154)
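If you want to script this check, the tag lines above are easy to parse. A small sketch (the sample lines are hard-coded copies of the exiv2 output shown above; the helper names are mine, not part of any tool):

```python
def parse_tag_line(line):
    """Split an `exiv2 -pt` output line into (tag, type, count, value).

    The first three columns are single tokens; the value may contain
    spaces, so split at most three times.
    """
    tag, typ, count, value = line.split(None, 3)
    return tag, typ, int(count), value.strip()

def is_unrecognized(value):
    """exiv2 prints the raw numeric id in parentheses when it has no
    name on record for the lens."""
    return value.startswith("(") and value.endswith(")")

# Hard-coded copies of the sample output shown above:
known = "Exif.CanonCs.LensType       Short       1  Canon EF-S 24mm f/2.8 STM"
unknown = "Exif.CanonCs.LensType       Short       1  (4154)"
```

This makes it easy to scan a whole directory of raw files and list only the lenses exiv2 could not name.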

There are also cases where Exiv2 might report the wrong lens. This happens because the vendors don't preallocate numbers for third-party lens manufacturers, who therefore end up occupying arbitrary lens ids that can conflict with new first-party lenses later on. Exiv2 tries to resolve such conflicts on a best-effort basis using heuristics, such as matching min/max focal lengths and min/max aperture.
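The kind of heuristic described can be illustrated with a short sketch. The table data, ids, and function below are invented for illustration; they are not Exiv2's actual tables or code:

```python
def resolve_lens_id(lens_id, candidates, focal, aperture):
    """Pick among lenses that share one vendor id by matching the shot's
    focal length and aperture against each lens's stated range.

    `candidates` maps a lens id to a list of
    (name, min_focal, max_focal, max_aperture) tuples.
    Returns the name only when exactly one candidate fits.
    """
    matches = [
        name for (name, fmin, fmax, amax) in candidates.get(lens_id, [])
        if fmin <= focal <= fmax and aperture >= amax
    ]
    return matches[0] if len(matches) == 1 else None

# Two hypothetical lenses that ended up with the same vendor id:
TABLE = {
    150: [
        ("Sigma 18-200mm f/3.5-6.3", 18, 200, 3.5),
        ("Canon EF 50mm f/1.8", 50, 50, 1.8),
    ],
}
```

A shot at 100mm can only have come from the zoom, and a shot at f/1.8 only from the faster prime; a 50mm shot at f/5.6 fits both, so the heuristic has to give up, which mirrors why Exiv2 can still guess wrong.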

The Exiv2 project, like many open source projects, isn't particularly overstaffed, so they tend to release fairly infrequently. As a result, what's available in their development tree and what's shipped in released distributions can diverge significantly. Practically, this means that any lens released in the last 6 – 12 months is unlikely to be detected properly.

So if you have a lens that is not being properly reported, the best course of action is to check Exiv2's development sources, to see if it's already known (the line references may drift over time, so you might need to scroll around a bit):

If you can find your lens in one of the above source files, it means even though your current version of Exiv2 might not recognize the lens, the developers are already aware of it, and a future release of Exiv2 will likely be able to recognize that lens. We highly recommend against trying to update your Exiv2 library manually.

If you still can't find your lens, please file a feature request with the Exiv2 project here (and yes, you'll need to create an account):

Please include the following information:

  • Full output of:
    exiv2 -pt FILENAME | grep -ai lens
  • The proper full name of the lens (be mindful of capitalization)
  • Preferably include a link to the lens' product page on the manufacturer's website
  • Attach a sample low resolution JPEG (unmodified, most cameras allow you to shoot lower resolution JPEGs)

Lens Correction

Presuming Exiv2 (and thus darktable) detects your lens properly, it passes the lens name off to the Lensfun library, which searches its own lens correction database for that particular name. And if it finds a match, it applies that correction data to your image.

But for this to work, the name Exiv2 supplies and the name in the Lensfun database need to be a fairly close match. As far as I'm aware Lensfun ignores punctuation, but other than that you need a proper match for the correction to be automatic.
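A punctuation-insensitive comparison of that sort might look like the sketch below. This is purely illustrative; Lensfun's real matching rules may differ, and the function names are mine:

```python
import string

def normalize(name):
    """Lower-case a lens name and drop punctuation, so that
    'Canon EF-S 24mm f/2.8 STM' and 'canon ef-s 24mm F/2.8 stm'
    compare equal. Extra whitespace is collapsed as well."""
    drop = str.maketrans("", "", string.punctuation)
    return " ".join(name.lower().translate(drop).split())

def find_correction(exif_name, database):
    """Return the correction entry whose normalized name matches the
    name reported by Exif, or None when there is no close match."""
    wanted = normalize(exif_name)
    for db_name, correction in database.items():
        if normalize(db_name) == wanted:
            return correction
    return None
```

The point of the normalization is exactly what the paragraph above describes: small cosmetic differences in the reported name must not break the automatic lookup, but anything beyond punctuation and case still does.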

Also keep in mind that the lens correction data isn't provided by the vendors, so this is yet again something that needs to be crowd-sourced. However, in stark contrast to the Exiv2 situation, the process of generating decent correction data is quite a bit more involved.

So if you're missing your particular lens in your particular version of Lensfun, you can check here to see if your lens might already have correction data in Lensfun's development tree:

As you might notice, not all types of correction are available for all lenses.

If you want to generate correction data for your own lenses and submit it to the Lensfun project, please have a look here:


February 11, 2015

Tue 2015/Feb/10

  • I'm taking over the maintainership of librsvg.

    I've been fixing a few crashers, and the code is interesting, so I'll blog a bit about the bugs. It's rather peculiar how people's mindset has changed from the time when "feeding an invalid file leads to a crash" was just considered garbage-in, garbage-out — to the present time, when a crasher on invalid data is "OMG a government agency surely is going to write malicious vector images to pwn you every way it can".

    Atte Kettunen of the Oulu University Secure Programming Group has been doing fuzz-testing on librsvg, and this is producing very interesting results. Check out their fuzz-testing tools! My next blog posts will be about the bugs in librsvg and why C is a shitty language for userland code.

  • librsvg bug #703102 - out of bounds memory access

    In librsvg bug 703102 we get an SVG that starts with

        <svg version="1.1" baseProfile="basic" id="svg-root"
             width="100%" height="100%" viewBox="0 170141183460469231731687303715884105727 480 360"
    

    The bounding box is obviously invalid, and the code crashed in this function:

    static void
    rsvg_alpha_blt (cairo_surface_t *src,
                    gint srcx,
                    gint srcy,
                    gint srcwidth,
                    gint srcheight,
                    cairo_surface_t *dst,
                    gint dstx,
                    gint dsty)
    

    This is a fairly typical function for "take a rectangle from this cairo_surface_t and composite it over this other cairo_surface_t".

    The function used to start with some code to clip the coordinates to the actual surfaces... but it was broken. Eventually the loops that iterate through the pixels in the destination region would go past the bounds of the allocated buffers.

    I replaced the broken clipping code with something similar to our venerable gdk_rectangle_intersect(), and the bug went away.
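The shape of that clipping logic, in the spirit of gdk_rectangle_intersect(), can be sketched as follows. This is an illustrative Python rendering of the idea, not librsvg's actual C code:

```python
def intersect(ax, ay, aw, ah, bx, by, bw, bh):
    """Intersection of two rectangles given as (x, y, w, h);
    returns None when the intersection is empty."""
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def clip_blit(srcx, srcy, w, h, dstx, dsty, src_w, src_h, dst_w, dst_h):
    """Clip a blit rectangle against both surfaces before copying.

    Returns adjusted (srcx, srcy, w, h, dstx, dsty), or None when
    nothing remains to copy.
    """
    # Clip against the source surface, shifting the destination origin
    # by the same amount the source origin moved.
    r = intersect(srcx, srcy, w, h, 0, 0, src_w, src_h)
    if r is None:
        return None
    dstx += r[0] - srcx
    dsty += r[1] - srcy
    srcx, srcy, w, h = r
    # Now clip against the destination surface, shifting the source back.
    r = intersect(dstx, dsty, w, h, 0, 0, dst_w, dst_h)
    if r is None:
        return None
    srcx += r[0] - dstx
    srcy += r[1] - dsty
    dstx, dsty, w, h = r
    return srcx, srcy, w, h, dstx, dsty
```

Feeding it absurd coordinates like the viewBox value above simply yields None, so the pixel-copying loops never run at all.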

    C sucks

    The ubiquitous pattern to access rectangular image buffers is, "give me a pointer to the start of the pixels", then "give me the rowstride, i.e. the length of each line in bytes".

    The code has to be careful to not go past the bounds of buffers. Things get complicated when you have two images with different dimensions, or different rowstrides — lots of variables to keep track of.

    A civilized language would let you access the byte arrays for the pixel data, but it would not let you access past their bounds. It would halt the program if you do buffer[-5] or buffer[BIGNUM].

    C doesn't give a fuck. C gives you a buffer overrun:

    Buffer overrun at Montparnasse

February 10, 2015

FreeCAD, Architecture and future

It's been quite some time since I last wrote here about FreeCAD and the development of the Architecture module. This doesn't mean development has stopped; rather, I have temporarily been busy with another project, the Path module. There has also been my FOSDEM talk, and finally we're on the verge of releasing version 0.15...

Making flashblock work again; and why HTML5 video doesn't work in Firefox

Back in December, I wrote about Problems with Firefox 35's new deprecation of flash, and a partial solution for Debian. That worked to install a newer version of the flash plug-in on my Debian Linux machine; but it didn't fix the problem that the flashblock program no longer works properly on Firefox 35, so that clicking on the flashblock button does nothing at all.

A friend suggested that I try Firefox's built-in flash blocking. Go to Tools->Add-ons and click on Plug-ins if that isn't the default tab. Under Shockwave flash, choose Ask to Activate.

Unfortunately, the result of that is a link to click, which pops up a dialog that requires clicking a button to dismiss it -- a pointless and annoying extra step. And there's no way to enable flash for just the current page; once you've enabled it for a domain (like youtube), any flash from that domain will auto-play for the remainder of the Firefox session. Not what I wanted.

So I looked into whether there was a way to re-enable flashblock. It turns out I'm not the only one to have noticed the problem with it: the FlashBlock reviews page is full of recent entries from people saying it no longer works. Alas, flashblock seems to be orphaned; there's no comment about any of this on the main flashblock page, and the links on that page for discussions or bug reports go to a nonexistent mailing list.

But fortunately there's a comment partway down the reviews page from user "c627627" giving a fix.

Edit your chrome/userContent.css in your Firefox profile. If you're not sure where your profile lives, Mozilla has a poorly written page on it here, Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it will probably lead you to your profile.

Inside yourprofile/chrome/userContent.css (create it if it doesn't already exist), add these lines:

@namespace url(http://www.w3.org/1999/xhtml);
@-moz-document domain("youtube.com"){
#theater-background { display:none !important;}}

Now restart Firefox, and flashblock should work again, at least on YouTube. Hurray!

Wait, flash? What about HTML5 on YouTube?

Yes, I read that too. All the tech press sites were reporting week before last that YouTube was now streaming HTML5 by default.

Alas, not with Firefox. It works with most other browsers, but Firefox's HTML5 video support is too broken. And I guess it's a measure of Firefox's increasing irrelevance that almost none of the reportage two weeks ago even bothered to try it on Firefox before reporting that it worked everywhere.

It turns out that using HTML5 video on YouTube depends on something called Media Source Extensions (MSE). You can check your MSE support by going to YouTube's HTML5 info page. In Firefox 35, it's off by default.

You can enable MSE in Firefox by flipping the media.mediasource preference, but that's not enough; YouTube also wants "MSE & H.264". Apparently if you care enough, you can set a new preference to enable MSE & H.264 support on YouTube even though it's not supported by Firefox and is considered too buggy to enable.

If you search the web, you'll find lots of people talking about how HTML5 with MSE was enabled by default for Firefox 32 on YouTube. But here we are at Firefox 35 and it requires jumping through hoops. What gives?

Well, it looks like they enabled it briefly, discovered it was too buggy and turned it back off again. I found bug 1129039: Disable MSE for Firefox 36, which seems an odd title considering that it's off in Firefox 35, but there you go.

Here is the dependency tree for the MSE tracking bug, 778617. Its dependency graph is even scarier. After taking a look at that, I switched my media.mediasource preference back off again. With a dependency tree like that, and nothing anywhere summarizing the current state of affairs ... I think I can live with flash. Especially now that I know how to get flashblock working.

February 09, 2015

Graphical profiling under Linux

The Oyranos library became quite a bit slower during the last development cycle, for 0.9.6. That is pretty normal: new features were added and more ideas waited for implementation, leaving little room for all the details I wanted to polish. Over the last two weeks I took a break and mainly searched for bottlenecks inside the code base, wanting to bring performance back to satisfactory levels. One good starting point for optimisations in Oyranos are the speed tests inside the test suite, but those only help with a few starting points. What I wished for was an easy way to see where code paths spend lots of time and, ideally, which line inside a source file takes the most computation time.

I knew the oprofile suite from the old days, so I installed it on my openSUSE machine, but had little success getting call graphs to work. A web search for “Linux profiling” brought me to an article on pixelbeat and to perf. I found the article very informative and do not want to duplicate it here. The perf tools are impressive. The sample recording needs to run as root, but the obtained sample information is quite useful. Most of the perf tools are text based, so getting to the hot spots is not straightforward for my taste. However, the pixelbeat site names a few graphical data representations and has a screenshot of kcachegrind. The last link under misc leads to flame graphs. The flame graphs are amazing representations of what happens inside Oyranos performance-wise. They show in a very intuitive way which code paths take the most time, and the graphs are zoomable SVGs.

Here is an example with expensive hash computation and without it, in oyranos-profiles:

Computation time was much reduced. Another bottleneck was expensive DB access. I had talked with Markus about that some time ago but forgot to implement a fix; the corresponding flame graph reminded me of that open issue. After some optimisation the DB bottleneck is much reduced as well.

The command to create the data is:

root$ perf record -g my-command

user$ perf-flame-graph.sh my-command-graph-title

… with perf-flame-graph.sh somewhere in your path:

#!/bin/sh
path=/path/to/FlameGraph
output="$1"
if [ "$output" = "" ]; then
  output="perf"
fi
# TMPDIR is often unset on Linux; fall back to /tmp
TMPDIR="${TMPDIR:-/tmp}"

perf script | $path/stackcollapse-perf.pl > $TMPDIR/$USER-out.perf-folded
$path/flamegraph.pl $TMPDIR/$USER-out.perf-folded > $TMPDIR/$USER-$output.svg
firefox $TMPDIR/$USER-$output.svg

One needs perf and FlameGraph, a set of Perl scripts, installed. The above script is just a typing abbreviation.

February 07, 2015

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are; I simply don’t know. These are some things I’ve learnt that have worked for me and my style of taking pictures, and that I wish I had known earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear; I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait for that new lens to come out and buy it. Ask yourself: will this make me go out and shoot more? The answer is usually no, and the money is often better spent on a trip to take nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are they cheap and capable of excellent image quality, they’re also a great way to learn how aperture, shutter speed, and sensitivity affect exposure. That understanding is essential for getting the results you want.

I only started understanding this after inheriting some old lenses and playing around with them. Because they’re fully manual, you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the screen on the back. I find them much more engaging and fun to use than fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which contains some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering through that park, but you certainly won’t be while sitting at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix focus or motion blur afterwards, but even these are details, and not getting them exactly right doesn’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures; you never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and merely alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware of.

Share

Don’t forget to have a place where you actually post your images. Flickr or Instagram are fine for this. We want to see your work, even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

Vienna GNOME/.NET hackfest report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped others with design work. I helped Mirco with the Smuxi preference dialogues, applying my love for the Human Interface Guidelines, and started a redesign of Tomboy Notes. Today I sent the new design, with the work done so far, to their mailing list.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.

Sponsors

I had a fun week in Vienna (which is always lovely no matter the time of year) and met many great new people. Special thanks to the many sponsors that helped make this event possible: Norkart, Collabora, Novacoast IT, University of Vienna and The GNOME Foundation.

Trip to Nuremberg and Munich

This month I visited my friend and colleague Garrett in Germany. We visited the Christmas markets there. Lots of fun. Here are some pictures.

Attending the Vienna GNOME/.NET hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest. Met up and had dinner with the other GNOME and .NET fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  1. Port SparkleShare's user interface to GTK+3.
  2. Integrate SparkleShare seamlessly with the GNOME 3 experience

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week it may be the last release based on GNOME 2 technologies. Yay for the future!

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source (self hosted) file synchronisation and collaboration tool and is available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it’s not bug free, even though it has hit 1.0. But it’s been tested for a long time now, and all reproducible, known major issues have been fixed. It works reliably and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the first.

Features

For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with GitHub, some personal documents with my own private VPS, and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.

History

The history view lets you see who has edited a particular file before, and allows you to restore deleted files or revert back to a previous version.

Conflict handling

When a file has been changed by two people at the same time and causes a conflict, SparkleShare will create a copy of the conflicting file and add a timestamp to its name. This way changes won’t get accidentally lost, and you can either choose to keep one of the files or cherry-pick the wanted changes.
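The behaviour can be illustrated with a short sketch. The naming scheme and the `conflict_copy` helper below are illustrative only, not SparkleShare's actual (C#) implementation:

```python
import os
import shutil
import time

def conflict_copy(path, timestamp=None):
    """Save the conflicting version alongside the original, with a
    timestamp in the name, e.g.
    'report.txt' -> 'report (conflict 2015-02-07 12.00.00).txt'.
    Both versions survive, so nothing is silently lost."""
    stamp = time.strftime("%Y-%m-%d %H.%M.%S", time.localtime(timestamp))
    root, ext = os.path.splitext(path)
    target = f"{root} (conflict {stamp}){ext}"
    shutil.copy2(path, target)  # copy2 preserves the file's metadata
    return target
```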

Notifications

If someone makes a change to a file, a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracks their way into your server it will be very hard (if not impossible) to get at the files’ contents. This is on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.

Plans for the future

Something that comes up a lot is the fact that Git doesn’t handle large (binary) files well. Git also stores a database of all the files, including history, on every client, causing it to use a lot of space pretty quickly. This may or may not be a problem depending on your use case. Nevertheless, I want SparkleShare to be better at the “large backups of bulks of data” use case.

I’ve stumbled upon a nice little project called git-bin in some obscure corner of GitHub. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months, rather than years).

I really hope contributors can help me out in this area. The GitHub network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  1. Saving the modification times of files
  2. Creating a binary Linux bundle
  3. SparkleShare folder location selection
  4. GNOME 3 integration
  5. …other things that you may find useful.

If you want to get started on contributing, feel free to visit the IRC channel (#sparkleshare on irc.gnome.org), where I can answer any questions you may have and give support.

Finally…

I’d like to thank everyone who has helped testing and submitted patches so far. SparkleShare wouldn’t be nearly as far as it is now without you. Cheers!

February 06, 2015

Things software development teams should know about design

I wrote a quick, unpolished list of things that development teams should know about design (and designers). It’s worth sharing with a larger audience, so here it is:

  1. programmers and managers need to fully embrace design — without “buy-in”, designing is futile (as it’ll never get implemented properly — or, often, implemented at all)
  2. design is an iterative process involving everyone — not just “the designer”
  3. in all stages of the project cycle, design must be present: before, during, and after
  4. design isn’t just eye-candy; functionality is even more important (text/content is part of design too)
  5. designers are full team members (and should be treated as such), not some add-on
  6. design issues should have high priority too (if something is unusable or even sometimes looks unusable, then people can’t use the software)
  7. designers are developers too (in fact, anyone who contributes to making software is a developer — not just programmers)
  8. good software requires design, documentation, project management, community engagement, marketing, etc. — in addition to programming
  9. “usability” != “design”; design is about creating something useful (and/or pretty), whereas usability is discovering how well something works — usability tests are useful to see if a design works, but usability test alone usually won’t point a way to fix issues… design helps fix problems after they’re discovered, and helps to prevent problems in the first place
  10. a lot of designers are quite tech-savvy (especially us designers already in the open source world), but many aren’t — regardless, it’s okay to not know everything (especially about programming corner-cases or project-related esoterica)
  11. think about the people using the software as people using the software, not as “users” (using the term “users” is degrading (similar to “drug ‘users’”) and sets up an “us-versus-them” mentality)

February 05, 2015

Announce: Entangle “Strange” release 0.6.1 – an app for tethered camera control & capture

I am pleased to announce a new release 0.6.1 of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

This release has primarily involved bug fixing, but one major user-visible change is a rewrite of the camera control panel. Instead of showing all possible camera controls (which can mean hundreds of widgets), only 7 commonly used controls are displayed initially. Other controls can be optionally enabled at the discretion of the user, and the customization is remembered per camera model.

  • Require GTK >= 3.4
  • Fix check for GIO package in configure
  • Add missing icons to Makefile
  • Follow freedesktop thumbnail standard storage location
  • Refactor capture code to facilitate plugin script automation
  • Fix bug causing plugin to be displayed more than once
  • Make histogram height larger
  • Strip trailing ‘2’ from widget labels to be more friendly
  • Completely rewrite control panel display to show a small, user configurable subset from all the camera controls.
  • Remember custom list of camera controls per camera model
  • Hide compiler warnings from new glib atomic operations
  • Update to newer gnulib compiler warnings code
  • Remove broken double buffering code that’s not required when using GTK3
  • Remove use of deprecated GtkMisc APIs
  • Allow camera picker list to show multiple lines
  • Remove crufty broken code from session browser that was breaking with new GTK versions
  • Disable libraw auto brightness since it totally overexposes many images, generally making things look worse
  • Fix memory leak handling camera events
  • Add keywords to desktop & appdata files

Ambient Light Sensors

An ambient light sensor is a little light-to-frequency chip that you’ve certainly got in your tablet, most probably in your phone and you might even have one in your laptop if you’re lucky. Ambient light sensors let us change the panel brightness in small ways so that you can still see your screen when it’s sunny outside, but we can dim it down when the ambient room light is lower to save power. Lots of power.
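The mapping from an ambient reading to a backlight level is simple arithmetic. Here is a toy sketch of the idea (the thresholds and the log-scale curve are made-up illustration values, not what any desktop environment actually ships):

```python
import math

def backlight_percent(lux, lo=10.0, hi=1000.0, floor=15, ceil=100):
    """Map an ambient-light reading (lux) to a backlight percentage.

    Below `lo` lux, clamp to a dim floor; above `hi`, go to full
    brightness; in between, interpolate on a log scale, since perceived
    brightness is roughly logarithmic in luminance.
    """
    if lux <= lo:
        return floor
    if lux >= hi:
        return ceil
    frac = (math.log(lux) - math.log(lo)) / (math.log(hi) - math.log(lo))
    return round(floor + frac * (ceil - floor))

print(backlight_percent(5))     # dark room: dim floor
print(backlight_percent(2000))  # sunny outdoors: full brightness
```

In practice you would also smooth the readings over time, so the panel doesn’t flicker every time a shadow passes over the sensor.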

There is a chicken-and-egg problem here. Not many laptops have ambient light sensors; some do, but driver support is spotty and they might not work, or they work but return values on some unknown absolute scale. As hardware support is so bad, we’ve not got any software that actually uses the ALS hardware effectively, and so most ALS hardware goes unused. Most people don’t actually have any kind of ALS at all, even on high-end models like Thinkpads.

So, what do we do? I spent a bit of time over the last few weeks designing a small OpenHardware USB device that acts as an ALS sensor. It’s basically a ColorHug1 with a much less powerful processor, but speaking the same protocol, so all the firmware update and test tools just work out of the box. It sleeps between readings too, so it only consumes a tiiiiny amount of power. I figure that with hardware that we know works out of the box, we can get developers working on (and testing!) the software integration without spending hours and hours compiling kernels and looking at DSDTs. I was planning to send out devices for free to GNOME developers wanting to hack on ALS stuff with me, and sell the devices for perhaps $20 to everyone else, just to cover costs.

pcb

The device would be a small PCB, 12x22mm in size which would be left in a spare USB slot. It only sticks out about 9mm from the edge of the laptop as most of the PCB actually gets pushed into the USB slot. It’s obviously non-ideal, and non-sexy, but I really think this is the way to break the chicken/egg problem we have with ALS sensors. It obviously costs money to make a device like this, and the smallest batch possible is about 60 – so before I spend any more of my spare money/time on this, is anyone actually interested in saving tons of power using an ALS sensor and dimming the display? Comment here or email me if you’re interested. Thanks.

Krita in FLOSS Weekly

Yesterday, Randal Schwartz and Aaron Newcomb interviewed Krita maintainer Boudewijn Rempt about Krita for their weekly podcast FLOSS Weekly. The live webcast was an enjoyable, if intense experience — and now the episode is available for download! Also, art was created right there, on the show!

February 04, 2015

The End is Nigh

“If @sgarrity doesn’t write a blog post in the next month he won’t have written for a year, and blogging will be over.”

— Peter Rukavina, via twitter

Studying Glaciers on our Roof

[Roof glacier as it slides off the roof] A few days ago, I wrote about the snowpack we get on the roof during snowstorms:

It doesn't just sit there until it gets warm enough to melt and run off as water. Instead, the whole mass of snow moves together, gradually, down the metal roof, like a glacier.

When it gets to the edge, it still doesn't fall; it somehow stays intact, curling over and inward, until the mass is too great and it loses cohesion and a clump falls with a Clunk!

The day after I posted that, I had a chance to see what happens as the snow sheet slides off a roof if it doesn't have a long distance to fall. It folds gracefully and gradually, like a sheet.

[Underside of a roof glacier] [Underside of a roof glacier] The underside as they slide off the roof is pretty interesting, too, with varied shapes and patterns in addition to the imprinted pattern of the roof.

But does it really move like a glacier? I decided to set up a camera and film it on the move. I set the Rebel on a tripod with an AC power adaptor, pointed it out the window at a section of roof with a good snow load, plugged in the intervalometer I bought last summer, located the manual to re-learn how to program it, and set it for a 30-second interval. It ran that way for a bit over an hour -- long enough that one section of ice had detached and fallen and a new section was starting to slide down. Then I moved to another window and shot a series of the same section of snow from underneath, with a 40-second interval.

I uploaded the photos to my workstation and verified that they'd captured what I wanted. But when I stitched them into a movie, the way I'd used for my time-lapse clouds last summer, it went way too fast -- the movie was over in just a few seconds and you couldn't see what it was doing. Evidently a 30-second interval is far too slow for the motion of a roof glacier on a day in the mid-thirties.

But surely that's solvable in software? There must be a way to get avconv to make duplicates of each frame, even if it means the movie comes out slightly jumpy. I read through the avconv manual, but it wasn't very clear about this. After a lot of fiddling and googling and help from a more expert friend, I ended up with this:

avconv -r 3 -start_number 8252 -i 'img_%04d.jpg' -vcodec libx264 -r 30 timelapse.mp4

In avconv, -r specifies a frame rate for the next file, input or output, that will be specified. So -r 3 specifies the frame rate for the set of input images, -i 'img_%04d.jpg'; and then the later -r 30 overrides that 3 and sets a new frame rate for the output file, timelapse.mp4. The start number is because the first file in my sequence is named img_8252.jpg. 30, I'm told, is a reasonable frame rate for movies intended to be watched on typical 60FPS monitors; 3 is a number I adjusted until the glacier in the movie moved at what seemed like a good speed.
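The frame-rate arithmetic is easy to sanity-check before encoding. A quick sketch (the numbers here assume a roughly hour-long shoot at one frame every 30 seconds, as above):

```python
# How long will a time-lapse run? frames / input-rate = playback seconds.
# The output -r (30 in the command above) only controls how many duplicated
# frames get written; it doesn't change the playback length.

def timelapse_seconds(shoot_minutes, capture_interval_s, input_fps):
    """Return (frame count, playback length in seconds)."""
    frames = shoot_minutes * 60 // capture_interval_s
    return frames, frames / input_fps

print(timelapse_seconds(60, 30, 30))  # (120, 4.0): over in a blink
print(timelapse_seconds(60, 30, 3))   # (120, 40.0): a watchable clip
```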

The movies came out quite interesting! The main movie, from the top, is the most interesting; the one from the underside is shorter.

Roof Glacier
Roof Glacier from underneath.

I wish I had a time-lapse of that folded sheet I showed above ... but that happened overnight on the night after I made the movies. By the next morning there wasn't enough left to be worth setting up another time-lapse. But maybe one of these years I'll have a chance to catch a sheet-folding roof glacier.

February 03, 2015

GCompris: crowdfunding campaign is over, time to start the work

Hi,

The crowdfunding campaign we ran on IndieGoGo to support the work on new unified graphics for GCompris finished yesterday. We didn’t reach the goal set to complete the whole new graphics, but thanks to 94 generous contributors, we collected $3642. We also got 260€ directly from the Hackadon 2014; many thanks to those contributors too! Thanks again to everyone who contributed and helped to spread the word!

Now, after deducting the IndieGoGo fees, converting to euros and adding it all up, the total should be around 3150€, which is enough to cover 25 days of work. This is way less than the full goal, so I have to adapt the plan. Of course I won’t be able to make new artwork for all activities in these 25 days of work, but I’ll make as much as possible to establish good bases for the new design transition. Also, I’ll rely as much as possible on existing artwork that is already good enough, adapting its style and making some edits instead of starting from scratch.

I have published the initial proposal for the new artwork guidelines. I will start the work on February 16th, in 13 days, so this should leave enough time for people to review the guidelines and send their opinions and comments (please use the contact form on my main website, or any other way you usually contact me). I’ll read every comment carefully and apply any needed edits to the guidelines. The guidelines proposal is here.

(Edit 02/11/2015: guidelines and examples updated according to reviews)

Then I’ll fulfil these 25 days of work between February and April. Let’s see how many activities I can update in this time ;)

Release date for Krita 2.9

After a short discussion, we came up with a release date for Krita 2.9! It's going to be... February 26th, Deo volente. Lots and lots and lots of new stuff, and even the old stuff in Krita is still amazing (lovely article, the author mentions a few features I didn't know about).

Then it's time for the port to Qt5, while Dmitry will be working on Photoshop-style layer styles -- a kickstarter feature that didn't make it for 2.9.0, but will be in 2.9.x. A new fundraiser for the next set of amazing features is also necessary.

Of course, there are still over 130 open bugs, and we've got a lot to do still, but then the bugs will always be with us, and after 2.9.0 a 2.9.1 will surely follow. But I do care about our bug reports.

Some people feel that in many projects, bug reports are often discarded in an administrative way, but honestly, we try to do better! Without user testing and bug reporting, we won't be able to improve Krita. After all, hacking on Krita takes so much time that I hardly get to use Krita for more than ten, twenty minutes a week!

We fixed, closed or de-duplicated 91 bugs in the past seven days. Of course, we also got a bunch of new bug reports: 25.

So, I want to take a bit of time to give a public thank-you to all our bug reporters. We've got an awesome bunch of testers!

For example, one tester has reported 46 bugs against the 2.9 betas: that is a pretty amazing level of activity! And we have by now fixed 33 of those 46 bugs. Thanks to people testing the betas and painstakingly reporting bugs, often with videos to help us reproduce the issue, Krita has become so much better.

If you use Krita and notice an issue, don't think that you'll hurt us when you report the issue as a bug -- the only thing we ask from you is that you do a cursory check whether your bug has already been reported (if it isn't immediately obvious, report away, and if it's been reported before, no problem), and that we can come back to you with questions, if necessary.

February 02, 2015

released darktable 1.6.2

We are happy to announce that darktable 1.6.2 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.2
Please only use our provided packages (green buttons, tar.xz and dmg), not the auto-created tarballs from github (grey buttons, zip and tar.gz). The latter are just git snapshots and will not work! Here's the direct link to the tar.xz:
https://github.com/darktable-org/darktable/releases/download/release-1.6.2/darktable-1.6.2.tar.xz
and the DMG:
https://github.com/darktable-org/darktable/releases/download/release-1.6.2/darktable-1.6.2.dmg

this is a new stable point release, no big new features added.

sha256sum darktable-1.6.2.tar.xz
66ee5f8ce5df9169211980fa374dc686eaf74322e0bd363a56612ae808bdc5bd

sha256sum darktable-1.6.2.dmg
bd613994c9754313144e8804026b7faf672fa816801b687ff7d64a8d82880332

General improvements

  • Better names for key accels (no more <Primary>)
  • Local gallery export limited to useful web formats (JPEG/PNG/WebP)
  • Add a way to control the brush size with keys
  • Default X-Trans Demosaic to markesteijn (single pass)

Bugfixes

  • Fix IPTC Keyword reading for real
  • rawspeed: support short values in DNG ActiveArea
  • really disable parallel export
  • remove special characters from style export
  • Cropping aspect ratio fixes (#9942, #10265)
  • Some fixes to lua/masks/brushes

Camera support

  • Pentax *ist DS
  • Pentax *ist DL2
  • Pentax K110D
  • Sony A7 II
  • Sony ILCE-3500
  • Nikon 1 S2
  • Olympus E-450
  • Panasonic LX1
  • Panasonic G3 (aspect ratio modes)
  • Samsung NX1 blackpoint finetuning
  • Fuji X-E1 blackpoint finetuning

White balance presets

  • 7D Mark II (updated)
  • Olympus E-M1
  • Sony A99

Make it flat. Make it all the same. Make it Boring.



"Less is a bore", as Robert Venturi brilliantly said.

So at the end of the Oxygen period, UX/UI design was reaching an inflection point. Gone were the days when graphic designers challenged their own illustration skills in a perpetual "I can make my candy more naturalistically silly than yours".
We had reached the saturation point of silliness in graphic representations of everyday objects as UI elements. Back then, people needed to find a culprit for it all, a quintessential word that in itself represented all evil... Cue in "skeuomorphism", a word used in traditional design to imply a faux representation of a material. In this word we collectively found the "wrong" to be corrected; we had our culprit.

We had to kill all aspects of anything skeuomorphic. Cue in flat design: we don't need anything "fake", we don't need textures, we must do without those fake drop-shadows, kill all artificial gradients, reinvent the circle as a precise, concise square.

Well, back then this all sounded a bit like a personal attack to me ;) I mean, gradients and shadows were all I did :) and just because some were abusing them, I had to pay for it?
And I said they were all wrong, that this was nothing more than a modernism surge all over again. "Skeuomorphism is all we do in UI anyway": all the concepts in the desktop are skeuomorphic. Seriously, we call it a DESKTOP, we use Buttons and Folders; we can't do anything but skeuomorphic designs. The only non-skeuomorphic design would be a screen turned off.

The sad reality is that being right when everybody else is wrong just makes you irrelevant. And even though the argumentation was fundamentally wrong, that doesn't mean there was no need for change; there was!
Overly done design tends to be a trick a designer pulls when he can't find an efficient answer, and we all abused this "trick".

Out of it some great new concepts and methods came to life. I must say I'm a bit of a fan of Google's new "material" design language. (In fact, material design is IMO not flat; I mean, they must have called it "material" for some reason ;) )

But so comes today: every single little design agency looks the same. It's easy to achieve the current dictatorial style: slap in a blurred background, a lonely Helvetica Neue on top, and you are almost there.
Trendy websites pop up everywhere, looking exactly the same, as if one unique corporate brand took control of everything.

And it's a BORE....

What will come next?
I don't know; predicting this sort of thing is absolutely futile, and I'm almost sure that whatever comes after will retain some of the best aspects flat design brought out. I think we are in a transitory period and something new is around the corner.

Interview with Lucas Falcão

watson

Would you like to tell us something about yourself?

Hi! My name is Lucas and I’m a 3D artist; I’ve been working professionally with 3D for about 6 years now, mostly doing modeling and texturing tasks, but sometimes shading and lighting too. Besides this professional side, I like to be with friends, ride my bicycle, practice guitar, read and taste new vegan foods.

Do you paint professionally or as a hobby artist?

The closest I get to painting is painting textures for 3D models, which I do professionally. I started using Krita recently at the studio where I work for some tasks, and also started using it for my personal projects. I use Krita to put together baked maps that I create in Blender, and also for painting textures and improving maps.

What is it that makes you choose digital over traditional painting?

I don’t know, I just started learning digital first, when I was studying at college (I graduated in Design, by the way). There, at college, I learned some graphics programs and 3D, so I started to practice digitally, but I also learned traditional art/design theory there. I never tried modeling in clay; sometimes I draw on paper, and that’s usually the closest I get to traditional. But I’ve always studied traditional theory.

How did you first find out about open source communities? What is your opinion about them?

It was when I started using Blender, and I found them really great. It’s an awesome environment to learn in: people are always sharing files and techniques with each other. And there are also a lot of tutorials for free, and a lot of tutorials for a very affordable price.

Have you worked for any FOSS project or contributed in some way?

I’ve never worked on a FOSS project, but I’d really like to work on one someday, like an open movie. I make a monthly donation to the Blender Foundation and I’m subscribed to the Blender Cloud, which is one of the ways the BF is financing Project Gooseberry. I also made a donation to the Kickstarter to accelerate Krita’s development. ;)

How did you find out about Krita?

I don’t know exactly, but I’m pretty sure it was through a video shared by that awesome artist, David Revoy, showing some features of Krita.

What was your first impression?

I was amazed by the features and tools that I saw. I found the interface very professional, and the software comes with a lot of awesome brushes.

What do you love about Krita?

Krita has a lot of tools that I love to see in image editing/painting software: for example the wrap-around mode, the mirror mode, and instanced layers; the transform and warp tools are pretty awesome too, among others.

What do you think needs improvement in Krita? Also, anything that you really hate?

I think Krita is doing great and I really like the direction it’s going; the software seems to be made for artists, or at least I have that impression when I use the tools to create and paint textures. I don’t hate anything in Krita, and I don’t use all the tools, but I think usability could always be improved.

In your opinion, what sets Krita apart from the other tools that you use?

I think it’s the very good combination of image editing and painting in one package, plus some awesome tools that I haven’t seen before in other software.

Anything else you’d like to share?

Thank you for inviting me for this interview, and a big thanks to the Krita development team for doing great work on the software. Keep it up!

February 01, 2015

released darktable 1.6.1

We are happy to announce that darktable 1.6.1 has been released. Due to an oversight on our side we forgot to do this announcement back when the actual release was done, so this is mostly for historical reasons.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.1
Please only use our provided packages (green buttons, tar.xz and dmg), not the auto-created tarballs from github (grey buttons, zip and tar.gz). The latter are just git snapshots and will not work! Here's the direct link to the tar.xz:
https://github.com/darktable-org/darktable/releases/download/release-1.6.1/darktable-1.6.1.tar.xz
and the DMG:
https://github.com/darktable-org/darktable/releases/download/release-1.6.1/darktable-1.6.1.dmg

this is a point release which fixes a couple of minor issues in the recent feature release 1.6.0 (such as a crash with images greater than 134 megapixels).

happy holidays everyone :)

sha1sums:
e3e0014361081364b56b6c02e886ba2fba6c6887 darktable-1.6.1.tar.xz
7173938cad7cd4c4a86de9438517c17166008f3c darktable-1.6.1.dmg

General improvements:

  • Hide mouse in slideshow mode
  • Show option for txt overlay in the preferences

Bugfixes:

  • ImageIO format TIFF: use scanline-based I/O. Fixes bug #10230
  • exif: always try to use Exiv2's lens detection for Olympus
  • demosaic: fix assertion
  • Do not deadlock in input color profile on unsupported input profiles
  • ensure that quick access preset menu is displayed correctly
  • Properly disconnect from the mipmap signal when leaving tethering mode
  • Avoid integer overflow on big images
  • OSX HiDPI fixes
  • Lua fixes

Modules:

  • masks: enhance mouse hover detection
  • masks: allow smaller radius for circle and ellipse
  • spots: fix icon states bug #10216
  • spots: rounded correction. Fix bug #10045
  • spots: legacy_params(): adapt for latest mask changes
  • flip: fix legacy presets update
  • exposure: enable soft boundaries for black
  • zonesystem: remove stale button_release() callback
  • graduatednd: avoid rounding issues for rotation after moving whole line. Fixes bug #10241

Camera support:

  • Pentax *istDL
  • 7D Mark II sRAW/mRAW
  • Samsung NX1

White balance presets:

  • 7D Mark II
  • Panasonic DMC-LX7

Children’s book with Krita

Today we’ve got a guest article by John Gholson. John is an artist who is currently working on a big project: an illustrated children’s book. As far as we know, it’s the first time an artist is using Krita to illustrate a whole children’s book. So, over to John!

park3

Hi! I’m John Gholson Jr — and I am creating a children’s book together with my friend Margo Candelario, a seasoned author and poet. She’s doing the writing, and I’m doing the artwork! The colouring of the book is done 100% with Krita. I discovered Krita while looking for open source alternatives to the usual illustration software like Corel Painter or Photoshop. Krita was a very fun discovery — and moreover, Krita fits the theme of our book: solutions can be right in front of you or just a few steps away. Other themes of our book are exploration and adventure!

Here’s a complete overview of my workflow:

I love drawing in the traditional way first with ink washes or pencil to get some actual texture. Then I scan the image at 600dpi and go to town with Krita! There are several benefits to this combined method. It’s still very hard to replicate with a tablet some things that happen with physical drawing, like the subtle incremental details that a pencil gives. Working with layers is amazing, though. It’s something the early Disney animators did for their backgrounds, too.

A mostly digital workflow means a big saving in cost in art supplies or paint, so you can take more risks, you can mix colours instantly without cleaning your brush, and you don’t get awkward muddy colours like with oil or acrylic paint!

The difference in workflow isn’t good or bad in itself, it’s just a very big difference. I find it a lot faster and more efficient in Krita than in traditional media to work from dark to light. In fact that’s often what I do: use a dark and a light colour, swap foreground and background, and darken or lighten those two colours with ‘k’ and ‘l’ to get a whole range. I love the way the Krita interface works: not only the ‘k’ and ‘l’ keys for darker and lighter, but also the custom brushes, the speed and the reliability of the software. And the ability to define my own shortcuts!

But it’s combining Krita with traditional art that’s stolen my heart. Here’s my personal artwork: http://www.JohnGholson.com, and here’s my bio: http://john-gholson.artistwebsites.com/index.html?tab=about.

And check out our book project: http://www.book.gqbum.com!

January 31, 2015

Snow day!

We're having a series of snow days here. On Friday, they closed the lab and all the schools; the ski hill people are rejoicing at getting some real snow at last.

[Snow-fog coming up from the Rio Grande] It's so beautiful out there. Dave and I had been worried about this business of living in snow, being wimpy Californians. But how cool (literally!) is it to wake up, look out your window and see a wintry landscape with snow-fog curling up from the Rio Grande in White Rock Canyon?

The first time we saw it, we wondered how fog can exist when the temperature is below freezing. (Though just barely below -- as I write this the nearest LANL weather station is reporting 30.9°F. But we've seen this in temperatures as low as 12°F.) I tweeted the question, and Mike Alexander found a reference that explains that freezing fog consists of supercooled droplets -- they haven't encountered a surface to freeze upon yet. Another phenomenon, ice fog, consists of floating ice crystals and only occurs below 14°F.

['Glacier' moving down the roof] It's also fun to watch the snow off the roof.

It doesn't just sit there until it gets warm enough to melt and run off as water. Instead, the whole mass of snow moves together, gradually, down the metal roof, like a glacier.

When it gets to the edge, it still doesn't fall; it somehow stays intact, curling over and inward, until the mass is too great and it loses cohesion and a clump falls with a Clunk!

[Mysterious tracks in the snow] When we do go outside, the snow has wonderful collections of tracks to try to identify. This might be a coyote who trotted past our house on the way over to the neighbors.

We see lots of rabbit tracks and a fair amount of raccoon, coyote and deer, but some are hard to identify: a tiny carnivore-type pad that might be a weasel; some straight lines that might be some kind of bird; a tail-dragging swish that could be anything. It's all new to us, and it'll be great fun learning about all these tracks as we live here longer.

Introducing Dirty Presets, Locked Brush Settings and Cumulative Undo in Krita 2.9.

One of the 2014 Google Summer of Code projects for Krita is going to be in the next release, Krita 2.9. It’s a bit complicated, so here’s a short tutorial on using Mohit’s Dirty Presets, Locked Brush Settings and Cumulative Undo features!

1. Dirty Presets

This is a feature a lot of people asked for: It allows Krita to remember small changes made to a preset during a session, without it saving over the original.
You activate it in the brush settings window, by ticking ‘Temporarily Save Tweaks To Presets’.

ditry_presets_1

Then, select a preset.

dirty_presets_2

Now, if you tweak a setting, like, say, opacity, Krita will make the preset ‘dirty’. You can identify dirty presets by the little plus-icon on the preset icon.

dirty_presets_3

To get the original settings back, press the reload button.

dirty_presets_4

To retain these settings, just save the preset.

2. Locked Brush Settings.

Another often-requested feature, this allows you to lock the opacity, the brush tip, or even the texture.

You activate it by right-clicking the lock beside a setting. Then, select ‘lock’.

locksettings_1

Now, the setting will not be reloaded every time you select a new preset.

This can be used, for example, to keep the same texture over all presets.

locksettings_2

You can unlock them by right-clicking the lock-icon again.

locksettings_3
There are two options here.

Unlock (Drop Locked)
This will get rid of the settings of the locked parameter and take that of the active brush preset. So if your brush had no texture on, using this option will revert it to having no texture.
Unlock (Keep Locked)
This will keep the settings of the parameter even though it’s unlocked.

Finally, the last one.

3. Cumulative undo.

Cumulative undo allows you to have undos merge together. This can be useful if you’re the type to make a lot of tiny strokes, or to save memory.

Cumulative undo is activated via the Undo History Docker. Right-click an undo-state to enable it.

cumulative_undo_1

Afterwards, you can tweak its settings by right-clicking the undo-state again.

cumulative_undo_2

Start merging time
The number of seconds required before a group of strokes is considered mergeable. So if this is set to five, at least five seconds must have passed before Krita will consider merging these strokes.
Group time
The amount of time allowed between strokes for them to belong to the same group. So if it’s set to 1, Krita will put strokes made more than 1 second after the previous one into a new merged group.
Split strokes
The minimum number of most recent strokes that stay undoable without being merged. So if you have this set to 3 and make five strokes, only the two oldest ones will be merged.

Disable this by right-clicking an undo-state and disabling it.
After this, you can start undoing large sets of strokes! The merged items will be represented with ‘merged’ behind their name in the undo history.
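As an illustrative sketch of how these three settings could interact (hypothetical Python, not Krita’s actual implementation — the function name and return shape are made up for this example):

```python
def merge_groups(stroke_times, start_merge=5.0, group_time=1.0, split_strokes=3):
    """Group stroke timestamps the way the three settings above describe.

    stroke_times: sorted timestamps in seconds, oldest first.
    Returns a list of groups (lists of timestamps). The newest
    `split_strokes` strokes, and any stroke younger than `start_merge`
    seconds relative to the newest one, stay in their own groups;
    older strokes are merged, with a gap larger than `group_time`
    starting a new merged group.
    """
    if not stroke_times:
        return []
    now = stroke_times[-1]
    # Indices that must stay individually undoable
    keep = {
        i for i, t in enumerate(stroke_times)
        if i >= len(stroke_times) - split_strokes or now - t < start_merge
    }
    groups, current = [], []
    for i, t in enumerate(stroke_times):
        if i in keep:
            if current:           # close any open merged group
                groups.append(current)
                current = []
            groups.append([t])    # unmerged, single-stroke group
        elif current and t - current[-1] > group_time:
            groups.append(current)  # gap too big: start a new group
            current = [t]
        else:
            current.append(t)
    if current:
        groups.append(current)
    return groups
```

With the defaults lowered, five quick old strokes followed by a pause end up as one merged group, while the most recent strokes stay separate.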

January 28, 2015

Detecting fake flash

I’ve been using F3 to check my flash drives, and this is how I discovered my drives were counterfeit. It seems to me this kind of feature needs to be built inside gnome-multi-writer itself to avoid sending fake flash out to customers. Last night I wrote a simple tool called gnome-multi-writer-probe which does the following few things:

* Reads the existing data from the drive in 32kB chunks, one every ~32MB, into RAM
* Writes random 32kB blocks, one every ~32MB, also storing them in RAM
* Resets the drive
* Reads back 32kB blocks from slightly different addresses and sizes and compares them to the random data in RAM
* Writes all the saved data back to the drive.
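The steps above can be sketched in Python against an ordinary file (illustrative only, not the real gnome-multi-writer-probe: a real drive would be a block device path, and the “reset the drive” step has no file equivalent, so an fsync stands in for it here):

```python
import os

def probe(path, chunk=32 * 1024, stride=32 * 1024 * 1024):
    """Heuristic fake-flash check: returns True if every test block
    reads back intact. On fake flash that wraps around, writes at
    distant offsets clobber each other and the comparison fails."""
    size = os.path.getsize(path)
    saved, written = {}, {}
    with open(path, "r+b") as f:
        # Save the existing data, then overwrite with random blocks
        for off in range(0, size - chunk + 1, stride):
            f.seek(off)
            saved[off] = f.read(chunk)
            written[off] = os.urandom(chunk)
            f.seek(off)
            f.write(written[off])
        f.flush()
        os.fsync(f.fileno())  # a real probe would reset the drive here
        # Read everything back and compare against the random data
        ok = True
        for off, data in written.items():
            f.seek(off)
            if f.read(chunk) != data:
                ok = False
        # Always write the saved data back, whatever the result
        for off, data in saved.items():
            f.seek(off)
            f.write(data)
    return ok
```

As with the real tool, treat this as destructive: if it errors out partway, the original data may not be fully restored.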

It only takes a few seconds on most drives. It also tries to be paranoid, and saves the data back to the drive as best it can when it encounters an error. That said, please don’t use this tool on any drives that have important data on them; assume you’ll have to reformat them after using this tool. Also, it’s probably a really good idea to unmount any drives before you try this.

If you’ve got access to gnome-multi-writer from git (either from jhbuild, or from my repo) then please could you try this:

sudo gnome-multi-writer-probe /dev/sdX

Where sdX is the USB drive you want to test. I’d be interested in the output, and especially interested if you have any fake flash media you can test this with. Either leave a comment here, grab me on IRC, or send me an email. Thanks.

January 27, 2015

Tue 2015/Jan/27

  • An inlaid GNOME logo, part 2

    This part in Spanish

    To continue with yesterday's piece — the amargoso board which I glued is now dry, and it is time to flatten it. We use a straightedge to see how bad it is on the "good" side.

    Not flat

    We use a jack plane with a cambered blade. There is a slight curvature to the edge; this lets us remove wood quickly. We plane across the grain to remove the cupping of the board. I put some shavings in strategic spots between the board and the workbench to keep the board from rocking around, as its bottom is not flat yet.

    Cambered iron Cross-grain planing

    We use winding sticks at the ends of the board to test if the wood is twisted. Sight almost level across them, and if they look parallel, then the wood is not twisted. Otherwise, plane away the high spots.

    Winding sticks Not twisted

    This gives us a flat board with scalloped tracks. We use a smoothing plane to remove the tracks, planing along the grain. This finally gives us a perfectly flat, smooth surface. This will be our reference face.

    Scalloped board Smoothing plane Smooth, flat surface

    On that last picture, you'll see that both halves of the board are not of the same thickness, and we need to even them up. We set a marking gauge to the thinnest part of the boards. Mark all four sides, using the flat side as the reference face, so we have a line around the board at a constant distance to the reference face.

    Gauging the thinnest part Marking all around Marked all around

    Again, plane the board flat across the grain with a jack plane and its cambered iron. When you reach the gauged line, you are done. Use a smoothing plane along the grain to make the surface pretty. Now we have a perfectly flat board of uniform thickness.

    Thicknessing with the jack plane Smoothing plane Flat and uniform board

    Now we go back to the light-colored maple board from yesterday. First I finished flattening the reference face. Then, I used the marking gauge to put a line all around at about 5mm to the reference face. This will be our slice of maple for the inlaid GNOME logo.

    Marking the maple board

    We have to resaw the board in order to extract that slice. I took my coarsest ripsaw and started a bit away from the line at a corner, being careful to sight down the saw to make it coplanar with the lines on two edges. It is useful to clamp the board at about 45 degrees from level.

    Starting to resaw at a corner

    Once the saw is into the corner, tilt it down gradually to lengthen the kerf...

    Kerfing one side

    Tilt it gradually the other way to make the kerf on the other edge...

    Kerfing the other side

    And now you can really begin to saw powerfully, since the kerfs will guide the saw.

    Resawing

    Gradually extend the cut to the other corner, and repeat the process on all four sides.

    Extending the cut Resawing

    Admire your handiwork; wipe away the sweat.

    Resawn slice

    Plane to the line and leave a smooth surface. Since the board is too thin to hold down with the normal planing stops on the workbench, I used a couple of nails as planing stops to keep the board from sliding forward.

    Nail as planing stop

    Now we can see the contrast between the woods. The next step is to glue templates on each board, and start cutting.

    Contrast between woods

Scammers at promo-newa.com

tl;dr Don’t use promo-newa.com, they are scammers that sell fake flash.

Longer version: For the ColorHug project we buy a lot of the custom parts direct from China at a fraction of the price available to us in the UK, even with import tax considered. It would be impossible to produce such a low cost device and still make enough money to make it worth giving up our evenings and weekends. This often means sending thousands of dollars to sketchy-looking companies willing to take on small (to them!) custom orders of a few thousand parts.

So far we’ve been very lucky, until last week. I ordered 1000 customized 1GB flash drives to use as a LiveUSB image rather than using a LiveCD. I checked out the company as usual, and ordered a sample. The sample came back good quality, with 1GB of fast flash. Payment in full was sent, which isn’t unusual for my other suppliers in China.

Fast forward a few weeks. 1000 USB drives arrived, which look great. Great, until you start using them with GNOME MultiWriter, which kept throwing validation warnings. Using the awesome F3 and a few remove-insert cycles later, the f3probe tool told me the flash chip was fake, reporting its capacity as 1GB when it was actually 96MB looped around 10 times.

Taking the drives apart you could also see the chip itself was different from the sample, and the plastic molding and metal retaining tray was a lower quality. I contacted the seller, who said he would speak to the factory later that day. The seller got back to me today, and told me that the factory has produced “B quality drives” and basically, that I got what I paid for. For another 1600USD they would send me the 1GB ICs, which I would have to switch in the USB units. Fool me once, shame on you; fool me twice, shame on me.

I suppose people can use the tiny flash drives to get the .icc profile off the LiveCD image, which was always a stumbling block for some people, but basically the drives are worthless to me as LiveUSB devices. I’m still undecided whether to include them in the ColorHug box; i.e. is a free 96MB drive better than them all going into landfill?

As this is China, I understand all my money is gone. The company listing is gone from Alibaba, so there’s not a lot I can do there. So that other people can avoid the same mistake, I’ve listed all the details here, which will hopefully become googleable:

Promo-Newa Electronic Limited(Shenzhen)
Wei and Ping Group Limited(Hongkong)  

Office: Building A, HuaQiang Garden, North HuaQiang Road, Futian district, Shenzhen China, 0755-3631 4600
Factory: Building 4, DengXinKeng Industrial Zone, JiHua Road,LongGang District, Shenzhen, China
Registered Address: 15/B—15/F Cheuk Nang Plaza 250 Hennessy Road, HongKong
Email: sales@promo-newa.com
Skype: promonewa

January 26, 2015

Print Module

After coming out of the camera, our pictures deserve some love and to be shared. Every photographer will tell you the joy of holding a print in their hands. At last the pixels have taken form on a piece of paper, giving birth to a photograph that can be put on the wall!

Printing is not easy, though; there are many technical aspects to take into account. To streamline this process, a print module has been added to darktable.

The print module

Nothing fancy there, just the page displayed as it will be printed on the paper. The display will show the page itself, the borders and the image properly aligned:

dt-print-modulev2

  • the white area is the paper with the proper aspect ratio
  • the little black markers in each corner represent the non-printable area. These markers are not displayed for printers supporting borderless mode.
  • the gray area is the print area, that is, the paper minus the borders
  • finally, the picture is placed on the print area with the chosen alignment; above, the alignment is set to top.
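The geometry described above can be sketched as follows (an illustrative Python function with hypothetical names, not darktable's actual code): the print area is the paper minus the borders, the image is scaled to fit it while keeping its aspect ratio, and one of the nine alignments places it inside.

```python
def place_image(paper_w, paper_h, borders, img_w, img_h, align="centered"):
    """Return (x, y, w, h) of the printed image on the page.

    borders = (left, top, right, bottom), in the same unit as the
    paper dimensions. `align` is one of the nine combinations such
    as "top", "bottom-right", "centered", ...
    """
    left, top, right, bottom = borders
    # The print area is the paper minus the borders
    area_w = paper_w - left - right
    area_h = paper_h - top - bottom
    # Scale to fit the print area, preserving aspect ratio
    scale = min(area_w / img_w, area_h / img_h)
    w, h = img_w * scale, img_h * scale
    # Nine alignments = 3 horizontal positions x 3 vertical positions
    x = left + (0 if "left" in align
                else area_w - w if "right" in align
                else (area_w - w) / 2)
    y = top + (0 if "top" in align
               else area_h - h if "bottom" in align
               else (area_h - h) / 2)
    return x, y, w, h
```

For example, a landscape image on A4 (210×297mm) with 10mm borders and "top" alignment fills the print area's width and sits flush against the top border.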

The print settings

Let's look at the print settings offered by this module:

dt-print-settingsv2

Using the controls offered we can:

  • select the printer
  • set the printer profile and intent, which are the most important settings
  • select the paper
  • set the orientation of the page, either landscape or portrait
  • select the unit for the border values
  • set the borders for each side separately, or identically using the lock button
  • use one of the nine possible alignments of the picture on the page: left, right, bottom-right, centered...
  • specify the way to export the picture: export profile and intent
  • add a style during the export; this comes in handy to add a signature or a watermark, for example. It is also the way to adjust the exposure; indeed, when printing B&W pictures it is often necessary to add some light.

The printer profile and intent are important to get correct color rendition on the print. This is the only way to ensure that the colors displayed on the screen will be the ones found on the paper, as expected.

But be warned, a printer profile is valid for a specific paper, printer, and driver. So profiles offered by vendors on the Internet cannot be used here: even if a profile matches the printer and paper, it has been created for the Windows or macOS drivers, and using it won’t give you a correct print rendition. One solution is to create the profile yourself for your graphics workflow. This is outside the scope of this article, but you can read about the process in another article I wrote some time ago. There are also companies offering to create profiles for you, if you prefer.

The last widget is the print button, click on it and the picture will be sent to the corresponding printer.

How to set up the print module?

This is an important point to note. The print module is based on CUPS, so you need CUPS installed on your machine for it to work properly.

When it is installed, point your Web browser to http://localhost:631 and add your printer there. Depending on the printer, there are few or many parameters to configure on this interface. The important ones are:

  • Uncorrected: If the printer offers different color settings, select the one that does nothing; that is, ask the driver not to try to be smart at all.
  • Borderless : If you intend to print borderless you need to activate this option on the CUPS interface.

Note that once you have configured the printer in CUPS, you should never change the settings there if you are using a print profile, because the print profile depends on the CUPS rendering settings. You have been warned!
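For reference, the same checks can be done from the command line with the standard CUPS tools; the queue name and device URI below are placeholders, not real values:

```shell
# Is the CUPS scheduler running, and which printers does it know about?
lpstat -r        # prints "scheduler is running" when CUPS is up
lpstat -p -d     # list printers and the default destination

# A printer can also be added without the web interface; replace the
# queue name and device URI with your own printer's details.
# sudo lpadmin -p my_printer -E -v "usb://Make/Model" -m everywhere
```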

Mon 2015/Jan/26

  • An inlaid GNOME logo, part 1

    This part in Spanish

    I am making a special little piece. It will be an inlaid GNOME logo, made of light-colored wood on a dark-colored background.

    First, we need to make a board wide enough. Here I'm looking for which two sections of those longer/narrower boards to use.

    Grain matching pieces

    Once I am happy with the sections to use — similar grain, not too many flaws — I cross-cut them to length.

    Cross cutting

    (Yes, working in one's pajamas is fantastic and I thoroughly recommend it.)

    This is a local wood which the sawmill people call "amargoso", or bitter one. And indeed — the sawdust feels bitter in your nose.

    Once cut, we have two pieces of approximately the same length and width. They have matching grain in a V shape down the middle, which is what I want for the shape of this piece.

    V shaped grain match

    We clamp the pieces together and match-plane them. Once we open them like a book, there should be no gaps between them and we can glue them.

    Clamped pieces Match-planing Match-planed pieces

    No light shows between the boards, so there are no gaps! On to gluing. Rub both boards back and forth to spread the glue evenly. Clamp them, and wait overnight.

    No gaps! Gluing boards Clamped boards

    Meanwhile, we can prepare the wood for the inlaid pieces. I used a piece of soft maple, which is of course pretty hard — unlike hard maple, which would be too goddamn hard.

    Rough maple board

    This little board is not flat. Plane it cross-wise and check for flatness.

    Checking for flatness Planing

    Tomorrow I'll finish flattening this face of the maple, and I'll resaw a thinner slice for the inlay.

    Planed board

Mariyam Mukku

If Amen is an ‘Aanaval Mothiram’, then Mariyam Mukku is a ‘Bhargava Charitham’.