February 19, 2017

Stellarium 0.12.8

Stellarium 0.12.8 has been released today!

The 0.12 series is the LTS branch for owners of old computers (old, with weak graphics cards), and this is a bugfix release:
- Added textures for deep-sky objects (ported from the 1.x/0.15 series)
- Fixed sizes of a few DSO textures (LP: #1641773)
- Fixed a problem with defaultStarsConfig data and loading new star catalogs (LP: #1641803)

February 18, 2017

Highlight and remove extraneous whitespace in emacs

I recently got annoyed with all the trailing whitespace I saw in files edited by Windows and Mac users, and in code snippets pasted from sites like StackOverflow. I already had my emacs set up to indent with only spaces:

(setq-default indent-tabs-mode nil)
(setq tabify nil)
and I knew about M-x delete-trailing-whitespace ... but after seeing someone else who had an editor set up to show trailing spaces, and tabs that ought to be spaces, I wanted that too.

To show trailing spaces is easy, but it took me some digging to find a way to control the color emacs used:

;; Highlight trailing whitespace.
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "yellow")

I also wanted to show tabs, since code indented with a mixture of tabs and spaces, especially if it's Python, can cause problems. That was a little harder, but I eventually found it on the EmacsWiki: Show whitespace:

;; Also show tabs.
(defface extra-whitespace-face
  '((t (:background "pale green")))
  "Color for tabs and such.")

(defvar bad-whitespace
  '(("\t" . 'extra-whitespace-face)))

While I was figuring this out, I got some useful advice related to emacs faces on the #emacs IRC channel: if you want to know why something is displayed in a particular color, put the cursor on it and type C-u C-x = (the command what-cursor-position with a prefix argument), which displays lots of information about whatever's under the cursor, including its current face.

Once I had my colors set up, I found that a surprising number of files I'd edited with vim had trailing whitespace. I would have expected vim to be better behaved than that! But it turns out that to eliminate trailing whitespace, you have to program it yourself. For instance, here are some recipes to Remove unwanted spaces automatically with vim.

February 17, 2017

Fri 2017/Feb/17

  • How librsvg exports reference-counted objects from Rust to C

    Librsvg maintains a tree of RsvgNode objects; each of these corresponds to an SVG element. An RsvgNode is a node in an n-ary tree; for example, a node for an SVG "group" can have any number of children that represent various shapes. The toplevel element is the root of the tree, and it is the "svg" XML element at the beginning of an SVG file.

    Last December I started to sketch out the Rust code to replace the C implementation of RsvgNode. Today I was able to push a working version to the librsvg repository. This is a major milestone for me, and this post is a description of that journey.

    Nodes in librsvg in C

    Librsvg used to have a very simple scheme for memory management and its representation of SVG objects. There was a basic RsvgNode structure:

    typedef enum {
        RSVG_NODE_TYPE_INVALID,
        RSVG_NODE_TYPE_CHARS,
        RSVG_NODE_TYPE_CIRCLE,
        RSVG_NODE_TYPE_CLIP_PATH,
        /* ... a bunch of other node types */
    } RsvgNodeType;
    typedef struct {
        RsvgState    *state;
        RsvgNode     *parent;
        GPtrArray    *children;
        RsvgNodeType  type;
    
        void (*free)     (RsvgNode *self);
        void (*draw)     (RsvgNode *self, RsvgDrawingCtx *ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNode;

    This is a no-frills base struct for SVG objects; it just has the node's parent, its children, its type, the CSS state for the node, and a virtual function table with just three methods. In typical C fashion for derived objects, each concrete object type is declared similarly to the following:

    typedef struct {
        RsvgNode super;
        RsvgLength cx, cy, r;
    } RsvgNodeCircle;

    The user-facing object in librsvg is an RsvgHandle: that is what you get out of the API when you load an SVG file. Internally, the RsvgHandle has a tree of RsvgNode objects — actually, a tree of concrete implementations like the RsvgNodeCircle above or others like RsvgNodeGroup (for groups of objects) or RsvgNodePath (for Bézier paths).

    Also, the RsvgHandle has an all_nodes array, which is a big list of all the RsvgNode objects that it is handling, regardless of their position in the tree. It also has a hash table that maps string IDs to nodes, for when the XML elements in the SVG have an "id" attribute to name them. At various times, the RsvgHandle or the drawing-time machinery may have extra references to nodes within the tree.

    Memory management is simple. Nodes get allocated at loading time, and they never get freed or moved around until the RsvgHandle is destroyed. To free the nodes, the RsvgHandle code just goes through its all_nodes array and calls the node->free() method on each of them. Any references to the nodes that remain in other places will dangle, but since everything is being freed anyway, things are fine. Before the RsvgHandle is freed, the code can copy pointers around with impunity, as it knows that the all_nodes array basically stores the "master" pointers that will need to be freed in the end.

    But Rust doesn't work that way

    Not so, indeed! C lets you copy pointers around all you wish; it lets you modify all the data at any time; and it forces you to do all the memory-management bookkeeping yourself. Rust has simple and strict rules for data access, with profound implications. You may want to read up on ownership (where a variable binding owns the value it refers to, there is one and only one binding to a value at any time, and values are dropped when their variable bindings go out of scope), and on references and borrowing (references can't outlive their parent scope; and you can have either one or more immutable references to a resource, or exactly one mutable reference, but not both at the same time). Together, these rules avoid dangling pointers, data races, and other common problems from C.

    So while the C version of librsvg had a sea of carefully-managed shared pointers, the Rust version needs something different.

    And it all started with how to represent the tree of nodes.

    How to represent a tree

    Let's narrow down our view of RsvgNode from C above:

    typedef struct {
        ...
        RsvgNode  *parent;
        GPtrArray *children;
        ...
    } RsvgNode;

    A node has pointers to all its children, and each child has a pointer back to the parent node. This creates a bunch of circular references! We would need a real garbage collector to deal with this, or an ad-hoc manual memory management scheme like librsvg's and its all_nodes array.

    Rust is not garbage-collected and it doesn't let you have shared pointers easily or without unsafe code. Instead, we'll use reference counting. To avoid circular references, which a reference-counting scheme cannot handle, we use strong references from parents to children, and weak references from the children to point back to parent nodes.

    In Rust, you can add reference-counting to your type Foo by using Rc<Foo> (if you need atomic reference-counting for multi-threading, you can use an Arc<Foo>). An Rc represents a strong reference count; conversely, a Weak<Foo> is a weak reference. So, the Rust Node looks like this:

    pub struct Node {
        ...
        parent:    Option<Weak<Node>>,
        children:  RefCell<Vec<Rc<Node>>>,
        ...
    }

    Let's unpack that bit by bit.

    "parent: Option<Weak<Node>>". The Weak<Node> is a weak reference to the parent Node, since a strong reference would create a circular refcount, which is wrong. Also, not all nodes have a parent (i.e. the root node doesn't have a parent), so put the Weak reference inside an Option. In C you would put a NULL pointer in the parent field; Rust doesn't have null references, and instead represents lack-of-something via an Option set to None.

    "children: RefCell<Vec<Rc<Node>>>". The Vec<Rc<Node>>> is an array (vector) of strong references to child nodes. Since we want to be able to add children to that array while the rest of the Node structure remains immutable, we wrap the array in a RefCell. This is an object that can hand out a mutable reference to the vector, but only if there is no other mutable reference at the same time (so two places of the code don't have different views of what the vector contains). You may want to read up on interior mutability.

    Strong Rc references and Weak refs behave as expected. If you have an Rc<Foo>, you can ask it to downgrade() to a Weak reference. And if you have a Weak<Foo>, you can ask it to upgrade() to a strong Rc, but since this may fail if the Foo has already been freed, that upgrade() returns an Option<Rc<Foo>> — if it is None, then the Foo was freed and you don't get a strong Rc; if it is Some(x), then x is an Rc<Foo>, which is your new strong reference.
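    The downgrade()/upgrade() dance is easy to see in a tiny self-contained example (nothing librsvg-specific here):

```rust
use std::rc::{Rc, Weak};

fn main() {
    let strong: Rc<String> = Rc::new(String::from("node"));
    let weak: Weak<String> = Rc::downgrade(&strong);

    // While at least one strong reference exists, upgrade() succeeds:
    assert!(weak.upgrade().is_some());

    drop(strong); // the last strong reference goes away; the String is freed

    // Now upgrade() returns None instead of a dangling pointer:
    assert!(weak.upgrade().is_none());
}
```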

    Handing out Rust reference-counted objects to C

    In the post about Rust constructors exposed to C, we talked about how a Box is Rust's primitive to put objects in the heap. You can then ask the Box for the pointer to its heap object, and hand out that pointer to C.

    If we want to hand out an Rc to the C code, we therefore need to put our Rc in the heap by boxing it. And going back, we can unbox an Rc and let it fall out of scope in order to free the memory from that box and decrease the reference count on the underlying object.

    First we will define a type alias, so we can write RsvgNode instead of Rc<Node> and make function prototypes closer to the ones in the C code:

    pub type RsvgNode = Rc<Node>;

    Then, a convenience function to box a refcounted Node and extract a pointer to the Rc, which is now in the heap:

    pub fn box_node (node: RsvgNode) -> *mut RsvgNode {
        Box::into_raw (Box::new (node))
    }

    Now we can use that function to implement ref():

    #[no_mangle]
    pub extern fn rsvg_node_ref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        assert! (!raw_node.is_null ());
        let node: &RsvgNode = unsafe { & *raw_node };
    
        box_node (node.clone ())
    }

    Here, the node.clone () is what increases the reference count. Since that gives us a new Rc, we want to box it again and hand out a new pointer to the C code.

    You may want to read that twice: when we increment the refcount, the C code gets a new pointer! This is like creating a hardlink to a Unix file — it has two different names that point to the same inode. Similarly, our boxed, cloned Rc will have a different heap address than the original one, but both will refer to the same Node in the end.
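    To make the hardlink analogy concrete, here is a self-contained sketch (using a plain i32 in place of a real Node): each clone() gets boxed at its own heap address, while Rc::ptr_eq confirms they all name the same underlying value:

```rust
use std::rc::Rc;

fn box_rc(rc: Rc<i32>) -> *mut Rc<i32> {
    Box::into_raw(Box::new(rc))
}

fn main() {
    let original = Rc::new(42);

    let p1 = box_rc(original.clone());         // pointer handed to "C"
    let p2 = box_rc(unsafe { (*p1).clone() }); // what a ref() would return

    assert!(p1 != p2); // two distinct heap addresses, like two hardlinks...
    unsafe {
        assert!(Rc::ptr_eq(&*p1, &*p2));       // ...naming the same value
        assert_eq!(Rc::strong_count(&*p1), 3); // original + two boxed clones

        // Unboxing drops each Rc, decrementing the refcount (our "unref"):
        drop(Box::from_raw(p1));
        drop(Box::from_raw(p2));
    }
}
```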

    This is the implementation for unref():

    #[no_mangle]
    pub extern fn rsvg_node_unref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        if !raw_node.is_null () {
            let _ = unsafe { Box::from_raw (raw_node) };
        }
    
        ptr::null_mut () // so the caller can do "node = rsvg_node_unref (node);" and lose access to the node
    }

    This is very similar to the destructor from a few blog posts ago. Since the Box owns the Rc it contains, letting the Box go out of scope frees it, which in turn decreases the refcount in the Rc. However, note that this rsvg_node_unref(), intended to be called from C, always returns a NULL pointer. Together, both functions are to be used like this:

    RsvgNode *node = ...; /* acquire a node from Rust */
    
    RsvgNode *extra_ref = rsvg_node_ref (node);
    
    /* ... operate on the extra ref; now discard it ... */
    
    extra_ref = rsvg_node_unref (extra_ref);
    
    /* Here extra_ref == NULL and therefore you can't use it anymore! */

    This is a bit different from g_object_ref(), which returns the same pointer value as what you feed it. Also, the pointer that you would pass to g_object_unref() remains usable if you didn't take away the last reference... although of course, using it directly after unreffing it is perilous as hell and probably a bug.

    In these functions that you call from C but are implemented in Rust, ref() gives you a different pointer than what you feed it, and unref() gives you back NULL, so you can't use that pointer anymore.

    To ensure that I actually used the values as intended and didn't fuck up the remaining C code, I marked the function prototypes with the G_GNUC_WARN_UNUSED_RESULT attribute. This way gcc will complain if I just call rsvg_node_ref() or rsvg_node_unref() without actually using the return value:

    RsvgNode *rsvg_node_ref (RsvgNode *node) G_GNUC_WARN_UNUSED_RESULT;
    
    RsvgNode *rsvg_node_unref (RsvgNode *node) G_GNUC_WARN_UNUSED_RESULT;

    And this actually saved my butt in three places in the code when I was converting it to reference counting. Twice when I forgot to just use the return values as intended; once when the old code was such that trivially adding refcounting made it use a pointer after unreffing it. Make the compiler watch your back, kids!

    Testing

    One of the things that makes me giddy with joy is how easy it is to write unit tests in Rust. I can write a test for the refcounting machinery above directly in my node.rs file, without needing to use C.

    This is the test for ref and unref:

    #[test]
    fn node_refs_and_unrefs () {
        let node = Rc::new (Node::new (...));
    
        let mut ref1 = box_node (node);                            // "hand out a pointer to C"
    
        let new_node: &mut RsvgNode = unsafe { &mut *ref1 };       // "bring back a pointer from C"
        let weak = Rc::downgrade (new_node);                       // take a weak ref so we can know when the node is freed
    
        let mut ref2 = unsafe { rsvg_node_ref (new_node) };        // first extra reference
        assert! (weak.upgrade ().is_some ());                      // "you still there?"
    
        ref2 = unsafe { rsvg_node_unref (ref2) };                  // drop the extra reference
        assert! (weak.upgrade ().is_some ());                      // "you still have life left in you, right?"
    
        ref1 = unsafe { rsvg_node_unref (ref1) };                  // drop the last reference
    assert! (weak.upgrade ().is_none ());                      // "you are dead, aren't you?"
    }

    And this is the test for two refcounts indeed pointing to the same Node:

    #[test]
    fn reffed_node_is_same_as_original_node () {
        let node = Rc::new (Node::new (...));
    
        let mut ref1 = box_node (node);                         // "hand out a pointer to C"
    
        let mut ref2 = unsafe { rsvg_node_ref (ref1) };         // "C takes an extra reference and gets a new pointer"
    
        unsafe { assert! (rsvg_node_is_same (ref1, ref2)); }    // but they refer to the same thing, correct?
    
        ref1 = rsvg_node_unref (ref1);
        ref2 = rsvg_node_unref (ref2);
    }

    Hold on! Where did that rsvg_node_is_same() come from? Since calling rsvg_node_ref() now gives a different pointer to the original ref, we can no longer just do "some_node == tree_root" to check for equality and implement a special case. We need to do "rsvg_node_is_same (some_node, tree_root)" instead. I'll just point you to the source for this function.
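    For reference, the real function lives in librsvg's node.rs; what follows is only a plausible sketch of its shape under the boxed-Rc scheme above (the Node stub and the NULL handling are my assumptions, not the actual code). It boils down to Rc::ptr_eq, which compares the addresses of the underlying heap values rather than the boxed pointers:

```rust
use std::rc::Rc;

// Hypothetical stand-in for librsvg's Node type.
pub struct Node;
pub type RsvgNode = Rc<Node>;

#[no_mangle]
pub extern fn rsvg_node_is_same (raw_a: *const RsvgNode,
                                 raw_b: *const RsvgNode) -> bool {
    if raw_a.is_null () || raw_b.is_null () {
        return raw_a == raw_b; // two NULLs count as "same"
    }

    let (a, b) = unsafe { (&*raw_a, &*raw_b) };
    Rc::ptr_eq (a, b) // same underlying Node, regardless of box addresses
}

fn main() {
    let a: RsvgNode = Rc::new (Node);
    let b: RsvgNode = a.clone ();
    let c: RsvgNode = Rc::new (Node);

    assert! (rsvg_node_is_same (&a as *const RsvgNode, &b as *const RsvgNode));
    assert! (!rsvg_node_is_same (&a as *const RsvgNode, &c as *const RsvgNode));
}
```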

Today, I:

I’ve gazed enviously at many a productivity scheme. Getting Things Done™, do one thing at a time, use a swimming desk, only use hand-hewn pencils on organic hemp paper, and so on.

I assume most of these techniques and schemes are like diets or exercise routines. There are no silver bullets, but there may be an occasional nugget of truth among the gimmicks and marketing.

Inspired by a post about daily work journals, I have found one tiny little trick that has actually worked for me. It hasn’t transformed my life or quadrupled my productivity. It has made me a touch more aware of how I spend my time.

Every weekday at 4:45pm, I get a gentle reminder from Slack, the chat system we use at work. It looks like this:

The #retrospectives text is a link to a channel in Slack that is available to others to read, but where they won’t be bothered by my updates (unless they opt-in). I click the link and write a quick bullet-list summary of what I have done that day, starting with “Today, I:”. It usually looks something like this:

Screenshot of a daily work log

My first such post was on August 16, 2016. To my surprise, I have stuck with it. As of mid-February, about six months later, I have posted 134 entries – one for every day I have worked.

What’s the point of writing about what you’ve already done each day? It serves several purposes for me. Most importantly, the ritual reminds me to pause and reflect (very briefly) on what I accomplished that day. This simple act makes me a bit more mindful of how I spend my time and energy. The log also proves useful for any kind of retroactive reporting (When did I start working on project X? How many days in October did I spend on client Y?).

It may also be helpful in 10,000 years, when aliens are trying to reconstruct what daily life was like for a 2000-era web designer.

February 13, 2017

Emacs: Initializing code files with a template

Part of being a programmer is having an urge to automate repetitive tasks.

Every new HTML file I create should include some boilerplate HTML, like <html><head></head><body></body></html>. Every new Python file I create should start with #!/usr/bin/env python, and most of them should end with an if __name__ == "__main__": clause. I get tired of typing all that, especially the dunderscores and slash-greater-thans.

Long ago, I wrote an emacs function called newhtml to insert the boilerplate code:

(defun newhtml ()
  "Insert a template for an empty HTML page"
  (interactive)
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title></title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1></h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n")
  (forward-line -11)
  (forward-char 7)
  )

The motion commands at the end move the cursor back to point in between the <title> and </title>, so I'm ready to type the page title. (I should probably have it prompt me, so it can insert the same string in title and h1, which is almost always what I want.)

That has worked for quite a while. But when I decided it was time to write the same function for python:

(defun newpython ()
  "Insert a template for an empty Python script"
  (interactive)
  (insert "#!/usr/bin/env python\n"
          "\n"
          "\n"
          "\n"
          "if __name__ == '__main__':\n"
          "\n"
          )
  (forward-line -4)
  )
... I realized that I wanted to be even more lazy than that. Emacs knows what sort of file it's editing -- it switches to html-mode or python-mode as appropriate. Why not have it insert the template automatically?

My first thought was to have emacs run the function upon loading a file. There's a function with-eval-after-load which supposedly can act based on file suffix, so something like (with-eval-after-load ".py" (newpython)) is documented to work. But I found that it was never called, and couldn't find an example that actually worked.

But then I realized that I have mode hooks for all the programming modes anyway, to set up things like indentation preferences. Inserting some text at the end of the mode hook seems perfectly simple:

(add-hook 'python-mode-hook
          (lambda ()
            (electric-indent-local-mode -1)
            (font-lock-add-keywords nil bad-whitespace)
            (if (= (buffer-size) 0)
                (newpython))
            (message "python hook")
            ))

The (= (buffer-size) 0) test ensures this only happens if I open a new file. Obviously I don't want to be auto-inserting code inside existing programs!

HTML mode was a little more complicated. I edit some files, like blog posts, that use HTML formatting, and hence need html-mode, but they aren't standalone HTML files that need the usual HTML template inserted. For blog posts, I use a different file extension, so I can use the elisp string-suffix-p to test for that:

  ;; string-suffix-p is like Python's endswith
  (if (and (= (buffer-size) 0)
           (string-suffix-p ".html" (buffer-file-name)))
      (newhtml))

I may eventually find other files that don't need the template; if I need to, it's easy to add other tests, like the directory where the new file will live.

A nice timesaver: open a new file and have a template automatically inserted.

February 10, 2017

Accelerated compositing in WebKitGTK+ 2.14.4

The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor, drastically improving accelerated compositing performance. However, the threaded compositor required accelerated compositing to always be enabled, even for non-accelerated content. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

  • Memory usage increase: OpenGL contexts use a lot of memory, and since the compositor runs in the web process, we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in Epiphany quickly noticed how much more memory was required.
  • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything stays in sync: the web view in the UI process, the OpenGL viewport, and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
  • Rendering issues: Some people reported rendering artifacts, or even nothing rendered at all. In most cases these were not issues in WebKit itself, but in the graphics driver or library. It’s quite difficult for a general-purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

Because of these issues, people started to use different workarounds. Some people, and even applications like Evolution, started to use the WEBKIT_DISABLE_COMPOSITING_MODE environment variable, which was never meant for users, but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated that some people using old hardware might have problems. However, it’s a code path that is not tested at all, and it will be removed for sure in 2.18.

All these issues are not really specific to the threaded compositor, but to the fact that it forced accelerated compositing mode to always be enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports force accelerated compositing mode too. Other ports use UI-side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The requirement to force accelerated compositing mode came from the switch to coordinated graphics, because, as I said, the other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix, or at least improve, the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something we could do in the stable branch, and it would improve things, at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

We recommend that all 2.14 users upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow setting a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

We really hope this new release and the upcoming 2.16 will work much better for everybody.

February 08, 2017

Helping new users get on IRC, Part 2

Fedora Hubs

Where our story began…

You may first want to check out the first part of this blog post, Helping new users get on IRC. We’ll wait for you here. 🙂

A simpler way to choose a nick

(Relevant ticket: https://pagure.io/fedora-hubs/issue/283)

So Sayan kindly reviewed the ticket with the irc registration mockups in it and had some points of feedback about the nick selection process (original mockup shown below:)

Critical Feedback on Original Mockup

  • The layout of a grid of nicks to choose from invited the user to click on all of them, even if that wasn’t in their best interest. It drew their attention to a multiplicity of choices rather than focusing them on one they could use to move forward.
  • If the user clicked on even just one nick, they would have to wait for us to check whether it was available. If they clicked on multiple, it could take a long time to get through the dialog. They might give up and not register. (We want them to join and chat, though!)
  • To make it clear which nick they wanted to register, we had the user click on a “Register” button next to every available nick. This meant, necessarily, that the button wasn’t in the lower right corner, the obvious place to look to continue. Users might be confused as to the correct next step.
  • Overall, the screen was more cluttered than it could be.

mockup showing 9 up nick suggestion display

We thought through a couple of alternatives that would meet the goals I had with the initial design, yet still address Sayan’s concerns listed above. Those goals are:

Mo’s goals for the mockup

  • Provide the user with clues as to the standard format of the nicknames (providing an acceptable example can do this.)
  • Give the user ideas in case they just can’t think of any nickname (generating suggestions based on heuristics can help.)
  • Make it very clear which nickname the user is going to continue with and register (in the first mockup, this was achieved by having the register button right next to each nick.)
  • Make it clear to the user that we need to check whether the nick is available after they come up with one. This is important because many websites do this as you type – we can’t, because our availability check is much more expensive (parsing /info in irc!)

New solution

We decided to instead make the screen a simple form field for nick with a button to check availability, as well as a button the user could optionally click on to generate suggested nicks that would populate the field for them. Based on whether or not an available nick was populated in the field, the “Register” button in the lower right corner would be greyed out or active.

Initial view

Nickname field is blank.

mockup of screen with a single form field for nickname input

Nickname suggestion

If you click on the suggest button, it’ll fill the form field with a suggestion for you:

mockup of screen showing the suggest nickname button populating the nickname field

Checking nick availability

Click on the “Check availability” button, and it’ll fade out and a spinner will appear letting you know that we’re checking whether or not the nick is available (in the backend, querying Freenode nickserv or doing a /info on the nick.)

mockup showing nickname availability checking spinner

Nickname not available

If the nickname’s not available, we let you know. Edit the form field or click on the suggest button to try again and have the “Check availability” button appear.

mockup showing a not available message if a nickname fails the availability check

Nickname available

Hey, your nick’s not taken! The “Register” button in the lower right lights up and you’re ready to move forward if you want.

mockup showing the register button activating when the user's input nickname is in fact available

Taking a verification code instead

I learned a lesson I already knew – I should have known better but didn’t! 🙂 I assumed that when you provide your email address to freenode, the email they send back has a link to click on to verify your email. I knew I should go through the process myself to be sure what the email said, what it looked like, etc., but I didn’t before I designed the original screen based on a faulty assumption. Here is the faulty-assumption screen:

Original version of the email confirmation mockup which tells users to click a link in the email that doesn't exist.

I knew I should go through the process to verify some things about my understanding of it, though (hey, it’s been years and years since I registered my own nick and email with freenode NickServ.) I got around to it, and here’s what I got (with some details redacted for privacy, irc nick is $IRCNICK below:)

From "freenode" <noreply.support@freenode.net>
To "$IRCNICK" <email@example.com>
Date Mon, 06 Feb 2017 19:35:35 +0000
Subject freenode Account Registration

$IRCNICK,

In order to complete your account registration, you must type the following
command on IRC:

/msg NickServ VERIFY REGISTER $IRCNICK qbgcldnjztbn

Thank you for registering your account on the freenode IRC network!

Whoopsie! You’ll note the email has no link to click. See! Assumptions that have not been verified == bad! Burned Mo, burned!

So here’s what it looks like now. I added a field for the user to provide the verification code, as well as some hints to help them identify the code in the email. In the process, I also cut down the original text significantly, since there is a lot more that has to go on the screen now. I should have cut the text down even without this excuse (the more text, the less of it gets read):

new mockup for email verification showing a field for entering the verification code

 

I still need to write up the error cases here – what happens if the verification code gets rejected by NickServ or if it’s too long, has invalid characters, etc.

Handling edge cases

(Relevant ticket: https://pagure.io/fedora-hubs/issue/318)

From both Twitter comments and IRC messages you sent me after I blogged the first design, I realized I needed to gather some data about the nickname registration process on Freenode. (Thank you!) I was super concerned about the fragility of the approach of slapping a UI on top of a process we don’t own or control. For example, the verification email freenode sends, which a user must act on within 24 hours to keep their nick – we can’t re-send that mail, so how do we handle this in a way that doesn’t break?

Even though I was a little intimidated (I forget that freenode isn’t like EFnet,) I popped into the #freenode admin channel and asked a bunch of questions to clear things up. The admins are super-helpful and nice, and they cleared everything up. I learned a few things:

  • A user is considered active or not based on the amount of time that has passed since they have been authenticated / identified with freenode NickServ.
  • After the user initially registers with NickServ, they are sent an email from “freenode Account Registration” <noreply.support@freenode.net> with a message that contains a verification code they need to send to NickServ to verify their email address.
  • If you don’t receive the email from freenode, you can drop the nick, take it again and try again with another email address.
  • While there is no formal freenode privacy policy for the email address collection, they confirmed they are only used for password recovery purposes and are not transmitted to third parties for any purpose.
  • If a nickname hasn’t been actively identified with NickServ for 10 weeks, it is considered “expired.” Identifying to NickServ with that nick and password will still work indefinitely until either the DB is purged (a regular maintenance task) or another user requests the nick and takes it over:
    • The DB purges don’t happen right away, so an expired nick won’t be removed on day 1 of week 11, but it’s vulnerable to purge from that point forward. Purges happen a couple times a year or so.
    • If another user wants to take over an expired nick that has not been purged, they can message an admin to have the nick freed for them to claim.

This was mostly good news, because being identified to NickServ means you’re active. Since we have an IRC bouncer (ircb) under the covers keeping users identified, the likelihood of their sitting around inactive for 10 weeks is far less. The possibility that they actually lose their nick is limited to an opportunist requesting it and taking it over or bad timing with a DB purge. This is a mercifully narrow case.

So here are the edge cases we have to worry about from this:

Lost nickname

These cases result in the user needing to register a new nickname.

  • User hasn’t logged into Hubs for > 10 weeks, and circumstances (netsplit?) kept them from being identified. Their nick was taken by another user.
  • User didn’t verify email address, and their nick was taken by another user.

Need to re-register nickname

These cases result in the user needing to re-register their nickname.

  • User hasn’t logged into Hubs for > 10 weeks, and circumstances kept them from being identified. Their nick was purged from the DB but is still available.
  • User didn’t verify email address, and their nick was purged from DB but is still available.

Handling lost nicks

If we can’t authenticate the user and their nickname, we’ll disable chat hubs-wide. IRC widgets on hubs will appear like so, to prompt the user to re-register:

IRC widget with re-register nag

If the user happens to visit the user settings panel, they’ll also see a prompt to re-register with the IRC feature disabled:

mockup of the user settings panel showing irc disabled with a nag to re-register nickname

Re-registration process

The registration process should appear the same as in the initial user registration flow, with a message at the top indicating which state the user is in (if their nick was DB-purged and is now available, let them know so they can re-register the same nick; if someone else grabbed it, let them know so they know to make a new one.) Here’s what this will look like:

nickname registration screen with a message showing the expired nickname is still available

 

The cases I forgot

What if the user already had a registered nickname before Hubs?  (This will be the vast majority of Hubs users when we launch!) I kind of thought about this, and assumed we’d slurp the user’s nickname in from FAS and prompt them for their password at some point, and forgot about it until Sayan mentioned it in our meeting this morning. There are two cases here, actually:

  • User has a nickname registered with NickServ already that they’ve provided to FAS. We need to provide them a way to enter their password so we can authenticate them using Hubs.
  • User has a nickname registered with NickServ already that is not in FAS. We need to let them provide their nickname/password so we can authenticate them.

I haven’t mocked this up yet. Next time! 🙂

Initial set of mockups for this.

Feedback Welcome!

So there you have it in today’s installation of Hubs’ IRC feature – a pretty major iteration on a design based on feedback, some research into how freenode handles registration, some mistakes (not verifying the email registration process first-hand, forgetting some cases), and additional mockups.

Keep the feedback coming – as you can see here, it’s super-helpful and gets applied directly to the design. 🙂

Made with Krita 2016: The Artbooks Have Arrived!

Made With Krita 2016 is now available! This morning the printer delivered 250 copies of the first book filled with art created in Krita by great artists from all around the world. We immediately set to work to send out all pre-orders, including the ones that were a kickstarter reward.

The books themselves are gorgeous. The artwork is great and varied, of course, but the printer did a good job on the colors, too — helped by the excellent way the open source desktop publishing application Scribus prepares PDFs for printing. The picture doesn’t do it justice, since it was taken with an old phone…

Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist. Get your rare first edition now, an essential addition to every self-respecting bookshelf! The book is professionally printed on 130 grams paper and softcover bound in signatures.

The cover illustration is by Odysseas Stamoglou. The inner artwork features Arrianne Criseyde Pascual, Baukje Jagersma, Beelzy, Chewsome, David Revoy, Enrico Guarnieri, Eric Lee, Filipe Ferreira, Justin Nichol, Kesbet Tree, Livio Fania, Liz de Souza, Matt Preece, Melissa Lipan, Michael Bowling, Mozart Couto, Naghree Greenskin, Neotheta, Nivailis, Paolo Puggioni, R.J. Quiralta, Radian 1, Raghukamath, Ramón Miranda, Reine, Sylvain Boussiron, William Thorup, Elésiane Huve, Amelia Hamrick, Danilo Junior, Ivan Aros, Jennifer Reuter, Karen Kaye Llamas, Lucas Ribeiro, Motion Arc Foundry, Odysseas Stamoglou, Sylvia Ritter, Timothée Giet, Tony Jennison, Tyson Tan, and Wayne Parker.

Made with Krita 2016

Made with Krita 2016

Made with Krita 2016 is 19.95€ excluding VAT in the European Union, excluding shipping. Shipping is 11.25€ outside the Netherlands and 3.65€ inside the Netherlands.


New fwupd release, and why you should buy a Dell

This morning I released the first new release of fwupd on the 0.8.x branch. This has a number of interesting fixes, but more importantly adds the following new features:

  • Adds support for Intel Thunderbolt devices
  • Adds support for some Logitech Unifying devices
  • Adds support for Synaptics MST cascaded hubs
  • Adds support for the Altus-Metrum ChaosKey device
  • Adds Dell-specific functionality to allow other plugins to turn on TBT/GPIO

Mario Limonciello from Dell has worked really hard on this release, and I can say with conviction: If you want to support a hardware company that cares about Linux — buy a Dell. They seem to be driving the importance of Linux support into their partners and suppliers. I wish other vendors would do the same.

February 07, 2017

Refugee Hope Box

I’m not sure that I’ve ever posted anything non-Fedora / Linux / tech in this blog since I started it over 10 years ago.

My daughter’s school is running a refugee hope box drive. The boxes are packed full of toiletries and other supplies for refugee children. We will drop our box off at school and it will get shipped to the Operation Refugee Child program, where its contents will be packed into a backpack and delivered to a child in the refugee camps. We decided to pack a box for a teenage girl. It includes everything from personal toiletries, to non-perishable snacks, to some fun things like gel pens, markers, a journal, and Lip Smackers.

We explained to our daughter that there are kids like her who got kicked out of their home by bad guys (“Like the Joker?” she asked. Yep, bad guys like him.) There’s one girl we are going to try to help – since she is far away from home and has almost nothing, we are going to help by sending some supplies for her. My daughter loved the idea and was really into it. We spent most of our Saturday this past weekend getting supplies out and about with the kids, until the kids kind of melted down (nap, shopping cart fatigue, etc.) and we had to head back home.

We were so close to being finished but needed a few more items to finish it up, so I set up an Amazon wishlist for the items remaining and posted it to Twitter. I figured other folks might want to go in on it with us and help, and I could always pick up anything else remaining later this week.

It seriously took 22 minutes to get all of the items left to order purchased. I’m totally floored by everyone’s generosity. Our community is awesome. Thank you.

If you would like to help, you can start your own box or buy an item off of the central organization’s Amazon wishlist.

February 06, 2017

Open Desktop Review System : One Year Review

This weekend we had the 2,000th review submitted to the ODRS review system. Every month we’re getting an additional ~300 reviews and about 500,000 requests for reviews from the system. The reviews that have been contributed are in 94 languages, and from 1387 different users.

Most reviews have come from Fedora (which installs GNOME Software as part of the default workstation) but other distros like Debian and Arch are catching up all the time. I’d still welcome KDE software center clients like Discover and Apper using the ODRS, although we do have quite a lot of KDE software reviews submitted using GNOME Software.

Out of ~2000 reviews just 23 have been marked as inappropriate, of which I agreed with 7 (inappropriate is supposed to be swearing or abuse, not just being unhelpful) and those 7 were deleted. The mean time between a review being posted that is actually abuse and it being marked as such (or me noticing it in the admin panel) is just over 8 hours, which is certainly good enough. In the last few months 5523 people have clicked the “upvote” button on a review, and 1474 people clicked the “downvote” button on a review. Although that’s less voting than I hoped for, it’s certainly enough to give good quality sorting of reviews to end users in most locales. If you have a couple of hours on your hands, gnome-software --mode=moderate is a great way to upvote/downvote a lot of reviews in your locale.

So, onward to 3,000 reviews. Many thanks to those who submitted reviews already — you’re helping new users who don’t know what software they should install.

Rosy Finches

Los Alamos is having an influx of rare rosy-finches (which apparently are supposed to be hyphenated: they're rosy-finches, not finches that are rosy).

[Rosy-finches] They're normally birds of the snowy high altitudes, like the top of Sandia Crest, and quite unusual in Los Alamos. They're even rarer in White Rock, and although I've been keeping my eyes open I haven't seen any here at home; but a few days ago I was lucky enough to be invited to the home of a birder in town who's been seeing great flocks of rosy-finches at his feeders.

There are four types, of which three have ever been seen locally, and we saw all three. Most of the flock was brown-capped rosy-finches, with two each of black rosy-finches and gray-capped rosy-finches. The upper bird at right, I believe, is one of the blacks, but it might be a gray-capped. They're a bit hard to tell apart. In any case, pretty birds, sparrow-sized with nice head markings and a hint of pink under the wing, and it was fun to get to see them.

[Roadrunner] The local roadrunner also made a brief appearance, and we marveled at the combination of high-altitude snowbirds and a desert bird here at the same place and time. White Rock seems like much better roadrunner territory, and indeed they're sometimes seen here (though not, so far, at my house), but they're just as common up in the forests of Los Alamos. Our host said he only sees them in winter; in spring, just as they start singing, they leave and go somewhere else. How odd!

Speaking of birds and spring, we have a juniper titmouse determinedly singing his ray-gun song, a few house sparrows are singing sporadically, and we're starting to see cranes flying north. They started a few days ago, and I counted several hundred of them today, enjoying the sunny and relatively warm weather as they made their way north. Ironically, just two weeks ago I saw a group of about sixty cranes flying south -- very late migrants, who must have arrived at the Bosque del Apache just in time to see the first northbound migrants leave. "Hey, what's up, we just got here, where ya all going?"

A few more photos: Rosy-finches (and a few other nice birds).

We also have a mule deer buck frequenting our yard, sometimes hanging out in the garden just outside the house to drink from the heated birdbath while everything else is frozen. (We haven't seen him in a few days, with the warmer weather and most of the ice melted.) We know it's the same buck coming back: he's easy to recognize because he's missing a couple of tines on one antler.

The buck is a welcome guest now, but in a month or so when the trees start leafing out I may regret that as I try to find ways of keeping him from stripping all the foliage off my baby apple tree, like some deer did last spring. I'm told it helps to put smelly soap shavings, like Irish Spring, in a bag and hang it from the branches, and deer will avoid the smell. I will try the soap trick but will probably combine it with other measures, like a temporary fence.

February 03, 2017

Fri 2017/Feb/03

  • Algebraic data types in Rust, and basic parsing

    Some SVG objects have a preserveAspectRatio attribute, which they use to let you specify how to scale the object when it is inserted into another one. You know when you configure the desktop's wallpaper and you can set whether to Stretch or Fit the image? It's kind of the same thing here.

    Examples of preserveAspectRatio from the SVG spec

    The SVG spec specifies a simple syntax for the preserveAspectRatio attribute; a valid one looks like "[defer] <align> [meet | slice]". An optional defer string, an alignment specifier, and an optional string which can be meet or slice. The alignment specifier can be any one of these strings:

    none
    xMinYMin
    xMidYMin
    xMaxYMin
    xMinYMid
    xMidYMid
    xMaxYMid
    xMinYMax
    xMidYMax
    xMaxYMax

    (Boy oh boy, I just hate camelCase.)

    The C code in librsvg would parse the attribute and encode it as a bitfield inside an int:

    #define RSVG_ASPECT_RATIO_NONE (0)
    #define RSVG_ASPECT_RATIO_XMIN_YMIN (1 << 0)
    #define RSVG_ASPECT_RATIO_XMID_YMIN (1 << 1)
    #define RSVG_ASPECT_RATIO_XMAX_YMIN (1 << 2)
    #define RSVG_ASPECT_RATIO_XMIN_YMID (1 << 3)
    #define RSVG_ASPECT_RATIO_XMID_YMID (1 << 4)
    #define RSVG_ASPECT_RATIO_XMAX_YMID (1 << 5)
    #define RSVG_ASPECT_RATIO_XMIN_YMAX (1 << 6)
    #define RSVG_ASPECT_RATIO_XMID_YMAX (1 << 7)
    #define RSVG_ASPECT_RATIO_XMAX_YMAX (1 << 8)
    #define RSVG_ASPECT_RATIO_SLICE (1 << 30)
    #define RSVG_ASPECT_RATIO_DEFER (1 << 31)

    That's probably not the best way to do it, but it works.

    The SVG spec says that the meet and slice values (represented by the absence or presence of the RSVG_ASPECT_RATIO_SLICE bit, respectively) are only valid if the value of the align field is not none. The code has to be careful to ensure that condition. Those values specify whether the object should be scaled to fit inside the given area, or stretched so that the area slices the object.
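    To make the meet/slice distinction concrete, here is a minimal Rust sketch (hypothetical demo code, not librsvg's actual implementation): meet uses the smaller of the two scale factors so the whole object fits inside the viewport, while slice uses the larger one so the scaled object covers the whole viewport.

```rust
// Hypothetical sketch of the meet/slice scaling rules (not librsvg's
// real code): given an object size and a viewport size, "meet" picks
// the smaller scale factor (whole object visible), "slice" picks the
// larger one (viewport fully covered, object possibly clipped).
fn scale_factor(obj_w: f64, obj_h: f64, vp_w: f64, vp_h: f64, slice: bool) -> f64 {
    let sx = vp_w / obj_w;
    let sy = vp_h / obj_h;
    if slice { sx.max(sy) } else { sx.min(sy) }
}

fn main() {
    // A 200x100 object going into a 100x100 viewport:
    assert_eq!(scale_factor(200.0, 100.0, 100.0, 100.0, false), 0.5); // meet
    assert_eq!(scale_factor(200.0, 100.0, 100.0, 100.0, true), 1.0);  // slice
    println!("ok");
}
```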

    When translating this C code to Rust, I had two choices: keep the C-like encoding as a bitfield, while adding tests to ensure that indeed none excludes meet|slice; or take advantage of the rich type system to encode this condition in the types themselves.

    Algebraic data types

    If we didn't use a bitfield in C, we could represent a preserveAspectRatio value like this:

    typedef struct {
        gboolean defer;
        
        enum {
            None,
            XminYmin,
            XminYmid,
            XminYmax,
            XmidYmin,
            XmidYmid,
            XmidYmax,
            XmaxYmin,
            XmaxYmid,
            XmaxYmax
        } align;
        
        enum {
            Meet,
            Slice
        } meet_or_slice;
    } PreserveAspectRatio;
    	    

    One would still have to be careful that meet_or_slice is only taken into account if align != None.

    Rust has algebraic data types; in particular, enum variants or sum types.

    First we will use two normal enums; nothing special here:

    pub enum FitMode {
        Meet,
        Slice
    }
    
    pub enum AlignMode {
        XminYmin,
        XmidYmin,
        XmaxYmin,
        XminYmid,
        XmidYmid,
        XmaxYmid,
        XminYmax,
        XmidYmax,
        XmaxYmax
    }

    And the None value for AlignMode? We'll encode it like this in another type:

    pub enum Align {
        None,
        Aligned {
            align: AlignMode,
            fit: FitMode
        }
    }

    This means that a value of type Align has two variants: None, which has no extra parameters, and Aligned, which has two extra values align and fit. These two extra values are of the "simple enum" types we saw above.

    If you "let myval: Align", you can only access the align and fit subfields if myval is in the Aligned variant. The compiler won't let you access them if myval is None. Your code doesn't need to be "careful"; this is enforced by the compiler.

    With this in mind, the final type becomes this:

    pub struct AspectRatio {
        pub defer: bool,
        pub align: Align
    }

    That is, a struct with a boolean field for defer, and an Align variant type for align.

    Default values

    Rust does not let you have uninitialized variables or fields. For a compound type like our AspectRatio above, it would be nice to have a way to create a "default value" for it.

    In fact, the SVG spec says exactly what the default value should be if a preserveAspectRatio attribute is not specified for an SVG object; it's just "xMidYMid", which translates to an enum like this:

    let aspect = AspectRatio {
        defer: false,
        align: Align::Aligned {
            align: AlignMode::XmidYmid,
            fit: FitMode::Meet
        }
    };

    One nice thing about Rust is that it lets us define default values for our custom types. You implement the Default trait for your type, which has a single default() method, and make it return a value of your type initialized to whatever you want. Here is what librsvg uses for the AspectRatio type:

    impl Default for Align {
        fn default () -> Align {
            Align::Aligned {
                align: AlignMode::XmidYmid,
                fit: FitMode::Meet
            }
        }
    }
    
    impl Default for AspectRatio {
        fn default () -> AspectRatio {
            AspectRatio {
                defer: false,
                align: Default::default ()    // this uses the value from the trait implementation above!
            }
        }
    }

    Librsvg implements the Default trait for both the Align variant type and the AspectRatio struct, as it needs to generate default values for both types at different times. Within the implementation of Default for AspectRatio, we invoke the default value for the Align variant type in the align field.
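    Here is a minimal, self-contained sketch of that pattern (types abbreviated; demo code, not librsvg's): once Default is implemented, a caller gets the spec's "xMidYMid meet" value in a single call.

```rust
// Self-contained sketch of the Default pattern described above, with
// the enums abbreviated; hypothetical demo code, not librsvg's.
#[derive(Debug, PartialEq)]
enum FitMode { Meet }

#[derive(Debug, PartialEq)]
enum AlignMode { XmidYmid }

#[derive(Debug, PartialEq)]
enum Align {
    None,
    Aligned { align: AlignMode, fit: FitMode },
}

impl Default for Align {
    fn default() -> Align {
        Align::Aligned { align: AlignMode::XmidYmid, fit: FitMode::Meet }
    }
}

#[derive(Debug, PartialEq)]
struct AspectRatio {
    defer: bool,
    align: Align,
}

impl Default for AspectRatio {
    fn default() -> AspectRatio {
        // Reuses the Default implementation for Align above.
        AspectRatio { defer: false, align: Default::default() }
    }
}

fn main() {
    // The SVG spec's default, "xMidYMid meet", in one call:
    let a: AspectRatio = Default::default();
    assert!(!a.defer);
    assert_eq!(a.align, Align::Aligned { align: AlignMode::XmidYmid, fit: FitMode::Meet });
    println!("ok");
}
```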

    Simple parsing, the Rust way

    Now we have to implement a parser for the preserveAspectRatio strings that come in an SVG file.

    The Result type

    Rust has a FromStr trait that lets you take in a string and return a Result. Now that we know about variant types, it will be easier to see what Result is about:

    #[must_use]
    enum Result<T, E> {
       Ok(T),
       Err(E),
    }

    This means the following. Result is an enum with two variants, Ok and Err. The first variant contains a value of whatever type you want to mean, "this is a valid parsed value". The second variant contains a value that means, "these are the details of an error that happened during parsing".

    Note the #[must_use] tag in Result's definition. This tells the Rust compiler that return values of this type must not be ignored: you can't ignore a Result returned from a function, as you would be able to do in C. And then, the fact that you must see if the value is an Ok(my_value) or an Err(my_error) means that the only way to ignore an error value is to actually write an empty stub to catch it... at which point you may as well write the error handler properly.

    The FromStr trait

    But we were talking about the FromStr trait as a way to parse strings into values! This is what it looks like for our AspectRatio:

    pub struct ParseAspectRatioError { ... }
    
    impl FromStr for AspectRatio {
        type Err = ParseAspectRatioError;
    
        fn from_str(s: &str) -> Result<AspectRatio, ParseAspectRatioError> {
            ... parse the string in s ...
    
            if parsing succeeded {
                return Ok (AspectRatio { ... fields set to the right values ... });
            } else {
                return Err (ParseAspectRatioError { ... fields set to error description ... });
            }
        }
    }

    To implement FromStr for a type, you implement a single from_str() method that returns a Result<MyType, MyErrorType>. If parsing is successful you return the Ok variant of Result with your parsed value as Ok's contents. If parsing fails, you return the Err variant with your error type.

    Once you have that implementation, you can simply call "let my_result = AspectRatio::from_str ("xMidYMid");" and piece apart the Result as with any other Rust code. The language provides facilities to chain successful results or errors so that you don't have nested if()s and such.
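    Here is a hypothetical, self-contained sketch of consuming such a Result — the types below are stand-ins, not librsvg's real ones — showing both an explicit match and a chained combinator:

```rust
use std::str::FromStr;

// Hypothetical stand-in type, just to show how a Result coming out of
// from_str() is consumed; not librsvg's real AspectRatio parser.
#[derive(Debug, PartialEq)]
struct Defer(bool);

#[derive(Debug, PartialEq)]
struct ParseDeferError;

impl FromStr for Defer {
    type Err = ParseDeferError;

    fn from_str(s: &str) -> Result<Defer, ParseDeferError> {
        match s {
            "defer" => Ok(Defer(true)),
            "" => Ok(Defer(false)),
            _ => Err(ParseDeferError),
        }
    }
}

fn main() {
    // Explicit match: the compiler makes us handle both variants.
    match Defer::from_str("defer") {
        Ok(d) => assert_eq!(d, Defer(true)),
        Err(e) => panic!("unexpected parse error: {:?}", e),
    }

    // Combinators chain results without nested ifs:
    let flipped = Defer::from_str("defer").map(|Defer(b)| Defer(!b));
    assert_eq!(flipped, Ok(Defer(false)));

    // Errors come through as the Err variant:
    assert_eq!(Defer::from_str("bogus"), Err(ParseDeferError));
    println!("ok");
}
```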

    Testing the parser

    Rust makes it very easy to write tests. Here are some for our little parser above.

    #[test]
    fn parsing_invalid_strings_yields_error () {
        assert_eq! (AspectRatio::from_str (""), Err(ParseAspectRatioError));
    
        assert_eq! (AspectRatio::from_str ("defer foo"), Err(ParseAspectRatioError));
    }
    
    #[test]
    fn parses_valid_strings () {
        assert_eq! (AspectRatio::from_str ("defer none"),
                    Ok (AspectRatio { defer: true,
                                      align: Align::None }));
    
    assert_eq! (AspectRatio::from_str ("xMidYMid"),
                    Ok (AspectRatio { defer: false,
                                      align: Align::Aligned { align: AlignMode::XmidYmid,
                                                              fit: FitMode::Meet } }));
    }

    Using C-friendly wrappers for those fancy Rust enums and structs, the remaining C code in librsvg now parses and uses AspectRatio values that are fully implemented in Rust. As a side benefit, the parser doesn't use temporary allocations; the old C code built up a temporary list from split()ting the string. Rust's iterators and string slices essentially let you split() a string with no temporary values in the heap, which is pretty awesome.
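    For illustration, split_whitespace() on a &str yields string slices that borrow the original buffer, so no per-token strings are allocated (a minimal sketch, not librsvg's actual parser):

```rust
// Minimal sketch (not librsvg's parser): split_whitespace() yields
// &str slices that borrow the original string, so no per-token
// strings land in the heap.
fn main() {
    let s = "defer xMidYMid meet";

    // Consume the iterator lazily, without collecting anything:
    let mut it = s.split_whitespace();
    assert_eq!(it.next(), Some("defer"));
    assert_eq!(it.next(), Some("xMidYMid"));
    assert_eq!(it.next(), Some("meet"));
    assert_eq!(it.next(), None);
    println!("ok");
}
```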

February 02, 2017

Introducing Neon – a way to Quickly review stuff and share with your friends

Over at silverorange, we’ve been working on a new product called Neon. The goal is to see if we can create compelling reviews with limited input (often from a phone). Our current take boils a review down to a few basic elements:

  1. Title (what are you reviewing)
  2. Photo
  3. Pros & Cons
  4. A rating from 0 to 10
  5. An emoji to represent how you feel about it

You can also optionally add a longer description, a link to where you can buy it, and the price you paid.

For example, here’s a cutting and insightful review I wrote about a mouse pad.

Neon is in a closed alpha right now, which means that anyone can read the reviews, but to create reviews, you need to be invited to try it out. If you’re interested in trying out the alpha, or being notified when it is opened up to a larger audience, you can leave your email at neon.io.

Why I’m a Social Media Curmudgeon (oh, and follow my blog on Twitter)

I wanted to clarify for myself why it is that I don’t use Facebook or (for the most part) Twitter. Brace yourself for self-justification and equivocation.

First caveat: I actually do have a Twitter account (@sgarrity), but I don’t post anything (sort of, more on this later). I use it to follow people.

I don’t dislike Twitter or Facebook.  They are both amazing systems. They both took blogging and messaging and made them way easier on a massive scale. As a professional web designer and developer, I respect the craft with which both Facebook and Twitter have built their platforms. I regularly rely on open-source projects that both companies produce and finance (thanks!).

Messaging and communication are too important to be controlled by a private corporation. For all of their faults, our phone or text messaging services allow portability. If I have a problem with my phone company, I can take my phone number with me to another company. I can talk to someone regardless of what phone company they have chosen. The same is true of the web and of email (as long as you use your own domain name).

I’m not an extremist. I don’t think you’re doing something wrong if you use these services. I would like to see people use more open alternatives, but I understand that for many, the ease and convenience of platforms like Facebook and Twitter are worth the trade-offs.

All of this is to say that you can now follow @aov_blog on Twitter for updates on my Acts of Volition blog posts.

While I’m contradicting myself, I also have a third Twitter account, @steven_reviews, which I created to share reviews for a new site I’m helping to develop and test at work (more on that soon). While I may opt out of these services personally, if there’s a compelling reason for me to use them at work, or my reluctance proves a significant hindrance for those around me, the scales of the trade-offs may tip in a different direction.

Oh, and I also help manage the @silverorangeinc Twitter account as part of my job.

Now, get off my #lawn.

February 01, 2017

darktable 2.2.3 released

we're proud to announce the third bugfix release for the 2.2 series of darktable, 2.2.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.3.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.2.3.tar.xz
1b33859585bf283577680c61e3c0ea4e48214371453b9c17a86664d2fbda48a0  darktable-2.2.3.tar.xz
$ sha256sum darktable-2.2.3.dmg
1ebe9a9905b895556ce15d556e49e3504957106fe28f652ce5efcb274dadd41c  darktable-2.2.3.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.2 can be found below.

Bugfixes:

  • Fix fatal crash when generating preview for medium megapixel count (~16MP) Bayer images
  • Properly subtract black levels: respect the even/odd -ness of the raw crop origin point
  • Collection module: fix a few UI quirks

Krita 3.1.2 released!

Krita 3.1.2, released on February 1st 2017, is the first bugfix release in the 3.1 release series. But there are a few extra new features thrown in for good measure!

Audio Support for Animations

Import audio files to help with syncing voices and music. In the demo on the left, Timothée Giet shows how scrubbing and playback work when working with audio.

  • Available audio formats are WAV, MP3, OGG, and FLAC
  • A checkbox was added in the Render animation dialog to include the audio while exporting
  • See the documentation for more information on how to set it up and use the audio import feature.

Audio is not yet available in the Linux appimages. It is an experimental feature, with no guarantee that it works correctly yet — we need your feedback!

Other New Features

  • Ctrl key continue mode for Outline Selection tool: if you press ctrl while drawing an outline selection, the selection isn’t completed when you lift the stylus from the tablet. You can continue drawing the selection from an arbitrary point.
  • Allow deselection by clicking with a selection tool: you can now deselect with a single click with any selection tool.
  • Added a checkbox for enabling HiDPI to the settings dialog.
  • Removed the export to PDF functionality. It has too many issues right now. (BUG:372439)

There are also a lot of bug fixes. Check the full release notes!

Get the Book!

If you want to see what others can do with Krita, get Made with Krita 2016, the first Krita artbook, now available for pre-order!

Made with Krita 2016

Made with Krita 2016

Give us your feedback!

Almost 1000 people have already filled in the 2017 Krita Survey! Tell us how you use Krita, on what hardware and what you like and don’t like!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.2 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

January 31, 2017

Helping new users get on IRC

Fedora Hubs

Hubs and Chat Integration Basics

Hubs uses Freenode IRC for its chat feature. I talked quite a bit about the basics of how we think this could work (see “Fedora Hubs and Meetbot: A Recursive Tale” for all of the details.)

One case that we have to account for is new Fedora contributors who don’t already have an IRC nick or any experience with IRC. The tricky thing is that we have to get them identified with NickServ, and keep them identified with NickServ seamlessly and automatically after netsplits and other events that would cause them to lose their authentication, without their necessarily being aware that the identification process is going on. NickServ auth is kind of an implementation detail of IRC that I don’t think users, particularly those new to and unfamiliar with IRC, need to be concerned with.

NickServ?

“NickServ? What’s NickServ?” you ask. Well. Different IRC networks have a NickServ or something similar to it.

On IRC, people chat using the same nickname and come to be known by their nickname. For example, I’ve been mizmo on freenode IRC for well over a decade and am known by that name, similarly to how people know me by my email address or phone number. IRC is from the old and trusting days of the internet, however, so there’s nothing in IRC to guarantee that I could keep the nick mizmo if I logged out and someone else logged in using ‘mizmo’ as their nickname! In fact, this is/was a common way to attack or annoy people in IRC – steal their nick.

In comes Nickserv to save the day – it’s a bot of sorts that Freenode runs on its IRC network that registers nicknames and provides an authentication system to password protect those names. Someone can still take your nick if you’re offline, but if you’ve registered it, you can use your password and Nickserv to knock them off so you can reclaim your nick.

Yes, IRC definitely has a kind of a weird and rather quaint authentication system. Our challenge is getting people through it to be able to participate without having to worry about it!

Configuration Questions

“Well, wait,” you ask. “If they aren’t even aware of it, how do they set their nickserv password? What if they want to ‘graduate’ to a non-hubs IRC client and need their nickserv password? What if they want to change their password?”

I considered having Hubs silently auto-generate a nickserv password and manage it on its own, potentially with a way of viewing / changing this password in user settings. I opted to let users create their own password, ultimately deciding that if Hubs silently generated one, users wouldn’t be aware it existed, might end up confused if they tried a different client, and might post their FAS password over non-SSL plaintext IRC…

(Some other config we should allow eventually that we’ve discussed in the weekly hubs meetings – allowing users to use a different IRC bouncer than Hubs’ for IRC, and offering the connection details for our bouncer so they could use Hubs as a client as well as a third party client without issue.)

The Mockups

So here is an attempt to mock up the workflow of setting up IRC for a Hubs user who has never used IRC before, on Hubs or anywhere else. Note these do not address the case of someone new to Hubs who hasn’t enabled the IRC feature but who already has a registered nick that they have not entered into FAS – these mockups will need modification to address that case (namely, a link on the first screen to fill out their Nickserv auth details in settings).

Widget in Context

This is what the as-yet-unactivated IRC widget would look like for a user in context. The user is visiting a team hub, and the admin of that hub has configured an IRC widget to be present by default. To join the chat, the user needs to enable IRC as a feature for their account on Hubs, so the widget offers to do that for them.

mockup of a fedora hubs screen showing a widget on the right hand side for enabling IRC chat

Chatter thumbnails

The top of the widget has a section that has small thumbnails of the avatars of people currently in the room (my thought is in order of who spoke most recently) with a headcount for the total number of people in the room. The main idea behind this is to try to encourage people to join the conversation – maybe they will see a friend’s avatar and feel like the room could be more approachable, maybe we tap into some primal FOMO (Fear Of Missing Out) by alluding to the activity that is happening without revealing it.

Call to action

Next we have a direct call to action, “Enable chat in Hubs to chat with other people in this hub” with an action-oriented button label, “Enable Hubs Chat.” This, I hope, clearly lets the user know what would happen if they clicked the button.

Hiding control

At the bottom, a small link: “Hide this notification.” Sometimes ‘upsell’ nags can be irritating if you have no intention of participating. If someone is sure they do not want to enable IRC in Hubs (perhaps they just want to use their own client and not bother with Hubs for this), this link will let them hide the notification and will ‘roll up’ the IRC widget to take up less space.

Here’s a close-up of the widget:

closeup of the IRC activation widget

Registration Wizard

So once you’ve decided to enable IRC in Hubs, then what?

Since selecting and registering a nick needs, I think, to be a multi-step process (in part because Freenode makes it one), I thought a wizard might be the best approach, so that is how I mocked it up. A rough outline of the wizard steps is as follows:

  • Figure out a nickname you want to use and make sure it’s available
  • Provide email address and password for registration
  • Register nickname
  • Verify email

Choosing a nickname

This is a little weird, because of how IRC works. Let me explain how I would like it to work, and how I think it probably will end up working because of how IRC & nickserv work.

I would like to present the user with a bunch of options to pick from for their nickname. I want to do this because coming up with a clever nickname involves a high cognitive load they may not be up for, and at the very least offering suggestions could be good brain food for making up their own (we offer a way to do that at the bottom of this screen as well).

Ideally, you wouldn’t offer something to someone and then tell them it’s not available. Unfortunately, I don’t think there’s an easy way to check whether or not a suggested nick is available without actually trying to use it on Freenode and seeing if you’re able to use it or if Nickserv scolds you for trying to use someone else’s registered nick. I don’t know if it’s possible to check nick availability in a less expensive way? These designs are based on the assumption that this is the only way to check nick availability:

mockup: irc nick selection

The model here is that we’d use some heuristics based on your name and FAS username to suggest potential nicks, with a freeform option if you have one in mind. If you click on a name, we then check it for availability, displaying a spinner and a short message while we check. This way, we only check the nicks that you’re actually interested in and don’t waste cycles on nicks you have no interest in.

If a nick is available, it turns green and we display a “Register” button. If not, we display a message letting them know it’s not available:

IRC nick availability mockup

Once it’s finished checking on all of the nicks, it might look like this:

IRC nick availability mockup - all lookups complete

Provide email and password for registration

Freenode Nickserv registration requires providing an email address and a password. This screen is where we collect that.

We offer to submit their FAS account email address, but also allow them to freeform the email address they’d prefer to be associated with Freenode. We provide the rationale for why the email address is needed (account recovery) and should probably refer to Freenode’s privacy policy, if it has one, regarding its usage. There are also fields for setting the password.

Verify Email Address

This is the most fragile component of this workflow. Freenode will hold a registration for 24 hours; if you do not confirm your email address in that time, you will lose your registration. Rather than explain all of this, we just ask that users check their email (supplying the address they gave us so they know which account to check) and verify the email address using the link provided.

Problem: We don’t control these emails, freenode does. What if the user doesn’t receive the verification email? We can’t resend the email because we didn’t send it in the first place. No easy answer here. We might need some language to talk about checking your spam folder, and how long it might take (it seems to be pretty quick in testing.) We could let them go ahead and start chatting while they wait for the email, but what would we do if it never gets verified and they lose the registration? Messy. But here’s the mockup:

Screen for user to confirm email address for freenode registration

Finish

This is just a screen to let them know they’re all set. After clicking through this screen, they should be logged into the IRC channel for the hub they initiated the registration flow from.

IRC registration finish screen

Thoughts?

This is a first cut at mocking up this particular flow. I’m actively working on other ones (including post-setup configuration, and turning IRC on/off on individual hubs, which is the same as joining or leaving a channel). If you have any ideas for solving some of the issues I brought up, or any feedback at all, I’d love to hear it!

I find Jeff Atwood’s pragmatic reaction to the Trump presidency to be hopeful in his specific plans and actions. Well said.

January 30, 2017

Do not show up late to a meeting with a coffee.

As a general rule: Do not show up late to a meeting with a coffee.

I did this today, but you never should. The message is clear.

darktable 2.2.2 released

we're proud to announce the second bugfix release for the 2.2 series of darktable, 2.2.2!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.2.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

766d7d734e7bd5a33f6a6932a43b15cc88435c64ad9a0b20410ba5b4706941c2 darktable-2.2.2.tar.xz
52fd0e9a8bb74c82abdc9a88d4c369ef181ef7fe2b946723c5706d7278ff2dfb darktable-2.2.2.dmg
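those are SHA-256 sums, so a quick check on Linux looks like this (assuming the tarball sits in the current directory):

```shell
# Verify the tarball against the published checksum; prints "OK" on success.
echo "766d7d734e7bd5a33f6a6932a43b15cc88435c64ad9a0b20410ba5b4706941c2  darktable-2.2.2.tar.xz" \
  | sha256sum -c -
```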

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.1 can be found below.

New features:

  • color look up table module: include preset for helmholtz/kohlrausch monochrome
  • Lens module: re-enable tiling
  • Darkroom: fix some artefacts in the preview image (not the main view!)
  • DNG decoder: support reading one more white balance encoding method
  • Mac: display an error when too old OS version is detected
  • Some documentation and tooltips updates

Bugfixes:

  • Main view no longer grabs focus when mouse enters it. Prevents accidental catastrophic image rating loss.
  • OSX: fix bauhaus slider popup keyboard input
  • Don't write all XMP when detaching tag
  • OSX: don't do PPD autodetection, gtk did their thing again.
  • Don't show database lock popup when DBUS is used to start darktable
  • Actually delete duplicate's XMP when deleting duplicated image
  • Ignore UTF-8 BOM in GPX files
  • Fix import of LR custom tone-curve
  • Overwrite Xmp rating from raw when exporting
  • Some memory leak fixes
  • Lua: sync XMPs after some tag manipulations
  • Explicitly link against math library

Base Support:

  • Canon PowerShot SX40 HS (dng)
  • Fujifilm X-E2S
  • Leica D-LUX (Typ 109) (4:3, 3:2, 16:9, 1:1)
  • Leica X2 (dng)
  • Nikon LS-5000 (dng)
  • Nokia Lumia 1020 (dng)
  • Panasonic DMC-GF6 (16:9, 3:2, 1:1)
  • Pentax K-5 (dng)
  • Pentax K-r (dng)
  • Pentax K10D (dng)
  • Sony ILCE-6500

Noise Profiles:

  • Fujifilm X-M1
  • Leica X2
  • Nikon Coolpix A
  • Panasonic DMC-G8
  • Panasonic DMC-G80
  • Panasonic DMC-G81
  • Panasonic DMC-G85

January 27, 2017

Making aliases for broken fonts

A web page I maintain (originally designed by someone else) specifies Times font. On all my Linux systems, Times displays impossibly tiny, at least two sizes smaller than any other font that's ostensibly the same size. So the page is hard to read. I'm forever tempted to get rid of that font specifier, but I have to assume that other people in the organization like the professional look of Times, and that this pathologic smallness of Times and Times New Roman is just a Linux font quirk.

In that case, a better solution is to alias it, so that pages that use Times will choose some larger, more readable font on my system. How to do that was in this excellent, clear post: How To Set Default Fonts and Font Aliases on Linux.

It turned out Times came from the gsfonts package, while Times New Roman came from msttcorefonts:

$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
(dpkg -S doesn't find the file because msttcorefonts is a package that downloads a bunch of common fonts from Microsoft. Debian can't distribute the font files directly due to licensing restrictions.)

Removing gsfonts fonts isn't an option; aside from some documents and web pages possibly not working right (if they specify Times or Times New Roman and don't provide a fallback), removing gsfonts takes gnumeric and abiword with it, and I do occasionally use gnumeric. And I like having the msttcorefonts installed (hey, gotta have Comic Sans! :-) ). So aliasing the font is a better bet.

Following Chuan Ji's page, linked above, I edited ~/.config/fontconfig/fonts.conf (I already had one, specifying fonts for the fantasy and cursive web families), and added these stanzas:

    <match>
        <test name="family"><string>Times New Roman</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
    <match>
        <test name="family"><string>Times</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
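Once the stanzas are saved, fc-match is a quick way to confirm the alias resolves (the exact face file reported will vary per system; this is just a sanity check):

```shell
# With the alias in place, both queries should now resolve to DejaVu Serif
# instead of the tiny Nimbus/Times faces.
fc-match Times
fc-match "Times New Roman"
```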

The page says to log out and back in, but I found that restarting firefox was enough. Now I could load up a page that specified Times or Times New Roman and the text is easily readable.

Install openSUSE Tumbleweed + KDE on MacBook 2015

It is pretty easy to install openSUSE Linux as the operating system on a MacBook. However, there are some pitfalls which can cause trouble. This article gives some hints about a dual-boot setup with OS X 10.10 and the (at the time of writing current) openSUSE Tumbleweed 20170104 (oS TW) on a MacBook Pro from early 2015. A recent Linux kernel, like the one in TW, is advisable as it provides better hardware support.

The LiveCD can be downloaded from www.opensuse.org and written with the ImageWriter GUI to a USB stick of ~1GB. I chose the Live KDE one and it ran well on a first test. During boot, after the first sound and the display lighting up, hold the Option/alt key and wait for the disk selection icon. Put the USB key with Linux in a USB port, wait until the removable media icon appears, and select it for boot. For me all went fine: the internal display, sound, touchpad and keyboard were detected and worked well.

After that test, it was a good time to back up all data from the internal flash drive. I wrote a compressed disk image to a stick using the Unix dd command. With that image and the live media I was able to recover in case anything went wrong.

It is not easy to satisfy OS X with its journaled HFS and the newly introduced logical volume layout, which comes with a separate repair partition directly after the main OS partition. That combination is pretty fragile and should not be touched. The rescue partition can be booted with the command key + r pressed. External tools failed for me, so I booted into rescue mode and used OS X's diskutil, or its Disk Utility GUI counterpart. The tool allows splitting the disk into several partitions; the EFI and rescue partitions are hidden in the GUI. The newly created additional partitions can be formatted to exFAT and later be modified for the Linux installation.

One additional HFS partition was created for sharing data between OS X and Linux with the comfortable Unix attributes. The well-known exFAT, used by many bigger USB sticks, is a possible option as well, but needs the exfat-kmp kernel module, which is not installed by default due to Microsoft's patent license policy for the file system. In order to write to HFS from Linux, any HFS partition must have its journal feature switched off. This can be done inside the OS X Disk Utility GUI by selecting the data partition, holding the alt key, and searching in the menu for the disable journaling entry.

After rebooting into the Live media, I clicked on the Install icon on the desktop background, which starts openSUSE's YaST tool. Depending on the available space, it might be a good idea to disable the btrfs filesystem snapshot feature, as it can eat up lots of disk space during each update. Another pitfall is the boot stage: select the secure GrubEFI mode there, as Grub needs special handling for the required EFI boot process. That's it. Finish the install and you should be able to reboot into Linux with the alt key.
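The backup step mentioned above can be sketched with dd like this; the device name /dev/disk0 and the target path are assumptions (double-check yours with diskutil list before running anything):

```shell
# Back up the internal flash drive to a compressed image on an external stick.
# /dev/disk0 and the target path are assumptions -- verify with `diskutil list`.
# (bs=1m is the macOS dd spelling; GNU dd wants bs=1M.)
sudo dd if=/dev/disk0 bs=1m | gzip > /Volumes/Stick/macbook-backup.img.gz

# To restore, reverse the pipe (this overwrites the whole disk!):
# gunzip -c /Volumes/Stick/macbook-backup.img.gz | sudo dd of=/dev/disk0 bs=1m
```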

My MacBook unfortunately has a defect: its Boot Manager is very slow. Erasing and reinstalling OS X did not fix the issue. To circumvent it, I need to reset the NVRAM by pressing alt+cmd+r+p at boot start for around 14 seconds until the display goes dark, hold alt at the next boot sound, select the EFI TW disk in the Apple Boot Manager, and can then go fluently through the boot process. Without that extra step, the keyboard and mouse might not respond in Linux at all, except for the power button. A warm reboot from Linux works fine; OS X does a cold reboot and needs the extra sequence.

KDE’s Plasma needs some configuration to run properly on a high resolution display. Beyond that, additional monitors can be connected and easily configured with the kscreen SystemSettings module. Hibernate works fine. Currently the notebook’s SD slot is ignored, and the facetime camera has no ready oS packages. Battery run time can be extended by spartan power consumption (less brightness, fewer USB devices and pulseaudio -k; check with powertop), but is not too far from OS X anyway.

January 26, 2017

FreeCAD Arch development news

It has been a long time since I posted here. Writing regularly on a blog proves more difficult than I thought. I have had this blog for a long time, but never really tried to make myself write regularly. You look elsewhere a little bit, and when you get back to it, two months have gone by... Since this post is aimed...

Bullet 2.86 with pybullet for robotics, deep learning, VR and haptics

Bullet 2.86 has improved Python bindings, pybullet, for robotics, machine learning and VR; see the pybullet quickstart guide.

Furthermore, the PGS LCP constraint solver has a new option to terminate as soon as the residual (error) is below a specified tolerance (instead of terminating after a fixed number of iterations). There is preliminary support to load some MuJoCo MJCF xml files (see data/mjcf), and haptic experiments with a VR glove. Get the latest release from github here.

January 25, 2017

One worry less

This year, we've got elections in the Netherlands. Which means, I have to choose where my vote goes. And that can be a trifle difficult.

After fifteen years in the free software world, I'm a certified leftie. I'm certainly not going to vote for the conservative party (CDA, formally Christian, been moving into Tea Party territory for a couple of years now), and I'm not going to vote for the Liberal Party (VVD) -- that's only the right party for someone who has got more than half a million in the bank. Let's not even begin to talk about the Dutch Fascist Movement (PVV). The left-liberals (D66) are a bit too anti-religion, and, shockingly, being a sub-deacon in the local Orthodox Church, I don't feel at home there. That leaves, more or less, the Socialist Party, the Labour Party and the United Christian party. The Socialist Party has never impressed me with their policies. That leaves two...

Yeah, you know, I'm a Christian. If someone's got a problem with that, that's their problem. I'm also a socialist. If someone's got a problem with that, that's their problem. If someone thinks I'm an ignorant idiot because of either, that's their problem too.

But today, the Labour Party minister for international cooperation, Lilianne Ploumen, has announced an effort to create a fund to counter Trump's so-called "global gag rule". That means that any United States-funded organization which so much as cooperates with any organization involved in so-called "family planning" will lose its funding. She is working to restore the funding.

News headlines make this all about abortion... which is in any case not something anyone with testicles should concern themselves with. But it isn't just that, and talking only about abortion makes it narrow and easy to attack. As did our local United Christian party, which will never again receive my vote. It's also about education, it's also about contraceptives, it's about helping those Nepali teenage girls who are locked in a cow shed because they're menstruating. It's about helping those girls who get raped by their family get back to school.

It's about making the world a better and safer and healthier place for the girls and women who cannot defend themselves.

And I don't have to worry about my vote anymore. That's settled.

Artistic Constraints

I have moved most of my sharing with the world to the walled gardens of Facebook, Google+ and others because of their convenience, but for an old fart like me it’s way more appropriate to do it the old way. So the thing to share today is quite topical. Mark Ferrari (of Lucasarts fame) shares his experience with 8-bit art and creative constraint. The gold is not so much in what he says as in the art he shares, made over the years, which flourished within those constraints.

8 Bit Constraints

Mark is clearly a master in lighting and none of this trickery would have any appeal if he wasn’t so great in mixing the secondary lights so well, but check out these amazing color cycling demos.

Actual image I found explaining how I anti-aliased in GIMP. Circa 2002.

As far as I ever got with 8bit animation.

January 24, 2017

Changing a website using the developer console

If you need to quickly change a website, you can use a combination of CSS/XPath selectors and a function to hide/remove DOM nodes. I had to find my way through a long list of similar items which was really hard to go through by simply looking at it.

For example, you can simply delete all links you’re not interested in by a simple combination of selector and function:

$x('//li/a[contains(., "not-interesting")]').map(function(n) { n.parentNode.removeChild(n) })

If you’ve made a mistake, reload the website.


We’re doing a User Survey!

While we’re still working on Vector, Text and Python Scripting, we’ve already decided: this year, we want to focus on stabilizing and polishing Krita!

Now, one of the important elements in making Krita stable is bug reports. And we’ve got a lot of those! But with some bug reports, we’re kind of stuck: we cannot figure out what type of hardware or drivers is causing them, so we’re asking for your help.

We’ve made a Krita user survey.

In it, we ask things like what type of hardware you have, and whether you have trouble with certain hardware. That way we can figure out which drivers and hardware are problematic and maybe get workarounds. There’s also some other questions, like what you make with Krita and how you get your Krita news.

January 23, 2017

Testing a GitHub Pull Request

Several times recently I've come across someone with a useful fix to a program on GitHub, for which they'd filed a GitHub pull request.

The problem is that GitHub doesn't give you any link on the pull request to let you download the code in that pull request. You can get a list of the checkins inside it, or a list of the changed files so you can view the differences graphically. But if you want the code on your own computer, so you can test it, or use your own editors and diff tools to inspect it, it's not obvious how. That this is a problem is easily seen with a web search for something like download github pull request -- there are huge numbers of people asking how, and most of the answers are vague and unclear.

That's a shame, because it turns out it's easy to pull a pull request. You can fetch it directly with git into a new branch as long as you have the pull request ID. That's the ID shown on the GitHub pull request page:

[GitHub pull request screenshot]

Once you have the pull request ID, choose a new name for your branch, then fetch it:

git fetch origin pull/PULL-REQUEST_ID/head:NEW-BRANCH-NAME
git checkout NEW-BRANCH-NAME

Then you can view diffs with something like git difftool NEW-BRANCH-NAME..master
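For example, with a hypothetical pull request #1234, the whole round trip might look like this (the ID and branch name are just illustrations):

```shell
# Fetch pull request #1234 (a made-up ID) into a new local branch and try it.
git fetch origin pull/1234/head:pr-1234
git checkout pr-1234

# Compare what the pull request changes relative to master.
git diff master...pr-1234
```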

Easy! GitHub should give a hint of that on its pull request pages.

Fetching a Pull Request diff to apply it to another tree

But shortly after I learned how to apply a pull request, I had a related but different problem in another project. There was a pull request for an older repository, but the part it applied to had since been split off into a separate project. (It was an old pull request that had fallen through the cracks, and as a new developer on the project, I wanted to see if I could help test it in the new repository.)

You can't pull a pull request that's for a whole different repository. But what you can do is go to the pull request's page on GitHub. There are 3 tabs: Conversation, Commits, and Files changed. Click on Files changed to see the diffs visually.

That works if the changes are small and only affect a few files (which fortunately was the case this time). It's not so great if there are a lot of changes or a lot of files affected. I couldn't find any "Raw" or "download" button that would give me a diff I could actually apply. You can select all and then paste the diffs into a local file, but you have to do that separately for each file affected. It might be, if you have a lot of files, that the best solution is to check out the original repo, apply the pull request, generate a diff locally with git diff, then apply that diff to the new repo. Rather circuitous. But with any luck that situation won't arise very often.

Update: thanks very much to Houz for the solution! (In the comments, below.) Just append .diff or .patch to the pull request URL, e.g. https://github.com/OWNER/REPO/pull/REQUEST-ID.diff which you can view in a browser or fetch with wget or curl.
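Putting that together, fetching and applying such a diff might look like this (OWNER, REPO and REQUEST-ID are the placeholders from the pull request URL; git apply --check does a dry run first):

```shell
# Download the pull request as a plain unified diff.
curl -L -o pr.diff https://github.com/OWNER/REPO/pull/REQUEST-ID.diff

# Dry-run first; apply only if the diff fits the current tree.
git apply --check pr.diff && git apply pr.diff
```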

Interview with Adam

Could you tell us something about yourself?

Good day. My name is Adam and I am a 26-year-old person who is trying to learn how to draw…

Do you paint professionally, as a hobby artist, or both?

Hobby 🙂

What genre(s) do you work in?

I try to draw everything, I don’t want to get stuck in drawing only one thing over and over again and leave behind everything else.

Whose work inspires you most — who are your role models as an artist?

People who inspired me when I was younger … much younger … were Satoshi Urushihara, Masamune Shirow and the DragonBall artists.

How and when did you get to try digital painting for the first time?

My first adventure with digital painting was about 4-5 years ago, when I bought my first small Wacom Bamboo tablet that I am still using.

How did you find out about Krita?

A friend of mine mentioned it.

What was your first impression?

I uninstalled it and then came back after a while 😉

What do you love about Krita?

Everything!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe make it less laggy, but that can be the fault of my laptop.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The featured image. Not really my favourite, but I don’t have anything else worth showing!

What techniques and brushes did you use in it?

It was all random without any technique! I used Pencil 2B and pencil texture, nothing more or less.

Anything else you’d like to share?

Have a nice day everyone and let the Krita grow 😀

January 19, 2017

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e., you created the map with m = Basemap(...)), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon, a list of lists of coordinates (allowing for discontiguous outlines), and others as a MultiPolygon, a list of lists of lists (I'm not sure why, since the Polygon format already allows for discontiguous boundaries).

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists or have only a single coordinate pair; they include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. And I could have gotten those coordinates from the census shapefiles; but as long as I needed the census shapefile anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

January 18, 2017

Comics page…

A little post to tell you that I finally added a page on my website with all my comics. Better late than never.
They were all released previously on my blog, and some of them were missing the license info, which is now on this page. Also, I re-licensed some pages from CC BY-NC-ND to CC BY-SA some time ago in a blog post; this page makes that more obvious.

Link to my comics, enjoy 🙂

(… yes, I know, I really should update my website…)

January 14, 2017

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and counties outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50), but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the index i where m.states_info[i]["NAME"] == "California". The shape coordinates will then be in m.states[i] (in Basemap map coordinates, not latitude/longitude).
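To make the bookkeeping concrete, here's a tiny sketch of that name-to-shape lookup using stand-in lists (made-up data; the real m.states_info and m.states come from readshapefile):

```python
# Stand-ins for m.states_info and m.states, which readshapefile creates.
# Each shape entry is a list of (x, y) pairs in map coordinates.
states_info = [{"NAME": "California"}, {"NAME": "Nevada"}]
states = [[(0.0, 0.0), (1.0, 0.5)], [(2.0, 2.0), (3.0, 2.5)]]

def shape_for(name):
    """Return the shape whose metadata entry matches the given name."""
    for i, info in enumerate(states_info):
        if info["NAME"] == name:
            return states[i]
    return None

print(shape_for("California"))  # [(0.0, 0.0), (1.0, 0.5)]
```

The important point is that the two lists are parallel: the metadata at index i describes the shape at index i.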

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"], that matches some state's m.states_info[j]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    import csv

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        red = float(county["results.trumpd"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None
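To see why the 0.75 cutoff works, here are a few made-up comparisons (my own examples, mirroring the mismatches above) using the same lower-cased ratio fuzzy_find computes:

```python
from difflib import SequenceMatcher

def ratio(a, b):
    # Same comparison fuzzy_find makes: case-folded similarity ratio.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# An accent difference: nearly identical strings score close to 1.0.
print(ratio("Dona Ana, New Mexico", "Doña Ana, New Mexico") > 0.75)   # True

# Different capitalization is free, since we lower-case first.
print(ratio("PLYMOUTH, Massachusetts", "Plymouth, Massachusetts"))    # 1.0

# Unrelated names fall well below the cutoff.
print(ratio("Plymouth, Massachusetts", "Lac qui Parle, Minnesota") > 0.75)  # False
```

The cutoff is empirical: high enough to reject unrelated counties, low enough to accept suffix and spelling variations.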

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place. [Blue-red-purple 2016 election map]
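Note that the tuple-unpacking lambda above is Python 2 syntax. A version of the same shift that works in both Python 2 and 3 might look like this (a sketch; the offsets are still whatever you determine empirically):

```python
def translate(shape, dx, dy):
    """Shift every (x, y) point in a shape by (dx, dy), in map coordinates."""
    return [(x + dx, y + dy) for (x, y) in shape]

print(translate([(0, 0), (10, 5)], 100, -50))  # [(100, -50), (110, -45)]
```

In the Hawaii case you'd call translate(countyseg, 5750000, -1400000) before building the Polygon.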

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 12, 2017

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third-party information. It's also scraped from the New York Times, and it includes the scraping code so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48, plus Hawaii, but not the territories, and the election data for all the Alaska counties is missing.

You can get it either from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other OpenDataSoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread fake news and misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: there are a lot of people searching for this information and unable to find it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians cares any more. Without that, we're leaving ourselves open to fake news and fake data.

New Year, New Raw Samples Website

A replacement for rawsamples.ch

Happy New Year, and I hope everyone has had a wonderful holiday!

We’ve been busy working on various things ourselves, including migrating RawPedia to a new server as well as building a replacement raw sample database/website to alleviate the problems that rawsamples.ch was having…

rawsamples.ch Replacement

Rawsamples.ch is a website with the goal to:

…provide RAW-Files of nearly all available Digitalcameras mainly to software-developers. [sic]

It was created by Jakob Rohrbach and had been running since March 2007, having amassed over 360 raw files in that time from various manufacturers and cameras. Unfortunately, back in 2016 the site was hit with an SQL injection that ended up corrupting the database for the Joomla install that hosted the site. To compound the pain, there were no database backups… :(

On the good side, the PIXLS.US community has some dangerous folks with idle hands. Our friendly, neighborhood @andabata (Kees Guequierre) had some time off at the end of the year and a desire to build something. You may know @andabata as the fellow responsible for the super-useful dtstyle website, which is chock full of darktable styles to peruse and download (if you haven’t heard of it before – you’re welcome!). He’s also my go-to for macro photography and is responsible for this awesome image used on a slide for the Libre Graphics Meeting:

PIXLS.US LGM Slide

Luckily, he decided to build a site where contributors could upload sample raw files from their cameras for everyone to use – particularly developers. We downloaded the archive of the raw files kept at rawsamples.ch to include with files that we already had. The biggest difference between the files from rawsamples.ch and raw.pixls.us is the licensing. The existing files, and the preference for any new contributions, are licensed as Creative Commons Zero - Public Domain (as opposed to CC-BY-NC-SA).

After some hacking, with input and guidance from darktable developer Roman Lebedev, the site was finally ready. The repository for it can be found on GitHub: raw.pixls.us repo.

raw.pixls.us

The site is now live at https://raw.pixls.us.

You can look at the submitted files and search/sort through all of them (and download the ones you want).

In addition to browsing the archive, it would be fantastic if you were able to supplement the database by uploading sample images. Many of the files from the rawsamples.ch archive are licensed CC-BY-NC-SA, but we’d rather have the files licensed Creative Commons Zero - Public Domain. CC0 is preferable because if the sample raw files are separated from the database, they can safely be redistributed without attribution. So if you have a camera that is already in the list with the more restrictive license, then please consider uploading a replacement for us!

We are looking for shots that are:

  • Lens mounted on the camera
  • Lens cap off
  • In focus
  • Properly exposed (not over/under)
  • Landscape orientation
  • Licensed under the Creative Commons Zero

We are not looking for:

  • Series of images with different ISO, aperture, shutter, wb, or lighting
    (Even if it’s a shot of a color target)
  • DNG files created with Adobe DNG Converter

Please take a moment and see if you can provide samples to help the developers!

Wed 2017/Jan/11

  • Reproducible font rendering for librsvg's tests

    The official test suite for SVG 1.1 consists of a bunch of SVG test files that use many of the features in the SVG specification. The test suite comes with reference PNGs: your SVG renderer is supposed to produce images that look like those PNGs.

    I've been adding test files from that test suite to librsvg as I convert things to Rust, and also when I refactor code that touches code for a particular kind of SVG element or filter.

    The SVG test suite is not a drop-in solution, however. The spec does not specify pixel-exact rendering. It doesn't mandate any specific kind of font rendering, either. The test suite is meant for eyeballing whether tests render correctly, and each test has instructions on what to look for; it is not meant for automatic testing.

    The test files include text elements, and the font for those texts is specified in an interesting way. SVG supports referencing "SVG fonts": your image_with_text_in_it.svg can specify that it will reference my_svg_font.svg, and that file will have individual glyphs defined as normal SVG objects. "You draw an a with this path definition", etc.

    Librsvg doesn't support SVG fonts yet. (Patches appreciated!) As a provision for renderers which don't support SVG fonts, the test suite specifies fallbacks with well-known names like "sans-serif" and such.

    In the GNOME world, "sans-serif" resolves to whatever Fontconfig decides. Various things contribute to the way fonts are resolved:

    • The fonts that are installed on a particular machine.

    • The Fontconfig configuration that is on a particular machine: each distro may decide to resolve fonts in slightly different ways.

    • The user's personal ~/.fonts, and whether they are running gnome-settings-daemon and whether it monitors that directory for Fontconfig's perusal.

    • Phase of the moon, checksum of the clouds, polarity of the yak fields, etc.

    For silly reasons, librsvg's "make distcheck" doesn't work when run as a user; I need to run it as root. And as root, my personal ~/.fonts doesn't get picked up and also my particular font rendering configuration is different from the system's default (why? I have no idea — maybe I selected specific hinting/antialiasing at some point?).

    It has taken a few tries to get reproducible font rendering for librsvg's tests. Without reproducible rendering, the images that get rendered from the test suite may not match the reference images, depending on the font renderer's configuration and the available fonts.

    Currently librsvg does three things to get reproducible font rendering for the test suite:

    • We use a specific cairo_font_options_t on our PangoContext. These options specify what antialiasing, hinting, and hint metrics to use, so that the environment's or user's configuration does not affect rendering.

    • We create a specific FcConfig and a PangoFontMap for testing, with a single font file that we ship. This will cause any font description, no matter if it is "sans-serif" or whatever, to resolve to that single font file. Special thanks to Christian Hergert for providing the relevant code from Gnome-builder.

    • We ship a font file as mentioned above, and just use it for the test suite.

    This seems to work fine. I can run "make check" both as my regular user with my private ~/.fonts stash, or as root with the system's configuration, and the test suite passes: the rendered SVGs match the reference PNGs that get shipped with librsvg, which gives us reproducible font rendering, at least on my machine. I'd love to know if this works on other people's boxes as well.

January 09, 2017

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 08, 2017

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks ~/.local as part of its PYTHONPATH, and you can add ~/.local/bin to your PATH, so this makes everything transparent.

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window, since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).
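If you want to double-check what the activated venv gives you, you can ask Python directly (a quick sanity check of my own, not something from pip's documentation):

```python
import sys

# Inside an activated virtualenv, sys.prefix points into the venv directory,
# and sys.executable is the interpreter your shell found first on PATH.
print(sys.executable)
print(sys.prefix)

# sys.path is what 'import' actually searches; with --system-site-packages,
# the system site-packages directories should appear here too.
for p in sys.path:
    print(p)
```

If sys.prefix still points at /usr, the activate script didn't run in that shell.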

January 04, 2017

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon to the right of the URL bar, just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash: no text.

It turns out that to see Reader Mode content with NoScript, you must explicitly enable JavaScript for about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 03, 2017

Interview with Ismail Tarchoun

Could you tell us something about yourself?

My name is Ismael. I’m a self-taught artist from Tunisia, but I now live and study in Germany.

Do you paint professionally, as a hobby artist, or both?

I’m now painting only as a hobby; it’s a really fun and stress-relieving activity. But I might do some freelance work in the future.

What genre(s) do you work in?

I usually paint portraits and manga-styled characters, but I paint other stuff as well. I always try to expand my horizon and learn new things.

Whose work inspires you most — who are your role models as an artist?

Well, there is a long list of artists who inspired me. For example: Kuvshinov-Ilya and Laovaan Kite, I really like their style and their work always looks great. David Revoy is also one of my favorite artists, I really like his art and his web comic.

How and when did you get to try digital painting for the first time?

I actually only started last summer (2016). Before that, I mainly drew pencil portraits, which was limiting in nature. After seeing some amazing digital paintings on the internet, I wanted to be able to draw like that, and so it was decided. I bought a Wacom Intuos Art and tried it. It took some getting used to, but I eventually fell in love with the infinite range of possibilities digital painting offers.

What makes you choose digital over traditional painting?

Well, I still paint traditionally from time to time. But I like digital painting more now, since it offers more tools which help me achieve good results with minimal effort. I also love the Ctrl+z shortcut (I wish real life had that!) so I’m not worried about ruining my work, and I can make more daring decisions which allow me to express myself more freely.

How did you find out about Krita?

I actually learned about it from the Blender forums; some users there recommended it over GIMP as a painting program, so I tried it and fell in love with it.

What was your first impression?

I was amazed by the sheer amount of features it offered, and the user interface looked good (I like dark-themed programs). For free software it was great, it even has features Photoshop doesn’t have. So in general, I had a positive first impression.

What do you love about Krita?

I really love the various brushes and the way they’re rendered, they felt so organic, and like real brushes. I also like the non-destructive filters and transformations, that is pretty rare in free software, and it really encourages you to try new and different stuff, and if you don’t like it, you can change it later (more freedom with minimal consequences).

What do you think needs improvement in Krita? Is there anything that really annoys you?

There are some features I want to see in Krita, for example: a small preview window: it’s essential to get a feeling of the painting in general, otherwise it might turn out weird. I also wish Krita could import more brushes from other programs. But nothing is really that bothersome about Krita, there are some bugs, but they are constantly being fixed by the awesome devs.

What sets Krita apart from the other tools that you use?

Canvas tilting, rulers, transformation and filter layers, and the Multibrush also. Quite neat features.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think I’d choose the stylized portrait at the top of this interview, which doesn’t have a name (I really suck at naming things). It started as a simple painting exercise, but it ended up looking pretty good, or at least better than my previous works, which is a good sign of improvement. But I hope it doesn’t stay my favorite painting for long. In other words, I hope I’ll be able to put it to shame in the near future.

What techniques and brushes did you use in it?

First, I made a rough sketch, then I started laying in some general colors using a large soft brush (deevad 4a airbrush by David Revoy) without caring about the details, only basic colors and a basic idea of how the painting is lit. Then I started going into details using a smaller brush (deevad 1f draw brush). I usually paint new details in a separate layer, then merge it down if I’m happy with the results; if not, I delete the layer and paint a new one. I use the liquify tool a lot to fix the proportions or any anomaly. For the hair I used the brush deevad 2d flat old and the hair brush vb3BE (by Vasco Alexander Basque), which I also used for the hat. When the painting is done I use filters to adjust the colors and contrast, then I make a new layer for final minor tweaks here and there.

Where can people see more of your work?

You can find me on DeviantArt (not everything is made using Krita): http://tarchoun.deviantart.com/

Anything else you’d like to share?

I just hope that Krita will get even better in the future and more people start using it and appreciating it.

January 02, 2017

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because they waste precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappears once you scroll, but the rest stays there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content, and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]


That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[Article with intrusive Yahoo headers]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!
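If you find yourself doing this on every visit to a site, the same cleanup can be scripted. Here's a rough sketch of a console snippet (the helper name removeFixedElements is my own invention, not a Firefox API) that removes every fixed- or sticky-positioned element, roughly what repeated Inspect Element and Delete Node accomplish by hand:

```javascript
// Sketch of a console snippet that mimics repeated "Delete Node" on
// fixed banners. The helper name and structure are illustrative;
// paste it into the browser's developer console to try it.
function removeFixedElements(doc, win) {
  let removed = 0;
  // Copy the node list first so removals don't disturb the iteration.
  for (const el of Array.from(doc.querySelectorAll("*"))) {
    const pos = win.getComputedStyle(el).position;
    if (pos === "fixed" || pos === "sticky") {
      el.remove();   // same effect as Delete Node in the inspector
      removed++;
    }
  }
  return removed;
}

// In a browser: removeFixedElements(document, window);
```

Unlike Delete Node, this removes every fixed element on the page at once, so it may also take out things you wanted to keep; the manual inspector route remains the precise option.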

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to LibreOffice to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

darktable 2.2.1 released

we're proud to announce the first bugfix release for the 2.2 series of darktable, 2.2.1!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.1.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.2.1.tar.xz
da843190f08e02df19ccbc02b9d1bef6bd242b81499494c7da2cccdc520e24fc  darktable-2.2.1.tar.xz
$ sha256sum darktable-2.2.1.3.dmg
9a86ed2cff453dfc0c979e802d5e467bc4974417ca462d6cbea1c3aa693b08de  darktable-2.2.1.3.dmg

and the changelog as compared to 2.2.0 can be found below.

New features:

  • Show a dialog window that tells when locking the database/library failed
  • Ask before deleting history stack from lighttable.
  • preferences: make features that are not available (greyed out) more obvious

Bugfixes:

  • Always cleanup undo list before entering darkroom view. Fixes crash when using undo after re-entering darkroom
  • Darkroom: properly delete module instances. Fixes rare crashes after deleting second instance of module.
  • Levels and tonecurve modules now also use 256 bins.
  • Rawoverexposed module: fix visualization when a camera custom white balance preset is used

Base Support:

  • Canon EOS M5

December 31, 2016

The top 30 Blender developers 2016

Let’s salute and applaud the most active Blender developers of the past year again! The ranking is based on commit total, for Blender itself and all its branches.

Obviously a commit total doesn’t mean much. Nevertheless, it’s a nice way to put the people who make Blender in the spotlight.

The number ’30’ is also arbitrary. I just had to stop adding more! Names are listed in increasing commit count order.

Special thanks to Miika Hämäläinen for making the stats listing.

Ton Roosendaal, Blender Foundation chairman.
31-12-2016

Joey Ferwerda (28)

Joey (Netherlands) worked in 2016 on adding real-time VR viewing in Blender’s viewport. This works for Oculus, with Vive support coming soon.

He currently works on OpenHMD, an open source library to support all current Head Mounted Displays.

Luca Rood (30)

Luca (Brazil) is to my knowledge the youngest on this list. At 19, he’s impressing everyone with his in-depth knowledge of simulation techniques and the courage to dive into Blender’s ancient cloth code to fix it up.

Luca currently works with a Development Fund grant on improving cloth sim, to make it usable for high quality character animation.

Gaia Clary (32)

Gaia (Germany) is the maintainer of COLLADA in Blender. Her never-ending energy to keep this working in Blender means we can keep it supported for 2.8 as well.

Martijn Berger (40)

Martijn (Netherlands) was active in 2016 as platform manager for Windows and MacOS. He helps make the releases, especially ensuring they comply with the security standards for downloading binaries on Windows and MacOS.

Antonio Vazquez (41)

Antonio (Spain) joined the team to work on Grease Pencil. Based on feedback and guidance from Daniel Lara (Pepeland), he helped turn this annotation tool in Blender into a full-fledged 2D animation and animatic storyboarding tool.

Ray Molenkamp (46)

Ray (Canada) joined the team in 2016, volunteering to help maintain Blender for the Windows platform, supporting Microsoft’s development environment.

Alexander Gavrilov (58)

Alexander (Russia) joined the development team in 2016. He started contributing fixes for Weight Painting, and later his attention moved to Cloth and Physics simulation in general.

He is also active in the bug tracker, providing bug fixes on a regular basis.

Sybren Stüvel (59)

Sybren (Netherlands) works for Blender Institute as Cloud developer (shot management, render manager, libraries, security) and as developer for Blender pipeline features – such as Blender file manipulations, UI previews and the Pose library.

João Araújo (65)

João (Portugal) accepted a Google Summer of Code grant to work on Blender’s 3D Curve object. He added improved extrusion options and tools for Extend, Batch Extend, Trim, Offset, Chamfer and Fillet.

His project is almost ready and will be submitted for review in early 2017.

Benoit Bolsee (65)

Benoit (Belgium) is a long-term contributor to Blender’s Game Engine. In 2016 he worked on the “Decklink” branch, supporting one of the industry’s best video capture cards.

Pascal Schön (78)

Pascal (Germany) joined the Cycles team this year, contributing the implementation of the Disney BSDF/BSSRDF.

This new physically based shading model is able to reproduce a wide range of materials with only a few parameters.

Nathan Vollmer (80)

Nathan (Germany) accepted a GSoC grant to work on vertex painting and weight painting in Blender.

With the new P-BVH vertex painting we now get much improved performance, especially when painting dense meshes.

Philipp Oeser (83)

Philipp (Germany) is active in Blender’s bug tracker, providing fixes for issues in many areas in Blender.

Contributors who work on Blender’s quality this way are super important and can’t be valued enough. Kudos!

Phil Gosch (131)

Phil (Austria) accepted a GSoC grant to work on Blender’s UV Tools, especially the Pack Island tool. While a bit more computation-heavy, the solutions found by the new algorithm give much better results than the old “Pack Islands” in terms of used UV space.

Tianwei Shen (142)

Tianwei (China) accepted a GSoC grant to work on Multiview camera reconstruction. This allows film makers to retrieve more accurate camera position information from footage, when one area gets shot from different positions.

His work is ready and close to being added to Blender.

Thomas Dinges (144)

Thomas (Germany) started in the UI team for the 2.5 project, but with the start of Cycles in 2011 he put all his time into helping make it even more awesome.

His main contribution this year was work on the Cycles texture system, increasing the maximum number of textures that can be used on CUDA GPUs, and lowering memory usage in many cases.

Dalai Felinto (192)

Dalai (Brazil, lives in Netherlands) added Multiview and Stereo rendering to Blender in 2015. In 2016 he contributed to making VR rendering possible in Cycles.

Dalai currently works (with Clement “PBR branch” Foucault) for Blender Institute on the Viewport 2.8 project. Check the posts on https://code.blender.org to see what’s coming.

Martin Felke (199)

Martin (Germany) deserves our respect and admiration for maintaining one of the oldest and most popular Blender branches: the “Fracture Modifier” branch.

For technical and quality reasons his work was never deemed fit for a release. But for Blender 2.8 the internal design will get updated to finally get his work released. Stay tuned!

Mai Lavelle (202)

Mai (USA) surprised everyone by falling from the sky with a patch for Cycles to support micro-polygon rendering. The skepticism of the Cycles developers quickly changed. “This is actually really good code,” said one of them, which is a huge compliment coming from coders!

She is currently working for Blender Institute on the Cycles “Split Kernel” project, especially for OpenCL GPU rendering.

Brecht Van Lommel (210)

Brecht (Belgium, lives in Spain) has worked on Blender for a decade. His most memorable contribution was the Cycles render engine (2011).

Aside from working on Cycles, Brecht is active in maintaining the MacOS version and Blender’s UI code.

Joshua Leung (264)

Joshua (New Zealand) is Blender’s animation system coder. He has contributed many new features to Blender in the past decade (including Grease Pencil).

Joshua’s highlight for 2016 was adding the “Bendy Bones”, a project that was started by Jose Molina and Daniel Lara.

Lukas Stockner (277)

Lukas (Germany) has been contributing to Cycles since 2015. He accepted a Google Summer of Code grant to work on Cycles denoising.

Lukas’ specialism is implementing math. One of his last 2016 commits was titled “Replace T-SVD algorithm with new Jacobi Eigen-decomposition solver”. Right on!

Sebastián Barschkis (300)

Sebastián (Germany) is a recurring GSoC student. He is currently working in his branch on “Manta Flow”, an improved fluid simulation library.

Mike Erwin (308)

Mike (USA) has been contracted this year by AMD to help modernize Blender’s OpenGL, and to make sure we’re Vulkan-ready in the future.

He currently works on the Blender 2.8 branch, making Blender work with OpenGL 3.2 or later.

Lukas Toenne (413)

Lukas (Germany) worked for Blender Institute on hair simulation in 2014-2015. In 2016 he went back to experimenting with node systems for objects and particles, and wrote a review and proposal for how to add this to Blender.

Most of his commits were in the object-nodes branch, a project which is currently on hold, until we find more people for it.

Kévin Dietrich (516)

Kévin (France) has mainly been working on two topics in 2016. In a branch he is still working on integration of OpenVDB – tools for storage of volumetric data such as smoke.

Released in 2.78 was his work on Alembic input/output. Alembic is essential for mixed application pipelines for film and animation.

Julian Eisel (760)

Julian (Germany) not only finds usability and UI interesting topics, he also manages to untangle Blender’s code for them. He has contributed to many areas already, such as pie menus and node inserting.

His 2016 highlight is ongoing work on Custom Manipulators – which is a topic for 2.8 workflow project. Goal: bring back editing to the viewport!

Bastien Montagne (1008)

Bastien (France) has been working full-time for the Blender Foundation for many years now. He became our #1 bug tracker reviewer in the past years.

His special interest is Asset management though. He’s now an expert in Blender’s file system and works on 2.8 Asset Browsing.

Sergey Sharybin (1143)

Sergey (Russia, living in Netherlands) is on his way to becoming the #1 Blender contributor. He is best known for work on Motion tracking, Cycles rendering, OpenSubdiv and recently the Blender dependency graph.

And: of course we shouldn’t forget all of his 100s of bug fixes and patch reviews. The Blender Institute is happy to have him on board.

Campbell Barton (1156)

Campbell (Australia) surprised everyone in August with his announcement to step down from his duties at blender.org. He is taking a well-deserved break to renew his energy, and to work on other (own) projects.

He’s still Blender’s #1 committer of 2016 though. Even after his retirement he kept providing code, over 50 commits now. One of this year’s highlights was adding a high-quality boolean modifier to Blender.

Whew, what a year!

This is not the place to present an opinion on all the other things that have happened in 2016, but when it comes to Krita, 2016 was perhaps the most intense year ever for the project. Let’s step back for a moment and do a bit of review!

Krita’s grown by leaps and bounds. We’ve got more users and more contributors than ever — and sometimes, we’re feeling the load!

To give just a small example: Krita is part of the KDE community. The KDE community also makes one of the most popular Linux desktops and a host of other applications. This year, users reported 2290 bugs for the desktop as a whole, and 1134 for Krita. The next biggest projects in terms of reported bugs are the video editor Kdenlive (701 new bugs) and the window manager kwin (674). We resolved 1134 bugs, of which I personally closed 1052. Lots of bug reports doesn’t equal bad software; it equals very actively used software! And so it’s fair to say that Krita is the second biggest project in the KDE community…

Krita is not only actively used, but also very actively developed. There have been, on average, about 10 commits a day, and that excludes translations. In the past year, about fifty people have contributed to the codebase. We have one of the finest manuals of any free software project, and we could answer most questions on IRC, the forum or Reddit with just a link to the relevant page. And the manual is being translated as well!

As a project, we’re a long way from the old days when Krita was a free software alternative to application X on Linux; artists are choosing to use Krita on whichever platform they are on because they want to use it. That’s great, but… sometimes people bring assumptions that are unwarranted. I’ve had phone calls from the US from people needing help who thought they were contacting a professionally staffed help desk department. In general, many people think there’s a big company behind Krita, one with oodles of money. Nothing could be further from the truth: the income generated by the Krita Foundation, from donations or from sales of training DVDs, is not enough to pay for even two full-time developers.

So, for part of this year, I have been working a three-days-a-week freelance gig for a Dutch company called Quby, and three or four days a week on Krita. That was a bit much, and in June I suffered a breakdown, from which I still have not fully recovered. That’s why we had fewer releases this year than we had intended. We wanted to release every six weeks, but we only had four releases:

  • 2.9.11 on February 4th
  • 3.0 on May 31st
  • 3.0.1 on September 6th
  • 3.1 on December 13th

The first release was the final one of the very successful 2.9 series, the last version of Krita based on Qt 4, and the last release where Krita was still part of the Calligra project. The 3.0 release marked the move to Qt 5, a standalone git repository, AppImages for easy distribution on Linux, animation support, instant preview and much more. And with 3.1, we now officially support OSX (or macOS) as well!

Still, we’re a small team of mostly volunteers. We’re having fun, but sometimes we’re just overloaded with user requests, support requests, coding tasks, documentation writing, community management and bug triaging. We managed to round off another Kickstarter, but the rewards are still in production. As is the artbook — which is going to look great, and which you can pre-order now!

Made with Krita 2016

Everything should be ready for sending out in January. But that’s already 2017! So, what will 2017 bring? We looked forward in detail before, so everyone will be aware of the coming of SVG support, improved vector tools, an improved text tool and Python scripting. But Dmitry also added support for audio in animations in the past week — which will be in the next release, 3.1.2, scheduled for the end of January. We’ll do regular bug fix releases, of course, and we’ll do at least one big feature release with the last 2015 and the 2016 Kickstarter goals, as well as whatever else gets coded. And it’s not just coding: in 2017, we want to bring out another book, a Pepper and Carrot book!

And in 2017 we will need to do another big fundraiser. After a lot of discussion on the forum, we feel that instead of proposing new features, it might be time to go for consolidation, stabilization and polish. We’ve added so much stuff in the past couple of years — though we didn’t forget to fix lots of bugs, see above — that it’s time to take stock and invest into making Krita even more solid. And we hope that you all will support us in that effort, next year!

Yours,

Boudewijn Rempt

Project Maintainer

December 29, 2016

Commercial open-source: Sentry

Commercial open-source software is usually based around some kind of asymmetry: the owner possesses something that you as a user do not, allowing them to make money off of it.

This asymmetry can take on a number of forms. One popular option is to have dual licensing: the product is open-source (usually GPL), but if you want to deviate from that, there’s the option to buy a commercial license. These projects are recognizable by the fact that they generally require you to sign a Contributor License Agreement (CLA) in which you transfer all your rights to the code over to the project owners. A very bad deal for you as a contributor (you work but get nothing in return), so I recommend against participating in those projects. But that’s a subject for a different day.

Another option for creating asymmetry is open core: make a limited version open-source and sell a full-featured version, typically named “the enterprise version”. Where you draw the line between the two versions determines how useful the project is in its open-source form versus how much potential there is to sell it. Most of the time this tends towards a completely useless open-source version, but there are exceptions (e.g. GitLab).

These models are so prevalent that I was pleasantly surprised to see how Sentry does things: as little asymmetry as possible. The entire product is open-source and under a very liberal license. The hosted version (the SaaS product that they sell) is claimed to run exactly the same source code. The value created, for which you’ll want to pay, is in the belief that a) you don’t want to spend time running it yourself and b) they’ll do a better job at it than you would.

This model certainly won’t work in all contexts and it probably won’t lead to a billion dollar exit, but that doesn’t always have to be the goal.

So kudos to Sentry, they’re certainly trying to make money in the nicest way possible, without giving contributors and hobbyists a bad deal. I hope they do well.

More info on their open-source model can be read on their blog: Building an Open Source Service.



December 28, 2016

Last chance for ColorHug(1) users to get upgraded

For the early adopters of the original ColorHug I’ve been offering a service where I send all the newer parts out to people so they can retrofit their device to the latest design. This included an updated LiveCD, the large velcro elasticated strap and the custom-cut foam pad that replaced the old foam feet. In the last two years I’ve sent out over 300 free upgrades, but this has reduced to a dribble recently, as later ColorHug1s and all ColorHug2s had all the improvements and extra bits included by default. I’m going to stop this offer soon, as I need to make things simpler so I can introduce a new thing (+? :) next year. If you do still need a HugStrap and gasket, please fill in the form before the 4th January. Thanks, and Merry Christmas to all.

December 25, 2016

Photographing Farolitos (and other night scenes)

Excellent Xmas to all! We're having a white Xmas here ...

Dave and I have been discussing how "Merry Christmas" isn't alliterative like "Happy Holidays". We had trouble coming up with a good C or K adjective to go with Christmas, but then we hit on the answer: Have an Excellent Xmas! It also has the advantage of inclusivity: not everyone celebrates the birth of Christ, but Xmas is a secular holiday of lights, family and gifts, open to people of all belief systems.

Meanwhile: I spent a couple of nights recently learning how to photograph Xmas lights and farolitos.

Farolitos, a New Mexico Christmas tradition, are paper bags, weighted down with sand, with a candle inside. Sounds modest, but put a row of them alongside a roadway or along the top of a typical New Mexican adobe or faux-dobe and you have a beautiful display of lights.

They're also known as luminarias in southern New Mexico, but Northern New Mexicans insist that a luminaria is a bonfire, and the little paper bag lanterns should be called farolitos. They're pretty, whatever you call them.

Locally, residents of several streets in Los Alamos and White Rock set out farolitos along their roadsides for a few nights around Christmas, and the county cooperates by turning off streetlights on those streets. The display on Los Pueblos in Los Alamos is a zoo, a slow exhaust-choked parade of cars that reminds me of the Griffith Park light show in LA. But here in White Rock the farolito displays are a lot less crowded, and this year I wanted to try photographing them.

Canon bugs affecting night photography

I have a little past experience with night photography. I went through a brief astrophotography phase in my teens (in the pre-digital era, so I was using film and occasionally glass plates). But I haven't done much night photography for years.

That's partly because I've had problems taking night shots with my current digital SLR camera, a Rebel XSi (known outside the US as a Canon 450D). It's old and modest as DSLRs go, but I've resisted upgrading since I don't really need more features.

Except maybe when it comes to night photography. I've tried shooting star trails, lightning shots and other nocturnal time exposures, and keep hitting a snag: the camera refuses to take a photo. I'll be in Manual mode, with my aperture and shutter speed set, with the lens in Manual Focus mode with Image Stabilization turned off. Plug in the remote shutter release, push the button ... and nothing happens except a lot of motorized lens whirring noises. Which shouldn't be happening -- in MF and non-IS mode the lens should be just sitting there inert, not whirring its motors. I couldn't seem to find a way to convince it that the MF switch meant that, yes, I wanted to focus manually.

It seemed to be primarily a problem with the EF-S 18-55mm kit lens; the camera would usually condescend to take a night photo with my other two lenses. I wondered if the MF switch might be broken, but then I noticed that in some modes the camera explicitly told me I was in manual focus mode.

I was almost to the point of ordering another lens just for night shots when I finally hit upon the right search terms and found, if not the reason it's happening, at least an excellent workaround.

Back Button Focus

I'm so sad that I went so many years without knowing about Back Button Focus. It's well hidden in the menus, under Custom Functions #10.

Normally, the shutter button does a bunch of things. When you press it halfway, the camera both autofocuses (sadly, even in manual focus mode) and calculates exposure settings.

But there's a custom function that lets you separate the focus and exposure calculations. In the Custom Functions menu option #10 (the number and exact text will be different on different Canon models, but apparently most or all Canon DSLRs have this somewhere), the heading says: Shutter/AE Lock Button. Following that is a list of four obscure-looking options:

  • AF/AE lock
  • AE lock/AF
  • AF/AF lock, no AE lock
  • AE/AF, no AE lock

The text before the slash indicates what the shutter button, pressed halfway, will do in that mode; the text after the slash is what happens when you press the * or AE lock button on the upper right of the camera back (the same button you use to zoom out when reviewing pictures on the LCD screen).

The first option is the default: press the shutter button halfway to activate autofocus; the AE lock button calculates and locks exposure settings.

The second option is the revelation: pressing the shutter button halfway will calculate exposure settings, but does nothing for focus. To focus, press the * or AE button, after which focus will be locked. Pressing the shutter button won't refocus. This mode is called "Back button focus" all over the web, but not in the manual.

Back button focus is useful in all sorts of cases. For instance, if you want to autofocus once then keep the same focus for subsequent shots, it gives you a way of doing that. It also solves my night focus problem: even with the bug (whether it's in the lens or the camera) that the lens tries to autofocus even in manual focus mode, in this mode, pressing the shutter won't trigger that. The camera assumes it's in focus and goes ahead and takes the picture.

Incidentally, the other two modes in that menu apply to AI SERVO mode when you're letting the focus change constantly as it follows a moving subject. The third mode makes the * button lock focus and stop adjusting it; the fourth lets you toggle focus-adjusting on and off.

Live View Focusing

There's one other thing that's crucial for night shots: live view focusing. Since you can't use autofocus in low light, you have to do the focusing yourself. But most DSLRs' focusing screens aren't good enough that you can look through the viewfinder and get a reliable focus on a star or even a string of holiday lights or farolitos.

Instead, press the SET button (the one in the middle of the right/left/up/down buttons) to activate Live View (you may have to enable it in the menus first). The mirror locks up and a preview of what the camera is seeing appears on the LCD. Use the zoom button (the one to the right of that */AE lock button) to zoom in; there are two levels of zoom in addition to the un-zoomed view. You can use the right/left/up/down buttons to control which part of the field the zoomed view will show. Zoom all the way in (two clicks of the + button) to fine-tune your manual focus. Press SET again to exit live view.

It's not as good as a fine-grained focusing screen, but at least it gets you close. Consider using relatively small apertures, like f/8, since that will give you more latitude for focus errors. You'll be doing time exposures on a tripod anyway, so a narrow aperture just means your exposures have to be a little longer than they otherwise would have been.

After all that, my Xmas Eve farolitos photos turned out mediocre. We had a storm blowing in, so a lot of the candles had blown out. (In the photo below you can see how the light string on the left is blurred, because the tree was blowing around so much during the 30-second exposure.) But I had fun, and maybe I'll go out and try again tonight.


An excellent X-mas to you all!

Stellarium 0.15.1

Version 0.15.1 introduces a few exciting new features.
- The Digital Sky Survey (DSS) can be shown (requires online connection).
- AstroCalc is now available from the main menu and gives interesting new computational insight.
- Stellarium can act as Spout sender (important for multimedia environments; Windows only).
In addition, a lot of bugs have been fixed.
- wait() and waitFor() in the Scripting Engine no longer inhibit performance of moves.
- DE430/431 DeltaT may be OK now. We still want to test a bit more, though.
- ArchaeoLines also offers two arbitrary declination lines.
- Added support of time zones dependent on location.
- Added new skyculture: Sardinian.
- Added updates and improvements in catalogs.
- Added improvements in the GUI.
- Added cross identification data for stars from Bright Star Catalogue, 5th Revised Ed. (Hoffleit+, 1991)

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes:
- Added code to display full-sky multi-resolution remote images projected in the TOAST format
- Added new option to Oculars plugin: automatically set the mount type from the telescope settings to preserve the horizontal orientation of the CCD frame
- Added new option stars/flag_forced_twinkle=(false|true) for planetariums to enable twinkling of stars without atmosphere (LP: #1616007)
- Added calculations of conjunction between major planets and deep-sky objects (AstroCalc)
- Added calculations of occultations between major planets and DSO/SSO (AstroCalc)
- Added option to toggle visibility of designations of exoplanets (esp. for exoplanetary systems)
- Added support of time zones dependent on location (LP: #505096, #1492941, #1431926, #1106753, #1099547, #1092629, #898710, #510480)
- Added option in GUI to edit colours of the markings
- Added a special case for educational purposes: drawing orbits for the Solar System Observer
- Added a new config option in the sky cultures for managing constellation boundaries
- Added support for synonyms of star names (2nd edition of the sky cultures)
- Added support for reference data for star names (2nd edition of the sky cultures)
- Added support for synonyms of DSO names (2nd edition of the sky cultures)
- Added support for native names of DSO (2nd edition of the sky cultures)
- Added support for reference data for native DSO names (2nd edition of the sky cultures)
- Added Spout support under Windows (Stellarium is a Spout Sender now)
- Added a few trojans and quasi-satellites in ssystem.ini file
- Added a virtual planet 'Earth Observer' for educational purposes (illustration for the Moon phases)
- Added orbit lines for some asteroids and comets
- Added config option to switch between styles of color of orbits (LP: #1586812)
- Added support of separate colors for orbits (LP: #1586812)
- Added function and shortcut for quickly reversing the direction of time
- Added custom objects to avoid empty results in the Search Tool (SIMBAD mode) and to allow adding custom markers on the sky (LP: #958218, #1485349)
- Added tool to manage visibility of grids/lines in Oculars plugin
- Added the hiding of supernova remnants before their birth (LP: #1623177)
- Added markers for various poles and zenith/nadir (LP: #1629501, #1366643, #1639010)
- Added markers for equinoxes
- Added supergalactic coordinate system
- Added code for management of SkyPoints in Oculars plugin
- Added notes to the Help window
- Added commands and shortcuts for quick move to celestial poles
- Added textures for deep-sky objects
- Added distance in millions of light years to the infostring for galaxies (LP: #1637809)
- Added 2 custom declination lines to ArchaeoLines plugin
- Added dialog with options to adjust the colors of deep-sky object markers
- Added new options to infobox
- Added calculation and display of the Moon phases
- Allow proper stopping of time dragging (LP: #1640574)
- Added time scrolling
- Added Scottish Gaelic (gd) translations for landscapes and sceneries.
- Use the Qt 5 JSON parser instead of the hand-made one used so far
- Added synonyms for DSO
- Improved name consistency for DSO
- Added support for operational statuses for a few classes of artificial satellites
- Added special zoom level for Telrad to avoid undefined behaviour of zooming (LP: #1643427)
- Added new icons for Bookmarks and AstroCalc tools
- Added missing Bayer designation of 4 Aurigae (LP: #1645087)
- Added calculations of conjunctions/occultations between planets and bright stars (AstroCalc tool)
- Added 'Altitude vs Time' feature for AstroCalc tool
- Added location identification which adds canonical IAU 3-letter code to infostring and PointerCoordinates plugin
- Added elongation and phase angle to Comet and MinorPlanet infostrings
- Added cross identification data for stars from Bright Star Catalogue, 5th Revised Ed. (Hoffleit+, 1991)
- Added AstroCalc icon (LP: #1641256)
- Added new scriptable methods to CustomObjectMgr class
- Added Sardinian sky culture
- Allow changes of Milky Way and zodiacal light brightness via GUI while stars are switched off (LP: #1651897)
- Fixed visual issue for AstroCalc tool
- Fixed incorrect escaping of translated strings for some languages in Remote Control plugin (LP: #1608177)
- Fixed disappearing of star labels when the 'Navigational Stars' plugin is loaded and not activated (LP: #1608288)
- Fixed offset issue for image sensor frame in Oculars plugin (LP: #1610629)
- Fixed crash when the labels and markers box in the Sky and Viewing Options tab is ticked (LP: #1609958)
- Fixed spherical distortion with night mode (LP: #1606969)
- Fixed behaviour of buttons on Oculars on-screen control panel (LP: #1609060)
- Fixed misalignment of Veil nebula image (LP: #1610824)
- Fixed build of planetarium without scripting support (Thanks to Alexey Dokuchaev for the bug report)
- Fixed coordinates of the open cluster Melotte 227
- Fixed typos in Kamilorai skyculture
- Fixed cmake typos for scenery3d model (Thanks to Alexey Dokuchaev for the bug report)
- Fixed crash when turning off custom GRS if GRS Details dialogue has not been opened before (LP: #1620083)
- Fixed small rounding issue for JD in the DeltaT tooltip on the bottom bar
- Fixed rendering of orbits of artificial satellites while changing location through the spaceship feature (LP: #1622796)
- Fixed hiding the Moon during a total solar eclipse when option 'limit magnitude' for Solar system objects is enabled
- Fixed small bug for updating catalog in the Bright Novae plugin
- Fixed displaying date and time in AstroCalc Tool (LP: #1630685)
- Fixed a typographical error in a coefficient of the precession expressions by Vondrák et al.
- Fixed crash on exit in the debug mode in StelTexture module (LP: #1632355)
- Fixed crash in the debug mode (LP: #1631910)
- Fixed coverity issues
- Fixed obvious typos
- Fixed issue for wrong calculation of exit pupil for binoculars in Oculars plugin (LP: #1632145)
- Fixed issue for Arabic translation (LP: #1635587)
- Fixed packaging on OS X for Qt 5.6.2+
- Fixed calculations of the value of the Moon secular acceleration when JPL DE43x ephemeris is used
- Fixed distances for objects of Caldwell catalogue
- Fixed calculations for transit phenomena (LP: #1641255)
- Fixed missing translations in Remote Control plug-in (LP: #1641296)
- Fixed displaying of localized projection name in Remote Control (LP: #1641297)
- Fixed search for localized names of artificial satellites
- Fixed showing translated names of artificial satellites
- Fixed GUI Problem in Satellites plug-in (LP: #1640291)
- Fixed moon size infostring (LP: #1641755)
- Fixed AstroCalc listing transits as occultations (LP: #1641255)
- Fixed resetting of DSO Limit Magnitude by Image Sensor Frame (LP: #1641736)
- Fixed Nova Puppis 1942 coordinates
- Fixed error in light curve model of historical supernovae (LP: #1648978)
- Fixed width of search dialog for Meteor Showers plugin (LP: #1649640)
- Fixed shortcut for "Remove selection of constellations" action (LP: #1649771)
- Fixed retranslation of names and types of meteor showers in the Meteor Showers plugin when the application language is changed
- Fixed wrong tooltip info (LP: #1650725)
- Fixed Mercury's magnitude formula (LP: #1650757)
- Fixed cross-id error for alpha2 Cen (LP: #1651414)
- Fixed core.moveToAltAzi(90,XX) issue (LP: #1068529)
- Fixed view centered on zenith (restart after save) issue (LP: #1620030)
- Updated DSO Catalog: Added full support of the UGC catalog
- Updated DSO Catalog: Added galaxies from PGC2003 catalog (mag <=15)
- Updated DSO Catalog: Removed errors and duplicates
- Updated stars catalogues
- Updated stars names (LP: #1641455)
- Updated settings for AstroCalc
- Updated Satellites plugin
- Updated Scenery3d plugin
- Updated Stellarium User Guide
- Updated support for High DPI monitors
- Updated list of star names for Western sky culture
- Updated Search Tool (LP: #958218)
- Updated default catalogues for plugins
- Updated list of locations (2nd version of format - TZ support)
- Updated pulsars catalog & util
- Updated Scripting Engine
- Updated rules for visibility of exoplanets labels
- Updated shortcuts
- Updated info for the landscape actions (LP: #1650727)
- Updated Bengali translation of Sardinian skyculture (LP: #1650733)
- Correctly apply constellation fade duration (LP: #1612967)
- Expanded behaviour of isolated constellations selection (only for skycultures with IAU generic boundaries) (LP: #1412595)
- Ensure stable up vector for lookZenith, lookEast etc.
- Verified star names against the official IAU star names list
- Minor improvement in ini file writing
- Use Bortle index only after the year 1825
- Re-implement wait() and waitFor() scripting functions to avoid large delays in main thread
- Restored broken feature (hiding markers for selected planets)
- Removed color profiles from PNG files
- Removed the flips of the CCD frame because they worked incorrectly and introduced a new bug
- Removed the useless misspelled names from the list of stars
- Removed the Time Zone plug-in
- Removed useless translations in ArchaeoLines plug-in

December 24, 2016

darktable 2.2.0 released

we're proud to finally announce the new feature release of darktable, 2.2.0!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the sha256 checksum is:

3eca193831faae58200bb1cb6ef29e658bce43a81706b54420953a7c33d79377  darktable-2.2.0.tar.xz
75d5f68fec755fefe6ccc82761d379b399f9fba9581c0f4c2173f6c147a0109f  darktable-2.2.0.dmg

and the changelog as compared to 2.0.0 can be found below.

when updating from the currently stable 2.0.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.2 to 2.0.x any more.

  • Well over 2k commits since 2.0.0
  • 298 pull requests handled
  • 360+ issues closed

Gource visualization of git log from 2.0.0 to right before 2.2.0:

https://youtu.be/E2UU5x7sS3g

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows access to styles, presets and tags even when running with a :memory: library, for example
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow importing/exporting tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in “more modules” so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Add basic undo/redo support for the darkroom (masks are not covered!)
  • Support the Exif date and time when importing photos from camera
  • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
  • Rudimentary CYGM and RGBE color filter array support
  • Nicer web gallery exporter -- now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Hotpixels module: make it actually work for X-Trans
  • Cmstest tool should now produce correct output in more cases, especially in multi-monitor setups.
  • Darkroom histogram now uses more bins: use all 8-bit of the output, not just 6.

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords, put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar XMP schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • Lens correction module: switched back to normal Lensfun search mode for lens lookups.
  • Make sure that proper signal handlers are still set after GM initialization...
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Feathering size in some mask shapes can be set with shift+scroll
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series
  • Tone curve: new mode “automatic in XYZ” for “scale chroma”
  • Some compilation fixes

Lua specific changes:

  • All asynchronous calls have been rewritten
    • the darktable-specific implementation of yield was removed
    • darktable.control.execute allows executing shell commands without blocking Lua
    • darktable.control.read allows waiting for a file to be readable without blocking Lua
    • darktable.control.sleep allows pausing Lua execution without blocking other Lua threads
  • darktable.gui.libs.metadata_view.register_info allows adding a new field to the metadata widget in the darkroom view
  • The TextView widget can now be created in Lua, allowing input of large chunks of text
  • It is now possible to use a custom widget in the Lua preference window to configure a preference
  • It is now possible to set the precision and step on slider widgets

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even Debian/stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.
  • Remove gnome keyring password backend

Base Support:

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-G81 (4:3)
  • Panasonic DMC-G85 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Panasonic DMC-LX10 (3:2)
  • Panasonic DMC-LX15 (3:2)
  • Panasonic DMC-LX9 (3:2)
  • Panasonic DMC-TZ100 (3:2)
  • Panasonic DMC-TZ101 (3:2)
  • Panasonic DMC-TZ110 (3:2)
  • Panasonic DMC-ZS110 (3:2)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX100M5
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCA-99M2
  • Sony ILCE-6300

We were unable to bring back these 2 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800

White Balance Presets:

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-1
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles:

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX100 IS
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5
  • Nikon D5500
  • Olympus E-PL6
  • Olympus E-PM2
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-1
  • Pentax K-S2
  • Ricoh GR
  • Sony DSLR-A900
  • Sony DSC-RX10
  • Sony ILCE-6300
  • Sony NEX-5
  • Sony SLT-A37

New Translations:

  • Hebrew
  • Slovenian

Updated Translations:

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Italian
  • Polish
  • Russian
  • Slovak
  • Spanish
  • Swedish
  • Ukrainian

We wish you a merry Christmas, happy Hanukkah or just a good time. Enjoy taking photos and developing them with darktable.

December 23, 2016

Release GCompris 0.70

Screenshots of the new activities in version 0.70

Hi,

just in time for Christmas, we are pleased to announce the new GCompris version 0.70.

It is an important release because we officially drop the Gtk+ version for Windows to use the Qt one.
Everyone who bought the full version for the last two years will get a new activation code in a few days.

Also, for people who like numbers, we are beyond 100,000 downloads in the Google Play store.

This new version contains 8 new activities, half of them created by students from the last Google Summer of Code:

  • an activity where the child has to move elements to build a given model (crane by Stefan Toncu)
  • an activity to draw the numbers from 0 to 9 (drawnumbers by Nitish Chauhan)
  • an activity to draw the letters (drawletters by Nitish Chauhan)
  • an activity to find which words contain a given letter (letter-in-word by Akshat Tandon)
  • the nine men's morris game against Tux (nine_men_morris by Pulkit Gupta)
  • the nine men's morris game with a friend (nine_men_morris_2_players by Pulkit Gupta)
  • an activity to learn to split a given number of candies amongst children (share by Stefan Toncu)
  • an activity to learn Roman numerals (roman_numerals by Bruno Coudoin)

 

As always, we have new features and bug fixes:

  • search feature by Rishabh Gupta
  • windows build by Bruno Coudoin and Johnny Jazeix
  • hint icon in the bar (used in photohunter) by Johnny Jazeix
  • neon build by Jonathan Riddell
  • we are now in openSUSE Tumbleweed repository thanks to Bruno Friedmann
  • archlinux (https://aur.archlinux.org/packages/gcompris-qt/) by Jose Riha
  • package on mageia cauldron by Timothee Giet
  • word list for Slovak by Jose Riha
  • word list for Belarusian by Antos Vaclauski
  • various updates on Romanian wordlists and voices (probably the most complete one) by Horia Pelle
  • voices added for Portuguese Brazilian by Marcos D.
  • new graphics for crane by Timothee Giet
  • screenshots on gcompris.net updated to the Qt version by Timothee and Johnny

 

You can find this new version here:

Android version

Windows 32bit or Windows 64bit version

Linux version (64bit)

source code

On the translation side, we have 15 languages fully supported: Belarusian, British English, Brazilian Portuguese, Catalan, Catalan (Valencian), Dutch, Estonian, French, Italian, Polish, Portuguese, Romanian, Spanish, Swedish, Ukrainian and some partially: Breton (82%), Chinese Simplified (93%), Chinese Traditional (91%), Finnish (70%), Galician (93%), German (97%), Norwegian Nynorsk (98%), Russian (83%), Slovak (85%), Slovenian (88%), Turkish (77%).

If you want to help, please make some posts in your community about GCompris.

December 22, 2016

Tips on Developing Python Projects for PyPI

I wrote two recent articles on Python packaging: Distributing Python Packages Part I: Creating a Python Package and Distributing Python Packages Part II: Submitting to PyPI. I was able to get a couple of my programs packaged and submitted.

Ongoing Development and Testing

But then I realized all was not quite right. I could install new releases of my package -- but I couldn't run it from the source directory any more. How could I test changes without needing to rebuild the package for every little change I made?

Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a directory that includes all the modules you normally want to test. For example, inside my bin directory I have a python directory where I can symlink any development modules I might need:

mkdir ~/bin/python
ln -s ~/src/metapho/metapho ~/bin/python/

Then add the directory at the beginning of PYTHONPATH:

export PYTHONPATH=$HOME/bin/python

With that, I could test from the development directory again, without needing to rebuild and install a package every time.
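If you're ever unsure which copy of a module Python will pick up, you can ask it directly. This small helper is my own addition, not from the original post; it resolves a module name through sys.path, which includes your PYTHONPATH entries:

```python
import importlib.util

def which_module(name):
    """Return the file path Python would import for a module,
    searching sys.path (and therefore PYTHONPATH) in order."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# A stdlib module resolves into the Python installation;
# a module symlinked under ~/bin/python would resolve there instead.
print(which_module("json"))
```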

Cleaning up files used in building

Building a package leaves some extra files and directories around, and git status will whine at you since they're not version controlled. Of course, you could gitignore them, but it's better to clean them up after you no longer need them.

To do that, you can add a clean command to setup.py.

import os

from setuptools import Command

class CleanCommand(Command):
    """Custom clean command to tidy up the project root."""
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
(Obviously, that includes file types beyond what you need for just cleaning up after package building. Adjust the list as needed.)

Then in the setup() function, add these lines:

      cmdclass={
          'clean': CleanCommand,
      }

Now you can type

python setup.py clean
and it will remove all the extra files.
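If you'd rather not shell out to rm at all, the same cleanup can be sketched in pure Python with shutil and glob. This is my own variant, not from the original post; the pattern list mirrors the one above, so adjust it for your project:

```python
import glob
import os
import shutil

def clean_tree(patterns=("build", "dist", "*.pyc", "*.tgz",
                         "*.egg-info", "docs/sphinxdoc/_build")):
    """Remove build leftovers matching the given glob patterns."""
    for pattern in patterns:
        for path in glob.glob(pattern):
            if os.path.isdir(path):
                shutil.rmtree(path)   # directories like build/ and dist/
            else:
                os.remove(path)       # single files like stray *.pyc
```

You could call clean_tree() from CleanCommand.run() in place of the os.system() line.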

Keeping version strings in sync

It's so easy to update the __version__ string in your module and forget that you also have to do it in setup.py, or vice versa. Much better to make sure they're always in sync.

I found several versions of that using system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand, I know.)

def get_version():
    '''Read the pytopo module version from pytopo/__init__.py'''
    with open("pytopo/__init__.py") as fp:
        for line in fp:
            line = line.strip()
            if line.startswith("__version__"):
                parts = line.split("=")
                if len(parts) > 1:
                    # Strip whitespace and surrounding quote characters.
                    return parts[1].strip().strip('"').strip("'")

Then in setup():

      version=get_version(),

Much better! Now you only have to update __version__ inside your module and setup.py will automatically use it.
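You can sanity-check the parsing logic without a real pytopo checkout by feeding the same kind of loop a StringIO. The helper and sample content below are my own illustration, mirroring the line-scanning approach and stripping surrounding quotes from the value:

```python
import io

def parse_version(fp):
    """Scan a file-like object for a __version__ assignment
    and return its value with quotes stripped."""
    for line in fp:
        line = line.strip()
        if line.startswith("__version__"):
            parts = line.split("=")
            if len(parts) > 1:
                return parts[1].strip().strip('"').strip("'")
    return None

sample = io.StringIO('"""Fake module docstring."""\n__version__ = "1.5"\n')
print(parse_version(sample))  # → 1.5
```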

Using your README for a package long description

setup has a long_description for the package, but you probably already have some sort of README in your package. You can use it for your long description this way:

# Utility function to read the README file,
# used for the long_description.
import os

def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

Then in setup():

      long_description=read('README'),

December 21, 2016

Casa Min-Max

Esta casa custa R$ 55 000. Completa, com tudo. This house costs BRL 55 000. Complete, with everything. Como chegamos até aqui? How did we get here? ...

December 17, 2016

Distributing Python Packages Part II: Submitting to PyPI

In Part I, I discussed writing a setup.py to make a package you can submit to PyPI. Today I'll talk about better ways of testing the package, and how to submit it so other people can install it.

Testing in a VirtualEnv

You've verified that your package installs. But you still need to test it and make sure it works in a clean environment, without all your developer settings.

The best way to test is to set up a "virtual environment", where you can install your test packages without messing up your regular runtime environment. I shied away from virtualenvs for a long time, but they're actually very easy to set up:

virtualenv venv
source venv/bin/activate

That creates a directory named venv under the current directory, which it will use to install packages. Then you can pip install packagename or pip install /path/to/packagename-version.tar.gz

Except -- hold on! Nothing in Python packaging is that easy. It turns out there are a lot of packages that won't install inside a virtualenv, and one of them is PyGTK, the library I use for my user interfaces. Attempting to install pygtk inside a venv gets:

****************************************************************
* Building PyGTK using distutils is only supported on windows. *
* To build PyGTK in a supported way, read the INSTALL file.    *
****************************************************************

Windows only? Seriously? PyGTK works fine on both Linux and Mac; it's packaged on every Linux distribution, and on Mac it's packaged with GIMP. But for some reason, whoever maintains the PyPI PyGTK packages hasn't bothered to make it work on anything but Windows, and PyGTK seems to be mostly an orphaned project so that's not likely to change.

(There's a package called ruamel.venvgtk that's supposed to work around this, but it didn't make any difference for me.)

The solution is to let the virtualenv use your system-installed packages, so it can find GTK and other non-PyPI packages there:

virtualenv --system-site-packages venv
source venv/bin/activate

I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to .local instead of the venv. I never did track down why this happened some times and not others, but when it happened, a temporary mv ~/.local ~/old.local fixed it.

Test your Python package in the venv until everything works. When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.
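One way to catch install-location surprises early is to check, from inside Python, whether you're actually running in a virtual environment. In a virtualenv, sys.prefix is redirected away from the base interpreter; the helper name below is mine:

```python
import sys

def in_virtualenv():
    """True when running inside a virtualenv or venv:
    sys.prefix differs from the base interpreter's prefix."""
    base = (getattr(sys, "real_prefix", None)          # old virtualenv
            or getattr(sys, "base_prefix", sys.prefix))  # python3 venv
    return sys.prefix != base

print(sys.prefix)
print(in_virtualenv())
```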

Tag it on GitHub

Is your project ready to publish?

If your project is hosted on GitHub, you can have pypi download it automatically. In your setup.py, set

download_url='https://github.com/user/package/tarball/tagname',

Check that in. Then make a tag and push it:

git tag 0.1 -m "Name for this tag"
git push --tags origin master

Try to make your tag match the version you've set in setup.py and in your module.

Push it to pypitest

Register a new account and password on both pypitest and pypi.

Then create a ~/.pypirc that looks like this:

[distutils]
index-servers =
  pypi
  pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

[pypitest]
repository=https://testpypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

Yes, those passwords are in cleartext. Incredibly, there doesn't seem to be a way to store an encrypted password or even have it prompt you. There are tons of complaints about that all over the web but nobody seems to have a solution. You can specify a password on the command line, but that's not much better. So use a password you don't use anywhere else and don't mind too much if someone guesses.

Update: Apparently there's a newer method called twine that solves the password encryption problem. Read about it here: Uploading your project to PyPI. You should probably use twine instead of the setup.py commands discussed in the next paragraph.

Now register your project and upload it:

python setup.py register -r pypitest
python setup.py sdist upload -r pypitest

Wait a few minutes: it takes pypitest a little while before new packages become available. Then go to your venv (to be safe, maybe delete the old venv and create a new one, or at least pip uninstall) and try installing:

pip install -i https://testpypi.python.org/pypi YourPackageName

If you get "No matching distribution found for packagename", wait a few minutes then try again.

If it all works, then you're ready to submit to the real pypi:

python setup.py register -r pypi
python setup.py sdist upload -r pypi

Congratulations! If you've gone through all these steps, you've uploaded a package to pypi. Pat yourself on the back and go tell everybody they can pip install your package.

Some useful reading

Some pages I found useful:

A great tutorial except that it forgets to mention signing up for an account: Python Packaging with GitHub

Another good tutorial: First time with PyPI

Allowed PyPI classifiers -- the categories your project fits into. Unfortunately there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.

Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.

Call to translators

We plan to release Stellarium 0.15.1 in the next one or two weeks.

This is a bugfix release, and it introduces a few important new features from version 1.0. Currently translators can improve the translation of version 0.15.0 and fix some mistakes in translations. If you can assist with translating into any of the 136 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

If required, we can postpone the release by a few days.

Thank you!

December 16, 2016

Looking Forward

We have just released Krita 3.1, but we are already deep into coding again! We will continue releasing bug fix versions of Krita 3.1.x until it’s time to release 3.2 (or maybe 4.0…). And, as with 2.9, some bug fix releases might even contain new features, if they’re small and safe enough. But we’ll also start making development builds soon, and there’s also the daily build for Windows.

For the 2015 kickstarter features, we’re working on the following items:

  • Lazy Brush: the interactive colorizing tool. We spent a lot of time getting this to work, and it does work and can be enabled in 3.1 (add disableColorizeMaskFeature=false to the kritarc file), but the algorithm we implemented is simply too slow to be useful. We will keep the existing user interface for it, but will work on a faster algorithm.
  • Improved palette handling: we’ve started work on adding support for two new palette formats that can handle color correctly already. We intend to rewrite the palette editor in Python, and for that we need 2016’s Python scripting to be done. That is moving along quite nicely! More information on that below.
  • Stacked brushes: This functionally works and is pretty innovative, even! But it is difficult to configure stacked brushes with the existing brush editor UI design. We’ve started re-working the brush editor already to make way for this feature. When that work is done this feature can land. The new brush editor UI is still being refined:

screenshot_20161213_143102

For the 2016 Kickstarter features, we’re working on the following items:

  • SVG Support: We’ve started on the vector file format rewrite. We’re now at a stage where Krita can load and save the SVG format instead of ODG — though a bit of work remains to be done. We can then start hooking up the functionality to the redesigned tools on the UI.
  • Python Scripting: We have defined what the API will do. Everything builds and works. Most of the API still needs to be hooked up, but you can already iterate through the layers and masks in an image and save them. There is also a separate command-line utility to run scripts, without needing to start the entire GUI. But we still need to figure out how to build and bundle Python on OSX and Windows! And Eliakin Costa, a Season of KDE student, is working on a GUI to create, run and save scripts from within Krita.
  • Text Tools: We have had some discussion about what features will be included and how it will fit into the application’s UI design — but no coding done yet. The text tools will be using SVG, so we need to wait until that is done before we start digging too deep into this.
  • Reference Images Docker: we’re also considering rewriting this docker completely, in Python.

And of course this is just what we’ve planned. With outside people making code changes and new people joining, it will be an adventure to see what actually happens. There will no doubt be more exciting features coming along the way in 2017!

December 15, 2016

Making your own retro keyboard

We're about a week from Christmas, and I'm going to explain how I created a retro keyboard as a gift for my father, who introduced me to computers when he brought a Thomson TO7 home, all the way back in 1985.

The original idea was to fit a smaller computer, such as a CHIP or Raspberry Pi, inside a Thomson computer, but software update support would have been difficult, the use would have been limited to the built-in programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

How do keyboards work?

Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors built in.
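A toy model of that scanning scheme may help: each column is driven in turn and the rows are read through pull-ups. Because these old matrices have no diodes (hence no xKRO), pressing three corners of a rectangle makes the fourth key "ghost". Sizes and coordinates here are illustrative, not the MO5's real matrix.

```python
# Toy simulation of scanning a diode-less keyboard matrix: drive one column
# at a time and read which rows go low. Without diodes, current can sneak
# through other pressed keys, producing ghost keys. Illustrative only.
ROWS, COLS = 8, 8

def scan(pressed):
    """pressed: set of (row, col) switches physically held down."""
    down = set()
    for col in range(COLS):
        # find everything electrically connected to the driven column
        cols, rows = {col}, set()
        changed = True
        while changed:
            changed = False
            for (r, c) in pressed:
                if c in cols and r not in rows:
                    rows.add(r); changed = True
                if r in rows and c not in cols:
                    cols.add(c); changed = True
        down |= {(r, col) for r in rows}
    return down

held = {(0, 0), (0, 1), (1, 0)}  # three corners of a rectangle
print(scan(held))                # also reports the ghost key (1, 1)
```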

The keyboard hardware

I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but that would have lost all the charm (it just looks like a PC keyboard), and some other computers have much bigger form factors to accommodate cartridge, cassette or floppy disk readers.

The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



Whoot! The keyboard matrix in detail, no need for us to discover it with a multimeter.

That needs a wash in soapy water

After opening up the computer, and giving the internals (especially the keyboard, if it has mechanical keys) a good clean, we'll need to see how the keyboard is connected.

Finicky metal covered plastic

Those keyboards are usually membrane keyboards with pressure pads, so we'll need to either find replacement connectors at our local electronics store or desolder the ones on the motherboard. I chose the latter option.

Desoldered connectors

After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which connector on the matrix. We can start soldering.

The micro-controller

The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board with a tile cutter, to avoid interference from other components on the board. This is completely optional: if you're only going to use firmware that you already know at least somewhat works, you can set a key combo in the firmware to enter firmware-upload mode. We'll get back to that later.

As far as connecting and soldering to the pins go, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that if you deviate from the pinout used in your firmware, you'll need to make changes to the firmware accordingly. We'll come back to that again in a minute.
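As a bookkeeping sketch (these pin names are hypothetical, not the ones I actually used), it's worth checking your chosen pins against that one constraint before soldering:

```python
# Hypothetical Teensy 2.0 pin assignment for the matrix rows and columns.
# The one hard constraint: D6 drives the on-board LED, so keep it out of
# the matrix wiring. Pin names here are illustrative placeholders.
row_pins = ["B0", "B1", "B2", "B3", "B4", "B5", "B6", "B7"]
col_pins = ["F0", "F1", "F4", "F5", "F6", "F7", "D0", "D1"]

used = row_pins + col_pins
assert "D6" not in used, "D6 is wired to the board LED"
assert len(set(used)) == len(used), "a pin is assigned twice"
```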

The soldering

Colorful tinning

I wanted to keep the external ports in place, so it didn't look like there were holes in the case, and there was enough headroom inside the case to fit the original board, the Teensy and pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

As always, make sure early on that things fit, especially the cables!

The unnecessary pollution

The firmware

Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's a simple:


sudo dnf install -y avr-libc avr-binutils avr-gcc

I also compiled and installed in my $PATH the teensy_cli firmware uploader, and fixed up the udev rules. And after a "make teensy" and a button press...

It worked the first time! This is a good time to verify that all the keys work, and that you don't see doubled-up letters from short circuits in your setup. I had two wires touching, and one column that just didn't work.

I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

Some advice

This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret it...
  • Don't forget the size, length and non-flexibility of cables in your design
  • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
  • Use breadboard cables and pins to connect things, if you have the room
  • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
That last one explains the slightly funny cabling of my keyboard.

Finishing touches

All Sugru'ed up

To finish things off nicely, I used Sugru to hold the USB cable coming out of the machine in place. As before, this avoids leaving an opening into the internals.

There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up-to-par).

Prototype and final hardware

All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

(If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)

December 13, 2016

Krita 3.1 Released!

Today the Krita team releases Krita 3.1.1! Krita 3.1 is the first release that is fully supported on OSX (10.9 and later)! Krita 3.1 is the result of half a year of intense work and contains many new features, performance improvements and bug fixes. It’s now possible to render animations (using ffmpeg) to gif or various video formats. You can use a curve editor to animate properties. Soft-proofing was added for seeing how your artwork will look in print. There is a new color picker that allows selecting wide-gamut colors, a new brush engine that paints fast on large canvases, and a stop-based gradient editor.

There are a lot of fixes, improvements, and speedups. Visit the Krita 3.1 release notes for a list of everything that was changed.

krita_animation_3_0_2

These are the highlights:

  • OSX is fully supported from now on. The OpenGL canvas works just as well as everywhere else. There might still be OSX-specific bugs, of course! But now is the time for OSX and MacOS fans to use Krita and report any issues they might come across.
  • Krita can now, with FFmpeg, render an animation to gif, mp4, mkv and ogg.
  • There is now automated tweening of opacity between frames in an animation. You can color-code frames in the timeline, and animate the raster content of filter layers, fill layers and masks.
  • There is a new color selector, accessible with the dual color button on the top toolbar. This color selector supports selecting HDR colors, colors outside the sRGB gamut of your screen. It can pick colors from Krita windows accurately and has much nicer support for working with palettes.
  • The Quick Brush engine is a really fast and really simple brush engine.
  • There is a stop-based gradient editor in addition to the existing segment-based gradient editor
  • We added a halftone filter

There are many more new features as well!

A quick video overview of all of the major features and fixes that are being shipped with Krita 3.1.

This release contains work funded by the 2015 Kickstarter, work done during the Google Summer of Code, and work done by volunteer hackers just for the fun of it. Not everyone might be aware of this, but Krita is open source and everyone is welcome to work on features, hack on bugs, or help out in many other ways. Join our community!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store.

OSX

Source code

md5sums

For all downloads.

Key

The Linux appimage and the source tarball are signed. You can load the public key over https here:
0x58b9596c722ea3bd.asc
The signatures are here.

Get the Book!

If you want to see what others can do with Krita, get Made with Krita 2016, the first Krita artbook, now available for pre-order!

Made with Krita 2016

Made with Krita 2016

Made with Krita 2016 — the Krita Artbook

Made With Krita 2016 is now available for pre-order! Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist.

Made with Krita 2016

Made with Krita 2016

The book is professionally printed on 130 grams paper and softcover bound in signatures. The cover illustration is by Odysseas Stamoglou.

Made with Krita 2016 is 19.95€, excluding shipping. Shipping is 11.25€ outside the Netherlands and 3.65€ inside the Netherlands.

International:

Netherlands:

December 12, 2016

security things in Linux v4.9

Previously: v4.8.

Here are a bunch of security things I’m excited about in the newly released Linux v4.9:

Latent Entropy GCC plugin

Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX’s Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel’s entropy pool. Since the entropy actually gathered is hard to measure, no entropy is “credited”, but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.

vmapped kernel stack and thread_info relocation on x86

Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process’s stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stack via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.

Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.

CONFIG_DEBUG_RODATA mandatory on arm64

As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there’s no reason to make the protection optional.
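Taken together, the features above map to kernel config options; a fragment with the option names mentioned in this post (note that some of these are selected automatically by the architecture or the build rather than toggled by hand):

```
CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
CONFIG_VMAP_STACK=y
CONFIG_THREAD_INFO_IN_TASK=y
CONFIG_DEBUG_RODATA=y
```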

random_page() cleanup

Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner random_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.

That’s it for now! Let me know if there are other fun things to call attention to in v4.9.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Logitech Unifying Hardware Required

Does anyone have a spare Logitech Unifying dongle I can borrow? I specifically need the newer Texas Instruments version, rather than the older Nordic version.

You can tell if it’s the version I need by looking at the etching on the metal USB plug, if it says U0008 above the CE marking then it’s the one I’m looking for. I’m based in London, UK if that matters. Thanks!

Interview with Jabari Dumisani

warriors

Could you tell us something about yourself?

Hello all. I’m originally from the city of Chicago, Ill., USA. I’ve been hardwired as an artistic and creative geek my whole life, experiencing comic books/manga, cartoons/anime, RPGs, video games and such since I was very young. The strongest pull came from Marvel/DC comics, which led me to teach myself a bit of cartoon art and animation. I’m a hip-hop music lyricist under the name “Demygawd Tha Urthmaan”, a beatmaker, and a producer (some of my rap music is still online). I also DJ in the deep/house, disco, acid, and hardhouse genres.

Do you paint professionally, as a hobby artist, or both?

Basically a pro-hobbyist. I’ve published digital comics to Amazon.com, done stuff for Xbox Live indie games, and created an app for Apple’s iTunes marketplace. All independent, but sold through various online marketplaces.

What genre(s) do you work in?

Comic book action adventure, mostly. Episodic adventure storytelling, I’ve been reading comics for over 30 years, so writing fantasy is pretty much in me.

Whose work inspires you most — who are your role models as an artist?

Walt Disney, Craig McCracken, Genndy Tartakovsky, Bruce Timm, Jim Lee, Joe Madureira, & J. Scott Campbell. My art is very stylized and cartoony, kinda like movable stuffed toys.

How and when did you get to try digital painting for the first time?

1995 with Adobe Photoshop, until I was introduced to Macromedia/Adobe Flash. I was commissioned to write a comic book and Photoshop is what they used. The rest was history.

What makes you choose digital over traditional painting?

Unfortunately, due to an economic downturn, I lost everything and fell into homelessness for about 4 years, sleeping on the streets. I could no longer afford art supplies or a steady place to be besides a local library, but I had a laptop and software which allowed me to create. That's when I started to use storytelling as a means to help me out of a bad situation; it kept me sane and kept me focused until something came along. My situation has gotten better, I've dusted myself off, and I’ve been digital ever since.

How did you find out about Krita?

Actually, I was looking for some GIMP update news online, ended up on a Blender 3D forum, and heard about Krita from one of the posts, having never heard of it before. I nosed around, followed the trail to the .org website, and the rest was history. Krita and I have been buddies ever since.

What was your first impression?

Took a moment to get adjusted, but it was nothing I couldn’t get familiar with. I know I felt more comfortable with Krita than I did with Gimp for drawing content. There is nothing wrong with Gimp, Krita just suits me and my way of creating art.

What do you love about Krita?

1st) – VECTOR LAYERS! Coming from an Adobe Flash background, having the ability to tweak and manipulate vector points for inking is a must-have and my go-to line art finishing move.

2nd) – small software footprint/minimum processing power. I have Blender 3D, Inkscape, Gimp, Scribus, Calibre, and Krita running on a Dell Venue 8 Pro tablet with Windows 10 with no issue. Toss in an Adonit Jot Pro stylus and I have a whole graphic design studio that fits in a cargo pants pocket. You can’t ask for more. You “could”, but you would be greedy, … stop it.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like a revamp of the text engine. I got spoiled with having Krita be a layout, storyboard, pencil, and inking solution for my creative needs; I would probably use vector layers for lettering if it weren’t so non-intuitive compared to Illustrator and Flash, based on my experience. Plus it’s a bit glitchy and buggy. Inkscape is what I use for lettering, and that needs work itself, but the Krita text engine needs a hug and a healing, IMO.

What sets Krita apart from the other tools that you use?

Although it wasn’t my first choice in the beginning, it’s become my de-facto for cartoon and comic art creation. Hands down, it’s basically my all-in-one, start to end 2D digital creation suite. Unless I “NEED”  Blender 3D for what it does, for now in the realm of 2D, its Krita all the way.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

jaelynn

Easily my “Guardians of the Cloudgate: Attack of the Ebonseed” tabletop board game. This is the first time I ventured into toy design and used Krita for everything from the playing board, the pull cards, the cartoon art, the meeple stickers, the box art, the banner ads for the online storefront … everything.

What techniques and brushes did you use in it?

Depending on the requirements, I measured the art board to fit the need. My brush tools are the circle and hard erasers, the circle fill marker, the ink open 10 ink pen, and the 2B pencil. I lay out my art and design on blueline, sketchwork, coloring, shading, and inking layers. Export as PNGs. Compress file sizes using tinypng.com. Upload to the manufacturer. Order a physical prototype via snail-mail. Back up all art to a cloud drive. Call it a wrap for that project. Pretty straightforward for my workflow.

Where can people see more of your work?

Please feel free to check out our “Guardians of the Cloudgate” IP at https://guardiansofthecloudgate.blogspot.com. The “Guardians of the Cloudgate: The Wrath of Elaina” digital comic on Amazon Kindle devices and PC, smartphone and tablet apps. I also have cartoony superhero fan art at maadvectorstudios.blogspot.com, still being updated and under construction. Stay tuned, this is just the beginning.

Anything else you’d like to share?

I want to thank everyone at Krita for creating such special software. Allowing artists everywhere, regardless of skillset, the opportunity to be creative, tell fantastic stories to entertain people, and bring them enjoyment. Also, to all who have an artistic itch, do your thing to the best of your ability. Have fun, start a project, finish the project, repeat. Don’t let anyone tell you what you can or can’t do, keep going and keep growing. You only lose when you quit, so don’t quit. 🙂

 

darktable 2.2.0rc3 released

we're proud to announce the fourth release candidate of darktable 2.2.0, with some fixes over the previous release candidate.

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc3.

as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

f7b9e8f5f56b2a52a4fa51e085b8aefe016ab08daf7b4a6ebf3af3464b1d2c29  darktable-2.2.0~rc3.tar.xz
86293aded568903eba3b225d680ff06bc29ea2ed678de05a0fd568aed93a0587  darktable-2.2.0.rc3.3.g9af0d4fcb.dmg

the changelog vs. the stable 2.0.x series is below:

  • Well over 2k commits since 2.0.0

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when, for example, running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow importing/exporting tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Add basic undo/redo support for the darkroom (masks are not accounted for!)
  • Support the Exif date and time when importing photos from camera
  • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
  • Rudimentary CYGM and RGBE color filter array support
  • Nicer web gallery exporter -- now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Hotpixels module: make it actually work for X-Trans
  • Cmstest tool should now produce correct output in more cases, especially in multi-monitor setups.
  • Darkroom histogram now uses more bins: use all 8-bit of the output, not just 6.

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords then put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar XMP schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • Make sure that proper signal handlers are still set after GM initialization...
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Feathering size in some mask shapes can be set with shift+scroll
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series
  • Tone curve: new "automatic in XYZ" mode for "scale chroma"
  • Some compilation fixes

Lua specific changes:

  • All asynchronous calls have been rewritten
  • The darktable-specific implementation of yield was removed
  • darktable.control.execute allows executing shell commands without blocking Lua
  • darktable.control.read allows waiting for a file to be readable without blocking Lua
  • darktable.control.sleep allows pausing Lua execution without blocking other Lua threads
  • darktable.gui.libs.metadata_view.register_info allows adding a new field to the metadata widget in the darkroom view
  • The TextView widget can now be created in Lua, allowing input of large chunks of text
  • It is now possible to use a custom widget in the Lua preference window to configure a preference
  • It is now possible to set the precision and step on slider widgets

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even debian stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.

Base Support:

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-G81 (4:3)
  • Panasonic DMC-G85 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Panasonic DMC-LX10 (3:2)
  • Panasonic DMC-LX15 (3:2)
  • Panasonic DMC-LX9 (3:2)
  • Panasonic DMC-TZ100 (3:2)
  • Panasonic DMC-TZ101 (3:2)
  • Panasonic DMC-TZ110 (3:2)
  • Panasonic DMC-ZS110 (3:2)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX100M5
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCA-99M2
  • Sony ILCE-6300

We were unable to bring back these 2 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800

White Balance Presets:

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-1
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles:

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX100 IS
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5
  • Nikon D5500
  • Olympus E-PL6
  • Olympus E-PM2
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-1
  • Pentax K-S2
  • Ricoh GR
  • Sony DSLR-A900
  • Sony DSC-RX10
  • Sony ILCE-6300
  • Sony NEX-5
  • Sony SLT-A37

New Translations:

  • Hebrew
  • Slovenian

Updated Translations:

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Polish
  • Russian
  • Slovak
  • Spanish
  • Swedish

December 11, 2016

Distributing Python Packages Part I: Creating a Python Package

I write lots of Python scripts that I think would be useful to other people, but I've put off learning how to submit to the Python Package Index, PyPI, so that my packages can be installed using pip install.

Now that I've finally done it, I see why I put it off for so long. Unlike programming in Python, packaging is a huge, poorly documented hassle, and it took me days to get a working package. Maybe some of the hints here will help other struggling Pythonistas.

Create a setup.py

The setup.py file is the file that describes the files in your project and other installation information. If you've never created a setup.py before, Submitting a Python package with GitHub and PyPI has a decent example, and you can find lots more good examples with a web search for "setup.py", so I'll skip the basics and just mention some of the parts that weren't straightforward.

Distutils vs. Setuptools

There's one confusing point that no one seems to mention: setup.py examples all rely on a predefined function called setup, but some examples start with

from distutils.core import setup
while others start with
from setuptools import setup

In other words, there are two different versions of setup! What's the difference? I still have no idea. The setuptools version seems to be a bit more advanced, and I found that when using distutils.core, I'd sometimes get weird errors when trying to follow suggestions I found on the web. So I ended up using the setuptools version.

But I didn't initially have setuptools installed (it's not part of the standard Python distribution), so I installed it from the Debian package:

apt-get install python-setuptools python-wheel

The python-wheel package isn't strictly needed, but I found I got assorted warnings from pip install later in the process ("Cannot build wheel") unless I installed it, so I recommend you install it from the start.
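For reference, a minimal setuptools-based setup.py looks something like this (every name below is a placeholder for illustration, not taken from the article):

```python
# Minimal setup.py sketch -- all names here are placeholders.
from setuptools import setup

setup(
    name="yourpackage",            # the name pip will install it under
    version="0.1",
    description="One-line description of the package",
    author="Your Name",
    packages=["yourpackage"],      # directories containing an __init__.py
)
```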

Including scripts

setup.py has a scripts option to include scripts that are part of your package:

    scripts=['script1', 'script2'],

But when I tried to use it, I had all sorts of problems, starting with scripts not actually being included in the source distribution. There isn't much support for using scripts -- it turns out you're actually supposed to use something called console_scripts, which is more elaborate.

First, you can't have a separate script file, or even a __main__ inside an existing module. You must have a function, typically called main(), so your script file will usually end like this:

def main():
    # do your script stuff

if __name__ == "__main__":
    main()

Then add something like this to your setup.py:

      entry_points={
          'console_scripts': [
              'script1=yourpackage.filename:main',
              'script2=yourpackage.filename2:main'
          ]
      },

There's a secret undocumented alternative that a few people use for scripts with graphical user interfaces: use 'gui_scripts' rather than 'console_scripts'. It seems to work when I try it, but the fact that it's not documented and none of the Python experts even seem to know about it scared me off, and I stuck with 'console_scripts'.

Including data files

One of my packages, pytopo, has a couple of files it needs to install, like an icon image. setup.py has a provision for that:

      data_files=[('/usr/share/pixmaps',      ["resources/appname.png"]),
                  ('/usr/share/applications', ["resources/appname.desktop"]),
                  ('/usr/share/appname',      ["resources/pin.png"]),
                 ],

Great -- except it doesn't work. None of the files actually gets added to the source distribution.

One solution people mention to a "files not getting added" problem is to create an explicit MANIFEST file listing all files that need to be in the distribution. Normally, setup generates the MANIFEST automatically, but apparently it isn't smart enough to notice data_files and include those in its generated MANIFEST.

I tried creating a MANIFEST listing all the .py files plus the various resources -- but it didn't make any difference. My MANIFEST was ignored.

The solution turned out to be creating a MANIFEST.in file, which is used to generate a MANIFEST. It's easier than creating the MANIFEST itself: you don't have to list every file, just patterns that describe them:

include setup.py
include packagename/*.py
include resources/*
If you have any scripts that don't use the extension .py, don't forget to include them as well. This may have been why scripts= didn't work for me earlier, but by the time I found out about MANIFEST.in I had already switched to using console_scripts.

Testing setup.py

Once you have a setup.py, use it to generate a source distribution with:

python setup.py sdist
(You can also use bdist to generate a binary distribution, but you'll probably only need that if you're compiling C as part of your package. Source dists are apparently enough for pure Python packages.)

Your package will end up in dist/packagename-version.tar.gz so you can use tar tf dist/packagename-version.tar.gz to verify what files are in it. Work on your setup.py until you don't get any errors or warnings and the list of files looks right.
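If you'd rather inspect the tarball from Python than shell out to tar, the standard-library tarfile module can list its contents (a small sketch; the example path is hypothetical):

```python
# List the files inside a generated source distribution (.tar.gz)
# using only the standard library.
import tarfile

def sdist_contents(path):
    """Return the member names of a gzipped tar archive."""
    with tarfile.open(path, "r:gz") as tf:
        return tf.getnames()

# e.g. sdist_contents("dist/packagename-version.tar.gz")
```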

Congratulations -- you've made a Python package! I'll post a followup article in a day or two about more ways of testing, and how to submit your working package to PyPI.

Update: Part II is up: Distributing Python Packages Part II: Submitting to PyPI.

December 08, 2016

Fedora Design Interns Update

Fedora Design Team Logo

I wanted to give you an update on the status of the Fedora Design team’s interns. We currently have two interns on our team:

Flock 2016 Logo

Mary Shakshober – (IRC: mshakshober) Mary started her internship full time this summer and amongst other things designed the beautiful, Polish folk art-inspired Flock 2016 logo. She’s currently working limited hours as the school year is back in swing at UNH, but she is still working on design team tickets, including new Fedora booth material designs and a template for Fedora’s logic model.

Suzanne Hillman – (IRC: shillman) Suzanne just started her Outreachy internship with us two days ago. She has been working on UX design research for a new Fedora Hubs feature – Regional Hubs. She’s already had some interviews with Fedora folks who’ve been involved in organizing regional Fedora events, and we’ll be using an affinity mapping exercise along with Matthew Miller to analyze the data she’s collected.

If you see Mary or Suzanne around, please say hi! 🙂

December 05, 2016

Welcome Digital Painters


Welcome Digital Painters

You mean there's art outside photography?

Yes, there really is art outside photography. :)

The history and evolution of painting has undergone a transformation similar to most things adapting to a digital age. As photographers, we adapted techniques and tools commonly used in the darkroom to software, and found new ways to extend what was possible to help us achieve a vision. Just as we adapted our skills to a new environment, so too did traditional artists, like painters.

Pat David Painting by Gustavo Deveze My headshot, as painted by Gustavo Deveze

These artists adapted by not only emulating the results of various techniques, but by pushing forward the boundaries of what was possible through these new (Free Software) tools.

Impetus

Digital painting discussion around Free Software lacks a good outlet for collaboration, one that opens the discussion for others to learn from and participate in. This is similar to the situation the Free Software + photography world was in, which prompted the creation of pixls.us.

Because of this, Americo Gobbo and Elle Stone reached out to us to see if we could create a new category in the community for Digital Painting, with a focus on promoting serious discussion around techniques, processes, and associated tools.

Both of them have been working hard on advancing the capabilities and quality of various Free Software tools for years now. Americo brings with him the interest of other painters who want to help accelerate the growth and adoption of Free Software projects for painting (and more) in a high-quality and professional capacity. A little background about them:

Americo Gobbo studied Fine Arts in Bologna, Italy. Today he lives and works in Brazil, where he continues to develop studies and create experimentation with painting and drawing mainly within the digital medium in which he tries to replicate the traditional effects and techniques from the real world to the virtual.

Imaginary Landscape Painting by Americo Gobbo Imaginary Landscape - Wet sketches, experiments on GIMP 2.9.+
Americo Gobbo, 2016.

Elle Stone is an amateur photographer with a long-standing interest in the history of photography and print making, and in combining painting and photography. She’s been contributing to GIMP development since 2012, mostly in the areas of color management and proper color mixing and blending.

Leaves in May Image by Elle Stone Leaves in May, GIMP-2.9 (GIMP-CCE)
Elle Stone, 2016.

Artists

With this introductory post to the new Digital Painting category we feature Gustavo Deveze, a visual artist using free software. Deveze's work is characterized by mixing different media and techniques. In future posts we want to continue featuring artists using free software.

Gustavo Deveze

Gustavo Deveze is a visual artist and lives in Buenos Aires. He trained as a draftsman at the National School of Fine Arts “Manuel Belgrano”, and filmmaker at IDAC - Instituto de Arte Cinematográfica in Avellaneda, Argentina.

His works utilize different materials and supports, and he has been published by various publishers, although in recent years he has worked mainly in digital format and with free software. He has participated in national and international shows and exhibitions of graphics and cinema, with many awards. His latest exposition can be seen on issuu.com: https://issuu.com/gustavodeveze/docs/inadecuado2edicion

Website: http://www.deveze.com.ar

Cudgels and Bootlickers: The Emperor's happiness - Gustavo Deveze Cudgels and Bootlickers: The Emperor’s happiness - Gustavo Deveze.
Let's be clear: the village's idiot is not tall... - Gustavo Deveze Let’s be clear: the village’s idiot is not tall… - Gustavo Deveze.

Digital Painting Category

The new Digital Painting category is for discussing painting techniques, processes, and associated tools in a digital environment using Free/Libre software. Some relevant topics might include:

  • Emulating non-digital art, drawing on diverse historical and cultural genres and styles of art.

  • Emulating traditional “wet darkroom” photography, drawing on the rich history of photographic and printmaking techniques.

  • Exploring ways of making images that were difficult or impossible before the advent of new algorithms and fast computers to run them on, including averaging over large collections of images.

  • Discussion of topics that transcend “just photography” or “just painting”, such as composition, creating a sense of volume or distance, depicting or emphasizing light and shadow, color mixing, color management, and so forth.

  • Combining painting and photography: Long before digital image editing artists already used photographs as aids to and part of making paintings and illustrations, and photographers incorporated painting techniques into their photographic processing and printmaking.

  • An important goal is also to encourage artists to submit tutorials and videos about Digital Painting with Free Software and to also submit high-quality finished works.

Say Hello!

Please feel free to stop into the new Digital Painting category, introduce yourself, and say hello! I look forward to seeing what our fellow artists are up to.

All images not otherwise specified are licensed CC-BY-NC-SA

December 04, 2016

darktable 2.2.0rc2 released

we're proud to announce the third release candidate of darktable 2.2.0, with some fixes over the previous release candidate.

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc2.

as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

f3ed739f79858a1ce2b3746bbab11994f5fb38db6e96941d84ba475beab890a6  darktable-2.2.0.rc2.tar.xz
5d91cfd1622fb82e8f59db912e8b784a36b83f4a06d179e906f437104edc96f1  darktable-2.2.0.rc2.39.g684e8af41.dmg
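If you want to verify the download without the sha256sum tool, the checksum can be computed with Python's standard library (a sketch; the filename comes from the list above):

```python
# Compute the SHA-256 of a downloaded file in chunks, so even a
# large tarball doesn't need to fit in memory at once.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# e.g. sha256_of("darktable-2.2.0.rc2.tar.xz")
```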

the changelog vs. the stable 2.0.x series is below:

  • Well over 2k commits since 2.0.0

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. This allows access to styles, presets and tags when, for example, running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow importing/exporting tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Add basic undo/redo support for the darkroom (masks are not covered!)
  • Support the Exif date and time when importing photos from camera
  • The input color profile module is now 1/3 faster when the profile is just a matrix (and linear curve)
  • Rudimentary CYGM and RGBE color filter array support
  • Nicer web gallery exporter -- now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Hotpixels module: make it actually work for X-Trans
  • Cmstest tool should now produce correct output in more cases, especially in multi-monitor setups.
  • Darkroom histogram now uses more bins: use all 8-bit of the output, not just 6.

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords, put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar XMP schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • Make sure that proper signal handlers are still set after GM initialization...
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Feathering size in some mask shapes can be set with shift+scroll
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series
  • Tone curve: new "automatic in XYZ" mode for "scale chroma"
  • Some compilation fixes

Lua specific changes:

  • All asynchronous calls have been rewritten
  • The darktable-specific implementation of yield was removed
  • darktable.control.execute allows executing shell commands without blocking Lua
  • darktable.control.read allows waiting for a file to become readable without blocking Lua
  • darktable.control.sleep allows pausing Lua execution without blocking other Lua threads
  • darktable.gui.libs.metadata_view.register_info allows adding a new field to the metadata widget in the darkroom view
  • The TextView widget can now be created in Lua, allowing input of large chunks of text
  • It is now possible to use a custom widget in the Lua preference window to configure a preference
  • It is now possible to set the precision and step on slider widgets

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7/clang-3.3; gcc-5.0+ is recommended
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even Debian stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most Nikon cameras here are just fixes; they were already supported)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Panasonic DMC-LX10 (3:2)
  • Panasonic DMC-LX15 (3:2)
  • Panasonic DMC-LX9 (3:2)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCE-6300

We were unable to bring back these 2 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800

White Balance Presets:

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-1
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles:

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX100 IS
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5
  • Nikon D5500
  • Olympus E-PL6
  • Olympus E-PM2
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-1
  • Pentax K-S2
  • Ricoh GR
  • Sony DSLR-A900
  • Sony DSC-RX10
  • Sony ILCE-6300
  • Sony NEX-5
  • Sony SLT-A37

New Translations:

  • Hebrew
  • Slovenian

Updated Translations:

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Polish
  • Russian
  • Slovak
  • Spanish
  • Swedish

November 30, 2016

Krita 3.1 Release Candidate

Due to illness, and a week later than planned, we are happy to release the first release candidate for Krita 3.1 today. There are a number of important bug fixes, and we intend to fix a number of other bugs still in time for the final release.

  • Fix a crash when saving a document that has a vector layer to anything but the native format (regression in beta 3)
  • Fix exporting images using the commandline on Linux
  • Update the OSX QuickLook plugin to use the right thumbnail sizes
  • Improved zoom menu icons
  • Unify colors on all svg icons
  • Fix tilt-elevation brushes to work properly on a rotated or mirrored canvas
  • Improve drawing with the stabilizer enabled
  • Fix isotropic spacing when painting on a mirrored canvas
  • Fix a race condition when saving
  • Fix multi-window usage: the tool options palette would only be available in the last opened window; now it's available everywhere.
  • Fix a number of memory leaks
  • Fix selecting the saving location for rendering animations (there are still several bugs in that plugin, though — we’re on it!)
  • Improve rendering speed of the popup color selector

You can find out more about what is going to be new in Krita 3.1 in the release notes. The release notes aren’t finished yet, but take a sneak peek all the same!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store is available in the beta channel.

OSX

Source code

November 28, 2016

A Masashi Wakui look with GIMP


A Masashi Wakui look with GIMP

A color bloom fit for night urban landscapes

This tutorial explains how to achieve an effect based on the post processing by photographer Masashi Wakui. His primary subjects appear as urban landscape views of Japan, where he uses some pretty aggressive color toning to complement his scenes, along with a soft 'bloom' effect on the highlights. The results evoke a strong feeling of an almost cyberpunk or futuristic aesthetic (particularly for fans of Blade Runner or Akira!).

Untitled Untitled Untitled

This tutorial started its life in the pixls.us forum, inspired by a forum post seeking assistance on replicating the color grading and overall look/feel of Masashi's photography.

Prerequisites

To follow along will require a couple of plugins for GIMP.

The Luminosity Mask filter will be used to target color grading to specific tones. You can find out more about luminosity masks in GIMP at Pat David’s blog post and his follow-up blog post. If you need to install the script, directions can be found (along with the scripts) at the PIXLS.US GIMP scripts git repository.

You will also need the Wavelet decompose plugin. The easiest way to get this plugin is to use the one available in G’MIC. As a bonus you’ll get access to many other incredible filters as well! Once you’ve installed G’MIC the filter can be found under
Details → Split details [wavelets].

We will do some basic toning and then apply GIMP’s wavelet decompose filter to do some magic. Two things will be used from the wavelet decompose results:

  • the residual
  • the coarsest wavelet scale (number 8 in this case)

The basic idea is to use the residual of the wavelet decompose filter to color the image. What this does is average and blur the colors, which strengthens the effect of the surroundings being colored by the lights. The number of wavelet scales to use depends on the pixel size of the picture; the relative size of the coarsest wavelet scale compared to the picture is the defining parameter. Wavelet scale 8 will then produce overemphasised local contrasts, which accentuate the lights further. This works nicely in pictures with lights, as the brightest areas will be around the lights. Used on a daytime picture this effect will also accentuate brighter areas, leading to a kind of “glow” effect. I tried this as well and it looks good on some pictures while on others it looks just wrong. Try it!
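The decomposition itself can be sketched in a few lines. Below is a minimal 1-D illustration of the split-details idea, assuming simple box blurs as the smoothing kernel (G’MIC works in 2-D and uses different kernels, but the telescoping identity is the same); `box_blur` and `split_details` are illustrative names, not the plugin’s API:

```python
def box_blur(signal, radius):
    """Blur by averaging over a (2*radius+1)-wide window, clamped at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_details(signal, n_scales):
    """Return (scales, residual): scale k holds the detail removed at radius 2**k."""
    scales = []
    current = signal
    for k in range(n_scales):
        blurred = box_blur(current, 2 ** k)
        # detail at this scale = what the blur removed
        scales.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return scales, current  # current is now the residual

signal = [0.0, 1.0, 0.0, 8.0, 0.0, 1.0, 0.0, 0.0]
scales, residual = split_details(signal, 3)

# The decomposition is exact: summing every scale plus the residual
# reconstructs the original signal.
recon = residual[:]
for s in scales:
    recon = [r + d for r, d in zip(recon, s)]
print(all(abs(a - b) < 1e-9 for a, b in zip(recon, signal)))  # True
```

This is why turning individual scale layers on and off (as in the steps below) works: the residual carries the averaged color, each scale carries detail at one size, and together they sum back to the original image.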

We will be applying all the following steps to this picture, taken in Akihabara, Tokyo.

The unaltered photograph The starting image (download full resolution).
  1. Apply the luminosity mask filter to the base picture. We will use this later.

    Filters → Generic → Luminosity Masks

  2. Duplicate the base picture (Ctrl+Shift+D).

    Layer → Duplicate Layer

  3. Tone the shadows of the duplicated picture using the tone curve by lowering the reds in the shadows. If you want your shadows to be less green, slightly raise the blues in the shadows.

    Colors → Curves

    The toning curves
    The photograph with the toning curve applied
  4. Apply a layer mask to the duplicated and toned picture. Choose the DD luminosity mask from a channel.

    Layer → Mask → Add Layer Mask

    Luminosity Mask Added
  5. With both layers visible, create a new layer from what is visible. Call this layer the “blended” layer.

    Layer → New from Visible

    The photograph after the blended layer
  6. Apply the wavelet decompose filter to the “blended” layer and choose 9 as number of detail scales. Set the G’MIC output mode to “New layer(s)” (see below).

    Filters → G’MIC
    Details → Split Details [wavelets]

    G'MIC Split Details Wavelet Decompose dialog Remember to set G’MIC to output the results on New Layer(s).
  7. Make the blended and blended [residual] layers visible. Then set the mode of the blended [residual] layer to color. This will give you a picture with averaged, blurred colors.

    The fully colored photograph
  8. Turn the opacity of the blended [residual] down to 70%, or any other value to your taste, to bring back some color detail.

    The partially colored photograph
  9. Turn on the blended [scale #8] layer, set the mode to grain merge, and see how the lights start shining. Adjust opacity to taste.

    The augmented contrast layer
  10. Optional: Turn the wavelet scale 3 (or any other) on to sharpen the picture and blend to taste.

  11. Make sure the following layers are visible:

    • blended
    • residual
    • wavelet scale 8
    • Any other wavelet scale you want to use for sharpening
  12. Make a new layer from visible

    Layer → New from Visible

  13. Raise and slightly crush the shadows using the tone curve.

    Raise the shadow curve
  14. Optional: Adjust saturation to taste. If there are predominantly white lights and the colors come mainly from other objects, the residual will be washed out, as is the case with this picture.

    I noticed that the reds and yellows were very dominant compared to the greens and blues. So using the Hue-Saturation dialog I raised the master saturation by 70, lowered the yellow saturation by 50, and lowered the red saturation by 40, all using an overlap of 60.

The final result:

The final image! The final result. (Click to compare to original.)
Download the full size result.

Linux communities, we need your help!

There are a lot of Linux communities all over the globe filled with really nice people who just want to help others. Typically these people either can’t code or don’t feel comfortable doing so, and I’d love to harness some of that potential by adding a huge number of new application reviews to the ODRS. At the moment we have about 1100 reviews, mostly covering the more popular applications, and also mostly written in English.

What I would love is for a few groups of people to come together for their next LUG/outreach/InstallFest and sit down together somewhere cozy and write a few reviews. Bonus points if you use a less-well-known application, and even more points if you can write in a language other than English. Submitting a review is easy; just open up GNOME Software, find the application, and click ‘Write a Review‘ at the bottom of the page.

Application reviews help new users decide what to install, and the star ratings you give mean we can return useful search results full of great applications. Please write an email, ask about helping the ODRS, and perhaps you can help a lot of new users next time you meet with your Linuxy friends.

Thanks!

November 26, 2016

FreeCAD Arch development news

It's been quite some time since I last wrote about Arch development, so here is a little overview of what's been going on in the last few weeks. As always, I'll mostly describe what I've been doing myself, but many other people are working very actively on FreeCAD too, so much more is going on. The best...

November 24, 2016

Watching org.libelektra with Qt

libelektra is a configuration library and tool set. It provides a great many capabilities. Here I’d like to show how to observe data model changes caused by key/value manipulations outside of the actual application, inside a user desktop session. libelektra broadcasts changes as D-Bus messages. The Oyranos projects will use this method to sync the settings views of GUIs like qcmsevents, Synnefo and KDE’s KolorManager with libOyranos and its CLI tools in the next release.

Here is a small example of connecting the org.libelektra interface over the QDBusConnection class to a class callback function:

Declare a callback function in your Qt class header:

public slots:
 void configChanged( QString msg );

Add the QtDBus API in your sources:

#include <QtDBus/QtDBus>

Wire the org.libelektra interface to your callback in, e.g., your Qt class’s constructor:

if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration", "org.libelektra", QString(),
 this, SLOT( configChanged( QString ) )) )
 fprintf(stderr, "=================== Done connect\n" );

The org.libelektra signals arrive in your callback:

void Synnefo::configChanged( QString msg )
{
 fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
}

As the number of messages is not always known, it is useful to treat the first message as a ping and update after a small timeout. Here is a more practical elaboration:

// init a gate keeper in the class constructor:
acceptDBusUpdate = true;

void Synnefo::configChanged( QString msg )
{
  // allow the first message to ping
  if(acceptDBusUpdate == false) return;
  // block more messages
  acceptDBusUpdate = false;

  // update the view slightly later and avoid trouble
  QTimer::singleShot(250, this, SLOT( update() ));
}

void Synnefo::update()
{
  // clear the Oyranos settings cache (Oyranos CMS specific)
  oyGetPersistentStrings( NULL );

  // the data model reading from libelektra and GUI update
  // code ...

  // open the door for more messages to come
  acceptDBusUpdate = true;
}

The above code works for both Qt4 and Qt5.

String freeze for the upcoming 2.2 series

This is a call for all our translators: now is the time to bring your .po file in the master branch up to date. We will not ship any translation that is not relatively complete; the exact threshold is still to be determined.

As a quick reminder, these are the steps to update the translation if you are working from git. language_code is not the whole filename of the .po file but just the first part of it; for example, for Italian the language code is it while the filename is it.po. You also have to compile darktable before updating your .po file, as some of the translated files are auto-generated.

cd /path/to/your/darktable/checkout/
git checkout master
git pull
./build.sh
cd po/
intltool-update <language_code>
<edit language_code.po>

If you don't have a build environment set up to compile darktable you can also use this .pot file.

November 23, 2016

darktable 2.2.0rc1 released

we're proud to announce the second release candidate of darktable 2.2.0, with some fixes over the previous release candidate. the most important one might be bringing back read support for very old xmp files (~4 years old).

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc1.

as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

0612163b0020bc3326909f6d7f7cbd8cfb5cff59b8e0ed1a9e2a2aa17d8f308e  darktable-2.2.0~rc1.tar.xz
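If `sha256sum -c` isn't handy, the check can be sketched in a few lines of Python; the filename and expected digest below are the ones from the announcement above:

```python
# Verify a downloaded tarball against a published sha256sum
# (equivalent to running `sha256sum -c` on a checksum file).
import hashlib

EXPECTED = "0612163b0020bc3326909f6d7f7cbd8cfb5cff59b8e0ed1a9e2a2aa17d8f308e"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large tarballs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# print("OK" if sha256_of("darktable-2.2.0~rc1.tar.xz") == EXPECTED else "MISMATCH")
```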

the changelog vs. the stable 2.0.x series is below:

  • Well over 2k commits since 2.0.0

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows access to styles, presets and tags when, for example, running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow importing/exporting tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Add basic undo/redo support for the darkroom (masks are not covered!)
  • Support the Exif date and time when importing photos from camera
  • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
  • Rudimentary CYGM and RGBE color filter array support
  • Nicer web gallery exporter -- now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter out some useless EXIF tags when exporting; this helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Hotpixels module: make it actually work for X-Trans

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords then put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar XMP schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Feathering size in some mask shapes can be set with shift+scroll
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+; gcc-5.0+ is recommended
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even debian stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.

A special note to all the darktable Fedora users: Fedora-provided darktable packages are intentionally built with Lua disabled, so Lua scripting will not work. This breaks e.g. darktable-gimp integration. Please bug Fedora. In the meantime you can fix that by compiling darktable yourself (pass -DDONT_USE_INTERNAL_LUA=OFF to cmake to enable use of the bundled Lua 5.2.4).

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Panasonic DMC-LX10 (3:2)
  • Panasonic DMC-LX15 (3:2)
  • Panasonic DMC-LX9 (3:2)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCE-6300

We were unable to bring back these 3 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800
  • Nikon D3X (12-bit)

White Balance Presets

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-1
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus E-PL6
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-S2
  • Ricoh GR
  • Sony DSLR-A900
  • Sony DSC-RX10
  • Sony SLT-A37

New Translations

  • Hebrew
  • Slovenian

Updated Translations

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Russian
  • Slovak
  • Spanish
  • Swedish

November 22, 2016

Giving Thanks



For an awesome community!

Here in the U.S., we have a big holiday coming up this week: Thanksgiving. Serendipitously, this holiday also happens to fall when a few neat things are happening around the community, and what better time is there to recognize some folks and to give thanks of our own? No time like the present!

A Special Thanks

I feel a special “Thank You” should first go to a photographer and fantastic supporter of the community, Dimitrios Psychogios. Last year for our trip to Libre Graphics Meeting, London he stepped up with an awesome donation to help us bring some fun folks together.

LGM2016 Dinner Fun folks together.
Mairi, the darktable nerds, a RawTherapee nerd, and a PhotoFlow nerd.
(and the nerd taking the photo, patdavid)

This year he was incredibly kind by offering a donation to the community (completely unsolicited) that covers our hosting and infrastructure costs for an entire year! So on behalf of the community, Thank You for your support, Dimitrios!

I’ll be creating a page soon that will list our supporters as a means of showing our gratitude. Speaking of supporters and a new page on the site…

A Support Page

In a forum post, someone asked about the possibility of donating to the community. We had been talking about adding support in darktable for a MIDI controller deck, and the costs for some of the options weren’t too extravagant. This got us thinking that enough small donations could probably cover something like this pretty easily, and if it was community hardware we could make sure it got passed around to each of the projects interested in creating support for it.

KORG NanoControl2 An example MIDI controller that we might get support for in darktable and other projects.

That conversation got me thinking about ways to let folks support the community, in particular ways to make it easy to provide support on an ongoing basis (in addition to simple, one-time donations). There are goal-oriented options out there that folks are probably already familiar with (Kickstarter, Indiegogo and others), but our model is less goal-oriented and more about continuous support.

Patreon was an option as well (and I already had a skeleton Patreon account set up), but the fees were just too much in the end. They wanted a flat 5% along with the regular PayPal fees. The general consensus among the staff was that we wanted to maximize the funds getting to the community.

The best option in the end was to create a merchant account on PayPal and manually set up the various payment options. I’ve set them up similarly to how a service like Patreon might run, with four recurring funding levels and an option for a one-time payment of whatever amount a user would like. Recurring levels are nice because they make planning easier.

We’re Not Asking

Our requirements for the infrastructure of the site are modest and we haven’t actively pursued support or donations for the site before. That hasn’t changed.

We’re not asking for support now. The best way that someone can help the community is by being an active part of it.

Engaging others, sharing what you’ve done or learned, and helping other users out wherever you can. This is the best way to support the community.

I purposely didn’t talk about funding before because I don’t want folks to have to worry or think about it. And before you ask: no, we are not and will not run any advertising on the site. I’d honestly rather just keep paying for things out of my pocket instead.

We’re not asking for support, but we’ll accept it.

With that being said, I understand that there are still some folks who would like to contribute to the infrastructure or help us get hardware to add support for in projects and more. So if you do want to contribute, the page for doing so can be found here:

https://pixls.us/support

There are four recurring funding levels of $1, $3, $5, and $10 per month. There is also a one-time contribution option as well.

We also have an Amazon Affiliate link option. If you’re not familiar with it: you simply click the link to go to Amazon.com, and anything you buy in the next 24 hours gives us a small percentage of the purchase price. It doesn’t affect the price of what you’re buying at all. So if you were going to purchase something from Amazon anyway, and don’t mind, then by all means use our link first to help out!


1000 Users

This week we also finally hit 1,000 users registered on discuss! Which is just bananas to me. I am super thankful for each and every member of the community that has taken the time to participate, share, and generally make one of the better parts of my day catching up on what’s been going on. You all rock!

While we’re talking about the number “1” with a bunch of zeros after it, we recently made some neat improvements to the forums…

100 Megabytes

We are a photography community, and it seemed silly to restrict users from uploading full-quality images or raw files. Previously this was a concern because the server the forums are hosted on has limited disk space (40GB). Luckily, Discourse has an option for storing all forum uploads in Amazon S3 buckets.

I went ahead and created some S3 buckets so that any uploads to the forums are now hosted on Amazon instead of taking up precious space on the server. The costs are quite reasonable (around $0.30/GB right now), and it also means I’ve been able to bump the upload size limit to 100MB for forum posts! You can now drag and drop full-resolution raw files directly into the post editor to include them!

Drag and Drop files in discuss 70MB GIMP .xcf file? Just drag-and-drop to upload, no problem! :)

Travis CI Automation

On a slightly geekier note, did you know that the code for the entire website is available on Github? It’s also licensed liberally (CC-BY-SA), so there’s no reason not to come and fiddle with things with us! One of the features of using Github is integration with Travis CI (Continuous Integration).

What this basically means is that every commit to the Github repo for the website gets picked up by Travis and built to test that everything is working ok. You can actually see the history of the website builds there.

I’ve now got it set up so that when a build succeeds on Travis, it automatically publishes the results to the main webserver and makes them live. Our build system, Metalsmith, is a static site generator: we build the entire website on our local computers when we make changes, and then publish all of those changes to the webserver. This change automates that process by handling the building and publishing whenever everything is ok.
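For the curious, that kind of build-then-deploy setup can be sketched in a `.travis.yml`; the build command and deploy script below are illustrative assumptions, not the site's actual configuration:

```yaml
# Sketch of a Travis CI config that builds a static site and,
# only on a successful master build, publishes it to the webserver.
language: node_js
node_js:
  - "6"
script:
  - npm install
  - npm run build          # run the static site generator (Metalsmith here)
deploy:
  provider: script
  script: ./deploy.sh      # e.g. rsync the generated site to the webserver
  skip_cleanup: true       # keep the freshly built files for deployment
  on:
    branch: master
```

Because the `deploy` stage only runs when `script` succeeds, a broken commit never reaches the live site.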

In fact, if everything is working the way I think it should, this very blog post will be the first one published using the new automated system! Hooray!

You can poke me or @paperdigits on discuss if you want more details or feel like playing with the website.

Mica

Speaking of @paperdigits, I want to close this blog post with a great big “Thank You!“ to him as well. He’s the only other person insane enough to try and make sense of all the stuff I’ve done building the site so far, and he’s been extremely helpful hacking at the website code, writing articles, making good infrastructure suggestions, taking the initiative on things (t-shirts and github repos), and generally being awesome all around.

November 21, 2016

Last batch of ColorHugALS

I’ve got 9 more ColorHugALS devices in stock, and once they are sold there will be no more for sale. With supplier costs recently going up, my “sell at cost price” has turned into “make a small loss on each one”, which isn’t sustainable. It’s all OpenHardware (both the hardware design and the firmware itself), so if someone wanted to start building them for sale they would be doing it with my blessing. Of course, I’m happy to continue supporting the existing sold devices into the distant future.

colorhug-als1-large

The original goal has in part been achieved: the kernel and userspace support for the new SensorHID protocol works great, and ambient light functionality works out of the box for more people on more hardware. I’m slightly disappointed more people didn’t get involved in making the ambient lighting algorithms smarter, but I guess it’s quite a niche area of development.

Plus, in the Apple product development sense, killing off one device lets me start selling something else OpenHardware in the future. :)

November 18, 2016

Solar diagrams in FreeCAD

New feature in FreeCAD: Arch Sites can now display a solar diagram: More info at http://forum.freecadweb.org/viewtopic.php?f=23&p=145036#p145036

Miyazaki Tribute

I am dono, a CG freelancer from Paris, France. I use Blender as my main tool for both personal and professional work.

My workflow was a bit hectic during the creation of my short tribute to Hayao Miyazaki. There are a ton of ways to produce such a film, and everyone has their own workflow, so the best I can do is simply share how I personally did it.

I have always loved the work of Hayao Miyazaki. I already had a lot of references from Blu-rays, art books, mangas and such, so I didn’t spend a lot of time searching for references, but I can say that it’s quite an important task at the beginning of a project. Having good references can save a lot of time.

I simply started the project as a modeling and texturing exercise, just to practice. After modeling the bathhouse from “Spirited Away”, I thought it could be cool to do something more evolved.

miazaki_tribute_01

So I first did a layout with very low-poly meshes to have a realtime preview of the camera movements. I also extracted frames from the movies using Blu-ray footage, making two different quality versions: one with low-res JPGs for realtime preview in the 3D viewport, and a second with raw PNGs for the final renders.

miazaki_tribute_02

I used the realtime previews to edit it all together in Blender’s sequencer. I wanted to find a good tempo and feeling for the music, and with realtime playback in Blender’s viewport it was easy and smooth to build up. I edited the 3D viewport directly, by linking the scene in the sequencer, so I didn’t need to render anything!

miazaki_tribute_03

Next, I did the rotoscoping in Blender, frame by frame. Having used realtime previews for the editing, I already knew exactly how many frames I had to rotoscope, so I didn’t waste any time rotoscoping unnecessary footage. That was crucial, because rotoscoping is very, very time consuming. The very important thing when rotoscoping is to separate parts; you do not want everything in one part. Having separate layers makes it more flexible and faster.

miazaki_tribute_04

Then I modeled and unwrapped the assets in Blender, and textured them in Blender and GIMP. I used one blend file for each asset to limit file sizes, and used linking to bring everything together in one scene. I also created a blend file containing a lot of materials (different kinds of metal, wood), so I could link and reuse them at will. It was worth it, since a modular workflow often really saves time.

miazaki_tribute_05

For the smoke, I used Blender smoke, rendered directly in OpenGL with Blender Internal. That way you can see and correct any mistakes very easily. I also did some dust and fog passes with it.

miazaki_tribute_06

The ocean was done using the ocean modifier in Blender. I baked an image sequence in EXR and used these images for the wave displacement and foam.

miazaki_tribute_07

For rendering I used Octane, since I wanted to try a new renderer for this project, but it could have been done with Cycles without any trouble. I rendered the layers separately: characters, sets, backgrounds and FX. Rendering things separately was very useful: rendering is faster, you can have bigger scenes with more polygons, and above all you can re-render one part if necessary (and it very often was) without rendering the whole image all over again. Renders were saved as 16-bit PNG for the color layers and 32-bit EXR for the Z pass. I also rendered some masks and ID masks. This allowed me to correct details very quickly during compositing without having to render the whole image again. Rendering time for one frame ranged from 4 to 15 minutes.

miazaki_tribute_08

I finished the compositing in Natron, adding glow, vignetting and motion blur. The Z pass was used to add some fog, and the ID masks to correct some object colors. When you have a lot of layer passes from Blender, it is very easy to composite and tweak things quickly. I remember when I used to do everything in one single pass; I rendered over and over to fix errors, and it was very time consuming. Sozap, a friend of mine and a very talented artist, taught me to use separate layers. It was a really great tip, and thanks to him I could work more efficiently.

miazaki_tribute_09

During production, I showed works in progress to my friends, because they could provide a new and fresh look at my work. Sometimes it is hard to receive criticism, but it is important to listen, as it can help you a lot to improve your work. Without that feedback, my short most certainly wouldn’t have looked as it does now. Thanks again to Blackschmoll, Boby, Christophe, Clouclou, Cremuss, David, Félicia, Frenchman, Sozap, Stéphane, Virgil! And thanks to Ton Roosendaal, the Blender community, and the developers of Blender, Gimp and Natron!

miazaki_tribute_10

Check out the making of video!

November 16, 2016

Wed 2016/Nov/16

  • Debugging Rust code inside a C library

    An application that uses librsvg is in fact using a C library that has some Rust code inside it. We can debug C code with gdb as usual, but what about the Rust code?

    Fortunately, Rust generates object code! From the application's viewpoint, there is no difference between the C parts and the Rust parts: they are just part of the librsvg.so to which it linked, and that's it.

    Let's try this. I'll use the rsvg-view-3 program that ships inside librsvg — this is a very simple program that opens a window and displays an SVG image in it. If you build the rustification branch of librsvg (clone instructions at the bottom of that page), you can then run this in the toplevel directory of the librsvg source tree:

    tlacoyo:~/src/librsvg-latest (rustification)$ libtool --mode=execute gdb ./rsvg-view-3

    Since rsvg-view-3 is an executable built with libtool, we can't plainly run gdb on it. We need to invoke libtool with the incantation for "do your magic for shared library paths and run gdb on this binary".

    Gdb starts up, but no shared libraries are loaded yet. I like to set up a breakpoint in main() and run the program with its command-line arguments, so its shared libs will load, and then I can start setting breakpoints:

    (gdb) break main
    Breakpoint 1 at 0x40476c: file rsvg-view.c, line 583.
    
    (gdb) run tests/fixtures/reftests/bugs/340047.svg
    Starting program: /home/federico/src/librsvg-latest/.libs/rsvg-view-3 tests/fixtures/reftests/bugs/340047.svg
    
    ...
    
    Breakpoint 1, main (argc=2, argv=0x7fffffffdd48) at rsvg-view.c:583
    583         int retval = 1;
    (gdb)

    Okay! Now the rsvg-view-3 binary is fully loaded, with all its initial shared libraries. We can set breakpoints.

    But what does Rust call the functions we defined? The functions we exported to C code with the #[no_mangle] attribute of course get the name we expect, but what about internal, Rust-only functions? Let's ask gdb!

    Finding mangled names

    I have a length.rs file which defines an RsvgLength structure with a "parse" constructor: it takes a string which is a CSS length specifier, and returns an RsvgLength structure. I'd like to debug that RsvgLength::parse(), but what is it called in the object code?

    The gdb command to list all the functions it knows about is "info functions". You can pass a regexp to it to narrow down your search. I want a regexp that will match something-something-length-something-parse, so I'll use "ength.*parse". I skip the L in "Length" because I don't know how Rust mangles CamelCase struct names.

    (gdb) info functions ength.*parse
    All functions matching regular expression "ength.*parse":
    
    File src/length.rs:
    struct RsvgLength rsvg_internals::length::rsvg_length_parse(i8 *, enum class LengthDir);
    static struct RsvgLength rsvg_internals::length::{{impl}}::parse(struct &str, enum class LengthDir);

    All right! The first one, rsvg_length_parse(), is a function I exported from Rust so that C code can call it. The second one is the mangled name for the RsvgLength::parse() that I am looking for.

    Printing values

    Let's cut and paste the mangled name, set a breakpoint in it, and continue the execution:

    (gdb) break rsvg_internals::length::{{impl}}::parse
    Breakpoint 2 at 0x7ffff7ac6297: file src/length.rs, line 89.
    
    (gdb) cont
    Continuing.
    [New Thread 0x7fffe992c700 (LWP 26360)]
    [New Thread 0x7fffe912b700 (LWP 26361)]
    
    Thread 1 "rsvg-view-3" hit Breakpoint 2, rsvg_internals::length::{{impl}}::parse (string=..., dir=Both) at src/length.rs:89
    89              let (mut value, rest) = strtod (string);
    (gdb)

    Can we print values? Sure we can. I'm interested in the case where the incoming string argument contains "100%" — this will be parse()d into an RsvgLength value with length.length=1.0 and length.unit=Percent. Let's print the string argument:

    89              let (mut value, rest) = strtod (string);
    (gdb) print string
    $2 = {data_ptr = 0x8bd8e0 "12.0\377\177", length = 4}

    Rust strings are different from null-terminated C strings; they have a pointer to the char data, and a length value. Here, gdb is showing us a string that contains the four characters "12.0". I'll make this a conditional breakpoint so I can continue the execution until string comes in with a value of "100%", but I'll cheat: I'll use the C function strncmp() to test those four characters in string.data_ptr; I can't use strcmp() as the data_ptr is not null-terminated.

    (gdb) cond 2 strncmp (string.data_ptr, "100%", 4) == 0
    (gdb) cont
    Continuing.
    
    Thread 1 "rsvg-view-3" hit Breakpoint 2, rsvg_internals::length::{{impl}}::parse (string=..., dir=Vertical) at src/length.rs:89
    89              let (mut value, rest) = strtod (string);
    (gdb) p string
    $8 = {data_ptr = 0x8bd8e0 "100%", length = 4}

    All right! We got to the case we wanted. Let's execute the next line, the one with "let (mut value, rest) = strtod (string);" in it, and print out the results:

    (gdb) next
    91              match rest.as_ref () {
    (gdb) print value
    $9 = 100
    (gdb) print rest
    $10 = {data_ptr = 0x8bd8e3 "%", length = 1}

    What type did "value" get assigned?

    (gdb) ptype value
    type = f64 

    A floating point value, as expected.

    You can see that the value of rest indicates that it is a string with "%" in it. The rest of the parse() function will decide that in fact it is a CSS length specified as a percentage, and will translate our value of 100 into a normalized value of 1.0 and a length.unit of LengthUnit.Percent.

    Summary

    Rust generates object code with debugging information, which gets linked into your C code as usual. You can therefore use gdb on it.

    Rust creates mangled names for methods. Inside gdb, you can find the mangled names with "info functions"; pass it a regexp that is close enough to the method name you are looking for, unless you want tons of function names from the whole binary and all its libraries.

    You can print Rust values in gdb. Strings are special because they are not null-terminated C strings.
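
    As a minimal illustration (my own sketch, not from the original post), the (pointer, length) pair that gdb printed corresponds directly to what a Rust `&str` exposes:

    ```rust
    // Minimal sketch (not from the post): a Rust &str is a
    // pointer-plus-length slice, not a NUL-terminated C string, which is
    // why gdb printed it as {data_ptr = ..., length = 4}.
    fn main() {
        let s: &str = "100%";
        // The same two fields gdb showed: the data pointer and the length.
        println!("data_ptr = {:p}, length = {}", s.as_ptr(), s.len());
        assert_eq!(s.len(), 4);
        assert!(!s.as_bytes().contains(&0)); // no trailing NUL byte
    }
    ```

    This is also why the conditional breakpoint above had to use strncmp() with an explicit length of 4 rather than strcmp().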

    You can set breakpoints, conditional breakpoints, and do pretty much all the gdb magic that you expect.

    I didn't have to do anything for gdb to work with Rust. The version that comes in openSUSE Tumbleweed works fine. Maybe it's because Rust generates standard object code with debugging information, which gdb readily accepts. In any case, it works out of the box and that's just as it should be.

Krita 3.1 Beta 4 Released

Here is the fourth Krita 3.1 beta! From Krita 3.1 on, Krita will officially support OSX. All OSX users are urged to use this version instead of earlier “stable” versions for OSX. We’re releasing a fourth beta because of more changes to the code that actually saves your images… We’ve tried to make it much safer, but please do test this!

These are the most important fixes:

  • Improve the framerate of the stabilizer
  • Fix a regression with saving animations in the previous beta (3.0.92)
  • Fix some crashes when drawing in the scratch pad
  • Make the color selector in the color-to-alpha filter work properly

Note: the beta still contains the colorize mask/lazy brush plugin. We will probably remove that feature in the final release because the current algorithm is too slow to be usable, and we’re still looking for and experimenting with new algorithms. With the current beta you will get a preview of how the user interface will work, but keep in mind that we know it’s too slow to be usable and we are working on fixing that.

The fourth beta is also much more stable and usable than earlier builds, and we’d like to ask everyone to try to use this version in production and help us find bugs and issues!

You can find out more about what is going to be new in Krita 3.1 in the release notes. The release notes aren’t finished yet, but take a sneak peek all the same!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store is available in the beta channel.

OSX

Source code

Responsive HTML with CSS and JavaScript

In this article you can learn how to make a minimalist web page readable on different devices, from large desktop screens to handhelds. The ingredients are HTML, CSS and a little JavaScript. The goals for my home page are:

  • most of the layout resides in CSS in a stateless way
  • minimal JavaScript
  • on small displays – single column layout
  • on wide format displays – division of text in columns
  • count of columns adapts to browser window width or screen size
  • combine with markdown

CSS:

h1,h2,h3 {
  font-weight: bold;
  font-style: normal;
}

@media (min-width: 1000px) {
  .tiles {
    display: flex;
    justify-content: space-between;
    flex-wrap: wrap;
    align-items: flex-start;
    width: 100%;
  }
  .tile {
    flex: 0 1 49%;
  }
  .tile2 {
    flex: 1 280px
  }
  h1,h2,h3 {
    font-weight: normal;
  }
}
@media (min-width: 1200px) {
  @supports ( display: flex ) {
    .tile {
      flex: 0 1 24%;
    }
  }
}

The content in class=”tile” is shown in one column up to four columns. tile2 has a fixed width and picks its column count by itself. By default, all flex boxes behave like one normal column. With @media (min-width: 1000px), a bigger screen is assumed. Very likely there is an overlapping width range for bigger handhelds, tablets and smaller laptops, but the layout works reasonably well when shrinking the web browser on a desktop or viewing fullscreen, and remains readable. Expressing all the tile styling in flex: syntax helps keep compatibility with layout engines that do not support flex, e.g. Dillo.

For reading on high-DPI monitors and small screens, it is essential to set the font size properly. Update: Google and Mozilla recommend a meta “viewport” tag to signal to browsers that they are prepared to handle scaling properly. No JavaScript is needed for that.

<meta name="viewport" content="width=device-width, initial-scale=1.0">

[Outdated: I found no way to do that in CSS so far. JavaScript:]

function make_responsive () {
  if( typeof screen != "undefined" ) {
    var fontSize = "1rem";
    if( screen.width < 400 ) {
      fontSize = "2rem";
    }
    else if( screen.width < 720 ) {
      fontSize = "1.5rem";
    }
    else if( screen.width < 1320 ) {
      fontSize = "1rem";
    }
    if( typeof document.children === "object" ) {
      var obj = document.children[0]; // html node
      obj.style["font-size"] = fontSize;
    } else if( typeof document.body != "undefined" ) {
      document.body.style.fontSize = fontSize;
    }
  }
}
document.addEventListener( "DOMContentLoaded", make_responsive, false );
window.addEventListener( "orientationchange", make_responsive, false );

[The above JavaScript carefully checks various browser attributes and scales the font size to compensate for small screens and keep the text readable.]

The above method works in all tested browsers (Firefox, Chrome, Konqueror, IE) except Dillo, and on all platforms (Linux/KDE, Android, WP8.1). The meta tag method also works better for printing.

Below is some markup to illustrate the approach.

HTML:

<div class="tiles">
<div class="tile"> My first text goes here. </div>
<div class="tile"> Second text goes here. </div>
<div class="tile"> Third text goes here. </div>
<div class="tile"> Fourth text goes here. </div>
</div>

In my previous articles you can read about using CSS3 for Translation and Web Open Font Format (WOFF) for Web Documents.

November 15, 2016

Fedora Hubs and Meetbot: A Recursive Tale

Fedora Hubs

Hubs and Chat Integration Basics

One of the planned features of Fedora Hubs that I am most excited about is chat integration with Fedora development chat rooms. As a mentor and onboarder of designers and other creatives into the Fedora project, I’ve witnessed IRC causing a lot of unnecessary pain and delay in the onboarding experience. The idea we have for Hubs is to integrate Fedora’s IRC channels into the Hubs web UI, requiring no IRC client installation and configuration on the part of users in order to be able to participate. The model is meant to be something like this:

Diagram showing individual hubs mapping to individual IRC channels / privmsgs.

By default, any given hub won’t have an IRC chat window. And whether or not a chat window appears on the hub is configurable by the hub admin (they can choose to not display the chat widget.) However, the hub admin may map their hub to a specific channel – whatever is appropriate for their team / project / self – and the chat widget on their hub will give visitors the possibility to interact with that team via chat, right in the web interface. Early mockups depict this feature looking something like this, for inclusion on a team or project hub (a PM window for user hubs):

mockup showing an irc widget for #fedora-design on the design team hub

Note this follows our general principle of enabling new contributors while not uprooting our existing ones. We followed this with HyperKitty – if you prefer to interact with mailing lists on the web, you can, but if you’ve got your own email-based workflow and client that you don’t want to change at all, HyperKitty doesn’t affect you. Same principle here: if you’ve got an IRC client you like, no change for you. This is just an additional interface by which new folks can interact with you in the same places you already are.

Implementation is planned to be based on waartaa, for which the lead Hubs developer Sayan Chowdhury is also an upstream developer.

Long-term, we (along with waartaa upstream) have been thinking about matrix as a better chat protocol that waartaa could support or be ported to in the future. (I personally have migrated from HexChat to Riot.im – popular matrix web + smartphone client – as my only client to connect to Freenode. The experiment has gone quite well. I access my usual freenode channels using Riot.im’s IRC bridges.) So when we think about implementing chat, we also keep in mind the protocol underneath may change at some point.

That’s a high-level explanation of how we’re thinking about integrating chat into Hubs.

Next Level: HALP!!1

As of late, Aurélien Bompard has been investigating the “Help/Halp” feature for Fedora Hubs. (https://pagure.io/fedora-hubs/issue/98)

The general idea is to have a widget that aggregates all help requests (created using the meetbot #help command while meeting minutes are being recorded) across all teams / meetings, providing a single place to sort through them. Folks (particularly new contributors) looking for things they can help out with can refer to it as a nice, timely bucket of needed tasks with clear suggestions for how to get started. (Timely, because new contributors want to help with tasks that are needed now, and not waste their time on requests that are stale and no longer needed or already fixed. On the other side, the widget helps bring some attention to the requests of people in need of help, hopefully increasing the chances they’ll get the help they are looking for.)

The mechanism for generating the list of help requests is to gather #help requests from meeting minutes and display them from most recent to least recent. The chances you’ll find a task that is actually needed now are high. As the requests age, they scroll further and further back into the backlog until they are no longer displayed (the idea being that if enough time has passed, the help is likely no longer needed or has already been provided). The contact point for would-be helpers is easy: the person who ran the #help command in the meeting is listed as a contact to sync up with to get started.

The mockups are available in the ticket, but are shown below as well for purposes of illustration:

Main help widget, showing active help requests across various Fedora teams

Mockup showing UI panel where someone can volunteer to help someone with a request.

An issue that came up has to do with the mapping we talked about earlier. Many Fedora team meetings occur in #fedora-meeting-*; e.g., #fedora-meeting, #fedora-meeting-1, etc. Occasionally, Fedora meetings occur in a team channel (e.g., #fedora-design) that may not map up with the team’s ‘namespace’ in other applications (e.g., our mailing list is design-team. Our pagure.io repo is ‘/design’.) Based on how Fedora teams use IRC and how meetbot works, we cannot rely on the channel name to get the correct namespace / hub name for a team making a request during a meeting using the meetbot #help command.

Meetbot does also have a mechanism to set a topic for a meeting, and many teams use this to identify the team meeting – in fact, it’s required to start a meeting now – but depending on who is running the meeting, this freeform field can vary. (For instance – the design team has meetings marked fedora_design, fedora-design, designteam, design-team, design, etc. etc.) So the topic field in the fedmsg meetbot puts out may also not be reliable for pointing to a hub / team.

One idea we talked about in our meeting a couple of weeks ago, as well as in last week’s meeting, was having some kind of lookup table to map a team to all of its various namespaces in different applications. The problem is that meetbot issues the fedmsgs used to generate the halp widget’s list of requests as soon as the #help command is issued, so it is meetbot itself that would need to look up the mapping in order to emit the correct team name in its fedmsg. We couldn’t write some kind of script to reconcile things after the meeting concluded. Meetbot itself needs to be changed for this to work, so that the #help requests it puts out on fedmsg have the correct team names associated with them.

Which Upstream is Less Decomposed?

Do you see dead upstreams?

Zombie artwork credit: Zombies Silhouette by GDJ on OpenClipArt.

We determined we needed to make a change to meetbot. meetbot is a plugin to an IRC bot called supybot. Fedora infrastructure doesn’t actually use supybot to run meetbot, though. (There haven’t been any commits to supybot for about 2 years.) Instead, we use a fork called limnoria that is Python 3-based and has various enhancements applied to it.

How about meetbot? Well, meetbot hasn’t been touched by its upstream since 2009 (7 years ago). I believe Fedora carries some local patches to it. In talking with Kevin Fenzi, we discovered there is a newer fork of meetbot maintained by the upstream OpenStack team. That hadn’t seen activity in 3 years, according to GitHub.

Aurélien contacted the upstream OpenStack folks and discovered that, pending a modification to implement file-based configs to enable deployment using tools like Ansible, they were looking to port their supybot plugins (including meetbot) to errbot and migrate to that. So we had a choice – we could implement what we needed on top of their newer meetbot as is and they would be willing to work with us, or we could join their team in migrating to errbot, participate in the meetbot porting process, and use errbot going forward. Errbot appears to have a very active upstream with many plugins available already.

How Far Down the Spiral Do We Go?

To unravel ourselves a bit from the spiral of recursion here… remember, we’re trying to implement a simple Help widget for Fedora Hubs. As we’ve discovered, the technologies we need to interact with to make the feature happen are a bit zombified. What to do?

We agreed that the overall mission of Fedora Hubs as a project is to make collaboration in Fedora more efficient and easy for everyone. In this situation specifically, we decided that migrating to errbot and upgrading a ported meetbot to allow for mapping team namespaces to meeting minutes would be the right way to go. It’s definitely not the easy way, but we think it’s the right way.

It’s our hope in general that as we work our way through implementing Hubs as a unified interface for collaboration in Fedora, we expose deficiencies present in the underlying apps and are able to identify and correct them as we go. This hopefully will result in a better experience for everyone using those apps, whether or not they are Hubs users.

Want to Help?

we need your help!

Does this sound interesting? Want to help us make it happen? Here’s what you can do:

  • Come say hi on the hubs-devel mailing list, introduce yourself, read up on our past meeting minutes.
  • Join us during our weekly meetings on Tuesdays at 15:00 UTC in #fedora-hubs on irc.freenode.net.
  • Reach out to Aurélien and coordinate with him if you’d like to help with the meetbot porting effort to errbot. You may want to check out those codebases as well.
  • Reach out to Sayan if you’d like to help with the implementation of waartaa to provide IRC support in Fedora Hubs!
  • Hit me up if you’ve got ideas or would like to help out with any of the UX involved!

Ideas, feedback, questions, etc. provided in a respectful manner are welcome in the comments.

CSS3 for Translation

Years ago I used a CMS to bring content to a web page. But with evolving CSS, markdown syntax and comfortable git hosting, publication of smaller sites can be handled without a CMS. My home page is translated, and I wanted to express the page translations in a stateless way. The ingredients are simple. My requirements are:

  • stateless CSS, no javascript
  • integrable with markdown syntax (html tags are ok’ish)
  • default language shall remain visible, when no translation was found
  • hopefully searchable by robots (Those need to understand CSS.)

CSS:

/* hide translations initially */
.hide {
  display: none
}
/* show a browser detected translation */
:lang(de) { display: block; }
li:lang(de) { display: list-item; }
a:lang(de) { display: inline; }
em:lang(de) { display: inline; }
span:lang(de) { display: inline; }

/* hide default language, if a translation was found */
:lang(de) ~ [lang=en] {
 display: none;
}

The CSS uses the display property of the element matched by the :lang() selector. However, the selectors for the different display: types are somewhat long; not as short as I would like.

Markdown:

<span lang="de" class="hide"> Hallo _Welt_. </span>
<span lang="en"> Hello _World_. </span>

Even so, the plain markdown text does not look as straightforward as before, but it is acceptable IMO.

Hiding the default language uses the sibling combinator E ~ F and selects an element containing the lang=”en” attribute. Matching elements are hidden (display: none;). Here, that is the default-language string “Hello _World_.” with the lang=”en” attribute. This approach works fine in Firefox (49), Chrome (54), Konqueror (4.18, khtml & WebKit) and WP8.1 with Internet Explorer. Dillo (3.0.5) does not show the translation, only the English text, which is the correct fallback for an engine without :lang() support.

On my search I found approaches for content swapping with CSS: :lang()::before { content: xxx; }. But those were not very accessible. Comments and ideas welcome.

Lyon GNOME Bug day #1

Last Friday, both a GNOME bug day and a bank holiday, a few of us got together to squash some bugs, and discuss GNOME and GNOME technologies.

Guillaume, a newcomer in our group, tested the captive portal support for NetworkManager and GNOME in Gentoo, and added instructions on how to enable it to their Wiki. He also tested a gateway related configuration problem, the patch for which I merged after a code review. Near the end of the session, he also rebuilt WebKitGTK+ to test why Google Docs was not working for him anymore in Web. And nobody believed that he could build it that quickly. Looks like opinions based on past experiences are quite hard to change.

Mathieu worked on removing jhbuild's .desktop file as nobody seems to use it, and it was creating the Sundry category for him, in gnome-shell. He also spent time looking into the tracker blocker that is Mozilla's Focus, based on disconnectme's block lists. It's not as effective as uBlock when it comes to blocking adverts, but the memory and performance improvements, and the slow churn rate, could make it a good default blocker to have in Web.

Haïkel looked into using Emeus, potentially the new GTK+ 4.0 layout manager, to implement the series properties page for Videos.

Finally, I added Bolso to jhbuild, and struggled to get gnome-online-accounts/gnome-keyring to behave correctly in my installation, as the application just did not want to log in properly to the service. I also discussed Fedora's privacy policy (inappropriate for Fedora Workstation, as it doesn't cover the services used in the default installation), a potential design for Flatpak support of joypads and removable devices in general, as well as the future design of the Network panel.

November 14, 2016

João Almeida's darktable Presets


A gorgeous set of film emulation for darktable

I realize that I’m a little late to this, but photographer João Almeida has created a wonderful set of film emulation presets for darktable that he uses in his own workflow for personal and commissioned work. Even more wonderful is that he has graciously released them for everyone to use.

These film emulations started as a personal side project for João, and he adds a disclaimer to them that he did not optimize them all for each brand or model of his cameras. His end goal was for these to be as simple as possible by using a few darktable modules. He describes it best on his blog post about them:

The end goal of these presets is to be as simple as possible by using few Darktable modules, it works solely by manipulating Lab Tone Curves for color manipulation, black & white films rely heavily on Channel Mixer. Since I what I was aiming for was the color profiles of each film, other traits related with processing, lenses and others are unlikely to be implemented, this includes: grain, vignetting, light leaks, cross-processing, etc.

Some before/after samples from his blog post:

João Portra 400 (click to compare to original)
João Kodachrome 64 (click to compare to original)
João Velvia 50 (click to compare to original)

You can read more on João’s website and you can see many more images on Flickr with the #t3mujinpack tag. The full list of film emulations included with his pack:

  • AGFA APX 25, 100
  • Fuji Astia 100F
  • Fuji Neopan 1600, Acros 100
  • Fuji Pro 160C, 400H, 800Z
  • Fuji Provia 100F, 400F, 400X
  • Fuji Sensia 100
  • Fuji Superia 100, 200, 400, 800, 1600, HG 1600
  • Fuji Velvia 50, 100
  • Ilford Delta 100, 400, 3200
  • Ilford FP4 125
  • Ilford HP5 Plus 400
  • Ilford XP2
  • Kodak Ektachrome 100 GX, VS
  • Kodak Ektar 100
  • Kodak Elite Chrome 400
  • Kodak Kodachrome 25, 64, 200
  • Kodak Portra 160 NC, VC
  • Kodak Portra 400 NC, UC, VC
  • Kodak Portra 800
  • Kodak T-Max 3200
  • Kodak Tri-X 400

If you see João around the forums stop and say hi (and maybe a thank you). Even better, if you find these useful, consider buying him a beer (donation link is on his blog post)!

Mon 2016/Nov/14

  • Exposing Rust objects to C code

    When librsvg parses an SVG file, it will encounter elements that generate path-like objects: lines, rectangles, polylines, circles, and actual path definitions. Internally, librsvg translates all of these into path definitions. For example, librsvg will read an element from the SVG that defines a rectangle like

    <rect x="20" y="30" width="40" height="50" style="..."></rect> 

    and translate it into a path definition with the following commands:

    move_to (20, 30)
    line_to (60, 30)
    line_to (60, 80)
    line_to (20, 80)
    line_to (20, 30)
    close_path ()
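
    The arithmetic behind that translation can be sketched as follows (a hedged illustration; the function name is mine, not librsvg's):

    ```rust
    // Hypothetical sketch (names are mine, not librsvg's) of turning a
    // rect's (x, y, width, height) into the path commands listed above.
    fn rect_to_path_commands(x: f64, y: f64, w: f64, h: f64) -> Vec<String> {
        vec![
            format!("move_to ({}, {})", x, y),
            format!("line_to ({}, {})", x + w, y),     // top edge
            format!("line_to ({}, {})", x + w, y + h), // right edge
            format!("line_to ({}, {})", x, y + h),     // bottom edge
            format!("line_to ({}, {})", x, y),         // left edge, back to start
            "close_path ()".to_string(),
        ]
    }

    fn main() {
        // The rect from the example: x=20, y=30, width=40, height=50.
        for cmd in rect_to_path_commands(20.0, 30.0, 40.0, 50.0) {
            println!("{}", cmd);
        }
    }
    ```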

    But where do those commands live? How are they fed into Cairo to actually draw a rectangle?

    Get your Cairo right here

    One of librsvg's public API entry points is rsvg_handle_render_cairo():

    gboolean rsvg_handle_render_cairo (RsvgHandle * handle, cairo_t * cr);

    Your program creates an appropriate Cairo surface (a window, an off-screen image, a PDF surface, whatever), obtains a cairo_t drawing context for the surface, and passes the cairo_t to librsvg using that rsvg_handle_render_cairo() function. It means, "take this parsed SVG (the handle), and render it to this cairo_t drawing context".

    SVG files may look like an XML-ization of a tree of graphical objects: here is a group which contains a blue rectangle and a green circle, and here is a closed Bézier curve with a black outline and a red fill. However, SVG is more complicated than that; it allows you to define objects once and recall them later many times, it allows you to use CSS cascading rules for applying styles to objects ("all the objects in this group are green unless they define another color on their own"), to reference other SVG files, etc. The magic of librsvg is that it resolves all of that into drawing commands for Cairo.

    Feeding a path into Cairo

    This is easy enough: Cairo provides an API for its drawing context with functions like

    void cairo_move_to (cairo_t *cr, double x, double y);
    
    void cairo_line_to (cairo_t *cr, double x, double y);
    
    void cairo_close_path (cairo_t *cr);
    
    /* Other commands omitted */

    Librsvg doesn't feed paths to Cairo as soon as it parses them from the XML; that is deferred until rendering time. In the meantime, librsvg has to keep an intermediate representation of the path data.

    Librsvg uses an RsvgPathBuilder object to hold on to this path data for as long as needed. The API is simple enough:

    pub struct RsvgPathBuilder {
       ...
    }
    
    impl RsvgPathBuilder {
        pub fn new () -> RsvgPathBuilder { ... }
    
        pub fn move_to (&mut self, x: f64, y: f64) { ... }
    
        pub fn line_to (&mut self, x: f64, y: f64) { ... }
    
        pub fn curve_to (&mut self, x2: f64, y2: f64, x3: f64, y3: f64, x4: f64, y4: f64) { ... }
    
        pub fn close_path (&mut self) { ... }
    }

    This mimics the sub-API of cairo_t to build paths, except that instead of feeding them immediately into the Cairo drawing context, RsvgPathBuilder builds an array of path commands that it will later replay to a given cairo_t. Let's look at the methods of RsvgPathBuilder.

    "pub fn new () -> RsvgPathBuilder" - this doesn't take a self parameter; you could call it a static method in languages that support classes. It is just a constructor.

    "pub fn move_to (&mut self, x: f64, y: f64)" - This one is a normal method, as it takes a self parameter. It also takes (x, y) double-precision floating point values for the move_to command. Note the "&mut self": this means that you must pass a mutable reference to an RsvgPathBuilder, since the method will change the builder's contents by adding a move_to command. It is a method that changes the state of the object, so it must take a mutable object.

    The other methods for path commands are similar to move_to. None of them have return values; if they did, they would have a "-> ReturnType" after the argument list.
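    To make the behavior concrete, here is a minimal, self-contained sketch of the same idea: a builder that accumulates commands in a vector for later replay. The PathCommand enum here is a toy stand-in for the real cairo::PathSegment type, and the struct is simplified from the real RsvgPathBuilder:

    ```rust
    // Toy stand-in for cairo::PathSegment
    #[derive(Debug, PartialEq)]
    enum PathCommand {
        MoveTo(f64, f64),
        LineTo(f64, f64),
        ClosePath,
    }

    struct PathBuilder {
        commands: Vec<PathCommand>,
    }

    impl PathBuilder {
        fn new() -> PathBuilder {
            PathBuilder { commands: Vec::new() }
        }

        // &mut self: this method mutates the builder, so the caller
        // must hold a mutable binding or reference.
        fn move_to(&mut self, x: f64, y: f64) {
            self.commands.push(PathCommand::MoveTo(x, y));
        }

        fn line_to(&mut self, x: f64, y: f64) {
            self.commands.push(PathCommand::LineTo(x, y));
        }

        fn close_path(&mut self) {
            self.commands.push(PathCommand::ClosePath);
        }
    }

    fn main() {
        let mut builder = PathBuilder::new(); // must be mut to call the methods
        builder.move_to(0.0, 0.0);
        builder.line_to(10.0, 0.0);
        builder.close_path();

        // The commands are stored, not drawn; replaying them into a
        // cairo_t would happen later, at rendering time.
        assert_eq!(builder.commands.len(), 3);
        assert_eq!(builder.commands[0], PathCommand::MoveTo(0.0, 0.0));
    }
    ```
    
    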

    But that RsvgPathBuilder is a Rust object! And it still needs to be called from the C code in librsvg that hasn't been ported over to Rust yet. How do we do that?

    Exporting an API from Rust to C

    C doesn't know about objects with methods, even though you can fake them pretty well with structs and pointers to functions. Rust doesn't try to export structs with methods in a fancy way; you have to do that by hand. This is no harder than writing a GObject implementation in C, fortunately.

    Let's look at the C header file for the RsvgPathBuilder object, which is entirely implemented in Rust. The C header file is rsvg-path-builder.h. Here is part of that file:

    typedef struct _RsvgPathBuilder RsvgPathBuilder;
    
    G_GNUC_INTERNAL
    void rsvg_path_builder_move_to (RsvgPathBuilder *builder,
                                    double x,
                                    double y);
    G_GNUC_INTERNAL
    void rsvg_path_builder_line_to (RsvgPathBuilder *builder,
                                    double x,
                                    double y);

    Nothing special here. RsvgPathBuilder is an opaque struct; we declare it like that just so we can take a pointer to it as in the rsvg_path_builder_move_to() and rsvg_path_builder_line_to() functions.

    How about the Rust side of things? This is where it gets more interesting. This is part of path-builder.rs:

    extern crate cairo;                                                         // 1
    
    pub struct RsvgPathBuilder {                                                // 2
        path_segments: Vec<cairo::PathSegment>,
    }
    
    impl RsvgPathBuilder {                                                      // 3
        pub fn move_to (&mut self, x: f64, y: f64) {                            // 4
            self.path_segments.push (cairo::PathSegment::MoveTo ((x, y)));      // 5
        }
    }
    
    #[no_mangle]                                                                    // 6
    pub extern fn rsvg_path_builder_move_to (raw_builder: *mut RsvgPathBuilder,     // 7
                                             x: f64,
                                             y: f64) {
        assert! (!raw_builder.is_null ());                                          // 8
    
        let builder: &mut RsvgPathBuilder = unsafe { &mut (*raw_builder) };         // 9
    
        builder.move_to (x, y);                                                     // 10
    }

    Let's look at the numbered lines:

    1. We use the cairo crate from the excellent gtk-rs, the Rust binding for GTK+ and Cairo.

    2. This is our Rust structure. Its fields are not important for this discussion; they are just what the struct uses to store Cairo path commands.

    3. Now we begin implementing methods for that structure. These are Rust-side methods, not visible from C. In 4 and 5 we see the implementation of ::move_to(); it just creates a new cairo::PathSegment and pushes it to the vector of segments.

    6. The "#[no_mangle]" line instructs the Rust compiler to put the following function name in the .a library just as it is, without any name mangling. The function name without name mangling looks just like rsvg_path_builder_move_to to the linker, as we expect. A name-mangled Rust function looks like _ZN14rsvg_internals12path_builder15RsvgPathBuilder8curve_to17h1b8f49042ff19daaE — you can explore these with "objdump -x rust/target/debug/librsvg_internals.a"

    7. "pub extern fn rsvg_path_builder_move_to (raw_builder: *mut RsvgPathBuilder". This is a public function with an exported symbol in the .a file, not an internal one, as it will be called from the C code. And the "raw_builder: *mut RsvgPathBuilder" is Rust-ese for "a pointer to an RsvgPathBuilder with mutable contents". If this were only an accessor function, we would use a "*const RsvgPathBuilder" argument type.

    8. "assert! (!raw_builder.is_null ());". You can read this as "g_assert (raw_builder != NULL);" if you come from GObject land.

    9. "let builder: &mut RsvgPathBuilder = unsafe { &mut (*raw_builder) }". This declares a builder variable, of type &mut RsvgPathBuilder, which is a reference to a mutable path builder. The variable gets initialized with the result of "&mut (*raw_builder)": first we de-reference the raw_builder pointer with the asterisk, and convert that to a mutable reference with the &mut. De-referencing pointers that come from who-knows-where is an unsafe operation in Rust, as the compiler cannot guarantee their validity, and so we must wrap that operation in an unsafe{} block. This is like telling the compiler, "I acknowledge that this is potentially unsafe". Already this is better than life in C, where *every* de-reference is potentially dangerous; in Rust, only those that "bring in" pointers from the outside are potentially dangerous.

    10. Now we have a Rust-side reference to an RsvgPathBuilder object, and we can call the builder.move_to() method as in regular Rust code.
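    As a standalone illustration of steps 8-10 (this is not librsvg code), here is the same null check and unsafe re-borrow applied to a plain i32:

    ```rust
    // Takes a raw pointer, as a C caller would pass it.
    fn bump(raw: *mut i32) {
        // Step 8: refuse null pointers, like g_assert (ptr != NULL).
        assert!(!raw.is_null());

        // Step 9: de-referencing the raw pointer is unsafe; once wrapped,
        // we have an ordinary mutable reference again.
        let reference: &mut i32 = unsafe { &mut *raw };

        // Step 10: from here on, normal safe Rust.
        *reference += 1;
    }

    fn main() {
        let mut value = 41;
        bump(&mut value); // &mut i32 coerces to *mut i32 at the call
        assert_eq!(value, 42);
    }
    ```
    
    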

    Those are methods. And the constructor/destructor?

    Excellent question! We defined an absolutely conventional method, but we haven't created a Rust object and sent it over to the C world yet. And we haven't taken a Rust object from the C world and destroyed it when we are done with it.

    Construction

    Here is the C prototype for the constructor, exactly as you would expect from a GObject library:

    G_GNUC_INTERNAL
    RsvgPathBuilder *rsvg_path_builder_new (void);

    And here is the corresponding implementation in Rust:

    #[no_mangle]
    pub unsafe extern fn rsvg_path_builder_new () -> *mut RsvgPathBuilder {    // 1
        let builder = RsvgPathBuilder::new ();                                 // 2
    
        let boxed_builder = Box::new (builder);                                // 3
    
        Box::into_raw (boxed_builder)                                          // 4
    }

    1. Again, this is a public function with an exported symbol. However, this whole function is marked as unsafe since it returns a pointer, a *mut RsvgPathBuilder. To Rust this declaration means, "this pointer will be out of your control", hence the unsafe. With that we acknowledge our responsibility in handling the memory to which the pointer refers.

    2. We instantiate an RsvgPathBuilder with normal Rust code...

    3. ... and ensure that the object is put in the heap by Boxing it. This is a common operation in garbage-collected languages. Boxing is Rust's primitive for putting data in the program's heap; it allows the object in question to outlive the scope where it got created, i.e. the duration of the rsvg_path_builder_new() function.

    4. Finally, we call Box::into_raw() to ask Rust to give us a pointer to the contents of the box, i.e. the actual RsvgPathBuilder struct that lives there. This statement doesn't end in a semicolon, so it is the return value for the function.

    You could read this as "builder = g_new (...); initialize (builder); return builder;". Allocate something in the heap and initialize it, and return a pointer to it. This is exactly what the Rust code is doing.

    Destruction

    This is the C prototype for the destructor. This is not a reference-counted GObject; it is just an internal thing in librsvg, which does not need reference counting.

    G_GNUC_INTERNAL
    void rsvg_path_builder_destroy (RsvgPathBuilder *builder);

    And this is the implementation in Rust:

    #[no_mangle]
    pub unsafe extern fn rsvg_path_builder_destroy (raw_builder: *mut RsvgPathBuilder) {    // 1
        assert! (!raw_builder.is_null ());                                                  // 2
    
        let _ = Box::from_raw (raw_builder);                                                // 3
    }

    1. Same as before; we declare the whole function as public, exported, and unsafe since it takes a pointer from who-knows-where.

    2. Same as in the implementation for move_to(), we assert that we got passed a non-null pointer.

    3. Let's take this bit by bit. "Box::from_raw (raw_builder)" is the counterpart to Box::into_raw() from above; it takes a pointer and wraps it with a Box, which Rust knows how to de-reference into the actual object it contains. "let _ =" is to have a variable binding in the current scope (the function we are implementing). We don't care about the variable's name, so we use _ as a default name. The variable is now bound to a reference to an RsvgPathBuilder. The function terminates, and since the _ variable goes out of scope, Rust frees the memory for the RsvgPathBuilder. You can read this idiom as "g_free (builder)".

    Recapitulating

    Make your object. Box it. Take a pointer to it with Box::into_raw(), and send it off into the wild west. Bring back a pointer to your object. Unbox it with Box::from_raw(). Let it go out of scope if you want the object to be freed. Acknowledge your responsibilities with unsafe and that's all!
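    Here is that whole lifecycle condensed into a self-contained sketch, using a toy Builder struct in place of RsvgPathBuilder:

    ```rust
    struct Builder {
        count: i32,
    }

    // Like rsvg_path_builder_new(): heap-allocate, then hand a raw
    // pointer to the wild west.
    fn builder_new() -> *mut Builder {
        Box::into_raw(Box::new(Builder { count: 0 }))
    }

    // Like rsvg_path_builder_destroy(): re-box the pointer so Rust
    // frees the memory when the binding goes out of scope.
    unsafe fn builder_destroy(raw: *mut Builder) {
        assert!(!raw.is_null());
        let _ = Box::from_raw(raw); // dropped (freed) at end of scope
    }

    fn main() {
        let raw = builder_new();
        unsafe {
            (*raw).count += 1; // what the C-callable methods do internally
            assert_eq!((*raw).count, 1);
            builder_destroy(raw);
        }
    }
    ```
    
    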

    Making the functions visible to C

    The code we just saw lives in path-builder.rs. By convention, the place where one actually exports the visible API from a Rust library is a file called lib.rs, and here is part of that file's contents in librsvg:

    pub use path_builder::{
        rsvg_path_builder_new,
        rsvg_path_builder_destroy,
        rsvg_path_builder_move_to,
        rsvg_path_builder_line_to,
        rsvg_path_builder_curve_to,
        rsvg_path_builder_close_path,
        rsvg_path_builder_arc,
        rsvg_path_builder_add_to_cairo_context
    };
    
    mod path_builder; 

    The mod path_builder indicates that lib.rs will use the path_builder sub-module. The pub use block exports the functions listed in it to the outside world. They will be visible as symbols in the .a file.

    The Cargo.toml (akin to a toplevel Makefile.am) for my librsvg's little sub-library has this bit:

    [lib]
    name = "rsvg_internals"
    crate-type = ["staticlib"]

    This means that the sub-library will be called librsvg_internals.a, and it is a static library. I will link that into my master librsvg.so. If this were a stand-alone shared library entirely implemented in Rust, I would use the "cdylib" crate type instead.

    Linking into the main .so

    In librsvg/Makefile.am I have a very simplistic scheme for building the librsvg_internals.a library with Rust's tools, and linking the result into the main librsvg.so:

    RUST_LIB = rust/target/debug/librsvg_internals.a
    
    .PHONY: rust/target/debug/librsvg_internals.a
    rust/target/debug/librsvg_internals.a:
    	cd rust && \
    	cargo build --verbose
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_CPPFLAGS = ...
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_CFLAGS = ...
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_LDFLAGS = ...
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_LIBADD = \
    	$(LIBRSVG_LIBS) 	\
    	$(LIBM)			\
    	$(RUST_LIB)

    This uses a .PHONY target for librsvg_internals.a, so "cargo build" will always be called on it. Cargo already takes care of dependency tracking; there is no need for make/automake to do that.

    I put the filename of my library in a RUST_LIB variable, which I then reference from LIBADD. This gets librsvg_internals.a linked into the final librsvg.so.

    When you run "cargo build" just like that, it creates a debug build in a target/debug subdirectory. I haven't looked for a way to make it play together with Automake when one calls "cargo build --release": that one puts things in a different directory, called target/release. Rust's tooling is more integrated that way, while in the Autotools world I'm expected to pass any CFLAGS for compilation by hand, depending on whether I'm doing a debug build or a release build. Any ideas for how to do this cleanly are appreciated.
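    For what it's worth, here is one possible (untested) sketch of such a scheme: an Automake conditional, with a made-up name that configure would have to define, switching both the cargo flags and the target subdirectory together:

    ```make
    # Hypothetical: assumes configure defines a RUST_RELEASE conditional.
    if RUST_RELEASE
    CARGO_RELEASE_ARGS = --release
    RUST_TARGET_SUBDIR = release
    else
    CARGO_RELEASE_ARGS =
    RUST_TARGET_SUBDIR = debug
    endif

    RUST_LIB = rust/target/$(RUST_TARGET_SUBDIR)/librsvg_internals.a

    .PHONY: $(RUST_LIB)
    $(RUST_LIB):
    	cd rust && cargo build --verbose $(CARGO_RELEASE_ARGS)
    ```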

    I don't have any code in configure.ac to actually detect if Rust is present. I'm just assuming that it is for now; fixes are appreciated :)

    Using the Rust functions from C

    There is no difference from what we had before! This comes from rsvg-shapes.c:

    static RsvgPathBuilder *
    _rsvg_node_poly_create_builder (const char *value,
                                    gboolean close_path)
    {
        RsvgPathBuilder *builder;
    
        ...
    
        builder = rsvg_path_builder_new ();
    
        rsvg_path_builder_move_to (builder, pointlist[0], pointlist[1]);
    
        ...
    
        return builder;
    }

    Note that we are calling rsvg_path_builder_new() and rsvg_path_builder_move_to(), and returning a pointer to an RsvgPathBuilder structure as usual. However, all of those are implemented in the Rust code. The C code has no idea!

    This is the magic of Rust: it allows you to move your C code bit by bit into a safe language. You don't have to do a whole rewrite in a single step. I don't know any other languages that let you do that.

November 08, 2016

November 06, 2016

darktable 2.2.0rc0 released

we’re proud to announce the first release candidate for the upcoming 2.2 series of darktable, 2.2.0rc0!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc0.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

a084ef367b1a1b189ad11a6300f7e0cadb36354d11bf0368de7048c6a0732229 darktable-2.2.0~rc0.tar.xz

and the changelog as compared to 2.0.0 can be found below.

  • Well over 2 thousand commits since 2.0.0

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when, for example, running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes. That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow to import/export tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in “more modules” so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Support the Exif date and time when importing photos from camera
  • Rudimentary CYGM and RGBE color filter array support
  • The preview pipe now runs the demosaic module too; its input is no longer pre-demosaiced, just downscaled without demosaicing
  • Nicer web gallery exporter – now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don’t need it.
  • Hotpixels module: make it actually work for X-Trans

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords then put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar Xmp schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even Debian/stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.

A special note to all the darktable Fedora users: Fedora-provided darktable packages are intentionally built with Lua disabled. Thus, Lua scripting will not work. This breaks e.g. darktable-gimp integration. Please bug Fedora. In the meantime you could fix that by self-compiling darktable (pass -DDONT_USE_INTERNAL_LUA=OFF to cmake in order to enable use of the bundled Lua 5.2.4).

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCE-6300

White Balance Presets

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus E-PL6
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-S2
  • Ricoh GR
  • Sony DSC-RX10
  • Sony SLT-A37

New Translations

  • Hebrew
  • Slovenian

Updated Translations

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Russian
  • Slovak
  • Spanish
  • Swedish

Stellarium 0.12.7 discussion

Thank you Alexander! This will keep a few old computers happy...

G.

Stellarium 0.12.7

Stellarium 0.12.7 has been released today!

Yes, the series 0.12 is LTS for owners of old computers (old with weak graphics cards). This release has ports of some features from the series 1.x/0.15:
- textures for deep-sky objects
- star catalogues
- fixes for MPC search tool in Solar System Editor plugin

November 04, 2016

Aligning Images with Hugin


Easily process your bracketed exposures

Hugin is an excellent tool for aligning and stitching images. In this article, we’ll focus on aligning a stack of images, which can be useful for achieving several results, such as:

  • bracketed exposures to make an HDR or fused exposure (using enfuse/enblend), or manually blending the images together in an image editor
  • photographs taken at different focal distances to extend the depth of field, which can be very useful when taking macros
  • photographs taken over a period of time to make a time-lapse movie

For the example images included with this tutorial, the focal length is 12mm and the focal length multiplier is 1. A big thank you to @isaac for providing these images.

You can download a zip file of all of the sample Beach Umbrellas images here:

Download Outdoor_Beach_Umbrella.zip (62MB)

Other sample images to try with this tutorial can be found at the end of the post.

These instructions were adapted from the original forum post by @Carmelo_DrRaw; many thanks to him as well.

We’re going to align these bracketed exposures so we can blend them:

Blend Examples
  1. Select Interface → Expert to set the interface to Expert mode. This will expose all of the options offered by Hugin.

  2. Select the Add images… button to load your bracketed images. Select your images from the file chooser dialog and click Open.

  3. Set the optimal setting for aligning images:

    • Feature Matching Settings: Align image stack
    • Optimize Geometric: Custom parameters
    • Optimize Photometric: Low dynamic range
  4. Select the Optimizer tab.

  5. In the Image Orientation section, select the following variables for each image:

    • Roll
    • X (TrX) [horizontal translation]
    • Y (TrY) [vertical translation]

    You can Ctrl + left mouse click to enable or disable the variables.

    roll x y Hugin

    Note that you do not need to select the parameters for the anchor image:

    Hugin anchor image
  6. Select Optimize now! and wait for the software to finish the calculations. Select Yes to apply the changes.

  7. Select the Stitcher tab.

  8. Select the Calculate Field of View button.

  9. Select the Calculate Optimal Size button.

  10. Select the Fit Crop to Images button.

  11. To have the maximum number of post-processing options, select the following image outputs:

    • Panorama Outputs: Exposure fused from any arrangement
      • Format: TIFF
      • Compression: LZW
    • Panorama Outputs: High dynamic range
      • Format: EXR
    • Remapped Images: No exposure correction, low dynamic range

      Hugin Image Export
  12. Select the Stitch! button and choose a place to save the files. Since Hugin generates quite a few temporary images, save the PTO file in its own folder.

Hugin will output the following images:

  • a tif file blended by enfuse/enblend
  • an HDR image in the EXR format
  • the individual images after remapping and without any exposure correction that you can import into the GIMP as layers and blend manually.

You can see the result of the image blended with enblend/enfuse:

Beach Umbrella Fused

With the output images, you can:

  • edit the enfuse/enblend tif file further in the GIMP or RawTherapee
  • tone map the EXR file in LuminanceHDR
  • manually blend the remapped tif files in the GIMP or PhotoFlow

Image files

  • Camera: Olympus E-M10 mark ii
  • Lens: Samyang 12mm F2.0

Indoor_Guitars

Download Indoor_Guitars.zip (75MB)

  • 5 brackets
  • ±0.3 EV increments
  • f5.6
  • focus at about 1m
  • center priority metering
  • exposed for guitars, bracketed for the sky, outdoor area, and indoor area
  • manual mode (shutter speed recorded in EXIF)
  • shot in burst mode, handheld

Outdoor_Beach_Umbrella

Download Outdoor_Beach_Umbrella.zip (62MB)

  • 3 brackets
  • ±1 EV increments
  • f11
  • focus at infinity
  • center priority metering
  • exposed for the water, bracketed for umbrella and sky
  • manual mode (shutter speed recorded in EXIF)
  • shot in burst mode, handheld

Outdoor_Sunset_Over_Ocean

Download Outdoor_Sunset_Over_Ocean.zip (60MB)

  • 3 brackets
  • ±1 EV increments
  • f11
  • focus at infinity
  • center priority metering
  • exposed for the darker clouds, bracketed for darker water and lighter sky areas and sun
  • manual mode (shutter speed recorded in EXIF)
  • shot in burst mode, handheld

Licensing Information

November 03, 2016

Thu 2016/Nov/03

  • Refactoring C to make Rustification easier

    In SVG, the sizes and positions of objects are not just numeric values or pixel coordinates. You can actually specify physical units ("this rectangle is 5 cm wide"), or units relative to the page ("this circle's X position is at 50% of the page's width, i.e. centered"). Librsvg's machinery for dealing with this is in two parts: parsing a length string from an SVG file into an RsvgLength structure, and normalizing those lengths to final units for rendering.

    How RsvgLength is represented

    The RsvgLength structure used to look like this:

    typedef struct {
        double length;
        char factor;
    } RsvgLength;

    The parsing code would then do things like

    RsvgLength
    _rsvg_css_parse_length (const char *str)
    {
        RsvgLength out;
    
        out.length = ...; /* parse a number with strtod() and friends */
    
        if (next_token_is ("pt")) { /* points */
            out.length /= 72;
            out.factor = 'i';
        } else if (next_token_is ("in")) { /* inches */
            out.factor = 'i';
        } else if (next_token_is ("em")) { /* current font's Em size */
            out.factor = 'm';
        } else if (next_token_is ("%")) { /* percent */
            out.factor = 'p';
        } else {
            out.factor = '\0';
        }

        return out;
    }

    That is, it uses a char for the length.factor field, and then uses actual characters to indicate each different type. This is pretty horrible, so I changed it to use an enum:

    typedef enum {
        LENGTH_UNIT_DEFAULT,
        LENGTH_UNIT_PERCENT,
        LENGTH_UNIT_FONT_EM,
        LENGTH_UNIT_FONT_EX,
        LENGTH_UNIT_INCH,
        LENGTH_UNIT_RELATIVE_LARGER,
        LENGTH_UNIT_RELATIVE_SMALLER
    } LengthUnit;
    
    typedef struct {
        double length;
        LengthUnit unit;
    } RsvgLength;

    We have a nice enum instead of chars, but also, the factor field is now renamed to unit. This ensures that code like

    if (length.factor == 'p')
        ...

    will no longer compile, and I can catch all the uses of "factor" easily. I replaced them with unit as appropriate, and checked that simply swapping each char for the corresponding enum value was the right thing to do.

    When would it not be the right thing? I'm just replacing 'p' for LENGTH_UNIT_PERCENT, right? Well, it turns out that in a couple of hacky places in the rsvg-filters code, that code put an 'n' by hand in foo.factor to really mean, "this foo length value was not specified in the SVG data".

    That pattern seemed highly specific to the filters code, so instead of adding an extra LENGTH_UNIT_UNSPECIFIED, I added an extra field to the FilterPrimitive structures: when they used 'n' for primitive.foo.factor, instead they now have a primitive.foo_specified boolean flag, and the code checks for that instead of essentially monkey-patching the RsvgLength structure.

    Normalizing lengths for rendering

    At rendering time, these RsvgLength with their SVG-specific units need to be normalized to units that are relative to the current transformation matrix. There is a function used all over the code, called _rsvg_css_normalize_length(). This function gets called in an interesting way: one has to specify whether the length in question refers to a horizontal measure, or vertical, or both. For example, an RsvgNodeRect represents a rectangle shape, and it has x/y/w/h fields that are of type RsvgLength. When librsvg is rendering such an RsvgNodeRect, it does this:

    static void
    _rsvg_node_rect_draw (RsvgNodeRect *rect, RsvgDrawingCtx *ctx)
    {
        double x, y, w, h;
    
        x = _rsvg_css_normalize_length (&rect->x, ctx, 'h');
        y = _rsvg_css_normalize_length (&rect->y, ctx, 'v');
    
        w = fabs (_rsvg_css_normalize_length (&rect->w, ctx, 'h'));
        h = fabs (_rsvg_css_normalize_length (&rect->h, ctx, 'v'));
    
        ...
    }

    Again with the fucking chars. Those 'h' and 'v' parameters are because lengths in SVG need to be resolved relative to the width or the height (or both) of something. Sometimes that "something" is the size of the current object's parent group; sometimes it is the size of the whole page; sometimes it is the current font size. The _rsvg_css_normalize_length() function sees if it is dealing with a LENGTH_UNIT_PERCENT, for example, and will pick up page_size->width if the requested value is 'h'orizontal, or page_size->height if it is 'v'ertical. Of course I replaced all of those with an enum.
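    The direction values became an enum as well. Here is a self-contained sketch of the idea; the LENGTH_DIR_* names match the calls shown below, but the normalize_percent() helper is a made-up simplification, not librsvg's actual normalization function:

    ```c
    #include <assert.h>

    /* Direction enum implied by the LENGTH_DIR_* values used in the calls
     * below; LENGTH_DIR_BOTH is an assumption for the "both" case. */
    typedef enum {
        LENGTH_DIR_HORIZONTAL,
        LENGTH_DIR_VERTICAL,
        LENGTH_DIR_BOTH
    } LengthDir;

    /* Toy illustration of how the direction selects the reference
     * dimension when resolving a percentage (hypothetical, simplified). */
    static double
    normalize_percent (double percent, double page_w, double page_h, LengthDir dir)
    {
        double ref = (dir == LENGTH_DIR_HORIZONTAL) ? page_w : page_h;
        return percent / 100.0 * ref;
    }

    int
    main (void)
    {
        assert (normalize_percent (50.0, 800.0, 600.0, LENGTH_DIR_HORIZONTAL) == 400.0);
        assert (normalize_percent (50.0, 800.0, 600.0, LENGTH_DIR_VERTICAL) == 300.0);
        return 0;
    }
    ```
    
    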

    This time I didn't find hacky code like the one that would stick an 'n' in the length.factor field. Instead, I found an actual bug; a horizontal unit was using 'w' for "width", instead of 'h' for "horizontal". If these had been enums since the beginning, this bug would probably not be there.

    While I appreciate the terseness of 'h' instead of LENGTH_DIR_HORIZONTAL, maybe we can later refactor groups of coordinates into commonly-used patterns. For example, instead of

    patternx = _rsvg_css_normalize_length (&rsvg_pattern->x, ctx, LENGTH_DIR_HORIZONTAL);
    patterny = _rsvg_css_normalize_length (&rsvg_pattern->y, ctx, LENGTH_DIR_VERTICAL);
    patternw = _rsvg_css_normalize_length (&rsvg_pattern->width, ctx, LENGTH_DIR_HORIZONTAL);
    patternh = _rsvg_css_normalize_length (&rsvg_pattern->height, ctx, LENGTH_DIR_VERTICAL);

    perhaps we can have

    normalize_lengths_for_x_y_w_h (ctx,
                                   &rsvg_pattern->x,
                                   &rsvg_pattern->y,
                                   &rsvg_pattern->width,
                                   &rsvg_pattern->height);

    since those x/y/width/height groups get used all over the place.

    And in Rust?

    This is all so that the code will be easier to port to Rust. Librsvg is old code, and it has a bunch of C-isms that either don't translate well to Rust, or are kind of horrible by themselves and could be turned into more robust C, which makes the corresponding rustification obvious.

Searching in GNOME Software

I’ve spent a few days profiling GNOME Software on ARM, mostly out of curiosity but also to help our friends at Endless. I’ve merged a few patches that make the existing --profile code more useful for profiling startup speed. There have already been some big gains, over 200 ms of startup time and 12 MB of RSS, but there’s plenty more we want to fix to make GNOME Software run really nicely on resource-constrained devices.

One of the biggest delays is constructing the search token cache at startup. This is where we look at all the fields of the .desktop files, the AppData files and the AppStream files and split them in a UTF8-sane way into search tokens, adding them into a big hash table after stemming them. We do it with 4 threads by default as it’s trivially parallelizable. With the search cache, when we search we just ask all the applications in the store “do you have this search term” and if so it gets added to the search results and ordered according to how good the match is. This takes 225ms on my super-fast Intel laptop (and much longer on ARM), and this happens automatically the very first time you search for anything in GNOME Software.

At the moment we add (for each locale, including fallbacks) the package name, the app ID, the app name, the app's single-line description, the app keywords and the application's long description. The latter is the multi-paragraph description that’s typically prose. We spend 90% of the token-cache loading time just splitting and adding the words in the description. As the description is prose, we have to ignore quite a few words: e.g. “and”, “the”, “is” and “can” are some of the most frequent, useless ones. Given the nature of the text itself (long, non-technical prose), it doesn’t actually add many useful keywords to the search cache, and the ones it does add are treated with such low priority that other, more important matches are ordered before them.
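The kind of splitting and filtering described above can be sketched like this (a rough illustration, not the actual GNOME Software tokenizer, which is C and also does stemming):

```rust
// Split prose on non-alphanumeric characters, lowercase each word, and
// drop very short words and frequent, useless stopwords.
use std::collections::HashSet;

fn tokenize(text: &str, stopwords: &HashSet<&str>) -> Vec<String> {
    text.split(|c: char| !c.is_alphanumeric())
        .map(|word| word.to_lowercase())
        .filter(|word| word.len() > 2 && !stopwords.contains(word.as_str()))
        .collect()
}
```

Even with the stopword filter, long prose yields mostly low-value tokens, which is why dropping the description from the cache is attractive.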

My proposal: continue to consume everything else for the search cache, but drop the description. This means we start way quicker and use less memory, but it does require that upstreams actually add some [localized] Keywords=foo;bar;baz in the desktop file or <keywords> in the AppData file. At the moment most do, especially after I sent ~160 emails to maintainers whose applications didn’t have any keywords defined in the Fedora 25 Alpha, so I think it’s fairly safe at this point. Comments?

November 02, 2016

Casa Natureza

USA, 2016. In the design stage. A modernist house in the great Brazilian tradition of the modernist-house ideal, which...

The Royal Photographic Society Journal


Who let us in here?

The Journal of the Photographic Society is the journal of one of the oldest photographic societies in the world: the Royal Photographic Society. First published in 1853, the RPS Journal is the oldest photographic periodical in the world (just edging out the British Journal of Photography by about a year).

So you can imagine my doubt when confronted with an email about using some material from pixls.us for their latest issue…


If the name sounds familiar, it may be from a recent post by Joe McNally, who is featured prominently in the September 2016 issue. He was also just inducted as a fellow into the society!

RPS Journal 2016-09 Cover

It turns out my initial doubts were completely unfounded, and they really wanted to run a page based off one of our tutorials. The editors liked the Open Source Portrait tutorial. In particular, the section on using Wavelet Decompose to touch up the skin tones:

RPS Journal 2016-11 PD Yay Mairi!

How cool is that? I actually searched the archive, and the only other mention I can find of GIMP (or any other F/OSS) is from a “Step By Step” article written by Peter Gawthrop (Vol. 149, February 2009). I think it’s pretty awesome that we can gain a little more exposure for Free Software alternatives, especially in more mainstream publications and to a broader audience!

November 01, 2016

Tue 2016/Nov/01

  • Bézier curves, markers, and SVG's concept of directionality

    SVG reference image with markers

    In the first post in this series I introduced SVG markers, which let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

    In that post and in the second one, I started porting some of the code in librsvg that renders SVG markers from C to Rust. So far I've focused on the code and how it looks in Rust vs. C, and on some initial refactorings to make it feel more Rusty. I have casually mentioned Bézier segments and their tangents, and you may have an idea that SVG paths are composed of Bézier curves and straight lines, but I haven't explained what this code is really about. Why not simply walk over all the nodes in the path, and slap a marker at each one?

    Aragorn does not simply walk a degenerate path

    (Sorry. Couldn't resist.)

    SVG paths

    If you open an illustration program like Inkscape, you can draw paths based on Bézier curves.

    Path of Bézier segments, nodes, and control points

    Each segment is a cubic Bézier curve and can be considered independently. Let's focus on the middle segment there.

    Single Bézier segment with control points

    At each endpoint, the tangent direction of the curve is determined by the corresponding control point. For example, at endpoint 1 the curve goes out in the direction of control point 2, and at endpoint 4 the curve comes in from the direction of control point 3. The further away the control points are from the endpoints, the larger "pull" they will have on the curve.

    Tangents at the endpoints

    Let's consider the tangent direction of the curve at the endpoints. What cases do we have, especially when some of the control points are in the same place as the endpoints?

    Directions at the endpoints of Bézier segments

    When the endpoints and the control points are all in different places (upper-left case), the tangents are easy to compute. We just subtract the vectors P2-P1 and P4-P3, respectively.

    When just one of the control points coincides with one of the endpoints (second and third cases, upper row), the "missing" tangent just goes to the other control point.

    In the middle row, we have the cases where both endpoints are coincident. If the control points are both in different places, we just have a curve that loops back. If just one of the control points coincides with the endpoints, the "curve" turns into a line that loops back, and its direction is towards the stray control point.

    Finally, if both endpoints and both control points are in the same place, the curve is just a degenerate point, and it has no tangent directions.

    Here we only care about the direction of the curve at the endpoints; we don't care about the magnitude of the tangent vectors. As a side note, Bézier curves have the nice property that they fit completely inside the convex hull of their control points: if you draw a non-crossing quadrilateral using the control points, then the curve fits completely inside that quadrilateral.

    Convex hulls of Bézier segments
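    The case analysis above can be sketched in code. This is only an illustration of the fallback order, not librsvg's implementation: the outgoing tangent at the start of a cubic segment (p1, p2, p3, p4) points toward the first point that does not coincide with p1.

```rust
// Outgoing tangent direction at p1 of a cubic Bézier segment.
// Falls back across coincident points; a fully degenerate segment
// (all four points equal) has no tangent at all.
fn outgoing_tangent(p1: (f64, f64), p2: (f64, f64), p3: (f64, f64), p4: (f64, f64))
                    -> Option<(f64, f64)> {
    for p in &[p2, p3, p4] {
        let (vx, vy) = (p.0 - p1.0, p.1 - p1.1);
        if vx != 0.0 || vy != 0.0 {
            return Some((vx, vy)); // direction toward the first non-coincident point
        }
    }
    None // all four points coincide: a degenerate point
}
```

    The incoming tangent at p4 is symmetric, falling back through p3, p2, p1.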

    How SVG represents paths

    SVG uses a representation for paths that is similar to that of PDF and its precursor, the PostScript language for printers. There is a pen with a current point. The pen can move in a line or in a curve to another point while drawing, or it can lift up and move to another point without drawing.

    To create a path, you specify commands. These are the four basic commands:

    • move_to (x, y) - Change the pen's current point without drawing, and begin a new subpath.
    • line_to (x, y) - Draw a straight line from the current point to another point.
    • curve_to (x2, y2, x3, y3, x4, y4) - Draw a Bézier curve from the current point to (x4, y4), with the control points (x2, y2) and (x3, y3).
    • close_path - Draw a line from the current point back to the beginning of the current subpath (i.e. the position of the last move_to command).

    For example, this sequence of commands draws a closed square path:

    move_to (0, 0)
    line_to (10, 0)
    line_to (10, 10)
    line_to (0, 10)
    close_path

    If we had omitted the close_path, we would have an open C shape.

    SVG paths provide secondary commands that are built upon those basic ones: commands to draw horizontal or vertical lines without specifying both coordinates, commands to draw quadratic curves instead of cubic ones, and commands to draw elliptical or circular arcs. All of these can be built from, or approximated from, straight lines or cubic Bézier curves.

    Let's say you have a path with two disconnected sections: move_to (0, 0), line_to (10, 0), line_to (10, 10), move_to (20, 20), line_to (30, 20).

    Bézier path with two open subpaths

    These two sections are called subpaths. A subpath begins with a move_to command. If there were a close_path command somewhere, it would draw a line from the current point back to where the current subpath started, i.e. to the location of the last move_to command.

    Markers at nodes

    Repeating ourselves a bit: for each path, SVG lets you define markers. A marker is a symbol that can be automatically placed at each node along a path. For example, here is a path composed of line_to segments, and which has an arrow-shaped marker at each node:

    Bézier path with markers

    Here, the arrow-shaped marker is defined to be orientable. Its anchor point is at the V shaped concavity of the arrow. SVG specifies the angle at which orientable markers should be placed: given a node, the angle of its marker is the average of the incoming and outgoing angles of the path segments that meet at that node. For example, at node 5 above, the incoming line comes in at 0° (Eastwards) and the outgoing line goes out at 90° (Southwards) — so the arrow marker at 5 is rotated so it points at 45° (South-East).
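    That bisection can be computed by normalizing the incoming and outgoing direction vectors, adding them, and taking the angle of the sum (a sketch, not librsvg's code; remember that SVG's y axis points down, so East in plus South out gives 45°):

```rust
// Marker angle at a node: the bisection of the incoming and outgoing
// directions, via the sum of their unit vectors.
fn marker_angle(incoming: (f64, f64), outgoing: (f64, f64)) -> f64 {
    let ilen = (incoming.0 * incoming.0 + incoming.1 * incoming.1).sqrt();
    let olen = (outgoing.0 * outgoing.0 + outgoing.1 * outgoing.1).sqrt();
    let sum_x = incoming.0 / ilen + outgoing.0 / olen;
    let sum_y = incoming.1 / ilen + outgoing.1 / olen;
    sum_y.atan2(sum_x) // angle in radians
}
```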

    In the following picture we see the angle of each marker as the bisection of the incoming and outgoing angles of the respective nodes:

    Bézier path with markers and directions

    The nodes at the beginning and end of a subpath have only one segment that meets them. So, the marker uses that segment's angle. For example, at node 6 the only incoming segment goes Southward, so the marker points South.

    Converting paths into Segments

    The path above is simple to define. The path definition is

    move_to (1)
    line_to (2)
    line_to (3)
    line_to (4)
    line_to (5)
    line_to (6)

    (Imagine that instead of those numbers, which are just for illustration purposes, we include actual x/y coordinates.)

    When librsvg turns that path into Segments, they more or less look like

    line from 1, outgoing angle East,       to 2, incoming angle East
    line from 2, outgoing angle South-East, to 3, incoming angle South-East
    line from 3, outgoing angle North-East, to 4, incoming angle North-East
    line from 4, outgoing angle East,       to 5, incoming angle East
    line from 5, outgoing angle South,      to 6, incoming angle South

    Obviously, straight line segments (i.e. from a line_to) have the same angles at the start and the end of each segment. In contrast, curve_to segments can have different tangent angles at each end. For example, if we had a single curved segment like this:

    move_to (1)
    curve_to (2, 3, 4)

    Bézier curve with directions

    Then the corresponding single Segment would look like this:

    curve from 1, outgoing angle North, to 4, incoming angle South-East

    Now you know what librsvg's function path_to_segments() does! It turns a sequence of move_to / line_to / curve_to commands into a sequence of segments, each one with angles at the start/end nodes of the segment.
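    A heavily simplified sketch of that conversion, with hypothetical types and only move_to/line_to commands (the real function also handles curves and their tangent angles):

```rust
// Every line_to becomes one segment from the pen's current point to the
// line's endpoint; move_to just relocates the pen without drawing.
#[derive(Debug, PartialEq)]
struct LineSegment {
    x1: f64, y1: f64,
    x2: f64, y2: f64,
}

enum Command {
    MoveTo(f64, f64),
    LineTo(f64, f64),
}

fn path_to_segments(commands: &[Command]) -> Vec<LineSegment> {
    let mut segments = Vec::new();
    let (mut cx, mut cy) = (0.0, 0.0); // the pen's current point
    for cmd in commands {
        match *cmd {
            Command::MoveTo(x, y) => { cx = x; cy = y; }
            Command::LineTo(x, y) => {
                segments.push(LineSegment { x1: cx, y1: cy, x2: x, y2: y });
                cx = x;
                cy = y;
            }
        }
    }
    segments
}
```

    For a straight line, the angle at both ends is just the direction of (x2 - x1, y2 - y1).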

    Paths with zero-length segments

    Let's go back to our path made up of line segments, the one that looks like this:

    Bézier path with markers

    However, imagine that for some reason the path contains duplicated, contiguous nodes. If we specified the path as

    move_to (1)
    line_to (2)
    line_to (3)
    line_to (3)
    line_to (3)
    line_to (3)
    line_to (4)
    line_to (5)
    line_to (6)

    Then our rendered path would look the same, with duplicated nodes at 3:

    Bézier path with duplicated nodes

    But now when librsvg turns that into Segments, they would look like

      line from 1, outgoing angle East,       to 2, incoming angle East
      line from 2, outgoing angle South-East, to 3, incoming angle South-East
      line from 3, to 3, no angles since this is a zero-length segment
    * line from 3, to 3, no angles since this is a zero-length segment
      line from 3, outgoing angle North-East, to 4, incoming angle North-East
      line from 4, outgoing angle East,       to 5, incoming angle East
      line from 5, outgoing angle South,      to 6, incoming angle South

    When librsvg has to draw the markers for this path, it has to compute the marker's angle at each node. However, in the starting node for the segment marked with a (*) above, there is no angle! In this case, the SVG spec says that you have to walk the path backwards until you find a segment which has an angle, and then forwards until you find another segment with an angle, and then take their average angles and use them for the (*) node. Visually this makes sense: you don't see where there are contiguous duplicated nodes, but you certainly see lines coming out of that vertex. The algorithm finds those lines and takes their average angles for the marker.
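    With each segment reduced to an optional angle (None for the zero-length ones), that search can be sketched like this. It is only an illustration: it averages raw angles for brevity, while the real rule bisects the direction vectors as described earlier.

```rust
// Walk backwards from node i for the nearest defined incoming angle,
// forwards for the nearest defined outgoing one, then average them.
fn marker_angle_at(angles: &[Option<f64>], i: usize) -> Option<f64> {
    let incoming = angles[..i + 1].iter().rev().find_map(|a| *a);
    let outgoing = angles[i..].iter().find_map(|a| *a);
    match (incoming, outgoing) {
        (Some(a), Some(b)) => Some((a + b) / 2.0),
        (Some(a), None) | (None, Some(a)) => Some(a),
        (None, None) => None, // nothing but zero-length segments around
    }
}
```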

    Now you know where our exotic names find_incoming_directionality_backwards() and find_outgoing_directionality_forwards() come from!

    Next up: refactoring C to make Rustification easier.

October 31, 2016

Flatpak cross-compilation support: Epilogue

You might remember my attempts at getting easy-to-use cross-compilation of ARM applications on my x86-64 desktop machine.

With Fedora 25 approaching, I'm happy to say that the necessary changes to integrate the feature have now rolled into it.

For example, to compile the GNU Hello Flatpak for ARM, you would run:

$ flatpak install gnome org.freedesktop.Platform/arm org.freedesktop.Sdk/arm
Installing: org.freedesktop.Platform/arm/1.4 from gnome
[...]
$ sudo dnf install -y qemu-user-static
[...]
$ TARGET=arm ./build.sh

For other applications, add the --arch=arm argument to the flatpak-builder command-line.

This example also works for 64-bit ARM with the architecture name aarch64.

October 28, 2016

Fri 2016/Oct/28

  • Porting a few C functions to Rust

    Last time I showed you my beginnings of porting parts of Librsvg to Rust. In this post I'll do an annotated porting of a few functions.

    Disclaimers: I'm learning Rust as I go. I don't know all the borrowing/lending rules; "Rust means never having to close a socket" is a very enlightening article, although it doesn't tell the whole story. I don't know the Rust idioms that would make my code prettier. I am trying to refactor things to be prettier after the initial C-to-Rust pass. If you know an idiom that would be useful, please mail me!

    So, let's continue with the code to render SVG markers, as before. I'll start with this function:

    /* In C */
    
    static gboolean
    points_equal (double x1, double y1, double x2, double y2)
    {
        return DOUBLE_EQUALS (x1, x2) && DOUBLE_EQUALS (y1, y2);
    }

    I know that Rust supports tuples, and pretty structs, and everything. But so far, the refactoring I've done hasn't led me to really want to use them for this particular part of the library. Maybe later! Anyway, this translates easily to Rust; I already had a function called double_equals() from the last time. The result is as follows:

    /* In Rust */
    
    fn points_equal (x1: f64, y1: f64, x2: f64, y2: f64) -> bool {
        double_equals (x1, x2) && double_equals (y1, y2)
    }

    Pro-tip: text editor macros work very well for shuffling around the "double x1" into "x1: f64" :)

    Remove the return and the semicolon at the end of the line so that the function returns the value of the && expression. I could leave the return in there, but not having it is more Rusty, perhaps. (Rust also has a return keyword, which I think they keep around to allow early exits from functions.)

    This function doesn't get used yet, so the existing tests don't catch it. The first time I ran the Rust compiler on it, it complained of a type mismatch: I had put f64 instead of bool for the return type, which is of course wrong. Oops. Fix it, test again that it builds, done.

    Okay, next!

    But first, a note about how the original Segment struct in C evolved after refactoring in Rust.

    Original in C:

    typedef struct {
        /* If is_degenerate is true,
         * only (p1x, p1y) are valid.
         * If false, all are valid.
         */
        gboolean is_degenerate;
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;
    } Segment;

    Straight port to Rust:

    struct Segment {
        /* If is_degenerate is true,
         * only (p1x, p1y) are valid.
         * If false, all are valid.
         */
        is_degenerate: bool,
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    After refactoring:

    pub enum Segment {
        Degenerate { // A single lone point
            x: f64,
            y: f64
        },

        LineOrCurve {
            x1: f64, y1: f64,
            x2: f64, y2: f64,
            x3: f64, y3: f64,
            x4: f64, y4: f64
        },
    }

    In the C version, and in the original Rust version, I had to be careful to only access the x1/y1 fields if is_degenerate==true. Rust has a very convenient "enum" type, which can work pretty much as a normal C enum, or as a tagged union, as shown here. Rust will not let you access fields that don't correspond to the current tag value of the enum. (I'm not sure if "tag value" is the right way to call it — in any case, if a segment is Segment::Degenerate, the compiler only lets you access the x/y fields; if it is Segment::LineOrCurve, it only lets you access x1/y1/x2/y2/etc.) We'll see the match statement below, which is how enum access is done.

    Next!

    Original in C:

    /* A segment is zero length if it is degenerate, or if all four control points
     * coincide (the first and last control points may coincide, but the others may
     * define a loop - thus nonzero length)
     */
    static gboolean
    is_zero_length_segment (Segment *segment)
    {
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;

        if (segment->is_degenerate)
            return TRUE;

        p1x = segment->p1x;
        p1y = segment->p1y;

        p2x = segment->p2x;
        p2y = segment->p2y;

        p3x = segment->p3x;
        p3y = segment->p3y;

        p4x = segment->p4x;
        p4y = segment->p4y;

        return (points_equal (p1x, p1y, p2x, p2y)
                && points_equal (p1x, p1y, p3x, p3y)
                && points_equal (p1x, p1y, p4x, p4y));
    }

    Straight port to Rust:

    /* A segment is zero length if it is degenerate, or if all four control points
     * coincide (the first and last control points may coincide, but the others may
     * define a loop - thus nonzero length)
     */
    fn is_zero_length_segment (segment: Segment) -> bool {
        match segment {
            Segment::Degenerate { .. } => { true },

            Segment::LineOrCurve { x1, y1, x2, y2, x3, y3, x4, y4 } => {
                (points_equal (x1, y1, x2, y2)
                 && points_equal (x1, y1, x3, y3)
                 && points_equal (x1, y1, x4, y4))
            }
        }
    }

    To avoid a lot of "segment->this, segment->that, segment->somethingelse", the C version copies the fields from the struct into temporary variables and calls points_equal() with them. The Rust version doesn't need to do this, since we have a very convenient match statement.

    Rust really wants you to handle all the cases that your enum may be in. You cannot do something like "if segment == Segment::Degenerate", because you may forget an "else if" for some case. Instead, the match statement is much more powerful. It is really a pattern-matching engine, and for enums it lets you consider each case separately. The fields inside each case get unpacked like in "Segment::LineOrCurve { x1, y1, ... }" so you can use them easily, and only within that case. In the Degenerate case, I don't use the x/y fields, so I write "Segment::Degenerate { .. }" to avoid having unused variables.

    I'm sure I'll need to change something in the prototype of this function. The plain "segment: Segment" argument in Rust means that the is_zero_length_segment() function will take ownership of the segment. I'll be passing it from an array, but I don't know what shape that code will take yet, so I'll leave it like this for now and change it later.

    This function could use a little test, couldn't it? Just to guard from messing up the coordinate names later if I decide to refactor it with tuples for points, or something. Fortunately, the tests are really easy to set up in Rust:

        #[test]
        fn degenerate_segment_is_zero_length () {
            assert! (super::is_zero_length_segment (degenerate (1.0, 2.0)));
        }
    
        #[test]
        fn line_segment_is_nonzero_length () {
            assert! (!super::is_zero_length_segment (line (1.0, 2.0, 3.0, 4.0)));
        }
    
        #[test]
        fn line_segment_with_coincident_ends_is_zero_length () {
            assert! (super::is_zero_length_segment (line (1.0, 2.0, 1.0, 2.0)));
        }
    
        #[test]
        fn curves_with_loops_and_coincident_ends_are_nonzero_length () {
            assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 1.0, 2.0)));
            assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 1.0, 2.0)));
            assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 1.0, 2.0)));
        }
    
        #[test]
        fn curve_with_coincident_control_points_is_zero_length () {
            assert! (super::is_zero_length_segment (curve (1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0)));
        }

    The degenerate(), line(), and curve() utility functions are just to create the appropriate Segment::Degenerate { x, y } without so much typing, and to make the tests more legible.

    After running cargo test, all the tests pass. Yay! And we didn't have to fuck around with relinking a version specifically for testing, or messing with making static functions available to tests, like we would have had to do in C. Double yay!

    Next!

    Original in C:

    static gboolean
    find_incoming_directionality_backwards (Segment *segments, int num_segments, int start_index, double *vx, double *vy)
    {
        int j;
        gboolean found;

        /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */

        found = FALSE;

        for (j = start_index; j >= 0; j--) {                                                                 /* 1 */
            if (segments[j].is_degenerate)
                break; /* reached the beginning of the subpath as we ran into a standalone point */
            else {                                                                                           /* 2 */
                if (is_zero_length_segment (&segments[j]))                                                   /* 3 */
                    continue;
                else {
                    found = TRUE;
                    break;
                }
            }
        }

        if (found) {                                                                                         /* 4 */
            g_assert (j >= 0);
            *vx = segments[j].p4x - segments[j].p3x;
            *vy = segments[j].p4y - segments[j].p3y;
            return TRUE;
        } else {
            *vx = 0.0;
            *vy = 0.0;
            return FALSE;
        }
    }

    Straight port to Rust:

    fn find_incoming_directionality_backwards (segments: Vec<Segment>, start_index: usize) -> (bool, f64, f64)
    {
        let mut found: bool;
        let mut vx: f64;
        let mut vy: f64;

        /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */

        found = false;
        vx = 0.0;
        vy = 0.0;

        for j in (0 .. start_index + 1).rev () {                                                            /* 1 */
            match segments[j] {
                Segment::Degenerate { .. } => {
                    break; /* reached the beginning of the subpath as we ran into a standalone point */
                },

                Segment::LineOrCurve { x3, y3, x4, y4, .. } => {                                            /* 2 */
                    if is_zero_length_segment (&segments[j]) {                                              /* 3 */
                        continue;
                    } else {
                        vx = x4 - x3;
                        vy = y4 - y3;
                        found = true;
                        break;
                    }
                }
            }
        }

        if found {                                                                                           /* 4 */
            (true, vx, vy)
        } else {
            (false, 0.0, 0.0)
        }
    }

    In reality this function returns three values: whether a directionality was found, and if so, the vx/vy components of the direction vector. In C the prototype is like "bool myfunc (..., out vx, out vy)": return the boolean conventionally, and get a reference to the place where we should store the other return values. In Rust, it is simple to just return a 3-tuple.

    (Keen-eyed rustaceans will detect a code smell in the bool-plus-extra-crap return value, and tell me that I could use an Option instead. We'll see what the code wants to look like during the final refactoring!)

    With this code, I need temporary variables vx/vy to store the result. I'll refactor it to return immediately without needing temporaries or a found variable.

    1. We are looking backwards in the array of segments, starting at a specific element, until we find one that satisfies a certain condition. Looping backwards in C in the way done here has the peril that your loop variable needs to be signed, even though array indexes are unsigned: j will go from start_index down to -1, but the loop only runs while j >= 0.

    Rust provides a somewhat strange idiom for backwards numeric ranges. A normal range looks like "0 .. n" and that means the half-open range [0, n). So if we want to count from start_index down to 0, inclusive, we need to rev()erse the half-open range [0, start_index + 1), and that whole thing is "(0 .. start_index + 1).rev ()".
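    A quick check of that idiom:

```rust
// The half-open range [0, start_index + 1), reversed, counts from
// start_index down to 0 inclusive.
fn backwards_indices(start_index: usize) -> Vec<usize> {
    (0 .. start_index + 1).rev().collect()
}
```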

    2. Handling the degenerate case is trivial. Handling the other case is a bit more involved in Rust. We compute the vx/vy values here, instead of after the loop has exited, as at that time the j loop counter will be out of scope. This ugliness will go away during refactoring.

    However, note the match pattern "Segment::LineOrCurve { x3, y3, x4, y4, .. }". This means, "I am only interested in the x3/y3/x4/y4 fields of the enum"; the .. indicates to ignore the others.

    3. Note the ampersand in "is_zero_length_segment (&segments[j])". When I first wrote this, I didn't include the & sign, and Rust complained that it couldn't pass segments[j] to the function because the function would take ownership of that value, while in fact the value is owned by the array. I need to declare the function as taking a reference to a segment ("a pointer"), and I need to call the function by actually passing a reference to the segment, with & to take the "address" of the segment like in C. And if you look at the C version, it also says "&segments[j]"! So, the function now looks like this:

    fn is_zero_length_segment (segment: &Segment) -> bool {
        match *segment {
            ...

    Which means, the function takes a reference to a Segment, and when we want to use it, we de-reference it as *segment.

    While my C-oriented brain interprets this as references and dereferencing pointers, Rust wants me to think in the higher-level terms. A function will take ownership of an argument if it is declared like fn foo(x: Bar), and the caller will lose ownership of what it passed in x. If I want the caller to keep owning the value, I can "lend" it to the function by passing a reference to it, not the actual value. And I can make the function "borrow" the value without taking ownership, because references are not owned; they are just pointers to values.

    It turns out that the three chapters of the Rust book that deal with this are very clear and understandable, and I was irrationally scared of reading them. Go through them in order: Ownership, References and borrowing, Lifetimes. I haven't used the lifetime syntax yet, but it lets you solve the problem of dangling pointers inside live structs.

    4. At the end of the function, we build our 3-tuple result and return it.

    And what if we remove the ugliness from a straight C-to-Rust port? It starts looking like this:

    fn find_incoming_directionality_backwards (segments: Vec<Segment>, start_index: usize) -> (bool, f64, f64)
    {
        /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
    
        for j in (0 .. start_index + 1).rev () {
            match segments[j] {
                Segment::Degenerate { .. } => {
                    return (false, 0.0, 0.0); /* reached the beginning of the subpath as we ran into a standalone point */
                },
    
                Segment::LineOrCurve { x3, y3, x4, y4, .. } => {
                    if is_zero_length_segment (&segments[j]) {
                        continue;
                    } else {
                        return (true, x4 - x3, y4 - y3);
                    }
                }
            }
        }
    
        (false, 0.0, 0.0)
    }

    We removed the auxiliary variables by returning early from within the loop. I could remove the continue by negating the result of is_zero_length_segment() and returning the sought value in that case, but in my brain it is easier to read, "is this a zero length segment, i.e. that segment has no directionality? If yes, continue to the previous one, otherwise return the segment's outgoing tangent vector".

    But what if is_zero_length_segment() is the wrong concept? My calling function is called find_incoming_directionality_backwards(): it looks for segments in the array until it finds one with directionality. It happens to know that a zero-length segment has no directionality, but it doesn't really care about the length of segments. What if we called the helper function get_segment_directionality() and it returned false when the segment has none, and a vector otherwise?
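    A sketch of that idea, with the hypothetical name from above. It is simplified: it treats a coincident p3/p4 pair as "no directionality" instead of doing the full zero-length check, just to show the Option-returning shape.

```rust
// None when the segment has no directionality; Some(vector) otherwise.
enum Segment {
    Degenerate { x: f64, y: f64 },
    LineOrCurve {
        x1: f64, y1: f64,
        x2: f64, y2: f64,
        x3: f64, y3: f64,
        x4: f64, y4: f64,
    },
}

fn get_segment_directionality(segment: &Segment) -> Option<(f64, f64)> {
    match *segment {
        Segment::Degenerate { .. } => None,
        Segment::LineOrCurve { x3, y3, x4, y4, .. } => {
            let (vx, vy) = (x4 - x3, y4 - y3);
            if vx == 0.0 && vy == 0.0 {
                None
            } else {
                Some((vx, vy)) // the outgoing tangent vector at the end point
            }
        }
    }
}
```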

    Rust provides the Option pattern just for this. And I'm itching to show you some diagrams of Bézier segments, their convex hulls, and what the goddamn tangents and directionalities actually mean graphically.

    But I have to evaluate Outreachy proposals, and if I keep reading Rust docs and refactoring merrily, I'll never get that done.

    Sorry to leave you in a cliffhanger! More to come soon!