A nice additional benefit of the recent Kernel Page Table Isolation (CONFIG_PAGE_TABLE_ISOLATION) patches (to defend against CVE-2017-5754, the speculative execution “rogue data cache load” or “Meltdown” flaw) is that the userspace page tables visible while running in kernel mode lack the executable bit. As a result, systems without the SMEP CPU feature (anything before Ivy Bridge) get it emulated for “free”.
Here’s a non-SMEP system with PTI disabled (booted with “pti=off”), running the EXEC_USERSPACE LKDTM test:
# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[ 0.000000] Kernel/User page tables isolation: disabled on command line.
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[ 17.883754] lkdtm: Performing direct entry EXEC_USERSPACE
[ 17.885149] lkdtm: attempting ok execution at ffffffff9f6293a0
[ 17.886350] lkdtm: attempting bad execution at 00007f6a2f84d000
No crash! The kernel was happily executing userspace memory.
But with PTI enabled:
# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[ 0.000000] Kernel/User page tables isolation: enabled
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
Killed
# dmesg
[ 33.657695] lkdtm: Performing direct entry EXEC_USERSPACE
[ 33.658800] lkdtm: attempting ok execution at ffffffff926293a0
[ 33.660110] lkdtm: attempting bad execution at 00007f7c64546000
[ 33.661301] BUG: unable to handle kernel paging request at 00007f7c64546000
[ 33.662554] IP: 0x7f7c64546000
...
It should only take a little more work to leave the userspace page tables entirely unmapped while in kernel mode, and only map them in during copy_to_user()/copy_from_user() as ARM already does with ARM64_SW_TTBR0_PAN (or CONFIG_CPU_SW_DOMAIN_PAN on arm32).
The blender.org project is very lucky in attracting talent – great developers working together with fantastic artists. It’s how Blender manages to stand out as a successful and highly functional free & open software project. In this post I want to thank everyone for a wonderful Blender year and give a preview of all the exciting things that are going to happen in 2018! (Fingers crossed :)
Eevee
In 2016 it was just an idea: an interactive viewport in Blender with rendering quality at PBR levels. Last year this project took off in ways beyond expectation – everyone should have seen the demos by now.
Early in 2018 animation support will come back (with support for modifiers), with OpenSubdiv support (GPU-based adaptive subdivision surfaces) as a highlight.
Blender is an original innovator in this area – providing a fully functional 2D animation tool in a 3D environment. You have to see it to believe it – it’s a mind-blowing workflow for animators and story artists.
In Q1 of 2018 the short film “Hero” will be finished as proof-of-concept for the new workflow and tools of Grease Pencil in 2.8x.
Optimizing and organizing one’s working environment can significantly improve the workflow in 3D applications. We can’t make everyone happy with a single Blender configuration anymore. This is where the new Workspaces and Application Templates come in. In Q1 and Q2 of 2018 the first prototypes for radically configured simple Blenders are going to be made (a.k.a. the Blender 101 project).
Meanwhile work continues on usability and configurations in a daily production environment. Blender’s latest Open Movie “Spring” is going to be used for this.
Blender 2.8x is also getting a completely new layer system, allowing you to organize your scenes in advanced ways. A Scene can have an unlimited number of layers (= drawings or renders), unlimited collections, and per-collection render settings and overrides.
No, there are no pictures yet! But one of the cool things about releasing a massive update is also updating the looks. Nothing radical, just to make it look fresh and to match contemporary desktop environments. We’re still using the (great) design from 2009-2010. In computer years, that’s a century ago! Work on this should start in Q1 and get finalized before Q2 ends. Contributions welcome (check ‘get involved’).
Cycles
In 2017 we saw the rise of AMD GPUs. Thanks to a full-time developer who worked on OpenCL for a year, Blender is now a good choice for use on AMD hardware. For 2018 we want to work on eliminating the kernel compilation waiting time.
Cycles is now one of the most popular areas for developers to work in. Most of them are doing this as part of their day job – to make sure Cycles stays an excellent choice for production rendering. Expect a lot of high-quality additions in 2018, and especially ways to manage fast renders.
One of Blender’s best features is that it’s a completely integrated 3D creation suite – enabling artists to take projects from concept to final edit or playback. Unfortunately the game engine has fallen behind in development – not getting the focus or development time it needs. There are many reasons for this, but one of them is that the BGE code base is too separate from the rest of Blender. That means that newly added Blender features have to be ported over to the engine to work.
For the 2.8 project we want to achieve better integration of BGE and Blender itself. The Eevee project has already proven how important real-time tools are and how well they can work for interactive 3D design and game creators.
That being said, outside of blender.org interesting Blender-related development for game engines happens too. Check out the Blender fork UPBGE for example, or the fascinating Armory Engine (see image above, it’s written in Haxe and Kha). And don’t forget the open source WebGL environments Blend4Web and Verge3D.
Assets and presets
Another ‘2.8 workflow’ related feature: we are working on better ways to manage your files and 3D assets. Partly it’s for complex production setups; partly it’s also about configuring your Workspaces with nice visible presets – lists of pictures of shaders or primitives, for example, ready to be dragged & dropped or selected.
An important design aspect of Blender’s new viewport system is that each engine is drawing in its own buffer. These buffers then get composited in real-time.
To illustrate how fast it is: in 2.8x the “Overlay engine” is using real-time compositing in the viewport (to draw selections or widgets).
When 2.8x is ready to get out of beta, we will also look at how to allow (node-based) real-time compositing in viewports. That will be the final step in fully replacing the old “Blender Internal” render engine with an OpenGL-based system.
This will be especially interesting for the Non-Photo-Realistic rendering enthusiasts out there. Note – FreeStyle rendering will have to be fully recoded for 2.8. That’s still an open issue.
Modifiers & Physics upgrade
Blender’s modifier code is being fully rewritten for 2.8. That’s needed for the new dependency graph system (threadable animation updates and duplication of data).
A nice side effect of this recode is that all modifiers will then be ‘node ready’. We expect the first experiments with modifier nodes to happen in 2018. Don’t get too excited yet; it’s especially the complexity of upgrading the old particle and hair systems that makes this a very hard project to handle.
An important related issue here is how to handle “caches” well (i.e. mesh data generated by modifiers or physics systems). This data needs to be saved and managed properly – which is something the dependency graph has to do as well. As soon as that’s solved we can finally merge in the highly anticipated Fracture Modifier branch.
Animation tools
Blender’s armature and rigging system is based on a design from the ’90s. It’s a solid concept, but it’s about time to refresh and upgrade it. When Blender 2.8x gets closer to beta I want to move my focus to getting a project organized (and funded) to establish a small team of developers working on animation tools for the next decade – Animation 2020! Contact me if you want to contribute.
Discourse forums
Improving onboarding for new developers has been on our wish list for years. There are several areas where we should do better – handling reviews of submitted patches and branches more swiftly, for example.
We also often hear that Blender developer channels are hard to find or not very accessible. The blender.org teams still mainly use IRC chat and MailMan mailing lists for communication.
In January we will test a dedicated blender.org developer forum using Discourse (fully free/open software). This forum will focus on people working with Blender’s code, developer tools and anything related to becoming a contributor. If this experiment works well we can expand it to a more general “get involved” website (for docs, educators, scientists, conferences, events). However, user questions and feature requests would be off topic; there are better places that handle those.
20th anniversary of first public Blender release
Oh yes! Today it is exactly 20 years since I released the first Blender version to the public – only for the Silicon Graphics IRIX platform.
FreeBSD and Linux versions followed a couple of months later.
All as “freeware” then, not open source. I first had to learn the lesson of bursting internet bubbles before going fully open!
Blender 2.80 beta release
Originally planned for Q2 this year… luckily that quarter lasts until July 1st. It all depends on how well the current projects go in the coming months. But if it’s not July 1st, then at least we have…
SIGGRAPH, Vancouver
The largest annual 3D CG event takes place on August 12-16 this year. We aim for a strong presence there, and it will certainly be a great milestone to showcase 2.80!
Open issues
The 2.8 team tries to keep focus – not to do too many things at once and to finish what’s being worked on in the highest usable quality possible. That means that some topics are being done first, and some later. The priorities for 2.8 have been written down in this mail to the main developers list.
We can still use a lot of help. Please don’t hesitate to reach out – especially when workflow and usability are your strengths! But we can use contributors in many ‘orphaned’ areas, such as Booleans, the Video editor, Freestyle rendering, Particles, Physics caching, Hair, Nurbs… but also to work on better integration with Windows and macOS desktop environments.
Credits
An important part of the blender.org project is the studios and companies who contribute to Blender.
Special thanks goes to Blender Foundation (development fund grants), Blender Institute/Animation Studio (hiring 3-5 devs), Tangent Animation (viewport development), Aleph Objects (from Lulzbot printers, supporting Blender 101), Nimble Collective (Alembic), AMD (Cycles OpenCL support), Intel (seeding hardware, Cycles development), Nvidia (seeding hardware), Theory Animation and Barnstorm VFX (Cycles development, VFX pipeline).
Special thanks also to the biggest supporters of the Development Fund: Valve Steam Workshop and Blender Market.
When you say you mostly do bugfixing now, seven kinds of new features will crawl under your bed and bite your silly toes off. If we were to come up with a short summary for 2017, it would be along those very lines.
So yes, we ended up with more new features that nevertheless make GIMP faster and improve workflows. Here’s just a quick list of the top v2.10 parole violators: multi-threading via GEGL, a linear color space workflow, better support for the CIELCH and CIELAB color spaces, a much faster on-canvas Warp Transform tool, complete on-canvas gradient editing, better PSD support, metadata viewing and editing, and under- and overexposure warnings on the canvas.
All of the above features (and many more) are available in GIMP 2.9.8, released earlier this month. We are now in string freeze mode, which means there will be very few changes to the user interface, so that translators can safely do their job in time for the v2.10 release.
Everyone is pretty tired of not having GIMP 2.10 out by now, so we only work on bugs that block the v2.10 release. There are currently 25 such bugs. Some are relatively easy to fix, some require more time and effort. Some have patches or work in progress, and some need further investigation. We will get there faster if more people join in to hack on GIMP.
Speaking of which, one thing that has changed for the better in the GIMP project this year is how the workload is spread among top contributors. Michael Natterer is still responsible for 33% of all GIMP commits in the past 12 months, but that’s a ca. 30% decrease from last year. Jehan Pagès and Ell now have a 38% share of all contributions, and Øyvind Kolås contributed another 5%, thanks to the work on layer blending/compositing and the linear color space workflow in GIMP.
In particular, Ell committed the most between 2.9.6 and 2.9.8, implemented on-canvas gradients editing, introduced other enhancements, and did a lot of work on tuning performance in both GIMP and GEGL. We want to thank him especially for being the most prolific developer of GIMP for this last development release!
Another increasingly active contributor in the GEGL project is Debarshi Ray who uses the library for his project, GNOME Photos. Debarshi focused mostly on GEGL operations useful for digital photography such as exposure and shadows-highlights, and did quite a lot of bugfixing. We also got a fair share of contributions from Thomas Manni who added some interesting experimental filters like SLIC (Simple Linear Iterative Clustering) and improved existing filters.
Changes in GEGL and babl in 2017 included (but are not limited to) 15 new filters, improvements in mipmap processing and multi-threading computations, a video editing engine called gcut, more fast paths to convert pixel data between various color spaces, support for custom RGB primaries and TRC, ICC color profiles parsing and generation, and numerous bugfixes.
At least some of the work done by Øyvind Kolås on GEGL, babl, and GIMP this year was sponsored by you, the GIMP community, via the Patreon and Liberapay platforms. Please see his post on 2017 crowdfunding results for details and consider supporting him. Improving GEGL is crucial for GIMP to become a state-of-the-art professional image editing program. Over the course of 2017, programming activity in GEGL and babl increased by 120% and 102% respectively in terms of commits, and we’d love to see that momentum keep up in 2018 and onwards.
Even though the focus of another crowdfunded effort, by Jehan Pagès and Aryeom Han, is to create an animated short movie, Jehan contributed roughly 1/5 of the code changes this year, fixing bugs, improving painting-related features, and maintaining GIMP’s official Flatpak, and these statistics don’t even count the work on a much more sophisticated animation plug-in currently available in a dedicated Git branch. Hence supporting this project results in a better user experience for GIMP users. You can help fund Jehan and Aryeom on Liberapay, Patreon or Tipeee. You can also read their end-of-2017 report.
We also want to thank Julien Hardelin who has been a great help in updating the user manual for upcoming v2.10, as well as all the translators and people who contributed patches. Moreover, we thank Pat David for further work on the new website, and Michael Schumacher for tireless bug triaging. They all don’t get nearly as much praise as they deserve.
First and foremost, 2017 ends well. We will end this year putting Krita 4.0 in string freeze, which means a release early next year! In 2017, we’ve released several versions of Krita 3.x. We’ve gained a lot of new contributors with great contributions to Krita. We’ve got money in the bank, too. Less than last year, but sales on the Windows Store help quite a bit! And development fund subscriptions have been steadily climbing, and we’re at 70 subscribers now! We’ve also done a great project with Intel, which not only brought some more money in, but also great performance improvements for painting and rendering animations.
It’s been a tough year, though! Our maintainer had only just recovered from being burned out from working full-time on Krita and on a day job when the tax office called… The result was half a year of stress and negotiations, ending in a huge tax bill and a huge accountant’s bill. And enough uncertainty that we couldn’t have our yearly fund raiser, and enough extra non-coding work that the work on the features funded in 2016 took much, much more time than planned. In the period when we were talking to the tax office, until we could go public, Boudewijn and Dmitry were supported by members from the community; without that support the project might not have survived.
But then, when we could go public with our problems, the response was phenomenal. At that point, we were confident we would survive anyway, with the work we were doing for Intel, the Windows Store income and private savings, but it would have been extremely tight. The community rallied around us magnificently, and then Private Internet Access (who also sponsor KDE, Gnome, Blender and Inkscape, among others) contacted us with their decision to pay the bill!
From nearly broke, we went to be in a position to start planning again!
We will release Krita 4.0 with Python scripting, SVG vector layers, a new text tool, the stacked brushes feature (now renamed to masking brush), the lazy coloring brush, and many more features. String freeze December 31st 23:59:59, release planned for March!
We want to spend next year working on bug fixes, performance improvements and polish.
But there will also be work on a new reference images tool, improved session management and other things.
We will look into the possibility of porting Krita to Android and iOS, though the first attempts have not been promising.
We will do another fund raiser, though whether that will be a kickstarter hasn’t been decided yet.
After being unable to attend open source conferences in 2016 and 2017, we intend to have at least someone present at the Libre Graphics Meeting and Akademy. We shouldn’t get disconnected from our roots!
Akademy is the yearly KDE community conference, and Krita has always been part of the KDE community. And KDE has always been more than a desktop environment for Linux and other Unix-like operating systems. As a community, KDE offers an environment where projects like Krita can flourish. Every developer in the KDE community can work on any of the KDE projects; the level of trust in each other is very high.
These days, judging by the number of bugs reported and closed, Krita is the second-most used KDE project, after the Plasma desktop shell. Without KDE, Krita wouldn’t be where it is now. Without KDE, and the awesome job its volunteer sysadmins are doing, we wouldn’t have working forums, continuous integration, bug trackers or a project management platform. Sprints would be far more difficult to organize, and, of course, Krita depends heavily on a number of KDE framework libraries that make our coding life much easier. KDE is currently having its annual End of Year Fundraiser!
Our contributor sprint in 2016 was partly sponsored by KDE as well. With all the bother, it looked like we wouldn’t meet up in 2017. But with the project back on a sound footing, we managed to have a small sprint in November after all, and much vigorous discussion was had by all participants, ending up with firm plans for the last few 4.0 features we were working on. Next year, we intend to have another big contributor sprint as well.
And, of course, lots of lovely releases, bug fixes, features, artist interviews, documentation updates, and the pleasure of seeing so many people create great art!
Krita is not widely known in Latin America. In Colombia, we found that people are interested in knowing more about how to use it. This year, in April 2017, the program of the Latin American Free Software Install Fest included a workshop by David Bravo about Krita. The workshop was fully booked and inspired us to create this course.
Left to right: Mateo Leal, Angie Alzate, David Bravo (teacher), Lina Porras, Lucas Gelves, Juan Pablo Sainea, Javier Gaitán
During 4 sessions of 3 hours each, David Bravo guided a group of six students through their first steps in Krita, including sketch, canvas, digitalization, lines, curves and brush, light and shadow, digital color, painting and color palette, texture, effects, exporting files for digital media and printing.
David Bravo (front). The projected drawing is his work.
This course was made possible by the cooperation of three organizations: Onoma Project, Corre Libre Foundation and Ubuntu Colombia. The cost for the students was about 16 USD; all of the proceeds were donated to the Krita Foundation.
Lucas Gelves teaching himself to draw.
We think that we can offer an intermediate course in 2018. And of course we want to say thank you to the Krita Foundation for sending gifts for the course students and for staying in touch with us. We hope to cooperate on more courses in the near future!
David Bravo is a digital and multimedia designer from Colegio Mayor de Cundinamarca, currently working in multimedia freelance projects with a focus on traditional animation, 3D and visualization in virtual environments. He is also the leader of the Onoma Project, an online free platform that is under development. The main objective of this project is to provide tools for easy and secure learning of FLOSS for design.
Ubuntu Colombia acts as coordinator and communicator of the course. Ubuntu Colombia is a community with 12 years of history in spreading Ubuntu and FLOSS in Colombia; the Krita course was part of this year’s efforts of the community to promote education on FLOSS tools, as were LaTeX courses and LPIC preparation courses.
Corre Libre Foundation is an NGO created in 2008. Its objectives are:
– to promote the creation of free/open knowledge
– to sponsor free technological projects with social impact
– to promote and spread the use and development of technologies that contribute to human freedom
– to promote and spread collaborative work.
They support Orfeo, a free document management system. For this course they provided a place to work, which would otherwise have been too difficult and expensive to find in our city.
My name is Rositsa (also known as Roz) and I’m somewhat of a late blooming artist. When I was a kid I was constantly drawing and even wanted to become an artist. Later on I chose a slightly different path for my education and career and as a result I now have decent experience as a web and graphic designer, front end developer and copywriter. I am now completely sure that I want to devote myself entirely to art and that’s what I’m working towards.
Do you paint professionally, as a hobby artist, or both?
I mainly work on personal projects. I have done some freelance paintings in the past, though. I’d love to paint professionally full time sometime soon, hopefully for a game or a fantasy book of some sort.
What genre(s) do you work in?
I prefer fantasy most of all and anything that’s not entirely realistic. It has to have some magic in it, something from another world. That’s when I feel most inspired.
Whose work inspires you most — who are your role models as an artist?
I’m a huge fan of Bill Tiller’s work for The Curse of Monkey Island, A Vampyre Story and Duke Grabowski, Mighty Swashbuckler! Other than him I’m following countless other artists on social networks and use their work to get inspired. Also, as a member of a bunch of art groups I see great artworks from artists I’ve never heard of every single day, and that’s also a huge inspiration.
How and when did you get to try digital painting for the first time?
My first encounter with digital painting was in 2006-2007 on deviantART but it wasn’t until 2010-2011 when I finally got my precious Wacom Bamboo tablet (which I still use by the way!) that I could finally begin my own digital art journey for real.
What makes you choose digital over traditional painting?
Digital painting combines my two loves – computers and art. It only seems logical to me that I chose it over traditional art but back then I didn’t give it that much thought – I just thought how awesome all the paintings I was seeing at the time were and how I’d love to do that kind of art myself. I’ve since come to realize that one doesn’t really have to choose one or the other – I find doing traditional art every once in a while incredibly soothing, even though I’ve chosen to focus on digital art as my career path.
How did you find out about Krita?
I think I first got to know about Krita from David Revoy on Twitter some years ago, but it wasn’t until this year when I finally decided to give it a try.
What was your first impression?
My first impression was just WOW. I thought “OMG, it’s SO similar to Photoshop but has all these features in addition and it’s FREE!” I was really impressed that I could do everything I was used to doing in Photoshop, but in a native Linux application and free of charge.
What do you love about Krita?
Exactly what I mentioned above. I’m still kind of a newbie with Krita so there’s not so much to tell but I’m sure I’m yet to discover a lot more to love as time goes by.
What do you think needs improvement in Krita? Is there anything that really annoys you?
I’d like to see an improved way of working with the Bezier Curve Selection Tool, as I use it a lot but have trouble making perfect selections in one go. I’d really like to be able to switch between corner anchor points and curves on the fly, as I’m creating the selection, instead of creating a somewhat messy selection and then having to go back and clean it up by adding and subtracting parts of it until it looks the way I’d intended. That would certainly save me a lot of time.
What sets Krita apart from the other tools that you use?
That it’s free to use but not any less usable than the most popular paid applications of the sort! Also, the feeling I get whenever I’m involved with Krita in any way – be it by reading news about it, interacting on social media or painting with it. I’m just so excited that it exists and grows and is getting better and better. I feel somewhat proud that I’m contributing even in the tiniest way.
If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
I love everything I’ve created in Krita so far. I don’t think it’s that much about the software you create a certain artwork with, but rather all the love you put into it as you’re creating it.
What techniques and brushes did you use in it?
I’m trying to use fewer brushstrokes and more colorful shapes as I paint. I mainly use the Bezier Curve Selection Tool, the Gradient Tool and a small set of Krita’s predefined brushes for my artworks. I have tried creating my own custom brushes, but with little luck so far (I think I have much more reading and experimenting to do before I succeed).
Where can people see more of your work?
I have a portfolio website (in Bulgarian): www.artofroz.com; but you can find me on Facebook, Twitter, Behance, Artstation and a bunch of other places either as ArtofRoz, Rositsa Zaharieva or some combo/derivative of both.
Anything else you’d like to share?
I’d like to tell everyone that’s been using other software for their digital paintings to definitely give Krita a try, too. Not that other software is bad in any way, but Krita is awesome!
Dave and I will be giving a planetarium talk in February
on the analemma and related matters.
Our planetarium, which runs a fiddly and rather limited program called
Nightshade, has no way of showing the analemma. Or at least, after
trying for nearly a week once, I couldn't find a way. But it can
show images, and since I once wrote a
Python
program to plot the analemma, I figured I could use my program
to generate the analemmas I wanted to show and then project them
as images onto the planetarium dome.
But naturally, I wanted to project just the analemma and
associated labels; I didn't want the blue background to
cover up the stars the planetarium shows. So I couldn't just use
a simple screenshot; I needed a way to get my GTK app to create a
transparent image such as a PNG.
That turns out to be hard. GTK can't do it (either GTK2 or GTK3),
and people wanting to do anything with transparency are nudged toward
the Cairo library. As a first step, I updated my analemma program to
use Cairo and GTK3 via gi.repository. Then I dove into Cairo.
A Cairo surface is like a canvas to draw on, and it knows how to
save itself to a PNG image.
A context is the equivalent of a GC in X11 programming:
it knows about the current color, font and so forth.
So the trick is to create a new surface, create a context,
then draw everything all over again with the new context and surface.
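To make the trick concrete, here's a minimal standalone sketch (an illustration, not code from the analemma program) that paints a transparent background on an ARGB32 surface and saves it as a PNG:

import cairo
import math

# An ARGB32 surface has an alpha channel:
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 100)
ctx = cairo.Context(surface)

# Paint a fully transparent background (alpha = 0):
ctx.set_source_rgba(0, 0, 1, 0)
ctx.paint()

# Anything drawn afterward shows up as usual:
ctx.set_source_rgb(1, 1, 0)
ctx.arc(100, 50, 30, 0, 2 * math.pi)
ctx.fill()

surface.write_to_png("transparent.png")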
A Cairo widget will already have a function to draw everything
(in my case, the analemma and all its labels), with this signature:
def draw(self, widget, ctx):
It already allows passing the context in, so passing in a different
context is no problem. I added an argument specifying the background
color and transparency, so I could use a blue background in the user
interface but a transparent background for the PNG image:
def draw(self, widget, ctx, background=None):
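For illustration, a hypothetical body for draw() might start out like this (the real program goes on to draw the analemma and all its labels):

def draw(self, widget, ctx, background=None):
    if background is None:
        background = (0, 0, 1, 1)    # opaque blue for the GUI window
    ctx.set_source_rgba(*background)
    ctx.paint()
    # ... then draw the analemma and its labels with ctx ...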
I also had a minor hitch: in draw(), I was saving the context as
self.ctx rather than passing it around to every draw routine.
That meant that calling draw() with the new image's context would overwrite
the context used for the GUI window. So I save the GUI context first and restore it afterward.
Here's the final image saving code:
def save_image(self, outfile):
    dst_surface = cairo.ImageSurface(cairo.FORMAT_ARGB32,
                                     self.width, self.height)
    dst_ctx = cairo.Context(dst_surface)

    # draw() will overwrite self.ctx, so save it first:
    save_ctx = self.ctx

    # Draw everything again to the new context,
    # with a transparent instead of an opaque background:
    self.draw(None, dst_ctx, (0, 0, 1, 0))    # transparent blue

    # Restore the GUI context:
    self.ctx = save_ctx

    dst_surface.write_to_png(outfile)
    print("Saved to", outfile)
When updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.
Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!
The maintainership of the RawSpeed library was transferred to the darktable project. The work on code cleanup, hardening, modernization, simplification and testing is ongoing.
Almost 3 thousand commits to darktable+rawspeed since 2.2.0
273 pull requests handled
340+ issues closed
Updated user manual is coming soon™
Gource visualization of git log from 2.2.0 to right before 2.4.0:
Hell Froze Over
As you might have read on our news post we finally ported darktable to Windows and intend to support it in the future. At the moment it’s still lacking a few features (for example there is no printing support), has a few limitations (tethering requires special drivers to be installed) and comes with its own set of bugs (TIFF import and export don’t support non-ASCII characters in file names). But overall we are confident that it’s quite usable already and hope you will enjoy it. A very special thanks goes to Peter Budai who finally convinced us to agree to the port and who did most of the work.
The Big Ones
A new module for haze removal
The local contrast module can now be pushed much further, it also got a new local laplacian mode
Add undo support for masks and more intelligent grouping of undo steps
Blending now allows to display individual channels using false colors
darktable now supports loading Fujifilm compressed RAFs
darktable now supports loading floating point HDR DNGs as written by HDRMerge
We also added channel specific blend modes for Lab and RGB color spaces
The base curve module allows for more control of the exposure fusion feature using the newly added bias slider
The tonecurve module now supports auto colour adjustment in RGB
Add absolute color input as an option to the color look up table module
A new X-Trans demosaicing algorithm, Frequency Domain Chroma, was implemented.
You can now choose from pre-defined scheduling profiles for OpenCL
Speaking of OpenCL, darktable now allows forcing the use of OpenCL for a specific pixelpipe
XMP sidecar files are no longer written to disk when the content didn’t actually change. That mostly helps with network storage and backup systems that use files’ time stamps
New Features And Changes
Show a dialog window that tells when locking the database/library failed
Don’t shade the whole region on the map when searching for a location. Instead just draw a border around it.
Also in map mode: Clear the search list and map indicators when resetting the search module.
With OsmGPSMap newer than version 1.1.0 (i.e., anything released after that OsmGPSMap version) the map will show copyright info.
Running jobs with a progress bar (mostly import and export) will show that progress bar on top of the window entry in your task bar – if the system supports it. It should work on GNOME, KDE and Windows at least.
Add bash-like string replacement for variables (export, watermark, session settings)
Add a preferences option to ask before removing empty dirs
The “colorbalance” module got a lot faster, thanks to SSE optimized code
Make gradient sliders a little more colorful
Make PNG compression level used for exporting configurable
On OSX, load single images from command line or via drag&drop in darkroom mode
Add an option to omit the intermediate tag hierarchy in exported files and only add the last level
In the watermark module, sort the list of SVG files and omit the file extension
Support XYZ as a proofing profile
Local contrast now got a new slider to set the midtone range
darktable got two new helper scripts (those are not installed by default, grab them from the sources):
One to purge thumbnails that no longer have an associated image in the database,
and a second script that uses inotify to watch a folder for new files to open them in a running darktable instance.
In the curve editors of base curve and tone curve you can now delete nodes with a right click and see coordinates of nodes while editing. Note that you can use keyboard modifiers ctrl and shift to change the precision of your changes
Creating a new instance of a module can now be done with a quick click of the middle mouse button on the multi-instance icon
New darktable installations on computers with more than 8 GB of memory will now by default use half of that per module
Several background colors and the brush color are now configurable in the CSS
Some new cameras can bump the ISO level to insane highs. We try to follow as well as we can by no longer limiting it to 51200 in the GUI
Base curve and the highlights module now support multiple instances and use blending and masks
Having the 1 key toggle between 1 and 0 stars wasn’t very popular with many people. You can disable that extra feature and have it behave like the other rating shortcuts now
You can decide if you want to be asked before resetting the history stacks of images from the lighttable
The grain module was slightly changed to have a more pleasing, photographic-paper like appearance
Using the color look up table module you can now convert your images to monochrome, honoring the Helmholtz-Kohlrausch effect
Support basic import of Lightroom 7 settings
Change the styling of insensitive bauhaus widgets
Don’t hide the mode combobox in the exposure module, just disable it
Read primaries and whitepoint from .hdr files and default to those as the input color profile
Some more small improvements were made
Bugfixes
Fix the problem with rating images by accident when moving the mouse while typing an image size in the export module
Fix several oddities in folder and tag mode of the collect module
Print mode’s color profile settings no longer interact with the export module
Update the style lists when importing a style
Fix some bugs with multiple module instances used in a style
On OSX only the main window should be fullscreen, not the popups
Some speedups with VERY big libraries or having A LOT OF tags
Significantly speed up tagging many images
Fix searching locations using OpenStreetMap
Fix partial copies of large files in “import from camera”
Fix a crash in the import dialog when using Lua to add widgets there
Fix some false-positive warnings about another running darktable instance and it having locked the databases
No longer switch to the favourite modules group when duplicating one of its modules
Fix loading of XYZ files
Fix Lab export when the profile was set from the lighttable
Create temporary snapshot files with mode 0600 to stop other people looking at them
Fix several bugs with Wayland. However, there are still issues, so darktable will prefer XWayland
Google deprecated the Picasa Web API so it’s no longer possible to create G+ albums
Fix the default for sliders with target not being “red” in the channel mixer
Fix the removal of directories
Make the escape key cancel history dialogs
Block keyboard accels when editing camera controls
Properly delete XMP sidecars
Make sure that the rating set in darktable is used for the exported file, not something set inside the raw file
Don’t re-write all XMP files when detaching a tag
Sync XMPs when a tag is removed from the database
Sync XMPs after a tag is attached/detached via the Lua API
Bail out of darktable-cli when the XMP file is not readable
Show ratings on zoomable lighttable without a delay
Rely on CUPS color management when printing without configuring any color profile in darktable
Fix spurious segfault in local contrast
Make calls to exiv2’s readMetadata thread safe to not crash randomly
Properly read Lightroom XMPs on systems with ‘,’ as the decimal separator
Fix setting the PNG bit depth from the GUI
Many more bugs got fixed
Lua
darktable now uses Lua 5.3. The bundled copy got updated accordingly
Add dt.print_log. It’s like print_error but without the ERROR prefix
Reorder callback parameters for intermediate export image: add the actual image to the parameters of the event
Call lua post-import-image event synchronously
Add darktable.configuration.running_os to detect the OS darktable is running on
New widget type: section_label, adds a label which looks like a section change
Changed Dependencies
CMake 3.1 is now required
In order to compile darktable you now need at least gcc-5.0+/clang-3.4+
ZLIB is now required for the DNG Deflate compressed raw support
darktable now uses Lua 5.3
Camera support, compared to 2.2.0
Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!
A new year is coming at us quickly, so how about a nice new website to go with it?
Baby New Year from 110 years ago …
houz and I have been working hard over the past few months to migrate the old website from WordPress to a new static site, using Python/Pelican.
This should make things more secure and safer for both you and us (see the problems that rawsamples.ch had for the perils of using a db-driven backend for a website).
Not to mention it makes collaboration and contributing a bit easier now, as the entire site gets its own GitHub repository (I’ll be eagerly awaiting your pull requests).
I tried to create a design that had a bit more emphasis on accessibility and readability.
The site is fully responsive for screens ranging from a mobile phone to a full desktop.
The type is larger and generally given a bit more room to breathe on the page, and we tried to highlight images a bit more, as this is a photography-centered project.
This is just one small way for me to contribute and give back to the community in my own way (you should probably not let me near any real code).
In fact, this is one of the reasons I started up the community over at PIXLS.US.
We needed a community focused specifically on Free Software photography and a way for us to freely share our knowledge and experiences to help everyone, especially across multiple projects.
pixls.us
If you’re not familiar with the community yet, why not?!
We’re all photography folks who have a passion for Free Software with the mission:
To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.
We also happen to have quite a few developers of various types in the community, and as a way to assist projects and contribute back we’ve been working on websites and other community-oriented functionality (like the forums, hosting comments, files, and more).
I’d get in trouble if I didn’t mention that we also got raw.pixls.us set up to replace the ailing rawsamples.ch.
One neat way we’re able to help out even more is by integrating the commenting system here into the forums.
It’s how we manage commenting on the main pixls.us website, and we had great success with this approach for the digiKam project when we built them a new website earlier this year.
This lets us moderate comments in one place, allows for cross-pollination of knowledge between the projects, and users get to truly own their comments (instead of being monetized and tracked by some third-party commenting system like Disqus).
Happy New Year
Have a look around the new site and please don’t hesitate to point out any problems you may run into!
From me, thank you x1000 to the developers for this awesome project.
GIMP’s user interface is currently available in 80 languages. So far ca. 20 translations have been updated in the unstable branch since the beginning of the work on v2.10, and only 8 translations in the ‘po’ directory (where most translatable messages reside) are at least 90% complete. So clearly we need to give our translators a head start.
This is why GIMP’s master branch is now entering a tentative string freeze phase in preparation for the 2.10 release. We expect further changes between today and the v2.10 final release to affect no more than 1% of translatable messages. So it’s safe to start updating user interface translations now.
If you are interested in how complete the translation into your language is, check out the current stats. To start updating it, please contact your local team.
We would also like to remind translators that we are relaxing the release policy for the stable branch. Starting with v2.10, stable releases may have minor new features. This makes changes and the introduction of new strings more likely than in previous stable branches, where the only changes allowed were bug fixes (and which introduced only a few string changes).
Hi all,
First off, sorry for the delay. I was trying to finish the new Render workbench (see below) before posting this report, because otherwise there is not much exciting stuff this month, and it ended up taking more time than I thought, because I kept experimenting a lot until arriving at a good solution. And...
Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!
changes since rc1
Fix a bug in haze removal that resulted in black areas in the exported image
Support Sony ILCE-7RM3
Make calls to exiv2’s readMetadata thread safe to not crash randomly
Don’t hide the mode combobox in the exposure module, just disable it
Change the styling of insensitive bauhaus widgets
Fix spurious segfault in local contrast
Don’t show an error popup on Windows when the CD drive is empty
The changelog compared to 2.2.0 can be found below. Some of the fixes might have been backported to the stable 2.2.x series already.
The maintainership of the RawSpeed library was transferred to the darktable project. The work on code cleanup, hardening, modernization, simplification and testing is ongoing.
Well over 2 thousand commits to darktable+rawspeed since 2.2.0
244 pull requests handled
320+ issues closed
Updated user manual is coming soon™
Hell Froze Over
As you might have read on our news post we finally ported darktable to Windows and intend to support it in the future. At the moment it’s still lacking a few features (for example there is no printing support), has a few limitations (tethering requires special drivers to be installed) and comes with its own set of bugs. But overall we are confident that it’s quite usable already and hope you will enjoy it. A very special thanks goes to Peter Budai who finally convinced us to agree to the port and who did most of the work.
The Big Ones
A new module for haze removal
The local contrast module can now be pushed much further; it also got a new local Laplacian mode
Add undo support for masks and more intelligent grouping of undo steps
Blending now allows displaying individual channels using false colors
darktable now supports loading Fujifilm compressed RAFs
darktable now supports loading floating point HDR DNGs as written by HDRMerge
We also added channel specific blend modes for Lab and RGB color spaces
The base curve module allows for more control of the exposure fusion feature using the newly added bias slider
The tonecurve module now supports auto colour adjustment in RGB
Add absolute color input as an option to the color look up table module
A new X-Trans demosaicing algorithm, Frequency Domain Chroma, was implemented.
You can now choose from pre-defined scheduling profiles for OpenCL
Speaking of OpenCL, darktable now allows forcing the use of OpenCL for a specific pixelpipe
XMP sidecar files are no longer written to disk when the content didn’t actually change. That mostly helps with network storage and backup systems that use files’ time stamps
New Features And Changes
Show a dialog window that tells when locking the database/library failed
Don’t shade the whole region on the map when searching for a location. Instead just draw a border around it.
Also in map mode: Clear the search list and map indicators when resetting the search module.
With OsmGPSMap newer than version 1.1.0 (i.e., anything released after that OsmGPSMap version) the map will show copyright info.
Running jobs with a progress bar (mostly import and export) will show that progress bar on top of the window entry in your task bar – if the system supports it. It should work on GNOME, KDE and Windows at least.
Add bash-like string replacement for variables (export, watermark, session settings).
Add a preferences option to ask before removing empty dirs
The “colorbalance” module got a lot faster, thanks to SSE optimized code
Make gradient sliders a little more colorful
Make PNG compression level used for exporting configurable
On OSX, load single images from command line or via drag&drop in darkroom mode
Add an option to omit the intermediate tag hierarchy in exported files and only add the last level
In the watermark module, sort the list of SVG files and omit the file extension
Support XYZ as a proofing profile
Local contrast now got a new slider to set the midtone range
darktable got two new helper scripts (those are not installed by default, grab them from the sources): One to purge thumbnails that no longer have an associated image in the database, and a second script that uses inotify to watch a folder for new files to open them in a running darktable instance.
In the curve editors of base curve and tone curve you can now delete nodes with a right click and see coordinates of nodes while editing. Note that you can use keyboard modifiers ctrl and shift to change the precision of your changes
Creating a new instance of a module can now be done with a quick click of the middle mouse button on the multi-instance icon
New darktable installations on computers with more than 8 GB of memory will now by default use half of that per module
Several background colors and the brush color are now configurable in the CSS
Some new cameras can bump the ISO level to insane highs. We try to follow as well as we can by no longer limiting it to 51200 in the GUI
Base curve and the highlights module now support multiple instances and use blending and masks
Having the 1 key toggle between 1 and 0 stars wasn’t very popular with many people. You can disable that extra feature and have it behave like the other rating shortcuts now
You can decide if you want to be asked before resetting the history stacks of images from the lighttable
The grain module was slightly changed to have a more pleasing, photographic-paper like appearance
Using the color look up table module you can now convert your images to monochrome, honoring the Helmholtz-Kohlrausch effect
Support basic import of Lightroom 7 settings
Change the styling of insensitive bauhaus widgets
Don’t hide the mode combobox in the exposure module, just disable it
Some more small improvements were made
Bugfixes
Fix the problem with rating images by accident when moving the mouse while typing an image size in the export module
Fix several oddities in folder and tag mode of the collect module.
Print mode’s color profile settings no longer interact with the export module
Update the style lists when importing a style
Fix some bugs with multiple module instances used in a style
On OSX only the main window should be fullscreen, not the popups
Some speedups with VERY big libraries or having A LOT OF tags
Significantly speed up tagging many images
Fix searching locations using OpenStreetMap
Fix partial copies of large files in “import from camera”
Fix a crash in the import dialog when using Lua to add widgets there
Fix some false-positive warnings about another running darktable instance and it having locked the databases
No longer switch to the favourite modules group when duplicating one of its modules
Fix loading of XYZ files
Fix Lab export when the profile was set from the lighttable
Create temporary snapshot files with mode 0600 to stop other people looking at them
Fix several bugs with Wayland. However, there are still issues, so darktable will prefer XWayland
Google deprecated the Picasa Web API so it’s no longer possible to create G+ albums
Fix the default for sliders with target not being “red” in the channel mixer
Fix the removal of directories
Make the escape key cancel history dialogs
Block keyboard accels when editing camera controls
Properly delete XMP sidecars
Make sure that the rating set in darktable is used for the exported file, not something set inside the raw file
Don’t re-write all XMP files when detaching a tag
Sync XMPs when a tag is removed from the database
Sync XMPs after a tag is attached/detached via the Lua API
Bail out of darktable-cli when the XMP file is not readable
Show ratings on zoomable lighttable without a delay
Rely on CUPS color management when printing without configuring any color profile in darktable
Fix spurious segfault in local contrast
Make calls to exiv2’s readMetadata thread safe to not crash randomly
Many more bugs got fixed
Lua
darktable now uses Lua 5.3. The bundled copy got updated accordingly
Add dt.print_log. It’s like print_error but without the ERROR prefix
Reorder callback parameters for intermediate export image: add the actual image to the parameters of the event
Call lua post-import-image event synchronously
Add darktable.configuration.running_os to detect the OS darktable is running on
New widget type: section_label, adds a label which looks like a section change
Changed Dependencies
CMake 3.1 is now required.
In order to compile darktable you now need at least gcc-4.9+/clang-3.4+, and gcc-5.0+ is highly recommended.
ZLIB is now required for the DNG Deflate compressed raw support.
darktable now uses Lua 5.3
Camera support, compared to 2.2.0
Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!
Playing with the ATtiny85,
I was struck by how simple the circuit was.
Sure, I'd made a
homemade
Arduino on a breadboard;
but with the crystal and all the extra capacitors and resistors it ends
up seeming like a lot of parts and wires.
If an ATtiny can use a built-in clock and not need all those extra
parts, couldn't I use an Atmega328 the same way?
Why, yes, as it turns out. But there are a few tricks.
For the initial wiring, all you need is
two power and two ground lines, the pins marked - and +,
plus a pullup resistor on RST (something large, like 10kΩ).
The excellent tutorial
From
Arduino to a Microcontroller on a Breadboard is a good guide
if you need additional details: the third section
shows a circuit without external clock.
Add an LED and resistor on pin 13 (ATmega pin 19, called SCK) so
you can test it using a blink program.
Now you need to set up the software.
Set up a hardware profile for a bare Arduino
To program it with the Arduino libraries,
you'll need a hardware definition for an atmega328 chip
with an internal clock. I used the download
from the last section of the excellent tutorial,
From
Arduino to a Microcontroller on a Breadboard. (Keep that page
up: it has good wiring diagrams.)
For Arduino 1.8.5, download breadboard-1-6-x.zip and unpack it
in your ~/sketchbook/hardware/ directory, making a directory
there called breadboard. Then you'll need to make one change:
the 1.6 directory is missing a file called pins_arduino.h,
so if you try to compile with this hardware definition, you'll get
an error like:
mkdir -p build-atmega328bb-atmega328
/usr/local/share/arduino/hardware/tools/avr/bin/avr-g++ -x c++ -include Arduino.h -MMD -c -mmcu=atmega328p -DF_CPU=8000000L -DARDUINO=185 -DARDUINO_ARCH_AVR -D__PROG_TYPES_COMPAT__ -I/usr/local/share/arduino/hardware/arduino/avr/cores/arduino -I/home/akkana/sketchbook/hardware/breadboard/avr/variants/standard -Wall -ffunction-sections -fdata-sections -Os -fpermissive -fno-exceptions -std=gnu++11 -fno-threadsafe-statics -flto blink.ino -o build-atmega328bb-atmega328/blink.ino.o
In file included from :0:0:
/usr/local/share/arduino/hardware/arduino/avr/cores/arduino/Arduino.h:257:26: fatal error: pins_arduino.h: No such file or directory
#include "pins_arduino.h"
^
compilation terminated.
/usr/share/arduino/Arduino.mk:1251: recipe for target 'build-atmega328bb-atmega328/blink.ino.o' failed
make: *** [build-atmega328bb-atmega328/blink.ino.o] Error 1
The problem is that it's including these directories:
-I/usr/local/share/arduino/hardware/arduino/avr/cores/arduino
-I/home/akkana/sketchbook/hardware/breadboard/avr/variants/standard
but the actual file is in:
/usr/local/share/arduino/hardware/arduino/avr/variants/standard/pins_arduino.h
You can fix that by making a link from the "standard" directory in your
Arduino install to breadboard/avr/variants/standard. On Linux, that would
be something like this (Mac and Windows people can substitute their
local equivalents):
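# Reconstructed from the paths in the error above; adjust for your install:
ln -s /usr/local/share/arduino/hardware/arduino/avr/variants/standard/pins_arduino.h \
      ~/sketchbook/hardware/breadboard/avr/variants/standard/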
Now your hardware definition should be ready to go. To check, fire up
the IDE and look in Tools->Board for
ATmega328 on a breadboard (8 MHz internal clock).
Or if you use Arduino-mk, run
ALTERNATE_CORE=breadboard make show_boards
and make sure it lists
atmega328bb ATmega328 on a breadboard (8 MHz internal clock).
Reprogram the Fuses and Bootloader for an Internal Clock
The next trick is that an Atmega chip programmed with the Arduino
bootloader is also fused to use an external 16 MHz clock.
If you wire it to use its internal 8MHz clock, you won't be
able to talk to it with either an ISP or FTDI.
You'll definitely run into this if you pull the CPU out of an Arduino.
But even if you buy new chips you may see it:
many Atmega328s come pre-programmed with the Arduino bootloader.
After all, that's what most people want.
The easiest way to reprogram the fuses is to use the hardware
definition you just installed to burn a new bootloader, which resets
the fuse settings at the same time. So you need an In-System
Programmer, or ISP. You can use an Arduino as an ISP, but I'm told
that this tends to be flaky and isn't recommended. After I had
problems using an Arduino, I ordered a cheap USBtinyISP, which works
fine.
Regardless of which ISP you use, if you wire up your atmega without
an external clock when it's fused for one, you won't be able to burn a
bootloader. A typical error:
[ ... ]
Reading | ################################################## | 100% 0.02s
avrdude: Device signature = 0x000000 (retrying)
Error while burning bootloader.
Reading | ################################################## | 100% 0.02s
avrdude: Device signature = 0x000000
avrdude: Yikes! Invalid device signature.
Double check connections and try again, or use -F to override
this check.
The solution is to burn the bootloader using an external clock.
You can add a crystal and two capacitors to your breadboard circuit
if you have them.
If not, an easy solution is to pull the chip out of the breadboard,
plug it into the socket in an Arduino and burn it there.
(Note: if you're using an Arduino as your ISP, you'll need a second
Arduino.)
Plug your ISP into the Arduino's ISP header: on an Uno, that's the
header labeled ICSP at the end of the chip farthest away from the USB
plug. It's a six-pin connector (2x3) that's easy to plug in backward,
and you can't depend on either the Arduino's header or the ISP's cable
being labeled as to direction; if in doubt, use a multimeter in
continuity mode to see which pin is ground on each side, then make
sure those pins match. Once you're sure, mark your connector somehow
so you'll know next time.
In the Arduino IDE, set Tools->Board to
ATmega328 on a breadboard (8 MHz internal clock),
set Programmer to whatever ISP you're using,
then run Tools->Burn Bootloader.
If you're using Arduino-mk instead of the IDE,
set up a Makefile that looks like this:
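Here's a minimal sketch, using the board tag from show_boards above and assuming a USBtinyISP:
ALTERNATE_CORE = breadboard
BOARD_TAG = atmega328bb
ISP_PROG = usbtiny
include /usr/share/arduino/Arduino.mk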
Substitute your ISP, if different, and your location for Arduino.mk.
Then type make burn_bootloader
Program it
Once you're wired, you should be able to program it either with an
FTDI board or an ISP, as I discussed in
homemade
Arduino, Part 1.
You should be able to use your minimal Atmega328 to
run anything you can run on a normal Arduino (albeit at half the
clock speed).
I plan to make a little board with a ZIF socket and connectors for
both the USBtinyISP and the FTDI Friend so I don't have to plug in
all those wires again each time.
In the midst of post-release bug fixing, we've also added a fair number of new features to our stack. As usual, new features span a number of different components, so integrators will have to be careful picking up all the components when, well, integrating.
PS3 clones joypads support
Do you have a PlayStation 3 joypad that feels just a little bit "off"? You can't find the Sony logo anywhere on it? The figures on the face buttons look like barbed wire? And if it were a YouTube video, it would say "No copyright intended"?
Bingo. When plugged in via USB, those devices advertise themselves as SHANWAN or Gasia, and implement the bare minimum to work when plugged into a PlayStation 3 console. But a Linux computer behaves slightly differently from a console, so we needed to fix a couple of things.
There are a number of Bluetooth LE joypads available for pickup, including a few that should be firmware upgradeable. Look for "Bluetooth 4" as well as "Bluetooth LE" when doing your holiday shopping.
gnome-bluetooth work
Finally, this is the boring part. Benjamin and I reworked code that's internal to gnome-bluetooth, as used in the Settings panel as well as the Shell, to make it use modern facilities like GDBusObjectManager. The overall effect of this is less code that's less brittle and more reactive when Bluetooth adapters come and go, such as when using airplane mode.
Apart from the kernel patch mentioned above (you'll know if you need it :), those features have been integrated in UPower 0.99.7 and in the upcoming BlueZ 5.48. And they will of course be available in Fedora, both in rawhide and as updates to Fedora 27 as soon as the releases have been done and built.
There are two libre graphics related meetings coming up early next year.
The annual Libre Graphics Meeting (in Spain this year), and something entirely new: a
libre graphics track at SCaLE.
How exciting!
The Libre Graphics Meeting is going to be in Seville, Spain this year.
They recently published their Call for Participation and are accepting presentation and talk proposals now.
Unfortunately, I won’t be able to attend this year, but there’s a pretty good chance some friendlier folks from the community will be!
We’ll update more about who will be making it out as soon as we know, and maybe we can convince someone to run another photowalk with everyone.
(On a side note, if anyone from the community is going to make it and wants a hand putting anything together for a presentation just let us know - we’re here to help.)
This year we have a neat announcement - due to some prodding from Nate Willis, we have been given a day at the Southern California Linux Expo (SCaLE) to hold a Libre Graphics focused track!
The expo is at the Pasadena Convention Center, March 8-11, 2018.
We first had a chance to hang out with LWN editor Nate Willis during the Libre Graphics Meeting 2016 in London, and later out at the Texas Linux Fest.
GIMP was able to have both Akkana Peck and myself out to present on GIMPy stuff and host a photowalk as well.
The organizer for SCaLE, Ilan, was kind enough to give us a day (Friday, March 9th) and a room for all the libre graphics artists, designers, programmers, and hackers.
You could come meet the face behind these avatars.
I will be in attendance promoting GIMP stuff in the main track, Dr. Ullah (Isaac Ullah) will hopefully be presenting, and Mica will be there (@paperdigits) as well.
I’m pretty certain we’ll be holding a photowalk for attendees while we’re there - and we may even setup a nice headshot booth in the expo to take free headshots for folks.
We would love to see some folks out there.
If you think you might be able to make it, or even better submit a talk proposal, please come and join us!
(I was thinking about getting an AirBnB to stay in, so if folks let me know they are going to make it out we can coordinate a place to all stay together.)
The libre graphics community is thrilled to announce that a special,
one-day track at SCaLE 16x will be dedicated to libre graphics
software and artists. All those who work with free and open-source
tools for creative graphics projects are invited to submit a proposal
and join us for the day!
SCaLE 16x will take place from March 8 to 11, 2018, in Pasadena,
California. Libre Graphics Day: SCaLE will take place at the main
SCaLE venue on Friday, March 9.
The libre graphics track is an opportunity for teams, contributors and
practitioners involved in Libre Graphics projects to share their
experiences, showcase new developments, and hear new and inspiring ideas.
By libre graphics we mean “free, Libre and Open Source tools for
creative uses”. Libre graphics is not just about software, but extends to
standards and file formats used in creative work.
People from around the world who are passionate about
Free/Libre tools and their creative applications are encouraged to
submit a talk proposal. Sessions will be 30 minutes in length.
Developers, artists, and activists alike are invited. First-time
presenters and established projects of all sizes are welcome to submit.
We are looking for:
Reflections and practical sessions on promoting the philosophy
and use of Libre Graphics tools.
Technical presentations and workshops for developers.
Showcases of excellent work made using Libre Graphics tools.
New tools and workflows for graphics and code.
Reflections on the activities of existing Free/Libre and Open Source communities.
This is a little blog post from India. I’ve been invited to give not one, but two talks at Swatantra 2017, the triennial conference organised by ICFOSS in Thiruvananthapuram (also known by its shorter old name, Trivandrum), Kerala.
I’ll have the pleasure of giving a talk about GCompris, and another one about Synfig studio. It’s been a long time since I last talked about the latter, but since Konstantin Dmitriev and the Morevna team were not available, I’ll do my best to represent Synfig there.
(little teaser animation of the event banner, done with Synfig studio)
I’ll also meet some friends from Krita, David Revoy and Raghavendra Kamath, so even if there is no talk dedicated to Krita, it should be well represented.
The event will happen on the 20th and 21st of December, and my talks will be on the second day. Until then, I’m spending a week visiting and enjoying the south of India.
You can find more info on the official website of the event: swatantra.net.in. Many thanks again to the nice organization team at ICFOSS for the invitation!
Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!
changes since rc0
noise profile for Nikon D4
Phase One IQ140 support
OSX packaging fixes
Lightroom 7 import fixes
Some fixes for sliders and comboboxen and grabbing the keyboard focus
No longer use colored sliders in the white balance module – they confused people
Update Catalan translation
Update Hungarian translation
Fix OpenCL on OSX
Bail out of darktable-cli when the XMP file is not readable
Fix timezone selection for geotagging on Windows
Canon EOS M100 supported
Show ratings on zoomable lighttable without a delay
Rely on CUPS color management when printing without configuring any color profile in darktable
and the changelog as compared to 2.2.0 can be found below. Some of the fixes might have been backported to the stable 2.2.x series already.
The maintainership of the RawSpeed library was transferred to the darktable project. The work on code cleanup, hardening, modernization, simplification and testing is ongoing.
Well over 2 thousand commits to darktable+rawspeed since 2.2.0
244 pull requests handled
320+ issues closed
Updated user manual is coming soon™
Hell Froze Over
As you might have read on our news post we finally ported darktable to Windows and intend to support it in the future. At the moment it’s still lacking a few features (for example there is no printing support yet), has a few limitations (tethering requires special drivers to be installed) and comes with its own set of bugs. But overall we are confident that it’s quite usable already and hope you will enjoy it. A very special thanks goes to Peter Budai who finally convinced us to agree to the port and who did most of the work.
The Big Ones
A new module for haze removal
The local contrast module can now be pushed much further, it also got a new local laplacian mode
Add undo support for masks and more intelligent grouping of undo steps
Blending now allows displaying individual channels using false colors
darktable now supports loading Fujifilm compressed RAFs
darktable now supports loading floating point HDR DNGs as written by HDRMERGE
We also added channel specific blend modes for Lab and RGB color spaces
The base curve module allows for more control of the exposure fusion feature using the newly added bias slider
The tonecurve module now supports auto colour adjustment in RGB
Add absolute color input as an option to the color look up table module
A new X-Trans demosaicing algorithm, Frequency Domain Chroma, was implemented.
You can now choose from pre-defined scheduling profiles for OpenCL
Speaking of OpenCL, darktable now allows you to force the use of OpenCL for a specific pixelpipe
Xmp sidecar files are no longer written to disk when the content didn’t actually change. That mostly helps with network storage and backup systems that use files’ time stamps
New Features And Changes
Show a dialog window that tells when locking the database/library failed
Don’t shade the whole region on the map when searching for a location. Instead just draw a border around it.
Also in map mode: Clear the search list and map indicators when resetting the search module.
With OsmGPSMap newer than version 1.1.0 (i.e., anything released after that OsmGPSMap version) the map will show copyright info.
Running jobs with a progress bar (mostly import and export) will show that progress bar on top of the window entry in your task bar – if the system supports it. It should work on GNOME, KDE and Windows at least.
Add bash-like string replacement for variables (export, watermark, session settings).
Add a preferences option to ask before removing empty dirs
The “colorbalance” module got a lot faster, thanks to SSE optimized code
Make gradient sliders a little more colorful
Make PNG compression level used for exporting configurable
On OSX, load single images from command line or via drag&drop in darkroom mode
Add an option to omit the intermediate tag hierarchy in exported files and only add the last level
In the watermark module, sort the list of SVG files and omit the file extension
Support XYZ as a proofing profile
Local contrast now got a new slider to set the midtone range
darktable got two new helper scripts (those are not installed by default, grab them from the sources): One to purge thumbnails that no longer have an associated image in the database, and a second script that uses inotify to watch a folder for new files to open them in a running darktable instance.
In the curve editors of base curve and tone curve you can now delete nodes with a right click and see coordinates of nodes while editing. Note that you can use keyboard modifiers ctrl and shift to change the precision of your changes
Creating a new instance of a module can now be done with a quick click of the middle mouse button on the multi-instance icon
New darktable installations on computers with more than 8 GB of memory will now by default use half of that per module
Several background colors and the brush color are now configurable in the CSS
Some new cameras can bump the ISO level to insane highs. We try to keep up as well as we can by no longer limiting it to 51200 in the GUI
Base curve and the highlights module now support multiple instances and use blending and masks
Having the 1 key toggle between 1 and 0 stars wasn’t very popular with many people. You can disable that extra feature and have it behave like the other rating shortcuts now
You can decide if you want to be asked before resetting the history stacks of images from the lighttable
The grain module was slightly changed to have a more pleasing, photographic-paper like appearance
Using the color look up table module you can now convert your images to monochrome, honoring the Helmholtz-Kohlrausch effect
Some more small improvements were made
Support basic import of Lightroom 7 settings
Bugfixes
Fix the problem with rating images by accident when moving the mouse while typing an image size in the export module
Fix several oddities in folder and tag mode of the collect module.
Print mode’s color profile settings no longer interact with the export module
Update the style lists when importing a style
Fix some bugs with multiple module instances used in a style
On OSX only the main window should be fullscreen, not the popups
Some speedups with VERY big libraries or A LOT OF tags
Significantly speed up tagging many images
Fix searching locations using OpenStreetMap
Fix partial copies of large files in “import from camera”
Fix a crash in the import dialog when using Lua to add widgets there
Fix some false-positive warnings about another running darktable instance and it having locked the databases
No longer switch to the favourite modules group when duplicating one of its modules
Fix loading of XYZ files
Fix Lab export when the profile was set from the lighttable
Create tmp snapshot files with mode 0600 to stop other people looking at them
Fix several bugs with Wayland. However, there are still issues, so darktable will prefer XWayland
Google deprecated the Picasa Web API so it’s no longer possible to create G+ albums
Fix the default for sliders with target not being “red” in the channel mixer
Fix the removing of directories
Make the escape key cancel history dialogs
Block keyboard accels when editing camera controls
Properly delete XMP sidecars
Make sure that the rating set in darktable is used for the exported file, not something set inside the raw file
Don’t re-write all XMP files when detaching a tag
Sync XMPs when a tag is removed from the database
Sync XMPs after a tag is attached/detached via the Lua API
Bail out of darktable-cli when the XMP file is not readable
Show ratings on zoomable lighttable without a delay
Rely on CUPS color management when printing without configuring any color profile in darktable
Many more bugs got fixed
Lua
darktable now uses Lua 5.3. The bundled copy got updated accordingly
Add dt.print_log. It’s like print_error but without the ERROR prefix
Reorder callback parameters for intermediate export image: add the actual image to the parameters of the event
Call lua post-import-image event synchronously
Add darktable.configuration.running_os to detect the OS darktable is running on
New widget type: section_label, adds a label which looks like a section change
Changed Dependencies
CMake 3.1 is now required.
In order to compile darktable you now need at least gcc-4.9+/clang-3.4+, and gcc-5.0+ is highly recommended.
ZLIB is now required for the DNG Deflate compressed raw support.
darktable now uses Lua 5.3
Camera support, compared to 2.2.0
Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!
Newly released GIMP 2.9.8 introduces on-canvas gradient editing and various
enhancements while focusing on bugfixing and stability. For a complete list of
changes please see NEWS.
One of the most user-visible changes in 2.9.8 is the updated Blend tool.
Here’s what’s new about it.
First of all, it pretty much eliminates the need for the old Gradient Editor
dialog, as all of the dialog’s features are now available directly on the
canvas. You can create and delete color stops, select and shift them, assign
colors to color stops, change blending and coloring for segments between color
stops, create new color stops from midpoints.
Secondly, default gradients are now “editable”. As you probably know, the
reason most resources such as brushes, painting dynamics, and gradients are not
directly editable is that they are typically installed into a system directory
where non-privileged users can’t make any changes.
Now when you try to change an existing gradient from a system folder, GIMP will
create a copy of it, call it a Custom Gradient and preserve it across
sessions. Unless, of course, you edit another ‘system’ gradient, in which case
it will become the new custom gradient.
Since this feature is useful for more than just gradients, it was made generic
enough to be used for brushes and other types of resources in the future.
We expect to revisit this in the future releases of GIMP.
Now that 2.9.8 is out with the updated Blend tool, we are interested in your
feedback, as we still expect some cleanup and enhancements to be done there.
Most of the programming was done by Ell, however we also want to acknowledge
two other people who contributed to that effort one way or another.
Michael Henning improved the Blend tool for 2.9.2, making the position of its
endpoints editable before applying the gradient fill.
Michael Natterer refactored source code of GIMP’s tools to make them reuse one
another’s on-canvas handles. That greatly simplified adding on-canvas handles
for color stops. He also added the generic on-canvas dialog with the most
important options for tools.
Ell also implemented a feature request made in our public mailing list,
where Elle Stone asked for some way to visualize underexposed and overexposed
areas of a photo, which is a common feature in digital photography tools such as
darktable and RawTherapee.
The new Clip Warning display filter targets that use case and fills
underexposed and overexposed areas with user-configurable colors. For now,
it’s mostly geared towards images where colors are stored with floating point
precision. You will mostly benefit from this, if you work on 16/32 bit per
channel float images such as EXR and TIFF.
Implementing this feature as a display filter has certain disadvantages such
as having to go through the whole routine of adding a display filter for every
image. We are thinking of better ways to do this.
GIMP now uses the babl library for doing conversion of images between color
spaces when matrix-based ICC profiles are used. This leads to completing
transforms ca. 5 times faster in comparison to LittleCMS v2 on a few test
images we tried this on. We expect to make further use of babl for doing color
transforms once the library supports ICC profiles based on lookup tables.
While we already had the screenshot plug-in working under GNOME/Wayland,
we now implemented screenshots for KDE/Wayland (though it lacks rectangular
area selection).
The Color Picker widget will now also work in KDE/Wayland.
Note that there is still no color-picking interface in GNOME for
Wayland, so as a workaround,
color picking will only work inside GIMP windows for this platform.
Color-picked and screenshot pixels are not color-managed yet in Wayland.
Michael Natterer implemented another small feature request from a user who
asked for an Inkscape-like Paste in Place command. The idea is that GIMP
should be able to paste contents of the clipboard at exact coordinates the
contents was originally copied from. This feature is available for both the
regular clipboard and named buffers.
Paste in Place complements the usual Paste command which places contents
of the clipboard into the center of the viewport.
The spinscale widget now highlights vertical parts of the slider section
differently to hint that the position of the cursor above the widget matters.
When changing values in the lower step section, the pointer will be wrapped
around the screen so that you can continue adjusting the value without interruptions.
When using transform tools, you can now press a modifier key before or after
pressing/releasing a mouse button.
The Info window for the color picker now remembers the modes across sessions.
So if you prefer seeing LAB values, that’s what you will see every time until
you choose something else.
Canvas rotation and flip information is now visible in the status bar, as angle
value and flip icon. Clicking on these canvas statuses will respectively raise
the Select Rotation Angle dialog or unflip the canvas.
Upon detection of locally installed manuals in several languages, GIMP will now
allow selection of the preferred manual language in the Preferences dialog
(Interface > Help System).
Manual localization settings in GIMP’s preferences
This is especially useful since GIMP’s interface is available in 80 languages,
while its manual is translated to only 17
languages. You may therefore not have a choice of viewing the manual in your
preferred language.
Moreover, some people choose English over their native language for user
interfaces, while sticking to their native language for reading documentation.
This is another case where choosing preferred language for the user manual
might come in handy.
The much demanded Wavelet Decompose filter got a small round of updates
and gained a couple of new options: placing the decomposition stack into its own
layer group and adding a layer mask to each scale’s layer. It also produces
more expected results now.
The PSD plug-in was fixed to properly handle Photoshop files with deeply
nested layer groups and preserve expanded state of groups for both importing
and exporting. Additional changes fix mask position and improve layer opacity
for importing/exporting.
The PDF plug-in now supports loading password-protected files by prompting
the user for the password.
HGT files can now be imported. HGT is the format for Digital Elevation Model
data used by NASA and other space agencies.
GIMP now supports both the SRTM-1 and SRTM-3 types (as far as we know, the
only two variants) which will be imported as grayscale RGB images.
NASA HGT file import followed by appropriate “Gradient Map” filtering
In order to obtain more visible relief information, you will want to map
altitudes to colors, for instance with the “Gradient Map” filter as we did
in the example image above (see also this explicative post on the
process).
We’ll enter string freeze soon so that translators can safely finalize their
work for 2.10. Following that we expect to start making release candidates
of GIMP 2.10.
On Friday I added support for yet another variant of DFU. This variant is called “driverless DFU” and is used only by BlueCore chips from Cambridge Silicon Radio (now owned by Qualcomm). “Driverless” just means that it’s DFU-like and routed over HID, but it’s otherwise an unremarkable protocol. CSR is a huge ODM that makes most of the Bluetooth audio chips in vendor hardware. The hardware vendor can enable or disable features on the CSR microcontroller depending on licensing options (for instance echo cancellation), and there’s even a little virtual machine to do simple vendor-specific things. All the CSR chips are updatable in-field, and most vendors issue updates to fix sound quality issues or to add support for new protocols or devices.
The BlueCore CSR chips are used everywhere. If you have a “wireless” speaker or headphones that uses Bluetooth there is a high probability that it’s using a CSR chip inside. This makes the addition of CSR support into fwupd a big deal to access a lot of vendors. It’s a lot easier to say “just upload firmware” rather than “you have to write code” so I think it’s useful to have done this work.
The vendor working with me on this feature has been the awesome AIAIAI who make some very nice modular headphones. A few minutes ago we uploaded the H05 v1.5 firmware to the LVFS testing stream and v1.6 will be coming soon with even more bug fixes. To update the AIAIAI H05 firmware you just need to connect the USB cable and press and hold the top and bottom buttons on the headband until the LED goes out. You can then update the firmware using fwupdmgr update or just using GNOME Software. The big caveat is that you have to be running fwupd >= 1.0.3 which isn’t scheduled to be released until after Christmas.
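For reference, once you’re on a new-enough fwupd (and have the LVFS testing remote enabled, since that’s where the firmware currently lives), the command-line flow is roughly:
fwupdmgr refresh
fwupdmgr update
The first command pulls the latest metadata down from the LVFS, and the second applies any pending updates for attached devices.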
I’ve contacted some more vendors I suspect are using the CSR chips. These include:
Jarre Technologies
RIVA Audio
Avantree
Zebra
Fugoo
Bowers&Wilkins
Plantronics
BeoPlay
JBL
If you know of any other “wireless speaker” companies that have issued at least one firmware update to users, please let me know in a comment here or in an email. I will follow up all suggestions and put the status on the Naughty&Nice vendorlist so please check that before suggesting a company. It would also be really useful to know the contact details (e.g. the web-form URL, or the email address) and also the model name of the device that might be updatable, although I’m happy to google myself if required. Thanks as always to Red Hat for allowing me to work on this stuff.
I’m Rytelier, a digital artist. I’ve had an interest in creating art for a few years; I mainly want to visualize my original world.
Do you paint professionally, as a hobby artist, or both?
Currently I do only personal work, but I will look for some freelance work in the future.
What genre(s) do you work in?
I work mainly in science fiction – I’m creating an original world. I like to try various things, from creatures to landscapes and architecture. There are so many things to design in this world.
Whose work inspires you most — who are your role models as an artist?
It’s hard to point out specific artists; there are so many. Mainly I get inspired by fantasy art from the internet; I explore various websites to find interesting art.
How and when did you get to try digital painting for the first time?
It was years ago; I got interested in the subject after I saw other people’s work. It was obviously confusing, figuring out how to place strokes and how to mix colors, and I had to get used to not looking at my hand when doing something on the tablet.
What makes you choose digital over traditional painting?
I like the freedom and flexibility that digital art gives. I can create a variety of textures, find colors more easily and fix mistakes.
How did you find out about Krita?
I saw a news item about Krita on some website related to digital art and decided to try it.
What was your first impression?
I liked how many interesting brushes there were. As time went on I discovered more useful features. It was surprising to find out that some functions aren’t available in Photoshop.
What do you love about Krita?
It has many useful functions and is very convenient to use. I love the brush editor – it’s clean and simple to understand, but powerful. The dynamics curve adjustment is useful, and the size-dependent brush with a sunken curve allows me to paint fur and grass more easily.
There are also the different functional brush engines. Color smudge is nice for more traditional work, like mixing wet paint. The shape brush is like a lasso, but better, because it shows the shape instantly, without having to use the fill tool. The filter brush is nice too; I mainly use it as a sharpen and a customizable burn/dodge. There are also ways to color line art quickly. For a free program that functionality is amazing — it would be amazing even for a paid program! I like this software much more than Photoshop.
What do you think needs improvement in Krita? Is there anything that really annoys you?
The performance is the thing I most want to see improved for painting and filters. I’m happy to see multi-threaded brushes in the 4.0 version. I would also like a more dynamic preview when applying filters like the gradient map, where it updates instantly when moving the color on the color wheel. It annoys me that large brush files (brushes with big textures) don’t load; I have to optimize my textures by reducing their size so the brush can load.
What sets Krita apart from the other tools that you use?
The amount of convenience is very high compared to other programs. The number of “this should be designed in a better way, it annoys me” things is the smallest of all the programs I use, and where something is broken, improvements have mostly been announced for 4.0.
If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
It’s hard to pick a favourite. I think this one, because I challenged myself in this picture and it features my original characters, which I like a lot.
What techniques and brushes did you use in it?
I use brushes that I’ve created myself from resources found on the internet and pictures scanned by myself. I like to use slightly different ways of painting in every artwork; I’m still looking for the techniques that suit me best. Generally I start from a sketch, then paint splatter all over the canvas, then add blurry forms, then add details. Starting from soft edges allows me to find good colors more easily.
I hope that Krita will get more exposure and that more people, including professionals, will use it and donate to its development team instead of buying expensive digital art programs. Open source software is having a great time; more and more tools are being created that replace the expensive ones in various categories.
How about a sports (football, hockey, basketball, etc.) simulator that simulates what it’s like to play a game rather than simulating what it’s like to watch one on TV?
Or, if we’re going to simulate what feels like to watch sports on TV, let’s get hyper-real. Greasy potato-chip fingers, bathroom breaks during ads, find the remote!
There are lots of tutorials around for building an Arduino on a
breadboard, using an Atmega328 (or the older 168) chip, a crystal,
a few capacitors and resistors and a power supply.
It's a fun project that every Arduino hacker should try at least once.
But while there are lots of instructions on how to wire up a breadboard
Arduino, most instructions on how to program one are confusing and incomplete.
Of course, you can program your Atmega chip while it's in an Arduino,
then unplug it from the Arduino's socket and move it to the
breadboard. But what a hassle! It's so much more convenient to leave the chip
in the breadboard while you test new versions of the code. And you can,
in two different ways: with FTDI, which uses the Arduino bootloader,
or with an ISP, which doesn't.
Either way, start by downloading a good pinout diagram for the
Atmega328 chip. I use this one: the
Arduino
ATmega328 Pinout from HobbyTronics, which is very compact yet does a
good job of including both the mappings to Arduino digital and analog
pins and the functions like RX, TX, MOSI and MISO you'll need for
programming the chip.
Load Programs with FTDI
An FTDI board is a little trickier to wire than an ISP, but it's
less risky because it loads the code the same way an Arduino would,
so you don't overwrite the bootloader and you
can still put your chip back into an Arduino if things go wrong.
So let's start with FTDI.
I use an
Adafruit "FTDI Friend", but there are lots of similar
FTDI boards from Sparkfun
and other vendors. They have six outputs,
but you'll need only five of those. Referring to your Atmega pinout,
wire up power, ground, TX, and RX. For some FTDI boards you may need
pullup resistors on the TX and RX lines; I didn't need them.
Now you have four pins connected.
Wiring the reset line is more complicated because it requires a
0.1μF capacitor. A lot of tutorials don't mention the capacitor,
but it didn't work for me without one.
Connect from RTS on the FTDI board, through the
0.1μF cap, to the RST line.
A 0.1μF capacitor is an electrolytic cap with a positive and a
negative lead, but the few online tutorials that even mention the
capacitor don't bother to say which side is which. I connected the
FTDI friend to the cap's negative lead, and the positive lead to the
Atmega chip, and it worked.
You may also need a pullup on that RST/RTS line: a resistor of
around 10kΩ from RST (pin 1 of the Atmega chip) to the 5V power line.
Note: the Fritzing diagram here shows pullup resistors on RST, TX
and RX. You may not need any of them.
Incidentally, RST stands for "reset", while RTS stands for "Ready To
Send"; they're not meant as anagrams of each other. The remaining pin
on the FTDI friend, CTS, is "Clear To Send" and isn't needed for an
Arduino.
Once the wiring is ready, plug in the FTDI board, check to make sure
Port is set to whatever port the FTDI board registered,
and try uploading a program as if you were uploading to a normal Arduino Uno.
And cross your fingers. If it doesn't work, try fiddling with pullups
and capacitor values.
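As for what to upload, any small sketch will do; a minimal blink along these lines makes a good first test (pin 13 is the usual LED pin, which on the bare chip is physical pin 19, SCK):
void setup() {
  pinMode(13, OUTPUT);      // Arduino digital pin 13 = chip pin 19 (SCK)
}

void loop() {
  digitalWrite(13, HIGH);   // LED on
  delay(500);               // wait half a second
  digitalWrite(13, LOW);    // LED off
  delay(500);
}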
Load Programs with an ISP
An In-System Programmer, or ISP, writes programs straight to the chip,
bypassing (and overwriting) the Arduino bootloader. You can also use
an ISP to burn a new bootloader and reprogram the fuses on your
Arduino, to change parameters like the clock rate. (More on that in Part 2.)
You can use an
Arduino as an ISP, but it's somewhat unreliable and
prone to unexplained errors. A dedicated ISP isn't expensive, is
easier to wire and is more likely to work. A common type of ISP is
called a "USBtinyISP", and you can buy one from vendors like
Sparkfun or
Adafruit,
or search for usbtinyisp on sites like ebay or aliexpress.
ISPs typically use a six-pin connector (2x3). It's not always easy to
figure out which end is which, so use a multimeter in continuity mode
to figure out which pin is ground. Once you're sure, mark your connector
so you'll know which pin is pin 1 (MISO, the pin opposite ground).
Once you have your ISP pins straight, refer to your handy-dandy
Atmega328 pinout and connect power, ground, MOSI, MISO, SCK, and RST
to the appropriate Atmega pins.
All wired up? In the Arduino IDE, set Programmer to your ISP,
for instance, USBtinyISP or Arduino as ISP.
Then use the Upload button to upload sketches.
If you prefer Arduino-mk instead of the IDE, add this to your Makefile:
ISP_PROG = usbtiny
(or whatever ISP you're using). Then type make ispload
instead of make upload
Once you have your FTDI or ISP working, then you can think about making
an even simpler circuit -- without the external clock and its associated
capacitors. But there are a couple of additional tricks to that.
Stay tuned for Part 2.
The Open Age Ratings Service is a simple website that lets you generate some content rating XML for your upstream AppData file.
In the last few months it’s gone from being hardly used to being used multiple times an hour, probably due to the requirement that applications on Flathub need it as part of the review process. After some complaints, I’ve added a ton more explanation to each question and made it easier to use. In particular if you specify that you’re creating metadata for a “non-game” then 80% of the questions get hidden from view.
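For reference, the service’s output is a content_rating block that you paste into your AppData file. A typical (hypothetical) result might look something like this, using real OARS attribute IDs but made-up values:
<content_rating type="oars-1.0">
  <content_attribute id="violence-cartoon">mild</content_attribute>
  <content_attribute id="language-humor">moderate</content_attribute>
</content_rating>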
As part of the relaunch, we now have a proper issue tracker and we’ve already pushed out some minor (API compatible) enhancements which will become OARS v1.1. These include several cultural sensitivity questions such as:
Homosexuality
Prostitution
Adultery
Desecration
Slavery
Violence towards places of worship
The cultural sensitivity questions are a work in progress. If you have any other ideas, or comments, please let me know. Also, before I get internetted-to-death, this is just for advisory purposes, not for filtering. Thanks.
A quick post to tell you that we finally added UTC support to Clocks' and the Shell's World Clocks section. And if you're into it, there's also Anywhere on Earth support.
You will need to have git master versions of libgweather (our cities and timezones database), and gnome-clocks. This feature will land in GNOME 3.28.
Many thanks to Giovanni for coming up with an API he was happy with after I attempted a couple of iterations on one. Enjoy!
There are many different approaches to blending exposures in the various projects, and they can range from extremely detailed and complex to quick and simple.
Today we’re going to look at the latter.
I was recently lucky enough to attend an old friend’s wedding in upstate NY.
Mairi got married!
(For those not familiar with her, she’s the model from An Open Source Portrait as well as A Chiaroscuro Portrait tutorials.)
Mairi’s chiaroscuro portrait.
I had originally planned on celebrating with everyone and wrangling my two kids, so I left my camera gear at home.
Turns out Mairi was hoping that I’d be shooting photos.
Not wanting to disappoint, I quickly secured a kit from a local rental shop.
(Thank goodness for friends new and old to help wrangle a very busy 2 year old.)
During the rehearsal I was experimenting with views to get a feel for the room I’d have and how framing would work out.
One of the shots looked from the audience and right into a late afternoon sun.
My inner nerd kicked in and I thought, “This might be a neat image to use for a tutorial!”.
Exposure Fusion (Mapping)
The idea behind exposure fusion (or mapping) is to extend the dynamic range represented in an image by utilizing more than one exposure of the same subject and choosing relevant bits to fit in the final output.
I say “fit” because usually you are trying to fit in more data than a single image could have captured.
So you choose which parts you want to use to get something that you’ll like.
For example, here we have two images that will be used in this article where the image showing the foreground correctly causes the sky to blow out, while exposing for the sky causes the foreground to go almost black.
By selectively combining these two images, we can get something that might show a larger dynamic range than would have been possible in a single exposure:
This porridge is too hot, this porridge is too cold, but this porridge is just right.
This is one common use case for HDR/EXR imaging (and tonemapping is the term for doing exactly what we’re describing here - squishing data into the viewable range in a way that we like).
In fact, at the end of this article I’ll show how Enfuse handled merging these image exposures (spoiler: pretty darn well).
Exposing
In exposing for the subjects in this image, I used the structure to block the sun from direct view (though there’s still a loss of contrast and flaring).
The straight out of the camera jpg looks like this:
Foreground exposure
This gave me a well exposed foreground and subjects.
I then gave the shutter speed a quick spin to 1⁄1000 (about 4 stops) to get the sky better exposed.
The camera jpg for the sky looked like this:
Sky exposure
In retrospect it probably would have been better to shoot for a 2-stop difference to keep the sky exposed higher in the histogram, but c’est la vie.
It also helps to avoid going too far in the extremes when exposing, so as to avoid making it look too unrealistic (yes - the example above probably skirts that pretty close, but it’s exaggerated to make a good article).
This gives us a nice enough starting point to play with some simple exposure mapping.
Alignment
For this to work properly the images do need to line up as perfectly as possible.
Imperfect alignment can be worked around to some extent, usually determined by how complex your masking will have to be, but the better aligned the images are the easier your job will be.
As usual I have had good luck using the align_image_stack script that’s included as part of Hugin.
This makes short work of getting the images aligned properly for just this sort of work:
/path/to/hugin/align_image_stack -m -a OUT FILE1 FILE2
On Windows, this looks like:
c:\Program Files\Hugin\bin\align_image_stack.exe -m -a OUT FILE1 FILE2
Once it finishes up, you’ll end up with some new files like OUT0001.tif.
These are your aligned images that we’ll now bring into GIMP!
Masking
The heart of fusing these exposures is going to rely entirely on masking.
This is where it usually pays off nicely to take your time and consider carefully an approach that will keep things looking clean and with a natural transition between the exposures.
If this had simple geometry it would be an easy problem to solve, but the inclusion of the trees in the background makes it slightly more complex (but not nearly as bad as hair masking can get).
The foreground and sky are very simple things to mask overall, where we know we may want 100% of the foreground to come from one image, and 100% of the sky from another. This helps simplify things greatly.
Rough mask (temporary - for reference only) where we can see we want all of the sky from one image, and all of the foreground from another.
I tend to keep the darker sky layer on the bottom and the lighter foreground layer above that.
The hard edges of the structure make it an easy masking job there, so the main area of concern here is getting a good blend with the treeline in the background.
There are a couple of approaches we can take to try and get a good blend, so let’s have a look…
Luminosity (Grayscale) Mask
A common approach would be to apply an inverted grayscale mask to the foreground layer.
If there’s a decent amount of contrast between the foreground/sky layers then this is a quick and easy way to get something to use as a base for further work:
Applying this mask yields pretty good looking results right away:
You can also investigate some of the other color channel options to see if there might be something that works better to create a clean mask with.
In GIMP 2.9.x I also found that using Colors > Components > Extract Component using CMYK Key produced another finely separated option.
This makes a nice starting point.
As we said we wanted all of the sky from one exposure and the rest of the image from the other exposure, we can start roughing in the overall mask.
For this simple walkthrough we can make our job a bit easier by using two copies of the foreground layer and selectively blending them over the sky layer.
Why Two Layers?
If you take your foreground layer and start cleaning up the mask for the sky by painting in black, it should be relatively easy.
Until you get down to the treeline.
You can use a soft-edged brush and try to blend in smoothly, but in order to let the sky come through nicely you may find yourself getting more of the dark exposure trees showing through.
This will show as a dark halo on the tops of the trees:
A nice way to adjust the falloff along the tops of the trees is by using a second copy of the foreground layer, and using a gradient on the mask that will blend smoothly from full sky to full foreground along the tops of the trees.
This will ease the dark halo a bit until the transition looks good to you.
You can then modify/update the gradient on the copy until the transition is smooth or to your liking.
At this point my layers would look like this in GIMP:
The results of using a second fore layer with a gradient to help ease the transition:
Top: Original grayscale mask only
Middle: Manual mask painting down to treeline
Bottom: Second layer with gradient mask
When pixel-peeping it may not seem perfect, but in the context of the entire image it’s a nice way to get an easy blend for not too much extra work.
Left: Grayscale mask, Right: final mask with gradient
At this point most of the exposure is blended nicely.
The only place in this particular image I would work some more would be to darken the structure a little bit to lessen the flaring and maybe bring back a little darkness.
This can be accomplished by painting over the top mask.
I used black with a smaller opacity of around 25% to paint over the structure and allow the darker version underneath to show through a bit more:
Left: gradient masked, Right: structure darkened slightly to taste
Enfuse Comparison
I have previously used Enfuse and gotten great results even more quickly.
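If you want to try it yourself, a basic invocation against the aligned images from earlier might look something like this (Enfuse chooses its own blending weights, so there are no masks to paint):
enfuse -o fused.tif OUT0000.tif OUT0001.tif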
For comparison here is the result of running Enfuse against the same images:
I prefer our manually blended result personally, but I could see another future article or post about using the Enfuse blend for areas of complexity and blending the Enfuse output into the final image to help.
Might be interesting.
(I prefer our blended result because Enfuse considered extremely bright areas as candidates for fusion with the other exposure - so the bricks and tent highlights got pushed down automatically.)
Fin
From here any further fiddling with the image is purely for fun.
The two versions of the image have been merged nicely.
If you wanted to adjust the result to not appear quite as extreme you could modify each of the layers to taste.
For instance, you could lighten the sky layer to decrease the extreme range difference between it and the foreground layer.
Where’s the fun in keeping it too realistic though? :)
For reference, here’s my final version after masking with a bit of a Portra tone thrown in for good measure:
Not bad for a relatively quick approach.
Resources
This wouldn’t be complete without some resources and further reading for folks!
I have saved the GIMP 2.9.x .XCF file I used to write this.
It has all of the masks and layers I used to create the final version of the image:
The Distrinet Research Group at KULeuven (where I studied!) recently asked me to speak about “Cloud Native” at one of their R&D Bites sessions. My talk covered Kubernetes, cloud automation and all the cool new things we can do in this brave new cloud native world.
Are you interested in all things Raspberry Pi, or just curious about them?
Come join like-minded people this Thursday at 7pm for the inaugural meeting
of the Los Alamos Raspberry Pi club!
At Los Alamos Makers,
we've had the Coder Dojo for Teens going on for over a year now,
but there haven't been any comparable programs that welcome adults.
Pi club is open to all ages.
The format will be similar to Coder Dojo: no lectures or formal
presentations, just a bunch of people with similar interests.
Bring a project you're working on, see what other people are working
on, ask questions, answer questions, trade ideas and share knowledge.
Bring your own Pi if you like, or try out one of the Pi 3 workstations
Los Alamos Makers has set up. (If you use one of the workstations there,
I recommend bringing a USB stick so you can save your work to take home.)
Although the group is officially for Raspberry Pi hacking, I'm sure
many attendees will be interested in Arduino or other microcontrollers, or
Beaglebones or other tiny Linux computers; conversation and projects
along those lines will be welcome.
Beginners are welcome too. You don't have to own a Pi, know a resistor
from a capacitor, or know anything about programming. I've been asked
a few times about where an adult can learn to program. The Raspberry Pi
was originally introduced as a fun way to teach schoolchildren to
program computers, and it includes programming resources suitable to
all ages and abilities. If you want to learn programming on your own
laptop rather than a Raspberry Pi, we won't turn you away.
Raspberry Pi Club:
Thursdays, 7pm, at Los Alamos Makers, 3540 Orange Street (the old PEEC
location), Suite LV1 (the farthest door from the parking lot -- look
for the "elevated walkway" painted outside the door).
There's a Facebook event:
Raspberry Pi club
on Facebook. We have meetings scheduled for the next few Thursdays:
December 7, 14, and 21, and after that we'll decide based on interest.
Here’s an update for people waiting for news on the ColorHug+ spectrophotometer, and perhaps not the update that you were hoping for. Three things have recently happened, and each of them makes producing the ColorHug+ even harder than it was before:
A few weeks ago I became a father again. Producing the ColorHug and ColorHug2 devices takes a significant amount of time, brain, muscle and love, and I’m still struggling with dividing up my time between being a modern hands-on dad and also a full time job at Red Hat. ColorHug was (and still is) a hobby that got a little out of control, and not something that brings in any significant amount of money. A person spending £300 on a complex device is going to expect at least some level of support, even when I’ve had no sleep and only have half a brain on a Saturday morning.
Brexit has made the GBP currency plunge in value over the last 12 months, which in theory should be good as it will encourage exports. What’s slightly different for me is that 80% of the components for each device are purchased in USD and EUR, and the remaining ones in GBP have risen accordingly with the currency plunge. I have no idea what a post-Brexit Britain looks like, but I think it’s a prudent choice to not “risk” £20k in an investment I’d essentially hope to break even on long term, for fun.
The sensor for the ColorHug+ was going to be based on the bare chip SPARK from OceanOptics. I’ve spent a long time working out all the quirks of the sensor, making it work with a UV and wideband illuminant and working out all the packaging questions. The price of the sensor was always going to be expensive (it was greater than half of the RRP in one component alone, even buying a massive batch) but last month I got an email saying the sensor was going to be discontinued and would no longer be available. This is figuratively and also quite literally back to the drawing board.
I’ve included some photos above to show I’ve not been full of hot air for the last year or so, and to remind everyone that the PCB, 3D light guide model and client software are all in the various ColorHug git repos if you want to have a go at building one yourself (although, buy the sensor quickly…). I’ll still continue selling ColorHug2 devices, and supporting all the existing hardware but this might be the end of the line for ColorHug spectrometer. I’ll keep my eye on all the trade magazines for any new sensor that is inexpensive, reliable and accurate enough for ICC profiles, so all this might just be resurrected in the future, but for the short term this is all on ice. If you want a device right now the X-Rite i1Studio is probably the best of the bunch, although it is sold by Pantone with an RRP of £450. Fair warning: Pantone and free software are not exactly bedfellows, although it does work with ArgyllCMS using a reverse engineered userspace driver that might void your warranty.
I’ll update the website at some point this evening, I’m not sure whether to just post all this or remove the ColorHug+ page completely. Perhaps a sad announcement, but perhaps not one that’s too unexpected considering the lack of updates in the last few months. Sorry to disappoint everybody.
Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!
and the changelog as compared to 2.2.0 can be found below. Some of the fixes might have been backported to the stable 2.2.x series already.
The maintainership of the RawSpeed library was transferred to the darktable project. The work on code cleanup, hardening, modernization, simplification and testing is ongoing.
Well over 2 thousand commits to darktable+rawspeed since 2.2.0
244 pull requests handled
320+ issues closed
Updated user manual is coming soon™
Hell Froze Over
As you might have read on our news post we finally ported darktable to Windows and intend to support it in the future. At the moment it’s still lacking a few features (for example there is not printing support), has a few limitations (tethering requires special drivers to be installed) and comes with its own set of bugs. But overall we are confident that it’s quite usable already and hope you will enjoy it. A very special thanks goes to Peter Budai who finally convinced us to agree to the port and who did most of the work.
The Big Ones
A new module for haze removal
The local contrast module can now be pushed much further, it also got a new local laplacian mode
Add undo support for masks and more intelligent grouping of undo steps
Blending now allows to display individual channels using false colors
darktable now supports loading Fujifilm compressed RAFs
darktable now supports loading floating point HDR DNGs as written by HDRMERGE
We also added channel specific blend modes for Lab and RGB color spaces
The base curve module allows for more control of the exposure fusion feature using the newly added bias slider
The tonecurve module now supports auto colour adjustment in RGB
Add absolute color input as an option to the color look up table module
A new X-Trans demosaicing algorithm, Frequency Domain Chroma, was implemented.
You can now choose from pre-defined scheduling profiles for OpenCL
Speaking of OpenCL, darktable now allows to force-use OpenCL for a specific pixelpipe
Xmp sidecar files are no longer written to disk when the content didn’t actually change. That mostly helps with network storage and backup systems that use files’ time stamps
New Features And Changes
Show a dialog window that tells when locking the database/library failed
Don’t shade the whole region on the map when searching for a location. Instead just draw a border around it.
Also in map mode: Clear the search list and map indicators when resetting the search module.
With OsmGPSMap newer than version 1.1.0 (i.e., anything released after that OsmGPSMap version) the map will show copyright info.
Running jobs with a progressbar (mostly import and export) will show that progress bar ontop the window entry in your task bar – if the system supports it. It should work on GNOME, KDE and Windows at least.
Add bash like string replacement for variables (export, watermark, session settings).
Add a preferences option to ask before removing empty dirs
The “colorbalance” module got a lot faster, thanks to SSE optimized code
Make gradient sliders a little more colorful and use them in the white balance module
Make PNG compression level used for exporting configurable
On OSX, load single images from command line or via drag&drop in darkroom mode
Add an option to omit the intermediate tag hierarchy in exported files and only add the last level
In the watermark module, sort the list of SVG files and omit the file extension
Support XYZ as a proofing profile
Local contrast now got a new slider to set the midtone range
darktable got two new helper scripts (those are not installed by default, grab them from the sources): One to purge thumbnails that no longer have an associated image in the database, and a second script that uses inotify to watch a folder for new files to open them in a running darktable instance.
In the curve editors of base curve and tone curve you can now delete nodes with a right click and see coordinates of nodes while editing. Note that you can use keyboard modifiers ctrl and shift to change the precision of your changes
Creating a new instance of a module can now be done with a quick click of the middle mouse button on the multi-instance icon
New darktable installations on computers with more than 8 Gb of memory will now by default use half of that per module
Several background colors and the brush color are now configurable in the CSS
Some new cameras can bump the ISO level to insane highs. We try to follow as good as we can by no longer limiting it to 51200 in the GUI
Base curve and the highlights module now support multiple instances and use blending and masks
Having the 1 key toggle between 1 and 0 stars wasn’t very popular with many people. You can disable that extra feature and have it behave like the other rating shortcuts now
You can decide if you want to be asked before resetting the history stacks of images from the lighttable
The grain module was slightly changed to have a more pleasing, photographic-paper like appearance
Using the color look up table module you can now convert your images to monochrome, honoring the Helmholtz-Kohlrausch effect
Some more small improvements were made
Bugfixes
Fix the problem with rating images by accident when moving the mouse while typing an image size in the export module
Fix several oddities in folder and tag mode of the collect module.
Print mode’s color profile settings no longer interact with the export module
Update the style lists when importing a style
Fix some bugs with multiple module instances used in a style
On OSX only the main window should be fullscreen, not the popups
Some speedups with VERY big libraries or A LOT OF tags
Significantly speed up tagging many images
Fix searching locations using OpenStreetMap
Fix partial copies of large files in “import from camera”
Fix a crash in the import dialog when using Lua to add widgets there
Fix some false-positive warnings about another running darktable instance and it having locked the databases
No longer switch to the favourite modules group when duplicating one of its modules
Fix loading of XYZ files
Fix Lab export when the profile was set from the lighttable
Create tmp snapshot files with mode 0600 to stop other people looking at them
Fix several bugs with Wayland. However, there are still issues, so darktable will prefer XWayland
Google deprecated the Picasa Web API so it’s no longer possible to create G+ albums
Fix the default for sliders with target not being “red” in the channel mixer
Fix the removing of directories
Make the escape key cancel history dialogs
Block keyboard accels when editing camera controls
Properly delete XMP sidecars
Make sure that the rating set in darktable is used for the exported file, not something set inside the raw file
Don’t re-write all XMP files when detaching a tag
Sync XMPs when a tag is removed from the database
Sync XMPs after a tag is attached/detached via the Lua API
Many more bugs got fixed
Lua
darktable now uses Lua 5.3. The bundled copy got updated accordingly
Add dt.print_log. It’s like print_error but without the ERROR prefix
Reorder callback parameters for intermediate export image: add the actual image to the parameters of the event
Call lua post-import-image event synchronously
Add darktable.configuration.running_os to detect the OS darktable is running on
New widget type: section_label, adds a label which looks like a section change
Changed Dependencies
CMake 3.1 is now required.
In order to compile darktable you now need at least gcc 4.9 or clang 3.4, and gcc 5.0 or newer is highly recommended.
ZLIB is now required for the DNG Deflate compressed raw support.
darktable now uses Lua 5.3
Camera support, compared to 2.2.0
Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!
Having written a basic blink program in C for my ATtiny85 with a
USBtinyISP (Part 1), I wanted to use it to control other types of
hardware. That meant I wanted to be able to use Arduino libraries.
In "Additional Boards Manager" near the bottom, paste this: https://raw.githubusercontent.com/damellis/attiny/ide-1.6.x-boards-manager/package_damellis_attiny_index.json and click OK
Tools->Boards->Board Manager...
Find the ATTiny entry, click on it, and click Install
Back in the main Arduino IDE, Tools->Boards should now have a
couple of Attiny entries. Choose the one that corresponds to your
ATTiny; then, under Processor, narrow it down further.
In Tools->Programmer, choose the programmer you're using
(for example, USBtinyISP).
Now you should be able to Verify and Upload a blink sketch
just like you would to a regular Arduino, subject to the pin limitations
of the ATTiny.
That worked for blink. But it didn't work when I started adding libraries.
Since the command-line was what I really cared about, I moved on rather
than worrying about libraries just yet.
ATtiny with Arduino-Makefile
For most of my Arduino development I use an excellent package called
Arduino-Makefile.
There's a Debian package called arduino-mk that works fine for normal
Arduinos, but for ATtiny, there have been changes, so use the version
from git.
A minimal blink Makefile looks like this:
BOARD_TAG = uno
include /usr/share/arduino/Arduino.mk
It assumes that if you're in a directory called blink, it
should compile a file called blink.ino. It will also build
any additional .cpp files it finds there. make upload
uploads the code to a normal Arduino.
With ATtiny it gets quite a bit more complicated.
The key is that you have to specify an alternate core:
ALTERNATE_CORE = ATTinyCore
But there are lots of different ATtiny cores, they're all different,
and they each need a different set of specifiers like BOARD_TAG in
the Makefile. Arduino-Makefile comes with an example, but it isn't
very useful since it doesn't say where to get the cores that correspond
with the various samples. I ended up filing a documentation bug and
exchanging some back-and-forth with the maintainer of the package,
Simon John, and here's what I learned.
First: as I mentioned earlier, you should use the latest git version
of Arduino-Makefile. The version in Debian is a little older and some
things have changed; while the older version can be made to work with
ATtiny, the recipes will be different from the ones here.
Second, the recipes for each core will be different depending on which
version of the Arduino software you're using. Simon
says he sticks to version 1.0.5 when he uses ATtinys, because newer
versions don't work as well. That may be smart (certainly he has a lot
more experience than I do), but I'm always hesitant to rely on
software that old, so I wanted to get things working with the latest
Arduino, 1.8.5, if I could, so that's what the recipes here will
reflect.
Third, as mentioned in Part 1, clock rate should be 1MHz, not 8MHz
as you'll see in a lot of web examples, so:
F_CPU = 1000000L
Fourth, uploading sketches. As mentioned in the last article, I'm using
a USBtinyISP. For that, I use ISP_PROG = usbtiny and
sketches are uploaded by typing make ispload rather than
the usual make upload. Change that if you're using a
different programmer.
With those preliminaries over:
I ended up getting two different cores working,
and there were two that didn't work.
Install the cores in subdirectories in
your ~/sketchbook/hardware directory. You can have multiple
cores installed at once if you want to test different cores.
Here are the recipes.
CodingBadly's arduino-tiny
This is the core that Simon says he prefers, so it's the one I'm going
to use as my default. It's at
https://github.com/Coding-Badly/arduino-tiny.git,
and also a version on Google Code. (Neither one has been updated since 2013.)
git clone it into your sketchbook/hardware.
Then either cp 'Prospective Boards.txt' boards.txt
or create a new boards.txt and copy from 'Prospective Boards.txt'
all the boards you're interested in (for instance, all the attiny85
definitions if attiny85 is the only attiny board you have).
Then your Makefile should look something like this:
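(ALTERNATE_CORE names the directory you cloned under sketchbook/hardware, and BOARD_TAG must match one of the entries you copied into boards.txt; attiny85at1 is just a plausible example for a 1 MHz ATtiny85, so double-check it against your boards.txt.)
ARDUINO_DIR = /usr/share/arduino
ALTERNATE_CORE = arduino-tiny
BOARD_TAG = attiny85at1
F_CPU = 1000000L
ISP_PROG = usbtiny
include /usr/share/arduino/Arduino.mk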
If your Arduino software is installed in /usr/share/arduino you can
omit the first line.
Now copy blink.ino -- of course, you'll have to change pin 13
to one of the ATtiny's I/O pins (0 through 4 on an ATtiny85, since
pin 5 doubles as reset) -- and try make and make ispload.
SpenceKonde's ATTinyCore
This core is at https://github.com/SpenceKonde/ATTinyCore.git.
I didn't need to copy boards.txt or make any other changes,
just clone it under sketchbook/hardware and then use this Makefile:
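(Same caveats as before: ALTERNATE_CORE is the clone's directory name, and the BOARD_TAG/BOARD_SUB pair shown here is the likely one for an ATtiny85, but verify it against the boards.txt inside the clone.)
ARDUINO_DIR = /usr/share/arduino
ALTERNATE_CORE = ATTinyCore
BOARD_TAG = attinyx5
BOARD_SUB = 85
F_CPU = 1000000L
ISP_PROG = usbtiny
include /usr/share/arduino/Arduino.mk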
There are plenty of other ATtiny cores around. Here are two that
apparently worked once, but I couldn't get them working with the
current version of the tools. I'll omit links to them to try to
reduce the probability of search engines linking to them rather
than to the more up-to-date cores.
Damellis's attiny (you may see this referred to as HLT after the
domain name, "Highlowtech"), on GitHub as
damellis/attiny,
was the first core I got working with Debian's older version of
arduino-mk and Arduino 1.8.4. But when I upgraded to the latest
Arduino-Makefile and Arduino 1.8.5, it no longer worked. Ironic since
an older version of it was the one used in most of the tutorials I
found for using ATtiny with the Arduino IDE.
Simon says this core is buggy: in particular, there are problems with
software PWM.
I also tried rexxar-tc's arduino-tiny.core2 (also on GitHub).
I couldn't get it to work with any of the Makefile or Arduino
versions I tried, though it may have worked with Arduino 1.0.
With two working cores, I can get an LED to blink.
But libraries are the point of using the Arduino framework ...
and as I tried to move beyond blink.ino, I found that
not all Arduino libraries work with ATtiny.
In particular, Wire, used for protocols like I2C to talk to all
kinds of useful chips, doesn't work without substantial revisions.
But that's a whole separate topic. Stay tuned.
I’ve spent the last couple of evenings designing an OpenHardware USB 2.0 1-port hub, tentatively called the ColorHub (although better ideas are certainly welcome). But back up a bit: what’s the point of a 1-port hub?
The finished device is internally a Cypress 2-port hub, with a PIC microcontroller hard-wired onto one of the ports (the “fixed” port). The microcontroller can control the hub and the USB power of the other, “removable” port, so you can simulate an unplug, replug or hub reset using a few simple commands. This also allows you to forcefully reset hardware that’s not responding, and to add hardware tests for enumeration and device removal. With the device it is trivial to write a script to replug a device 5000 times over one evening, or to re-connect a USB device that’s not responding for whatever reason. The smart hub also reports when USB devices are connected to the downstream port, and even when they have not enumerated correctly. There could be commands to get the status of that, and also to optionally wait until those things have happened.
The other killer feature for me is that the microcontroller has lots of spare analog and digital IO, and with two included solid-state MOSFET relays you can wire up two physical switches so that no user interaction is required. This means you can test hardware that has these kind of requirements:
Remove USB plug
Press and hold buttons A&B
Insert USB plug
Release buttons A&B
It would be fairly trivial to wire up the microcontroller ADC to get a rough power consumption figure, or to set some custom hub descriptors; it would be completely open and “hackable” like the ColorHug.
I’ve made just one prototype and am using it quite nicely in the fwupd self tests, but talking to others yesterday this seems the kind of device that would be useful for other people doing similar QA activities. I need to build another 2 for the other devices requiring manual button-presses in the fwupd hardware cardboard-box-tests and it’s exactly the same price to order 50 tiny PCBs as 5.
The dangerous question: Would anyone else be interested in purchasing this kind of thing? The price would be in the £50-60 range, so certainly not cheap, but this is really the cost of ultra-small batches of moderately complicated electronics these days. If you’re interested, send me an email (richard_at_hughsie_dot_com) and depending on demand I’ll either design some nice custom PCBs or just hack together two more prototypes for my own use. Please also tell me if something like this already exists: if so I can save some time and just buy something that someone else has built. Comments welcome.
Thanks to the GNOME Foundation, a handful of designers and developers got together last week in London to refocus on the core element of the GNOME experience, the shell. Allan and Cassidy have already summed up everything in their well written blog posts, so I’d like to point to some pretty pictures and the video above.
Stay tuned for some higher fidelity proposals in the areas of app switching & launching and the lock/login experience.
With all the turmoil the project experienced in 2017 it looked for a while as if we wouldn’t have a face to face meeting this year. But that’s not good for a project working on its fourth major release! We knew we really had to sit together, and finally managed to have a smaller than usual, but very productive, sprint in Deventer, the Netherlands, from Thursday the 23rd to Sunday the 26th.
Not having been together since August 2016, we had an agenda stuffed with an enormous backlog of items. And since we’ve been working on new code for a long time now, our bug tracker was also slowly dying from elephantiasis of the database.
Let’s do the bug tracker first: we managed to close over 120 bugs! Not every bug that gets closed gets closed with a fix: the problem is that most bug reports are actually help requests from users, and many of the rest are duplicates, or requests for features that are irrelevant for Krita. Still, while triaging the list of open and unconfirmed bug reports, we managed to fix more than a dozen real bugs.
Here’s the proof:
Now let’s go to the other topics that were discussed…
Krita 4.0
We finalized the set of features that we want to complete for 4.0, and came up with a schedule. Originally, we wanted to release 4.0 this year, but that’s completely impossible…
We still want to add:
Lazy brush for easy line art colorization (mostly works, needs some testing and bug fixing)
Stacked brushes, redux. Our first implementation was ready in 2016, but we discarded it and will have to redo the work in a different way
The new text tool. This is going to be a massive simplification of our original plans… While Boudewijn is still working on making vertical text work in Qt, that’s not going to be ready. The tool itself will be simple, easy to use for the main use-cases for Krita.
There are some missing bits in features that are mostly done, like the Python plugin manager, or finishing up the brush editor redesign.
We are also still facing some problems on some platforms: on Linux, we don’t have a working script to package Python and GStreamer in the appimages, and the way we package G’Mic is wrong. On OSX, we don’t have a working G’Mic at all, and no support for PDF import. We do need help with that!
The current release schedule looks like this:
31st December: Freeze. No new strings, no new features accepted
January 3rd: first Beta 1 test builds
January 10th: Krita 4.0 Beta 1
From now on, weekly development builds, if we have the resources to automate that.
From now on, bug fixing, bug fixing, bug fixing.
March 7th: first Krita 4.0 release candidate
March 14th: Krita 4.0
Right now, the focus is on releasing Krita 4.0, which means that currently no new Krita 3.3.x bug fix releases are planned, even though there are plenty of fixes that can be backported and should be released.
HDR Support
We’ve been approached about supporting HDR screens in Krita, which is different from the existing 10 bits/channel screens that Krita has supported on some systems for a long time. But even though we can get access to the hardware, we don’t have the time to work on it, and the API’s seem to be exclusively Windows-only.
Get Hot New Stuff
This was a 2017 Google Summer of Code project. We had an almost working implementation, but something changed server-side which means that right now nothing works. It turns out to be pretty hard to figure out what’s up, and that means it’s unlikely that Krita 4.0 will have a dialog for downloading resources from share.krita.org.
Python API
Over the summer, we have gained a lot of experience with our Python API, also thanks to our Google Summer of Code student Eliakin. We found some places where the current API needs to be expanded and decided to do just that…
Google Summer of Code
Krita has participated in all editions of Google Summer of Code, and many of the current developers cut their teeth on Krita’s codebase during GSOC. We discussed how we could improve our participation, and how we could attract the kind of students who stay around after the project ends. One problem there is the new rules, which specify that a student can participate only twice. In our experience, it is much better for retention if students can participate during their entire university career.
We also decided to raise the bar a bit. We often see candidates who fulfill the current requirement of three patches with three one-liner patches. We’ll create a set of tasks that have a rating (1 to 3), and only candidates who have reached at least 6 points of completed tasks will be acceptable. We will also test the candidates on their ability to navigate the codebase by asking them to make a diagram of a particular subsystem.
Telemetry
The telemetry Summer of Code project has resulted in working code and a working server, but was not far enough to be mergeable to Krita. We are still interested in learning what we can about the way Krita is actually used, though. The student is interested in doing his bachelor’s thesis on this subject, too.
Contributor Guide
Currently, information about contributing to Krita is scattered all over our website, our manual, the KDE community wiki, the source code repository and external websites. Jouni has created an outline for a contributor manual that should be part of our manual website. We’ve even already started on some parts, like the bug tracker howto!
Hello, I’m Radian. I’m a digital artist from Russia. Honestly, I don’t know what else to say here
Do you paint professionally, as a hobby artist, or both?
I’m not working professionally yet but I’m aiming for it! I want to find work in-house because that will speed up my progress, but there is like only one serious company in my city.
What genre(s) do you work in?
I like to make neat atmospheric illustrations, usually it’s fantasy inspired works. Actually, I care more about making it look cool so I can paint whatever I like. Except cityscapes and street views. I hate those.
Whose work inspires you most — who are your role models as an artist?
I think I can name two – Mike Azevedo and Craig Mullins. I like Mike’s colors and light work so much, he definitely knows how to make it awesome. He was one of the artists who inspired me a lot from the start of my art journey.
Craig – digital art pioneer who managed to keep progressing and experimenting for so many years. I’d say I like his “philosophy”, his way to do things, his way to think.
How and when did you get to try digital painting for the first time?
About 3 years ago, I tried to make a drawing with vector tools and mouse
What makes you choose digital over traditional painting?
Colors, all of them! It’s easy to mess with, easy to experiment. I like good traditional art, I like its aesthetics, but I don’t like working with traditional media, mixing colors, cleaning brushes and such.
How did you find out about Krita?
From a comment on some entertainment site. It was something like “hey, there is also Krita, it can do the same things as SAI and even more, plus it’s free”.
What was your first impression?
“Whoa, so many buttons! I’m gonna press them all”
What do you love about Krita?
Someone can say that they like blending brushes or a lot of tools or something else. But I’ll say Krita is extremely comfy. It’s very logical (in most areas) and it has many little but very handy things.
What do you think needs improvement in Krita? Is there anything that really annoys you?
There are some things that slightly annoy me but those are either very hard to fix or super small and not even worth ranting about. Sometimes both. Improvements? I’m sort of an idea guy so I can write a kilometer-long text about what can be improved. Though I can’t say anything needs to be improved in Krita. Vectors and text tools aren’t good but they’re already in development.
What sets Krita apart from the other tools that you use?
The best balance between comfort and functionality. There are a few cool Photoshop things that I’d like to have in Krita but if I start Photoshop I
can count a lot of things there that I’d like to get from Krita. It’s just hard to use for me. Any other program I tried could win one or two rounds, but Krita always had more. More functions, more tricks for artists, more little handy things and more possibilities. Also I think Python scripting will add a new level of comfort that will never be defeated.
If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
I tend to hate any of my artwork if it is more than 1-3 months old but there are a couple of exceptions. The Kiki painting I made for the artbook “Made with Krita” is one of them. I used a bunch of new tricks in here and probably made a few good choices by accident.
What techniques and brushes did you use in it?
I used my own brush pack, many brushes “inspired” by some Photoshop brushpack from pro artists. There wasn’t a single brush that satisfied me so I had to learn brush creation magic
As I said, I like Mullins’s philosophy: more looking, more thinking, less painting. I usually try to do as much as I can with as few strokes as I can. But I like to experiment as well. If there are so many techniques, why should I use only one? Try everything, break all rules! Learn from mistakes and especially learn from successful experiments.
It’s kinda funny to see how such a small team makes such big progress in development while Photoshop stagnates for years with a big company full of professionals.
It turns out several things have changed under Raspbian Stretch.
Here's the abbreviated procedure for Stretch:
Install LIRC
$ sudo apt-get install lirc
Enable the LIRC Overlay
Edit /boot/config.txt as root, look for
this line and uncomment it:
# Uncomment this to enable the lirc-rpi module
dtoverlay=lirc-rpi
Or if you prefer to use a pin other than 18,
change the pin assignment like this:
# Uncomment this to enable the lirc-rpi module
dtoverlay=lirc-rpi,gpio_in_pin=25,gpio_out_pin=17
See /boot/overlays/README for more information on overlays.
Fix the LIRC Options
Edit /etc/lirc/lirc_options.conf,
comment out the existing driver and device lines,
and add:
driver = default
device = /dev/lirc0
Reboot and stop the daemon
Reboot the Pi.
Now a bunch of LIRC daemons will be running. You don't want them
while you're configuring, and if you're eventually going to be
reading button presses from Python, you don't want them at all.
But you need them if you want to read from the
/var/run/lirc/lircd socket.
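On a systemd-based Raspbian, stopping them looks something like this (the exact unit names vary between lirc packagings, so check systemctl list-units | grep lirc if these don't match):
$ sudo systemctl stop lircd.socket lircd.service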
Use mode2 to verify it sees the buttons
With the daemons not running, a program called mode2
can verify that your device's buttons are being seen at all.
I have no idea why it's named that, or what Mode 1 is.
mode2 -d /dev/lirc0
You should see lots of output. If you don't, double-check your wiring
and everything else you've done up to now.
Set up an lircd.conf
Here's where it gets difficult. On Jessie, you could run
irrecord -d /dev/lirc0 ~/lircd.conf
as described in my
earlier article.
However, that doesn't work on stretch. There's apparently a
bug in the
irrecord in stretch that makes it generate a file that doesn't work.
If you try it and it doesn't work, run
tail -f /var/log/messages | grep lirc
and you may see Info: Cannot configure the rc device for /dev/lirc0
and when you press buttons you'll see Notice: repeat code without
last_code received but you won't get any keys.
If you have a working lirc setup from a Jessie machine, try it first.
If it doesn't work, there's a script you can try that converts older
lirc conf files to a newer format. The safest way to try it is to
copy (with cp -a) the whole /etc/lirc directory to a local
directory and run:
/usr/share/lirc/lirc-old2new your-local-copy
Or if you feel brave, back up /etc/lirc and run
sudo /usr/share/lirc/lirc-old2new with no arguments.
Either way, you should get an lirc.conf that has a
chance of working with stretch.
If you don't have a working Jessie config, you're in trouble.
You might be able to edit the one from irrecord to make it work.
Here's the first part of my
working Jessie lircd.conf:
begin remote
name /home/pi/lircd.conf
bits 16
flags SPACE_ENC|CONST_LENGTH
eps 30
aeps 100
header 9117 4494
one 569 1703
zero 569 568
ptrail 575
repeat 9110 2225
pre_data_bits 16
pre_data 0xFD
gap 108337
toggle_bit_mask 0x0
begin codes
KEY_POWER 0x00FF
KEY_VOLUMEUP 0x807F
KEY_STOP 0x40BF
KEY_BACK 0x20DF
KEY_PLAYPAUSE 0xA05F
KEY_FORWARD 0x609F
KEY_DOWN 0x10EF
and here's the corresponding part of the nonworking one generated on Stretch:
begin remote
name DingMai
bits 32
flags SPACE_ENC|CONST_LENGTH
eps 30
aeps 100
header 9117 4494
one 569 1703
zero 569 568
ptrail 575
repeat 9110 2225
gap 108337
toggle_bit_mask 0x0
frequency 38000
begin codes
KEY_POWER 0x00FD00FF 0xBED8F1BC
KEY_VOLUMEUP 0x00FD807F 0xBED8F1BC
KEY_STOP 0x00FD40BF 0xBED8F1BC
KEY_BACK 0x00FD20DF 0xBED8F1BC
KEY_PLAYPAUSE 0x00FDA05F 0xBED8F1BC
KEY_FORWARD 0x00FD609F 0xBED8F1BC
KEY_DOWN 0x00FD10EF 0xBED8F1BC
It looks like setting bits to 16 and then using the second quartet
from each key might work. So try that if you're stuck.
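Spelled out, that means editing the generated file into something like this (untested guesswork; the pre_data lines are borrowed from the working Jessie file so the 0xFD prefix isn't lost):
bits 16
pre_data_bits 16
pre_data 0xFD
...
KEY_POWER 0x00FF
KEY_VOLUMEUP 0x807F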
Once you get irw working, you're home free. The Python modules
probably still won't do anything useful, but you can use my
pyirw.py
script as a model for a simple way to read keys from the lirc daemon.
In case you hit problems beyond what I saw, I found
this
discussion useful, which links to a complete
GitHub
gist of instructions for setting up lirc on Stretch.
Those instructions have a couple of extra steps involving module loading
that it turned out I didn't need, but on the other hand
they don't address the problems I saw with irrecord.
It looks like lirc configuration is a black art, not a science.
See what works for you. Good luck!
“Many were sorely neglected and not properly fed, clothed, or housed. Others suffered physical, psychological, and sexual abuse. All were deprived of the love and care of their families, of their parents, and of their communities. These are the hard truths that are part of Canada’s history.”
“Two primary objectives of the residential school system were to remove and isolate children from the influence of their homes, families, traditions and cultures, and to assimilate them into the dominant culture. These objectives were based on the assumption Aboriginal cultures and spiritual beliefs were inferior and unequal. Indeed, some sought, as it was infamously said, “to kill the Indian in the child.” Today, we recognize that this policy of assimilation was wrong, has caused great harm, and has no place in our country.”
It may just be words, but it is important that these issues are acknowledged by the highest office in our country.
This is becoming a sort of tradition for me to post something giving thanks around this holiday.
I think it’s because this community has become such a large part of my life (even if I don’t have nearly as much time to spend on it as I’d like).
Also, I think it helps to remind ourselves once in a while of the good things that happen to us. So in that spirit…
I want to start things off by acknowledging those that go the extra mile and help offset the costs of the infrastructure to keep this crazy ship afloat (sorry, I’m an ocean engineer by training and a sailor - so nautical metaphors abound!).
Once again the amazing Dimitrios Psychogios has graciously covered our server expenses (and then some) for another full year.
On behalf of the community, and particularly myself, thank you so much!
Your generosity will cover infrastructure costs for the year and give us room to grow as the community does.
We also have some awesome folks who support us through monthly donations (which are nice because we can plan better if we need to). Together they cover the costs of data storage + transfer in/out of Amazon AWS S3 storage (basically the storage and transfer of all of the attachments and files in the forums).
So thank you, you cool froods, you really know where your towels are:
Thank you all!
If you happen to see any of these great folks around the forum consider taking a moment to thank them for their generosity!
If you’d like to join them in supporting the site financially, check out the support page.
The community has just been amazing, and we’ve seen nice growth this past year.
Since the end of August we’ve seen about a 50% increase in weekly sessions on discuss.
We’re currently hovering around 2,500 daily pageviews on the forums:
We’ve added almost 950 new users, or almost 3 new users every day!
I figure @LebedevRI will yell at me if I forget to mention raw.pixls.us (RPU) again.
Back in January @andabata built a new site to help pick up the work of the old rawsamples.ch website to collect raw sample files for testing.
So thank you @andabata and @LebedevRI for your work on this!
A big thank you to everyone who has taken the time to check the site and upload missing (or non-freely licensed) raw files to include!
While we’re talking about RPU, please consider having a look at this post about it on discuss and take a few minutes to see if you might be able to contribute by providing raw samples that we are missing or need (see the post for more details).
If you don’t have something we need, please consider sharing the post on social media to help us raise awareness of RPU!
Thank you!
If you’re not aware of it, one of the things we try to do here besides running the site and forum is to assist projects with websites and design work if they want it.
Earlier this year the digiKam team needed to migrate their old Drupal website to something more modern (and secure) and @paperdigits figured, “why not”?
So we rolled up our sleeves and got them set up with a newly designed static website built using Hugo (which was completely new to me).
We were also able to manage their comments on the website for them by embedding topics from right here on discuss.
This way their users can still own their comments and we can manage spam and moderate things for them.
The best part, though, is the addition of their users and knowledge to the community!
I want to personally take a moment to thank @darix for all the work he does keeping things running smoothly here.
If you don’t see him, it means all the work he’s doing is paying off.
I speak with him daily and see firsthand the great work he’s doing to make sure all of us have a nice place to call home.
Thank you so much, @darix!
As usual @paperdigits (https://silentumbrella.com) also has a great attitude and pro-active approach to the community which I am super thankful for.
He also does things that aren’t always visible, but are essential to keeping things running smoothly, like moderating the forum, checking the health of sites we are helping to manage, and writing/editing posts.
I can’t stress enough how much it helps to keep your interest and spirits engaged in the community when you have someone else around who’s so positive and helpful. Thank you so much, @paperdigits!
At the end of the day this is a community, and its vibrancy and health are a direct result of all of you, its members.
So above all else this is by far the thing I am most thankful for - getting to meet, learn, and interact with all of you.
improving contrast with the local laplacian filter
sometimes difficult lighting situations arise which, when taking
photographs, result in unappealing pictures.
for instance very uniform lighting on a cloudy day may give dull
results, while very contrasty illumination (such as back lit) may
require to compress the contrast to embrace both highlights and
shadows in the limited dynamic range of the output device.
refer to the following two shots as examples:
input + high contrast b/w
hdr + compressed
many options to achieve this exist in literature and many of them are
implemented in darktable already.
this post is about yet another approach to this which turned out to be
extremely versatile, almost artifact-free, and reasonably fast to compute:
the local laplacian pyramid.
local contrast with the local laplacian pyramid
vanilla laplacian pyramids [0]
are known to be a good tool to blend between two images. for their
applicability to local contrast, not so much.
perhaps surprisingly, following up on exposure fusion (we
blogged about that for darktable before),
a clever way to modify local
contrast based on laplacian pyramids has been devised [1]:
it works by pretty much creating a separate laplacian pyramid for every pixel
in the image, and then selecting coefficients from these based on certain criteria.
process the image n times, mapping it through a curve (see below for examples)
create the n laplacian pyramids that go with the images
merge into a final laplacian pyramid
collapse this output pyramid to create the output image.
this is nicely illustrated in this video exported from halide:
also, as it turns out, the gpu is really good at processing laplacian pyramids.
the opencl port of this turned out to be very useful.
the mapping function
why can we get away with this small fixed number n of processed pyramids?
[2] gives the explanation in sec. 3, which starts off with “if r is band-limited”
(where r is the family of curves which is applied to the input image).
now if you look at the left image (which is taken from fig. 6 in [1]),
it has clear kinks where the center part joins the straight lines.
not band limited at all. don’t use this curve at home. it will produce
random aliasing when used with the fast local laplacian code.
darktable uses what is shown on the right instead: the contrast-s curve in the center
part is modelled by a derivative of gaussian (with infinite support), which is added
to the straight lines on either side, which are blended over using a quadratic
bezier curve. you can
look up the precise code if you’re interested.
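to make the shape concrete, here is a tiny c sketch of such a curve (not the actual darktable code, just the derivative-of-gaussian idea, leaving out the bezier blending at the transitions):
#include <math.h>

/* remap brightness x around a reference value g: identity plus a
 * derivative-of-gaussian bump of width sigma. detail > 0 exaggerates
 * local contrast around g, detail < 0 flattens it. */
static float remap(float x, float g, float sigma, float detail)
{
  const float d = x - g;
  return x + detail * d * expf(-d * d / (2.0f * sigma * sigma));
}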
now changing the contrast-s curve in the middle changes local contrast, that’s great.
looking at the sides of this curve, we would immediately want to use it for shadow lifting
and highlight compression too. in fact that’s possible (see following section with images
and accompanying curves). unfortunately some of the proposed optimisations in the original
papers can’t be applied any more. for the shadow/highlight use case, the pyramid needs
to be constructed all the way all the time, we can’t stop after three levels (or else
the shadow lifting would depend on the scale of the image).
this means darktable will always build the full pyramids, making this a little slower,
but yielding best results.
the darktable ui
you can find our implementation of this filter in the local contrast module, when
switched to local laplacian mode:
it allows you to change detail (or clarity or local contrast), highlight
contrast, and shadow contrast separately. the last slider governs how wide the
central region of the curve is, i.e. what is classified as shadow vs. mid-tone.
in the following, we’ll go through these contrast sliders, showing a couple of
corresponding conversion curves and the resulting effect
on the output image.
in the following there are some comparison images from the previous post on exposure
fusion. these have been processed again with a different approach:
to compress the dynamic range, a very flat base curve has been used which maxes out at 0.5 for input values of 1.0.
as contrast-s curve, the tone curve module has been used, in rgb mode for automatic colour saturation compensation.
the curve was created to match an out of camera raw+jpg pair using darktable-chart which also created a clut for the colour lookup table module.
this approach will leave the image flat and dull (due to the flat base curve). to add back
additional detail, we use the local laplacian module. this turns out to work a lot better
together with a contrast-s tone curve than with a contrast-s base curve, since the base
curve comes before the local contrast module in the pipeline, while the tone curve
comes after. this way, the local contrast module receives more linear input.
in fact, sometimes a contrast-s curve is not necessary at all for hdr input.
exposure fusion shows some nicer behaviour with respect to colour saturation at
times, but sometimes produces halo like artifacts. i find it easier to create
good results with the local laplacian when aiming for a low key look (see the
definition in the darker clouds in the left of the first image). really up
to you and any given input images which method works better.
example history stacks are embedded in the example comparison images.
when increasing contrast, you want to make sure the input isn’t noisy, or else
that noise would be accentuated even more. but even when not increasing
contrast, the local laplacian filter will show structured artifacts for noisy
input. i find this mostly goes away when enabling at least chroma denoising for
instance in the profiled denoising module.
I wrote an article on the LVFS for OpenSource.com. If you’re interested in an overview of how firmware updates work in Linux, and a little history it might be an interesting read.
Linux kernel v4.14 was released this last Sunday, and there’s a bunch of security things I think are interesting:
vmapped kernel stack on arm64
Similar to the same feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field which is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM’s STACK_GUARD_PAGE_LEADING/TRAILING tests.)
One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we’ll see the addition of PaX/grsecurity’s STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This “stack probing” ability will likely also become directly available from the compiler as well.
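To illustrate the VLA concern, here is a hypothetical fragment (not code from the kernel):

#include <stddef.h>

/* if len is attacker-influenced and huge, this single allocation can
   move the stack pointer past the guard page in one step... */
void process(size_t len)
{
    char buf[len];  /* a VLA: an implicit alloca() */
    buf[0] = 0;     /* ...so the first write lands beyond the guard page,
                       in whatever mapping happens to live there */
}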
set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user (where the “to user” really means “within the addr_limit range”).
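Schematically, the bug class looks like this (a made-up fragment with a hypothetical helper, not code from the tree):

mm_segment_t old_fs = get_fs();

set_fs(KERNEL_DS);                   /* addr_limit now includes kernel memory */
ret = frob_via_vfs(file, buf, len);  /* reuse a routine expecting __user pointers */
if (ret < 0)
        return ret;                  /* BUG: error path skips the restore below... */
set_fs(old_fs);                      /* ...so addr_limit stays at KERNEL_DS */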
Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.
SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit.
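The defense itself is tiny; the v4.14 SLUB helper is essentially:

static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
                                 unsigned long ptr_addr)
{
#ifdef CONFIG_SLAB_FREELIST_HARDENED
        /* store the freelist pointer XORed with a per-cache secret
         * and with its own storage address */
        return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
#else
        return ptr;
#endif
}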
Additionally, Alexander Popov implemented a simple double-free defense, similar to the “fasttop” check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.)
setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.
randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.
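For illustration (with hypothetical names), automatic selection now picks up any struct consisting entirely of function pointers, and the kernel's use of designated initializers keeps the code working no matter how the layout is shuffled:

struct frob_ops {                       /* layout randomized at build time */
        int  (*open)(struct frob_dev *dev);
        long (*ioctl)(struct frob_dev *dev, unsigned int cmd);
        void (*release)(struct frob_dev *dev);
};

static const struct frob_ops ops = {
        .open    = frob_open,           /* designated initializers are */
        .ioctl   = frob_ioctl,          /* layout-independent */
        .release = frob_release,
};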
structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable’s address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn’t already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)
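The pattern being fixed looks like this (made-up names):

struct info st;                  /* stack variable, never initialized here */

fill_info(&st);                  /* taking &st silences the compiler's
                                    "used uninitialized" warning, even if
                                    fill_info() only sets some fields */

copy_to_user(ubuf, &st, sizeof(st));  /* any untouched bytes leak stale stack
                                         data; the plugin zero-fills st ahead
                                         of the &st call to close the hole */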
eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn’t support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.
seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter.
Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS.
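Putting those pieces together, here is a minimal userspace sketch (error handling and the architecture check that real filters must perform are omitted):

#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        struct sock_filter filter[] = {
                /* load the syscall number */
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                         offsetof(struct seccomp_data, nr)),
                /* uname(2) is allowed, but logged */
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_uname, 0, 1),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_LOG),
                /* ptrace(2) kills the whole process, not just the thread */
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_ptrace, 0, 1),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
                .len = sizeof(filter) / sizeof(filter[0]),
                .filter = filter,
        };

        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        /* SECCOMP_FILTER_FLAG_LOG marks this filter's non-allow actions
         * as loggable */
        syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
                SECCOMP_FILTER_FLAG_LOG, &prog);
        return 0;
}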
That’s it for now; please let me know if I missed anything. The v4.15 merge window is now open!
Perhaps due to my history with Mozilla and Firefox, I’ve been a happy Firefox user since version 0.6 (14 years ago!?). In recent years, the Chrome browser from Google has become the most commonly used browser, especially among web developers.
Even if Firefox doesn’t regain the market share it once had, these efforts push the other browser vendors forward and generally improve the Web as a platform.
Thanks and congrats to all of the designers, engineers, and other humans who continue to improve Firefox. The update icon looks slick too.
Yes certainly! I’m Lars Pontoppidan; a 36 year old, self-employed programmer, game developer, musician and artist.
I’ve been drawing and painting since I could put my pen to the paper – so about 35 years.
I made my first recognizable painting when I was around 3 or 4 – my mom still has it framed at her house.
I’ve always wanted to end up at some level where I could combine all my skills and hobbies. Somewhere along the way I found out that game development demands a lot of the skills I possess – so 1.5 years ago, I decided to cancel all my contracts with my clients and go for a new path in life as an “indie game developer”. I’ve now found out that it’s probably the worst time I could ever get into indie game development. The bubble has more or less already burst. There are simply too many game releases for the consumers to cope with at the moment. But hey, I’ve tried worse, so it doesn’t really bother me – and I get to make art with Krita!
Do you paint professionally, as a hobby artist, or both?
Both I’d say. I’ve always been creating things on a hobby level – but have also delivered a lot of designs, logos and custom graphics as self-employed. I like the hobby work the most – as there are no deadlines or rules for when the project is done.
What genre(s) do you work in?
Cartooning, Digital painting, Animation and Video game art. All these (and maybe more) blend in when producing a game. I also like painting dark and gloomy pictures once in a while.
I think I’ve mostly done cartoon styled work – but with a grain of realism in it. My own little mixture.
I started out with pencil and paper – moved to the Deluxe Paint series, when I got my first Amiga – and ended up with Krita (which is an absolute delight to work with. Thanks to you guys!). I still occasionally do some sketching with pencil and paper – depending on my mood.
Whose work inspires you most — who are your role models as an artist?
* A list too long for me to compile here, of sci-fi fantasy artists. Peter Elson is the first that comes to mind. These artists, in my opinion, lay the very foundation of what’s (supposedly) possible with human technology – and currently, the only possibility to get a glimpse of how life might look like other places in the vast universe that surrounds us. It’s mind blowing how they come up with all the alien designs they do.
* Salvador Dalí – It’s hard to find the right words for his creations – which, I think, is why his works speak to me.
* “vergvoktre” He’s made some really dark, twisted and creepy creations that somehow get under my skin.
How and when did you get to try digital painting for the first time?
The very first digital painting program I’ve ever tried was KoalaPainter for the Commodore 64. I had nothing but a joystick and made, if I recall correctly, a smiley face in black and white.
Thankfully my Amiga 500 came with a copy of Deluxe Paint IV, a two-button mouse and the luxury of a 256+ color palette.
What makes you choose digital over traditional painting?
The glorious “Undo” buffer. I mean… It’s just magic. Especially in the first part of the day (before the first two cups of coffee) when your hand just won’t draw perfect circles, nor any straight lines.
How did you find out about Krita?
I read an article about the Calligra office suite online. It described how Calligra compared to Open Office. I eventually installed it to see how it compared to Open Office and boom there was Krita as part of the package. This was my first encounter – unfortunately it ended up with an uninstall – because of stability issues with the Calligra suite in general.
What was your first impression?
The first impression was actually really good – unfortunately it ended up a bit in the shadows of the Calligra suite’s combined impression. This wasn’t so positive after a few segfaults in the different applications. Luckily I tried Krita later when it entered the Qt5 based versions. I haven’t looked back since.
What do you love about Krita?
The brush engines and the “Layers” docker.
The brushes, and most of the default settings for them, just feel right. Also the many options to tweak the brushes are really awesome.
The layers docker was actually what gave me the best impression of the program – you had working group layers – and you could give any layer the same names! None of the graphic creation applications I used a few years back had these basic, fundamental features done right (Inkscape and GIMP – I’m looking at you). Krita’s layers didn’t feel somewhat broken, hacked-on and had no naming scheme limitations. A small thing that has made a big difference to me.
What do you think needs improvement in Krita? Is there anything that really annoys you?
Uhm… I was going to write ‘speed’ – but everybody is screaming for more of that already. I know how the developers are doing their best to get more juice.
Some great overall stability would be nice. I’ve only ever had 2 or 3 crashes with GIMP over a long period of time – the count is a bit higher with Krita – on a shorter time scale.
My biggest feature request would be: Cut’n’paste functionality through multiple layers, that also pastes as separate layers. This would greatly improve my workflow. I’ve always worked with a group layer containing separate layers for outline, color, texture, shadow etc. – on each e.g. movable part in a character rig. So I would really benefit from a (selection based) cut’n’paste that could cut through all the selected layers – and paste all these separate selection+layers elsewhere in the layer tree.
What sets Krita apart from the other tools that you use?
I find that most of Krita’s tools actually do what you expect them to do – without any weird limitations or special cases. Plus the different brushes, brush engines and all the flexibility to tweak them, are real killer features.
The non-destructive masks (Transparency, Filter and Transform) are also on my list of favourite features. I use these layer types a lot when creating game art – to make them blend in better with the game backgrounds.
And maybe the single most important thing: it’s free and open source. So I’m quite certain I will be able to open up my old Krita files many years into the future.
… and speaking of the future; I really look forward to getting my hands dirty with the Python scripting API.
If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
It would have to be the opening scene of my upcoming 2D game “non”. It’s using a great variety of Krita’s really awesome and powerful features. The scenes in the game features full day and night cycles where all the lighting and shadows change dynamically – this makes it especially hard to get beautiful painted scenes in all the states each scene has between day and night. Krita’s tool set makes it easier and quicker for me to test out a specific feature for an object or sprite – before throwing it into the game engine.
The biggest scene I have so far is 10200×4080 pixels – Krita was actually performing decently up to a certain point, where I had to break the scene into smaller projects. I’m not blaming Krita for this.
What techniques and brushes did you use in it?
For cartoon styled work I use a Group layer containing:
* background blend (Transparency Mask)
* shadows (Paint layer)
* outlines (Paint layer)
* textures (Group layer)
* solid base color(s) (Paint layer)
For outlines I use the standard Pixel brush ‘Ink_gpen_10’ – it has a really nice sharp edge at small tip sizes. For texturing I mostly use the ‘Splatter_thin’ Pixel brush – with both standard and custom brush tips and settings depending on the project at hand. For shadowing I really like the ‘Airbrush_pressure’ and ‘Airbrush_linear_noisy’ Pixel brushes. I use a selection mask based on the solid base color layer (Layer name -> Right mouse click -> Select Opaque) – and start shadowing the object.
I’d like to thank everyone involved with Krita for making this great open source and free software available to the world. I hope to soon get enough time on my hands to help the project grow.
Every time I see a news story about the flu shot (which is available for free on Prince Edward Island this year), there’s always a stock photo close-up of a needle jabbing into an arm.
If you want to encourage people to get the flu shot, don’t use a photo of the one thing people don’t like about the flu shot.
I was able to get the flu shot for free without an appointment (or any wait time) at Shoppers Drug Mart.
Lina Porras and David Saenz from the Ubuntu Colombia user group wrote to tell us that they will give an introduction to digital painting with Krita starting this Saturday. David will be teaching Krita over four Saturday sessions.
It will be an introductory course where people from 14 years old and older will learn the basics of digital painting and will start painting in Krita. Here is more information:
And you can follow them on Twitter (@ubuntco) and on Facebook: https://www.facebook.com/UbuntuColombia
If you are thinking of organizing a Krita course for your local user group, community art college or similar, contact us so we can help you spread the word, too!
Here’s a free invention idea for you (in that I have a stupid idea and will not do anything with it): The Zen Microwave.
The Zen Microwave only has one control: an on/off switch. Once you’ve started it, you have to remember to turn it off or your left-overs will turn into exploding spaghetti-charcoal.
If you want to heat up your lunch, it can go one of two ways:
Put your lunch in the microwave and turn it on (remember, there’s no timer)
Stand there for two minutes, focused and present
Turn the microwave off and enjoy your hot lunch
Or:
Put your lunch in the microwave and turn it on
Wander around the kitchen, get a glass of water, check in on your online click-farm business on your phone, thumb through the Home Hampers & Hobbits flyer
Smell burning and notice that your lunch has been super-heated for 10 minutes
Is it dangerous? Yes – but how else are you ever going to learn?
Usually near the end of the process of getting a vendor on the LVFS I normally ask them to send me hardware for the tests. Once we’ve got a pretty good idea that the hardware update process is going to work with fwupd (i.e. they’re not insisting on some static linked ELF to be run…) and when they’ve got legal approval to upload the firmware to the LVFS (without an eyewateringly long EULA) we start thinking about how to test the hardware. Once we say “Product Foo from Vendor Bar is supported in Linux” we better make damn sure it doesn’t regress when something in the kernel changes or when someone refactors a plugin to support a different variant of a protocol.
To make this task a little more manageable, we have a little python script that helps automate the devices that can be persuaded to enter DFU mode themselves. To avoid chaos, I also have a little cardboard tray under a little HP Microserver with two 10-port USB hubs with everything organised. Who knew paper-craft would be such an important skill at Red Hat…
As the astute might notice, much of the hardware is a bare PCB. I don’t actually need the complete device for testing, and much of the donated hardware is actually a user return or with a cosmetic defect, or even just a pre-release PCB without the actual hardware attached. This is fine, and actually preferable to the entire device – I only have a small office!
As much of the hardware needs special handling to put it in update mode we can’t 100% automate this task, and sometimes it really is just me sitting in front of the laptop pressing and holding buttons for 30 minutes before uploading a tarball, but it sure is comforting to know that firmware updates are tested like this. As usual, thanks should be directed to Red Hat for letting me work on this kind of stuff, they really are a marvelous company to work for.
And then we realized we hadn’t posted news about ongoing Krita development for some time now. The main reason is that we’ve, well, been really busy doing development. The other reason is that we’re stuck when it comes to making fully-featured preview builds on OSX and Linux. More about that later…
So, what’s been going on? Some of the things we’ve been doing were backported to Krita 3.2 and 3.3, like support for the Windows 8 Pointer API, support for the ANGLE Direct3D display renderer, the new gmic-qt G’Mic plugin, new commandline options, support for touch painting, the new smart patch tool, new brush presets and blending modes… But there is also a lot of other work that simply couldn’t be backported to 3.x.
The last time we did a development update with Krita 4.0 was in June 2017: the first development build for 4.0 already had a large number of new features:
the SVG-based vector layers with improved vector handling tools,
Allan Marshall’s new airbrush system,
Eugene Ingerman’s healing brush,
a new export system that reports which parts of your image cannot be saved to your chosen file format, and that is now improved: saving happens in the background. You can press save and continue painting. Autosave also no longer interrupts your painting.
Wolthera’s new and improved palette docker
A new docker for loading SVG symbol collections, which now comes with a new symbol library with brush preset icons. Perfect with the new brush editor.
We added Python scripting (only available in the Windows builds: we need platform maintainers). Eliakin and Wolthera have spent the summer adding great new Python-based plugins, extending and improving the scripting API as they worked (a small example of the API follows this list):
Ten brushes: a script to assign ten favorite brushes to hotkeys
Quick settings docker: with brush size, opacity and flow
Comic Projects Management tools
And much, much more
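To give a feel for what the API looks like, here’s a minimal sketch that can be run from Krita’s built-in scripter (a sketch only; the method names follow the 4.0 Python module, so check the API documentation before relying on them):
from krita import Krita

# Grab the running Krita instance and inspect the active document.
app = Krita.instance()
doc = app.activeDocument()
if doc is not None:
    print(doc.name(), doc.width(), doc.height())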
What has recently been added to Krita 4.0
Big performance improvements
After the development build release we sent out a user survey. In case you didn’t see the results of our last survey, this was the summary.
The biggest item on the list was lag. Lag can have many meanings, and there will always be brushes or operations that are not instant. But over the past couple of months we had the opportunity to work on an outside project to help improve Krita’s performance. While we knew this might delay the release of Krita 4.0, the work would be much appreciated by artists. The performance improvements include the following:
Multi-threaded performance, using all your CPUs for the pixel brush engines (80% of all the brushes that are made).
A lot of speed optimizations with dab grouping for all brushes.
More caching to speed up brush rendering across all brushes.
Here’s a video of Wolthera using the multithreaded brushes:
Performance Benchmarking
We also added performance benchmarking. We can see much more accurately how brushes are performing and make our brushes better/optimized in the future:
Pixel Grid
Andrey Kamakin added an option to show a thin grid around pixels if you zoom in enough:
Live Brush Preview
Scott Petrovic has been working with a number of artists to rework the brush editor. Many things have changed, including renaming brushes and better saving options. There’s also a live stroke preview now to see what happens when you change settings. Parts of the editor can be shown or hidden to accommodate smaller monitors.
Isometric Grid
The grid now has a new Isometric option. This can be controlled and modified through the grid docker:
Filters
A new edge detection filter
Height to normal map filter
Improved gradient map filter
A new ASC-CDL color balance filter with slope, offset and power parameters
Layers
File layers can now have the location of their reference changed.
A convert layer to file layer option has been added that saves out layers and replaces them with a file layer referencing them.
Dockers
A new docker for use on touch screens: big buttons in a layout that resembles the button row of a Wacom tablet.
More
And there are of course a lot of bug fixes, UI polish, performance improvements and small feature improvements. The list is too long to keep here, so we’re working on a separate release notes page. These notes, like this Krita 4.0 build, are very much a work in progress!
Features we are still working on
There are still a number of features we want to have done before we release Krita 4.0:
a new text tool (we have started the ground work for this, but it still needs a lot more work)
a faster colorize mask tool (the current version is still too slow)
stacked brushes where you can have multiple brush tips similar to other applications.
And then there are no doubt things missing from the big new features, like SVG vector layers and Python scripting, that still need to be implemented, and there will be bugs that need to be fixed. We’ve made packages for you to download and test, but be warned, there are bugs. And:
This is pre-alpha code. It will crash. It will do weird things. It might even destroy your images on saving!
AND: DOCUMENTS WITH VECTOR LAYERS SAVED IN KRITA 4.0 CANNOT BE EDITED IN KRITA 3.x!
You can have both Krita 3 and Krita 4 on the same system. They will use the same configuration (for now, that might change), which means that either Krita 3 or Krita 4 can get confused. They will use the same resources folder, so brush presets and so on are shared.
Downloads
Right now, all releases and builds, except for the Lime PPA, are created by the project maintainer, Boudewijn Rempt. This is not sustainable! Only for the Windows build is a third person helping out, by maintaining the scripts needed to build and package Krita. We really do need people to step up and help maintain the Linux and macOS/OSX builds. This means that:
The Linux AppImage is missing Python scripting and sound playback. It may be missing the QML-based touch docker. We haven’t managed to figure out how to add those features to the AppImage! The AppImage build script is also seriously outdated, and Boudewijn doesn’t have time to improve it, next to all the other things that need to be done and managed and, especially, coded. We need a platform maintainer for Linux!
The OSX/macOS DMG is missing Python scripting as well as PDF import and G’Mic integration. Boudewijn simply does not have the in-depth knowledge of OSX/macOS needed to figure out how to add that properly to the OSX/macOS build and packages. Development on OSX is picking up, thanks to Bernhard Liebl, but we need a platform maintainer for macOS/OSX!
Windows Download
Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.
There are no 32-bit Windows builds yet. There is no installer.
The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc. The signatures are here.
Support Krita
Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.
The Vietnam War is a ten-part, 18-hour documentary from PBS. It’s available to stream for free on the PBS website/app, but only if you live in the US. For those of us fortunate enough not to live in the Greatest Country on Earth, there are some hoops to jump through to watch it.
The easiest set of hoops I’ve found is to use the free built-in VPN in the Opera web browser. You can easily select the US as your VPN zone and enjoy the web as our freedom-loving friends see it.
The documentary is exhaustive and is propelled by remarkable interviews with participants from Both Sides of the war. It will leave you wondering how it could have ever been allowed to happen, and terrified that it will happen again.
Jonathan & Melissa Nightingale have gathered some of their best writing from their excellent blog, The Co-Pour. They discuss and share what they’ve learned about the mechanics and humanity of working with people in a business. It’s short, easy to read, has a profane title, and a fine domain name: hfuiym.com.
I came to know Jonah Ray through his co-hosting of the Nerdist podcast. He plays the host of a fake travel show, Hidden America with Jonah Ray, which is generally a spoof of low-budget cable travel shows, but occasionally drifts into absurd horror.
Watching it from outside the US is frustrating. It’s a great show – but it’s barely worth jumping through the hoops required. Like The Vietnam War doc, you’ll need the Opera VPN to watch from outside of the US, then you need to create an account on the VRV site, and even then the show is peppered with ads that randomly interrupt playback. As I said, barely worth it – but still worth it.
Arduinos are great for prototyping, but for a small, low-power,
cheap and simple design, an ATtiny chip seems like just the ticket.
For just a few dollars you can do most of what you could with an
Arduino and use a lot of the same code, as long as you can make do
with a little less memory and fewer pins.
I've been wanting to try them, and recently I ordered a few ATtiny85 chips.
There are quite a few ways to program them. You can buy programmers
specifically intended for an ATtiny, but I already had a USBtinyISP,
a little programmer board normally used to burn Arduino bootloaders, so that's what I'll
discuss here.
Wiring to the USBtinyISP
The best reference I found on wiring was
Using USBTinyISP to program ATTiny45 and ATTiny85.
That's pretty clear, but I made my own Fritzing diagram, with colors,
so it'll be easy to reconstruct it next time I need it.
The colors I used are shown in the diagram. With the chip wired up, here's a simple blink program:
#include <avr/io.h>
#include <util/delay.h>

int main (void)
{
    // Set Data Direction to output on port B, pin 3:
    DDRB = 0b00001000;

    while (1) {
        // set PB3 high
        PORTB = 0b00001000;
        _delay_ms(500);

        // set PB3 low
        PORTB = 0b00000000;
        _delay_ms(500);
    }
    return 1;
}
Then you need a Makefile. I started with the one linked from the electronut
page above. Modify it if you're using a programmer other than a USBtinyISP.
make builds the program, and make install loads it onto the ATtiny. And, incredibly, my light started blinking the first time!
Encouraged, I added another LED to make sure I understood.
The ATtiny85 has six pins you can use (the other two are power and ground).
The pin numbers correspond to the bits in DDRB and PORTB:
my LED was on PB3. I added another LED on PB2 and made it alternate
with the first one:
DDRB = 0b00001100;
[ ... ]
// set PB3 high, PB2 low
PORTB = 0b00001000;
_delay_ms(500);
// set PB3 low, PB2 high
PORTB = 0b00000100;
_delay_ms(500);
Timing Woes
But wait -- not everything was rosy. I was calling _delay_ms(500),
but it was waiting a lot longer than half a second between flashes.
What was wrong?
For some reason, a lot of ATtiny sample code on the web assumes the
chip is running at 8MHz. The chip's internal oscillator is indeed 8MHz
(though you can also run it with an external crystal at various
speeds) -- but its default mode uses that oscillator in "divide by
eight" mode, meaning its actual clock rate is 1MHz. But Makefiles
you'll find on the web don't take that into account (maybe because
they're all copied from the same original source). So, for instance,
the Makefile I got from electronut has
CLOCK = 8000000
If I changed that to
CLOCK = 1000000
now my delays were proper milliseconds, as I'd specified.
Here's my working attiny85 blink Makefile.
In case you're curious about clock rate, it's specified by what are
called fuses, which sound permanent but aren't: they hold their
values when the chip loses power, but you can set them over and over.
You can read the current fuse settings like this:
avrdude -c usbtiny -p attiny85 -U lfuse:r:-:i -v
which should print something like this:
avrdude: safemode: hfuse reads as DF
avrdude: safemode: efuse reads as FF
avrdude: safemode: Fuses OK (E:FF, H:DF, L:62)
To figure out what that means, go to the
Fuse calculator,
scroll down to Current settings and enter the three values
you got from avrdude (E, H and L correspond to Extended, High and Low).
Then scroll up to Feature configuration
to see what the fuse settings correspond to.
In my case it was
Int. RC Osc. 8 Mhz; Start-up time PWRDWN/RESET; 6CK/14CK+
64ms; [CKSEL=1011 SUT=10]; default value
and Divide clock by 8 internally; [CKDIV8=0] was checked.
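If you'd rather decode the low fuse byte yourself instead of using the web calculator, the CKDIV8 check is a single bit test: on the ATtiny85, CKDIV8 is bit 7 of the low fuse, and fuse bits are active low, so a programmed bit reads as 0. A quick Python sketch (bit position taken from the datasheet's fuse tables; double-check before relying on it):
# Decode CKDIV8 from the low fuse byte avrdude reported (L:62).
lfuse = 0x62

if lfuse & 0x80:
    print("CKDIV8 unprogrammed: running at the full 8 MHz")
else:
    print("CKDIV8 programmed: 8 MHz internal oscillator / 8 = 1 MHz")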
Nobody seems to have written much about AVR/ATTINY
programming in general. Symbols like PORTB and
functions like _delay_ms() come from files in
/usr/lib/avr/include/, at least on my Debian system.
There's not much there, so if you want library functions to handle
nontrivial hardware, you'll have to write them or find them somewhere else.
As for understanding pins, you're supposed to go to the datasheet and read it
through, all 234 pages. Hint: for understanding basics of reading from and
writing to ports, speed forward to section 10, I/O Ports.
A short excerpt from that section:
Three I/O memory address locations are allocated for each port, one
each for the Data Register - PORTx, Data Direction Register - DDRx,
and the Port Input Pins - PINx. The Port Input Pins I/O location is
read only, while the Data Register and the Data Direction Register are
read/write. However, writing a logic one to a bit in the PINx
Register, (comma sic) will result in a toggle in the
corresponding Data Register. In addition, the Pull-up Disable - PUD
bit in MCUCR disables the pull-up function for all pins in all ports
when set.
There's also some interesting information there about built-in pull-up
resistors and how to activate or deactivate them.
That's helpful, but here's the part I wish they'd said:
PORTB (along with DDRB and PINB) represents all six pins. (Why B? Is
there a PORTA? Not as far as I can tell; at least, no PORTA is
mentioned in the datasheet.) There are six output pins, corresponding
to the six pins on the chip that are not power or ground. Set the bits
in DDRB and PORTB to correspond to the pins you want to set. So if you
want to use pins 0 through 3 for output, do this:
DDRB = 0b00001111;
If you want to set logical pins 1 and 3 (corresponding to pins 6 and 2
on the chip) high, and the rest of the pins low, do this:
PORTB = 0b00001010;
To read from pins, use PINB.
In addition to basic functionality, all the pins have specialized
uses, like timers, SPI, ADC and even temperature measurement (see the
diagram above). The datasheet goes into more detail about how to get
into some of those specialized modes.
But a lot of those specialties are easier to deal with using
libraries. And there are a lot more libraries available for the Arduino
C++ environment than there are for a bare ATtiny using C.
So the next step is to program the ATtiny using Arduino ...
which deserves its own article.
In my previous blog post I hinted that you just have to add one line to a data file to add support for new AVR32 microcontrollers, and this blog entry should give a few more details.
A few minutes ago I merged a PR that moves the database of supported and quirked devices out of the C code and into runtime-loaded files. When fwupd is installed in long-term support distros it’s very hard to backport new versions as new hardware is released. The idea with this functionality is that the end user can drop an additional (or replace an existing) file in a .d directory with a simple format and the hardware will magically start working. This assumes no new quirks are required, as that would obviously need code changes, but it allows us to get most existing devices working in an easy way without the user compiling anything.
Hi there,
Time for a new report on the development of Architecture and BIM tools (https://www.freecadweb.org/wiki/index.php?title=Arch_Module) for FreeCAD (http://www.freecadweb.org). Remember, you can help me to spend more time working on this by sponsoring me on Patreon (https://www.patreon.com/yorikvanhavre), Liberapay (https://liberapay.com/yorik) or directly (ask me for a PayPal email or bitcoin address).
Campaign and future development
Since I just recently opened the Liberapay...
The SDN & NFV DevRoom is back this year for FOSDEM, and the call for content is open until November 16th. Submissions are welcome now!
Here’s the full announcement:
We are pleased to announce the Call for Participation in the FOSDEM 2018 Software Defined Networking and Network Functions Virtualization DevRoom!
Important dates:
Nov 16: Deadline for submissions
Dec 1: Speakers notified of acceptance
Dec 5: Schedule published
This year, as it has for the past two years, the DevRoom topics will cover two distinct fields:
Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform
We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.
This year, the DevRoom will focus on the emergence of cloud native Virtual Network Functions, and the management and performance requirements of those applications, in addition to our traditional focus on high performance packet processing.
A representative, but not exhaustive, list of the projects and topics we would like to see on the schedule are:
Low-level networking and switching: IOvisor, eBPF, XDP, DPDK, fd.io, Open vSwitch, OpenDataplane, Free Range Routing, …
SDN controllers and overlay networking: OpenStack Neutron, Calico, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
NFV related features: Service Assurance, enforcement of Quality of Service, Service Function Chaining, fault management, dataplane acceleration, security, …
Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.
Please include the following information when submitting a proposal:
Your name
The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
Short abstract of one or two paragraphs
Short bio (with photo)
The deadline for submissions is November 16th 2017. FOSDEM will be held on the weekend of February 3-4, 2018 and the SDN/NFV DevRoom will take place on Saturday, February 3, 2018. Please use the FOSDEM submission website to submit your proposals (you do not need to create a new Pentabarf account if you already have one from past years). You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom.
Over 10 years ago the dfu-programmer project was forked into dfu-utils as the former didn’t actually work at all well with generic devices supporting vanilla 1.0 and 1.1 specification-compliant DFU. It was then adapted to also support the STM variant of DFU (standards FTW). One feature that dfu-programmer did have, which dfu-util never seemed to acquire, was support for the AVR variant of DFU (very different from STM DFU, but doing basically the same things). This meant that if you wanted to program AVR parts you had to use the long-obsolete tool rather than the slightly less-unmaintained newer tool.
Today I merged a PR in fwupd that adds support for flashing AVR32 devices from Atmel. These are the same chips found in some Arduino prototype boards, and are also the core of many thousands of professional devices like the Nitrokey device. You can already program this kind of hardware in Linux, using clunky commands like:
The crazy long chip identifier is specified manually for each command, as the bootloader VID/PID isn’t always unique for each chip type. For fwupd we need to be able to program hardware without any user input, and without any chance of the wrong chip identifier bricking the hardware. This is possible to do as the chip itself knows its own device ID, but for some reason Atmel makes it super difficult to autodetect the hardware, as they don’t publish a table of all the processor types they have produced. I’ll cover in a future blog post how we do this mapping in fwupd, but at least for hardware like the Nitrokey you can now use the little dfu-tool helper executable shipped in fwupd to do:
# dfu-tool write foo.ihx
Or, for normal people, you can soon just click the Update button in GNOME Software which uses the DFU plugin in fwupd to apply the update. It’s so easy, and safe.
If you manufacture an AVR32 device that uses the Atmel bootloader (not the Arduino one), and you’re interested in making fwupd work with your hardware it’s likely you just have to add one line to a data file. If your dfu-tool list already specifies a Chip ID along with can-download|can-upload then there’s no excuse at all as it should just work. There is a lot of hardware using the AT32UC3, so I’m hopeful spending the time on the AVR support means more vendors can join the LVFS project.
I’m Erica Wagner, a STEAM Nerd, Teenpreneur, Author, Instructor, YouTuber and self-taught 2D and 3D artist. I’ve been doing graphic design for two years, 3D sculpting, voxel art, and 3d modeling for one year, and digital drawing for a little over six months. I’m a homeschool student. My mom uses the majority of my projects as a part of school.
Do you paint professionally, as a hobby artist, or both?
Currently I’m a hobby artist but learning different art forms so I can make my own games and eventually my own animations.
What genre(s) do you work in?
I work mostly in science, cyber, sci-fi, and nature themes. Most of the work I make is STEAM related, due to loving those areas, including but not limited to movies, shows, games and books. Movies such as Star Wars, Interstellar, and Guardians of The Galaxy. Examples of the shows I have watched are Gravity Falls and Doctor Who. Some of the games I have played are Hack ‘n’ Slash, Portal 2, Niche, and Robocraft. Lastly, some of my favorite books are the Nancy Drew series, and Jurassic Park 1 & 2.
Whose work inspires you most — who are your role models as an artist?
When it comes to 2D art it would be the following Twitter people: loishh, viiolaceus, Cyarine, and samsantala. I love their styles. Some of them have cartoony styles, some realistic, and some have a mix of both. The mixture of realistic and cartoon styles appeals to me because it is realistic in the proportions, details, and colors, yet also cartoony like you’d see in webisodes. I’m not sure what the correct name for this style is, but I love it. I want to develop my own style that is similar to this realistic cartoony mix so I can make my own concepts, illustrations, designs, and textures for 3D models.
How and when did you get to try digital painting for the first time?
I’m not sure the exact date but it was sometime in late 2016. Even though I did download Krita and two other programs in 2015, I didn’t actually make anything with them until late 2016. I played and tested the brushes to see what they did. I finally made something for a challenge I created in October 2016 called Artober.
What makes you choose digital over traditional painting?
I have endless resources to use. Don’t get me wrong, I enjoy traditional drawing. I did it a lot when I was younger. But I can’t imagine spending money to buy lots of pens, pencils, markers, and other things when at the time I was just doing it for fun. I’m more of a techy person, so doing it digitally lets me play with different brushes without wasting anything. Plus it’s easier to paint 3D models this way, and it’s easier to make graphics for ads, thumbnails, merch designs, etc.
How did you find out about Krita?
In late 2015, I searched Google for “Free Alternatives to Paint Tool Sai”. At the time, I was downloading all kinds of programs and just playing around in them to see which ones I liked. A website called AlternativeTo.net popped up with results of different programs to use instead of Paint Tool Sai. I tried three or four different programs, one of them being Krita.
What was your first impression?
I was so amazed at all the things I could do in Krita. I had all kinds of brushes for different things (at the time I had no idea what for), and I could make my own animations too! I knew I had no idea how to use these features to make my own stories and worlds come to life, but that didn’t matter to me. The fact that I had the resources to learn how to make my own designs, concepts, and illustrations, and an alternative to Photoshop and Paint Tool Sai, was great for me. It was such a great program that I wondered why I had never heard of it or seen it in tutorials on YouTube. I was really excited to have a program that had all the features I wanted and needed to start the learning process.
What do you love about Krita?
I love how versatile and powerful it is. I can make my own brushes, drawings, animations, vectors, and textures for 3D models. When you’re just starting to teach yourself digital drawing you don’t want to spend hundreds of dollars on programs like Photoshop or Paint Tool Sai, especially if you don’t know whether you’ll actually make a career from digital drawing, or even like it. With Krita, I feel I’m getting the same powerful features as the big-name artists get with Photoshop or Paint Tool Sai. The possibilities are endless! I also love that I can customize the layout of Krita to work for me or what I’m doing.
What do you think needs improvement in Krita? Is there anything that really annoys you?
I would like to be able to open a project, my 3D model texture for example, and in the history see the brushes, textures, and patterns I used. Currently Krita only remembers what brushes you used when you last opened it and not the brushes, textures, and patterns you used in certain projects.
What sets Krita apart from the other tools that you use?
For me it’s the vector feature. I also do graphic design and since I’m learning other art forms to make my own props for my graphics this really helps me. When you do graphic design you can use raster images but it helps a lot if you have vector images. Vector images don’t lose quality when you size them up or down. Vector images are really useful when you make Merch designs, ads, thumbnails, cover art, and more. The vector feature is so easy to learn and use. Once I got my brother to use Krita, he used it to make shirt designs and remade his brand’s logo.
If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?
My favorite is the texture for the 3d lowpoly model t-rex I made for a shirt design. This is my favorite because it was my first time painting a texture for a 3D model. Based on the program I was using, there were three ways to paint the model. I knew I wanted to get better at drawing so I decided to take my model’s UV map, which is basically the layout of a 3D object in a 2D cutout like form, and paint it in Krita. While following a tutorial, the model took 10 hours, the texture took 11 hours, and the last 8 hours were for last minute fixing of the model, texture, and making it ready to put on a shirt. Right now 3D is my strong suit so having the 2D texture I was happy with work correctly on the model after working on this whole project for a total of 29 hours just made my entire day. I was so proud of how it all turned out and it looked amazing on the shirt. I’m still new to digital drawing and lowpoly modeling so this was a great experience for me.
What techniques and brushes did you use in it?
I used the Krita ink gpen 25 and the smudge rake 2 brushes. I chose the colors of my brand ScienceHerWay, which are white, black, neon and dark shades of pink, purple, and teal, and then used some light and dark grey. Certain areas of the dinosaur I made darker to give some details, such as the dark purple lines in the lips, a darker shade of the color used on the nails, elbow and knee joints, and a light shade of teal on the inside of the mouth where the teeth would be. For the pink streaks on the dinosaur’s back and legs I made a line of neon pink with the ink gpen 25 brush and then used the smudge rake 2 brush randomly until the neon pink line was gone, to make it look like a natural pattern. I repeated this process with the dark neon pink.
I recommend trying art challenges and contests. It’s a great way for you to practice and get out of your comfort zone. Even try art collabs. As long as you find a supportive art community, you shouldn’t have to worry about your skill level when it comes to this. The point is to get to know other artists, practice, and have fun. At the time of writing this, I’m in an art collab myself. I’m still learning how to digitally draw while the others have been doing this for years. It may feel intimidating, but I’m collabing and meeting people I’ve never met before and we’re all having fun. Plus I can learn from them.
Two Girtoises about to feast on cloud-rooted Bananeries on the plains of the seastern continent. These animals are also known as Toraffes or by their scientific name: Giradinoides. In German, they have the even better name Schiraffen. The Bananeries contain valuable vitamins and minerals which help the animals in maintaining smooth fur and strong shells.
This is a completely tablet-drawn work, with my trusty serial Wacom Intuos, still working because I keep compiling the module after every kernel update. Originally, I wanted to use Krita for the nice paintbrush engine and the canvas rotation. I found the latter to be critical in achieving the smoothest curves, which is a lot easier in a horizontal direction. With what ended up being a 10000 x 10200 resolution and only 4 GiB RAM, I ran into performance problems. Where Krita failed, GIMP still worked, though I had to switch to the development version to have canvas rotation. At the end, GIMP’s PNG export failed because it was unable to fork a process with no memory left! Flattening the few layers to save memory led to GIMP being killed. Luckily, there’s the package xcftools with xcf2png, so I could get my final PNGs via the command line!
Last week was the Google Summer of Code Mentors Summit, a yearly event organized by Google, where they invite mentors of the Google Summer of Code program, a program that pays students to work on open-source projects. This year, like last year, FreeCAD participated in GSoC. This year we had 4 really good students,...
Our makerspace got some new Arduino kits that come with a bunch of fun
parts I hadn't played with before, including an IR remote and receiver.
The kits are intended for Arduino and there are Arduino libraries to
handle it, but I wanted to try it with a Raspberry Pi as well.
It turned out to be much trickier than I expected to read signals from
the IR remote in Python on the Pi. There's plenty of discussion online,
but most howtos are out of date and don't work, or else they assume you
want to use your Pi as a media center and can't be adapted to more
general purposes. So here's what I learned.
Then you have to enable the lirc daemon. Assuming the sensor's pin is
on the Pi's GPIO 18, edit /boot/config.txt as root, look for
this line and uncomment it:
# Uncomment this to enable the lirc-rpi module
dtoverlay=lirc-rpi
Reboot. Then use a program called mode2 to make sure you can
read from the remote at all, after first making sure the
lirc daemon isn't running:
$ sudo service lirc stop
$ ps aux | grep lirc
$ mode2 -d /dev/lirc0
Press a few keys. If you see a lot of output, you're good. If not,
check your wiring.
Set up a lircd.conf
You'll need to make an lircd.conf file mapping the codes the
buttons send to symbols like KEY_PLAY. You can do that -- in a
somewhat slow and painstaking process -- with irrecord.
First you'll need a list of valid key names. Get that with
irrecord -l
and you'll probably want to keep that window up so you can search
or grep in it. Open another window and run:
$ irrecord -d /dev/lirc0 ~/lircd.conf
I had to repeat the command a couple of times; the first few times
it couldn't read anything. But once it's running, then for
each key on the remote, first, find the key name
that most closely matches what you want the key to do (for instance,
if the key is the power button, irrecord -l | grep -i power
will suggest KEY_POWER and KEY_POWER2). Type or paste that key name
into irrecord -d, then press the key.
At the end of this, you should have a ~/lircd.conf.
Some guides say to copy that lircd.conf to /etc/lirc/ and I
did, but I'm not sure it matters if you're going to be running your
programs as you rather than root.
Then enable the lirc daemon that you stopped back when you were testing
with mode2.
In /etc/lirc/hardware.conf, START_LIRCMD is commented out,
so uncomment it.
Then edit /etc/lirc/hardware.conf as specified in
alexba.in's
"Setting Up LIRC on the RaspberryPi".
Now you can start the daemon:
sudo service lirc start
and verify that it's running: ps aux | grep lirc.
Testing with irw
Now it's time to test your lircd.conf:
irw
Press buttons, and hopefully you'll see a line for each press showing a code, a repeat count and the key name you assigned.
If they correspond to the buttons you pressed, your lircd.conf is working.
Reading Button Presses from Python
Now, most tutorials move on to generating a .lircrc
file which sets up your machine to execute programs automatically when
buttons are pressed, and then you can test with ircat.
If you're setting up your Raspberry Pi as a media control center,
that's probably what you want (see below for hints if that's your goal).
But neither .lircrc nor ircat did anything useful for me,
and executing programs is overkill if you just want to read keys from Python.
Python has modules for everything, right?
The Raspbian repos have python-lirc, python-pylirc and python3-lirc,
and pip has a couple of additional options. But none of the packages I
tried actually worked. They all seem to be aimed at setting up media
centers and wanted lircrc files without specifying what they
need from those files. Even when I set up a .lircrc they didn't work.
For instance,
in python-lirc,
lirc.nextcode() always returned an empty list, [].
I didn't want any of the "execute a program" crap that a .lircrc implies.
All I wanted to do was read key symbols one after another -- basically
what irw does. So I looked at the
irw.c
code to see what it did, and it's remarkably simple. It opens a
socket and reads from it. So I tried implementing that in Python, and
it worked fine:
pyirw.py:
Read LIRC button input from Python.
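The core of such a script is only a few lines. Here's a minimal sketch of the idea (a reconstruction based on the protocol irw speaks, not necessarily the published pyirw.py; the socket path is the Raspbian default):
import socket

# lircd's Unix socket; adjust the path if your distro differs.
LIRC_SOCKET = "/var/run/lirc/lircd"

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(LIRC_SOCKET)

while True:
    data = sock.recv(256).decode("ascii", errors="replace")
    if not data:
        break
    # Each button event is one line: "<hexcode> <repeat> <keyname> <remote>"
    for line in data.strip().splitlines():
        fields = line.split()
        if len(fields) >= 3:
            print("Got key:", fields[2], "repeat:", fields[1])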
(At first I saw a few stray lines printed on the terminal, but after a reboot they went away,
so they might have been an artifact of running irw.)
If You Do Want a .lircrc ...
As I mentioned, you don't need a .lircrc just to read keys
from the daemon. But if you do want a .lircrc because you're
running some sort of media center, I did find two ways of generating one.
There's a bash script called lirc-config-tool floating around
that can generate .lircrc files. It's supposed to be included
in the lirc package, but for some reason Raspbian's lirc package omits
it. You can find and download the bash script with a web search
for lirc-config-tool source, and it works fine on Raspbian. It
generates a bunch of .lircrc files that correspond to various possible
uses of the remote: for instance, you'll get an mplayer.lircrc, a
mythtv.lircrc, a vlc.lircrc and so on.
But all those lircrc files lirc-config-tool generates use only
small subsets of the keys on my remote, and I wanted one that included
everything. So I wrote a quickie script called
gen-lircrc.py
that takes your lircd.conf as input and generates a
simple lircrc containing all the buttons represented there.
I wrote it to run a program called "beep" because I was trying to
determine if LIRC was doing anything in response to the lircrc (it
wasn't); obviously, you should edit the generated .lircrc and
change the prog = beep to call your target programs instead.
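The core of such a generator can be sketched as follows (an illustration assuming the standard begin codes ... end codes layout of lircd.conf, not necessarily the published script):
import sys

# Read a lircd.conf and emit one lircrc stanza per button found in
# its "begin codes ... end codes" section(s).
in_codes = False
with open(sys.argv[1]) as conf:        # e.g. ~/lircd.conf
    for line in conf:
        words = line.split()
        if words[:2] == ["begin", "codes"]:
            in_codes = True
        elif words[:2] == ["end", "codes"]:
            in_codes = False
        elif in_codes and words and not words[0].startswith("#"):
            key = words[0]             # e.g. KEY_POWER
            print("begin")
            print("    prog = beep")
            print("    button = %s" % key)
            print("    config = %s" % key)
            print("end")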
Once you have a .lircrc, I'm not sure how you get lircd to use it
to call those programs. That's left as an exercise for the reader.
Some great news: the Jabra Speak devices are now supported using fwupd, and firmware files have just been uploaded to the LVFS.
You can now update the firmware just by clicking on a button in GNOME Software when using fwupd >= 1.0.0. Working with Jabra to add the required DFU quirks to fwupd and to get legal clearance to upload the firmware has been a pleasure. Their hardware is well designed and works really well in Linux (with the latest firmware), and they’ve been really helpful providing all the specifications we needed to get the firmware upgrade working reliably. We’ll hopefully be adding some different Jabra devices in the coming months to the LVFS too.
Ten days ago, I spent a week-end in Berlin with a group of KDE friends to have a KDE-edu sprint. I didn’t blog about it yet because we planned to make a group post to summarize the event, but since it takes some time, I decided to write a quick personal report too.
The sprint was hosted in Endocode offices, which was a very nice place to work together.
Of course I came mostly because of GCompris, but the goal in the end was more to work together to try to redefine the goal and direction of KDE-edu and its website, and to work together on different tasks.
I added appstream links for all KDE-edu apps on their respective pages on KDE website. Those appstream links can be used to install directly applications from linux appstores supporting this standard.
On a side note, we thought it is a bit weird to be redirected from the KDE-edu website to KDE.org when looking at application info. This is one of the things that would need some refactoring. Actually, we discussed a lot about the evolution needed for the website. I guess all the details about this discussion will be in the group-post report, but to give you an idea, I would summarize it as: let’s make KDE-edu about how KDE applications can be used in an educational context, rather than just a collection of specific apps. A lot of great ideas to work on!
For GCompris, I was very happy to meet Rishabh, who did some work on the server part. I could test the branch with him, and discuss what needs to be done. Next, I fixed and improved the screenshots available for our appdata info, and started to look at building a new package for Mac with Sanjiban.
I also cleaned an SVG file of KTuberling to help Albert, who worked on building it for Android.
In the end, I would say it was a productive week-end. Many thanks to KDE e.V. for the travel support, and to Endocode for hosting the event and providing cool drinks.
Peter really (ahem) throws a light on many amazing luminaries from not only the Free/Open Source Software community, but in some cases the history and roots of all modern computing.
He has managed to coordinate portrait sessions with many people that may be unassuming to a layperson, but take a moment to read any of the short bios on the site and the gravity of the contributions from the subjects to modern computing becomes apparent.
This project is my attempt to highlight a revolution whose importance is not broadly understood by a world that relies heavily upon the fruits of its labor.
That’s really what Peter has done here.
He has collected individuals whose contributions all add up to something far greater than their collective sums to shape the digital world many take for granted these days, and is presenting them in a powerful and thoughtful way more befitting their gifts.
A Chat with Peter Adams
I was lucky enough to be able to get a little bit of time with Peter recently, and with some help from the community had a few questions to present to him.
He was kind enough to take some time out of his day and be patient while I prattled on…
Linus Torvalds, Santa Fe, New Mexico, 2016 by Peter Adams
What was the motivation for this particular project for you? Why these people?
I had a long career working in the tech industry, and kind of grew up on a lot of this software when I was in college. Then got to apply it throughout a career as senior technologist or CTO at a bunch of different companies in the valley.
So I went from learning about it in college, to being someone that used it, to then being somebody that contributed to it and starting my own open source project back in 2006.
That open source ethos, the software, and the people that created, maintained and promoted it - it’s something that’s been right there in my face for, really, the last 25 years.
I wanted to marry my knowledge of it with my passion for photography, and shine a light on it.
I went through a few different chapters of the story myself in the 80’s and then the mid-90’s with linux.
I kind of felt like the story was starting to slip into obscurity, not because it’s less important - in fact I think it’s more important now than it’s ever been.
The software is actually used by more people now than it has ever been.
The smartphone revolution, mobile, has brought that to a forefront and all of these mobile platforms are based on this open source technology.
Everything Apple does is based on BSD, and everything Google/Android does is based on Linux.
I feel like it’s a more impactful story now than ever, but very few people are telling the story.
As a photographer I’ve always cringed at the photographic response to the story.
Podium shot after podium shot of these incredible people.
So I wanted to put some faces to names, bring these people to life in a more impactful way than I think anyone has done before. Hopefully that’s what the project is doing!
I started this project in 2013/2014, in earnest probably late 2014.
Of all of the people that you’ve shot, I’m curious, who would you say is one that maybe stuck out with you the most, or even better, did you get any cool stories out of some of the subjects?
Everyone that I’ve photographed has been absolutely wonderful. I mean, that’s the first thing about this community: it’s a very gracious community.
Everybody was very gracious with their time, and eager to participate.
I think people recognize that this is a community they belong to and they really want me to be a part of it, which is really great.
So, I enjoyed my time with everybody.
Everybody brought a different, interesting story about things.
The UNIX crew from Bell Labs had particularly colorful stories, very interesting sort of historical tidbits about UNIX and Free Software.
I talked to Ken Thompson about going to Russia and flying MiGs right after the collapse of the Soviet Union.
Wonderful stories from Doug McIlroy about the team and the engineering - how they worked together at Bell Labs.
Just a countless list of cool stories and cool people for sure.
Ken Thompson, Menlo Park, California, 2016 by Peter Adams
Doug McIlroy, Boston, Massachusetts, 2015 by Peter Adams
P: It must have been fascinating!
It’s been really fun. A lot of these folks, I’ve really looked up to them over the years as sort of heroes, and so when you get people in front of your lens like that, it’s a really wonderful experience.
It’s also a challenging experience because you want to do justice to them.
Many of these folks that I’ve thought about for 20+ years, finally getting to shoot them is a real treat.
Where are you shooting these? Are you mostly bringing them into your studio in the valley?
I shot a lot of people when I had a studio in Silicon Valley.
I brought a lot of people there and that was great.
Now typically I’m doing shoots on the coasts.
So I’ll do shoots in NY and I’ll rent a studio and bring 6 or 7 people in there or we’ll do a studio up in SF for some people.
But I’ve done shoots in back alleyways, I’ve done shoots in tiny little conference rooms,
I’ll bring the studio to people if that’s what I have to do.
So I’d say so far it’s been about 50-50.
The lighting setups are wonderful and do justice to the subjects, and I think somebody in the community was curious if you had decided on B&W from the beginning for this series of photos? Was this a conscious decision early on?
B&W on a white background was a conscious choice right from the beginning.
Knowing the group, I felt like that was going to be the best way to explore the people and the faces.
Every one of these faces just tells, I think, a really interesting story.
I try to bring the personality of the person into the photo, and B&W has always been my favorite way to do that.
The white background just puts the emphasis right on the person.
How much of it would you say is you that goes into the final pose and setup of the person, or do you let the subject feel out the room and get comfortable and shoot from there?
It’s a little bit of both.
I wish I got to spend a lot of time up front with the person before we started shooting, but the way everybody’s schedule worked is - none of these shoots are more than an hour and many of them are much shorter than an hour.
There’s definitely the pleasantries up front and talking for a little bit, but then I try to get people right in front of the camera as quick as possible.
I don’t really pose them.
My process is to sit back and observe, and I always tell people “if I’m not taking photos, it’s not because you’re doing anything wrong - I’m just waiting for you to settle or looking, examining”.
Which is, for most people, a really uncomfortable process, I try to make it as comfortable as possible.
Then we’ll start taking pictures.
I may move them a little bit, or we may setup a table so they can rest their hand on their chin or something like that.
Generally the photos that come out are not pre-meditated.
It’s very rare that I go into any of these shoots with an actual “I want the person like this, setup like that, etc…”.
I’d say 99% of these shots, the expressions, the feeling that comes out, that I’m capturing is organic.
It’s something that comes up in the shoot.
I just try to capture it whenever I see it by clicking the shutter, that’s basically what I’m doing there.
You list what equipment you shot each portrait with, but I’m curious about the lighting setup. Is there a “go-to” lighting setup that you like to use?
The lighting is literally the same on every shot, though there’s slightly different positions.
It’s a six light setup: there are four lights on the background, there’s a beauty dish overhead, and generally a fill light.
The fill is either a big Photek or PLM, basically a big umbrella, or a ringflash depending on how small the room is.
That’s the same lighting setup on all of them.
Four lights on the background, two lights on the subject.
I’ll vary the two lights on the subject positionally, but for the most part they’re pretty close.
Do you use Free Software in your normal photographic workflow at all?
I don’t use as much Free Software as I’d like in my own workflow.
My workflow, because I shoot with Phase One, the files go into Capture One and then from there they go into Photoshop for final edits.
I have used GIMP in the past.
I really would like to use more Free Software, so I’m a learner in that regard for what tools would make sense.
Spencer Kimball (co-creator of GIMP), Menlo Park, 2015 by Peter Adams
Peter Mattis (co-creator of GIMP), New York City, 2015 by Peter Adams
Did that habit grow out of the professional need of having those tools available to you?
Phase One, which makes the Medium Format digital back and camera that I use for all of my portrait work, also makes Capture One.
They have basically customized the software to get the most of their own files.
That’s pretty much why I’ve wound up there instead of Lightroom or another tool.
It’s just that that software tends to bring out the tonality, especially in the B&W side, better I’ve found than any other tool.
This project was self financed to start with?
Yes, this is a self-financed project.
I do hope that we’ll get some sponsors, especially for the book, just because it tends to be a pretty heavy upfront outlay to produce a book.
I’m going to think about things like Kickstarter but the corporate sponsors I think will be really helpful for the exhibits and the book.
Speaking of the book, is it ready - have you already gone to print?
No, the book isn’t ready yet.
I still have probably another 10-12 people that I need to photograph and then we’ll start producing it.
I’ve done some prototypes and things on it but it’s still a little bit of a ways away.
The biggest hurdle on this project is actually scheduling and logistics.
Getting access to people in a way that is economical.
Instead of me flying all over the place for one shot, I try to stack up a number of people into a day.
It’s tough - this is a busy crowd, very in demand.
Did your working in open source teach you anything beyond computer code in some way? Was there an influence from the people you may have worked around, or the ethos of Free Software in general that stuck with you? Working with this crowd, was there a takeaway for you beyond just the photographic aspects of it?
Absolutely!
First of all it’s an incredibly inspiring group of people.
This is a group of people that have dedicated, in some cases most of, their lives to the development of software that they give away to the world, and don’t monetize themselves.
The work they’re doing is effectively a donation to humanity.
That’s incredibly inspiring when you look at how much time goes into these projects and how much time this group of people spends on that.
It’s a very humbling thing.
I’d say the other big lesson is that Open Source is such a unique thing.
There’s really nothing like it.
It’s starting to take over other industries and moving beyond just software - it’s gone into hardware.
I’ve started to photograph some of the open source hardware pioneers.
It’s going into bio-tech, pharmaceuticals, agriculture (there’s an open source seed project).
I think that the lessons that are learned here and that this group of people is teaching is really affecting humanity on a much much larger level than the fact that this stuff is powering your cell phone or is powering your computer.
Open source is really sort of a way of doing business now.
Even more than doing business it’s a way of operating in the world.
More and more people, industries, and companies are choosing that.
In today’s world where all you read is bad news, that’s a lot of really good news.
It’s an awesome thing to see that accelerating and catching on.
It’s been incredibly inspiring to me.
P: I think even all the way back to the polio vaccine - that's one of those things. The effect that it had on humanity was immeasurable, and the fact that it wasn't monetized by Salk was amazing.
Look at how many lives were saved because of that.
If you think about the acceleration of innovation we've had just in the technology sector - would things like the iPhone or the Android operating system have happened now, or over the last decade, without this [open source]? Or would we be looking at those types of innovations happening twenty years from now?
I think that’s a question you have to ask.
I don’t think it’s an obvious answer that Apple or Google or somebody else would have just come up with this without the open source [contributions].
This stuff is so fundamental, it’s such a basic building block for everything that’s happening now.
It may be responsible for the golden age that we’re seeing now.
I think it is.
The average teenager picking up their phone and posting a photo to Instagram doesn't realize that there are a hundred open source projects at work to make that possible.
P: And the fact that the people that underlay that entire stack gave it away.
Right.
And that giving it away was necessary to create the Instagrams to create all these networks.
It wasn’t just this happenstance thing where people didn’t know any better.
In some cases obviously that did exist, but it’s the fact that consciously people are contributing into a commons that makes it so powerful and enables all of this innovation to happen.
It’s really cool.
To close, is there another photographer, book, organization - that you’d like any of the readers to know about and maybe spend some time to go and check out. Something that maybe you’ve long admired or recently discovered?
Sure!
You’ve mentioned Martin Schoeller, who is one of my personal favorites and inspirations out there.
I’d say the other photographer who has had probably the most impact on my photography over the years has been Richard Avedon.
For people that aren’t familiar with his work I’d say definitely go check out the Avedon foundation.
Pick up any of his books which are just wonderful.
You’ll definitely see that influence on my photography, especially this project, since he shot black and white on white background.
Such stunning work.
I’d say that those are two great ones to start with.
Alright! Avedon and Schoeller - I can certainly think of worse people to go start a journey with. Thank you so much for taking time with me today!
Hey no problem! It’s been fun to talk to you.
There are many more fascinating portraits awaiting you over on the project site, and every one of them is worth your time!
See them all at:
tl;dr: If you feel like you want to donate to the LVFS, you can now do so here.
Nearly 100 million files are downloaded from the LVFS every month, the majority being metadata to know what updates are available. Although each metadata file is very small it still adds up to over 1TB in transferred bytes per month. Amazon has kindly given the LVFS a 2000 USD per year open source grant which more than covers the hosting costs and any test EC2 instances. I really appreciate the donation from Amazon as it allows us to continue to grow, both with the number of Linux clients connecting every hour, and with the number of firmware files hosted. Before the grant sometimes Red Hat would pay the bandwidth bill, and other times it was just paid out of my own pocket, so the grant does mean a lot to me. Amazon seemed very friendly towards this kind of open source shared infrastructure, so kudos to them for that.
At the moment the secure part of the LVFS is hosted in a dedicated Scaleway instance, so any additional donations would be spent on paying this small bill and perhaps more importantly buying some (2nd hand?) hardware to include as part of our release-time QA checks.
I already test fwupd with about a dozen pieces of hardware, but I’d feel a lot more comfortable testing different classes of device with updates on the LVFS.
One thing I’ve found that also works well is taking a chance and buying a popular device we know is upgradable and adding support for the specific quirks it has to fwupd. This is an easy way to get karma from a previously Linux-unfriendly vendor before we start discussing uploading firmware updates to the LVFS. Hardware on my wanting-to-buy list includes a wireless network card, a fingerprint scanner and SSDs from a couple of different vendors.
If you’d like to donate towards hardware, please donate via LiberaPay or ask me for PayPal/BACS details. Even if you donate €0.01 per week it would make a difference. Thanks!
People following me on Instagram have been asking why I do the daily renders. Simply put: you’re not gonna get better by thinking about it. The arsenal of tools and methods in Blender is giant, and over the years I still struggle to call myself proficient in any of them.
Follow me on Instagram to see them come alive. I’m probably not gonna maintain the daily routine, but I will continue doing these.
Hi everyone – my name is Cillian Clifford, I’m a 21 year old hobbyist artist and electronic musician, and an occasional animator, writer and game developer. I go by the username of Fatal-Exit online. I live in rural Ireland, a strange place for someone so interested in technology. My interests range from creative projects to tech related fields like engineering, robotics and science. Outside of things like these I enjoy gaming from time to time.
Do you paint professionally, as a hobby artist, or both?
Definitely as a hobby. I consider digital painting to be one of my weakest areas of art skills, so I spend a lot of time trying to improve it. Other areas of digital art I’m interested in include CAD, 3d modeling, digital sculpting, vector animation, and pixel art.
What genre(s) do you work in?
It varies! Hugely, in fact. Over the past two years on my current DeviantArt account I’ve uploaded game fan-art paintings, original fantasy and Sci-Fi pieces, landscapes, pixel art, and renders of 3d pieces. I also occasionally paint textures and UV maps for 3d artwork. Outside of still art, I also animate in vector and pixel art styles. I also occasionally make not-great indie games, but as you might guess, most never get finished.
Whose work inspires you most — who are your role models as an artist?
A wide range of artists, often not particular people but more their combined efforts on projects. I will say that David Revoy and GDQuest in the Krita community are a big inspiration. Youtube artists such as Sycra, Jazza and Borodante are another few I can think of. Lots of my favorite art of all time has come from large game companies such as Blizzard and Hi-Rez Studios. Also game related, the recent rise of more retro and pixel based graphics in indie games is a huge interest of mine, and games like Terraria, Stardew Valley and Hyper-Light Drifter have an art style that truly inspires me.
How and when did you get to try digital painting for the first time?
My first time doing some sort of “digital painting” was when I was about 16-17. I did the graphic design work for a board game a team of us were working on for a school enterprise project, using the free graphics software Paint.net and a mouse. It took ages. However the project ended up taking off and we ended up in the final stage of the competition. After that was over (we didn’t win) I decided digital art might be something to seriously invest in and bought a graphics tablet. For a couple of years I made unimaginably terrible art, and in 2015 I decided to shut down my DeviantArt account and start fresh on a new account, with my new style. This was about when I found Krita, I believe.
What makes you choose digital over traditional painting?
A few things. Firstly, I could never paint in a traditional sense; I was absolutely terrible. At school I was considered a C grade artist, and that was even when working on pen and ink drawings, a style I used to be good at but have since abandoned. I never learned to paint traditionally.
Secondly, I can do it anywhere. In my bedroom with a Ugee graphics monitor and my workstation desktop, or lots of other places if I take my aging laptop and Huion graphics tablet with me. Soon I’m looking to buy a mobile tablet similar to the Microsoft Surface Pro, that’ll let me paint absolutely anywhere.
Thirdly, the tech involved. Not only am I able to emulate any media that exists in traditional art with various software, I can also work in art styles that aren’t even possible with traditional media. As well as this, there are functions like undo, zooming in and out of the canvas, layers and blending modes, gradients and bucket fill; the list goes on and on.
I can happily say I never want to “go back” to traditional painting even though I was never any good at it in the first place.
How did you find out about Krita?
That’s a hard question. I’m not absolutely sure, but I’ve an idea that it might have been through David Revoy’s work on the Blender Foundation movies, and Pepper and Carrot. I was looking for a cheap or free piece of software because I didn’t want to use cracked Photoshop/Painter, and I’d already used GIMP and Paint.net, and neither were good for the art I was looking to create. I tried MyPaint but it never worked properly with my tablet. I did buy ArtRage at some point but I wasn’t happy with the tools in that. It came down to probably a choice of Krita or Clip Studio Paint. Krita had the price tag of free so it was the first one I tried. And I stuck with it.
What was your first impression?
Wow.
At least I think it was. When I first tried it everything just seemed to work straight off. It seemed simple enough for me to use efficiently. And the brush engine was simply amazing. I don’t know if there’s any other program with brushes that easy to customize to a huge extent but still so simple to set up. I first tried it in version 2.something so it was before animation was added.
What do you love about Krita?
Mostly, the fact that it works for most things you can throw at it. I’ve made game assets, textures, paintings, drawings, pixel art, a couple of test animations with the animation function, pretty much everything. I feel like it’s the Blender of 2D: the free tool that does pretty much everything, maybe not 100% the best at it, but certainly the most economical option.
The brush engine, like I said before, is one of its best assets. It has one of the most useful color pickers I’ve used, it includes for free what is essentially the feature-set of the paid Photoshop plugin Lazy Nezumi, and the interface can be there when you need it but vanish at the press of a button. Just loads of good things.
The variety of brush packs made by the community is also a great asset. I own GDQuest’s premium bundle and also use Deevad’s pack on a regular basis. I love to then tweak those brushes to suit my needs.
What do you think needs improvement in Krita? Is there anything that really annoys you?
The main current annoyance with Krita is the text tool. I just hate it. It’s the one thing that makes me want to have access to Photoshop. And I know it’s supposedly one of the things being focused on in future updates, so hopefully they don’t take too long to happen.
Another problem I had with Krita happened last year. It’s been fixed since, but it’s certainly nothing I’d like to see happen again with V4 (which I worry is a possibility). Basically, when the Krita 3 update came out it broke support for my Ugee graphics monitor. Completely broke it. I had to either stick with the old Krita 2.9, or, when I wanted to use tools from V3, uninstall my screen tablet drivers, install drivers for my tiny old Intuos Small tablet and use that. Luckily, later on (about 6-8 months down the line), an update for my tablet drivers fixed all problems, and it just worked with my screen tablet from then on.
What sets Krita apart from the other tools that you use?
Ease of use, the brush engine, the speed it works at (even with 4k documents on my Pentium-powered laptop), the way it currently works well on all my hardware, the price tag (FREE!), the community, and some great providers of custom brushes (GDQuest and David Revoy in particular). Even though I’ve since stopped using Krita for pixel art and moved to Aseprite (only because its pixel animation tools are better suited to making game assets), I believe it’s the most suitable program I have access to for digital painting, comic art, and traditional 2D animation.
If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
This is a hard question because I feel I am a terrible critic. If I had to choose, it’d probably be Sailing to the Edge of the World II – from the Sailing to the Edge of the World painting series I made for a good colleague of mine. I also included the latest painting in that series, though I believe the second one was the best. Even though it’s been maybe 8 months since I made that painting, it’s still one of my best.
What techniques and brushes did you use in it?
If I remember correctly I used mostly David Revoy’s brush-pack. The painterly brushes were used along with the pen and ink brushes and some of the airbrushes. To be honest it’s been so long since I made it I’m not 100% sure. I may have also used some of the default brushes such as the basic round and soft round.
As of the time of writing this it’s mostly just home to my music. However I’m looking to expand it into art, animation and game development, with tutorials and process videos. I’m certainly hoping to post some Krita reviews, tutorials and videos on how it can be used in a game development pipeline over the coming months, as well as videos of other software such as Blender, Aseprite, 3d Coat, Moho, Construct 3, Gamemaker Studio 2, Unreal Engine 4, Sunvox, FL Studio Mobile and others.
For those who haven't already read about the issue in the national
press, New Mexico's Public Education Department (a body appointed by
the governor) has a proposal regarding new science standards for all
state schools. The proposal starts with the national
Next Generation Science Standards
but then makes modifications, omitting points like references to
evolution and embryological development or the age of the Earth
and adding a slew of NM-specific standards that are mostly
sociological rather than scientific.
New Mexico residents have until 5 p.m. next Monday, October 16, to speak
out about the proposal.
Email comments to
rule.feedback@state.nm.us
or send snail mail (it must arrive by Monday) to
Jamie Gonzales, Policy Division, New Mexico Public Education Department,
Room 101, 300 Don Gaspar Avenue, Santa Fe, New Mexico 87501.
A few excellent letters people have already written:
I'm sure they said it better than I can. But every voice counts --
they'll be counting letters! So here's my letter. If you live in New
Mexico, please send your own. It doesn't have to be long: the
important thing is that you begin by stating your position on
the proposed standards.
Members of the PED:
Please reconsider the proposed New Mexico STEM-Ready Science Standards,
and instead, adopt the nationwide Next Generation Science Standards
(NGSS) for New Mexico.
With New Mexico schools ranking at the bottom in every national
education comparison, and with New Mexico hurting for jobs and having
trouble attracting technology companies to our state, we need our
students learning rigorous, established science.
The NGSS represents the work of people in 26 states, and
is being used without change in 18 states already. It's been well
vetted, and there are many lesson plans, textbooks, tests and other
educational materials available for it.
The New Mexico Legislature supports NGSS: they passed House Bill 211
in 2017 (vetoed by Governor Martinez) requiring adoption of the NGSS.
The PED's own Math and Science Advisory Council (MSAC) supports NGSS:
they recommended in 2015 that it be adopted. Why has the PED ignored
the legislature and its own advisory council?
Using the NGSS without New Mexico changes will save New Mexico money.
The NGSS is freely available. Open source textbooks and lesson plans
are already available for the NGSS, and more are coming. In contrast,
the New Mexico STEM-Ready standards would be unique to New Mexico:
not only would we be left out of free nationwide educational materials,
but we'd have to pay to develop New Mexico-specific curricula and
textbooks that couldn't be used anywhere else, and the resulting
textbooks would cost far more than standard texts. Most of this money
would go to publishers in other states.
New Mexico consistently ranks at the bottom in educational
comparisons. Yet nearly 15% of the PED's proposed STEM-Ready standards
are New Mexico specific standards, taught nowhere else, and will take
time away from teaching core science concepts. Where is the evidence
that our state standards would be better than what is taught in other
states? Who are we to think we can write better standards than a
nationwide coalition?
In addition, some of the changes in the proposed NM STEM-Ready Science
Standards seem to be motivated by political ideology, not science.
Science standards used in our schools should be based on widely
accepted scientific principles. Not to mention that the national
coverage on this issue is making our state a laughingstock.
Finally, the lack of transparency in the NMSRSS proposal is alarming.
Who came up with the proposed NMSRSS standards? Are there any experts
in science education that support them? Is there any data to indicate
they'd be more effective than the NGSS? Why wasn't the development of
the NMSRSS discussed in open PED meetings as required by the Open
Meetings Act?
The NGSS are an established, well regarded national standard. Don't
shortchange New Mexico students by teaching them watered-down science.
Please discard the New Mexico STEM-Ready proposal and adopt the Next
Generation Science Standards, without New Mexico-specific changes.
I realize that I’m a bit late in publishing this news but, to be honest, I never was great about blogging regularly anyway.
In any case, this post is a bit of a public announcement: I’m happy to say that I recently completed an extremely busy year working on my Master of Arts in Typeface Design (MATD) degree at the University of Reading. Consequently, I am now back out in the real world, and I am looking for interesting and engaging employment opportunities. Do get in touch if you have ideas!
For a bit of additional detail, the MATD program combines in-depth training about letterforms, writing, non-Latin scripts, and typeface development with rigorous academic research. On the practical side, we each developed a large, multi-style, multi-script family of fonts (requiring the inclusion of at least one script that we do not read).
My typeface is named Sark; you can see a web and PDF specimen of it here at the program’s public site. It covers Latin, Greek, Cyrillic, and Bengali; there is a serif subfamily tailored for long-form documents and there is a sans-serif subfamily that incorporates features to make it usable on next-generation display systems like transparent screens and HUDs.
My dissertation was research into software models for automatic (and semi-automatic) spacing and kerning of fonts. It’s not up for public consumption yet (in any formal way), as we are still awaiting the marking and review process. But if you’re interested in the topic, let me know.
Anyway, it was a great experience and I’m glad to have done it. I’m also thrilled that it’s over, because it was intense.
Moving ahead from here, I am looking forward to reconnecting with the free-software community, which I only had tangential contact with during my studies. That was hard: I had spent more than thirteen years working full-time as a journalist exclusively covering the free-and-open-source software movement. I did get to see a lot of my friends who work on typography and font-related projects, because I still overlapped with those circles; I look forward to seeing the rest of you at the next meetup, conference, hackathon, or online bikeshedding session.
As for what sort of work I’m looking for, I’m keeping an open mind. What I would really love to find is a way (or ways) to help improve the state of type, typography, and documents within free-software systems. The proprietary software world has typefaces and text-rendering technology that is determined by things like sales figures; free software has no such limitations. The best typesetting systems in the world (like TeX and SILE) are free software; our documents and screens and scripts have no reason to look second-best, compared to anyone.
So if I can do that, I’ll be a happy camper. But by all means, I’m still going to remain a camper with a lot of diverse and peculiar interests, so if there’s a way I can help you out in some other fashion, don’t be shy; let me know.
I have a few contract opportunities I’m working on at the moment, and I am contributing to LWN (the best free-software news source in the dimension) as time allows. And I’m gearing up to tell you all about the next editions of Texas Linux Fest and Libre Graphics Meeting. Oh, and there are some special secret projects that I’m saving for next time….
Today I released fwupd version 1.0.0, a version number that few open source projects ever reach. Unusually, it bumps the soname, so any applications that link against libfwupd will need to be rebuilt. The reason for bumping is that we removed a lot of the cruft we’ve picked up over the couple of years since we started the project, and also took the opportunity to rename some public interfaces that are now used differently to how they were originally envisaged. Since we started the project, we’ve basically re-architected the way the daemon works, re-imagined how the metadata is downloaded and managed, and changed core ways we do the upgrades themselves. It’s no surprise that removing all that crufty code makes the core easier to understand and maintain. I’m intending to support the 0_9_X branch for a long time, as that’s what’s going to stay in Fedora 26 and the upcoming Fedora 27.
Since we’ve started we now support 72 different kinds of hardware, with support for another dozen-or-so currently being worked on. Lots of vendors are now either using the LVFS to distribute firmware, or are testing with one or two devices in secret. Although we have 10 (!) different ways of applying firmware already, vendors are slowly either switching to a more standard mechanism for new products (UpdateCapsule/DFU/Redfish) or building custom plugins for fwupd to update existing hardware.
Every month 165,000+ devices get updated with fwupd using firmware from the LVFS; possibly more, as people using corporate mirrors and caching servers don’t show up in the stats. Since we started this project, at least 600,000 devices have received new firmware. Many people have updated firmware, fixing bugs and solving security issues, without having to understand all the horrible details involved.
I guess I should say thanks; to all the people both uploading firmware, and the people using, testing, and reporting bugs. Dell have been a huge supporter since the very early days, and now smaller companies and giants like Logitech are also supporting the project. Red Hat have given me the time and resources that I need to build something as complicated and political as shared infrastructure like this. There is literally no other company on the planet that I would rather work for.
So, go build fwupd 1.0.0 in your distro development branch and report any problems. 1.0.1 will follow soon with fixes I’m sure, and hopefully we can make some more vendor announcements in the near future. There are a few big vendors working on things in secret that I’m sure you’ll all know :)
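If you want a quick smoke test after building it (assuming the daemon is running and the fwupdmgr client is on your path), something like the following should refresh the metadata and list your devices and any pending updates:
fwupdmgr refresh
fwupdmgr get-devices
fwupdmgr get-updates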
Thank you very much for your efforts to make Stellarium available to the non-English community!
We have incorporated into the main package a long-awaited feature: support for nomenclature of planetary features. All nomenclature items are translatable, and this is now a big task for translators, because we added over 15,000 new lines for translation.
All those lines were extracted into a separate category: stellarium-planetary-features. If you can assist with translation into any of the 140 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium
Every fall, Dave and I eagerly look for tarantulas.
They only show up for a few weeks a year -- that's when the males
go out searching for females (the females stay snug in their burrows).
In the Bay Area, there were a few parks where we used to hunt for them:
Arastradero, Mt Hamilton, occasionally even Alum Rock.
Here in semi-rural New Mexico, our back yard is as good a place
to hunt as anywhere else, though we still don't see many: just
a couple of them a year.
But this year I didn't even have to go out into the yard.
I just looked over from my computer and spotted a tarantula climbing
up our glass patio door. I didn't know they could do that!
Unfortunately it got to the top before I had the camera ready,
so I didn't get a picture of tarantula belly.
Right now he's resting on the sill:
I don't think it's very likely he's going to find any females
up there. I'm hoping he climbs back down the same way and I can
catch a photo then. (Later: nope, he disappeared when I wasn't watching.)
In other invertebrate news: we have a sporadic problem with
centipedes here in White Rock. Last week, a seven-inch one dropped
from the ceiling onto the kitchen floor while I was making cookies,
and it took me a few minutes to chase it down so I could toss it
outside.
But then a few days later, Dave spotted a couple of these
little guys on the patio, and I have to admit they're pretty
amazing. Just like the adults only in micro-miniature.
Though it doesn't make me like them any better in the house.
Hi! My name is Emily Wei, and I’m 19 years old. I was born in Taiwan, but I grew up in New Jersey. Right now, I’m back in Taiwan juggling university, freelance work, sleep, and a one-year course I’m taking at Kadokawa International Edutainment (Advanced Commercial Illustration).
Do you paint professionally, as a hobby artist, or both?
I suppose I’d be considered a hobbyist as of now since I’m not making money off art yet, but I aim to do it professionally in the near future!
What genre(s) do you work in?
My main love is in illustration, but a lot of the things I’ve been working on lately fall under concept design, so things like characters and 2D game assets, among others. Stylewise, I’m somewhere between anime, fantasy/RPG video games, and emotional surrealism. Basically, I’m kind of all over the place since I’m trying different things to get a feel for what my likes and dislikes are; I’ve recently fallen in love with doing background illustrations, for example!
Whose work inspires you most — who are your role models as an artist?
That’s really tough; there are too many! Pretty much everyone I’m following on Twitter/DeviantArt/Artstation, masters like Sargent and Mucha, as well as my friends and mentors.
How and when did you get to try digital painting for the first time?
I think I was about 9 or 10 when I started? I was a hardcore Neopets user at the time, and at some point, I stumbled upon the art community there. That led to me discovering How to Draw ____ in Photoshop tutorials by an artist I really looked up to (shameless plug: her social media handle name is droidnaut across various platforms! Do check her out ^^)
It really amazed me how versatile digital art was, and I’ve never stopped since.
What makes you choose digital over traditional painting?
Short version: CTRL+Z!
Long version: Digital art is much more forgiving than traditional media, and you don’t really need to keep buying art supplies (not counting Adobe CC subscriptions, hardware upgrades, plugins, etc.). There are a lot of tools you can use that save you a boatload of time, and it’s easier to make changes to your work as needed.
That said, I do love traditional art. There’s nothing quite like the feeling of putting pen on paper! It’s also easier in some aspects; for example, drawing decent circles (and most geometric shapes in general) freehand is ridiculously harder with a tablet. Limited supplies also make you more economical and decisive about what goes where, which is a mindset I’d like to carry over into my digital work more.
How did you find out about Krita?
I don’t really remember, actually! I think I might’ve seen a thread about it on Neopets or a post on tumblr. It was some time after the Kickstarter for Krita 3.0 ended, and the campaign’s claim of being “faster than Photoshop” despite everything the program offered had me intrigued.
What was your first impression?
“Wow! This is almost just like Photoshop!” The UI is very similar, haha.
What do you love about Krita?
The brush engines are really fantastic. There are a lot of traditional media-esque brushes for people who like a little roughness/texture as well as the standard digital round opacity brushes and soft airbrushes. Here’s one of the first few sketches I did with Krita back in 2015:
There’s also the option to convert your artwork to CMYK if you want to make prints and merch, which is really convenient.
What do you think needs improvement in Krita? Is there anything that really annoys you?
I suppose my only qualm is that the text tool and I don’t really seem to get along, haha. Text input and changing the font size are oddly challenging. It’s not that big of a deal, though.
This might have changed in version 3, but I’m still using version two-point-something since my computer can’t quite handle the newest version.
What sets Krita apart from the other tools that you use?
I find it amazing how much you can do with a program that is legitimately free to download! It’s basically Photoshop condensed down to just the tools and functions a CG illustrator would use. I think this is especially nice for people who are new to digital art, since they can get into it without putting a huge dent in their wallet (or pirating).
And again, the brushes are great.
If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
Probably “No”!
It’s not the most technically advanced work I’ve done, and the story behind it isn’t exactly happy, but I still like the colors, the clothing folds, and the overall composition.
What techniques and brushes did you use in it?
I don’t really remember what brushes I used, actually! I do have a favorite brushes preset, though, so they’re probably among them.
As for techniques, nothing super fancy beyond simple digital painting and blending.
Feel free to follow me, come say hi, ask me questions, whatever! I’m most active on Twitter, Tumblr, and Plurk.
(I’m still in the process of updating my Facebook page, Artstation, and Instagram, so hopefully there will be things for you to look at there by the time you read this.)
Anything else you’d like to share?
In a nutshell, people always say that tools do not a craftsman make, and the same is true with digital art. The most expensive programs and tablets in the world will not make you a master overnight, nor do you need them to make art. Explore different options (like Krita!), learn as much as you can, and just have fun with it.
Hi everybody! Here we are for another monthly report about the development of BIM tools for FreeCAD, our favorite open-source 3D CAD modeling platform. The coding I am doing for FreeCAD is now more and more heavily supported and funded by many of you via my Patreon page, thanks once more to everybody who contributes...
Bullet 2.87 has improved support for robotics, reinforcement learning and VR. In particular, see the “Reinforcement Learning” section in the pybullet quickstart guide at http://pybullet.org. There are also preliminary C# bindings to allow the use of pybullet inside Unity 3D for robotics and reinforcement learning. In addition, Vectorunit’s Beach Buggy Racing, which uses Bullet, has been released for the Nintendo Switch!
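If you have never touched the Python API, here is a minimal sketch of a pybullet session (assuming pybullet and its bundled data files are installed, e.g. via pip); it simply drops the sample R2D2 model onto a plane and steps the simulation:
import pybullet as p
import pybullet_data

cid = p.connect(p.DIRECT)        # headless; use p.GUI for a window
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
plane = p.loadURDF("plane.urdf")
r2d2 = p.loadURDF("r2d2.urdf", [0, 0, 1])
for _ in range(240):             # one simulated second at the default 240 Hz
    p.stepSimulation()
print(p.getBasePositionAndOrientation(r2d2))
p.disconnect()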
Version 0.16.1 is based on Qt 5.6.2, but it can still be built from sources with Qt 5.4.
This version is a bugfix release with some important features:
- Added moons of Saturn, Uranus and Pluto
- Added improvements for AstroCalc tool
- DSO catalog was updated to version 3.2:
-- Added support for 'The Strasbourg-ESO Catalogue of Galactic Planetary Nebulae' (Acker+, 1992)
-- Added support for 'A catalogue of Galactic supernova remnants' (Green, 2014)
-- Added support for 'A Catalog of Rich Clusters of Galaxies' (Abell+, 1989)
- Added support for asterisms and outlines for DSO
- Added improvements for the GUI
A huge thanks to the people who helped us a lot by reporting bugs!
Full list of changes:
- Added two notations for unit of measurement of surface brightness
- Added improvement for hide/unhide lines and grids in Oculars plugin
- Added a few moons of Saturn (Phoebe, Janus, Epimetheus, Helene, Telesto, Calypso, Atlas, Prometheus, Pandora, Pan) with classic elliptical orbits
- Added a few moons of Uranus (Cordelia, Cressida, Desdemona, Juliet, Ophelia) with classic elliptical orbits
- Added 2 moons of Pluto (Kerberos and Styx) with classic elliptical orbits
- Added code to avoid conflicts for names of asteroids and moons
- Added support for IAU moon numbers
- Added angular size into AstroCalc/Positions tool
- Added option to allow users to choose output formatting of coordinates of objects
- Added optional debug info for HDPI devices
- Added optional calculations of resolution limits for Oculars plugin
- Added new data from IAU Catalog of Star Names (LP: #1705111)
- Added support for downloading zip archives with TLE data in the Satellites plugin
- Added link to Mike McCants' classified TLE data in the default list of TLE sources
- Added link to AMSAT TLE data into the default list of TLE sources
- Added support for 'The Strasbourg-ESO Catalogue of Galactic Planetary Nebulae' (Acker+, 1992) [DSO catalog version 3.2]
- Added support for 'A catalogue of Galactic supernova remnants' (Green, 2014) [DSO catalog version 3.2]
- Added support for 'A Catalog of Rich Clusters of Galaxies' (Abell+, 1989) [DSO catalog version 3.2]
- Added export of predictions of Iridium flares (LP: #1707390)
- Added meta information about version and edition to the Stellarium DSO Catalog file to avoid potential crashes of Stellarium in the future (the catalog version is validated before loading)
- Added support for extra physical data for asteroids
- Added support for outlines for DSO
- Added new time step: saros
- Added new time step: 7 sidereal days
- Added more checks to the network connections
- Added support for comments in the constellations_boundaries.dat file (LP: #1711433)
- Added support for small asterisms with lines defined by equatorial coordinates
- Added support for ray helpers
- Added new feature (crossed lines and output string near mouse cursor) to the Pointer Coordinates plugin
- Added missing cross-id data
- Added support for images within landscape descriptions
- Added support for Visual Studio 2017 in StelLogger
- Added tool to save list of objects in AstroCalc/WUT tool
- Added tool to save celestial positions of objects in AstroCalc/Positions tool
- Added temporary solution for bug 1498616 (LP: #1498616)
- Fixed wrong rendering Neptune and Uranus (LP: #1699648)
- Fixed Vector3 compilation error in unit tests (LP: #1700095)
- Fixed a conflict around landscape autoselection (LP: #1700199)
- Fixed HMS formatting
- Fixed generating ISS script
- Fixed tooltips for AstroCalc/Positions tool
- Fixed dark nebulae parameters for AstroCalc/Positions tool
- Fixed tool for saving options
- Fixed crash when the observer is on the spaceship
- Fixed Solar system class to avoid conflicts and undefined behaviour
- Fixed orientation angle and its data rendering (LP: #1704561)
- Fixed wrong shadows on Jupiter's moons (Added special case for Jupiter's moons when they are in the shadow of Jupiter for compute magnitudes from Expl. Suppl. 2013 item) (LP: #1704421)
- Fixed the AstroCalc/AltVsTime tool for artificial satellites (a somewhat slow solution, though)
- Fixed search by lists of DSO
- Fixed translation switch issue for AstroCalc/Graphs tool (LP: #1705341)
- Fixed trackpad behaviour on macOS through a workaround
- Fixed a couple of bugs in the InnoSetup script
- Fixed morphology for SNR
- Fixed issue in parsing of date format in AstroCalc/Phenomena tool
- Fixed link for fileStructure.html file in README (LP: #1709523)
- Fixed the calculation for drawing a reticle on a HiDPI display (Oculars plugin)
- Fixed default option of units of measure for surface brightness to avoid possible artifacts on macOS (LP: #1699643)
- Fixed crash when comments are added to the constellations_boundaries.dat file (LP: #1711229)
- Fixed behaviour of 'Center on selected object' button (LP: #1712101)
- Fixed impossibility to select a planet after Astronomical Calculations is activated (LP: #1712652)
- Fixed crash with unknown star in asterism
- Fixed cross-ids of 42 bright double stars (LP: #1655493)
- Fixed magnitude computation for Jupiter's satellites
- Fixed crash of Stellarium when the answer from freegeoip.net has the wrong format (for example, when this host is blocked by a firewall or a DNS server returns an HTML answer) (LP: #1706187)
- Fixed translations issue in Script Console
- Fixed illumination in Scenery3D plugin: Take eclipseFactor into account
- Fixed potential crash in DSO outlines
- Fixed various issues in ray helpers, asterisms and constellations support
- Updated InfoString feature
- Updated sky brightness during solar eclipse (really, only a few stars are visible)
- Updated Maori sky culture
- Updated list of names of deep-sky objects
- Updated list of asterisms
- Updated selection behaviour in Oculars plugin (avoid selection of objects outside ocular circle in eyepiece mode)
- Updated behaviour of methods getEnglishName() and getNameI18n() for minor bodies
- Updated behaviour of planetarium for support a new format of asteroid names
- Updated behaviour of filters for DSO catalogs
- Updated Solar System Editor plugin (support new format of asteroid names)
- Updated RTS2 telescope driver in Telescope Control plugin
- Updated API docs
- Updated limit of magnitude for Oculars plugin (Improvements)
- Updated AstroCalc/WUT tool
- Updated AstroCalc/Ephemeris tool
- Updated rules for storing default settings
- Updated rules for computation visibility of DSO hints
- Updated plugins
- Updated default values for material fade-in/fade-out times
- Updated stellarium.appdata.xml file
- Updated tab rules in the GUI
- Reduced warnings to one when loading OBJ with non-default w texture/vertex coordinates
Our friendly neighborhood @LebedevRI pointed out to me a little while ago that we had reached some nice milestones for https://raw.pixls.us.
Not surprisingly I had spaced out and not written anything about it (or really any sort of social posts). Bad Pat!
For anyone not familiar with RPU, a quick recap (we had previously written about raw.pixls.us earlier this year).
There used to be a website for housing a repository of raw files for as many digital cameras as possible called rawsamples.ch.
It was created by Jakob Rohrbach and had been running since March of 2007.
Back in 2016 the site was hit with a SQL injection attack that left the Joomla database corrupted (in a teachable moment, the site also didn’t have a database backup).
With the rawsamples.ch site down, @LebedevRI and @andabata worked to get a replacement option in-place and working: https://raw.pixls.us!
We grabbed all the files we could salvage from rawsamples.ch and @andabata set up the new page.
We’ve had a slowly growing response as folks have filled in gaps for camera models we still don’t have.
For reference, we currently have … unique cameras and … unique samples.
We have many raw samples that were not licensed as freely as we would like.
Ideally we are looking for images that have been released Creative Commons Zero (CC0).
The list below contains all the cameras whose samples are not licensed CC0, so if you happen to have one of them, please consider uploading some new samples for us!
Canon IXUS900Ti
Canon PowerShot A550
Canon PowerShot A570 IS
Canon PowerShot A610
Canon PowerShot A620
Canon PowerShot A630
Canon Powershot A650
Canon PowerShot A710 IS
Canon PowerShot G7
Canon PowerShot S2 IS
Canon PowerShot S5 IS
Canon PowerShot SD750
Canon Powershot SX110IS
Canon EOS 10D
Canon EOS 1200D
Canon EOS-1D
Canon EOS-1D Mark II
Canon EOS-1D Mark III
Canon EOS-1D Mark II N
Canon EOS-1D Mark IV
Canon EOS-1Ds
Canon EOS-1Ds Mark II
Canon EOS-1Ds Mark III
Canon EOS-1D X
Canon EOS 300D
Canon EOS 30D
Canon EOS 400D
Canon EOS 40D
Canon EOS 760D
Canon EOS D2000C
Canon EOS D60
Canon EOS Digital Rebel XS
Canon EOS M
Canon EOS Rebel T3
Canon EOS Rebel T6i
Canon PowerShot A3200 IS
Canon Powershot A720 IS
Canon PowerShot G10
Canon PowerShot G11
Canon PowerShot G12
Canon PowerShot G15
Canon PowerShot G1
Canon PowerShot G1 X Mark II
Canon PowerShot G2
Canon PowerShot G3
Canon PowerShot G5
Canon PowerShot G5 X
Canon PowerShot G6
Canon PowerShot Pro1
Canon PowerShot Pro70
Canon PowerShot S30
Canon PowerShot S40
Canon PowerShot S45
Canon PowerShot S50
Canon PowerShot S60
Canon PowerShot S70
Canon PowerShot S90
Canon PowerShot SD450
Canon Powershot SX110IS
Canon PowerShot SX130 IS
Canon PowerShot SX1 IS
Canon PowerShot SX50 HS
Canon PowerShot SX510 HS
Canon PowerShot SX60 HS
Canon Poweshot S3IS
Epson R-D1
Fujifilm FinePix E550
Fujifilm FinePix E900
Fujifilm FinePix F600EXR
Fujifilm FinePix F700
Fujifilm FinePix F900EXR
Fujifilm FinePix HS10 HS11
Fujifilm FinePix HS20EXR
Fujifilm FinePix S200EXR
Fujifilm FinePix S2Pro
Fujifilm FinePix S3Pro
Fujifilm FinePix S5000
Fujifilm FinePix S5600
Fujifilm FinePix S6500fd
Fujifilm FinePix X100
Fujifilm X100S
Fujifilm X-A2
Fujifilm XQ1
Hasselblad CF132
Hasselblad CFV
Hasselblad H3D
Kodak DC120
Kodak DC50
Kodak DCS460D
Kodak DCS560C
Kodak DCS Pro SLR/n
Kodak EOS DCS 1
Kodak Kodak C330
Kodak Kodak C603 / Kodak C643
Kodak Z1015 IS
Leaf Aptus 75
Leaf Leaf Aptus 22
Leica Leica Digilux 2
Leica Leica D-LUX 3
Leica M8
Leica M (Typ 240)
Leica V-LUX 1
Mamiya ZD
Minolta DiMAGE 7
Minolta DiMAGE 7Hi
Minolta DiMAGE 7i
Minolta DiMAGE A1
Minolta DiMAGE A200
Minolta DiMAGE A2
Minolta Dimage Z2
Minolta Dynax 5D
Minolta Dynax 7D
Minolta RD-175
Nikon 1 S2
Nikon 1 V1
Nikon Coolpix P340
Nikon Coolpix P6000
Nikon Coolpix P7000
Nikon Coolpix P7100
Nikon D100
Nikon D1
Nikon D1X
Nikon D2X
Nikon D300S
Nikon D3
Nikon D3X
Nikon D40
Nikon D60
Nikon D70
Nikon D800
Nikon D80
Nikon D810
Nikon E5400
Nikon E5700
Nikon LS-5000
Nokia Lumia 1020
Olympus C5050Z
Olympus C5060WZ
Olympus C8080WZ
Olympus E-1
Olympus E-20
Olympus E-300
Olympus E-30
Olympus E-330
Olympus E-3
Olympus E-420
Olympus E-450
Olympus E-500
Olympus E-510
Olympus E-520
Olympus E-5
Olympus E-600
Olympus E-P1
Olympus E-P2
Olympus E-P3
Olympus E-PL5
Olympus SP350
Olympus SP500UZ
Olympus XZ-1
Panasonic DMC-FZ150
Panasonic DMC-FZ18
Panasonic DMC-FZ200
Panasonic DMC-FZ28
Panasonic DMC-FZ30
Panasonic DMC-FZ38
Panasonic DMC-FZ70
Panasonic DMC-FZ72
Panasonic DMC-FZ8
Panasonic DMC-G1
Panasonic DMC-G3
Panasonic DMC-GF3
Panasonic DMC-GF5
Panasonic DMC-GF7
Panasonic DMC-GH2
Panasonic DMC-GH3
Panasonic DMC-GH4
Panasonic DMC-GM1
Panasonic DMC-GX7
Panasonic DMC-L10
Panasonic DMC-L1
Panasonic DMC-LF1
Panasonic DMC-LX1
Panasonic DMC-LX2
Panasonic DMC-LX3
Panasonic DMC-LX5
Panasonic DMC-LX7
Panasonic DMC-TZ60
Panasonic DMC-TZ71
Pentax *ist D
Pentax *ist DL2
Pentax *ist DS
Pentax K100D Super
Pentax K10D
Pentax K20D
Pentax K-50
Pentax K-m
Pentax K-r
Pentax K-S1
Pentax Optio S4
Polaroid x530
Ricoh GR DIGITAL 2
Samsung EX2F
Samsung NX100
Samsung NX300
Samsung NX300M
Samsung NX500
Samsung WB2000
Sigma DP2 Quattro
Sigma DP1s
Sigma DP2 Merrill
Sigma SD10
Sigma SD14
Sigma SD9
Sony DSC-R1
Sony DSC-RX100
Sony DSC-RX100M2
Sony DSC-RX100M3
Sony DSC-RX100M4
Sony DSC-RX10
Sony DSC-RX10M2
Sony DSLR-A100
Sony DSLR-A200
Sony DSLR-A300
Sony DSLR-A330
Sony DSLR-A350
Sony DSLR-A550
Sony DSLR-A580
Sony DSLR-A700
Sony DSLR-A850
Sony DSLR-A900
Sony NEX-3
Sony NEX-5R
Sony NEX-7
Sony SLT-A35
Sony SLT-A58
Sony SLT-A77
Sony SLT-A99
We are really working hard to make sure we are a good resource of freely available raw samples for all Free Software imaging projects to use.
Thank you so much for helping out if you can!
Someone at our makerspace found a fun Halloween project we could do at Coder Dojo: a motion-sensing pumpkin that laughs evilly when anyone comes near.
Great! I've worked with both PIR sensors and ping rangefinders,
and it sounded like a fun project to mentor. I did suggest, however,
that these days a Raspberry Pi Zero W is cheaper than an Arduino, and
playing sounds on it ought to be easier since you have frameworks like
ALSA and pygame to work with.
The key phrase is "ought to be easier".
There's a catch: the Pi Zero and Zero W don't
have an audio output jack like their larger cousins. It's possible to
get analog audio output from two GPIO pins (use the term "PWM output"
for web searches), but there's a lot of noise. Larger Pis have a built-in
low-pass filter to screen out the noise, but on a Pi Zero you have to
add a low-pass filter. Of course, you can buy HATs for Pi Zeros that
add a sound card, but if you're not super picky about audio quality,
you can make your own low-pass filter out of two resistors and two capacitors
per channel (multiply by two if you want both the left and right channels).
There are lots of tutorials scattered around the web about how to add
audio to a Pi Zero, but I found a lot of them confusing; e.g.
Adafruit's
tutorial on Pi Zero sound has three different ways to edit the
system files, and doesn't specify things like the values of the
resistors and capacitors in the circuit diagram (hint: it's clearer if you
download the Fritzing file, run Fritzing and click on each resistor).
There's a clearer diagram in
Sudomod Forums:
PWM Audio Guide, but I didn't find that until after I'd made my own,
so here's mine.
Parts list:
2 x 270 Ω resistor
2 x 150 Ω resistor
2 x 10 nF or 33 nF capacitor
2 x 1 μF electrolytic capacitor
3.5mm headphone jack, or whatever connection you want to use to
your speakers
This wiring assumes you're using pins 13 and 18 for the left and right
channels. You'll need to configure your Pi to use those pins.
Add this to /boot/config.txt:
dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4
Testing
Once you've built your circuit and rebooted so the overlay takes effect, you need to test it.
Plug in your speaker or headphones, then make sure you can play
anything at all:
aplay /usr/share/sounds/alsa/Front_Center.wav
If you need to adjust the volume, run alsamixer and
use the up and down arrow keys to adjust volume. You'll have to press
up or down several times before the bargraph actually shows a change,
so don't despair if your first press does nothing.
That should play in both channels. Next you'll probably be curious
whether stereo is actually working. Curiously, none of the tutorials
address how to test this. If you ls /usr/share/sounds/alsa/
you'll see names like Front_Left.wav, which might lead you to
believe that aplay /usr/share/sounds/alsa/Front_Left.wav
might play only on the left. Not so: it's a recording of a voice
saying "Front left" in both channels. Very confusing!
Of course, you can copy a music file to your Pi, play it (omxplayer
is a nice commandline player that's installed by default and handles
MP3) and see if it's in stereo. But the best way I found to test
audio channels is this:
speaker-test -t wav -c 2
That will play those ALSA voices in the correct channel, alternating
between left and right.
(MythTV has a good Overview of how to use speaker-test.)
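With sound verified, wiring motion to the laugh is only a few lines of Python. Here's a minimal sketch of the pumpkin logic using pygame, assuming a PIR sensor with its output on GPIO 17 and a sound file named evil_laugh.wav next to the script -- both the pin and the filename are placeholders for whatever your build uses:
#!/usr/bin/env python3
# Motion-triggered laugh: wait for the PIR output to go high, play the sound.
import time
import pygame
import RPi.GPIO as GPIO

PIR_PIN = 17                   # hypothetical wiring: PIR signal on GPIO 17
SOUND_FILE = "evil_laugh.wav"  # placeholder filename

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

pygame.mixer.init()            # plays through the ALSA PWM device set up above
laugh = pygame.mixer.Sound(SOUND_FILE)

try:
    while True:
        if GPIO.input(PIR_PIN):    # PIR pulls its output high on motion
            laugh.play()
            time.sleep(5)          # don't retrigger mid-laugh
        time.sleep(0.1)
except KeyboardInterrupt:
    GPIO.cleanup()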
Not loud enough?
I found the volume plenty loud via earbuds, but if you're targeting
something like a Halloween pumpkin, you might need more volume.
The easy way is to use an amplified speaker (if you don't mind
putting your nice amplified speaker amidst the yucky pumpkin guts),
but you can also build a simple amplifier.
Here's one that looks good, but I haven't built one yet:
One Transistor Audio for Pi Zero W
Of course, if you want better sound quality, there are various places
that sell HATs with a sound chip and line or headphone out.
Less than a month after Krita 3.2.1, we’re releasing Krita 3.3.0. We’re bumping the version because there are some important changes in this release, especially for Windows users!
Alvin Wong has implemented support for the Windows 8 event API, which means that Krita now supports the n-trig pen in the Surface line of laptops (and similar laptops from Dell, HP and Acer) natively. This is still very new, so you have to enable this in the tablet settings:
And he also refactored Krita’s hardware-accelerated display functionality to optionally use Angle on Windows instead of native OpenGL. That means that many problems with Intel display chips and broken driver versions are worked around because Krita now can use Direct3D indirectly.
There are more changes in this release, of course:
Some visual glitches when using hi-dpi screens are fixed (remember: on Windows and Linux, you need to enable this in the settings dialog).
If you create a new image from clipboard, the image will have a title.
Favorite blending modes and favorite brush presets are now loaded correctly on startup
GMIC
the plugin has been updated to the latest version for Windows and Linux.
the configuration for setting the path to the plugin has been removed. Krita looks for the plugin in the folder where the krita executable is, and optionally inside a folder with a name that starts with ‘gmic’ next to the krita executable.
there are several fixes for handling layers and communication between Krita and the plugin
Some websites save jpeg images with a .png extension: that used to confuse Krita, but Krita now first looks inside the file to see what kind of file it really is.
PNG:
16 and 32 bit floating point images are now converted to 16 bit integer when saving the images as PNG.
It’s now possible to save the alpha channel to PNG images even if there are no (semi-) transparent pixels in the image
When hardware accelerated display is disabled, the color picker mode of the brush tool showed a broken cursor; this has been fixed.
The Reference Images docker now only starts loading images when it is visible, instead of on Krita startup. Note: the Reference Images docker uses Qt’s imageio plugins to load images. If you are running on Linux, remove all Deepin desktop components. Deepin comes with severely broken qimageio plugins that will crash any Qt application that tries to display images.
File layers now correctly reload on change again
Added several new command-line options (see the example after this list):
--nosplash to start Krita without showing the splash screen
--canvasonly to start Krita in canvas-only mode
--fullscreen to start Krita full-screen
--workspace Workspace to start Krita with the given workspace
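For example, a distraction-free session that skips the splash screen and opens a file straight into canvas-only mode might be started like this (the file name is just an illustration):
krita --nosplash --canvasonly mypainting.kra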
Selections
The Select All action now first clears the selection before selecting the entire image
It is now possible to extend selections outside the canvas boundary
Performance improvements: in several places superfluous reads from the settings were eliminated, which makes generating a layer thumbnail faster and improves painting if display acceleration is turned off.
The smart number input boxes now use the current locale to follow desktop settings for numbers
The system information dialog for bug reports is improved
macOS/OSX specific changes:
Bernhard Liebl has improved the tablet/stylus accuracy. The problem with circles having straight line segments is much improved, though it’s not perfect yet.
On macOS/OSX systems with an AMD GPU, support for hardware-accelerated display is disabled, because saving to PNG and JPG hangs Krita otherwise.
Download
Windows
Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.
The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc. The signatures are here.
Support Krita
Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.
I had a random idea today and wanted to share it in case anybody has thought about this too, or tried something like it, or could add on to the idea.
How We Onboard Today
I onboard, mentor, and think a lot about enabling new contributors to open source software. Traditionally in Fedora, we’ve called out a ‘join’ process for people to join Fedora. If you visit join.fedoraproject.org, you’ll get redirected to a wiki page that gives broad categories of skill sets and suggests Fedora teams you might want to look at to see if you could join them.
I started thinking about this because I’m giving a keynote about open source and UX at Ohio Linux Fest this weekend. One of the sections of the talk basically reviews where / how to find UX designers to help open source projects. Some of the things I mention that have proven effective are internships (Outreachy, formal Red Hat intern program, etc.), training, and design bounties / job boards. Posting UX assistance on say join.fedoraproject.org? Didn’t come up. I can’t tell you if I’ve actually onboarded folks from that workflow – certainly possible. My best success ratio in onboarding contributors in terms of them feeling productive and sticking around the community for a while, though, is with the methods I listed above – not a general call for folks of a certain discipline to come to the design team.
In fact, one of the ways we onboard people to the design team is to assign them a specific task, with the thought that they can learn how our team / processes / tools work by doing, and have a task to focus on for getting help from another member of the team / mentor.
Successful Onboarding Methods are Task-Oriented
Thinking about this, these successful recruitment methods of new contributors all focus on tasks, not skills:
Internships – internships have a set time period focused on the completion of a particular project, scoped for that duration and complexity, that has been documented for the intern. Digging through the archives of proposed Outreachy and GSoC projects unearths (if they were still current) a great set of directions that any new contributor could use to get started.
Training – in my experience, when training folks without UX experience in UX, they had a specific task they were working on already, knew they needed the skill to complete it, and sought out help with the skill. A task was the driver to seek out the skill.
Job board postings – (e.g., like opensourcedesign.net/jobs) – they are focused on a specific task / thing to do.
Bounties – super task-focused!
If onboarding new contributors works well when those new contributors are put to work right away on a specific, assigned task with a well-defined scope, why do we attempt to recruit by categories of skills with loose pointers to teams (that get out of date), instead of by tasks? You might have someone fired up to do *something*, but they’re redirected to a wiki page, then to a mailing list, to wait a few days for someone to respond and tell them “hi, welcome!” without actually helping them figure out what it is they could do.
An Idea For join.fedoraproject.org
If you’re with me here, up to this point… here’s the idea. I haven’t done it yet. I want to hear your feedback on it first.
I thought about redoing join.fedoraproject.org as a bounty board – really a job posting board, but let’s call it a bounty board. Bounties are very well defined tasks. I did a talk on how to create an effective bounty a while back; here’s the high-level crash-course, with a worked example after the list:
Set the Stage. Give the narrative around the task / project. What is the broader story around what the software / website / project / etc. does? Who does it help? How does it make the world a better place? Most importantly, what’s the problem to be solved that the bounty taker will work on, and how does it fit into that broader narrative?
State the Mission. Make a clear statement at what exactly the bounty is – state what the successful completion of the bounty would look like / work.
Provide a Specification with Clear Examples. Give all the details needed – the specification – for the completion of the work. Is there a specific process with steps they should follow? Provide those steps. A specific language, or a specific length, or a certain number of items? Make this all clear.
Provide Resources and Tools. What are the resources that would be the most useful in completing this bounty? Where is the IRC channel for the project? The mailing list? Are there any design asset / source files they will need? How about style guidelines / specifications to follow? Will they need to create any accounts to submit their work? Where? Are there any tutorials / videos / documentation / blog posts that explains the technology of interest that they could refer to in order to familiarize themselves with the domain they’ll be working in? Link out to all this stuff.
Outline the Benefits. Clearly and explicitly state what’s in it for them to take on this bounty. Job sites do this (or at least they try to) too. You’ll become a Fedora contributor! You’ll get a Fedora account and membership in the team, which will get you an email forward! When I did bounties, I sent handwritten thank-you notes with some swag through the mail. You’ll gain skills in X, Y, or Z. You’ll make life better for our users. Some of this is obvious, but it helps to state it explicitly!
Ground Rules and Contact Info. How does someone claim the bounty? Do they need to get an account and assign it to themselves? What happens if they don’t do anything and time has passed, can it be opened up to others interested? (We had a 48-hour rule before we passed on to the next person when we did this on the Design Team.) Who is the contact person / mentor for the assignment? How can they contact that person?
Show Off the Work! – After a bounty is completed, show off the work! Make a post, on a blog or mailing list or wherever, to tell the story of how the person who took the bounty completed it and give a demo or show off their work. (This is a big part of the benefits, too.) This not only gives the new contributor a boost, it’s encouraging to other potential new contributors, as they can see that new contributors are valued and can achieve cool things. It also shows folks who haven’t set up bounties that maybe they should, because it works!
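To make that concrete, here’s a sketch of how a posting following that recipe might read – the task and all details are invented purely for illustration:
Bounty: Design a banner for the Fedora release party wiki page.
The stage: Release parties welcome newcomers to Fedora; the wiki page needs artwork that sets the mood.
The mission: Deliver a web-ready banner (SVG source plus PNG export) for the page header.
Spec: 1200x300 pixels, follows the Fedora design guidelines, text limited to the release name.
Resources: the design team IRC channel and mailing list, links to the guidelines and asset repositories.
Benefits: a Fedora account, design team membership, and your work at the top of every release party page.
Ground rules: comment on the ticket to claim it; if there’s no progress in 48 hours it passes to the next person; the posting mentor is the contact.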
I was thinking about setting this up as a pagure repo, and using the issues section for the actual bounty posting. The notion of status that applies to bugs / issues also applies to bounties, as well as assigning, etc. So maybe it would work well. Issues don’t explicitly manage the queue of bounty takers (should the 1st claimer fall through) but that could be managed through the comments. Any one from any Fedora team could post a bounty in this system. The git repo part of the pagure repo could be used for hosting some general bounty assets / resources – maybe a guide on how to write a good bounty with templates and cool graphics to include, maybe some basic instructions that would be useful for all bounty takers like how to create a new FAS account.
What about easy fix?
We do have a great resource, fedoraproject.org/easyfix, that is similar to this idea in that it uses issues/tickets in a manner geared towards new contributors. It provides a list of bugs that have been denoted as easy to fix by project owners all in one place.
The difference here though, is that these are raw bugs. They don’t have all the components of a bounty as explained above, and looking through some of the active and open ones, you could not get started right away without flagging down the right person and getting an explanation of how to proceed or going back and forth on the ticket. I think one of the things that makes bounties compelling is that you can read them and get started right away.
Bounties *do* take a long time to formulate and document. It is a very similar process to proposing a project for an internship program like Outreachy or Google Summer of Code. I bet, though, I could go around different teams in Fedora and find projects that would fit this scope quite well and start building out a list. Maybe as teams have direct success with a program like this, they’d continue to use it and it’d become self-sustaining. I don’t know, though. Clearly, I stopped doing the design team bounties after 4 or 5 because of the amount of work involved. But maybe if it was a regular thing, we did one every month or something… not sure.
What do you think?
Does this idea make sense? Did I miss something (or totally miss the point)? Do you have a great idea to make it better? Let me know in the comments.
Joseph Conover is a 3D artist at Greenhaus GFX, where he created graphics for several high profile film credit sequences such as Wonder Woman, xXx: Return of Xander Cage, Guardians of the Galaxy Vol. 2 and more. As he stepped into the industry and picked up other creative tools, Joseph found that Blender often gave him an edge in terms of workflow.
Text by Joseph Conover, Greenhaus GFX
I started using Blender about ten years ago and still implement it in my workflow for modeling, simulation, texturing, sculpting, and various other general tasks. The software is so comprehensive that it lets me picture the final product from a wide viewpoint. It offers a big advantage in eliminating guesswork and time wasted when jumping between different programs.
The largest project I’ve worked on at Greenhaus so far is the Wonder Woman end title sequence.
I did too many random things to count, but these are screenshots of notable parts:
Patty Jenkins (Wonder Woman’s director) thought that many scenes in our sequence were too warlike and wanted some uplifting moments, so I 3D projected this view of Themyscira (home to Wonder Woman and the Amazons) based on a painted version created by my boss, Jason Doherty.
Here are several of my more notable models used in various scenes. The woman was based on actress Gal Gadot – sculpted in ZBrush and refined in Blender. For the plane, I took inspiration from WWII German biplanes. My favorite thing to work on was the sword structure, for which I used array and curve modifiers to create a rotating structure effect.
This was one of the environments I got to develop from start to finish. It was a mix of kitbashing and modeling in Blender. The whole process took me only an afternoon because I was able to quickly duplicate the pieces and fill in the space. This scene was also repurposed in different shots throughout the sequence.
Guardians of the Galaxy Vol. 2’s logo was a different story, because it started off in Blender but ended up in C4D. This was the logo our client liked at first, done in Blender with some ’80s-style comping in After Effects:
While Guardians of the Galaxy Vol. 2’s final product didn’t use much of Blender other than the animation, this promotional ad for the 2017 NHL All-Star Weekend did.
This was a great example of Blender’s versatility. For the two shots below, I had to hand-model the scenes to match the Cinerama Dome and the Hollywood Sign. Blender allowed me to quickly draft out my ideas, from animation to the final lighting, before I exported everything to Maya and rendered in V-Ray.
So what are your thoughts? Hit me up at josephconover.com if you want to chat about Blender or just talk art!
Soon I’m going to merge a PR to fwupd that breaks API and ABI and bumps the soname. If you want to use the stable branch, please track 0_9_X. The API break removes all the deprecated API and cruft we’ve picked up in the months since we started the project, and with the 1.0.0 version coming up in a few weeks it seems a sensible time to have a clean out. If it helps, I’m going to put 0.9.x in Fedora 26 and F27, so the master branch is probably only for F28/Rawhide and jhbuild at this point.
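If you’re building from git and want to stay on the stable series, switching an existing clone over is a one-liner (a sketch, assuming your remote is named origin):
git checkout --track origin/0_9_X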
In other news, 4 days ago I became a father again, so expect emails to be delayed and full of confusion. All doing great, but it turns out sleep is for the weak. :)
If you follow me on Instagram or YouTube, you’ve probably noticed all my spare time has recently been consumed by flying racing drones. Winter is approaching, so I’d rather spare my fingers from freezing and focus on my other passion, 3D doodling.
Modifier stack explorations
This blog post is the equivalent of a new year’s resolution. I’ll probably be overwhelmed by duties and drop out of this, but at least being public about it creates some pressure to keep trying. Feel free to help out with the motivation :)
Less than a month after Krita 3.2.1, we’re getting ready to release Krita 3.3.0. We’re bumping the version number because this release contains some important changes for Windows users!
Alvin Wong has implemented support for the Windows 8 event API, which means that Krita now natively supports the N-Trig pen in the Surface line of laptops (and similar laptops from Dell, HP and Acer). This is still very new, so you have to enable it in the tablet settings:
He also refactored Krita’s hardware-accelerated display functionality to optionally use ANGLE on Windows instead of native OpenGL. That works around many problems with Intel display chips and broken driver versions, because Krita now indirectly uses Direct3D.
There are more changes in this release, of course:
Some visual glitches when using HiDPI screens are fixed (remember: on Windows and Linux, you need to enable HiDPI support in the settings dialog).
If you create a new image from the clipboard, the image will have a title.
Favorite blending modes and favorite brush presets are now loaded correctly on startup.
G'MIC:
The plugin has been updated to the latest version for Windows and Linux.
The configuration option for setting the path to the plugin has been removed. Krita now looks for the plugin in the folder where the krita executable is, and optionally inside a folder with a name that starts with ‘gmic’ next to the krita executable.
There are several fixes for handling layers and for communication between Krita and the plugin.
Some websites save JPEG images with a .png extension: that used to confuse Krita, but Krita now first looks inside the file to see what kind of file it really is.
PNG:
16- and 32-bit floating point images are now converted to 16-bit integer images when saving as PNG.
It’s now possible to save the alpha channel to PNG images even if there are no (semi-)transparent pixels in the image.
When hardware accelerated display is disabled, the color picker mode of the brush tool showed a broken cursor; this has been fixed.
The Reference Images docker now only starts loading images when it is visible, instead of on Krita startup. Note: the Reference Images docker uses Qt’s imageio plugins to load images. If you are running on Linux, remove all Deepin desktop components: Deepin comes with severely broken qimageio plugins that will crash any Qt application that tries to display images.
File layers now correctly reload on change again
Add several new command line options (see the example after the list):
--nosplash to start Krita without showing the splash screen
--canvasonly to start Krita in canvas-only mode
--fullscreen to start Krita full-screen
--workspace Workspace to start Krita with the given workspace
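For example, here’s a sketch of starting straight into distraction-free painting (assuming a workspace named Animation exists; the file name is made up):
krita --nosplash --canvasonly --workspace Animation sketch.kra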
Selections
The Select All action now first clears the selection before selecting the entire image
It is now possible to extend selections outside the canvas boundary
Performance improvements: in several places superfluous reads from the settings were eliminated, which makes generating a layer thumbnail faster and improves painting if display acceleration is turned off.
The smart number input boxes now use the current locale to follow desktop settings for numbers
The system information dialog for bug reports is improved
macOS/OSX specific changes:
Bernhard Liebl has improved the tablet/stylus accuracy. The problem with circles having straight line segments is much improved, though it’s not perfect yet.
On macOS/OSX systems with an AMD GPU, support for hardware-accelerated display is disabled, because saving to PNG and JPG hangs Krita otherwise.
Download
Windows
Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes. There are no 32-bit packages at this point, but there will be for the final release.
The Linux AppImage and the source tarball are signed. You can retrieve the public key over HTTPS here: 0x58b9596c722ea3bd.asc. The signatures are here.
Support Krita
Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.
We've had support for the DualShock 3 (aka Sixaxis, aka the PlayStation 3 controller) for a long while, but I've now added a long-standing patchset to the Fedora packages that changes the way devices are set up.
The old way was: plug in your joypad via USB, disconnect it, and press the "P" button on the pad. At this point, and since GNOME 3.12, you would have needed the Bluetooth Settings panel open for a question to pop up asking whether the joypad could connect.
This was broken in a number of ways. If you were trying to just charge the joypad, it would forget its original "console" and you would need to plug it in again. If you didn't have the Bluetooth panel open when trying to use it wirelessly, it just wouldn't have worked.
Setup is now simpler. Open the Bluetooth panel, plug in your device, and answer the question. You just want to charge it? Dismiss the query, or simply don't open the Bluetooth panel; it'll work dandily and won't overwrite the joypad's settings.
And finally, we also made sure that it works with PlayStation 4 controllers.
Note that the PlayStation 4 controller has a button combination that makes it visible and pairable, except that if the device trying to connect with it doesn't behave in a particular way (probably the same way the 25€ RRP USB adapter does), it just won't work. It certainly didn't work for me on a number of different devices.
Cable pairing for the win!
And the boring stuff
Hey, do you know what happened last week? There was a security problem in a package that I glance at sideways sometimes! Yes. Again.
We've finally done this in recent fprintd and iio-sensor-proxy upstream releases, as well as for bluez in Fedora Rawhide. If testing goes well, we will integrate this in Fedora 27.
When I upgraded my Asus laptop to Stretch, one of the things that stopped working was the screen brightness keys (Fn-F5 and Fn-F6). In Debian Jessie they had always just automagically worked without my needing to do anything, so I'd never actually learned how to set brightness on this laptop. The fix, like so many things, is easy once you know where to look.
It turned out the relevant files are in /sys/class/backlight/intel_backlight.
cat /sys/class/backlight/intel_backlight/brightness
tells you the current brightness; write a number to the same file to change it.
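For example, something like this sets a dimmer value (a sketch: the valid range runs from 0 up to whatever the neighboring max_brightness file reports, and the file is only writable by root):
sudo sh -c 'echo 500 > /sys/class/backlight/intel_backlight/brightness'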
That at least got me going (ow my eyes, full brightness is migraine-inducing in low light), but of course I wanted it back on the handy function keys. I wrote a script named "dimmer", with a symlink to "brighter", that goes like this:
#!/bin/zsh
# Behave as "dimmer" or "brighter" depending on the name we were invoked by.
curbright=$(cat /sys/class/backlight/intel_backlight/brightness)
if [[ $(basename $0) == 'brighter' ]]; then
    newbright=$((curbright + 200))
else
    newbright=$((curbright - 200))
fi
echo from $curbright to $newbright
# The sysfs file is only writable by root.
sudo sh -c "echo $newbright > /sys/class/backlight/intel_backlight/brightness"
That let me type "dimmer" or "brighter" to the shell to change the brightness, with no need to remember that /sys/class/whatsit path. I got the names of the two function keys by running xev and typing Fn and F5, then Fn and F6. Then I edited my Openbox ~/.config/openbox/rc.xml, and added:
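Something along these lines (a sketch of the two keybinding stanzas; the key names are whatever xev reported, assumed here to be XF86MonBrightnessDown and XF86MonBrightnessUp, with dimmer and brighter on the PATH Openbox sees):
<keybind key="XF86MonBrightnessDown">
  <action name="Execute"><command>dimmer</command></action>
</keybind>
<keybind key="XF86MonBrightnessUp">
  <action name="Execute"><command>brighter</command></action>
</keybind>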
This past week artist David Revoy visited the Université Cergy-Pontoise in Paris, France to give a Krita training. The university’s teacher, Nicolas Priniotakis, has been using Linux and other open source technology such as Blender. This was the first time the students had been exposed to Krita… and with David’s help, the results were a success!
At the moment the appstream-builder in Fedora requires a 48×48px application icon to be included in the AppStream metadata. I’m sure it’s no surprise that 48×48 padded to 64×64 and then interpolated up to 128×128 (for HiDPI screens) looks pretty bad. For Fedora 28 and higher I’m going to raise the minimum icon size to 64×64, which I hope people realize is actually a really low bar.
For Fedora 29 I think 128×128 would be a good minimum. From my point of view the best applications in the software center already ship large icons, and the applications with tiny icons are usually of poor quality, buggy, or just unmaintained upstream. I think it’s fine for a software center to do the equivalent of “you must be this high to ride” and if we didn’t keep asking more of upstreams we’d still be in a world with no translations, no release information and no screenshots.
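For most applications, shipping a bigger icon just means installing the extra sizes into the standard hicolor icon theme paths that appstream-builder already looks at, something like this (a sketch, with a made-up application ID):
/usr/share/icons/hicolor/64x64/apps/org.example.App.png
/usr/share/icons/hicolor/128x128/apps/org.example.App.png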
Also note, applications don’t have to ship larger icons; it’s not like they’re going to fall out of Fedora, as they’re still installable from the command line using DNF, although I agree this will impact the number of people installing and using a specific application. Comments welcome.
WebDriver is an automation API to control a web browser. It lets you create automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.
WebDriver in WebKitGTK+
There’s a new process (WebKitWebDriver) that works as the server, processing client requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser; it can be used with any WebKitGTK+-based browser, but it uses MiniBrowser as the default. The driver uses the same remote controlling protocol used by the remote inspector to communicate with and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.
The clients
The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. Our Python support is not yet upstream in Selenium, but we hope it will be integrated soon; in the meantime you can use our fork on GitHub. Let’s see an example to understand how it works and what we can do.
from selenium import webdriver
# Create a WebKitGTK driver instance. It spawns WebKitWebDriver
# process automatically that will launch MiniBrowser.
wkgtk = webdriver.WebKitGTK()
# Let's load the WebKitGTK+ website.
wkgtk.get("https://www.webkitgtk.org")
# Find the GNOME link.
gnome = wkgtk.find_element_by_partial_link_text("GNOME")
# Click on the link.
gnome.click()
# Find the search form.
search = wkgtk.find_element_by_id("searchform")
# Find the first input element in the search form.
text_field = search.find_element_by_tag_name("input")
# Type epiphany in the search field and submit.
text_field.send_keys("epiphany")
text_field.submit()
# Let's count the links in the contents div to check we got results.
contents = wkgtk.find_element_by_class_name("content")
links = contents.find_elements_by_tag_name("a")
assert len(links) > 0
# Quit the driver. The session is closed so MiniBrowser
# will be closed and then WebKitWebDriver process finishes.
wkgtk.quit()
Note that this is just an example to show how to write a test and the kinds of things you can do. There are better ways to achieve the same results, and since the test depends on the current contents of public websites, it might not work in the future.
Web browsers / applications
As I said before, the WebKitWebDriver process supports any WebKitGTK+-based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.
First of all, the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It’s important to know that the WebKitGTK+ API doesn’t allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It’s recommended that applications add a new command line option to enable automation, and only enable it when that option is provided.
After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view, which should be returned by the signal handler. This signal will always be emitted even if the browser already has an initial web view open; in that case it’s recommended to return the existing empty web view.
Web views are also automation aware: similar to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled.
This is the new API that applications need to implement to support WebDriver. It’s designed to be as safe as possible, but there are many things that can’t be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation (a rough code sketch follows the list):
Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
Enabling automation is not the only thing the application should do, so add an automation mode to your application.
Add visual feedback when in automation mode, like changing the theme or the window title, or whatever makes it clear that a window or instance of the application is controllable by automation.
Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
Use ephemeral web views in automation mode.
Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.
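Putting those pieces together, the application side can look roughly like this (a sketch, not a complete browser: context setup, window packing and the hypothetical --automation-mode flag handling are left out):
#include <webkit2/webkit2.h>

/* Called whenever the driver needs a new web view. */
static WebKitWebView *
create_web_view_cb (WebKitAutomationSession *session, gpointer user_data)
{
    /* Views handed to automation must be created with the
     * "is-controlled-by-automation" construct property enabled. */
    GtkWidget *view = g_object_new (WEBKIT_TYPE_WEB_VIEW,
                                    "is-controlled-by-automation", TRUE,
                                    NULL);
    /* ... pack the view into a new window or tab here ... */
    return WEBKIT_WEB_VIEW (view);
}

/* Emitted on the context when the driver requests a new session. */
static void
automation_started_cb (WebKitWebContext *context,
                       WebKitAutomationSession *session,
                       gpointer user_data)
{
    g_signal_connect (session, "create-web-view",
                      G_CALLBACK (create_web_view_cb), NULL);
}

/* Call at startup, and only when the (hypothetical) --automation-mode
 * command line flag was given; never in a normal instance. */
static void
enable_automation (WebKitWebContext *context)
{
    webkit_web_context_set_automation_allowed (context, TRUE);
    g_signal_connect (context, "automation-started",
                      G_CALLBACK (automation_started_cb), NULL);
}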
The WebKitGTK client driver
Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn’t know how to launch the browsers. That information should be provided by the client using the WebKitGTKOptions object. The driver constructor can receive an instance of a WebKitGTKOptions object with the browser information and other options. Let’s see how it works with an example that launches Epiphany:
from selenium import webdriver
from selenium.webdriver import WebKitGTKOptions
options = WebKitGTKOptions()
options.browser_executable_path = "/usr/bin/epiphany"
options.add_browser_argument("--automation-mode")
epiphany = webdriver.WebKitGTK(browser_options=options)
Again, this is just an example; Epiphany doesn’t even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK one to make it more convenient to use:
from selenium import webdriver
epiphany = webdriver.Epiphany()
Plans
During the next release cycle, we plan to do the following tasks:
Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
Add support for running the WPT WebDriver tests in the WebKit bots.
Add a WebKitGTK driver implementation for other languages in Selenium.
Add support for automation in Epiphany.
Add WebDriver support to WPE/dyz.
Graphics Planet
Graphics Planet is a window into the world of people involved in free software development in the field of computer graphics and publishing. We are users and developers of programs such as GIMP, Blender, Inkscape, Scribus, Krita and Dia, and of other related projects such as the Tango Desktop Project.
If you are a developer on an existing graphics software project distributed under a free software license, you are welcome to have your blog added here. If you are an artist who uses free software, have contributed artwork to free software projects, and can get any two people below to vouch for your abilities, you can have your blog added here too. Bug the maintainers about it.
In case you don't have a blog already, it's easy to get one and post messages and images on it. The ideal way is to get your own web domain yourname.com, host it someplace, and install blogging software like WordPress on it. Otherwise, you can create an account on any of the numerous free blogging services such as Blogger, LiveJournal, WordPress.com, etc. and tell us so we can add you here. For hosting images, you can use something like Flickr or Google's Picasa.
The config files of Graphics Planet are stored in its git repository. Graphics Planet is maintained by Mukund. Please mail him if you have a question or would like your blog added to the feed.
Blog entries aggregated on this page are owned by, and represent the opinion of, their authors.