F2FS Filesystem Enhancements (for Pixel Devices), Wine HQ Dev Release, Gzip v1.10, VideoLan v3.0.5, KaOS Linux Distro v2018.12

To start things off, a ton of bug fixes alongside a few enhancements are coming to the F2FS filesystem (for Pixel devices) in the Linux 4.21 kernel.

Wine HQ just officially announced the development release of version 4.0 RC4 which also boasts numerous bug fixes.

The release of Gzip version 1.10 has been announced on the Savannah community site.

All while VideoLAN published VLC version 3.0.5.

In distribution news, KaOS, the rolling release Linux distribution, just pushed out version 2018.12.

The State of Desktop Linux 2019

A snapshot of the current state of Desktop Linux at the start of 2019—with comparison charts and a roundtable Q&A with the leaders of three top Linux distributions.

I've never been able to stay in one place for long—at least in terms of which Linux distribution I call home. In my time as a self-identified "Linux Person", I've bounced around between a number of truly excellent ones. In my early days, I picked up boxed copies of S.u.S.E. (back before they made the U uppercase and dropped the dots entirely) and Red Hat Linux (before Fedora was a thing) from store shelves at various software outlets.

Side note: remember when we used to buy Operating Systems—and even most software—in actual boxes, with actual physical media and actual printed manuals? I still have big printed manuals for a few early Linux versions, which, back then, were necessary for getting just about everything working (from X11 to networking and sound). Heck, sometimes simply getting a successful boot required a few trips through those heavy manuals. Ah, those were the days.

Debian, Ubuntu, Fedora, openSUSE—I spent a good amount of time living in the biggest distributions around (and many others). All of them were fantastic. Truly stellar. Yet, each had their own quirks and peculiarities.

As I bounced from distro to distro, I developed a strong attachment to just about all of them, learning, as I went, to appreciate each for what it was. Just the same, when asked which distribution I recommend to others, my brain begins to melt down. Offering any single recommendation feels simply inadequate.

Choosing which one to call home, even if simply on a secondary PC, is a deeply personal choice.

Maybe you have an aging desktop computer with limited RAM and an older, but still absolutely functional, CPU. You're going to need something light on system resources that runs on 32-bit processors.

Or, perhaps you work with a wide variety of hardware architectures and need a single operating system that works well on all of them—and standardizing on a single Linux distribution would make it easier for you to administer and update all of them. But what options are even available?

To help make this process a bit easier, I've put together a handy set of charts and graphs to let you quickly glance and find the one that fits your needs (Figures 1 and 2).

Figure 1. Distribution Comparison Chart I

Figure 2. Distribution Comparison Chart II

Freescale and NXP PowerPC Microprocessors Protected Against Spectre, Chromebook to Support Dual-Boot Mode, Bloodstained: Ritual of the Night Game Kickstarted Campaign Cancels Linux Port

One year later, the Freescale and NXP PowerPC microprocessors are now protected against variant 2 of the Spectre vulnerability.

For those who absolutely need those one or two applications from Windows, the Chromebook will soon officially support a dual-boot mode in which users can install both Windows and Chrome OS side by side. Unlike the Linux app support within Chrome OS, this new feature will allow you to run only one of the operating systems at a time.

In upsetting news, the team behind Koji Igarashi's Kickstarter-funded, yet-to-be-released game, Bloodstained: Ritual of the Night, has officially announced that the ports to both macOS and Linux are now cancelled. Bloodstained is a Castlevania clone, and I personally funded it, so I am extremely upset myself.

The Ceph Foundation and Building a Community: an Interview with SUSE

ceph logo

On November 12 at the OpenStack Summit in Berlin, Germany, the Linux Foundation formally announced the Ceph Foundation. Present at this same summit were key individuals from SUSE and the SUSE Enterprise Storage team. For those less familiar with the SUSE Enterprise Storage product line, it is entirely powered by Ceph technology.

With Ceph, data is treated and stored as objects. This is unlike traditional (and legacy) data storage solutions, where data is written to and read from storage volumes at sector offsets (often referred to as blocks). When dealing with large amounts of data, treating it as objects is far more practical, and it's also much easier to manage. In fact, this is how the cloud functions—with objects. This object-driven model gives Ceph simplified scalability to meet consumer demand easily. These objects are replicated across an entire cluster of nodes, giving Ceph its fault tolerance and further reducing single points of failure. The parent company of the project and its technology was acquired by Red Hat, Inc., in April 2014.

I was fortunate in that I was able to connect with a few key SUSE representatives for a quick Q & A, as it relates to this recent announcement. I spoke with Lars Marowsky-Brée, SUSE Distinguished Engineer and member of the governing board of the Ceph Foundation; Larry Morris, Senior Product Manager for SUSE Enterprise Storage; Sanjeet Singh, Solutions Owner for SUSE Enterprise Storage; and Michael Dilio, Product and Solutions Marketing Manager for SUSE Enterprise Storage.

Petros Koutoupis: How has IBM's recent Red Hat, Inc., acquisition announcement affected the Ceph project, and do you believe this is what led to the creation of the Ceph Foundation?

SUSE: With Ceph being an Open Source community project, there is no anticipated effect on the Ceph project as a result of the pending IBM acquisition of Red Hat. Discussions and planning of the Ceph foundation have been going on for some time and were not a result of the acquisition announcement.

PK: For some time, SUSE has been fully committed to the Ceph project and has even leveraged the same technology in its SUSE Enterprise Storage offering. Will these recent announcements impact both the offering and the customers using it?

SUSE: The Ceph Foundation news is a validation of the vibrancy of the Ceph community. There are 13 premier members, with SUSE being a founding and premier member.

Chrome OS To Test GPU Support for Linux Installed Apps, antiX Distro v17.3, OpenMandriva Project vLx4.0, Hummingboard CBi Released

It seems that Chrome OS will soon start testing GPU support for installed Linux applications. This is good news for those who wish to run applications that require a bit more horsepower (e.g., games). You can view the code commits to these changes here.

Yesterday, the antiX Linux distribution announced the release of version 17.3. It boasts an updated kernel that better mitigates the L1TF/Foreshadow and Meltdown/Spectre vulnerabilities, bug fixes and package updates.

Along those same lines, the OpenMandriva project just announced the first Alpha release of version Lx 4.0.

SolidRun, a company focused on manufacturing Linux-supported SBCs and embedded boards, just announced the release of another addition to its Hummingboard series, called the Hummingboard CBi. This new model swaps the original HDMI port for CAN and serial ports and is tailored more for industrial use.

More Roman Numerals and Bash

When in Rome: finishing the Roman numeral converter script.

In my last article, I started digging in to a classic computer science puzzle: converting Roman numerals to Arabic numerals. First off, it more accurately should be called Hindu-Arabic, and it's worth mentioning that it's believed to have been invented somewhere between the first and fourth century—a counting system based on 0..9 values.

The script I ended up with last time offered the basics of parsing a specified Roman numeral and converted each value into its decimal equivalent with this simple function:

mapit() {
   case $1 in
     I|i) value=1 ;;
     V|v) value=5 ;;
     X|x) value=10 ;;
     L|l) value=50 ;;
     C|c) value=100 ;;
     D|d) value=500 ;;
     M|m) value=1000 ;;
      * ) echo "Error: Value $1 unknown" >&2 ; exit 2 ;;
   esac
}

Then I demonstrated a slick way to use the underutilized seq command to parse a string character by character, but the sad news is that you won't be able to use it for the final Roman numeral to Arabic numeral converter. Why? Because depending on the situation, the script sometimes will need to jump two ahead, and not just go left to right linearly, one character at a time.

Instead, you can build the main loop as a while loop:

while [ $index -lt $length ] ; do

    # ...our code...

    index=$(( $index + 1 ))
done

There are two basic cases to think about in terms of solving this algorithmic puzzle: the subsequent value is greater than the current value, or it isn't—for example, IX versus II. The first is 9 (literally 1 subtracted from 10), and the second is 2. That's no surprise; you'll need to know both the current and next values within the script.

Sharp readers already will recognize that the last character in a sequence is a special case, because there won't be a next value available. I'm going to ignore the special case to start with, and I'll address it later in the code development. Stay tuned, sharpies!

Because Bash shell scripts don't have elegant in-line functions, the code to get the current and next values won't be value=mapit(romanchar), but it'll be a smidge clumsy with its use of the global variable value:

mapit ${romanvalue:index-1:1}
mapit ${romanvalue:index:1}

It's key to realize that in the situation where the next value isn't greater than the current value (for example, MC), you can't automatically conclude that the next value isn't going to be part of a complex two-value sequence anyway. Like this: MCM. You can't just say M=1000 and C=100, convert MC to 1100, and process the second M when you get to it. MCM=1900, not 2100!

The basic logic turns out to be pretty straightforward:
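Assembling those pieces, the whole converter can be sketched roughly as follows. To be clear, this is my own illustration of the logic described above (the function name roman_to_arabic and the exact loop arithmetic are mine), not necessarily the article's final script:

```shell
#!/bin/bash
# Sketch of the Roman-to-Arabic converter. mapit sets the global
# $value, as in the snippets above.

mapit() {
   case $1 in
     I|i) value=1 ;;
     V|v) value=5 ;;
     X|x) value=10 ;;
     L|l) value=50 ;;
     C|c) value=100 ;;
     D|d) value=500 ;;
     M|m) value=1000 ;;
      * ) echo "Error: Value $1 unknown" >&2 ; exit 2 ;;
   esac
}

roman_to_arabic() {
   local romanvalue=$1
   local length=${#romanvalue}
   local sum=0 index=1 current next

   while [ $index -le $length ] ; do
      mapit ${romanvalue:index-1:1}
      current=$value
      if [ $index -lt $length ] ; then
         mapit ${romanvalue:index:1}
         next=$value
      else
         next=0       # last character: the special case, no next value
      fi
      if [ $next -gt $current ] ; then
         sum=$(( sum + next - current ))   # subtractive pair, e.g. IX
         index=$(( index + 2 ))            # jump two ahead
      else
         sum=$(( sum + current ))
         index=$(( index + 1 ))
      fi
   done
   echo $sum
}

roman_to_arabic MCM    # prints 1900
```

Note how the subtractive branch advances the index by two, which is exactly why the seq-based left-to-right scan had to be abandoned.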

Temperature Monitoring Support for AMD Zen 2, PowerPC On-Chip Controller in 4.21 Kernel, Changes Coming in the MIPS Arena, 4MLinux Beta Release

Temperature monitoring support for the AMD Zen 2 microprocessor is hitting the 4.21 Linux kernel.

The 4.21 kernel is also introducing the PowerPC On-Chip Controller (OCC), which reports sensor data ranging from temperatures to power. The same OCC hardware is available on IBM POWER platforms and more specifically, their POWER8 and POWER9 generation processors.

It doesn't stop there with the 4.21 kernel. A lot of changes are coming in the MIPS microprocessor arena. The changes, both large and small, range from removing floating-point support to shrinking the kernel for the architecture (in preparation for nanoMIPS), alongside many other optimizations. You can read the full list here.

4MLinux just released its beta release of version 28.0 for testing. It is expected that the stable release will be made available in March 2019.

Five Trends Influencing Linux’s Growth at the Endpoint

A recent IDC InfoBrief identified Linux as the only endpoint operating system growing globally. While Windows market share remains flat, at 39% in 2015 and 2017, Linux has grown from 30% in 2015 to 35% in 2017, worldwide. And the trend is accelerating.

Considering everywhere that systems built around the Linux kernel are used, we quickly realize that Linux is the most dominant operating system in the comparatively brief history of computer technology. Information systems have changed dramatically since August 1991 when Linus Torvalds announced, “I'm doing a (free) operating system (just a hobby, won't be big like gnu) for 386(486) AT clones.” With all due respect to the icons of open source, Linus is without question the Nikola Tesla of information technology.

The influence of Linux boggles the mind—smartphones, televisions, digital video recorders, airline entertainment systems, automobile control systems, digital signage, routers, switches and, of course, the desktop operating system for the one percent, which in this case is those of us who run a Linux distro as our core OS.

Why Linux?

Although the “Which is the better operating system: Microsoft Windows or a Linux-based OS?” debate is as popular as ever these days, in truth, Linux has won the war. If there is any doubt, consider the influence of the Linux-based Android operating system (and its UNIX-based Apple brethren) compared to that of Microsoft Windows. Windows still has a place in our lives, but only because of the large volume of core applications that require a Windows OS. This will not always be the case, and to Microsoft’s credit, it has seen the future, and the future is Linux.

Over the past ten years, Microsoft has been enabling Linux and open-source technology. In July 2009, Microsoft quietly contributed 22,000 lines of source code to the Linux kernel under the GPLv2 license. Without a doubt, Microsoft’s motives were self-serving; it needed to ensure that Windows and Linux would interoperate well into the future. Microsoft achieved this goal when its code was accepted by the Linux kernel developers.

In more recent years, Microsoft has continued to embrace Linux. Microsoft supports Linux-based operating systems running on its Hyper-V hypervisor and on Microsoft Azure, which uses Linux-based components and supports Linux OS guests. Microsoft ported SQL Server to Linux, initially for internal use, and has since made it publicly available. And by developing the Windows Subsystem for Linux, Microsoft made it possible to run Linux application workloads on Windows Server.

Linux 4.20 and GNU Linux-libre 4.20-gnu Released, Darktable 2.6 Now Available, New Version of SuperTux and GDB 8.2.1 Is Out

News briefs for December 24, 2018.

Linux 4.20 was released yesterday. Of the release, Linus Torvalds writes, "let's face it, last week wasn't quite as quiet as I would have hoped for, but there really doesn't seem to be any point to delay 4.20 because everybody is already taking a break. And it's not like there are any known issues, it's just that the shortlog below is a bit longer than I would have wished for. Nothing screams 'oh, that's scary', though."

GNU Linux-libre 4.20-gnu is also now available. Links to sources and tarballs are here.

Darktable 2.6 was released today. Phoronix reports that this new version of the open-source RAW photography workflow software includes experimental PowerPC PPC64LE support and "also brings a number of new modules around handling of duplicate images, allowing changes based on image frequency layers, new logarithm controls for the tone curve, ProPhotoRGB and HSL modes for the color balance module, and a lot more." See also the GitHub page for more details.

The SuperTux team recently announced the release of version 0.6.0 of the game, which comes after almost two years of development. Changes include a "complete redesign of the icy world and forest", a revamp of the rendering engine, support for OpenGL 3.3 Core as well as OpenGL ES 2.0 and more. Source tarballs and builds are available on the Downloads page or via GitHub.

GDB 8.2.1 was released yesterday. This version of the GNU Debugger brings lots of fixes and enhancements. For the complete list, see the gdb/NEWS file. You can download GDB from the GNU FTP server.

Top 12 Tech Tips from 2018

KStars v3.0.0 Now Available, Malware Targeting IoT Devices Is Growing, Enhanced Privacy Settings for Mozilla’s Latest Firefox Focus, Coreboot 4.9 Released and Pivotal Announces Pivotal Cloud Foundry Platform Version 2.4

News briefs for December 21, 2018.

KStars v3.0.0 was released today after four months of development. Jasem's Ekosphere blog post lists all the new features including the XPlanet Solar System View developed by Robert Lancaster, significant improvements to FITS viewer GUI, scheduler improvements and more.

Malware targeting IoT devices is growing. BetaNews reports that according to McAfee Labs, "new malware targeting IoT devices grew 72 percent with total malware growing 203 percent in the last four quarters". The growth is partly attributed to devices being harnessed for cryptomining. See the McAfee Labs Threats Report, December 2018 for all the details.

Mozilla announces the latest release of Firefox Focus, introducing enhanced privacy settings. According to the Mozilla blog, "You can choose to block all cookies on a website, no cookies at all—the default so far—third party cookies or only 3rd party tracking cookies as defined by Disconnect's Tracking Protection list. If you go with the latter option, which is new to Firefox Focus and also the new default, cross-site tracking will be prevented." You can get the latest version of Firefox Focus from Google Play and in the App Store.

Coreboot 4.9 was released this week sporting more than 2,600 changes and ports to 56 new motherboards. According to Phoronix, Coreboot 4.9 "features a number of code clean-ups to the different motherboard ports and all over, the Coreboot documentation is now hosted within the repository, the Intel FSP binaries are now integrated within the build system, and a number of older boards have been deprecated". See the release notes for more details.

Pivotal yesterday announced the release of version 2.4 of its Pivotal Cloud Foundry (PCF) platform, which is a commercial distribution based on the open-source Cloud Foundry project. New to this version, according to eWeek, is "zero downtime updates for application deployments, enabling organizations to roll out upgrades without downtime. PCF 2.4 also introduces a new compliance scanner in beta that will enable organizations to validate that the configuration of PCF deployments meets best practices".

CI/CD and the New Generation of Software Delivery: an Interview with Harness

harness logo

Continuous integration and continuous delivery (CI/CD) is all the rage in the modern world of software development. But what actually is this pipeline process? It's a method, or set of principles, by which development teams implement and deliver code more frequently and reliably.

Continuous integration embodies a coding philosophy and set of practices propelling teams to implement small and frequent code changes into version control repositories, while continuous delivery picks up where CI ends and automates the application's delivery.

Many platforms, such as Jenkins and CircleCI, exist to help companies and teams streamline the development and integration of their software stacks, but not much exists in the way of easing and automating the process of delivery. And with what does exist, the solutions tend to fall short with features and functionality, or they are overly complicated to configure in the first place.

This is where Harness comes into the picture. Harness produces the industry's very first Continuous Delivery-as-a-Service platform. Using machine learning, it simplifies and automates the entire CD process. Steve Burton, VP of marketing at Harness, recently took the time to share more details with me.

Petros Koutoupis: Please introduce yourself to our readers.

Steve Burton: While officially the VP of marketing, I am a DevOps Evangelist over at Harness. What this means is that I do a little bit of everything. While most of my career has been in product management and marketing, I stepped out of the university with a bachelor's degree in computer science and an initial career in Java development (ca. 2004 at Sapient), working on large-scale enterprise J2EE implementations. Prior to Harness, I did geek stuff at AppDynamics, Moogsoft and Glassdoor. And when not knee-deep in tech, I enjoy spending my time watching F1 and researching cars on the web.

PK: What is Harness?

SB: We provide Continuous Delivery as-a-Service. It's the CD bit of the CI/CD equation that helps customers automate how their software is deployed and delivered to end users in production.

We basically allow customers to move fast without breaking things, so they can increase developer velocity without the risk of downtime or failure.

PK: What problem or problems does Harness solve?

SB: Developers are under tremendous pressure to deliver applications to production, fast and with zero error. It's a constant pain, one that I personally dealt with as a former Java developer. Our founders had also seen this challenge firsthand, and that's why they started Harness.

Qt Announces Qt for Python, All US Publications from 1923 to Enter the Public Domain in 2019, Red Hat Chooses Team Rubicon for Its 2018 Corporate Donation, SUSE Linux Enterprise 15 SP1 Released and Microsoft Announces Open-Source "Project Mu"

News briefs for December 20, 2018.

Qt introduces Qt for Python. This new offering allows "Python developers to streamline and enhance their user interfaces while utilizing Qt's world-class professional support services". According to the press release, "With Qt for Python, developers can quickly and easily visualize the massive amounts of data tied to their Python development projects, in addition to gaining access to Qt's world-class professional support services and large global community." To download Qt for Python, go here.

As of January 1, 2019, all works published in the US in 1923 will enter the public domain. The Smithsonian reports that it's been "21 years since the last mass expiration of copyright in the U.S." The article continues: "The release is unprecedented, and its impact on culture and creativity could be huge. We have never seen such a mass entry into the public domain in the digital age. The last one—in 1998, when 1922 slipped its copyright bond—predated Google. 'We have shortchanged a generation,' said Brewster Kahle, founder of the Internet Archive. 'The 20th century is largely missing from the internet.'"

Red Hat chooses Team Rubicon for its 2018 US corporate holiday donation. The $75,000 donation will "contribute to the organization's efforts to provide emergency response support to areas devastated by natural disasters." From Red Hat's announcement: "By pairing the skills and experiences of military veterans with first responders, medical professionals and technology solutions, Team Rubicon aims to provide the greatest service and impact possible. Since its inception following the 2010 Haiti earthquake, Team Rubicon has launched more than 310 disaster response operations in the U.S. and across the world—including 86 in 2018 alone."

SUSE Linux Enterprise 15 Service Pack 1 Beta 1 is now available. Changes include making Java 11 the default JRE, updating libqt to 5.9.7, updating LLVM to version 7, and much more. According to the announcement, "roughly 640 packages have been touched specifically for SP1, in addition to packages updated with Maintenance Updates since SLE 15." See the release notes for more information.

Microsoft yesterday announced "Project Mu" as an open-source UEFI alternative to TianoCore. Phoronix reports that "Project Mu is Microsoft's attempt at 'Firmware as a Service' delivered as open-source. Microsoft developed Project Mu under the belief that the open-source TianoCore UEFI reference implementation is 'not optimized for rapid servicing across multiple product lines.'" See also the Microsoft blog for details.

Removing Duplicate PATH Entries: Reboot


In my first post on removing duplicate PATH entries I used an AWK one-liner. In the second post I used a Perl one-liner, or more accurately, I tried to dissect a Perl one-liner provided by reader Shaun. Shaun had asked that if I was willing to use AWK (not Bash), why not use Perl? It occurred to me that one might also ask: why not just use Bash? So, one more time into the void.
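As a taste of what the pure-Bash approach can look like, here is one way to do it. This is my own sketch of the technique, not necessarily the code from the original post: walk the existing PATH entry by entry and rebuild it, keeping only the first occurrence of each directory.

```shell
# Pure-Bash PATH de-duplication: rebuild the PATH, keeping only the
# first occurrence of each entry. An illustrative sketch.
dedup_path() {
   local old=$1 new='' entry
   local IFS=':'                       # split input on colons
   for entry in $old ; do
      case ":$new:" in
         *":$entry:"*) ;;              # already seen: skip it
         *) new=${new:+$new:}$entry ;; # first occurrence: append
      esac
   done
   printf '%s\n' "$new"
}

# Typical use: PATH=$(dedup_path "$PATH")
```

The wrapping colons in the case pattern avoid false matches on entries that are substrings of one another, such as /bin and /usr/bin.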

Lessons in Vendor Lock-in: Messaging

Is messaging really so complicated that you need five different messaging apps on your phone? Discover the reasons behind messaging vendor lock-in.

One of the saddest stories of vendor lock-in is the story of messaging. What makes this story sad is that the tech industry has continued to repeat the same mistakes and build the same proprietary systems over the last two decades, and we as end users continue to use them. In this article, I look at some of the history of those mistakes, the lessons we should have learned and didn't, and the modern messaging world we find ourselves in now. Along the way, I offer some explanations for why we're in this mess.

The First Wave

My first exposure to instant messaging was in the late 1990s. This was the era of the first dotcom boom, and it seemed like every internet company wanted to be a portal—the home page for your browser and the lens through which you experienced the web and the rest of the internet. Each of these portals created instant messengers of their own as offshoots of group chat rooms, such as AOL Instant Messenger (AIM), Yahoo Chat and MSN chat among others. The goal of each of them was simple: because you had to register an account with the provider to chat with your friends, once a service had a critical mass of your friends, you were sure to follow along so you wouldn't be left out.

My friends ended up using ICQ, so I did too. Unlike some of the others, ICQ didn't have a corresponding portal or internet service. It focused only on instant messaging. This service had its heyday, and for a while, it was the main instant messenger people used unless they were already tied in to another IM service from their internet portal.

The nice thing about ICQ, unlike some of the other services at the time, was that it didn't go to great effort to obscure its API and block unauthorized clients. This meant that quite a few Linux ICQ clients showed up that worked pretty well. Linux clients emerged for the other platforms too, but it seemed like once or twice a year, you could count on an outage for a week or more because the upstream messaging network decided to change the API to try to block unauthorized clients.

Proprietary APIs

Why did the networks want to block unauthorized clients? Simple: instant-messaging networks always have been about trends. One day, you're the popular IM network, and then the next day, someone else comes along. Since the IM network tightly controlled the client, it meant that as a user, you had to make sure all of your friends had accounts on that network. If a new network cropped up that wanted to compete, the first thing it had to do was make it easy for users to switch over. This meant offering compatibility with an existing IM network, so you could pull over your existing buddy list and chat with your friends, knowing that eventually some of them might move over to this new network.

Linux Mint 19.1 "Tessa" Cinnamon Now Available, VirtualBox 6.0 Officially Released, Facebook’s Data-Sharing Deals, Purism’s Librem 5 Dev Kits Shipping and Open Compute Project’s Future Technologies Symposium Call for Poster Submissions

News briefs for December 19, 2018.

Linux Mint 19.1 "Tessa" Cinnamon was released today. This is a long-term support release, which will be supported until 2023. New features include a brand-new panel layout, the Nemo file manager is three times faster than before, a "huge number of upstream changes were ported from the GNOME project" and much more. Read about all the new features here and download here.

VirtualBox 6.0 has been officially released. This is a major update with tons of new features including support for exporting a virtual machine to Oracle Cloud Infrastructure, a major rework of the user interface, a new file manager, major update of 3D graphics support for Windows guests and much more. See the Changelog for the full list of new features and fixes, and visit the Downloads page for links to VirtualBox binaries and source code.

Facebook provided other companies—such as Microsoft, Amazon and Spotify—far greater access to its users' private data than it had previously disclosed. The New York Times obtained hundreds of pages of records showing the extent of the data-sharing practices. NYT reports that "Facebook allowed Microsoft's Bing search engine to see the names of virtually all Facebook users' friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users' private messages." The Times article also notes that the deals benefited more than 150 companies, and that the applications "sought the data of hundreds of millions of people a month, the records show. The deals, the oldest of which date to 2010, were all active in 2017. Some were still in effect this year."

Purism's Librem 5 dev kits are shipping, and backers should receive their dev kits before the end of the year. The Purism blog post notes that "Our backers who are receiving the dev kits will also have access to a Matrix channel for dev kit owners. This channel will be staffed by our engineering team who will be on hand to answer questions, work with the community on merge requests, and be available for those who are using the dev kits. But by no means is this an exclusive channel and all of you are welcome to join in as well! Please reach out to info@puri.sm if you are interested in being added to the group." In addition, the Librem 5's early-bird price of $599 ends January 7th, and the preorder price will increase to $699 to help fund further engineering of the phone and upstream projects.

The Open Compute Project announces Future Technologies Symposium to be held at the 2019 OCP Global Summit in San Jose, California and invites students and researchers from around the world to submit posters. Draft submissions are due January 31, 2019. The theme for this year is "Open Together" and the announcement says "We welcome submissions in computer storage, networking, or any of the OCP project tracks; as well as those which are multi-disciplinary and cover leading technology solutions, such as edge computing." See the OCP Symposium website for more information.

Purism Introduces "It’s a Secure Life" Bundle Sale, Wave Computing Open-Sourcing MIPS, Red Hat Announces Long-Term Commercial Support for OpenJDK on Microsoft Windows, ArchLabs 2018.12 Now Available and RawTherapee 5.5 Released

News briefs for December 18, 2018.

Purism is introducing "It's a Secure Life" bundles from now until January 6. The bundles are 15%–18% off, and they can be made up of different combinations of the Librem 5 smartphone (preorder), the Librem 15 laptop and the Librem Key.

Wave Computing announced yesterday it plans to open-source its MIPS instruction set architecture to "accelerate the ability for semiconductor companies, developers and universities to adopt and innovate using MIPS for next-generation system-on-chip (SoC) designs". According to the announcement, "Under the MIPS Open program, participants will have full access to the most recent versions of the 32-bit and 64-bit MIPS ISA free of charge—with no licensing or royalty fees. Additionally, participants in the MIPS Open program will be licensed under MIPS' hundreds of existing worldwide patents."

Red Hat this morning announced long-term commercial support for OpenJDK on Microsoft Windows. In addition to supporting OpenJDK builds on RHEL, this support will further enable "organizations to standardize the development and deployment of Java applications throughout the enterprise with a flexible, powerful and open alternative to proprietary Java platforms".

The ArchLabs 2018.12 release is now available. It's been six months since the last release, and this version has done away with the live environment, so when you start the USB install, you are thrown straight into the installer. According to the announcement, "Instructions on how to start the installer are right there. No need for passwords with this live USB either." Other changes include Aurman has been replaced with a new homegrown AUR helper called Baph, the package repo has been updated and installing ArchLabs should be easier than ever. You can download it from here.

RawTherapee 5.5 has been released. This new version of the open-source RAW photo editor has several new features, including a new Shadows/Highlights tool, improved support for Canon mRaw format variants, unbounded processing, new color toning methods and more. You can get the new version via your package manager or visit the download page.

Sharing Docker Containers across DevOps Environments


Docker provides a powerful tool for creating lightweight images and containerized processes, but did you know it can make your development environment part of the DevOps pipeline too? Whether you're managing tens of thousands of servers in the cloud or are a software engineer looking to incorporate Docker containers into the software development life cycle, this article has a little something for everyone with a passion for Linux and Docker.

In this article, I describe how Docker containers flow through the DevOps pipeline. I also cover some advanced DevOps concepts (borrowed from object-oriented programming) on how to use dependency injection and encapsulation to improve the DevOps process. And finally, I show how containerization can be useful for the development and testing process itself, rather than just as a place to serve up an application after it's written.


Containers are hot in DevOps shops, and their benefits from an operations and service delivery point of view have been covered well elsewhere. If you want to build a Docker container or deploy a Docker host, container or swarm, a lot of information is available. However, very few articles talk about how to develop inside the Docker containers that will be reused later in the DevOps pipeline, so that's what I focus on here.

Figure 1. Stages a Docker Container Moves Through in a Typical DevOps Pipeline

Container-Based Development Workflows

Two common workflows exist for developing software for use inside Docker containers:

  1. Injecting development tools into an existing Docker container: this is the best option for sharing a consistent development environment with the same toolchain among multiple developers, and it can be used in conjunction with web-based development environments, such as Red Hat's codenvy.com or dockerized IDEs like Eclipse Che.
  2. Bind-mounting a host directory onto the Docker container and using your existing development tools on the host: this is the simplest option, and it offers flexibility for developers to work with their own set of locally installed development tools.

Both workflows have advantages, but local mounting is inherently simpler. For that reason, I focus on the mounting solution as "the simplest thing that could possibly work" here.
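For a concrete picture of the second workflow, bind-mounting a project directory into a container might look something like the following. The image name, mount point and shell here are illustrative placeholders, not a prescription:

```shell
# Run a throwaway container with the current project directory
# bind-mounted at /app, so edits made on the host with your own
# tools are immediately visible inside the container.
docker run --rm -it \
    --volume "$(pwd)":/app \
    --workdir /app \
    gcc:latest \
    /bin/bash
```

Inside that shell, you build and test with the container's toolchain against the host's source tree, which keeps the runtime identical to what later pipeline stages will use.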

How Docker Containers Move between Environments

A core tenet of DevOps is that the source code and runtimes that will be used in production are the same as those used in development. In other words, the most effective pipeline is one where the identical Docker image can be reused for each stage of the pipeline.
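One common way to honor that tenet (the registry and tag names below are hypothetical) is to build the image exactly once and then promote that identical image through each stage by re-tagging it, never rebuilding:

```shell
# Build once, push once; every later stage pulls the same bits.
docker build -t registry.example.com/myapp:1.0.0 .
docker push registry.example.com/myapp:1.0.0

# Promote the very same image to staging by adding a tag — the
# image ID (and therefore its contents) is unchanged.
docker tag registry.example.com/myapp:1.0.0 \
           registry.example.com/myapp:staging
docker push registry.example.com/myapp:staging
```

Because a tag is only a pointer to an image ID, promotion this way guarantees that what was tested is byte-for-byte what runs in production.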