Linux Journal Ceases Publication: An Awkward Goodbye



On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off and the company is left with no operating funds to continue in any capacity. The website will continue to stay up for the next few weeks, hopefully longer for archival purposes if we can make it happen.

–Linux Journal, LLC


Final Letter from the Editor: The Awkward Goodbye

by Kyle Rankin

Have you ever met up with a friend at a restaurant for dinner, then after dinner you both step out to the street and say a proper goodbye, only when you leave, you find out that you both are walking in the same direction? So now, you get to walk together awkwardly until the true point where you part, and then you have another, second goodbye, that's much more awkward.

That's basically this post. 

So, it was almost two years ago that I first said goodbye to Linux Journal and the Linux Journal community in my post "So Long and Thanks for All the Bash". That post was a proper goodbye. For starters, it had a catchy title with a pun. The post itself had all the elements of a proper goodbye: part retrospective, part "Thank You" to the Linux Journal team and the community, and OK, yes, it was also part rant. I recommend you read (or re-read) that post, because it captures my feelings about losing Linux Journal way better than I can muster here on our awkward second goodbye. 

Of course, not long after I wrote that post, we found out that Linux Journal wasn't dead after all! We all actually had more time together and got to work fixing everything that had caused us to die in the first place. A lot of our analysis of what went wrong and what we intended to change was captured in my article "What Linux Journal's Resurrection Taught Me about the FOSS Community" that we posted in our 25th anniversary issue.

Oops! Debugging Kernel Panics


A look into what causes kernel panics and some utilities to help gain more information.

Working in a Linux environment, how often have you seen a kernel panic? When it happens, your system is left in a crippled state until you reboot it completely. And, even after you get your system back into a functional state, you're still left with the question: why? You may have no idea what happened or why it happened. Those questions can be answered though, and the following guide will help you root out the cause of some of the conditions that led to the original crash.

Figure 1. A Typical Kernel Panic

Let's start by looking at a set of utilities known as kexec and kdump. kexec allows you to boot into another kernel from an existing (and running) kernel, and kdump is a kexec-based crash-dumping mechanism for Linux.

Installing the Required Packages

First and foremost, your kernel needs kexec and crash-dump support built statically into its image rather than as loadable modules (typically the CONFIG_KEXEC and CONFIG_CRASH_DUMP options, along with debug symbols). You can check the configuration of your running kernel in /boot/config-`uname -r`.
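As a quick sketch, the following shell function greps that file for options commonly required by kexec/kdump. The exact option list can vary by kernel version and distribution, so treat these names as a starting point:

```shell
# Report whether the options kdump commonly relies on are built in.
# Defaults to the running kernel's config; pass a path to override.
check_kdump_config() {
    local config="${1:-/boot/config-$(uname -r)}"
    local opt
    for opt in CONFIG_KEXEC CONFIG_CRASH_DUMP CONFIG_RELOCATABLE CONFIG_DEBUG_INFO; do
        if grep -q "^${opt}=y" "$config"; then
            echo "$opt: enabled"
        else
            echo "$opt: MISSING"
        fi
    done
}

check_kdump_config
```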

Make sure that your operating system is up to date with the latest-and-greatest package versions:

$ sudo apt update && sudo apt upgrade

Install the following packages (I'm currently using Debian, but the same steps should apply to Ubuntu):

$ sudo apt install gcc make binutils linux-headers-`uname -r` \
    kdump-tools crash linux-image-`uname -r`-dbg

Note: Package names may vary across distributions.

During the installation, you will be prompted with questions to enable kexec to handle reboots (answer whatever you'd like, but I answered "no"; see Figure 2).

Figure 2. kexec Configuration Menu

And to enable kdump to run and load at system boot, answer "yes" (Figure 3).

Figure 3. kdump Configuration Menu

Configuring kdump

Open the /etc/default/kdump-tools file, and at the very top, you should see the following:
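On a Debian system, the top of that file typically looks like the excerpt below. The values shown are common defaults; your release may differ, so check your own file:

```shell
# /etc/default/kdump-tools (excerpt; typical Debian defaults)
USE_KDUMP=1                 # 1 enables kdump at boot, 0 disables it
KDUMP_COREDIR="/var/crash"  # directory where vmcore dumps are written
```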

Loadsharers: Funding the Load-Bearing Internet Person


The internet has a sustainability problem. Many of its critical services depend on the dedication of unpaid volunteers, because they can't be monetized and thus don't have any revenue stream for the maintainers to live on. I'm talking about services like DNS, time synchronization, crypto libraries—software without which the net and the browser you're using couldn't function.

These volunteer maintainers are the Load-Bearing Internet People (LBIP). Underfunding them is a problem, because underfunded critical services tend to have gaps and holes that could have been fixed if there were more full-time attention on them. As our civilization becomes increasingly dependent on this software infrastructure, that attention shortfall could lead to disastrous outages.

I've been worrying about this problem since 2012, when I watched a hacker I know wreck his health while working on a critical infrastructure problem nobody else understood at the time. Billions of dollars in e-commerce hung on getting the particular software problem he had spotted solved, but because it masqueraded as network undercapacity, he had a lot of trouble getting even technically-savvy people to understand where the problem was. He solved it, but unable to afford medical insurance and literally living in a tent, he eventually went blind in one eye and is now prone to depressive spells.

More recently, I damaged my ankle and discovered that although there is such a thing as minor surgery on the medical level, there is no such thing as "minor surgery" on the financial level. I was looking—still am looking—at a serious prospect of either having my life savings wiped out or having to leave all 52 of the open-source projects I'm responsible for in the lurch as I scrambled for a full-time job. Projects at risk include the likes of GIFLIB, GPSD and NTPsec.

That refocused my mind on the LBIP problem. There aren't many Load-Bearing Internet People—probably on the close order of 1,000 worldwide—but they're a systemic vulnerability made inevitable by the existence of common software and internet services that can't be metered. And, burning them out is a serious problem. Even under the most cold-blooded assessment, civilization needs the mean service life of an LBIP to be long enough to train and acculturate a replacement.

(If that made you wonder—yes, in fact, I am training an apprentice. Different problem for a different article.)

Alas, traditional centralized funding models have failed the LBIPs. There are a few reasons for this:

Documenting Proper Git Usage


Jonathan Corbet wrote a document for inclusion in the kernel tree, describing best practices for merging and rebasing git-based kernel repositories. As he put it, it represented workflows that were actually in current use, and it was a living document that hopefully would be added to and corrected over time.

The inspiration for the document came from noticing how frequently Linus Torvalds was unhappy with how other people—typically subsystem maintainers—handled their git trees.

It's interesting to note that before Linus wrote the git tool, branching and merging were virtually unheard of in the Open Source world. In CVS, it was a nightmare horror of leechcraft and broken magic. Other tools were not much better. One of the primary motivations behind git—aside from blazing speed—was, in fact, to make branching and merging trivial operations—and so they have become.

One of the offshoots of branching and merging, Jonathan wrote, was rebasing—altering the patch history of a local repository. The benefits of rebasing are fantastic. They can make a repository history cleaner and clearer, which in turn can make it easier to track down the patches that introduced a given bug. So rebasing has a direct value to the development process.

On the other hand, used poorly, rebasing can make a big mess. For example, suppose you rebase a repository that has already been merged with another, and then merge them again—insane soul death.

So Jonathan explained some good rules of thumb. Never rebase a repository that's already been shared. Never rebase patches that come from someone else's repository. And in general, simply never rebase—unless there's a genuine reason.

Since rebasing changes the history of patches, it relies on a new "base" version, from which the later patches diverge. Jonathan recommended choosing a base version that was generally thought to be more stable rather than less—a new version or a release candidate, for example, rather than just an arbitrary patch during regular development.
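These rules of thumb are easy to see in a throwaway repository. The following sketch (all branch, tag and file names are illustrative) creates a private topic branch and rebases it onto the mainline after it has moved on:

```shell
# Toy repository demonstrating a rebase of a private topic branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb main
git config user.email dev@example.com
git config user.name "Dev"

echo base > file.txt
git add file.txt
git commit -qm "initial release"
git tag v1.0                      # a stable, well-known point

git checkout -qb topic            # private topic branch off v1.0
echo feature > feature.txt
git add feature.txt
git commit -qm "add feature"

git checkout -q main              # meanwhile, mainline moves on
echo more > more.txt
git add more.txt
git commit -qm "mainline work"

git checkout -q topic
git rebase -q main                # replay topic's patch onto the new base
git log --oneline                 # history is now linear
```

Because the topic branch was never shared, rewriting its history this way is safe; after the rebase, its patches sit on top of the current mainline with no merge commit.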

Jonathan also recommended, for any rebase, treating all the rebased patches as new code, and testing them thoroughly, even if they had been tested already prior to the rebase.

"If", he said, "rebasing is limited to private trees, commits are based on a well-known starting point, and they are well tested, the potential for trouble is low."

Moving on to merging, Jonathan pointed out that nearly 9% of all kernel commits were merges. There were more than 1,000 merge requests in the 5.1 development cycle alone.

Understanding Python’s asyncio


How to get started using Python's asyncio.

Earlier this year, I attended PyCon, the international Python conference. One topic, presented at numerous talks and discussed informally in the hallway, was the state of threading in Python—which is, in a nutshell, neither ideal nor as terrible as some critics would argue.

A related topic that came up repeatedly was that of "asyncio", a relatively new approach to concurrency in Python. Not only were there formal presentations and informal discussions about asyncio, but a number of people also asked me about courses on the subject.

I must admit, I was a bit surprised by all the interest. After all, asyncio isn't a new addition to Python; it's been around for a few years. And, it doesn't solve all of the problems associated with threads. Plus, it can be confusing for many people to get started with it.

And yet, there's no denying that after a number of years when people ignored asyncio, it's starting to gain steam. I'm sure part of the reason is that asyncio has matured and improved over time, thanks in no small part to much dedicated work by countless developers. But, it's also because asyncio is an increasingly good and useful choice for certain types of tasks—particularly tasks that work across networks.

So with this article, I'm kicking off a series on asyncio—what it is, how to use it, where it's appropriate, and how you can and should (and also can't and shouldn't) incorporate it into your own work.

What Is asyncio?

Everyone's grown used to computers being able to do more than one thing at a time—well, sort of. Although it might seem as though computers are doing more than one thing at a time, they're actually switching, very quickly, across different tasks. For example, when you ssh in to a Linux server, it might seem as though it's only executing your commands. But in actuality, you're getting a small "time slice" from the CPU, with the rest going to other tasks on the computer, such as the systems that handle networking, security and various protocols. Indeed, if you're using SSH to connect to such a server, some of those time slices are being used by sshd to handle your connection and even allow you to issue commands.

All of this is done, on modern operating systems, via "pre-emptive multitasking". In other words, running programs aren't given a choice of when they will give up control of the CPU. Rather, they're forced to give up control and then resume a little while later. Each process running on a computer is handled this way. Each process can, in turn, use threads, lighter-weight units of execution that subdivide the time slice given to their parent process.

RV Offsite Backup Update


Having an offsite backup in your RV is great, and after a year of use, I've discovered some ways to make it even better.

Last year I wrote a feature-length article on the data backup system I set up for my RV (see Kyle's "DIY RV Offsite Backup and Media Server" from the June 2018 issue of LJ). If you haven't read that article yet, I recommend checking it out first so you can get details on the system. In summary, I set up a Raspberry Pi media center PC connected to a 12V television in the RV. I connected an 8TB hard drive to that system and synchronized all of my files and media so it acted as a kind of off-site backup. Finally, I set up a script that would attempt to sync over all of those files from my NAS whenever it detected that the RV was on the local network. So here, I provide an update on how that system is working and a few tweaks I've made to it since.

What Works

Overall, the media center has worked well. It's been great to have all of my media with me when I'm on a road trip, and my son appreciates having access to his favorite cartoons. Because the interface is identical to the media center we have at home, there's no learning curve—everything just works. Since the Raspberry Pi is powered off the TV in the RV, you just need to turn on the TV and everything fires up.

It's also been great knowing that I have a good backup of all of my files nearby. Should anything happen to my house or my main NAS, I know that I can just get backups from the RV. Having peace of mind about your important files is valuable, and it's nice knowing that, in the worst case, if my NAS broke, I could just disconnect my USB drive from the RV, connect it to a local system, and be back up and running.

The WiFi booster I set up on the RV also has worked pretty well to increase the range of the Raspberry Pi (and the laptops inside the RV) when on the road. When we get to a campsite that happens to offer WiFi, I just reset the booster and set up a new access point that amplifies the campsite signal for inside the RV. On one trip, I even took it out of the RV and inside a hotel room to boost the weak signal.

Another Episode of “Seems Perfectly Feasible and Then Dies”–Script to Simplify the Process of Changing System Call Tables

David Howells put in quite a bit of work on a script, ./scripts/, to simplify the entire process of changing the system call tables. With this script, it was a simple matter to add, remove, rename or renumber any system call you liked. The script also would resolve git conflicts, in the event that two repositories renumbered the system calls in conflicting ways.

Why did David need to write this patch? Why weren't system calls already fairly easy to manage? When you add a new system call, you add it to a master list, and then you add it to the system call "tables", which is where the running kernel looks up which kernel function corresponds to which system call number. Kernel developers need to make sure system calls are represented in all relevant spots in the source tree. Renaming, renumbering and making other changes to system calls involves a lot of fiddly little details. David's script simply would do everything right—end of story no problemo hasta la vista.

Arnd Bergmann remarked, "Ah, fun. You had already threatened to add that script in the past. The implementation of course looks fine, I was just hoping we could instead eliminate the need for it first." But, bowing to necessity, Arnd offered some technical suggestions for improvements to the patch.

However, Linus Torvalds swooped in at this particular moment, saying:

Ugh, I hate it.

I'm sure the script is all kinds of clever and useful, but I really think the solution is not this kind of helper script, but simply that we should work at not having each architecture add new system calls individually in the first place.

IOW, we should look at having just one unified table for new system call numbers, and aim for the per-architecture ones to be for "legacy numbering".

Maybe that won't happen, but in the _hope_ that it happens, I really would prefer that people not work at making scripts for the current nasty situation.

And the portcullis came crashing down.

It's interesting that, instead of accepting this relatively obvious improvement to the existing situation, Linus would rather leave it broken and ugly, so that someone someday somewhere might be motivated to do the harder-yet-better fix. And, it's all the more interesting given how extreme the current problem is. Without actually being broken, the situation requires developers to put a tremendous amount of care and effort into something that David's script could make trivial and easy. Even for such an obviously "good" patch, Linus gives thought to the policy and cultural implications, and the future motivations of other people working in that region of code.


Experts Attempt to Explain DevOps–and Almost Succeed


What is DevOps? How does it relate to other ideas and methodologies within software development? Linux Journal Deputy Editor and longtime software developer Bryan Lunduke isn't entirely sure, so he asks some experts to help him better understand the DevOps phenomenon.

The word DevOps confuses me.

I'm not even sure "confuses me" quite does justice to the pain I experience—right in the center of my brain—every time the word is uttered.

It's not that I dislike DevOps; it's that I genuinely don't understand what in tarnation it actually is. Let me demonstrate. What follows is the definition of DevOps on Wikipedia as of a few moments ago:

DevOps is a set of software development practices that combine software development (Dev) and information technology operations (Ops) to shorten the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

I'm pretty sure I got three aneurysms just by copying and pasting that sentence, and I still have no clue what DevOps really is. Perhaps I should back up and give a little context on where I'm coming from.

My professional career began in the 1990s when I got my first job as a Software Test Engineer (the people that find bugs in software, hopefully before the software ships, and tell the programmers about them). During the years that followed, my title, and responsibilities, gradually evolved as I worked my way through as many software-industry job titles as I could:

  • Automation Engineer: people that automate testing software.
  • Software Development Engineer in Test: people that make tools for the testers to use.
  • Software Development Engineer: aka "Coder", aka "Programmer".
  • Dev Lead: "Hey, you're a good programmer! You should also manage a few other programmers but still code just as much as you did before, but, don't worry, we won't give you much of a raise! It'll be great!"
  • Dev Manager: like a Dev Lead, with less programming, more managing.
  • Director of Engineering: the manager of the managers of the programmers.
  • Vice President of Technology/Engineering: aka "The big boss nerd man who gets to make decisions and gets in trouble first when deadlines are missed."

During my various times with fancy-pants titles, I managed teams that included:

DNA Geometry with cadnano


This article introduces a tool you can use to work on three-dimensional DNA origami. The package is called cadnano, and it's currently being developed at the Wyss Institute. With this package, you'll be able to construct and manipulate the three-dimensional representations of DNA structures, as well as generate publication-quality graphics of your work.

Because this software is research-based, you won't likely find it in the package repository for your favourite distribution, in which case you'll need to install it from the GitHub repository.

Since cadnano is a Python program, written to use the Qt framework, you'll need to install some packages first. For example, in Debian-based distributions, you'll want to run the following commands:

sudo apt-get install python3 python3-pip

I found that installation was a bit tricky, so I created a virtual Python environment to manage module installations.
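If you want to take the same route, a virtualenv can be created and activated as follows (the cadnano-env directory name is just my own choice):

```shell
# Create an isolated environment for cadnano's Python dependencies
# and activate it in the current shell.
python3 -m venv cadnano-env
. cadnano-env/bin/activate

# pip now resolves inside the virtualenv, so module installs
# won't touch the system-wide Python.
pip3 --version
```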

Once you're in your activated virtualenv, install the required Python modules with the command:

pip3 install pythreejs termcolor pytz pandas pyqt5 sip

After those dependencies are installed, grab the source code with the command:

git clone

This will grab the Qt5 version. The Qt4 version is in the repository

Changing into the source directory, you can build and install cadnano with:

python setup.py install

Now your cadnano should be available within the virtualenv.

You can start cadnano simply by executing the cadnano command from a terminal window. You'll see an essentially blank workspace, made up of several empty view panes and an empty inspector pane on the far right-hand side.

Figure 1. When you first start cadnano, you get a completely blank work space.

In order to walk through a few of the functions available in cadnano, let's create a six-strand nanotube. The first step is to create a background that you can use to build upon. At the top of the main window, you'll find three buttons in the toolbar that will let you create a "Freeform", "Honeycomb" or "Square" framework. For this example, click the honeycomb button.

Figure 2. Start your construction with one of the available geometric frameworks.

Running GNOME in a Container

Containerizing the GUI separates your work and play.

Virtualization has always been a rich man's game, and more frugal enthusiasts—unable to afford fancy server-class components—often struggle to keep up. Linux provides free high-quality hypervisors, but when you start to throw real workloads at the host, its resources become saturated quickly. No amount of spare RAM shoved into an old Dell desktop is going to remedy this situation. If a properly decked-out host is out of your reach, you might want to consider containers instead.

Instead of virtualizing an entire computer, containers allow parts of the Linux kernel to be portioned into several pieces. This occurs without the overhead of emulating hardware or running several identical kernels. A full GUI environment, such as GNOME Shell, can be launched inside a container with a little gumption.

You can accomplish this through namespaces, a feature built in to the Linux kernel. An in-depth look at this feature is beyond the scope of this article, but a brief example sheds light on how these features can create containers. Each kind of namespace segments a different part of the kernel. The PID namespace, for example, prevents processes inside the namespace from seeing other processes running in the kernel. As a result, those processes believe that they are the only ones running on the computer. Each namespace does the same thing for other areas of the kernel as well. The mount namespace isolates the filesystem of the processes inside of it. The network namespace provides a unique network stack to processes running inside of them. The IPC, user, UTS and cgroup namespaces do the same for those areas of the kernel as well. When the seven namespaces are combined, the result is a container: an environment isolated enough to believe it is a freestanding Linux system.
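A quick way to see this in action is util-linux's unshare(1). The sketch below enters fresh user, PID and mount namespaces; it assumes unprivileged user namespaces are enabled on your kernel:

```shell
# Each process's namespace memberships are visible under /proc/<pid>/ns.
readlink /proc/self/ns/pid    # the PID namespace this shell lives in

# Enter new user, PID and mount namespaces; with --mount-proc,
# /proc is remounted so ps sees only processes inside the namespace.
unshare --user --map-root-user --pid --fork --mount --mount-proc \
    sh -c 'readlink /proc/self/ns/pid; ps -e'
```

Inside the namespace, the PID namespace identifier differs and ps reports only the handful of processes running in the "container", which believe they have the machine to themselves.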

Container frameworks abstract the minutiae of configuring namespaces away from the user, but each framework has a different emphasis. Docker is the most popular and is designed to run multiple copies of identical containers at scale. LXC/LXD is meant to make it easy to create containers that mimic particular Linux distributions. In fact, earlier versions of LXC included a collection of scripts that created the filesystems of popular distributions. A third option is libvirt's lxc driver. Contrary to how it may sound, libvirt-lxc does not use LXC/LXD at all. Instead, the libvirt-lxc driver manipulates kernel namespaces directly. libvirt-lxc also integrates with the other tools in the libvirt suite, so the configuration of libvirt-lxc containers resembles that of virtual machines running under other libvirt drivers rather than that of a native LXC/LXD container. As a result, it is easy to learn, even if the branding is confusing.

The Bash Trap Command


If you've written any amount of bash code, you've likely come across the trap command. trap allows you to catch signals and execute code when they occur. Signals are asynchronous notifications that are sent to your script when certain events occur. Most of these notifications are for events that you hope never happen, such as an invalid memory access or a bad system call. However, there are one or two events that you might reasonably want to deal with. There are also "user" signals available that are never generated by the system, which you can send to signal your script. Bash also provides a pseudo-signal called "EXIT", which is executed when your script exits; this can be used to make sure that your script executes some cleanup on exit.
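As a minimal sketch of that last point, the following script registers an EXIT trap so its temporary file is removed however the script terminates: normal exit, error, or interruption (the file and handler names are illustrative):

```shell
#!/bin/bash
# Use the EXIT pseudo-signal to guarantee cleanup. The handler runs
# on normal exit, on error, and when the script is interrupted.

cleanup() {
    rm -f "$scratch"
}
trap cleanup EXIT

scratch=$(mktemp)
echo "intermediate data" > "$scratch"
echo "working in $scratch"
# ... real work here ...
```

When the shell exits, for any reason, cleanup runs and the scratch file disappears, so there is no need to scatter rm calls after every possible failure point.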

Tutanota Interviews Tim Verheyden, the Journalist Who Broke the Story on Google Employees Listening to People’s Audio Recordings


Google employees listen to you, but the issue of "ghost workers" transcends Google. 

Investigative journalist Tim Verheyden, who broke the story on how Google employees listen to people’s audio recordings, explains in an interview how he got hold of the story, why he is now using the encrypted contact form Secure Connect by Tutanota and why the growing number of "ghost workers" in and around Silicon Valley is becoming a big issue in Tech.

Tutanota: Tim, you have broken a great story on VRT News about how employees of Google subcontractors listen to our conversations when using devices such as Google Home. What was that story about? What was the privacy violation?

Tim Verheyden: Google provides a range of information on privacy—and data gathering. In this particular case, Google says on audio gathering that it can save your audio to learn the sound of your voice, learn how you say phrases and words, and recognize when you say "Ok Google", in order to improve speech recognition. Google does not speak about the human interaction in the chain of training the AI on speech recognition. For some experts, this is a violation of the new GDPR law.

Tutanota: How did the employee of the Google subcontractor who leaked the story get in touch with you?

Tim: By email, he shared his thoughts on an article we wrote about Alexa (Amazon) after Bloomberg broke the news about humans listening.

Tutanota: Tutanota has recently launched Secure Connect, and you had added this encrypted contact form to your website a few weeks ago. What do you expect from Secure Connect?

Tim: I hope it will encourage people with a story to get in contact. It does not always need to be a whistleblower story. Because of security concerns—and other reasons—people are sometimes reluctant to contact a journalist. I hope Secure Connect will help build trust in relationships with journalists.

Tutanota: More and more journalists are offering Secure Connect so that whistleblowers can drop important information or get in touch with investigative journalists confidentially. Why do you believe a secure communication channel is important?

Words, Words, Words–Introducing OpenSearchServer

How to create your own search engine combined with a crawler that will index all sorts of documents.

In William Shakespeare's Hamlet, one of my favorite plays, Prince Hamlet is approached by Polonius, chief counselor to Claudius, King of Denmark, who happens to be Hamlet's stepfather, and uncle, and the new husband of his mother, Queen Gertrude, whose recently deceased last husband was the previous King of Denmark. That would be Hamlet's biological father for those who might be having trouble following along. He was King Hamlet. Polonius, I probably should mention, is also the father of Hamlet's sweetheart, Ophelia. Despite this hilarious sounding setup, Hamlet is most definitely not a comedy.

For reasons I won't go into here, Hamlet is doing a great job of trying to convince people that he's completely lost it and is pretending to be reading a book when Polonius approaches and asks, "What do you read, my lord?"

Hamlet replies by saying, "'Words, words, words." In other words, ahem, nothing of any importance, you annoying little man.

Shakespeare wrote a lot of words. In fact, writers, businesses and organizations of any size tend to amass a lot of words in the form of countless documents, many of which seem to contain a great deal of importance at the time they are written and subsequently stored on some lonely corporate server. There, locked in their digital prisons, these many texts await the day when somebody will seek out their wisdom. Trouble is, there are so many of them, in many different formats, often with titles that tell you nothing about the content inside. What you need is a search engine.

Google is a pretty awesome search engine, but it's not for everybody, especially if the documents in question aren't meant for consumption by the public at large. For those times, you need your own search engine, combined with a crawler that will index all sorts of documents, from OpenDocument format, to old Microsoft Docs, to PDFs and even plain text. That's where OpenSearchServer comes into play. OpenSearchServer is, as the name implies, an open-source project designed to perform the function of crawling through and indexing large collections of documents, such as you would find on a website.

I'm going to show you how to go about getting this documentation site set up from scratch so that you can see all the steps. You may, of course, already have a web server up and running, and that's fine. I've gone ahead and spun up a Linode server running Ubuntu 18.04 LTS. This is a great way to get a server up and running quickly without spending a lot of money if you don't want to, and if you've never done this, it's also kind of fun.

Open Source Is Good, but How Can It Do Good?


Open-source coders: we know you are good—now do good.

The ethical use of computers has been at the heart of free software from the beginning. Here's what Richard Stallman told me when I interviewed him in 1999 for my book Rebel Code:

The free software movement is basically a movement for freedom. It's based on values that are not purely material and practical. It's based on the idea that freedom is a benefit in itself. And that being allowed to be part of a community is a benefit in itself, having neighbors who can help you, who are free to help you—they are not told that they are pirates if they help you—is a benefit in itself, and that that's even more important than how powerful and reliable your software is.

The Open Source world may not be so explicit about the underlying ethical aspect, but most coders probably would hope that their programming makes the world a better place. Now that the core technical challenge of how to write good, world-beating open-source code largely has been met, there's another, trickier challenge: how to write open-source code that does good.

One obvious way is to create software that boosts good causes directly. A recent article discussed eight projects that are working in the area of the environment. Helping to tackle the climate crisis and other environmental challenges with free software is an obvious way to make the world better in a literal sense, and on a massive scale. Particularly notable is Greenpeace's Planet 4—not just open-source software, but an entire platform for doing good. And external coders are welcome:

Co-develop Planet 4!

Planet 4 is 100% open source. If you would like to get involved and show us what you've got, you're very welcome to join us.

Every coder can contribute to the success of P4 by joining forces to code features, review plugins or special functionalities. The help of Greenpeace offices with extra capacity and of the open source community is most welcome!

This is a great model for doing good with open source, by helping established groups build powerful codebases that have an impact on a global scale. In addition, it creates communities of like-minded free software programmers interested in applying their skills to that end. The Greenpeace approach to developing its new platform, usefully mapped out on the site, provides a template for other organizations that want to change the world with the help of ethical coders.

Reality 2.0 Episode 24: A Chat About Redis Labs (Podcast Transcript)


Doc Searls and Katherine Druckman talk to Yiftach Shoolman of Redis Labs about Redis, Open Source licenses, company culture and more.

Listen to the podcast here.

Katherine Druckman: Hey, Linux Journal readers, I am Katherine Druckman, joining you again for our awesome, cool podcast. As always, joining us is Doc Searls, our editor-in-chief. Our special guest this time is Yiftach Shoolman of Redis Labs. He is the CTO and co-founder, and he was kind enough to join us. We’ve talked a bit, in preparation for the podcast, about Redis Labs, but I wondered if you could just give us sort of an overview for the tiny fraction of the people listening that don’t know all about Redis Labs and Redis. If you could just give us a little brief intro, that’d be great. 


Yiftach Shoolman: Thank you very much for hosting me, first. Redis is an extremely popular in-memory data structure database that’s used by many people as just a caching system, but many of them have shifted from a simple cache to a real database, even in the open source world. Just in terms of numbers, on Docker Hub alone, Redis has been launched almost 1.8 billion times, something like five million times every day, so it’s extremely popular. It’s used everywhere. Redis Labs is the company behind the open source. When I say “behind the open source,” we sponsor, I would say, 99% of all the open source activities, if not 100%. We also have an enterprise product, which is called Redis Enterprise. 

It is available as a fully managed cloud service on all the public clouds, as well as software that you can download and install anywhere. This is our story in general. The way we split between open source and commercial, which is very tricky today, is that we keep the Redis core open source under the BSD license, by the way. On top of that, we added what we call enterprise layers that allow Redis to be deployed in an enterprise environment in the most scalable and highly available way. We have all the goodies that you need, including active-active, including a data persistence layer, etc., all the boring stuff that the enterprise needs, in addition to a lot of security features. In addition to that, we extended Redis with what we call modules. Some of them were initially open source, and then we changed the license. This is probably the reason you called me.


Katherine Druckman: Right. That was in the news, certainly.



Linux Mint 19.2 “Tina” Cinnamon Now Available, IBM Has Transformed Its Software to Be Cloud-Native and Run on Any Cloud with Red Hat OpenShift, Icinga Web 2.7.0 Released, Google Rolling Out Android Auto Design Updates and Kernel 5.1 Reaches End of Life

News briefs for August 2, 2019.

Linux Mint 19.2 "Tina" Cinnamon was officially released today. This is a long-term support release that will be supported until 2023, and it brings updated software and many improvements. Go here to read about all the new features.

IBM yesterday announced it has transformed its software to be cloud-native and run on any cloud with Red Hat OpenShift. From the announcement: "Enterprises can now build mission-critical applications once and run them on all leading public clouds, including AWS, Microsoft Azure, Google Cloud Platform, Alibaba and IBM Cloud and on private clouds. The new cloud-native capabilities will be delivered as pre-integrated solutions called IBM Cloud Paks." IBM also announced Red Hat OpenShift on IBM Cloud, Red Hat OpenShift on IBM Z and LinuxOne, and consulting and technology services for Red Hat.

Icinga Web 2.7.0 was released this week. Improvements include Japanese and Ukrainian language support, bonus functionality for modules, an enhanced UI and much more. Official packages are available for download.

Google begins rolling out new Android Auto design updates. ZDNet reports that "the new Android Auto starts playing media and Google Maps as soon as the car starts. Maps will also show suggested locations. If a route has already been planned on a phone, Android Auto automatically adds the directions and displays routing information....Android Auto now also can use widescreen displays to give extra space for step-by-step navigation, media playback, and call controls. Changes to improve visibility include easier-to-read fonts and a new dark mode. Overall, the design changes are meant to get users on the road faster and allow easier management of apps with fewer taps."

Greg Kroah-Hartman recently announced that Linux kernel 5.1 has reached end of life: "Everyone should be moved to the 5.2.y kernel at this point in time. 5.1.y is now end-of-life."

Where the Internet Gets Real

Local is the frontier of truth at the dawn of our Digital Age.

The internet showed up in our house in 1995. When that happened, I mansplained to my wife that it was a global drawstring through all the phone and cable companies of the world, pulling everybody and everything together—and that this was going to be good for the world.

My wife, who ran a global business, already knew plenty of things about the internet and expected good things to happen as well. But she pushed back on the global thing, saying "the sweet spot of the internet is local." Her reason: "Local is where the internet gets real." By which she meant the internet wasn't real in the physical sense anywhere, and we still live and work in the physical world, and that was a huge advantage.

Later I made a big thing about how the internet was absent of distance, an observation I owe to Craig Burton. Here's Craig in a 1999 interview for a Linux Journal newsletter that I sourced later in this 2000 column:

I see the Net as a world we might see as a bubble. A sphere. It's growing larger and larger, and yet inside, every point in that sphere is visible to every other one. That's the architecture of a sphere. Nothing stands between any two points. That's its virtue: it's empty in the middle. The distance between any two points is functionally zero, and not just because they can see each other, but because nothing interferes with operation between any two points. There's a word I like for what's going on here: terraform. It's the verb for creating a world. That's what we're making here: a new world. Now the question is, what are we going to do to cause planetary existence? How can we terraform this new world in a way that works for the world and not just ourselves?

In Linux Journal (see my article "The Giant Zero, Part 0.x") and elsewhere, I joined Craig in calling that world "the giant zero". Again my wife weighed in with a helpful point: the internet has no gravity as well as no distance—meaning we are not only placeless when we're on the net, but that prepositions such as on (uttered earlier in this sentence) were literally wrong, even though they made metaphorical sense. See, most prepositions express spatial relations that require distance, gravity or both. Over, under, through, around, beside and within are all examples. The one preposition that does apply for the net is with, because we are clearly with another person (or whatever) when we are engaged with them on (can't help using that word) the net.

The DevOps Issue

Linux Journal August 2019 cover

Every few years a new term is coined within the computer industry—big data, machine learning, agile development, Internet of Things, just to name a few. You'd be forgiven for not knowing them all.

Some of these are new ideas. Some are refinements on existing ideas. Others still are simply notions we've all had for a long time, but now we have a new word to describe said notions.

Which brings me to a topic we cover in depth in this issue of Linux Journal: DevOps.

Not sure what DevOps is? Need it explained to you? It's okay, I was in the same boat. Start off by reading "Experts Attempt to Explain DevOps—and Almost Succeed" to get a high-level explanation of what this whole DevOps brouhaha is all about.

Once you've got the concept of DevOps firmly implanted in your brain, it's time to dive in and look at how specific parts of DevOps can be implemented, starting with "Continuous Integration/Continuous Development with FOSS Tools" by Quentin Hartman, Director of Infrastructure and DevOps at Finalze.

Next, turn to Linux Journal's very own Editor at Large (and senior performance software engineer at Cray), Petros Koutoupis, for a look at how to install and utilize Ansible to deploy and configure large numbers of Linux servers all at once. It's a nifty tool to have in your toolbelt, especially when looking to do things "The DevOps Way".

Okay, you've got the idea of DevOps, and you know some of the tools you can utilize with it as you build out a big, expansive online service. But what does a truly excellent system really look like? What components does it consist of? How does one go about selecting said components?

Luckily, we've got Kyle Rankin's aptly titled "My Favorite Infrastructure" to answer those questions. Linux Journal's illustrious Tech Editor (and Chief Security Officer at Purism) gives a tour of what he considers to be the best infrastructure he ever built, including details on the architecture, configuration management, security and disaster recovery.

Oh, but we're not done! Ever want to build an OpenStack implementation on top of Fedora, openSUSE or Debian? John S. Tonello, the Global Technical Marketing Manager at SUSE, walks through exactly that with the help of free software tools like Kolla, Docker, QEMU and pip. It's a veritable smorgasbord of Linux server-y goodness.

Canonical Announces the Availability of Xibo as a Snap, Chrome 76 Released, Viruses Discovered in LibreOffice, Pop!_OS 18.10 Reaches End of Life, and Dutch Ministry of Justice and Security Warns of Microsoft Office Online Privacy Risks

News briefs for August 1, 2019.

Canonical yesterday announced the availability of the Xibo open-source digital signage platform as a snap. From the announcement: "Xibo provides a comprehensive suite of digital signage products, with its Content Management System (CMS) at the heart of this experience-led offering. Xibo for Linux is completely free and natively built for the Xibo CMS, which can be installed on servers or combined with Xibo cloud hosting." You can download the Xibo snap here.

The Chrome team has promoted Chrome 76 to the stable channel for Windows, Mac and Linux. According to Softpedia News, "Highlights of the Chrome 76 release include Flash plugin blocked by default, Dark Mode support for websites, more improvements to the Payments API to allow merchant websites or web apps to respond when a user changes payment instruments, better support for PWAs (Progressive Web Apps), and the ability to control the 'Add to Home' screen mini-infobar."

A vulnerability that lets macros run silently has been discovered in LibreOffice. The Register reports that there's an "issue where documents can be configured to run macros silently on opening". The vulnerability was reported by Nils Emmerich and assigned CVE-2019-9848. According to The Register, "It appears that the supposedly fixed 6.2.5 is still vulnerable—confirmed by us." There is an updated bug report here. To fix it, "disable LibreLogo immediately if it is present and enabled in your build of LibreOffice."

System76 announces that Pop!_OS 18.10 has reached end of life and will no longer receive security updates. To keep your system secure and up to date, upgrade your OS to version 19.04.

Due to security and privacy risks, the Dutch Ministry of Justice and Security is warning government institutions not to use Microsoft Office online or mobile apps. According to The Register, "A report from Privacy Company, which was commissioned by the ministry, found that Office Online and the Office mobile apps should be banned from government work. The report found the apps were not in compliance with a set of privacy measures Redmond has agreed to with the Dutch government."