Linux Journal Ceases Publication: An Awkward Goodbye

IMPORTANT NOTICE FROM LINUX JOURNAL, LLC:

On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off, and the company is left with no operating funds to continue in any capacity. The website will stay up for the next few weeks, and hopefully longer for archival purposes, if we can make that happen.

–Linux Journal, LLC

Final Letter from the Editor: The Awkward Goodbye

by Kyle Rankin

Have you ever met up with a friend at a restaurant for dinner, stepped out to the street afterward and said a proper goodbye, only to discover as you leave that you're both walking in the same direction? Now you get to walk together awkwardly until the true point where you part, where you have a second goodbye that's much more awkward.

That's basically this post. 

So, it was almost two years ago that I first said goodbye to Linux Journal and the Linux Journal community in my post "So Long and Thanks for All the Bash". That post was a proper goodbye. For starters, it had a catchy title with a pun. The post itself had all the elements of a proper goodbye: part retrospective, part "Thank You" to the Linux Journal team and the community, and OK, yes, it was also part rant. I recommend you read (or re-read) that post, because it captures my feelings about losing Linux Journal far better than anything I can muster here in our awkward second goodbye.

Of course, not long after I wrote that post, we found out that Linux Journal wasn't dead after all! We all actually had more time together and got to work fixing everything that had caused us to die in the first place. A lot of our analysis of what went wrong and what we intended to change was captured in my article "What Linux Journal's Resurrection Taught Me about the FOSS Community" that we posted in our 25th anniversary issue.

Oops! Debugging Kernel Panics

A look into what causes kernel panics and some utilities to help gain more information.

If you work in a Linux environment, how often have you seen a kernel panic? When it happens, your system is left in a crippled state until you reboot it completely. And, even after you get your system back into a functional state, you're still left with the question: why? You may have no idea what happened or why it happened. Those questions can be answered, though, and the following guide will help you root out the cause of some of the conditions that led to the original crash.

Figure 1. A Typical Kernel Panic

Let's start by looking at a set of utilities known as kexec and kdump. kexec allows you to boot into another kernel from an existing (and running) kernel, and kdump is a kexec-based crash-dumping mechanism for Linux.
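
To make that concrete, here's roughly what a manual kexec warm reboot looks like (a sketch only; kdump-tools automates the equivalent crash-kernel loading, via kexec -p, for you):


$ sudo kexec -l /boot/vmlinuz-`uname -r` \
      --initrd=/boot/initrd.img-`uname -r` --reuse-cmdline
$ sudo kexec -e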

Installing the Required Packages

First and foremost, your kernel should have the following options statically built into its image:


CONFIG_RELOCATABLE=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_DEBUG_INFO=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_PROC_VMCORE=y

You can find this in /boot/config-`uname -r`.
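
If you'd rather not eyeball the file, a quick check along these lines works (a sketch; the path assumes your distribution ships its kernel config in /boot):


$ grep -E 'CONFIG_(RELOCATABLE|KEXEC|CRASH_DUMP|DEBUG_INFO|MAGIC_SYSRQ|PROC_VMCORE)=' \
      /boot/config-`uname -r`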

Make sure that your operating system is up to date with the latest-and-greatest package versions:


$ sudo apt update && sudo apt upgrade

Install the following packages (I'm currently using Debian, but the same applies to Ubuntu):


$ sudo apt install gcc make binutils linux-headers-`uname -r` \
      kdump-tools crash linux-image-`uname -r`-dbg

Note: Package names may vary across distributions.

During the installation, you will be prompted with a question about whether kexec should handle reboots (answer whatever you'd like, but I answered "no"; see Figure 2).

Figure 2. kexec Configuration Menu

And to enable kdump to run and load at system boot, answer "yes" (Figure 3).

Figure 3. kdump Configuration Menu
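
Once kdump is enabled and the system has rebooted, you can confirm that a crash kernel is actually loaded (kdump-config comes with the kdump-tools package; the sysfs flag reads 1 when a capture kernel is resident):


$ kdump-config show
$ cat /sys/kernel/kexec_crash_loaded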

Configuring kdump

Open the /etc/default/kdump-tools file, and at the very top you should see something like the following (a sketch of the stock Debian defaults; yours may differ):

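
# kdump-tools configuration
#
# USE_KDUMP - controls whether a kdump kernel is loaded:
#     0 - kdump is disabled
#     1 - the kdump kernel will be loaded and kdump is configured
USE_KDUMP=1
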
Loadsharers: Funding the Load-Bearing Internet Person

The internet has a sustainability problem. Many of its critical services depend on the dedication of unpaid volunteers, because they can't be monetized and thus don't have any revenue stream for the maintainers to live on. I'm talking about services like DNS, time synchronization, crypto libraries—software without which the net and the browser you're using couldn't function.

These volunteer maintainers are the Load-Bearing Internet People (LBIP). Underfunding them is a problem, because underfunded critical services tend to have gaps and holes that could have been fixed if there were more full-time attention on them. As our civilization becomes increasingly dependent on this software infrastructure, that attention shortfall could lead to disastrous outages.

I've been worrying about this problem since 2012, when I watched a hacker I know wreck his health while working on a critical infrastructure problem nobody else understood at the time. Billions of dollars in e-commerce hung on getting the particular software problem he had spotted solved, but because it masqueraded as network undercapacity, he had a lot of trouble getting even technically-savvy people to understand where the problem was. He solved it, but unable to afford medical insurance and literally living in a tent, he eventually went blind in one eye and is now prone to depressive spells.

More recently, I damaged my ankle and discovered that although there is such a thing as minor surgery on the medical level, there is no such thing as "minor surgery" on the financial level. I was looking—still am looking—at a serious prospect of either having my life savings wiped out or having to leave all 52 of the open-source projects I'm responsible for in the lurch as I scrambled for a full-time job. Projects at risk include the likes of GIFLIB, GPSD and NTPsec.

That refocused my mind on the LBIP problem. There aren't many Load-Bearing Internet People—probably on the close order of 1,000 worldwide—but they're a systemic vulnerability made inevitable by the existence of common software and internet services that can't be metered. And, burning them out is a serious problem. Even under the most cold-blooded assessment, civilization needs the mean service life of an LBIP to be long enough to train and acculturate a replacement.

(If that made you wonder—yes, in fact, I am training an apprentice. Different problem for a different article.)

Alas, traditional centralized funding models have failed the LBIPs. There are a few reasons for this:

Documenting Proper Git Usage

Jonathan Corbet wrote a document for inclusion in the kernel tree, describing best practices for merging and rebasing git-based kernel repositories. As he put it, it represented workflows that were actually in current use, and it was a living document that hopefully would be added to and corrected over time.

The inspiration for the document came from noticing how frequently Linus Torvalds was unhappy with how other people—typically subsystem maintainers—handled their git trees.

It's interesting to note that before Linus wrote the git tool, branching and merging were virtually unheard of in the Open Source world. In CVS, they were a nightmare horror of leechcraft and broken magic. Other tools were not much better. One of the primary motivations behind git—aside from blazing speed—was, in fact, to make branching and merging trivial operations—and so they have become.

One of the offshoots of branching and merging, Jonathan wrote, was rebasing—altering the patch history of a local repository. The benefits of rebasing are fantastic: it can make a repository's history cleaner and clearer, which in turn can make it easier to track down the patches that introduced a given bug. So rebasing has a direct value to the development process.

On the other hand, used poorly, rebasing can make a big mess. For example, suppose you rebase a repository that has already been merged with another, and then merge them again—insane soul death.

So Jonathan explained some good rules of thumb. Never rebase a repository that's already been shared. Never rebase patches that come from someone else's repository. And in general, simply never rebase—unless there's a genuine reason.

Since rebasing changes the history of patches, it relies on a new "base" version, from which the later patches diverge. Jonathan recommended choosing a base version that was generally thought to be more stable rather than less—a new version or a release candidate, for example, rather than just an arbitrary patch during regular development.
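
As a concrete sketch of that advice (my-feature-branch is a hypothetical local branch, and v5.2-rc1 stands in for whatever well-known tag you base on):


$ git fetch --tags origin
$ git rebase v5.2-rc1 my-feature-branch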

Jonathan also recommended, for any rebase, treating all the rebased patches as new code, and testing them thoroughly, even if they had been tested already prior to the rebase.

"If", he said, "rebasing is limited to private trees, commits are based on a well-known starting point, and they are well tested, the potential for trouble is low."

Moving on to merging, Jonathan pointed out that nearly 9% of all kernel commits were merges. There were more than 1,000 merge requests in the 5.1 development cycle alone.

Understanding Python’s asyncio

How to get started using Python's asyncio.

Earlier this year, I attended PyCon, the international Python conference. One topic, presented at numerous talks and discussed informally in the hallway, was the state of threading in Python—which is, in a nutshell, neither ideal nor as terrible as some critics would argue.

A related topic that came up repeatedly was that of "asyncio", a relatively new approach to concurrency in Python. Not only were there formal presentations and informal discussions about asyncio, but a number of people also asked me about courses on the subject.

I must admit, I was a bit surprised by all the interest. After all, asyncio isn't a new addition to Python; it's been around for a few years. And, it doesn't solve all of the problems associated with threads. Plus, getting started with it can be confusing for many people.

And yet, there's no denying that after a number of years when people ignored asyncio, it's starting to gain steam. I'm sure part of the reason is that asyncio has matured and improved over time, thanks in no small part to much dedicated work by countless developers. But, it's also because asyncio is an increasingly good and useful choice for certain types of tasks—particularly tasks that work across networks.

So with this article, I'm kicking off a series on asyncio—what it is, how to use it, where it's appropriate, and how you can and should (and also can't and shouldn't) incorporate it into your own work.

What Is asyncio?

Everyone's grown used to computers being able to do more than one thing at a time—well, sort of. Although it might seem as though computers are doing more than one thing at a time, they're actually switching, very quickly, across different tasks. For example, when you ssh in to a Linux server, it might seem as though it's only executing your commands. But in actuality, you're getting a small "time slice" from the CPU, with the rest going to other tasks on the computer, such as the systems that handle networking, security and various protocols. Indeed, if you're using SSH to connect to such a server, some of those time slices are being used by sshd to handle your connection and even allow you to issue commands.

All of this is done, on modern operating systems, via "pre-emptive multitasking". In other words, running programs aren't given a choice of when they will give up control of the CPU. Rather, they're forced to give up control and then resume a little while later. Each process running on a computer is handled this way. Each process can, in turn, use threads, which further subdivide the time slice given to their parent process.
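
asyncio takes the opposite, cooperative approach: tasks explicitly hand control back to an event loop whenever they're waiting. Here's a minimal sketch (it assumes Python 3.7 or later, for asyncio.run):


import asyncio

async def greet(name, delay):
    await asyncio.sleep(delay)   # yields control to the event loop while waiting
    print(f"hello, {name}")

async def main():
    # Both coroutines make progress concurrently on a single thread.
    await asyncio.gather(greet("world", 1), greet("asyncio", 1))

asyncio.run(main())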

RV Offsite Backup Update

Having an offsite backup in your RV is great, and after a year of use, I've discovered some ways to make it even better.

Last year I wrote a feature-length article on the data backup system I set up for my RV (see Kyle's "DIY RV Offsite Backup and Media Server" from the June 2018 issue of LJ). If you haven't read that article yet, I recommend checking it out first so you can get details on the system. In summary, I set up a Raspberry Pi media center PC connected to a 12V television in the RV. I connected an 8TB hard drive to that system and synchronized all of my files and media so it acted as a kind of off-site backup. Finally, I set up a script that would attempt to sync over all of those files from my NAS whenever it detected that the RV was on the local network. So here, I provide an update on how that system is working and a few tweaks I've made to it since.
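
The sync script itself amounts to "if the RV answers on the network, rsync to it". A hypothetical sketch (rv.local and the paths are stand-ins for whatever your setup uses):


#!/bin/sh
# Sync media from the NAS to the RV drive when the RV is reachable.
if ping -c 1 -W 2 rv.local >/dev/null 2>&1; then
    rsync -av --delete /mnt/nas/media/ rv.local:/media/usb/media/
fi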

What Works

Overall, the media center has worked well. It's been great to have all of my media with me when I'm on a road trip, and my son appreciates having access to his favorite cartoons. Because the interface is identical to the media center we have at home, there's no learning curve—everything just works. Since the Raspberry Pi is powered off the TV in the RV, you just need to turn on the TV and everything fires up.

It's also been great knowing that I have a good backup of all of my files nearby. Should anything happen to my house or my main NAS, I know that I can just get backups from the RV. Having peace of mind about your important files is valuable, and it's nice knowing that, in the worst case, if my NAS broke, I could just disconnect my USB drive from the RV, connect it to a local system, and be back up and running.

The WiFi booster I set up on the RV also has worked pretty well to increase the range of the Raspberry Pi (and the laptops inside the RV) when on the road. When we get to a campsite that happens to offer WiFi, I just reset the booster and set up a new access point that amplifies the campsite signal for inside the RV. On one trip, I even took it out of the RV and into a hotel room to boost the weak signal.

Another Episode of "Seems Perfectly Feasible and Then Dies"—Script to Simplify the Process of Changing System Call Tables

David Howells put in quite a bit of work on a script, ./scripts/syscall-manage.pl, to simplify the entire process of changing the system call tables. With this script, it was a simple matter to add, remove, rename or renumber any system call you liked. The script also would resolve git conflicts, in the event that two repositories renumbered the system calls in conflicting ways.

Why did David need to write this patch? Why weren't system calls already fairly easy to manage? When you add a new system call, you add it to a master list, and then you add it to the per-architecture system call "tables", which are where the running kernel looks up which kernel function corresponds to which system call number. Kernel developers need to make sure system calls are represented in all relevant spots in the source tree. Renaming, renumbering and making other changes to system calls involves a lot of fiddly little details. David's script simply would do everything right—end of story, no problemo, hasta la vista.
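
For a flavor of what's involved, this is roughly the shape of one per-architecture table (abridged from arch/x86/entry/syscalls/syscall_64.tbl as of the 5.x era; every architecture maintains its own such file):


# <number> <abi> <name> <entry point>
0       common  read    __x64_sys_read
1       common  write   __x64_sys_write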

Arnd Bergmann remarked, "Ah, fun. You had already threatened to add that script in the past. The implementation of course looks fine, I was just hoping we could instead eliminate the need for it first." But, bowing to necessity, Arnd offered some technical suggestions for improvements to the patch.

However, Linus Torvalds swooped in at this particular moment, saying:

Ugh, I hate it.

I'm sure the script is all kinds of clever and useful, but I really think the solution is not this kind of helper script, but simply that we should work at not having each architecture add new system calls individually in the first place.

IOW, we should look at having just one unified table for new system call numbers, and aim for the per-architecture ones to be for "legacy numbering".

Maybe that won't happen, but in the _hope_ that it happens, I really would prefer that people not work at making scripts for the current nasty situation.

And the portcullis came crashing down.

It's interesting that, instead of accepting this relatively obvious improvement to the existing situation, Linus would rather leave it broken and ugly, so that someone someday somewhere might be motivated to do the harder-yet-better fix. And, it's all the more interesting given how extreme the current problem is. Without actually being broken, the situation requires developers to put a tremendous amount of care and effort into something that David's script could make trivial and easy. Even for such an obviously "good" patch, Linus gives thought to the policy and cultural implications, and the future motivations of other people working in that region of code.

Note: if you're mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Experts Attempt to Explain DevOps—and Almost Succeed

What is DevOps? How does it relate to other ideas and methodologies within software development? Linux Journal Deputy Editor and longtime software developer Bryan Lunduke isn't entirely sure, so he asks some experts to help him better understand the DevOps phenomenon.

The word DevOps confuses me.

I'm not even sure "confuses me" quite does justice to the pain I experience—right in the center of my brain—every time the word is uttered.

It's not that I dislike DevOps; it's that I genuinely don't understand what in tarnation it actually is. Let me demonstrate. What follows is the definition of DevOps on Wikipedia as of a few moments ago:

DevOps is a set of software development practices that combine software development (Dev) and information technology operations (Ops) to shorten the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

I'm pretty sure I got three aneurysms just by copying and pasting that sentence, and I still have no clue what DevOps really is. Perhaps I should back up and give a little context on where I'm coming from.

My professional career began in the 1990s when I got my first job as a Software Test Engineer (the people that find bugs in software, hopefully before the software ships, and tell the programmers about them). During the years that followed, my title, and responsibilities, gradually evolved as I worked my way through as many software-industry job titles as I could:

  • Automation Engineer: people that automate testing software.
  • Software Development Engineer in Test: people that make tools for the testers to use.
  • Software Development Engineer: aka "Coder", aka "Programmer".
  • Dev Lead: "Hey, you're a good programmer! You should also manage a few other programmers but still code just as much as you did before, but, don't worry, we won't give you much of a raise! It'll be great!"
  • Dev Manager: like a Dev Lead, with less programming, more managing.
  • Director of Engineering: the manager of the managers of the programmers.
  • Vice President of Technology/Engineering: aka "The big boss nerd man who gets to make decisions and gets in trouble first when deadlines are missed."

During my various times with fancy-pants titles, I managed teams that included:

DNA Geometry with cadnano

This article introduces a tool you can use to work on three-dimensional DNA origami. The package is called cadnano, and it's currently being developed at the Wyss Institute. With this package, you'll be able to construct and manipulate the three-dimensional representations of DNA structures, as well as generate publication-quality graphics of your work.

Because this software is research-based, you likely won't find it in the package repository for your favourite distribution, so you'll need to install it from the GitHub repository.

Since cadnano is a Python program written to use the Qt framework, you'll need to install some packages first. For example, in Debian-based distributions, you'll want to run the following command:


sudo apt-get install python3 python3-pip

I found that installation was a bit tricky, so I created a virtual Python environment to manage module installations.
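
For example (cadnano-env is just the name I chose; on Debian, this may require the python3-venv package):


$ python3 -m venv cadnano-env
$ source cadnano-env/bin/activate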

Once you're in your activated virtualenv, install the required Python modules with the command:


pip3 install pythreejs termcolor pytz pandas pyqt5 sip

After those dependencies are installed, grab the source code with the command:


git clone https://github.com/cadnano/cadnano2.5.git

This will grab the Qt5 version. The Qt4 version is in the repository https://github.com/cadnano/cadnano2.git.

After changing into the source directory, you can build and install cadnano with:


python setup.py install

Now your cadnano should be available within the virtualenv.

You can start cadnano simply by executing the cadnano command from a terminal window. You'll see an essentially blank workspace, made up of several empty view panes and an empty inspector pane on the far right-hand side.

Figure 1. When you first start cadnano, you get a completely blank work space.

In order to walk through a few of the functions available in cadnano, let's create a six-strand nanotube. The first step is to create a background that you can use to build upon. At the top of the main window, you'll find three buttons in the toolbar that will let you create a "Freeform", "Honeycomb" or "Square" framework. For this example, click the honeycomb button.

Figure 2. Start your construction with one of the available geometric frameworks.

Running GNOME in a Container

Containerizing the GUI separates your work and play.

Virtualization has always been a rich man's game, and more frugal enthusiasts—unable to afford fancy server-class components—often struggle to keep up. Linux provides free high-quality hypervisors, but when you start to throw real workloads at the host, its resources become saturated quickly. No amount of spare RAM shoved into an old Dell desktop is going to remedy this situation. If a properly decked-out host is out of your reach, you might want to consider containers instead.

Instead of virtualizing an entire computer, containers allow parts of the Linux kernel to be partitioned into several pieces. This occurs without the overhead of emulating hardware or running several identical kernels. A full GUI environment, such as GNOME Shell, can be launched inside a container with a little gumption.

You can accomplish this through namespaces, a feature built into the Linux kernel. An in-depth look at this feature is beyond the scope of this article, but a brief example sheds light on how these features can create containers. Each kind of namespace segments a different part of the kernel. The PID namespace, for example, prevents processes inside the namespace from seeing other processes running in the kernel. As a result, those processes believe that they are the only ones running on the computer. Each namespace does the same thing for other areas of the kernel as well. The mount namespace isolates the filesystem of the processes inside of it. The network namespace provides a unique network stack to processes running inside of it. The IPC, user, UTS and cgroup namespaces do the same for those areas of the kernel as well. When the seven namespaces are combined, the result is a container: an environment isolated enough to believe it is a freestanding Linux system.
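
You can poke at a single namespace with the unshare tool from util-linux. Run in a fresh PID namespace, ps sees only itself (a quick sketch; requires root):


$ sudo unshare --fork --pid --mount-proc ps -ef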

Container frameworks abstract the minutiae of configuring namespaces away from the user, but each framework has a different emphasis. Docker is the most popular and is designed to run multiple copies of identical containers at scale. LXC/LXD is meant to make it easy to create containers that mimic particular Linux distributions. In fact, earlier versions of LXC included a collection of scripts that created the filesystems of popular distributions. A third option is libvirt's lxc driver. Contrary to how it may sound, libvirt-lxc does not use LXC/LXD at all. Instead, the libvirt-lxc driver manipulates kernel namespaces directly. libvirt-lxc integrates into other tools within the libvirt suite as well, so the configuration of libvirt-lxc containers resembles that of virtual machines running in other libvirt drivers rather than that of a native LXC/LXD container. It is easy to learn as a result, even if the branding is confusing.
