Btrfs on CentOS: Living with Loopback

Btrfs on CentOS

Introduction

The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capabilities, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf those of its peers, earning it vocal proponents with great influence. Still, none can deny that btrfs is unfinished: many features are very new, and stability concerns remain for common functions.

Most of the intended goals of btrfs have been met. However, Red Hat famously cut btrfs support from its 7.4 release and has allowed the code to stagnate in its backported kernel since that time. In a seeming contradiction, the Fedora project has announced its intention to adopt btrfs as the default filesystem for variants of its distribution. SUSE, meanwhile, has maintained btrfs support for its own distribution and the greater community for many years.

For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (e.g. deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. Parts of the systemd init package, among them machinectl and systemd-nspawn, can also take advantage of btrfs. Despite these features, many usage patterns are not a good fit for btrfs: it is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.
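
To make the loopback approach concrete, here is a minimal sketch of a compressed btrfs image with a snapshot; the image path, size, and mountpoint are placeholders, and the btrfs-progs package (plus kernel btrfs support) is assumed to be available:

$ sudo truncate -s 10G /home/btrfs.img
$ sudo mkfs.btrfs /home/btrfs.img
$ sudo mkdir -p /mnt/btrfs
$ sudo mount -o loop,compress=zlib /home/btrfs.img /mnt/btrfs
$ sudo btrfs subvolume create /mnt/btrfs/data
$ sudo btrfs subvolume snapshot /mnt/btrfs/data /mnt/btrfs/data-snap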

How to Secure Your Website with OpenSSL and SSL Certificates

How to Secure Your Website with OpenSSL and SSL Certificates

The Internet has become the number one resource for news, information, events, and all things social. As most people know, there are many ways to create a website of your own and capture your own piece of the internet to share your stories, ideas, or even things you like with others. When doing so, it is important to make sure you stay protected on the internet, the same way you would in the real world. There are many steps you can take in the real world to stay safe; in this article, however, we will be talking about staying secure on the web with an SSL certificate.

OpenSSL is a command line tool we can use as a type of "bodyguard" for our web servers and applications. It can be used for a variety of HTTPS-related tasks, such as generating private keys and CSRs (certificate signing requests). This article will break down what OpenSSL is, what it does, and give examples of how to use it to keep your website secure. Most online web/domain platforms provide SSL certificates for a fixed yearly price. This method, although it takes a bit of technical knowledge, can save you some money and keep you secure on the web.

* For demonstration purposes, we will use testmastersite.com in the commands and examples below.

How this guide may help you:

  • Using OpenSSL to generate and configure CSRs
  • Understanding SSL certificates and their importance
  • Learning about certificate signing requests (CSRs)
  • Creating your own CSR and private key
  • Learning about OpenSSL and its common use cases

Requirements

OpenSSL

The first thing to do is generate a 2048-bit RSA key pair on your machine. The pair I'm referring to consists of your private and public keys. There are a number of tools you could use to do this, but for this example we will be working with OpenSSL.
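
A minimal sketch of that step with OpenSSL might look like the following; the file names follow the testmastersite.com example, and the subject fields are placeholders you would replace with your own organization's details:

$ openssl genrsa -out testmastersite.com.key 2048
$ openssl req -new -key testmastersite.com.key -out testmastersite.com.csr \
    -subj "/C=US/ST=State/L=City/O=Example Org/CN=testmastersite.com"

The .csr file is what you submit to a certificate authority, while the .key file stays private on your server.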

What are SSL certificates and who cares?

According to GlobalSign.com, an SSL certificate is a small data file that digitally binds a cryptographic key to an organization's details. When installed on a web server, it activates the padlock and the https protocol and allows secure connections from a web server to a browser. Let me break that down for you. An SSL certificate is like a bodyguard for your website. To confirm that a site is using an SSL certificate, you can typically check that the site's URL begins with https rather than http; the "s" stands for Secure. You can also verify this from the command line, as shown after the example below.

  • Example SECURE Site: https://www.testmastersite.com/
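
A quick sketch of inspecting a site's certificate with OpenSSL itself (any public HTTPS site will do in place of the example domain):

$ openssl s_client -connect testmastersite.com:443 -servername testmastersite.com \
    < /dev/null 2>/dev/null | openssl x509 -noout -subject -dates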

Pretty Good Privacy (PGP) and Digital Signatures

Pretty Good Privacy (PGP) and Digital Signatures

If you have ever sent a confidential email to someone in plaintext (most likely you have), have you ever wondered whether the mail could be tampered with or read by anyone during transit? If not, you should!

Any unencrypted email is like a postcard. It can be seen by anyone (crackers/security hackers, corporations, governments, or anyone with the required skills) during its transit.

In 1991 Phil Zimmermann, a free speech activist and anti-nuclear pacifist, developed Pretty Good Privacy (PGP), the first software available to the general public that utilized RSA (a public key cryptosystem, which we will discuss later) for email encryption and signing. Zimmermann, after having had a friend post the program on the worldwide Usenet, became the target of a U.S. government criminal investigation for illegal weapon export, because encryption tools were considered munitions at the time (the investigation was eventually dropped without charges). Zimmermann later founded PGP Inc., which is now part of Symantec Corporation.

In 1997 PGP Inc. submitted a standardization proposal to the Internet Engineering Task Force. The standard was called OpenPGP and was defined in 1998 in the IETF document RFC 2440. The latest version of the OpenPGP standard is described in RFC 4880, published in 2007.

Nowadays there are many OpenPGP-compliant products: the most widespread is probably GnuPG (GNU Privacy Guard, or GPG for short) which has been developed since 1999 by Werner Koch. GnuPG is free, open-source, and available for several platforms. It is a command-line only tool.

PGP is used for digital signatures, encryption (and decryption, obviously; nobody would use software that only encrypts!), compression, and Radix-64 conversion.
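
As a quick sketch of what this looks like in practice with GnuPG (the file names and recipient address below are placeholders; older GnuPG releases use --gen-key instead of --full-generate-key):

$ gpg --full-generate-key                                  # create your own key pair
$ gpg --import friend.asc                                  # import your correspondent's public key
$ gpg --encrypt --sign -r friend@example.com message.txt   # writes message.txt.gpg
$ gpg --decrypt message.txt.gpg                            # the recipient decrypts and verifies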

In this article, we will explain encryption and digital signatures.

So what is encryption, how does it work, and how does it benefit us?

Encryption (Confidentiality)

Encryption is the process of converting information into ciphertext, an unreadable form. A very simple example of encrypting text:

Hello this is Knownymous and this is a ciphertext.

Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.

If you read it carefully, you will notice that every letter has been replaced by the letter 13 places after it in the English alphabet, so 13 is the key here, needed to decrypt it. This is known as a Caesar cipher (yes, the method is named after Julius Caesar); a shift of exactly 13 is commonly called ROT13.
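
You can reproduce this particular cipher on any Linux shell with tr; because shifting by 13 twice returns the original text, the same command also decrypts it:

$ echo "Hello this is Knownymous and this is a ciphertext." | tr 'A-Za-z' 'N-ZA-Mn-za-m'
Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.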

Since then, many encryption techniques (cryptography) have been developed, such as Diffie–Hellman key exchange (DH) and RSA.

The techniques can be used in two ways:

Mark Text vs. Typora: Best Markdown Editor For Linux?

Mark Text vs. Typora: Best Markdown Editor For Linux?

Markdown is a widely used markup language, which is now not only used for creating documentation or notes but also for creating static websites (using Hugo or Jekyll). It is supported by major sites like GitHub, Bitbucket, GitLab, Stack Exchange, and Reddit.

Markdown follows a simple easy-to-read and easy-to-write plain text formatting syntax. By just using non-alphabetic characters like asterisk (*), hashtag (#), backtick (`), or dash (-), you can format text as bold, italics, lists, headings, tables and so on.

Now, to write in Markdown, you can choose any of the Markdown applications available for Windows, macOS, and Linux desktops. You can even use web-based in-browser Markdown editors like StackEdit. But if you’re specifically looking for the best Markdown editor for the Linux desktop, I present to you two Markdown editors: Mark Text and Typora.

I’ve also tried other popular Markdown apps available for Linux, such as Joplin, Remarkable, ReText, and Mark My Words. But the reason I chose Mark Text and Typora is their seamless live preview and distraction-free user interface. Unlike other Markdown editors, these two do not have a dual-panel (writing and preview window) interface, which is what makes them stand out from the rest.

Typora vs. Mark Text

Before I start discussing the extensive dissimilarities between Typora and Mark Text, let me briefly tell you the common features that both of them offer.

Similarities Between Mark Text And Typora

  • Real time preview
  • Export to HTML and PDF
  • GitHub Flavored Markdown
  • Inline styles
  • Code and Math Blocks
  • Support for Flowchart, Sequence diagram
  • Light and Dark Themes
  • Source Code, Typewriter, and Focus mode
  • Auto save
  • Paste images directly from clipboard
  • Available for Linux, macOS, and Windows

Differences Between Mark Text And Typora

Installation

If you’re a beginner using a non-Debian Linux distribution, you may find it difficult to install Typora. This is because Typora is packaged and tested only on Ubuntu; hence, you can install it easily on Debian-based distros like Ubuntu and Linux Mint by using commands or Debian packages, but not on other distros such as Arch or Void, where you have to work from binary packages and no official install command is available.
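
On a Debian-based distro the install is the usual package routine; a hedged sketch, assuming you have downloaded the .deb from Typora's site (the file name below is hypothetical):

# use the actual .deb file you downloaded
$ sudo apt install ./typora_0.9.96_amd64.deb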

Quick Tutorial on How to Use Shell Scripting in Linux: Coin Toss App

How to Use Shell Scripting in Linux

Simply put, a Shell Script is a program that is run by a UNIX/Linux shell. It is a file that contains a series of commands which are executed sequentially as if they were entered on the command line interface (CLI) or terminal.

In this quick tutorial on Shell Scripting, we will write a simple program to toss a coin. Basically, the output of our program should be either HEADS or TAILS (of course, randomly).

To start with, the first line of a shell script should indicate which interpreter/shell is to be used to execute the script. In this tutorial we will be using /bin/bash and it will be denoted as #!/bin/bash which is called a shebang!

Next, we will be using an internal Bash feature – a shell variable named RANDOM. It returns a random (actually, pseudorandom) integer in the range 0-32767. We will use this variable to get one of two values – either 0 (for HEADS) or 1 (for TAILS). This will be done via a simple arithmetic operation in shell using % (the modulus operator, which returns the remainder), $((RANDOM%2)), and the result will be stored in a variable named Result. So, the second line of our program becomes Result=$((RANDOM%2)) – Note that there should be no space around = (the assignment operator) when assigning a value to a variable in shell scripts.

At last, we just need to print HEADS if we got 0 or TAILS if we got 1 in the Result variable. Perhaps you guessed it by now: we will use if conditional statements for this. Within the conditions, we will compare the value of the Result variable with 0 and 1, and print HEADS or TAILS accordingly. For this, the integer comparison operator -eq (is equal to) is used to check whether the values of two operands are equal.

Ergo, our shell script looks like the following:

 

#!/bin/bash
Result=$((RANDOM%2))
if [[ ${Result} -eq 0 ]]; then
    echo HEADS
elif [[ ${Result} -eq 1 ]]; then
    echo TAILS
fi

 

Let’s say we name the script cointoss.sh – Note that the .sh extension is only there to make it obvious to users that the file is a shell script; Linux itself does not require file extensions.

Finally, to run the script we need to make it executable and that can be done by using the chmod command – chmod +x cointoss.sh

A few sample executions:

 

$ ./cointoss.sh

TAILS

$ ./cointoss.sh

HEADS

$ ./cointoss.sh

HEADS

$ ./cointoss.sh

TAILS

 

 

To wrap up, in this quick tutorial about writing shell scripts, we learned about shebang, RANDOM, variable assignment, an arithmetic operation using Modulus operator %, if conditional statements, integer comparison operator -eq and executing a shell script.

How To Kill Zombie Processes on Linux

How To Kill Zombie Processes on Linux

Killing Zombies!

Also known as a “defunct” or “dead” process – in simple words, a Zombie process is one that is dead but is still present in the system’s process table. Ideally, it should have been cleaned from the process table once it completed its job/execution, but for some reason its parent process didn’t clean it up properly after the execution.

In a just (Linux) world, a process notifies its parent process once it has completed its execution and has exited. The parent process then removes the child from the process table. If, at this step, the parent process is unable to read the process status from its child (the completed process), it won’t remove the entry, and thus the process, although dead, continues to exist in the process table – hence, it is called a Zombie!

In order to kill a Zombie process, we need to identify it first. The following command can be used to find zombie processes:

$ ps aux | egrep "Z|defunct"

Z in the STAT column and/or [defunct] in the last (COMMAND) column of the output would identify a Zombie process.
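
If you would like a harmless zombie to practice on, the following one-liner is a quick sketch: it forks a short-lived child and then replaces the parent shell with a sleep that never reaps it, leaving a defunct entry for about a minute:

$ bash -c 'sleep 1 & exec sleep 60' &
$ ps aux | egrep "Z|defunct"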

Now, practically speaking, you can’t kill a Zombie because it is already dead! What you can do is notify its parent process explicitly so that it retries reading the child (dead) process’s status and eventually cleans it from the process table. This can be done by sending a SIGCHLD signal to the parent process. The following command can be used to find the parent process ID (PPID) of the zombie:

$ ps -o ppid= -p <zombie-PID>

Once you have the Zombie’s parent process ID, you can use the following command to send a SIGCHLD signal to the parent process:

$ kill -s SIGCHLD <parent-PID>

However, if this does not help clear out the Zombie process, you will have to kill or restart its parent process; or, in the case of a huge surge in Zombie processes causing (or heading toward) a system outage, you will have no choice but to go for a system reboot. The following command can be used to kill the parent process:

$ kill -9 <parent-PID>

Note that killing a parent process will affect all of its child processes, so a quick double-check will be helpful to stay safe. Alternatively, if a few lingering zombie processes are not consuming much CPU or memory, it’s better to postpone killing the parent process or rebooting the system until the next scheduled maintenance window.

Linux Command Line Interface Introduction: A Guide to the Linux CLI

Linux Command Line Interface Introduction: A Guide to the Linux CLI

Let’s get to know the Linux Command Line Interface (CLI).

Introduction

The Linux command line is a text interface to your computer.

Also known as the shell, terminal, console, or command prompt, among many other names, it is a computer program intended to interpret commands.

It allows users to execute commands by typing them manually at the terminal, or to execute commands automatically from programs called “shell scripts”.

A bit of history

The Bourne Shell (sh) was originally developed by Stephen Bourne while working at Bell Labs.

Released in 1979 in the Version 7 Unix release distributed to colleges and universities.

The Bourne Again Shell (bash) was written as a free and open source replacement for the Bourne Shell.

Given the open nature of Bash, over time it has been adopted as the default shell on most Linux systems.

First look at the command line

Now that we have covered some basics, let’s open a terminal window and see how it looks!

First look at the command line

When a terminal is open, it presents you with a prompt.

Let's analyze the screenshot above:

Line 1: The shell prompt, which is composed of username@hostname:location$

  • Username: our username is called “john”
  • Hostname: the name of the system we are logged on to
  • Location: the working directory we are in
  • $: delimits the end of the prompt

After the $ sign, we can type a command and press Enter for this command to be executed.

Line 2: After the prompt, we typed the command whoami, which stands for “who am I”, and pressed [Enter] on the keyboard.
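
Putting it together, the exchange looks roughly like this in a terminal (the hostname "ubuntu-pc" is an assumption; yours will differ):

john@ubuntu-pc:~$ whoami
john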

How To Upgrade From Fedora 32 To Fedora 33 [CLI & Graphical Methods]

How To Upgrade From Fedora 32 To Fedora 33

Last week, a Red Hat-sponsored community project, Fedora, announced the availability of Fedora 33 Beta. It is a prerelease version of the upcoming Fedora 33 Linux distribution, whose final stable version will arrive in the last week of October.

Fedora 33 is an exciting release, as it brings a fundamental shift of the default filesystem from ext4 to btrfs for all Fedora desktop editions and spins, along with other new features and visual changes.

Here are some of the key updates that Fedora 33 Beta includes:

  • GNOME 3.38 desktop environment
  • Linux Kernel 5.8
  • GNU Nano as default terminal text editor
  • earlyOOM enabled by default in Fedora 33 KDE
  • Fedora IoT as an official edition
  • Package updates such as Ruby, Python, and Perl

For complete details of all features, you can check out the Fedora 33 change set.

Coming to the main topic: you can upgrade your current Fedora system to the Fedora 33 beta now, and then move to the final stable release simply by updating your system once it arrives at the end of October.

So, if you want to test all the new features of the upcoming Fedora 33, come along with me and upgrade your Fedora 32 Workstation to the Fedora 33 Beta Workstation using either of the two methods below.

If you’re comfortable playing with the terminal, you can upgrade Fedora 32 to 33 using the command line method; otherwise, follow the upgrade process using the graphical Software Center app.

What You Need To Do Before Upgrading Fedora Linux

Before you follow the steps to upgrade your Fedora Workstation, I would highly recommend backing up your data. Well, I didn’t encounter any problems while upgrading but if your data is very important, then I would say prevention is better than a cure.

After backing up your data, keep in mind that upgrading the system takes time. So, before you start this operation, set aside enough time to finish the upgrade process properly. Needless to say, you should also have a stable internet connection to download all the update data.

Lastly, I also want to mention that the new release may break some things that worked perfectly in your previous version. For example, I was using the Dash to Dock GNOME extension, which was broken in GNOME 3.38, so I needed to re-install it manually.

Now, let’s begin the migration to Fedora 33.

Upgrade Fedora Linux To New Release Using Terminal

First, open the terminal and run the following command to update your system by getting the latest software packages for Fedora 32.

$ sudo dnf upgrade --refresh
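
From there, the rest of the command-line upgrade follows the standard dnf system-upgrade flow; a brief sketch of the remaining steps for this release:

$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=33
$ sudo dnf system-upgrade reboot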

Linux Mint 20.1 “Ulyssa” Will Arrive In Mid-December With Chromium, WebApp Manager

Linux Mint 20.1 “Ulyssa” Will Arrive In Mid-December With Chromium, WebApp Manager

As the Linux Mint team progresses toward the first point release of the Linux Mint 20 series, its founder and project leader Clement Lefebvre has finally revealed the codename for Linux Mint 20.1: “Ulyssa”. He has also announced that Mint 20.1 will most probably arrive in mid-December (just before Christmas).

While you wait for its beta release so you can test Linux Mint 20.1, Clement has also shared some great news regarding the new updates and features that you’ll get in Mint 20.1.

First, the open source Chromium web browser and its updates will be packaged directly in the official Mint repositories. Having noticed delays between official Chromium releases and the versions available in Linux distros, the team has decided to set up its own packaging and build the Chromium package from upstream code, along with some patches from Debian and Ubuntu.

As a result, the first test build of Chromium is available to download from here.

In last month's blog, the Mint team introduced a new WebApp Manager, inspired by Peppermint OS and its SSB (Site Specific Browser) application manager, ICE. It is a WebApp management system that will debut in Linux Mint 20.1 to turn a website into a standalone desktop application.

Meanwhile, the Debian package of WebApp Manager v1.0.5 is already available to download; it comes with UI improvements, bug fixes and better translations.

 

Web App Manager

 

Another feature that you’ll be thrilled to see in Linux Mint 20.1 is the hardware video acceleration enabled by default in the Celluloid video player. Obviously, hardware-accelerated players will bring smoother playback, better performance and reduced CPU usage.

 

Hardware Video Acceleration

 

Besides the confirmed features, the Linux Mint team is also looking for feedback on a side project by Stephen Collins, “Sticky notes.” It is a note-taking app, which is still in the alpha stage. But if all goes well, who knows, you may see the Sticky Notes app in an upcoming Linux Mint release.

 

Sticky Notes

 

The Linux Mint team has also asked for opinions on IPTV (Internet Protocol Television). If you use M3U IPTV on your phone, tablet or smart TV, you can let them know. The team seems interested in developing an IPTV solution for the Linux desktop, either as a side project if the audience is small, or as an official Linux Mint project if demand is great enough.

The Preservation and Continuation of the Iconic Linux Journal

The Preservation and Continuation of the Iconic Linux Journal

Editor's note: Thank you to returning contributor Matthew Higgins for these reflections on what the return and preservation of Linux Journal means.

As we welcome the return of Linux Journal, it’s worth recognizing the impact of the September 22nd announcement of the magazine’s return and how it sparked feelings of nostalgia and excitement in thousands of people in the Linux community. That being said, it is also worth noting how much journalism has changed since Linux Journal’s first publication in 1994. The number of printed magazines has significantly decreased, and exclusively digital publication has become the norm in most cases. Linux Journal experienced this change in 2011 when the print version of the magazine was discontinued. Although many resented the change, it is far from the only magazine that embraced this trend. Despite the bitterness felt by some, embracing the digital version of Linux Journal allowed its writers and publishers to focus on taking full advantage of what the internet had to offer.

Despite the several advantages of an online publishing format, one issue that was becoming increasingly pressing for Linux Journal until September 22nd, 2020 was the survival of the Linux Journal website. If the website had shut down, the community would potentially have lost access to hundreds (or thousands) of articles and documents that were only published on the Linux Journal website and were not collectively available anywhere else. Even if an individual possessed the archive of the monthly issues of the journal, an attempt to republish it would be potentially legally problematic and would certainly show a lack of consideration for the rights of the authors who originally wrote the articles.

Thanks to Slashdot Media, however, the Linux community no longer needs to worry about the potential loss of the official Linux Journal archive for the foreseeable future. Given its recent return, it seems like an appropriate time to emphasize the important role that Linux Journal has played (and will continue to play) in the Linux community since 1994, and the opportunity to continue that role as the number of Linux users and enthusiasts continues to grow. The journal provides readers with access to several decades of articles and content that date back to the earliest days of Linux. Furthermore, Linux Journal preserves this content as an archive that tells a fascinating history of the kernel and the community built around it.

Installing Ubuntu with Two Hard Drives

Installing Ubuntu with Two Hard Drives

Many computers these days come with two drives: one SSD for fast boot speeds, and one that can be used for storage. My Dell G5 gaming laptop is a great example, with a 128GB NAND SSD and a 1TB SSD. When building out a Linux installation I have a few options. Option 1: install Ubuntu on one SSD for quick boot times and better performance when opening files or moving data, then mount the second drive and copy files to it whenever I want to back up files or move them off the first drive. Or, Option 2: install Ubuntu on the older drive with more storage but slower start-up speeds and use the 128GB drive as a small mount point.

However, as most Linux users are aware, solid state drives are much faster, and files, folders, and drives on a Linux system all have mount points that can be set up with ease.

In this article we’ll go over how to install Ubuntu Linux with the root (/) and /home directories on two separate drives – the root filesystem on the SSD and the home folder on the 1TB drive. This allows me to leverage the boot times and speed of the 128GB SSD and still have plenty of space to install Steam games or large applications.
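
Once the installer's manual partitioning step is done, the result is simply two mounts. A hedged sketch of the relevant /etc/fstab entries (the device names below are assumptions for an NVMe SSD plus a second SATA drive; real installs normally reference partitions by UUID):

# root (/) on the small, fast SSD; /home on the large drive
/dev/nvme0n1p2  /      ext4  defaults  0  1
/dev/sda1       /home  ext4  defaults  0  2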

This guide can be used for other scenarios as well. An example would be older or cheaper laptops that only have slow spinning hard drives rather than SSDs. If your computer is a bit on the older side (and has an SD card slot) but you want faster boot times, you can go out and buy an SD card and install the root (/) partition onto it for quick boot times, keeping the /home partition on the main drive for storage. This guide, like Linux itself, can be adapted to many other use cases.

Linux Journal is Back

Linux Journal

As of today, Linux Journal is back, and operating under the ownership of Slashdot Media.

As Linux enthusiasts and long-time fans of Linux Journal, we were disappointed to hear about Linux Journal closing its doors last year. It took some time, but fortunately we were able to get a deal done that allows us to keep Linux Journal alive now and indefinitely. It's important that amazing resources like Linux Journal never disappear.

We will begin publishing digital content again as soon as we can. If you're a former Linux Journal contributor or a Linux enthusiast who would like to get involved, please contact us and let us know the capacity in which you'd like to contribute. We're looking for people to cover Linux news, create Linux guides, and moderate the community and comments. We'd also appreciate any other ideas or feedback you might have. Right now, we don't have any immediate plans to resurrect the subscription/issue model, and will be publishing exclusively on LinuxJournal.com free of charge. Our immediate goal is to familiarize ourselves with the Linux Journal website and ensure it doesn't ever get shut down again.

Many of you are probably already aware of Slashdot Media, but for those who aren't, we own and operate Slashdot and SourceForge: two iconic open source software and technology websites that have been around for decades. We didn't always own SourceForge, but we acquired it in 2016, immediately began improving it, and have since come a long way in restoring and growing one of the most important resources in open source. We'd like to do the same here. We're ecstatic to be able to take the helm at Linux Journal, and ensure that this legendary Linux resource and community not only stays alive forever, but continues to grow and improve.

Reach out if you'd like to get involved!

Update Wednesday, September 23rd @ 3:43pm PST: Thanks for the great response to Linux Journal being revived! We're overwhelmed with the thousands of emails so it may take a bit of time to get back to you. This came together last minute as a way to avoid losing 25+ years of Linux history so bear with us as we get organized.

Newest IPFire Release Includes Security Fixes and Additional Hardware Support (IPFire 2.25 – Core Update 147)

IPFire 2.25 - Core Update 147

Michael Tremer, maintainer of the IPFire project, announced IPFire 2.25 Core Update 147 today. This is the newest IPFire release since Core Update 146 on June 29th.

IPFire 2.25 Core Update 147 includes some important security updates including a newer version of Squid web proxy that has patched recent vulnerabilities.

Beyond security updates, IPFire 2.25 Core Update 147 adds support for additional hardware, as well as enhanced support for existing hardware, because the new release ships with version 20200519 of the Linux firmware package.

IPFire 2.25 Core Update 147 also rectified a recurring issue relating to forwarding GRE connections.

In addition, the update improved IPFire on AWS configurations.

IPFire 2.25 Core Update 147 includes these updated packages: bind 9.11.20, dhcpcd 9.1.2, GnuTLS 3.6.14, gmp 6.2.0, iproute2 5.7.0, libassuan 2.5.3, libgcrypt 1.8.5, libgpg-error 1.38, OpenSSH 8.3p1, squidguard 1.6.0.

You can download IPFire 2.25 Core Update 147 here.

Linux Journal Ceases Publication: An Awkward Goodbye

Goodbye

IMPORTANT NOTICE FROM LINUX JOURNAL, LLC:

On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off and the company is left with no operating funds to continue in any capacity. The website will continue to stay up for the next few weeks, hopefully longer for archival purposes if we can make it happen.

–Linux Journal, LLC

 


Final Letter from the Editor: The Awkward Goodbye

by Kyle Rankin

Have you ever met up with a friend at a restaurant for dinner, then after dinner you both step out to the street and say a proper goodbye, only when you leave, you find out that you both are walking in the same direction? So now, you get to walk together awkwardly until the true point where you part, and then you have another, second goodbye, that's much more awkward.

That's basically this post. 

So, it was almost two years ago that I first said goodbye to Linux Journal and the Linux Journal community in my post "So Long and Thanks for All the Bash". That post was a proper goodbye. For starters, it had a catchy title with a pun. The post itself had all the elements of a proper goodbye: part retrospective, part "Thank You" to the Linux Journal team and the community, and OK, yes, it was also part rant. I recommend you read (or re-read) that post, because it captures my feelings about losing Linux Journal way better than I can muster here on our awkward second goodbye. 

Of course, not long after I wrote that post, we found out that Linux Journal wasn't dead after all! We all actually had more time together and got to work fixing everything that had caused us to die in the first place. A lot of our analysis of what went wrong and what we intended to change was captured in my article "What Linux Journal's Resurrection Taught Me about the FOSS Community" that we posted in our 25th anniversary issue.

Oops! Debugging Kernel Panics

debugging kernel panics

A look into what causes kernel panics and some utilities to help gain more information.

Working in a Linux environment, how often have you seen a kernel panic? When it happens, your system is left in a crippled state until you reboot it completely. And, even after you get your system back into a functional state, you're still left with the question: why? You may have no idea what happened or why it happened. Those questions can be answered though, and the following guide will help you root out the cause of some of the conditions that led to the original crash.

Figure 1. A Typical Kernel Panic

Let's start by looking at a set of utilities known as kexec and kdump. kexec allows you to boot into another kernel from an existing (and running) kernel, and kdump is a kexec-based crash-dumping mechanism for Linux.

Installing the Required Packages

First and foremost, your kernel should have the following components statically built into its image:


CONFIG_RELOCATABLE=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_DEBUG_INFO=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_PROC_VMCORE=y

You can find this in /boot/config-`uname -r`.
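
A quick way to check all of these at once is to grep the config of the running kernel; a minimal sketch:

$ grep -E "CONFIG_(RELOCATABLE|KEXEC|CRASH_DUMP|DEBUG_INFO|MAGIC_SYSRQ|PROC_VMCORE)=" \
    /boot/config-$(uname -r)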

Make sure that your operating system is up to date with the latest-and-greatest package versions:


$ sudo apt update && sudo apt upgrade

Install the following packages (I'm currently using Debian, but the same should and will apply to Ubuntu):


$ sudo apt install gcc make binutils linux-headers-`uname -r`
 ↪kdump-tools crash `uname -r`-dbg

Note: Package names may vary across distributions.

During the installation, you will be prompted with questions to enable kexec to handle reboots (answer whatever you'd like, but I answered "no"; see Figure 2).

Figure 2. kexec Configuration Menu

And to enable kdump to run and load at system boot, answer "yes" (Figure 3).

Figure 3. kdump Configuration Menu

Configuring kdump

Open the /etc/default/kdump-tools file, and at the very top, you should see the following:
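
On a Debian system, the top of that file usually contains something like the following (a hedged sketch; exact contents vary by version of kdump-tools):

USE_KDUMP=1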

Loadsharers: Funding the Load-Bearing Internet Person

loadsharers

The internet has a sustainability problem. Many of its critical services depend on the dedication of unpaid volunteers, because they can't be monetized and thus don't have any revenue stream for the maintainers to live on. I'm talking about services like DNS, time synchronization, crypto libraries—software without which the net and the browser you're using couldn't function.

These volunteer maintainers are the Load-Bearing Internet People (LBIP). Underfunding them is a problem, because underfunded critical services tend to have gaps and holes that could have been fixed if there were more full-time attention on them. As our civilization becomes increasingly dependent on this software infrastructure, that attention shortfall could lead to disastrous outages.

I've been worrying about this problem since 2012, when I watched a hacker I know wreck his health while working on a critical infrastructure problem nobody else understood at the time. Billions of dollars in e-commerce hung on getting the particular software problem he had spotted solved, but because it masqueraded as network undercapacity, he had a lot of trouble getting even technically-savvy people to understand where the problem was. He solved it, but unable to afford medical insurance and literally living in a tent, he eventually went blind in one eye and is now prone to depressive spells.

More recently, I damaged my ankle and discovered that although there is such a thing as minor surgery on the medical level, there is no such thing as "minor surgery" on the financial level. I was looking—still am looking—at a serious prospect of either having my life savings wiped out or having to leave all 52 of the open-source projects I'm responsible for in the lurch as I scrambled for a full-time job. Projects at risk include the likes of GIFLIB, GPSD and NTPsec.

That refocused my mind on the LBIP problem. There aren't many Load-Bearing Internet People—probably on the close order of 1,000 worldwide—but they're a systemic vulnerability made inevitable by the existence of common software and internet services that can't be metered. And, burning them out is a serious problem. Even under the most cold-blooded assessment, civilization needs the mean service life of an LBIP to be long enough to train and acculturate a replacement.

(If that made you wonder—yes, in fact, I am training an apprentice. Different problem for a different article.)

Alas, traditional centralized funding models have failed the LBIPs. There are a few reasons for this:

Documenting Proper Git Usage

git emblem

Jonathan Corbet wrote a document for inclusion in the kernel tree, describing best practices for merging and rebasing git-based kernel repositories. As he put it, it represented workflows that were actually in current use, and it was a living document that hopefully would be added to and corrected over time.

The inspiration for the document came from noticing how frequently Linus Torvalds was unhappy with how other people—typically subsystem maintainers—handled their git trees.

It's interesting to note that before Linus wrote the git tool, branching and merging was virtually unheard of in the Open Source world. In CVS, it was a nightmare horror of leechcraft and broken magic. Other tools were not much better. One of the primary motivations behind git—aside from blazing speed—was, in fact, to make branching and merging trivial operations—and so they have become.

One of the offshoots of branching and merging, Jonathan wrote, was rebasing—altering the patch history of a local repository. The benefits of rebasing are fantastic. They can make a repository history cleaner and clearer, which in turn can make it easier to track down the patches that introduced a given bug. So rebasing has a direct value to the development process.

On the other hand, used poorly, rebasing can make a big mess. For example, suppose you rebase a repository that has already been merged with another, and then merge them again—insane soul death.

So Jonathan explained some good rules of thumb. Never rebase a repository that's already been shared. Never rebase patches that come from someone else's repository. And in general, simply never rebase—unless there's a genuine reason.

Since rebasing changes the history of patches, it relies on a new "base" version, from which the later patches diverge. Jonathan recommended choosing a base version that was generally thought to be more stable rather than less—a new version or a release candidate, for example, rather than just an arbitrary patch during regular development.
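
In practice, that advice boils down to something like the following (the branch name is hypothetical, and v5.2-rc1 stands in for whichever well-known tag you choose as the new base):

$ git fetch origin
$ git rebase v5.2-rc1 my-feature-branch             # replay local-only patches onto the new base
$ git log --oneline v5.2-rc1..my-feature-branch     # review, then re-test the rebased commits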

Jonathan also recommended, for any rebase, treating all the rebased patches as new code, and testing them thoroughly, even if they had been tested already prior to the rebase.

"If", he said, "rebasing is limited to private trees, commits are based on a well-known starting point, and they are well tested, the potential for trouble is low."

Moving on to merging, Jonathan pointed out that nearly 9% of all kernel commits were merges. There were more than 1,000 merge requests in the 5.1 development cycle alone.