Engineers vs. Re-engineering

In an age when people are being re-engineered into farm animals for AI ranchers, it's the job of engineers to save humanity through true personal agency.

A few months ago, I was driving through Los Angeles when the Waze app on my phone told me to take the Stadium Way exit off the 110 freeway. About five other cars peeled off with me, and we became a caravan, snaking through side streets and back onto the freeway a few miles later. I knew Waze had to be in charge of us, since Waze is the navigation app of choice in Los Angeles, and it was beyond coincidence that all these cars took the same wild maze run through streets only locals knew well.

What was Waze up to here, besides offering its users (or a subset of them) a way around a jam? Was it optimizing traffic by taking some cars off the highway and leaving others on? Running an experiment only some AI understood? There was no way to tell. I doubt anyone at Waze could say exactly what was going on either. Algorithms are like that. So are the large and constantly changing data sets informing algorithms most of us with mobile devices depend on every day.

In Re-engineering Humanity, Brett Frischmann and Evan Selinger have dug deeply into what's going on behind the "cheap bliss" in our fully connected world.

What they say is that we are all subjects of techno-social engineering. In other words, our algorithmic conveniences are re-making us, much as the technologies and techniques of agriculture re-make farm animals. And, as with farming, there's an extraction business behind a lot of it.

They say "humanity's techno-social dilemma" is that "companies, institutions, and designers regularly treat us as programmable objects through personalized technologies that are attuned to our personal histories, present behavior and feelings, and predicted futures."

And we are not innocent of complicity in this. "We outsource memory, decision-making and even our interpersonal relations...we rely on the techno-social engineers' tools to train ourselves, and in doing so, let ourselves be trained."

There are obvious benefits to "delegating physical, cognitive, emotional and ethical labor to a third party", such as Waze, but there are downsides, which Brett and Evan number: 1) passivity, 2) decreased agency, 3) decreased responsibility, 4) increased ignorance, 5) detachment and 6) decreased independence. On the road to these diminished human states, we have "fetishised computers and idealized computation".

Doing both means "we work on problems best solved by computation", which in turn leads to "the imperialism of instrumental reason and the improper assumption that all problems are comprehensible in the language of computation and thus can be solved with the same set of social and technological tools".

New Issue: Linux Journal August 2018 with a Deep Dive into Containers


The recent rise in popularity of container technology within the data center is a direct result of its portability and its ability to isolate working environments, limiting an application's impact and overall footprint on the underlying computing system. To understand the technology completely, you first need to understand the many pieces that make it all possible. With that, may we introduce Linux Journal's Container issue.

Featured Articles in this Issue Include:

  • Linux Control Groups and Process Isolation 
  • Working with Linux Containers (LXC)
  • Orchestration with Kubernetes
  • The Search for a GUI Docker
  • Sharing Docker Containers Across DevOps Environments

Additional Articles:

  • The Chromebook Grows Up
  • FOSS Project Spotlight: SIT (Serverless Information Tracker)
  • #geeklife: weBoost 4G-X OTR Review
  • Astronomy on KDE
  • Road to RHCA: Bumps and Bruises and What I'm Studying
  • Tech Tip: Easy SSH Automation

Regular Columns Include:

  • From the Editor—Doc Searls: Engineers vs. Re-engineering
  • Kyle Rankin's Hack and /: Cleaning Your Inbox with Mutt
  • Reuven M. Lerner's At the Forge: Python and Its Community Enter a New Phase
  • Dave Taylor's Work the Shell: Creating the Concentration Game PAIRS with Bash
  • Zack Brown's diff -u: What's New in Kernel Development
  • Glyn Moody's Open Sauce: What Does "Ethical" AI Mean for Open Source?

Subscribers, you can download your August issue now.

Not a subscriber? It’s not too late. Subscribe today and receive instant access to this and ALL back issues since 1994!

Want to buy a single issue? Buy the August magazine or other single back issues in the LJ store.

GNU C Library v. 2.28 Released, Purism Update on Librem 5 Communication Apps, Istio v. 1.0 Now Available, 4.18 Kernel Delayed and City of Rome Switching to LibreOffice

News briefs for August 1, 2018.

The GNU C Library version 2.28 was released this morning. New features include updated localization data for ISO 14651 (matching Edition 4), which brings significant improvements to the collation of Unicode characters; support for building with Intel CET (Control-flow Enforcement Technology); support for ABSOLUTE symbols; and more. Packages for the 2.28 release are available now.

Purism posted an update on the Librem 5's communication apps yesterday. The "Calls" app is not only for regular calls, but is "designed to integrate a much higher level of security and privacy through end to end encrypted technologies in a very transparent way". You can see the repository of designs for the Calls app here. The plan for the "Messages" app is "to be able to handle regular text messages (SMS) while also handling secure end-to-end encrypted messages in a transparent way between two compatible devices", and that repository is available here.

Istio, the open-source service mesh, released version 1.0 yesterday. According to the post on Light Reading, "Istio provides visibility into container performance, support for user testing, updating controls and security for service interactions. The availability of version 1.0 of the software means those features are locked down, ready for deployment in production applications, and developers can write software to those features without worrying that the apps will break due to changes in future versions, as future Istio versions will be backwards-compatible with 1.0."

The 4.18 kernel will be delayed one week, LWN reports, due to "some late-discovered problems". Linus Torvalds posted on LKML: "I _prefer_ just the regular cadence of releases, but when I have a reason to delay, I'll delay."

The city of Rome is switching to open-source LibreOffice. The city installed LibreOffice alongside the proprietary alternative on all 14,000 of its PC workstations in April and is gradually making the change. There are 112 staff members, called "innovation champions", who favour free and open-source software and who are helping with the switch by explaining the reasons for moving to open source and by training co-workers (source: Open Source Observatory).

LMDE 3 Beta Released, IPFire 2.21 - Core Update 122 Now Available, Firefox Icon Redesign, New Rust Programming Language Book from No Starch Press and Google Chrome in VR with Daydream

News briefs for July 31, 2018.

The Linux Mint team announces the LMDE 3 "Cindy" Cinnamon Beta release. LMDE stands for Linux Mint Debian Edition, and its goal is "to see how viable the distribution would be and how much work would be necessary if Ubuntu was ever to disappear". It's as similar as possible to Linux Mint, but doesn't use Ubuntu. See the release notes for more information, and note that this is a beta release, not intended for production environments.

IPFire 2.21 - Core Update 122 has been released. According to the official release announcement, this update of the open-source firewall distribution is rebased on the long-term supported Linux kernel 4.14 and includes many improvements and bug fixes. The announcement also notes that the update is split into two parts: "First, you will need to install IPFire 2.19 - Core Update 121 and then, the second part will automatically be installed after. Please be patient and let the system complete the update. When everything is done, please reboot into the new kernel."

Mozilla is redesigning its Firefox icon, and its team of product and branding designers has begun "imagining a new system to embrace all of the Firefox products in the pipeline and those still in the minds of our Emerging Technologies group". They've created two new design approaches and are asking for your feedback. See the blog post to view the images and leave your feedback in the comments.

No Starch Press has just released The Rust Programming Language, the "undisputed go-to book on Rust", authored by two members of the Rust core team—Steve Klabnik and Carol Nichols—and featuring contributions from 42 community members. No Starch comments that "this huge undertaking is sure to make some waves and help build the Rust community". The book is published under an open license and is available for free via the Rust site or for purchase from No Starch in either print or ebook format.

Google Chrome is now available in virtual reality with Daydream. Android Central reports that "all of the features you know and love from Chrome on your computer and phone are available with its Daydream port, including voice search, any bookmarks you've saved, and an Incognito Mode for private browsing. In addition to those existing features, Google's also added a new Cinema Mode that 'optimizes web video for the best viewing experience in VR'."

The Search for a GUI Docker


Docker is anything but pretty; let's try to fix that. Here's a rundown of some GUI options available for Docker.

I love Docker. At first it seemed a bit silly to me for a small-scale implementation like my home setup, but after learning how to use it, I fell in love. The standard features are certainly beneficial. It's great not worrying that one application's dependencies will step on or conflict with another's. But most applications are good about playing well with others, and package management systems keep things in order. So why do I docker run instead of apt-get install? Individualized system settings.

With Docker, I can have three of the same apps running side by side. They even can use the same port (internally) and not conflict. My torrent client can live inside a forced-VPN network, and I don't need to worry that it will somehow "leak" my personal IP data. Heck, I can run apps that work only on CentOS inside my Ubuntu Docker server, and it just works! In short, Docker is amazing.
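As a sketch of what that looks like in practice, here are three instances of the same image running side by side, each listening on port 80 internally while Docker maps them to different host ports (the names, image and ports here are illustrative, not from my actual setup):

```shell
# Three copies of the same app; each one thinks it owns port 80.
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
docker run -d --name web3 -p 8083:80 nginx

# Each instance answers on its own host port:
curl -s http://localhost:8081/ >/dev/null && echo "web1 up"
```

The forced-VPN trick works similarly: running the torrent client with --net=container:&lt;vpn-container&gt; joins it to the VPN container's network namespace, so its traffic can only leave through the VPN.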

I just wish I could remember all the commands.

Don't get me wrong, I'm familiar with Docker. I use it for most of my server needs. It's my first go-to when testing a new app. Heck, I taught an entire course on Docker for CBT Nuggets (my day job). The problem is, Docker works so well, I rarely need to interact with it. So, my FIFO buffer fills up, and I forget the simple command-line options to make Docker work. That, plus my fondness for charts and graphs, is why I decided to install a Docker GUI. It was a bit of an adventure, so I thought I'd share the ins and outs of my experience.

My GUI Expectations

There are some things I don't really care about for a GUI. Oddly, one of the most common uses people have for a visual interface is the ability to create a Docker container. I actually don't mind using the command line when I'm creating a container, because it usually takes 5–10 attempts and tweaks before I get it how I want it. So for me, I'd like to have at least the following features:

  • A visual layout of all containers, whether or not they're running.
  • A way to start/stop/delete containers.
  • The ability to rename running containers, because I always forget to name them, and I get tired of seeing "chubby_cheetah" for container names.
  • A way to change the restart policy easily, so when I finally get a container right, I can have it --restart=always.
  • Show some statistics about the system and individual containers.
  • Read logs.
  • Work via web interface, so I can use it remotely.
  • Be a Docker container itself!
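For the record, the command-line incantations I keep forgetting, which cover most of the wish list above, look roughly like this (each command is an independent example, not a sequence; "chubby_cheetah" is the throwaway auto-generated name from my complaint above):

```shell
# Layout of all containers, whether or not they're running
docker ps -a

# Rename a running container to something memorable
docker rename chubby_cheetah torrent-vpn

# Change the restart policy on a container I finally got right
docker update --restart=always torrent-vpn

# Start/stop/delete
docker stop torrent-vpn
docker start torrent-vpn
docker rm torrent-vpn

# Statistics and logs
docker stats --no-stream
docker logs -f torrent-vpn
```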

My list of needs is fairly simple, but oddly, many GUIs left me wanting. Since everyone's desires are different, I'll go over the most popular options I tried, and mention some pros and cons.

Pulseway: Systems Management at Your Fingertips


In today's IT world, staying on top of anything and everything related to your most mission-critical applications or machines is increasingly important. With this need in mind, Pulseway provides a product of the same name built to give IT personnel the ability to monitor, manage and automate those very systems and the tasks or applications they host. Managing an entire computing ecosystem (consisting of both physical and virtual machines) never should be too difficult a task, and Pulseway proves it doesn't have to be.

I recently was fortunate to have the opportunity to take this product for a spin. It's extremely simple to install and configure, and if you need help, everything is well documented in the User Manual on the company's website.

So, how does it work? First, you need to register an account on the Pulseway website. Two offerings currently are available: a limited free offering and a paid subscription offering. As you might expect, the limited free account limits the numbers of nodes you can manage, and it also restricts users from leveraging additional features and functionality, including an antivirus, backup/disaster recovery and more.

Once registered, you can sign in to the website and even download the mobile application to your phone or tablet—either Android or iOS. The last step is to download and install the monitoring agents to your mission-critical machines.

In my case, I installed the DEB file on an instance of Ubuntu 18.04 LTS. Once it was installed, I modified the dæmon configuration file to use my account credentials, and as soon as the agent started, the dashboard on both the website and my mobile device saw the system, and it immediately began reporting CPU utilization, memory usage and so much more.
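Roughly reconstructed, the install steps looked like this; the exact package filename, configuration path and service name are from memory and may differ by Pulseway release, so treat them as assumptions:

```shell
# Install the downloaded monitoring agent
sudo dpkg -i pulseway_x64.deb

# Point the agent at your Pulseway account
# (the config path here is an assumption)
sudo nano /etc/pulseway/config.xml   # set your account credentials

# Start the dæmon and enable it at boot
sudo systemctl enable --now pulseway
```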

Figure 1. The Pulseway Web User Interface Dashboard

Both the web user interface and the mobile app share the same set of functions, so for the purposes of this review, I'm continuing with the mobile user interface.

Figure 2. The Pulseway Mobile User Interface Dashboard Summary of a Single System

Here's a rundown of some things you can do with Pulseway:

1) You can monitor historical CPU utilization to see how active or inactive your CPU cores are.

Figure 3. The Pulseway Mobile User Interface System CPU Graph

What Really IRCs Me: Slack

Find out how to reconnect to Slack over IRC using a Bitlbee libpurple plugin.

I'm an IRC kind of guy. I appreciate the simplicity of pure text chat, emoticons instead of emojis, and the vast array of IRC clients and servers to choose from, including the option to host your own. All of my interactive communication happens over IRC either through native IRC channels (like #linuxjournal on Freenode) or using a local instance of Bitlbee to act as an IRC gateway to other chat protocols. Because my IRC client supports connecting to multiple networks at the same time, I've been able to manage all of my personal chat, group chat and work chat from a single window that I can connect to from any of my computers.

Before I upgraded to IRC, my very first chat experience was in the late 1990s on a web-based Java chat applet, and although I hold some nostalgia for web-based chat because I met my wife on that network, chatting via a web browser just seems like a slow and painful way to send text across the internet. Also, shortly after we met, the maintainers of that network decided to shut down the whole thing, and since it was a proprietary network with proprietary servers and clients, when they shut it down, all those chat rooms and groups were lost.

What's old is new again. Instead of Java, we have JavaScript, and kids these days like to treat their web browsers like Emacs, and so every application has to run as a web app. This leads to the latest trend in chat: Slack. I say the latest trend, because it wasn't very long ago that Hipchat was hip, and before that, even Yammer had a brief day in the sun. In the past, a software project might set up a channel on one of the many public or private IRC servers, but nowadays, everyone seems to want to consolidate their projects under Slack's infrastructure. This means if you joined a company or a software project that started during the past few years, more likely than not, you'll need to use Slack.

I'm part of a few Slack networks, and up until recently, I honestly didn't think all that much about Slack, because unlike some other proprietary chat networks, Slack had the sense to offer IRC and XMPP gateways. This meant that you weren't required to use its heavy web app, but instead, you could use whatever client you preferred yet still connect to Slack networks. Sure, my text-based IRC client didn't show animated Giphy images or the 20 party-parrot gifs in a row, but to me, that was a feature. Unfortunately, Slack could no longer justify the engineering effort to backport web chat features to IRC and XMPP, so the company announced it was shutting down its IRC and XMPP gateways.
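The workaround the rest of this article builds toward is loading a Slack plugin into Bitlbee via libpurple. A rough sketch, assuming your Bitlbee was built with libpurple support and that the plugin lives at dylex/slack-libpurple (both assumptions worth verifying before you start):

```shell
# Build and install the libpurple Slack plugin
git clone https://github.com/dylex/slack-libpurple.git
cd slack-libpurple
make && sudo make install

# Restart Bitlbee so it picks up the new plugin, then,
# in Bitlbee's &bitlbee control channel:
#   account add slack you@yourteam.slack.com
#   account slack on
```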

Dell’s XPS 13 Developer Edition with Ubuntu 18.04 Preinstalled Now Available, GCC Conversion to Git Update, Lubuntu Shifts Focus, OpenMW Team Releases Version 0.44.0 and Serverless Inc Announces New Framework and Gateway

News briefs for June 30, 2018.

Dell's XPS 13 Developer Edition laptop with Ubuntu 18.04 LTS preinstalled is now available. According to Canonical's blog post, this launch marks the "the first availability of Ubuntu's latest LTS on a major OEM's hardware since its release in April. Canonical and Dell have worked together to certify Ubuntu 18.04 LTS on the XPS 13 to ensure a seamless experience from first use." You can purchase one now in the US, and they will be available in Europe in early August.

The GCC conversion from Subversion to Git is not going well, and the problem is more complex than Eric S. Raymond originally thought; it isn't just a matter of his needing more RAM. According to the story on Phoronix regarding the conversion, Raymond said, "that light at the end of the tunnel turned out to be an oncoming train." He went on to say "The GCC repo is just too large and weird...My tools need to get a lot faster, like more than an order of magnitude faster, before digging out of the bad situation the conversion is now in will be practical. Hardware improvements won't do that. Nobody knows how to build a machine that can crank a single process enough faster than 1.3GHz. And the problem doesn't parallelize."

The Lubuntu Linux distro is shifting from being an OS for older PCs to a modular and modern one. Softpedia News quotes Lubuntu developer Simon Quigley as saying "We decided that going forward, we need to adapt for the current state of the market." He also stated that Lubuntu would be a "functional yet modular distribution focused on getting out of the way and letting users use their computer".

The OpenMW team announces the release of version 0.44.0 of the "free, open source and modern" Morrowind engine. The new release brings several bug fixes and new features, including "a search bar for spells, a tab for advanced settings in the launcher, and multiple quicksaves." You can download it from here, and also view the release commentary video.

Serverless, Inc, has announced a $10 million Series A round led by Lightspeed Ventures, TechCrunch reports. In addition, the company also announced the release of the Serverless Platform, including the Serverless Framework, Serverless Dashboard and Serverless Gateway: "the Framework lets developers set up their serverless code across different cloud platforms and set conditions on the deployment such as function rules and infrastructure dependencies." The framework and gateway are open source, and the company will charge for "use of the dashboard to get insights into their serverless code or to access a hosted version of the gateway". You can host your own version of the gateway via the open-source version of the product.

An Interview with Heptio, the Kubernetes Pioneers


I recently spent some time chatting with Craig McLuckie, CEO of the leading Kubernetes solutions provider Heptio. Centered around both developers and system administrators, Heptio's products and services simplify and scale the Kubernetes ecosystem.

Petros Koutoupis: For all our readers who have yet to hear of the remarkable things Heptio is doing in this space, please start by telling us, who is Craig McLuckie?

Craig McLuckie: I am the CEO and founder of Heptio. My co-founder, Joe Beda, and I were two of the three creators of Kubernetes and previously started the Google Compute Engine, Google's traditional infrastructure as a service product. I also started the Cloud Native Computing Foundation (CNCF), of which I am a board member.

PK: Why did you start Heptio? What services does Heptio provide?

CL: Since we announced Kubernetes in June 2014, it has garnered a lot of attention from enterprises looking to develop a strategy for running their business applications efficiently in a multi-cloud world.

Perhaps the most interesting trend we saw that motivated us to start Heptio was that enterprises were looking at open-source technology adoption as the best way to create a common platform that spanned on-premises, private cloud, public cloud and edge deployments without fear of vendor lock-in. Kubernetes and the cloud native technology suite represented an incredible opportunity to create a powerful "utility computing platform" spanning every cloud provider and hosting option that also radically improves developer productivity and resource efficiency.

In order to get the most out of Kubernetes and the broader array of cloud native technologies, we believed a company needed to exist that was committed to helping organizations get closer to the vibrant Kubernetes ecosystem. Heptio offers both consultative services and a commercial subscription product that delivers the deep support and the advanced operational tooling needed to stitch upstream Kubernetes into modern enterprise IT environments.

PK: What makes Heptio relevant in the Container space?

Weekend Reading: Raspberry Pi Projects


The Raspberry Pi has been very popular among hobbyists and educators ever since its launch in 2012. It's a credit-card-sized single-board computer with a Broadcom BCM 2835 SoC, 256MB to 512MB of RAM, USB ports, GPIO pins, Ethernet, HDMI out, camera header and an SD card slot. The most attractive aspects of the Raspberry Pi are its low cost of $35 and large user community following.

Raspberry Strudel: My Raspberry Pi in Austria by Kyle Rankin:  In this article, I explain how I was able to colocate a Raspberry Pi and the steps I went through to prepare it for remote management.

Raspberry Pi: the Perfect Home Server by Brian Trapp: If you've got several different computers in need of a consistent and automated backup strategy, the RPi can do that. If you have music and video you'd like to be able to access from almost any screen in the house, the RPi can make that happen too. Maybe you have a printer or two you'd like to share with everyone easily? The Raspberry Pi can fill all those needs with a minimal investment in hardware and time.

Securi-Pi: Using the Raspberry Pi as a Secure Landing Point by Bill Childers: Set up a Raspberry Pi to act as an OpenVPN endpoint, SSH endpoint and Apache server—with all these services listening on port 443 so networks with restrictive policies aren't an issue.

Real-Time Rogue Wireless Access Point Detection with the Raspberry Pi by Chris Jenks: A couple years ago, I decided to go back to school to get a Bachelor's degree. I needed to find a single credit hour to fill for graduation. That one credit hour became an independent study on using the Raspberry Pi (RPi) to create a passive real-time wireless sensor network. I share my work with you here.

Flash ROMs with a Raspberry Pi by Kyle Rankin: In this article, I describe the steps I performed to turn a regular Raspberry Pi running Raspbian into a BIOS-flashing machine.

Kata Containers Now Available as a Snap, First Point Release of Ubuntu 18.04 LTS, New NetSpectre Attack Vulnerability, IBM and Google Launch Knative, and Google Play Store Bans Cryptocurrency Mining Apps

News briefs for July 27, 2018.

Kata Containers, the lightweight, fast booting, open-source VM, is now available as a Snap from the Snap Store. According to the Ubuntu blog post, "Kata Containers are compatible with the OCI specification for Docker and CRI for Kubernetes which means they can be managed like a traditional container. Its agnostic design allows Kata Containers to run on multiple architectures like amd64, ppc64 and aarch64 with different hypervisors including QEMU."

The first point release for Ubuntu 18.04 LTS was released yesterday. New in 18.04.1 is the move from Unity to GNOME Shell, Captive Portal, Night Light, color emojis and more. You can download it from here.

There's a new network-based Spectre V1-style attack vulnerability called NetSpectre that "doesn't require exploited code to be running on the target machine", Phoronix reports. However, "if your system is patched against the other known Spectre vulnerabilities, it's believed you should be largely safe from NetSpectre." See the whitepaper for more info on this speculative attack vulnerability.

IBM and Google announced a new open-source serverless cloud computing project this week called Knative. According to eWeek, Knative "will serve as a bridge for serverless computing to coexist and integrate with containers atop Google Kubernetes in a cloud-native computing system".

The Google Play Store has banned cryptocurrency mining apps. Its new policy states "We don't allow apps that mine cryptocurrency on devices", and the company will begin removing apps from the store that violate this policy. See the Slashdot post for more details.

A Git Origin Story

A look at Linux kernel developers' various revision control solutions through the years, Linus Torvalds' decision to use BitKeeper and the controversy that followed, and how Git came to be created.

Originally, Linus Torvalds used no revision control at all. Kernel contributors would post their patches to the Usenet group, and later to the mailing list, and Linus would apply them to his own source tree. Eventually, Linus would put out a new release of the whole tree, with no division between any of the patches. The only way to examine the history of his process was as a giant diff between two full releases.

This was not because there were no open-source revision control systems available. CVS had been around since the 1980s, and it was still the most popular system around. At its core, it would allow contributors to submit patches to a central repository and examine the history of patches going into that repository.

There were many complaints about CVS though. One was that it tracked changes on a per-file basis and didn't recognize a larger patch as a single revision, which made it hard to interpret the past contributions of other developers. There also were some hard-to-fix bugs, like race conditions when two conflicting patches were submitted at the same time.

Linus didn't like CVS, partly for the same reasons voiced by others and partly for his own reasons that would become clear only later. He also didn't like Subversion, an open-source project that emerged around the start of the 2000s and had the specific goal of addressing the bugs and misfeatures in CVS.

Many Linux kernel developers were unhappy with the lack of proper revision control, so there always was a certain amount of community pressure for Linus to choose something from one of the available options. Then, in 2002, he did. To everyone's shock and dismay, Linus chose BitKeeper, a closed-source commercial system developed by the BitMover company, run by Larry McVoy.

The Linux kernel was the most important open-source project in history, and Linus himself was the person who first discovered the techniques of open-source development that had eluded the GNU project, and that would be imitated by open-source projects for decades to come, right up to the present day. What was Linus thinking? How could he betray his community and the Open Source world like this? Those were the thoughts of many when Linus first started using BitKeeper for kernel development.

OSCON at 19, Open Source at 20, Linux at 27

Cooperation isn't as exciting as conflict, but it does get the most done.

Now that Linux has achieved World Domination, seems it has nothing but friends. Big ones.

That was my first take-away from O'Reilly's 19th OSCON in Portland, Oregon. This one celebrated 20 years of Open Source, a category anchored by Linux, now aged 27. The biggest sponsors with the biggest booths—Microsoft, AWS, Oracle, Salesforce, Huawei—are all rare-metal-level members of the Linux Foundation, a collection that also includes pretty much every tech brand you can name, plus plenty you can't. Hats off to Jim Zemlin and the LF crew for making that happen and for continuing to grow it.

My second take-away was finding these giants at work on collective barn-raising. For example, in his keynote "The whole is greater than the sum of its parts" (sponsored by IBM), Chris Ferris, IBM's CTO for Open Technology, told the story behind Hyperledger, a collaborative effort to foster cross-industry blockchain technologies. Hyperledger was started by Chris and friends at IBM and handed over to the Linux Foundation, where it is headed by Brian Behlendorf, whose long history with open source began with Apache in the mid-1990s.

In an interview I did with Chris afterwards, he enlarged on examples of collaboration between development projects within Hyperledger, most of which are led by large companies that are more accustomed to competing than to cooperating. A corollary point might be that the best wheels are the ones not re-invented.

I got the same impressions from conversations with folks from AWS and Microsoft.

In the case of AWS, I was surprised to learn from Adrian Cockcroft, VP of Cloud Architecture at Amazon, that the company had found itself in the ironic position of supporting a massive amount of open-source development by others on its Linux-based clouds while also not doing much to cooperate on its own open-source developments. Now, led by Adrian, it's working hard at changing that. (To help unpack what's going on there, I got shots of Adrian and some of his slides, which you can step through starting here in our OSCON 2018 photo album.)

New Version of KStars, Google Launches Edge TPU and Cloud IoT Edge, Lower Saxony to Migrate from Linux to Windows, GCC 8.2 Now Available and VMware Announces VMworld 2018

News briefs for July 26, 2018.

A new version of KStars—the free, open-source, cross-platform astronomy software—was released today. Version 2.9.7 includes new features, such as improvements to the polar alignment assistant and support for Ekos Live, as well as stability fixes. See the release notes for all the changes.

Google yesterday announced two new products: Edge TPU, a new "ASIC chip designed to run TensorFlow Lite ML models at the edge", and Cloud IoT Edge, which is "a software stack that extends Google Cloud's powerful AI capability to gateways and connected devices". Google states that "By running on-device machine learning models, Cloud IoT Edge with Edge TPU provides significantly faster predictions for critical IoT applications than general-purpose IoT gateways—all while ensuring data privacy and confidentiality."

The state of Lower Saxony in Germany is set to migrate away from Linux and back to Windows, following Munich's similar decision, ZDNet reports. The state currently has 13,000 workstations running openSUSE that it plans to migrate to "a current version of Windows" because "many of its field workers and telephone support services already use Windows, so standardisation makes sense". It's unclear how many years this migration will take.

GCC 8.2 was released today. This release is a bug-fix release and contains "important fixes for regressions and serious bugs in GCC 8.1 with more than 99 bugs fixed since the previous release", according to Jakub Jelinek's release statement. You can download GCC 8.2 here.

VMware announces VMworld 2018, which will be held August 26–30 in Las Vegas. The theme for the conference is "Possible Begins with You", and the event will feature keynotes by industry leaders, user-driven panels, certification training and labs. Topics will include "Data Center and Cloud, Networking and Security, Digital Workspace, Leading Digital Transformation, and Next-Gen Trends including the Internet of Things, Network Functions Virtualization and DevOps". For more information and to register, go here.

Progress with Your Image

Learn a few different ways to get a progress bar for your dd command.

The dd tool has been a critical component on the Linux (and UNIX) command line for ages. You know a command-line tool is important if it has only two letters, and dd is no exception. What I love about it in particular is that it truly embodies the sense of a powerful tool with no safety features, as described in Neal Stephenson's In the Beginning was the Command Line. The dd command does something simple: it takes input from one file and outputs it to another file. And since in UNIX "everything is a file", dd doesn't care whether the output file is another file on your disk, a partition or even your active hard drive; it will happily overwrite it! Because of this, dd fits in that immortal category of sysadmin tools that I type out and then pause for five to ten seconds, examining the command, before I press Enter.

Unfortunately, dd has fallen out of favor lately, and some distributions even will advise using tools like cp or a graphical tool to image drives. This is largely out of the concern that dd doesn't wait for the disk to sync before it exits, so even if it thinks it's done writing, that doesn't mean all of the data is on the output file, particularly if it's over slow I/O like in the case of USB flash storage. The other reason people have tended to use other imaging tools is that traditionally dd doesn't output any progress. You type the command, and then if the image is large, you just wait, wait and then wait some more, wondering if dd will ever complete.

But, it turns out that there are quite a few different ways to get progress output from dd, so I cover a few popular ones here, all based on the following dd command to image an ISO file to a disk:

$ sudo dd if=/some/file.iso of=/dev/sdX bs=1M

Option 1: Use pv

Like many command-line tools, dd can accept input from a pipe and output to a pipe. This means if you had a tool that could measure the data flowing over a pipe, you could sandwich it in between two different dd commands and get live progress output. The pv (pipe viewer) command-line tool is just such a tool, so one approach is to install pv using your distribution's packaging tool and then create a pv and dd sandwich:
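Applied to the base command above, the sandwich splits the work between two dd processes with pv in the middle measuring the data as it flows. The version below uses safe stand-in paths so it can be run harmlessly; for real imaging, you'd substitute if=/some/file.iso on the reading side and of=/dev/sdX on the writing side (with sudo), as in the base command:

```shell
# pv sits between a reading dd and a writing dd, printing throughput,
# total bytes and elapsed time as data moves through the pipe.
# (Safe stand-in paths; for real imaging use if=/some/file.iso and
# of=/dev/sdX with sudo, as in the base command.)
dd if=/dev/zero bs=1M count=8 2>/dev/null | pv | dd of=/tmp/sandwich.img bs=1M 2>/dev/null
```

If you also tell pv the expected size with -s (for example, -s 8m), it can show a percentage complete and an ETA rather than just raw throughput.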

Red Hat’s “Road to A.I.” Film, Google Chrome Marks HTTP Connections Not Secure, BlueData Launches BlueK8s Project, Linux Bots Account for 95% of DDoS Attacks and Tron Buys BitTorrent

News briefs for July 25, 2018.

Red Hat's Road to A.I. film has been chosen as an entry in the 19th Annual Real to Reel International Film Festival. According to the Red Hat blog post, this "documentary film looks at the current state of the emerging autonomous vehicle industry, how it is shaping the future of public transportation, why it is a best use case for advancing artificial intelligence and how open source can fill the gap between the present and the future of autonomy." The Road to A.I. is the fourth in Red Hat's Open Source Stories series, and you can view it here.

Google officially has begun marking HTTP connections as not secure for all Chrome users, as it promised in a security announcement two years ago. The goal is eventually "to make it so that the only markings you see in Chrome are when a site is not secure, and the default unmarked state is secure". Also, beginning in October 2018, Chrome will start showing a red "not secure" warning when users enter data on HTTP pages.

BlueData launched the BlueK8s project, which is an "open source project that seeks to make it easier to deploy big data and artificial intelligence (AI) application workloads on top of Kubernetes", Container Journal reports. The BlueK8s "project is based on container technologies the company developed originally to accelerate the deployment of big data based on Hadoop and Apache Spark software".

According to the latest Kaspersky Lab report, Linux bots now account for 95% of all DDoS attacks. A post on Beta News reports that these attacks are based on some rather old vulnerabilities, such as one in the Universal Plug-and-Play protocol, which has been around since 2001, and one in the CHARGEN protocol, which was first described in 1983. See also the Kaspersky Lab blog for more Q2 security news.

BitTorrent has been bought by Tron, a blockchain startup, for "around $126 million in cash". According to the story on Engadget, Tron's founder Justin Sun says that this deal now makes his company the "largest decentralized Internet ecosystem in the world."

Some of Intel’s Effort to Repair Spectre in Future CPUs

Dave Hansen from Intel posted a patch and said, "Intel is considering adding a new bit to the IA32_ARCH_CAPABILITIES MSR (Model-Specific Register) to tell when RSB (Return Stack Buffer) underflow might be happening. Feedback on this would be greatly appreciated before the specification is finalized." He explained that the RSB is "a microarchitectural structure that attempts to help predict the branch target of RET instructions. It is implemented as a stack that is pushed on CALL and popped on RET. Being a stack, it can become empty. On some processors, an empty condition leads to use of the other indirect branch predictors which have been targeted by Spectre variant 2 (branch target injection) exploits."

The new MSR bit, Dave explained, would tell the CPU not to rely on data from the RSB if the RSB was already empty.

Linus Torvalds replied:

Yes, please. It would be lovely to not have any "this model" kind of checks.

Of course, your patch still doesn't allow for "we claim to be skylake for various other independent reasons, but the RSB issue is fixed".

So it might actually be even better with _two_ bits: "explicitly needs RSB stuffing" and "explicitly fixed and does _not_ need RSB stuffing".

And then if neither bit is set, we fall back to the implicit "we know Skylake needs it".

If both bits are set, we just go with a "CPU is batshit schitzo" message, and assume it needs RSB stuffing just because it's obviously broken.

On second thought, however, Linus withdrew his initial criticism of Dave's patch, regarding claiming to be skylake for non-RSB reasons. In a subsequent email Linus said, "maybe nobody ever has a reason to do that, though?" He went on to say:

Virtualization people may simply want the user to specify the model, but then make the Spectre decisions be based on actual hardware capabilities (whether those are "current" or "some minimum base"). Two bits allow that. One bit means "if you claim you're running skylake, we'll always have to stuff, whether you _really_ are or not".

Arjan van de Ven agreed it was extremely unlikely that anyone would claim to be skylake unless it was to take advantage of the RSB issue.

That was it for the discussion, but it's very cool that Intel is consulting with the kernel people about these sorts of hardware decisions. It's an indication of good transparency and an attempt to avoid the fallout of making a bad technical decision that would incur further ire from the kernel developers.


Cooking with Linux (without a Net): Backups in Linux, LuckyBackup, gNewSense and PonyOS

Please support Linux Journal by subscribing or becoming a patron.

It's Tuesday, and it's time for Cooking with Linux (without a Net), where I do some Linuxy and open-source stuff, live, on camera, and without the benefit of post-video editing—therefore providing a high probability of falling flat on my face. And now, the classic question: What shall I cover? Today, I'm going to look at backing up your data using the command line and a graphical front end. I'm also going to look at the free-iest and open-iest distribution ever. And, I'm also going to check out a horse-based operating system that is open source but supposedly not Linux. Hmm...

Security Keys Work for Google Employees, Canonical Releases Kernel Update, Plasma 5.14 Wallpaper Revealed, Qmmp Releases New Version, Toshiba Introduces New SSDs

News briefs for July 24, 2018.

Google requires all of its 85,000 employees to use security keys, and it hasn't had one case of account takeover by phishing since, Engadget reports. The security key method is considered to be safer than two-factor authentication that requires codes sent via SMS.

Canonical has released a new kernel update to "fix the regression causing boot failures on 64-bit machines, as well as for OEM processors and systems running on Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and other cloud environments", according to Softpedia News. Users of Ubuntu 18.04 and 16.04 LTS should update to the new kernel version as soon as possible. See the Linux kernel regression security notice (USN-3718-1) for more information.

New Plasma 5.14 wallpaper, "Cluster", has been revealed on Ken Vermette's blog. He writes that it's "the first wallpaper for KDE produced using the ever excellent Krita." You can see the full image here.

Qmmp, the Qt-based Linux audio player, recently released version 1.2.3. Changes in the new version include adding qmmp 0.12/1.3 config compatibility, disabling global shortcuts during configuration, fixing some gcc warnings and metadata updating issues and more. Downloads are available here.

Toshiba introduces a new lineup of SSDs based on its 96-layer, BiCS FLASH 3D flash memory. It's the first SSD to use this "breakthrough technology", and "the new XG6 series is targeted to the client PC, high-performance mobile, embedded, and gaming segments—as well as data center environments for boot drives in servers, caching and logging, and commodity storage." According to the press release, "the XG6 series will be available in capacities of 256, 512 and 1,024 gigabytes", and it's currently available only as samples to select OEM customers.

Building a Bare-Bones Git Environment

How to migrate repositories from GitHub, configure the software and get started with hosting Git repositories on your own Linux server.

With the recent news of Microsoft's acquisition of GitHub, many people have chosen to research other code-hosting options. Self-hosted solutions like GitLab offer a polished UI similar in functionality to GitHub's, but they require reasonably well-powered hardware and provide many features that casual Git users won't necessarily find useful.

For those who want a simpler solution, it's possible to host Git repositories locally on a Linux server using a few basic pieces of software that require minimal system resources and provide basic Git functionality including web accessibility and HTTP/SSH cloning.

In this article, I show how to migrate repositories from GitHub, configure the necessary software and perform some basic operations.

Migrating Repositories

The first step in migrating away from GitHub is to relocate your repositories to the server where they'll be hosted. Since Git is a distributed version control system, a cloned copy of a repository contains all information necessary for running the entire repository. As such, the repositories can be cloned from GitHub to your server, and all the repository data, including commit logs, will be retained. If you have a large number of repositories, this could be a time-consuming process. To ease this process, here's a bash function to return the URLs for all repositories hosted by a specific GitHub user:

genrepos() {
    if [ -z "$1" ]; then
        echo "usage: genrepos <github username>"
        return 1
    fi
    # Start at the user's repository listing and follow the "Next"
    # pagination links until there are none left. (The final sed
    # expression is reconstructed here; the original was truncated.)
    repourl="https://github.com/$1?tab=repositories"
    while [ -n "$repourl" ]; do
        curl -s "$repourl" | awk '/href.*codeRepository/
 {print gensub(/^.*href="\/(.*)\/(.*)".*$/,
 "\\1/\\2.git","g",$0); }'
        repourl=$(curl -s "$repourl" |
            grep '>Previous<.*href.*>Next<' |
            grep -v 'disabled">Next' |
            sed 's/^.*<a href="//;s/">Next.*$//')
    done
}

This function accepts a single argument: the GitHub user name. If its output is piped into a while loop that reads each line, each line can be fed into a git clone statement. The repositories will be cloned into the /opt/repos directory:
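A sketch of that loop, assuming genrepos is defined as above, /opt/repos already exists, and youruser stands in for the actual GitHub account name:

```shell
# Feed each "user/repo.git" line emitted by genrepos into git clone,
# naming each local clone in /opt/repos after the repository.
# "youruser" is a placeholder account name.
genrepos youruser | while read -r repo; do
    git clone "https://github.com/$repo" "/opt/repos/$(basename "$repo" .git)"
done
```

The basename call strips both the leading user name and the trailing .git so that, for example, youruser/project.git lands in /opt/repos/project.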