Qt and LG Collaborating on webOS for Embedded Smart Devices, Valve to Continue Steam Gaming on Ubuntu, Qt Creator 4.10 Beta2 Released, The Official Raspberry Pi Beginner’s Guide Updated for Raspberry Pi 4 and Opera 62 Now Available

News briefs for June 28, 2019.

Qt recently announced an expansion of its partnership with LG Electronics to collaborate on making open-source webOS the platform of choice for embedded smart devices. From the press release: "In order to meet and exceed challenging requirements and navigate the distinct market dynamics of the automotive, smart home and robotics industries, LG selected Qt as its business and technical partner for webOS. The most impactful technology trends of recent years, including AI, IoT and automation, require a new approach to the user experience (UX), and UX has been one of Qt's primary focus areas since the company's founding. Through the partnership, Qt will provide LG with the most powerful end-to-end, integrated and hardware-agnostic development environment for developers, engineers and designers to create innovative and immersive apps and devices. In addition, webOS will officially become a reference operating system of Qt."

Valve will continue Steam gaming on Ubuntu, now that Canonical announced it won't drop 32-bit software support in Ubuntu after all. ZDNet reports that "Ubuntu will no longer be called out as 'the best-supported path for desktop users.' Instead, Valve is re-thinking how it wants to approach distribution support going forward. There are several distributions on the market today that offer a great gaming desktop experience such as Arch Linux, Manjaro, Pop!_OS, Fedora, and many others."

Qt Creator 4.10 Beta2 was released today. The most notable fix in this version addresses a regression in the signing option for iOS devices. See the change log for all the bug fixes and new features, and go here to download the open-source version.

Raspberry Pi Press has released The Official Raspberry Pi Beginner's Guide, which has been fully updated for Raspberry Pi 4 and the latest version of the Raspbian OS (Buster). You can order a hard copy of the book here or get the free PDF here.

Opera 62 was released yesterday. Updates include an improved Dark Mode and support for the Windows Dark theme. It also adds an option to connect your browsing history to Speed Dial, so you can quickly return to tasks you've started. See the full changelog for more details.

Without a GUI–How to Live Entirely in a Terminal


Sure, it may be hard, but it is possible to give up graphical interfaces entirely—even in 2019.

About three years back, I attempted to live entirely on the command line for 30 days—no graphical interface, no X server, just a big old terminal and me, for a month.

I lasted all of ten days.

Why did I attempt this? What on Earth would compel a man to give up all the trappings and features of modern graphical desktops and, instead, artificially restrict himself to using nothing but text-based, command-line software, as if he were stuck in the early 1980s?

Who knows. Clearly, I make questionable decisions.

But you know, if I'm being honest, the experience was not entirely unpleasant. Sure, I missed certain niceties from the graphical side of things, but there were some distinct benefits to living in a shell. My computers, even the low-powered ones, felt faster (command-line software tends to be a whole lot lighter and leaner than its graphical counterparts). Plus, I was able to focus and get more work done without all the distractions of a graphical desktop, which wasn't bad.

What follows are the applications I found myself relying upon the most during those fateful ten days, separated into categories. In some cases, these are applications I currently use over (or in addition to) their graphical equivalents.

Quite honestly, it is entirely possible to live completely without a GUI (more or less)—even today, in 2019. And, these applications make it possible—challenging, but possible.

Web Browsing

Plenty of command-line web browsers exist. The classic Lynx typically comes to mind, as does ELinks. Both are capable of browsing basic HTML websites just fine. In fact, the experience of doing so is rather enjoyable. Sure, most websites don't load properly in the "everything is a dynamically loading, JavaScript thingamadoodle" future we live in, but the ones that do load, load fast, and free of distractions, which makes reading them downright enjoyable.

But for me, personally, I recommend w3m.

w3m

Figure 1. Browsing Wikipedia with Inline Images Using w3m

w3m supports inline images (by installing the w3m-img package)—seriously, a web browser with image support, inside the terminal. The future is now.

It also makes filling out web forms easy—well, maybe not easy, but at least doable—by opening a configured text editor (such as nano or vim) for entering form text. It feels a little weird the first time you do it, but it's surprisingly intuitive.
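On Debian-style systems, the setup described above takes just two packages (package names and the example URL are illustrative and may differ by distribution):

```shell
# w3m plus the helper that renders inline images in supported terminals
sudo apt install w3m w3m-img

# Browse with inline images turned on
w3m -o auto_image=TRUE https://en.wikipedia.org/
```

Once inside w3m, pressing Enter on a form field hands you off to your configured text editor, as described above.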

Nextcloud Has a New Collaborative Rich Text Editor Called Nextcloud Text, GNOME Announces GNOME Usage, Linus Torvalds Warns of Future Hardware Issues, Red Hat Introduces Red Hat Insights and Offensive Security Launches OffSec Flex

News briefs for June 27, 2019.

Nextcloud announces a new collaborative rich text editor called Nextcloud Text. Nextcloud Text is described as not "a replacement to a full office suite, but rather a distraction-free, focused way of writing rich-text documents alone or together with others." See the Nextcloud blog post for more details.

GNOME announces GNOME Usage, a new app for visualizing system resources. The app was developed by Petr Stetka, a high-school intern in GNOME's Red Hat office in Brno. From the announcement: "Usage is powered by libgtop, the same library used by GNOME System Monitor. One is not a replacement for the other, they complement our user experience by offering two different use cases: Usage is for the everyday user that wants to check which application is eating their resources, and System Monitor is for the expert that knows a bit of operating system internals and wants more technical information being displayed." See the GNOME Wiki for more information on GNOME Usage.

Linus Torvalds this week warned attendees at KubeCon + CloudNativeCon + Open Source Summit China that managing software will become more challenging, due to two hardware issues that are beyond DevOps teams' control. According to DevOps.com, the first issue is "the steady stream of patches being generated as new cybersecurity issues related to the speculative execution model that Intel and other processor vendors rely on to accelerate performance." And the second future hardware challenge is that "as processor vendors approach the limits of Moore's Law, many developers will need to reoptimize their code to continue achieving increased performance. In many cases, that requirement will be a shock to many development teams that have counted on those performance improvements to make up for inefficient coding processes".

Red Hat introduces Red Hat Insights. Red Hat Insights is now included with Red Hat Enterprise Linux subscriptions, and it's described as "a Software-as-a-Service (SaaS) product that provides continuous, in-depth analysis of registered Red Hat-based systems to proactively identify threats to availability, security, performance and stability across physical, virtual and cloud environments. Insights works off of an intelligent rules engine, comparing system configuration information to rules in order to identify issues, often before a problem occurs." See the Red Hat Insights Get Started Page for more information.

Offensive Security launches OffSec Flex, "a new program for enterprises to simplify the cybersecurity training process and allow organizations to invest more in cyber security skills development". Some of its training courses and certifications include the Penetration Testing with Kali Linux (PWK) course and the Offensive Security Certified Professional (OSCP) certification, along with the Advanced Web Attacks and Exploitation (AWAE) course and the Offensive Security Web Expert (OSWE) certification. Go here to learn more.

FreeDOS’s Linux Roots


On June 29, 2019, the FreeDOS Project turns 25 years old. That's a major milestone for any open-source software project! In honor of this anniversary, Jim Hall shares this look at how FreeDOS got started and describes its Linux roots.

The Origins of FreeDOS

I've been involved with computers from an early age. In the late 1970s, my family bought an Apple II computer. It was here that I taught myself how to write programs in AppleSoft BASIC. These were not always simple programs. I quickly advanced from writing trivial "math quiz" programs to more complex "Dungeons and Dragons"-style adventure games, complete with graphics.

In the early 1980s, my parents replaced the Apple with an IBM Personal Computer running MS-DOS. Compared to the Apple, the PC had a much more powerful command line. You could connect simple utilities and commands to do more complex functions. I fell in love with DOS.

Throughout the 1980s and into the early 1990s, I considered myself a DOS "power user". I taught myself how to write programs in C and created new DOS command-line utilities that enhanced my MS-DOS experience. Some of my custom utilities merely reproduced the MS-DOS command line with a few extra features. Other programs added new functionality to my command-line experience.

I discovered Linux in 1993 and instantly recognized it as a Big Deal. Linux had a command line that was much more powerful than MS-DOS, and you could view the source code to study the Linux commands, fix bugs and add new features. I installed Linux on my computer, in a dual-boot configuration with MS-DOS. Since Linux didn't have the applications I needed as a working college student (a word processor to write class papers or a spreadsheet program to do physics lab analysis), I booted into MS-DOS to do much of my classwork and into Linux to do other things. I was moving to Linux, but I still relied on MS-DOS.

In 1994, I read articles in technology magazines saying that Microsoft planned to do away with MS-DOS soon. The next version of Windows would not use DOS. MS-DOS was on the way out. I'd already tried Windows 3, and I wasn't impressed. Windows was not great. And, running Windows would mean replacing the DOS applications that I used every day. I wanted to keep using DOS. I decided that the only way to keep DOS was to write my own. On June 29, 1994, I announced my plans on the Usenet discussion group comp.os.msdos.apps, and things took off from there:

ANNOUNCEMENT OF PD-DOS PROJECT:

A few months ago, I posted articles relating to starting a public domain version of DOS. The general support for this at the time was strong, and many people agreed with the statement, "start writing!" So, I have...

people.kernel.org Has Launched, GitLab 12.0 Released, TheoTown Now on Steam for Linux, Pulseway Introduces New File Transfer Feature, and SUSE Manager 4 and SUSE Manager for Retail 4 Are Now Available

News briefs for June 26, 2019.

Konstantin Ryabitsev yesterday announced the launch of people.kernel.org to replace Google+ for kernel developers. people.kernel.org is "an ActivityPub-enabled federated platform powered by WriteFreely and hosted by very nice and accommodating folks at write.as." Initially the service is being rolled out to those listed in the kernel's MAINTAINERS file. See the about page for more information.

GitLab 12.0 was released yesterday. From the announcement: "GitLab 12.0 marks a key step in our journey to create an inclusive approach to DevSecOps, empowering "everyone to contribute". For the past year, we've been on an amazing journey, collaborating and creating a solution that brings teams together. There have been thousands of community contributions making GitLab more lovable. We believe everyone can contribute, and we've enabled cross-team collaboration, faster delivery of great code, and bringing together Dev, Ops, and Security."

TheoTown, the retro-themed city-building game, is now available on Steam for Linux. GamingOnLinux reports that "On Android at least, the game is very highly rated and I imagine a number of readers have played it there so now you can pick it up again on your Linux PC and continue building the city of your dreams. So far, the Steam user reviews are also giving it a good overall picture." You can find TheoTown on Steam.

Pulseway introduces its new File Transfer feature to the Pulseway Remote Desktop app. With File Transfer, "businesses can now send and receive files from both the source and destination endpoint". Go here for more details on Pulseway's File Transfer capabilities.

SUSE Manager 4 and SUSE Manager for Retail 4 are now available. The press release notes that these open-source infrastructure management solutions "help enterprise DevOps and IT operations teams reduce complexity and regain control of IT assets no matter where they are, increase efficiency while meeting security policies, and optimize operations via automation to reduce costs". Go here to learn more about SUSE Manager and here for more information on SUSE Manager for Retail.

Ten Years of “Linux in the GNU/South”: an Overview of SELF 2019


Highlights of the 2019 Southeast LinuxFest.

The tenth annual SouthEast LinuxFest (SELF) was held on the weekend of June 14–16 at the Sheraton Charlotte Airport Hotel in Charlotte, North Carolina. Still running strong, SELF serves partially as a replacement for the Atlanta Linux Showcase, a former conference for all things Linux in the southeastern United States. Since 2009, the conference has provided a venue for those living in the southeastern United States to come and listen to talks by speakers who all share a passion for using Linux-based operating systems and free and open-source software (FOSS). Although some of my praises of the conference are not exclusive to SELF, the presence of such a conference in the "GNU/South" has the long-term potential to have a significant effect on the Linux and FOSS community.

Despite facing several challenges along the way, SELF's current success is the result of ten years of hard work by the conference organizers, currently led by Jeremy Sands, one of the conference's founders. The materials for SELF 2019, however, make no mention that this year's conference marked a decade of "Linux in the GNU/South". It actually wasn't until the conference was already over that I realized this marked SELF's decennial anniversary. I initially asked myself why this wasn't front and center on event advertisements, but looking back on SELF, setting aside questions such as "how long have we been going?" in favor of "what is going on now?" and "where do we go from here?" speaks to the admirable spirit and focus of the conference and its attendees. This focus on the content of SELF rather than SELF itself shows true passion for the Linux community, rather than for any particular organization or institution that benefits from the community.

Another element worthy of praise is SELF's "all are welcome" atmosphere. Whether attendees were met with feelings of excitement to return to an event they waited 362 days for or a sense of apprehension as they stepped down the L-shaped hall of conference rooms for the first time, it took little time for the contagious, positive energy to take its effect. People of all ages and all skill levels could be seen intermingling and enthusiastically inviting anybody who was willing into their conversations and activities. The conference talks, which took all kinds of approaches to thinking about and using Linux, proved that everybody is welcome to attend and participate at the event.

Canonical to Continue Building Selected 32-Bit i386 Packages for Ubuntu 19.10, Azul Systems Announces Zulu Mission Control v7.0, Elisa v. 0.4.1 Now Available, Firefox Adds Fission to the Nightly Build and Tails Emergency Release

News briefs for June 25, 2019.

After much feedback from the community, Canonical yesterday announced it will continue to build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS. The statement notes that Canonical "will also work with the WINE, Ubuntu Studio and gaming communities to use container technology to address the ultimate end of life of 32-bit libraries; it should stay possible to run old applications on newer versions of Ubuntu. Snaps and LXD enable us both to have complete 32-bit environments, and bundled libraries, to solve these issues in the long term."

Azul Systems announces Zulu Mission Control v7.0. From the press release: "Based on the OpenJDK Mission Control project, Zulu Mission Control is a powerful Java performance management and application profiling tool that works with Azul's Zing and Zulu JDKs/JVMs and supports both Java SE 8 and 11. Zulu Mission Control is free to use, and may be downloaded from www.azul.com/products/zulu-mission-control."

Version 0.4.1 of KDE's Elisa music player is now available. Some fixes with this release include improved accessibility, improved focus handling and an improved build system. You can get the source code tarball here.

Firefox recently added Fission to its latest nightly build. Softpedia News quotes developer Nika Layzell on the new site isolation feature: "We aim to build a browser which isn't just secure against known security vulnerabilities, but also has layers of built-in defense against potential future vulnerabilities. To accomplish this, we need to revamp the architecture of Firefox and support full Site Isolation. We call this next step in the evolution of Firefox's process model 'Project Fission'. While Electrolysis split our browser into Content and Chrome, with Fission, we will "split the atom", splitting cross-site iframes into different processes than their parent frame."

Tails announced an emergency release this week, 3.14.2, to address a critical security vulnerability in the Tor browser. Be sure to update the Tor Browser to version 8.5.3 to fix the sandbox escape vulnerability. Go here to download.

Deprecating a.out Binaries

Remember a.out binaries? They were the executable file format used on Linux until around 1995, when ELF took over. ELF is better: it allows shared libraries to be loaded anywhere in memory, while a.out requires shared library locations to be registered. That's fine at small scales, but it becomes more and more of a headache as the number of shared libraries grows. Yet a.out is still supported in the Linux source tree, 25 years after ELF became the standard default format.
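You can check which format a given binary uses with the file command, which inspects the magic bytes at the start of the file; on any modern distribution, system binaries report ELF:

```shell
# `file` inspects a binary's magic bytes and reports its executable format
file /bin/ls
# On a modern system, the output starts with something like:
#   /bin/ls: ELF 64-bit LSB ...
```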

Recently, Borislav Petkov recommended deprecating it in the source tree, with the idea of removing it if it turned out there were no remaining users. He posted a patch to implement the deprecation. Alan Cox also remarked that "in the unlikely event that someone actually has an a.out binary they can't live with, they can also just write an a.out loader as an ELF program entirely in userspace."

Richard Weinberger had no problem deprecating a.out and gave his official approval of Borislav's patch.

In fact, there's a reason the issue happens to be coming up now, 25 years after the fact. Linus Torvalds pointed out:

I'd prefer to try to deprecate a.out core dumping first. ... That's the part that is actually broken, no?

In fact, I'd be happy to deprecate a.out entirely, but if somebody _does_ complain, I'd like to be able to bring it back without the core dumping.

Because I think the likelihood that anybody cares about a.out core dumps is basically zero. While the likelihood that we have some odd old binary that is still a.out is slightly above zero.

So I'd be much happier with this if it was a two-stage thing where we just delete a.out core dumping entirely first, and then deprecate even running a.out binaries separately.

Because I think all the known *bugs* we had were with the core dumping code, weren't they?

Removing it looks trivial. Untested patch attached.

Then I'd be much happier with your "let's deprecate a.out entirely" as a second patch, because I think it's an unrelated issue and much more likely to have somebody pipe up and say "hey, I have this sequence that generates executables dynamically, and I use a.out because it's much simpler than ELF, and now it's broken". Or something.

Jann Horn looked over Linus' patch and identified additional parts of a.out support that would no longer be used by anything once core dumping was removed. He suggested those also could be removed in the same git commit, without risking anyone complaining.

Raspberry Pi 4 on Sale Now, SUSE Linux Enterprise 15 Service Pack 1 Released, Instaclustr Service Broker Now Available, Steam for Linux to Drop Support for Ubuntu 19.10 and Beyond, and Linux 5.2-rc6 Is Out

News briefs for June 24, 2019.

Raspberry Pi 4 is on sale now, starting at $35. The Raspberry Pi blog post notes that "this is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while retaining the interfacing capabilities and hackability of the classic Raspberry Pi line". This version also comes with different memory options (1GB for $35, 2GB for $45 or 4GB for $55). You can order one from approved resellers here.

SUSE releases SUSE Linux Enterprise 15 Service Pack 1, one year after launching what it calls the world's first multimodal OS. From the SUSE blog: "SUSE Linux Enterprise 15 SP1 advances the multimodal OS model by enhancing the core tenets of common code base, modularity and community development while hardening business-critical attributes such as data security, reduced downtime and optimized workloads." Some highlights include faster and easier transition from community Linux to enterprise Linux, enhanced support for edge to HPC workloads and improved hardware-based security. Go here for release notes and download links.

Instaclustr announces the availability of its Instaclustr Service Broker. This release "enables customers to easily integrate their containerized applications, or cloud native applications, with open source data-layer technologies provided by the Instaclustr Managed Platform—including Apache Cassandra and Apache Kafka. Doing so enables organizations' cloud native applications to leverage key capabilities of the Instaclustr platform such as automated service discovery, provisioning, management, and deprovisioning of data-layer clusters." Go here for more details.

A Valve developer announced that Steam for Linux will drop support for the upcoming Ubuntu 19.10 release and future Ubuntu releases. Softpedia News reports that "Valve's harsh announcement comes just a few days after Canonical's announcement that they will drop support for 32-bit (i386) architectures in Ubuntu 19.10 (Eoan Ermine). Pierre-Loup Griffais said on Twitter that Steam for Linux won't be officially supported on Ubuntu 19.10, nor any future releases. The Steam developer also added that Valve will focus their efforts on supporting other Linux-based operating systems for Steam for Linux. They will be looking for a GNU/Linux distribution that still offers support for 32-bit apps, and they will try to minimize the breakage for Ubuntu users."

Linux 5.2-rc6 was released on Saturday. Linus Torvalds writes, "rc6 is the biggest rc in number of commits we've had so far for this 5.2 cycle (obviously ignoring the merge window itself and rc1). And it's not just because of trivial patches (although admittedly we have those too), but we obviously had the TCP SACK/fragmentation/mss fixes in there, and they in turn required some fixes too." He also noted that he's "still reasonably optimistic that we're on track for a calm final part of the release, and I don't think there is anything particularly bad on the horizon."

Python’s Mypy–Advanced Usage


Mypy can check more than simple Python types.

In my last article, I introduced Mypy, a package that enforces type checking in Python programs. Python itself is, and always will remain, a dynamically typed language. However, Python 3 supports "annotations", a feature that allows you to attach an object to variables, function parameters and function return values. These annotations are ignored by Python itself, but they can be used by external tools.

Mypy is one such tool, and it's an increasingly popular one. The idea is that you run Mypy on your code before running it. Mypy looks at your code and makes sure that your annotations correspond with actual usage. In that sense, it's far stricter than Python itself, but that's the whole point.
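For example, here's a tiny annotated function (the name is just illustrative). Python itself runs both calls happily; only Mypy would complain about the second one, because the argument contradicts the annotation:

```python
def double(n: int) -> int:
    """Annotated to take and return an int."""
    return n * 2

print(double(2))      # 4 -- fine for both Python and Mypy
print(double("ab"))   # "abab" -- runs fine at runtime, but Mypy flags
                      # the str argument as incompatible with "int"
```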

In my last article, I covered some basic uses for Mypy. Here, I want to expand upon those basics and show how Mypy really digs deeply into type definitions, allowing you to describe your code in a way that lets you be more confident of its stability.

Type Inference

Consider the following code:


x: int = 5
x = 'abc'
print(x)

This first defines the variable x, giving it a type annotation of int, and assigns it the integer 5. The next line assigns x the string abc, and the third line prints the value of x.

The Python language itself has no problems with the above code. But if you run mypy against it, you'll get an error message:


mytest.py:5: error: Incompatible types in assignment
   (expression has type "str", variable has type "int")

As the message says, the code declared the variable to have type int, but then assigned a string to it. Mypy can figure this out because, despite what many people believe, Python is a strongly typed language. That is, every object has one clearly defined type. Mypy notices this and then warns that the code is assigning values that are contrary to what the declarations said.

In the above code, you can see that I declared x to be of type int at definition time, but then assigned a string to it, and got an error. What if I don't add the annotation at all? That is, what if I run the following code via Mypy:
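Without the annotation, the snippet is simply:

```python
x = 5         # no annotation: Mypy infers that x is an int from this line
x = 'abc'     # Mypy still reports an incompatible assignment here
print(x)      # Python itself happily prints abc
```

Run through Mypy, this produces the same error as before: Mypy infers x's type (int) from the first assignment, which is exactly the type inference this section is about.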

GNOME 3.33.3 Released, Kernel Security Updates for RHEL and CentOS, Wine Developers Concerned with Ubuntu 19.10 Dropping 32-Bit Support, Bzip2 to Get an Update and OpenMandriva Lx 4.0 Now Available

News briefs for June 21, 2019.

GNOME 3.33.3 was released yesterday. Note that this release is development code and is intended for testing purposes. Go here to see the list of modules and changes, get the BuildStream project snapshot here or get the source packages here.

Red Hat Enterprise Linux and CentOS Linux have received new kernel security updates to address the recent TCP vulnerabilities. Softpedia News reports that "The new Linux kernel security updates patch an integer overflow flaw (CVE-2019-11477) discovered by Jonathan Looney in the way the Linux kernel's networking subsystem processed TCP Selective Acknowledgment (SACK) segments, which could allow a remote attacker to cause a so-called SACK Panic attack (denial of service) by sending malicious sequences of SACK segments on a TCP connection that has a small TCP MSS value." Update immediately.
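Until the updated kernel is installed, one widely published interim mitigation for SACK Panic was to disable SACK processing; you can check the current setting via /proc (a sketch, assuming a standard Linux /proc layout):

```shell
# 1 means SACK processing is enabled (the default)
cat /proc/sys/net/ipv4/tcp_sack

# Published interim mitigation for SACK Panic (requires root):
#   sysctl -w net.ipv4.tcp_sack=0
```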

Wine developers are concerned with Ubuntu's decision to drop 32-bit support with Ubuntu 19.10. From Linux Uprising: "The Wine developers are concerned with this news because many 64-bit Windows applications still use a 32-bit installer, or some 32-bit components." See the wine-devel mailing list for the discussion.

Bzip2 is about to get its first update since September 2010. According to Phoronix, the new version will include new build systems and security fixes, among other things. See Federico's blog post for details.

OpenMandriva Lx 4.0 was released recently. One major change in OM Lx 4 is the switch from rpm5/URPMI to rpm.org/DNF for package management, which means command-line users will need to learn the new DNF commands. See the OpenMandriva wiki for all the details and go here to install.

Understanding Public Key Infrastructure and X.509 Certificates


An introduction to PKI, TLS and X.509, from the ground up.

Public Key Infrastructure (PKI) provides a framework of encryption and data communications standards used to secure communications over public networks. At the heart of PKI is a trust built among clients, servers and certificate authorities (CAs). This trust is established and propagated through the generation, exchange and verification of certificates.

This article focuses on understanding the certificates used to establish trust between clients and servers. These certificates are the most visible part of the PKI (especially when things break!), so understanding them will help to make sense of—and correct—many common errors.

As a brief introduction, imagine you want to connect to your bank to schedule a bill payment, but you want to ensure that your communication is secure. "Secure" in this context means not only that the content remains confidential, but also that the server with which you're communicating actually belongs to your bank.

Without protecting your information in transit, someone located between you and your bank could observe the credentials you use to log in to the server, your account information, or perhaps the parties to which your payments are being sent. Without being able to confirm the identity of the server, you might be surprised to learn that you are talking to an impostor (who now has access to your account information).

Transport layer security (TLS) is a suite of protocols used to negotiate a secured connection using PKI. TLS builds on the SSL standards of the late 1990s, and using it to secure client-to-server connections on the internet has become ubiquitous. Unfortunately, it remains one of the least understood technologies, with errors (often resulting from an incorrectly configured website) becoming a regular part of daily life. Because those errors are inconvenient, users regularly click through them without a second thought.

Understanding the X.509 certificate, which is fully defined in RFC 5280, is key to making sense of those errors. Unfortunately, these certificates have a well-deserved reputation for being opaque and difficult to manage, one only reinforced by the multitude of formats used to encode them.

An X.509 certificate is a structured, binary record consisting of several key and value pairs. Keys represent field names, and values range from simple types (numbers, strings) to more complex structures (lists). The key/value pairs are encoded into the structured binary record using a standard known as ASN.1 (Abstract Syntax Notation One), a platform-agnostic encoding format.
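To see those ASN.1-encoded fields decoded into readable text, you can generate a throwaway self-signed certificate and dump it with OpenSSL (the file paths and CN here are arbitrary examples):

```shell
# Create a throwaway key and self-signed certificate (no passphrase)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null

# Decode the binary ASN.1 record into human-readable key/value fields
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer -dates
```

Adding -text to the second command prints every field of the record, which is a good way to get familiar with the certificate structure before debugging a real one.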

Episode 21: From Mac to Linux


Katherine Druckman and Doc Searls talk to Linux Journal Editor at Large, Petros Koutoupis, about moving from Mac to Linux.


Kubernetes 1.15 Released, Offensive Security Reveals the 2019-2020 Roadmap for Kali Linux, Canonical Releases a New Kernel Live Patch for Ubuntu 18.04 and 16.04 LTS, Vivaldi 2.6 Now Available, and Mathieu Parent Announces GitLabracadabra

News briefs for June 20, 2019.

Kubernetes 1.15 was released yesterday. This is the second release of the year and contains 25 enhancements. The two main themes of the release are continuous improvement and extensibility. See the Kubernetes blog post for all the details.

Offensive Security yesterday revealed much of the 2019–2020 roadmap for the open-source Kali Linux project. The press release claims that "The strategy behind much of the roadmap is opening up Kali Linux even more to the community for contributions and helping speed the process of updates and improvements." See the blog post for more details on upcoming changes and new features for Kali Linux.

Canonical released a new kernel live patch for Ubuntu 18.04 LTS and 16.04 LTS to address the recently discovered TCP DoS vulnerabilities. From Softpedia News: "Canonical urges all users of the Ubuntu 18.04 LTS (Bionic Beaver) and Ubuntu 16.04 LTS (Xenial Xerus) operating system series who use the Linux kernel live patch to update their installations as soon as possible to the new kernel versions. These are rebootless kernel updates, so you won't need to restart your computer to apply them."

Vivaldi 2.6 was released today. This new version blocks abusive ads, improves security, and adds new options for quicker navigation and customization. You can download Vivaldi from here.

Mathieu Parent today announced GitLabracadabra 0.2.1. He started working on the tool, written in Python, to create and update projects in GitLab. He notes that "This tool is still very young and documentation is sparse, but following the 'release early, release often' motto I think it is ready for general usage."

Getting Started with Rust: Working with Files and Doing File I/O

Rust logo

How to develop command-line utilities in Rust.

This article demonstrates how to perform basic file and file I/O operations in Rust, and also introduces Rust's ownership concept and the Cargo tool. If you are seeing Rust code for the first time, this article should provide a pretty good idea of how Rust deals with files and file I/O, and if you've used Rust before, you still will appreciate the code examples in this article.

Ownership

It would be unfair to start talking about Rust without first discussing ownership. Ownership is Rust's way of giving the developer control over the lifetime of a variable while letting the language guarantee memory safety. Ownership means that assigning a variable to another variable also passes ownership of the value to the new variable.

Another Rust feature related to ownership is borrowing. Borrowing means taking a reference to a variable for a while and then giving control back when you're done. Although borrowing allows you to have multiple references to a variable, only one of those references can be mutable at any given time.

Instead of continuing to talk theoretically about ownership and borrowing, let's look at a code example called ownership.rs:


fn main() {
    // Part 1
    let integer = 321;
    let mut _my_integer = integer;
    println!("integer is {}", integer);
    println!("_my_integer is {}", _my_integer);
    _my_integer = 124;
    println!("_my_integer is {}", _my_integer);

    // Part 2
    let a_vector = vec![1, 2, 3, 4, 5];
    let ref _a_correct_vector = a_vector;
    println!("_a_correct_vector is {:?}", _a_correct_vector);

    // Part 3
    let mut a_var = 3.14;
    {
        let b_var = &mut a_var;
        *b_var = 3.14159;
    }
    println!("a_var is now {}", a_var);
}

So, what's happening here? In the first part, you define an integer variable (integer) and create a mutable variable based on integer. Rust performs a full copy for primitive data types because they are cheap to copy, so in this case, the integer and _my_integer variables are independent of each other.

However, for heap-allocated types, such as a vector, assigning a variable to another variable moves ownership, so you aren't allowed to use the original variable afterward. That's why Part 2 uses a reference for the _a_correct_vector variable: Rust won't make a copy of a_vector.
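To make the move semantics concrete, here's a small sketch that is not part of the original listing (the take_ownership helper is ours): a Vec moves when it's assigned or passed to a function, and clone() is the explicit way to get two independent copies.

```rust
// Hypothetical helper: takes ownership of a vector and returns it,
// illustrating a move through a function call.
fn take_ownership(v: Vec<i32>) -> Vec<i32> {
    v // `v` now owns the heap data; the caller's binding is invalid
}

fn main() {
    let a_vector = vec![1, 2, 3];
    let b_vector = take_ownership(a_vector); // a_vector is moved here
    // println!("{:?}", a_vector); // compile error: borrow of moved value
    let c_vector = b_vector.clone(); // explicit deep copy: both stay usable
    println!("{:?} {:?}", b_vector, c_vector);
}
```

Uncommenting the marked line makes the compiler reject the program, which is exactly the ownership guarantee at work: there is never more than one owner of the heap allocation.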

Docker Is Porting Its Container Platform to Microsoft Windows Subsystem for Linux 2, Ubuntu 19.10 Will Drop 32-Bit Builds, Children of Morta Still Coming to Linux and Vulnerabilities Discovered in the Linux TCP System

News briefs for June 19, 2019.

The development team over at Docker is porting their container platform to Microsoft's Windows Subsystem for Linux 2 (WSL 2). It looks as if, pretty soon, Docker containers will be managed across both Linux and Windows. See ZDNet for details.

Canonical and the community behind Ubuntu announced that Ubuntu 19.10 will officially drop 32-bit (i386) builds. There has been talk of this for a while, but now it's official. See OMG! Ubuntu! for more information.

Dead Mage, the studio behind Children of Morta, posted an update stating that even after all the delays, they still will be bringing the game to Linux, GamingOnLinux reports. The project originally was funded via Kickstarter in 2015.

Security researchers over at Netflix uncovered some troubling security vulnerabilities in the Linux (and FreeBSD) TCP subsystem, the worst of which is being called SACK Panic. It can permit remote attackers to induce a kernel panic in the Linux operating system. Patches are available for affected Linux distributions. See Beta News for details.

Study the Elements with KDE’s Kalzium

KDE's Kalzium

I've written about a number of chemistry packages in the past and all of the computational chemistry that you can do in a Linux environment. But, what is fundamental to chemistry? Why, the elements, of course. So in this article, I focus on how you can learn more about the elements that make up everything around you with Kalzium. KDE's Kalzium is kind of like a periodic table on steroids. Not only does it have information on each of the elements, it also has extra functionality to do other types of calculations.

Kalzium should be available within the package repositories for most distributions. In Debian-based distributions, you can install it with the command:


sudo apt-get install kalzium

When you start it, you get a simplified view of the classical periodic table.

Figure 1. The default view is of the classical ordering of the elements.

You can change this overall view either by clicking the drop-down menu in the top-left side of the window or via the View→Tables menu item. You can select from five different display formats. Clicking one of the elements pops open a new window with detailed information.

Figure 2. Kalzium provides a large number of details for each element.

The default detail pane is an overview of the various physical characteristics of the given element. This includes items like the melting point, electron affinity or atomic mass. Five other information panes also are available. The atom model provides a graphical representation of the electron orbitals around the nucleus of the given atom. The isotopes pane shows a table of values for each of the known isotopes for the selected element, ordered by neutron number. This includes things like the atomic mass or the half-life for radioactive isotopes. The miscellaneous detail pane includes some of the extra facts and trivia that might be of interest. The spectrum detail pane shows the emission and absorption spectra, both as a graphical display and a table of values. The last detail pane provides a list of external links where you can learn more about the selected element. This includes links to Wikipedia, the Jefferson Lab and the Webelements sites.

Figure 3. For those elements that are stable enough, you even can see the emission and absorption spectra.

Slimbook Launches New “Apollo” Linux PC, First Beta for Service Pack 5 of SUSE Linux Enterprise 12 Is Out, NVIDIA Binary Drivers for Ubuntu Growing Stale, DragonFly BSD v 5.6 Released and Qt v. 5.12.4 Now Available

News briefs for June 18, 2019.

Slimbook, the Spanish Linux computer company, just unveiled a brand-new all-in-one Linux PC called the "Apollo". It has a 23.6-inch IPS LED display with a 1920x1080 resolution and a choice between Intel i5-8500 and i7-8700 processors. It comes with up to 32GB of RAM and integrated Intel UHD 630 4K graphics. Pricing starts at $799.

The first beta for service pack 5 of SUSE Linux Enterprise 12 is out and available. It contains updated drivers, a new version of the OpenJDK, support for Intel Optane memory and more.

NVIDIA binary drivers for Ubuntu have grown a bit stale, which is pushing developers to update the drivers for Ubuntu 19.10.

DragonFly BSD version 5.6 is officially released with improvements to virtual-memory management, updates and bug fixes to the DRM code and especially to the HAMMER2 filesystem, and much more.

Qt version 5.12.4 is available with support for OpenSSL version 1.1.1 and about 250 bug fixes.

Android Low-Memory Killer–In or Out?

One of the jobs of the Linux kernel—and all operating system kernels—is to manage the resources available to the system. When those resources get used up, what should it do? If the resource is RAM, there's not much choice. It's not feasible to take over the behavior of any piece of user software, understand what that software does, and make it more memory-efficient. Instead, the kernel has very little choice but to try to identify the software that is most responsible for using up the system's RAM and kill that process.

The official kernel does this with its OOM (out-of-memory) killer. But, Linux descendants like Android want a little more—they want to perform a similar form of garbage collection, but while the system is still fully responsive. They want a low-memory killer that doesn't wait until the last possible moment to terminate an app. The unspoken assumption is that phone apps are not so likely to run crucial systems like heart-lung machines or nuclear fusion reactors, so one running process (more or less) doesn't really matter on an Android machine.

A low-memory killer did exist in the Linux source tree until recently. It was removed, partly because of the overlap with the existing OOM code, and partly because the same functionality could be provided by a userspace process. And, one element of Linux kernel development is that if something can be done just as well in userspace, it should be done there.

Sultan Alsawaf recently threw open his window, thrust his head out, and shouted, "I'm mad as hell, and I'm not gonna take this anymore!" And, he re-implemented a low-memory killer for the Android kernel. He felt the userspace version was terrible and needed to be ditched. Among other things, he said, it killed too many processes and was too slow. He felt that the technical justification of migrating to the userspace dæmon had not been made clear, and an in-kernel solution was really the way to go.

In Sultan's implementation, the algorithm was simple—if a memory request failed, then the process was killed—no fuss, no muss and no rough stuff.

There was a unified wall of opposition to this patch. So much so that it became clear that Sultan's main purpose was not to submit the patch successfully, but to light a fire under the asses of the people maintaining the userspace version, in hopes that they might implement some of the improvements he wanted.

Michal Hocko articulated his opposition to Sultan's patch very clearly—the Linux kernel would not have two separate OOM killers sitting side by side. The proper OOM killer would be implemented as well as could be, and any low-memory killers and other memory finaglers would have to exist in userspace for particular projects like Android.