FSF’s LibrePlanet 2021 Free Software Conference Is This Weekend, Online Only

LibrePlanet 2021 Free Software Conference

On Saturday and Sunday, March 20th and 21st, 2021, free software supporters from all over the world will log in to share knowledge and experiences, and to socialize with others in the free software community. This year’s theme is “Empowering Users,” and the keynote speakers will be Julia Reda, Nathan Freitas, and Nadya Peek. Free Software Foundation (FSF) associate members and students attend gratis at the Supporter level.

You can see the schedule and learn more about the conference at https://libreplanet.org/2021/, and participants are encouraged to register in advance at https://u.fsf.org/lp21-sp

The conference will also include workshops, community-submitted five-minute Lightning Talks, Birds of a Feather (BoF) sessions, and an interactive “exhibitor hall” and “hallway” for socializing.

Review: weLees Visual LVM, a new style of LVM management, has been released

weLees Visual LVM Manager

Maintenance of the storage system is a daily job for system administrators. Linux provides users with a wealth of storage capabilities and powerful built-in maintenance tools. However, these tools are hardly friendly to system administrators, and considerable effort is generally required to master them.

As a Linux built-in storage model, LVM provides users with plenty of flexible management modes to fit various needs. For users who can fully utilize its functions, LVM can meet almost all storage needs; the premise, however, is a thorough understanding of the LVM model and of dozens of commands and their accompanying parameters.
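
For a sense of scale, even a routine task such as carving a new logical volume out of a fresh disk takes several commands. A minimal sketch (the device and volume names here are illustrative):

pvcreate /dev/sdb                     # initialize the disk as a physical volume
vgcreate vg_data /dev/sdb             # create a volume group on it
lvcreate -n lv_data -L 100G vg_data   # carve out a 100 GB logical volume
mkfs.ext4 /dev/vg_data/lv_data        # create a filesystem on the new volume

Resizing or migrating volumes involves still more commands (lvextend, pvmove, and friends), each with its own parameters.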

A graphical interface dramatically simplifies both the learning curve and day-to-day LVM operation, much as partition tools do on Windows/Linux platforms. Although command scripts are suitable for daily, automated tasks, scripts cannot cover every LVM function; many tasks, for instance, still require manual calculation and processing.

Significant effort has been spent on this problem. Several graphical LVM management tools are now available: some are built into Linux distributions, while others are developed by third parties. But a critical problem remains: the needs of remote machines and headless servers are completely ignored.

This is now solved by Visual LVM Remote. The front end of this tool is built on the HTTP protocol, so users can perform management operations from any smart device that can connect to the storage server.

Visual LVM is developed by weLees Corporation and supports all Linux distributions. In addition to working with remote/headless servers, it supports more advanced LVM features than various off-the-shelf graphical LVM management tools.

Dependencies of Visual LVM Remote

Visual LVM Remote can work on any Linux distribution that includes the two components below:

  • LVM2

  • libstdc++.so

UI of Visual LVM Remote

The UI is concise: partitions, physical volumes, and logical volumes are displayed by disk layout, so disk and volume group information can be taken in at a glance. In addition, detailed information about the object under the mouse is displayed in the information bar below.

Nvidia Linux drivers causing random hard crashes and now a major security risk still not fixed after 5+ months

Image
Nvidia Linux Drivers

The recent fiasco in which Nvidia tried to block Hardware Unboxed from future GPU review samples over the content of their review is one example of how the company chooses to play this game. This hostility is felt not only by reviewers, but also by developers and especially Linux users.

The infamous Torvalds videos still traverse the web today as Nvidia conjures up another evil plan to suck up more of your money and market share. This is not just a one-off case; oh, how much I wish it was. I just want my computer to work.

Anyone who has used Sway-WM with an Nvidia GPU will surely remember the --my-next-gpu-wont-be-nvidia option.

These are a few examples of many.

The Nvidia Linux drivers have never been good, but whatever has been happening at Nvidia for the past decade has to stop today. The topic in question is this bug: [https://forums.developer.nvidia.com/t/bug-report-455-23-04-kernel-panic-due-to-null-pointer-dereference]

This bug causes hard, irrecoverable crashes on drivers 440 and later. The issue is still happening 5+ months later with no end in sight. At first users could work around it by using an older DKMS driver along with an LTS kernel. Today this is no longer possible: many Linux distributions are dropping the old kernels, and the DKMS driver cannot build. Users are now FORCED into this “choice”:

{Use an older driver and risk security implications} or {“use” the new drivers that cause random irrecoverable crashes.}

This issue is only going to become more prevalent, as the kernel is a core dependency by definition. Here is another example of an unsafe older kernel causing issues for users: https://archlinux.org/news/moving-to-zstandard-images-by-default-on-mkinitcpio/

If you use Linux or care about the implications of a GPU monopoly, consider AMD. Nvidia is already rearing its ugly head and AMD is actually putting up a fight this year.

Parallel shells with xargs: Utilize all your CPU cores on UNIX and Windows

Parallel Shells With xargs Unix

Introduction

One particular frustration with the UNIX shell is the inability to easily schedule multiple, concurrent tasks that fully utilize the CPU cores present on modern systems. The example of focus in this article is file compression, but the problem arises with many computationally intensive tasks, such as image/audio/media processing, password cracking and hash analysis, database Extract, Transform, and Load, and backup activities. It is understandably frustrating to wait for gzip * running on a single CPU core while most of a machine's processing power lies idle.

This can be understood as a weakness of the first decade of Research UNIX, which was not developed on machines with SMP. The Bourne shell did not emerge from the 7th edition with any native syntax or controls for cohesively managing the resource consumption of background processes.

Utilities have haphazardly evolved to perform some of these functions. The GNU version of xargs is able to exercise some primitive control in allocating background processes, which is discussed at some length in its documentation. While the GNU extensions to xargs have proliferated to many other implementations (notably BusyBox, including the release for Microsoft Windows, example below), they are not POSIX.2-compliant and are unlikely to be found on commercial UNIX.
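
As a minimal illustration (assuming GNU or BusyBox xargs; the -P and -0 options are extensions, and nproc comes from GNU coreutils):

# Compress every .log file in the current directory, one gzip process
# per file, running as many processes in parallel as there are CPU cores.
printf '%s\0' *.log | xargs -0 -n 1 -P "$(nproc)" gzip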

Historic users of xargs will remember it as a useful tool for directories that contained too many files for echo * or other wildcards to be used; in this situation xargs is called to repeatedly batch groups of files with a single command. As xargs has evolved beyond POSIX, it has assumed a new relevance which is useful to explore.


Why is POSIX.2 this bad?

A clear understanding of the lack of cohesive job scheduling in UNIX requires some history of the evolution of these utilities.

Bypassing Deep Packet Inspection: Tunneling Traffic Over TLS VPN

Bypassing Deep Packet Inspection

In some countries, network operators employ deep packet inspection techniques to block certain types of traffic. For example, Virtual Private Network (VPN) traffic can be analyzed and blocked to prevent users from sending encrypted packets over such networks.

Observing that HTTPS works all over the world (it is configured for an extremely large number of web servers) and cannot be easily analyzed (the payload is usually encrypted), we argue that VPN tunneling can be organized in the same manner: by masquerading VPN traffic as TLS, or its older version SSL, we can build a reliable and secure network. Packets sent over such tunnels can cross multiple domains with various (strict and not-so-strict) security policies. Although SSH could potentially be used to build such a network, we have evidence that in certain countries connections made over such tunnels are analyzed statistically: if the network utilization of such tunnels is high, bursts exist, or connections are long-lived, then the underlying TCP connections are reset by network operators.

Thus, here we make an experimental effort in this direction: first, we describe different VPN solutions that exist on the Internet; second, we describe our experimental effort with Python-based software and Linux, which allows users to create VPN tunnels using the TLS protocol and to tunnel small office/home office (SOHO) traffic through them.
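
As a minimal sketch of the masquerading idea (this is not the Python software described later; socat, the certificate, and the host names are stand-ins used only for illustration):

# Server side: accept TLS connections on 443 and relay the decrypted
# stream to a local VPN endpoint listening on port 1194.
socat OPENSSL-LISTEN:443,reuseaddr,fork,cert=server.pem,verify=0 TCP:127.0.0.1:1194

# Client side: expose a local port whose traffic is carried to the server
# inside an ordinary-looking TLS session on port 443.
socat TCP-LISTEN:1194,reuseaddr,fork OPENSSL:vpn.example.com:443,verify=0

To a deep packet inspector, such a tunnel is hard to distinguish from ordinary HTTPS traffic.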

I. INTRODUCTION

Virtual private networks (VPNs) are crucial in the modern era. By encapsulating a client’s traffic and sending it inside protected tunnels, users can obtain network services that would otherwise be blocked by a network operator. VPN solutions are also useful for accessing a company’s intranet. For example, corporate employees can access the internal network securely by establishing a VPN connection and directing all traffic through the tunnel towards the corporate network, obtaining services that would be impossible to get from the outside world.

II. BACKGROUND

There are various solutions that can be used to build VPNs. One example is the Host Identity Protocol (HIP) [7]. HIP is a layer-3.5 solution (it in fact sits between the transport and network layers) and was originally designed to split the dual role of IP addresses as identifier and locator. For example, a company called Tempered Networks uses the HIP protocol to build secure networks (see [4] for an example).

How to Save Time Running Automated Tests with Parallel CI Machines

Knapsack Pro Ruby JavaScript Tests

Automated tests are part of many programming projects, helping ensure the software works correctly. The bigger the project, the larger the test suite can be. This can result in automated tests taking a lot of time to run. In this article you will learn how to run automated tests faster with parallel Continuous Integration (CI) machines and what problems can be encountered along the way. The article covers common parallel testing problems, based on Ruby and JavaScript tests.

Slow automated tests

Automated tests can be considered slow when programmers stop running the whole test suite on their local machines because it is too time consuming. Most of the time you use CI servers such as Jenkins, CircleCI, or GitHub Actions to run your tests on an external machine instead of your own. When you have a test suite that runs for an hour, it is not efficient to run it on your computer; browser end-to-end tests for your web project in particular can take a really long time to execute. But running tests on a CI server for an hour is not efficient either. You as a developer need a fast feedback loop to know whether your software works fine, and automated tests should help you with that.

Split tests between many CI machines to save time

A way to save time is to make the CI build as fast as possible. When you have tests taking, say, 1 hour to run, you can leverage your CI server config and set up parallel jobs (parallel CI machines/nodes). Each of the parallel jobs can run a chunk of the test suite.

You need to divide your tests between the parallel CI machines. When you have a 60-minute test suite, you can run 20 parallel jobs where each job runs a small set of tests, and this should save you time. In an optimal scenario you would run tests for 3 minutes per job.

How do you make sure each job runs for 3 minutes? As a first step you can apply a simple solution: sort all of your test files alphabetically and divide them by the number of parallel jobs. However, each of your test files can have a different execution time, depending on how many test cases there are per file and how complex each test case is. You can end up with test files divided in a suboptimal way, and this is problematic. The image below illustrates a suboptimal split of tests between parallel CI jobs, where one job runs too many tests and ends up being a bottleneck.
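
As a hedged sketch of that naive alphabetical split (CI_NODE_TOTAL and CI_NODE_INDEX are assumed to be provided by your CI server; the exact variable names vary between providers):

# Give every CI_NODE_TOTAL-th test file, in sorted order, to this node.
find spec -name '*_spec.rb' | sort \
  | awk -v n="$CI_NODE_TOTAL" -v i="$CI_NODE_INDEX" 'NR % n == i' \
  | xargs bundle exec rspec

Because the split ignores each file's runtime, some nodes will inevitably finish much later than others.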

The KISS Web Development Framework

KISS Framework

Perhaps the most popular platform for applications is the web. There are many reasons for this including portability across platforms, no need to update the program, data backup, sharing data with others, and many more. This popularity has driven many of us to the platform.

Unfortunately, the platform is a bit complex. Rather than developing in a particular environment, with web applications it is necessary to create two halves of a program utilizing vastly different technologies. On top of that, there are many additional challenges such as the communications and security between the two halves.

A typical web application would include all of the following building blocks:

  1. Front-end layout (HTML/CSS)
  2. Front-end functionality (JavaScript)
  3. Back-end server code (Java, C#, etc.)
  4. Communications (REST, etc.)
  5. Authentication
  6. Data persistence (SQL, etc.)

All of these don't even touch on the other pieces that are not part of your application proper, such as the web server (Apache, Tomcat, etc.), the database server (PostgreSQL, MySQL, MongoDB, etc.), the OS (Linux, etc.), the domain name, DNS, yadda, yadda, yadda.

The tremendous complexity notwithstanding, most application developers mainly have to concern themselves with the six items listed above.

Although there are many fine solutions available for these main concerns, in general, these solutions are siloed, complex, and incongruent. Let me explain.

Many solutions are siloed because they are single-solution packages that are complete within themselves and disconnected from other pieces of the system.

Some solutions are so complex that they can take years to learn well. Developers can struggle more with the framework they are using than the language or application they are trying to write. This is a major problem.

Lastly, by incongruent I mean that the siloed tools do not naturally fit well together. A bunch of glue code has to be written, learned, and supported to fit the various pieces together. Each tool has a different feel, a different approach, a different way of thinking.

Frustrated with all of these problems, I wrote the KISS Web Development Framework. At first it was just a collection of solutions I had developed, but it later evolved into a single, comprehensive web development framework. KISS, an open-source project, was specifically designed to solve these exact challenges.

KISS is a single, comprehensive, fully integrated web development framework that includes integrated solutions for:

Front-end

  1. Custom HTML controls
  2. Easy communications with the back-end with built-in authentication
  3. Browser cache control (so the user never has to clear their cache)
  4. A variety of general purpose utilities

Back-end

Linux in Healthcare – Cutting Costs & Adding Safety

Linux in Healthcare

The healthcare domain deals directly with our health and lives. Healthcare is the prevention, diagnosis, and treatment of disease, injury, illness, or other physical and mental impairments in humans, and the sector deals with emergency situations very frequently. With immense scope for improvement, the thriving healthcare domain spans everything from telemedicine to insurance, and from inpatient hospitals to outpatient clinics. With practitioners in multiple areas such as medicine, chiropractic, nursing, dentistry, pharmacy, and allied health, it is an industry with complex processes and data-oriented record systems that are often difficult to manage manually with paperwork.

Necessity is the mother of invention, and hence people across the world have built software and systems to manage:

  • Patients’ data or rather medical history
  • Bills and claims for own and third-party services
  • Inventory management
  • Communication channels among various departments like reception, doctor’s room, investigation rooms, wards, Operation theaters, etc.
  • Controlled Medical equipment and much more.

Thus, saving our precious time, making life easier, and minimizing human errors.

Healthcare integrated with Linux: With high availability, support for critical workloads, low power consumption, and reliability, Linux has established itself alongside the likes of Windows and macOS. With a “stripped-down” graphical interface and a minimal OS install, it provides a strong impetus for performance, preventing many unneeded services from running and giving direct control over hardware. By integrating Linux with the latest technological solutions in healthcare (check out Elinext healthcare solutions, as an example), businesses are saving a lot while gaining enhanced security.

Linux in Healthcare Categories

 

A few drivers promoting Linux in healthcare are:

Open Source: One of the greatest benefits of Linux is its open-source nature, which saves license costs for healthcare organizations. Most of the software and programs running on Linux are largely open source too. Anyone can modify the Linux kernel under its open-source license, allowing customization to fit your needs. With open source there is no need to request additional resources or sign additional agreements, and it gives you vendor independence. With a creditable Linux community backed by various organizations, you have satisfactory support.

MuseScore Created New Font in Memory of Original SCORE Program Creator

Image
MuseScore

MuseScore is free notation software for Windows, macOS, and Linux. It is designed for and suited to music teachers, students, and both amateur and professional composers. MuseScore is released as FOSS under the GNU GPL license and is accompanied by the freemium MuseScore.com sheet music catalogue, with a mobile score viewer, a playback app, and an online score-sharing platform. In 2018, the MuseScore company was acquired by Ultimate Guitar, which added full-time paid developers to the open source team. Since 2019 the MuseScore design team has been led by Martin Keary, known as the blogger Tantacrul, who has consistently criticized notation software for its design and usability. From that moment on, a qualitative change was set in motion in MuseScore.

Historically, the engraving quality in MuseScore has not been entirely satisfactory. After a review by Martin Keary, MuseScore product owner (previously MuseScore head of design), and Simon Smith, an engraving expert who produced multiple detailed reports on the engraving quality of MuseScore 3.5, it became apparent that some key engraving issues should be resolved immediately, as doing so would have a significant impact on the overall quality of scores. These changes will considerably improve the quality of scores published in the sheet music catalog, MuseScore.com.

MuseScore 3.6 was called the 'engraving release'; it addressed many of the biggest issues affecting sheet music's layout and appearance, and it resulted from a massive collaboration between the community and the internal team.

MuseScore sheet

 

Two of the most notable additions in this release are Leland, our new notation font, and Edwin, our new typeface.

Leland is a highly sophisticated notation style created by Martin Keary & Simon Smith. Leland aims to provide a classic notation style that feels 'just right' with a balanced, consistent weight and a finessed appearance that avoids overly stylized quirks.

The new typeface, Edwin, is based on New Century Schoolbook, which has long been a typeface of choice for some of the world's leading publishers, and it was explicitly chosen as a complementary companion to Leland. We have also provided new default style settings (margins, line thickness, etc.) to complement Leland and Edwin, matching conventions used by the world's leading publishing houses.

Leland Smith, the creator of the SCORE program

“Then there's our new typeface, Edwin, which is an open license version of new Century Schoolbook - long a favourite of professional publishers, like Boosey and Hawkes. But since there is no music written yet, you'll be forgiven for missing the largest change of all: our new notation font: Leland, which is named after Leland Smith, the creator of a now abandoned application called SCORE, which was known for the amazing quality of its engraving. We have spent a lot of time finessing this font to be a world beater.”

— Martin Keary, product owner of MuseScore

Equally important as the new notation style is the new vertical layout system. It is switched on by default for new scores and can be activated on older scores too. It is a tremendous improvement to how staves are vertically arranged and will save composers hours of work by significantly reducing their reliance on vertical spacers and manual adjustment.

The MuseScore 3.6 developers also created a system for automatically organizing the instruments on your score to conform with a range of common conventions (orchestral, marching band, etc.). In addition, newly created scores will be accurately bracketed by default. A user can even specify soloists, who will be arranged and bracketed according to the chosen convention. These three new systems are the result of a collaboration between Simon Smith and MuseScore community member Niek van den Berg.

The MuseScore team has also greatly improved how the software displays the notation fonts Emmentaler and Bravura, which now more accurately match the original designers' intentions, and has included a new jazz font called 'Petaluma,' designed by Anthony Hughes at Steinberg.

Lastly, MuseScore has made some beneficial improvements to the export process, including a new dialog containing lots of practical and time-saving settings. This work was implemented by one more community member, Casper Jeukendrup.

The team's current plans are to improve the engraving capabilities of MuseScore, including substantial overhauls to the horizontal spacing and beaming systems. MuseScore 3.6 may be a massive step, although there is a great deal of work ahead.

Links

Official release notes: MuseScore 3.6

Martin Keary’s video: “How I Designed a Free Music Font for 5 Million Musicians (MuseScore 3.6)”

Official video: “MuseScore 3.6 - A Massive Engraving Overhaul!”

Download MuseScore for free: MuseScore.org

Virtual Machine Startup Shells Closes the Digital Divide One Cloud Computer at a Time

Image
Shells Virtual Machine and Cloud Computing

Startup turns devices you probably already own - from smartphones and tablets to smart TVs and game consoles - into full-fledged computers.

Shells (shells.com), a new entrant in the virtual machine and cloud computing space, is excited to launch its new product, which gives users the freedom to code and create on nearly any device with an internet connection. Flexibility, ease, and competitive pricing are a focus for Shells, which makes it easy for a user to start up their own virtual cloud computer in minutes. The company is also offering multiple Linux distros (and continuing to add more) to ensure users can have the computer they want and are most comfortable with.

The US-based startup Shells turns idle screens, including smart TVs, tablets, older or low-spec laptops, gaming consoles, smartphones, and more, into fully-functioning cloud computers. The company utilizes real computers, with Intel processors and top-of-the-line components, to send processing power into your device of choice. When a user accesses their Shell, they are essentially seeing the screen of the computer being hosted in the cloud - rather than relying on the processing power of the device they’re physically using.

Shells was designed to run seamlessly on a number of devices that most users likely already own; all the device needs is to open an internet browser or run one of Shells’ dedicated applications for iOS or Android. Shells are always on and always up to date, ensuring speed and security while avoiding the need to constantly upgrade or buy new hardware.

Shells offers four tiers (Lite, Basic, Plus, and Pro) catering to casual users and professionals alike. Shells Pro targets the latter, and offers a quad-core virtual CPU, 8GB of RAM, 160GB of storage, and unlimited access and bandwidth which is a great option for software engineers, music producers, video editors, and other digital creatives.

Using your Shell for testing eliminates the worry associated with tasks or software that could potentially break the development environment on your main computer or laptop. Because Shells are running round the clock, users can compile on any device without overheating - and allow large compile jobs to complete in the background or overnight. Shells also enables snapshots, so a user can revert their system to a previous date or time. In the event of a major error, simply reinstall your operating system in seconds.

“What Dropbox did for cloud storage, Shells endeavors to accomplish for cloud computing at large,” says CEO Alex Lee. “Shells offers developers a one-stop shop for testing and deployment, on any device that can connect to the web. With the ability to use different operating systems, both Windows and Linux, developers can utilize their favorite IDE on the operating system they need. We also offer the added advantage of being able to utilize just about any device for that preferred IDE, giving devs a level of flexibility previously not available.”

“Shells is hyper focused on closing the digital divide as it relates to fair and equal access to computers - an issue that has been unfortunately exacerbated by the ongoing pandemic,” Lee continues. “We see Shells as more than just a cloud computing solution - it’s leveling the playing field for anyone interested in coding, regardless of whether they have a high-end computer at home or not.”

Follow Shells for more information on service availability, new features, and the future of “bring your own device” cloud computing:

Website: https://www.shells.com

Twitter: @shellsdotcom

Facebook: https://www.facebook.com/shellsdotcom

Instagram: https://www.instagram.com/shellscom

An Introduction to Linux Gaming thanks to ProtonDB

An Introduction to Linux Gaming thanks to ProtonDB

Video Games On Linux? 

In this article, the newest compatibility feature for gaming will be introduced and explained for all you dedicated video game fanatics. 

Valve has released its new compatibility feature to innovate Linux gaming, complete with its own community of play testers and reviewers.

In recent years we have made leaps and strides in making Linux and Unix systems more accessible for everyone. Now we come to a commonly asked question: can we play games on Linux? Well, of course! And almost. Let me explain.

Proton compatibility layer for Steam client 

With the rising popularity of Linux systems, Valve is going ahead of the crowd yet again with Proton for its Steam client (the program that runs the games you purchase from Steam). Proton is a compatibility layer based on Wine and DXVK that lets Microsoft Windows games run on Linux operating systems. Proton is backed by Valve itself and can easily be enabled on any Steam account for Linux gaming, through an integration called "Steam Play."

Lately, there has been a lot of controversy, as Microsoft is rumored to someday release its own app store and restrict downloading software from the open web. In response, many companies and software developers feel pressured to find a new "haven" for sharing content on the internet. Proton might be Valve's response to this, and Valve is working to make more of its games accessible to Linux users.

Activating Proton with Steam Play 

Proton is integrated into the Steam client with "Steam Play." To activate Proton, go into your Steam client and click on Steam in the upper right corner. Then click on Settings to open a new window.

Linux Gaming Steamplay
Steam Client's settings window

 

From here, click on the Steam Play button at the bottom of the panel, then check "Enable Steam Play for Supported Titles." Steam will then ask you to restart; click yes, and you are ready to play after the restart.

Your computer will now play all of Steam's whitelisted games seamlessly. But if you would like to try other games that are not guaranteed to work on Linux, then check "Enable Steam Play for All Other Titles."

What Happens if a Game has Issues?

Don't worry; this can and will happen for games that are not in Steam's whitelisted games archive. But there is help for you online on Steam and in Proton's growing community. Be patient and don't give up! There will always be a solution out there.

The Review of GUI LVM Tools

GUI LVM Tools

LVM is a powerful storage management subsystem now included in all Linux distributions. It provides users with a variety of valuable features to fit different requirements. The management tools that come with LVM are based on the command-line interface, which is very powerful and well suited to automated/batch operations, but LVM operations and configuration are quite complex. So many software companies, including Red Hat, have launched GUI-based LVM tools to help users manage LVM more easily. Let’s review them here to see the similarities and differences between the individual tools.

system-config-lvm (alternate name LVM GUI)

Provider: Red Hat

system-config-lvm is the first GUI LVM tool, originally released as part of Red Hat Linux; it is also called LVM GUI because it was the first. Later, Red Hat created a standalone installation package for it, so system-config-lvm can be used in other Linux distributions as well. The installation packages include RPM and DEB packages.

The main panel of system-config-lvm

system-config-lvm only supports LVM-related operations. Its user interface is divided into three parts. The left part is a tree view of disk devices and LVM devices (VGs); the middle part is the main view, which shows VG usage divided into LV and PV columns.

There are zoom in/zoom out buttons in the main view to control the display ratio, but this is not enough for displaying complex LVM information. The right part displays details of the selected object (PV/LV/VG).

The different versions of system-config-lvm are not completely consistent in how they organize devices. Some of them show both LVM devices and non-LVM devices (disks); others show LVM devices only. I have tried two versions: one shows only the LVM devices existing in the system, namely PV/VG/LV, and no other devices; the other can display non-LVM disks, and a PV can be removed in the disk view.

The version which shows non-lvm disks

Supported operations

PV Operations

  • Delete PV
  • Migrate PV

VG Operations

  • Create VG
  • Append PV to VG/Remove PV from VG
  • Delete VG (Delete last PV in VG)
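
For reference, the command-line operations these buttons wrap are roughly the following (a sketch; device and group names are illustrative):

pvmove /dev/sdb1            # migrate data off a physical volume
pvremove /dev/sdb1          # delete a PV
vgcreate vg_data /dev/sdb1  # create a VG
vgextend vg_data /dev/sdc1  # append a PV to a VG
vgreduce vg_data /dev/sdc1  # remove a PV from a VG
vgremove vg_data            # delete a VG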

LV Operations

Boost Up Productivity in Bash – Tips and Tricks

Bash Tips and Tricks

Introduction

When spending most of your day around the bash shell, it is not uncommon to waste time typing the same commands over and over again. This is pretty close to the definition of insanity.

Luckily, bash gives us several ways to avoid repetition and increase productivity.

Today, we will explore the tools we can leverage to optimize what I love to call “shell time”.

Aliases

Bash aliases are one of the methods to define custom or override default commands.

You can consider an alias as a “shortcut” to your desired command with options included.

Many popular Linux distributions come with a set of predefined aliases.

Let’s see the default aliases of Ubuntu 20.04. To do so, simply type “alias” and press [ENTER].

Bash Tips and Tricks 1

By simply issuing the command “l”, behind the scenes, bash will execute “ls -CF”.

It's as simple as that.

This is definitely nice, but what if we could specify our own aliases for the most used commands?! The answer is, of course we can!

One of the commands I use extremely often is “cd ..” to change the working directory to the parent folder. I have spent so much time hitting the same keys…

One day I decided it was enough and I set up an alias!

To create a new alias, type “alias” followed by the alias name (in my case I chose “..”), then “=”, and finally the command we want an alias for, enclosed in single quotes.

Here is an example below.

Bash Tips and Tricks 2
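
In plain text, the alias from the screenshot amounts to the following (add it to ~/.bashrc to make it permanent):

# ".." now jumps to the parent directory.
alias ..='cd ..'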

Functions

Sometimes you will have the need to automate a complex command, perhaps one that accepts arguments as input. Under these constraints, aliases will not be enough to accomplish your goal, but no worries. There is always a way out!

Functions give you the ability to create complex custom commands which can be called directly from the terminal like any other command.

For instance, there are two consecutive actions I do all the time: creating a folder and then cd-ing into it. To avoid the hassle of typing “mkdir newfolder” and then “cd newfolder”, I have created a bash function called “mkcd” which takes the name of the folder to be created as an argument, creates the folder, and cds into it.

To declare a new function, we type the function name “mkcd ” followed by “()” and our complex command enclosed in curly brackets: “{ mkdir -vp "$@" && cd "$@"; }”
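
Put together, the function reads:

# mkcd: create a folder (and any missing parents, verbosely) and cd into it.
# Usage: mkcd newfolder
mkcd () { mkdir -vp "$@" && cd "$@"; }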

Case Study: Success of Pardus GNU/Linux Migration

Pardus GNU/Linux Migration

Eyüpsultan Municipality decided to use an open source operating system in desktop computers in 2015.

The most important goal of the project was to ensure information security and reduce foreign dependency.

As a result of the research and analyses performed, a detailed migration plan was prepared.

As a first step, the licensed office software installed on all computers was removed, and LibreOffice was installed instead.

Later, LibreOffice training was given to the municipal staff.

Pardus GNU/Linux

Meanwhile, preparations were made for the operating system migration.

Instead of the existing licensed operating system, it was decided to use Pardus GNU/Linux, a distribution developed in Turkey.

Applications on the Pardus GNU/Linux operating system were examined in detail, and unnecessary applications were removed.

And a new ISO file was created with the applications used in Eyüpsultan municipality.

This process automated the setup steps and reduced setup time.

While the project continued at full speed, the staff were again trained on LibreOffice and Pardus GNU/Linux.

After their training, the users took the exam.

The Pardus GNU/Linux operating system was then installed on the computers of those who passed.

Those who failed were retrained and took the exam again.

As of 2016, 25% of the computers had completed the operating system migration.

Migration Project Implementation Steps

Analysis

A detailed inventory of all software and hardware products used in the institution was created. The analysis should go down to department, unit, and personnel details.

It should be evaluated whether extra costs will arise in the migration project.

Planning

A migration plan should be prepared and migration targets should be determined.

The duration of the migration should be calculated and the team that will carry out the migration should be determined.

Production

You can use an existing Linux distribution.

Or you can customize the distribution you will use according to your own preferences.

Making a customized ISO file will give you speed and flexibility.

It also helps you compensate for the loss of time caused by incorrect entries.

Test

Start using the ISO file you have prepared in a lab environment consisting of the hardware you use.

Look for solutions, noting any problems encountered during and after installation.

BPF For Observability: Getting Started Quickly

Linux BPF For Observability: Getting Started Quickly

How and Why for BPF

BPF is a powerful component in the Linux kernel and the tools that make use of it are vastly varied and numerous. In this article we examine the general usefulness of BPF and guide you on a path towards taking advantage of BPF’s utility and power. One aspect of BPF, like many technologies, is that at first blush it can appear overwhelming. We seek to remove that feeling and to get you started.

What is BPF?

BPF is the name, and no longer an acronym: it was originally Berkeley Packet Filter, then eBPF for Extended BPF, and now just BPF. BPF is a kernel and user-space observability scheme for Linux.

One description: BPF is a verified-to-be-safe, fast-to-switch-to mechanism for running code in Linux kernel space in reaction to events such as function calls, function returns, and trace points in kernel or user space.

To use BPF, one runs a program that is translated into instructions that run in kernel space. Those instructions may be interpreted or translated to native instructions; for most users the exact nature doesn’t matter.

While in the kernel, the BPF code can perform actions in response to events, such as creating stack traces, counting the events, or collecting counts into buckets for histograms.

Through this, BPF programs provide fast, immensely powerful, and flexible means for deep observability of what is going on in the Linux kernel or in user space. Observability into user space from kernel space is possible, of course, because the kernel can control and observe code executing in user mode.
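
For a quick taste, a one-liner using the bpftrace front end (one of many tools built on BPF; this assumes bpftrace is installed):

# Count system calls per process name until Ctrl-C; the counting happens
# in kernel space, and only the final summary is copied to user space.
sudo bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'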

Running BPF programs amounts to having a user program make BPF system calls, which are checked for appropriate privileges and verified to execute within limits. For example, in Linux kernel version 5.4.44, the BPF system call checks for privilege with:

if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
        return -EPERM;

The BPF system call checks for a sysctl-controlled value and for a capability. The sysctl variable can be set to one with the command

sysctl kernel.unprivileged_bpf_disabled=1

but to set it back to zero you must reboot, making sure your system is not configured to set it to one at boot time.

Because BPF does the work in kernel space, significant time and overhead are saved by avoiding context switches and by not transferring large amounts of data back to user space.

Not all kernel functions can be traced. For example, if you were to try funccount-bpfcc '*_copy_to_user' you might get output like:

cannot attach kprobe, Invalid argument

Failed to attach BPF program b'trace_count_3' to kprobe

b'_copy_to_user'

This is kind of mysterious. If you check the output from dmesg you would see something like:

A Linux Survey For Beginners

Linux For Beginners

So you have decided to give the Linux operating system a try. You have heard it is a good, stable operating system with lots of free software, and you are ready to give it a shot. It is downloadable for free, so you get on the net, search for a copy, and you are in for a shock: there isn’t one “Linux”, there are many. Now you feel like a deer in the headlights. You want to make a wise choice, but have no idea where to start. Unfortunately, this is where a lot of new Linux users give up. It is just too confusing.

The many versions of Linux are often referred to as “flavors” or distributions. Imagine yourself in an ice cream shop displaying 30+ flavors. They all look delicious, but it’s hard to pick one and try it. You may find yourself confused by the many choices but you can be sure you will leave with something delicious. Picking a Linux flavor should be viewed in the same way.

As with ice cream lovers, Linux users have their favorites, so you will hear people profess which is the “best”. Of course, the best is the one that you conclude will fit your needs. That might not be the first one you try. According to linuxquestions.org there are currently 481 distributions, but you don’t need to consider every one. The same source lists these distributions as “popular”: Ubuntu, Fedora, Linux Mint, OpenSUSE, PCLinuxOS, Debian, Mageia, Slackware, CentOS, Puppy, Arch. Personally, I have only tried about five of these, and I have been a Linux user for more than 20 years. Today, I mostly use Fedora.

Many of these also have derivatives that are made for special purpose uses. For example, Fedora lists special releases for Astronomy, Comp Neuro, Design Suite, Games, Jam, Python Classroom, Security Lab, Robotics Suite. All of these are still Fedora, but the installation includes a large quantity of programs for the specific purpose. Often a particular set of uses can spawn a whole new distribution with a new name. If you have a special interest, you can still install the general one (Workstation) and update later.

Very likely one of these systems will suit you. Even within these there are subtypes and “window treatments” to customize your operating system. Gnome, Xfce, LXDE, and so on are different window treatments (desktop environments) available in all of the Linux flavors. Some try to look like MS Windows, some try to look like a Mac; some try to be original, lightweight, or graphically awesome. But that is best left for another article. You are running Linux no matter which of those you choose, and if you don’t like the one you choose, you can try another without losing anything. You also need to know that some of these distributions are related, which can help simplify your choice.

 

Terminal Vitality

Terminal Vitality - Difference Engine

Ever since Douglas Engelbart flipped over a trackball and discovered a mouse, our interactions with computers have shifted from linguistics to hieroglyphics. That is, instead of typing commands at a prompt in what we now call a Command Line Interface (CLI), we click little icons and drag them to other little icons to guide our machines to perform the tasks we desire. 

Apple led the way to commercialization of this concept we now call the Graphical User Interface (GUI), replacing its pioneering and mostly keyboard-driven Apple // microcomputer with the original GUI-only Macintosh. After quickly responding with an almost unusable Windows 1.0 release, Microsoft piled on in later versions with the Start menu and push button toolbars that together solidified mouse-driven operating systems as the default interface for the rest of us. Linux, along with its inspiration Unix, had long championed many users running many programs simultaneously through an insanely powerful CLI. It thus joined the GUI party late with its likewise insanely powerful yet famously insecure X-Windows framework and the many GUIs such as KDE and Gnome that it eventually supported.

GUI Linux

But for many years the primary role for X-Windows on Linux was gratifyingly appropriate given its name - to manage a swarm of xterm windows, each running a CLI. It's not that Linux is in any way incompatible with the Windows / Icon / Mouse / Pointer style of program interaction - the acronym this time being left as an exercise for the discerning reader. It's that we like to get things done. And in many fields where the progeny of Charles Babbage's original Analytic Engine are useful, directing the tasks we desire is often much faster through linguistics than by clicking and dragging icons.

 

GUI Linux Terminal
A tiling window manager makes xterm overload more manageable

 

A GUI certainly made organizing many terminal sessions more visual on Linux, although not necessarily more practical. During one stint of my lengthy engineering career, I was building much software using dozens of computers across a network, and discovered the charms and challenges of managing them all through GNU's screen tool. Not only could a single terminal or xterm contain many command line sessions from many computers across the network, but I could also disconnect from them all as they went about their work, drive home, and reconnect to see how the work was progressing. This was quite remarkable in the early 1990s, when Windows 2 and Mac OS 6 ruled the world. It's rather remarkable even today.
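
For those who have not tried it, the workflow looks roughly like this (session names are arbitrary):

screen -S build    # start a session named "build" and launch jobs inside it
                   # press Ctrl-a d to detach; everything keeps running
screen -ls         # later, perhaps over ssh from home, list sessions
screen -r build    # reattach and check on the work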

Bashing GUIs

Building A Dashcam With The Raspberry Pi Zero W

raspberry-pi-zero-w

I've been playing around with the Raspberry Pi Zero W lately and having so much fun on the command line. For those uninitiated, it's a tiny ARM computer running Raspbian, a derivative of Debian. It has a 1 GHz processor that can be overclocked and 512 MB of RAM, in addition to wireless g and Bluetooth.

raspberry pi zero w with wireless g and bluetooth

A few weeks ago I built a garage door opener with video and accessible via the net. I wanted to do something a bit different and settled on a dashcam for my brother-in-law's SUV.

I wanted the camera and Pi Zero W mounted on the dashboard and easily removable. On boot it should autostart the RamDashCam (RDC), and there should also be 4 desktop scripts: dashcam.sh, startdashcam.sh, stopdashcam.sh, and shutdown.sh, plus a folder named video on the Desktop for the older video files. I also needed a way to power the RDC when there is no power to the vehicle's USB ports. Lastly, I wanted its data accessible on the local LAN when the vehicle is at home.

Here is the parts list:

  1. Raspberry Pi Zero W kit (I got mine from Vilros.com)
  2. Raspberry Pi official camera
  3. Micro SD card, at least 32 gigs
  4. A 3D printed case from thingiverse.com
  5. Portable charger, usually used to charge cell phones and tablets on the go
  6. Command strips (like double-sided tape that's easy to remove) or Velcro strips

 

First I flashed the SD card with Raspbian, powered it up and followed the setup menu. I also set a static IP address.

Now to the fun stuff. Let's create a service so we can start and stop RDC via systemd. Using your favorite editor, navigate to "/etc/systemd/system/", create "dashcam.service", and add the following:

[Unit]
Description=dashcam service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=forking
Restart=on-failure
RestartSec=1
User=pi
WorkingDirectory=/home/pi/Desktop
ExecStart=/bin/bash /home/pi/Desktop/startdashcam.sh

[Install]
WantedBy=multi-user.target

 

Now that that's complete, let's enable the service by running the following: sudo systemctl enable dashcam

I added these scripts to start and stop RDC on the Desktop so my brother-in-law doesn't have to mess around in menus or on the command line. Remember to "chmod +x" these 4 scripts.

 

startdashcam.sh

#!/bin/bash

# remove files older than 3 days
find /home/pi/Desktop/video -type f -iname '*.flv' -mtime +3 -exec rm {} \;

# start dashcam service
sudo systemctl start dashcam

 

stopdashcam.sh
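
#!/bin/bash

# stop dashcam service
sudo systemctl stop dashcam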