A Guide to 5 Fair Selections of Open Source Ticketing Tools for Linux


Are you in search of open-source ticketing tools for Linux? Well, this article brings a guide to 5 fair selections of open source ticketing software to provide uninterrupted customer support.

Why You Need Ticketing Tools

A customer trouble ticketing (help desk) system is an assistance resource for resolving customer queries. Companies often provide customer support using email, website, and/or telephone. Ticketing software is a crucial part of any business’s success.

Your business can’t run properly without a satisfied client base, and increased customer retention is what businesses need. The right ticketing tools help ensure the best customer service for any business.

Linux gives enterprises access to excellent customer service software for sustainable business growth, because a powerful set of ticketing tools provides the undivided support that businesses deserve.

5 Best Ticketing Tools for Linux

This section takes you through 5 different ticketing tools you can install on Linux and why you should use them. So let’s begin!

osTicket

For newly started businesses, osTicket is a viable open source ticketing tool. It’s a lightweight and efficient support ticket system used by a good number of companies. If you run an enterprise or a non-profit and are not ready for paid ticketing tools just yet, osTicket is a must-try.

osTicket provides a simple and intuitive web interface that integrates customer queries from phone, email, and web forms. Worried about spam emails? osTicket helps reduce spam with CAPTCHA filling and auto-refresh techniques.

You can work tickets on a priority basis through this tool and get issues resolved in the shortest possible time.

PHD Help Desk

PHD Help Desk is a PHP+JavaScript+MySQL-based open source ticketing tool used for the registry and follow-up of incidents in an organization. PHD has a user base all across the world. The latest version of PHD Help Desk is 2.12.

This ticketing tool works in various ways. Using PHD, incidents can be classified and registered at multiple levels, such as state of incident, type, sub-type, priority, description of incident, and historical factors, to name a few.

The database is queried in a particular format depending on the user’s requirements, and the data is then processed on a tallying sheet. Some of the advanced features of PHD Help Desk are the ability to export tickets to Excel format, a PHPMailer library for configuring email, and new password creation.

In Search of Linux Laptops? Check these 6 Places to Get Your Laptop in 2021


Are you in search of Linux laptops? This article takes you through 6 different places that offer the best Linux laptops. So get prepared to choose your Linux laptop in 2021.

Dell

When it comes to laptops, the first name that comes to my mind is Dell. For over 20 years, Dell has been selling high-end Linux laptops. In a Dell store, you can get Ubuntu and Red Hat Enterprise Linux laptops. These laptops are built to meet the needs of developers, businesses, and sysadmins.

For developers who travel a lot, the XPS 13 Developer Edition would be the best choice. The Dell XPS is expensive, at around $1,000, so if you’re in search of something less expensive, you can check out Dell Inspiron laptops. Dell’s Precision workstations with RHEL or Ubuntu are designed for small business owners and CG professionals.

Side Note: Dell doesn’t have a separate section for Linux laptops. Type “Ubuntu” into the search box to see all of its laptops with Linux preinstalled.

Slimbook

Slimbook is well known for its thin, light, rigid, and durable laptops, starting at a reasonable price of €930 (approx. $1,075). These come with a nice screen, solid battery life, a powerful CPU, and very good speakers.

This brand is from Spain. Slimbook got ahead of its competitors by launching the first KDE laptops.

Slimbook ships laptops with a good variety of popular Linux distros, such as KDE Neon, Ubuntu, Ubuntu MATE, Linux Mint, and Kubuntu. Additionally, their laptops offer two Spanish Linux distros – Max and Lliurex. You can choose Windows as well with their laptops, but that comes at additional cost.

Slimbook offers desktop systems too. So, if you ever need a desktop, check out their site as well.

System76 

System76’s Linux laptops are very well built, powerful, and extremely portable. If you are a software developer who travels a lot, and you’re in search of a laptop with 32 GB of RAM and a 1 TB SSD, then go for System76.

System76 laptops were initially Ubuntu-powered. Later, in 2017, this US-based company released its own Linux distro, called Pop!_OS, which is built on Ubuntu. After that, Pop!_OS became the default OS, with Ubuntu still available.

Q&A trip to Linux’s Black Hole – /dev/null


As per NASA, “A black hole is a place in space where gravity pulls so much that even light can not get out.” Something similar exists in the Linux universe as well: it discards anything written to it and, when read, just returns an EOF (end-of-file). It’s a special file, also referred to as the null device – /dev/null.

So, it’s just a file?

Yes, and most things in Linux are files, but /dev/null is not a regular file – let’s dig deeper.

ls -l /dev/null

The c in crw-rw-rw- tells us that it's a character special file, which means it processes data character by character. This can be checked using test -c as well:

test -c /dev/null && echo "character special file"

What are the contents of the file?

Let’s check that using the cat command:

cat /dev/null

As stated earlier, it just returns an EOF (end-of-file) when read. So, it's empty!

What more can we know about the file?

Let’s find out using the stat command:

stat /dev/null

This tells us that its size is 0. It’s also good to note that the file’s read and write permission is enabled for everyone, but execute permission is not set.

What happens to the file’s size when we write data to it?

Let’s try that:

echo "Hello World" > /dev/null

cat /dev/null

stat /dev/null

The cat command returned nothing and as per the stat command, its size did not change.

As stated earlier, it discards anything written to it. You may write any amount of data to it, which will be immediately discarded, so its size will always remain 0 – Singularity?

In other words, you cannot change /dev/null.
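These properties make /dev/null useful in everyday shell work. The commands below are illustrative examples of sending unwanted output to the null device (the file name /tmp/devnull-demo.log is just a placeholder):

```shell
# Discard normal output
echo "discard me" > /dev/null

# Silence error messages only; the command still fails, just quietly
ls /nonexistent 2> /dev/null || true

# Silence both stdout and stderr
ls /etc /nonexistent > /dev/null 2>&1 || true

# Reading /dev/null returns EOF immediately, so this prints nothing
cat /dev/null

# Copying /dev/null over a file truncates it to zero bytes
echo "some log data" > /tmp/devnull-demo.log
cp /dev/null /tmp/devnull-demo.log
```

The redirection forms (2>, 2>&1) are the ones you will meet most often in scripts and cron jobs.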

Download These 7 Cool Apps on Your Linux Machine to Make Life Easier


Not only are Linux distros open-source, but apps for Linux are also free. Though some business apps come at a cost, most apps created for individual users are free of charge.

Want to know about some of the cool apps to download on your Linux machine?

This article walks you through 7 apps to download on Linux to make your life easier. Head over to the next section!  

Ulauncher

Before downloading any other application on Linux, we recommend getting Ulauncher. That’s because you can launch any application via Ulauncher using just the keyboard.

Try adding Ulauncher extensions to get the most out of this app, which is inspired by Alfred for Mac. Extensions add capabilities such as looking up dictionary definitions, launching web searches, finding and copying emojis to the clipboard, and lots more.

Ulauncher runs smoothly and allows searching files and apps using hotkeys. Ulauncher’s features include built-in themes, customizable shortcuts, fuzzy search, a wide variety of plugins, and searching Google, Stack Overflow, and Wikipedia.

Thunderbird

Thunderbird by Mozilla is an open-source email client. Some Linux distros come with Thunderbird preinstalled. If yours doesn’t, hop onto your App Center or Software Center and install it. You can download the app from Mozilla’s website as well.

The setup wizard guides you through the process of creating your own email address. Thunderbird provides email settings for most of the common email providers, so an existing email account can be added too. Attach multiple email accounts as per your needs.

Want to make Thunderbird look cool? Add-ons such as themes, the Lightning calendar extension, and mail folder sorting are some of the features to try out.

Steam

Looking for a gaming client on Linux? Use Steam from Valve. Steam is, admittedly, the best game distribution store for major OSs, including Linux.

From Shadow of the Tomb Raider to DiRT 4, and from DOTA 2 to Warhammer – Steam boasts many thousands of indie hits, retro-flavored games, and AAA titles for Linux.

Improve The CrowdSec Multi-Server Installation With HTTPS Between Agents


Prerequisites

This article is a follow-up to the CrowdSec multi-server setup. It applies to a configuration with at least two servers (referred to as server-1 plus one of server-2 or server-3).

Goals

To address the security issues posed by cleartext HTTP communication in our previous CrowdSec multi-server installation, we propose solutions to achieve communication between CrowdSec agents over encrypted channels. On top of that, the third solution allows server-2 or server-3 to trust server-1’s identity, avoiding man-in-the-middle attacks.

Using self-signed certificates

Create the certificate

First we have to create a certificate. This can be achieved with the following one-liner.

openssl req -x509 -newkey rsa:4096 -keyout encrypted-key.pem -out cert.pem -days 365 -addext "subjectAltName = IP:172.31.100.242"

For now, CrowdSec is not able to ask for the passphrase of the private key when starting. Thus, we have the choice of either decrypting the private key by hand each time we start or reload CrowdSec, or storing the key unencrypted. Either way, to strip the passphrase one can do:

openssl rsa -in encrypted-key.pem -out key.pem

Then, the unencrypted key file can be safely deleted after Crowdsec is started.

Configure crowdsec for using a self-signed certificate

On server-1, we have to tell CrowdSec to use the generated certificate. Hence, the tls.cert_file and tls.key_file options in the api.server section of the following /etc/crowdsec/config.yaml excerpt are set to the generated certificate and key files.

api:
  server:
    log_level: info
    listen_uri: 10.0.0.1:8080
    profiles_path: /etc/crowdsec/profiles.yaml
    online_client: # Crowdsec API credentials (to push signals and receive bad 
    tls:
      cert_file: /etc/crowdsec/ssl/cert.pem
      key_file: /etc/crowdsec/ssl/key.pem
On the client side, configuration changes happen in two files. First, we have to modify /etc/crowdsec/config.yaml to accept self-signed certificates by setting insecure_skip_verify to true.

We also have to change http to https in the /etc/crowdsec/local_api_credentials.yaml file to reflect the changes. This small change has to be made on all three servers (server-1, server-2, and server-3).
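For reference, the client-side excerpt might look like the following. Treat the exact nesting of insecure_skip_verify as an assumption to check against your CrowdSec version’s documentation:

```yaml
# /etc/crowdsec/config.yaml (client side): accept self-signed certificates
api:
  client:
    insecure_skip_verify: true
```

And in /etc/crowdsec/local_api_credentials.yaml, the url line simply switches scheme, e.g. from url: http://10.0.0.1:8080 to url: https://10.0.0.1:8080.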

Experimenting with Python implementation of Host Identity Protocol


INTRODUCTION

Sometimes it is easier to implement prototypes in user space using high-level languages, such as Python or Java. In this document we describe our implementation effort related to Host Identity Protocol version 2. In the first part we describe various security solutions, then we discuss some implementation details of the HIP protocol, and finally, in the last part of this work, we discuss the performance of the HIP and IPSec protocols implemented in the Python language.

BACKGROUND

In this section we describe the basic background. First, we discuss the problem of the mobile Internet and introduce the Host Identity Protocol. We then move to a discussion of various security protocols. We conclude the section with a discussion of elliptic curves and a variant of the Diffie-Hellman algorithm which uses EC cryptography (ECC).

Dual role of IP

The Internet was initially designed so that the Internet Protocol (IP) address plays a dual role: it is the locator, so that routers can find the recipient of a message, and it is an identifier, so that upper layer protocols (such as TCP and UDP) can make bindings (for example, transport layer sockets use IP addresses and ports to make connections). This becomes a problem when a networked device roams from one network to another and its IP address changes, leading to failures in upper layer connections. The other problem is the establishment of an authenticated channel between the communicating parties. In practice, when making connections, the long-term identities of the parties are not verified. Of course, there are solutions such as SSL which can readily solve the problem at hand. However, SSL is suitable only for TCP connections, and most of the time practical use cases include only secure web surfing and establishment of VPN tunnels. Host Identity Protocol, on the other hand, is more flexible: it allows peers to create authenticated secure channels on the network layer, so all upper layer protocols can benefit from such channels.

HIP relies on a 4-way handshake to establish an authenticated session. During the handshake, the peers authenticate each other using long-term public keys and derive session keys using Diffie-Hellman or Elliptic Curve (EC) Diffie-Hellman algorithms. To combat denial-of-service attacks, HIP also introduces computational puzzles.

Gaming Time? Top 3 VR Games Available on Linux


It’s possible to deep dive into the virtual reality gaming world on your Linux system. Want to explore VR games on Linux? This article takes you through the top 3 VR games available on Linux.

Ready to get amazed? Let’s start.

What are VR Games?

VR games are the new-gen computer games enabled with virtual reality, in short, VR technology. It gives players a first-person perspective of all the gaming actions. As a participant, you can enjoy the gaming environment through your VR gaming devices, such as hand controllers, VR headsets, sensor-equipped gloves, and others.

VR games are played on gaming consoles, standalone systems, powerful laptops, and PCs compatible with VR headsets including HTC Vive, Oculus Rift, HP Reverb G2, Valve Index, and others.

Now, a little brief about VR technology. By now, you know that VR is an abbreviation of Virtual Reality. This is, basically, a computer-generated simulation in which the player controls generated objects through limb and facial movements in a three-dimensional environment. The environment is interacted with through special equipment, like clothing with touch-simulating pressure nodes and enclosed glasses with screens in front instead of lenses.

A lot of VR objects are usable as they are in reality and the gaming developers are making the VR universe more and more immersive with each passing day.

How to Get VR Games on Linux

The Steam store seems to be the best way to get VR games on your system. Good news: you don’t need to worry about installing all the modules and software to run a game smoothly – the Steam client takes care of all that. So, get a Steam account by downloading the client from Steam’s site.

Back in 2019, it was reported that VR Linux desktops were around the corner. What about now? Xrdesktop is here for you. Xrdesktop is free to use and lets you work with common desktop environments, like GNOME and KDE.

SimulaVR is a similar open-source project to check out.

Top 3 VR Games Available on Linux

Now the fun part: in this section, we’ll share the top 3 VR games to play on Linux in your gaming time.

How to Check Battery Status Using Linux Command Line


Checking the battery status through a GUI is easy. Hovering the mouse cursor over the battery indicator in the laptop’s taskbar shows the battery level. But did you know you can find the battery status through the Linux command line as well?

Yes, there are some utilities in Linux that can be of help in this regard.

This article explains 4 different methods of checking laptop battery status using the Linux command line.

Why Do You Need to Check Battery Status?

So, why do you need to check the battery status? Checking laptop battery health on a monthly basis is a good practice. It’ll inform you about any issues your computer might have related to charging or battery life. You can get alerted early and take the required measures, such as charging or replacing batteries.

When your PC is not active, the power management feature steps its components down to a low-power state, or turns the power off entirely.

Similarly, knowing the power source, battery model name, the technology used, vendors, etc helps operate your devices better and keep work going without any hassles.

How to Check Battery Status Using Linux Command Line

Follow the methods mentioned below to check battery status using the Linux command line.

Check Battery Status with upower Command

The upower command-line tool helps extract information related to the power source (batteries). It provides an interface to list down all the power sources of your PC or laptop.

Options Used with the upower Command

  • --monitor: You can print a line each time a battery or power source is added by connecting --monitor to upower. It also produces output when power sources are removed or changed.

  • --monitor-detail: This option prints the full power source detail whenever an event occurs.

 

Syntax

upower -i /org/freedesktop/UPower/devices/battery_BAT0

upower -i `upower -e | grep 'BAT'`

upower -i $(upower -e | grep BAT) | grep --color=never -E "state|to\ full|to\ empty|percentage"

The above are three different ways of using the upower command to find power source information.

Use cat and find

The “cat” and “find” commands also help find details about your battery and power source.

Syntax

For the battery capacity, the syntax would be:

cat /sys/class/power_supply/BAT0/capacity

For more detailed battery information use the find command.
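As a sketch of that idea (BAT0 and the exact attribute names vary by machine, so treat them as assumptions), you can walk the battery's sysfs directory and print every readable attribute:

```shell
# Dump every readable attribute of the first battery, if present
BAT=/sys/class/power_supply/BAT0
if [ -d "$BAT" ]; then
    find "$BAT" -maxdepth 1 -type f | while read -r attr; do
        # Some attributes are write-only or privileged; skip read errors quietly
        value=$(cat "$attr" 2>/dev/null) && echo "${attr##*/}: $value"
    done
else
    echo "No battery found at $BAT"
fi
```

On a desktop or virtual machine without a battery, the script simply reports that nothing was found.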

How to Decrease Video Sizes Using FFmpeg in Linux


Decreasing video sizes becomes necessary when space is limited on cloud services, disks, or personal storage drives. You can hold onto more footage by compressing files down to a smaller size.

The world of open-source video editing tools is huge. So, choosing one can be tricky. This article explains how you can efficiently decrease video sizes using FFmpeg in Linux.

What is FFmpeg?

So, what is FFmpeg? FFmpeg is a free and open-source command-line utility used in handling audio, video, other multimedia files, and streams in Linux. It has widespread use in video scaling, format transcoding, basic editing, standards compliance, and video post-production effects.

It can also create GIFs, edit videos, and record media. You can shrink videos considerably while maintaining quality to a great extent.

The MPEG video standards group inspired the name of this media-handling software project, while “FF” stands for “Fast Forward”. FFmpeg functions as a backbone of several software projects and renowned media players – YouTube, Blender, VLC, and iTunes, to name a few.

How to Install FFmpeg

Want to get hands-on with it? Let’s install FFmpeg.

Basically, you have to use the following commands for Debian/Ubuntu, Arch Linux, and RHEL/CentOS/Fedora respectively.

# Debian/Ubuntu

sudo apt-get install ffmpeg


# Arch Linux

sudo pacman -S ffmpeg


# RHEL/CentOS/Fedora

sudo dnf install ffmpeg

# or, on older releases that ship yum instead of dnf:

sudo yum install ffmpeg

 

After that, FFmpeg will be installed on your Linux distro.

Basic Usage of FFmpeg

To convert a media file using the default settings of FFmpeg, type:

ffmpeg -i inputfile.video outputfile.video

The above command converts the input file into the format implied by the given output file’s extension.

How to Decrease Video Sizes Using FFmpeg

Going back to basics: not all video files are created following the same procedure, hence file sizes tend to differ. For example, AVI video files are larger than MP4 files.

Takeaway? The smallest MP4 file of a video will be smaller than the smallest AVI file of the same video. However, the quality will vary with each of these file sizes. MP4 is not the smallest size you can expect, either: the containers for Windows Media videos and Flash videos (WMV and FLV) are the winners.
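To make this concrete, here is one commonly used way to shrink a video with FFmpeg: re-encode it with the H.265/HEVC codec at a higher constant rate factor (CRF). The file names are placeholders, and CRF 28 is just a reasonable starting point to tune:

```shell
# Bail out gracefully if ffmpeg is not installed
if ! command -v ffmpeg > /dev/null 2>&1; then
    echo "ffmpeg is not installed"
    exit 0
fi

# input.mp4 and output.mp4 are placeholder names
if [ ! -f input.mp4 ]; then
    echo "input.mp4 not found"
    exit 0
fi

# Higher CRF = smaller file and lower quality (x265 range: 0-51, default 28)
ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4
```

Dropping -crf a few points (say, to 24) keeps more quality at the cost of a larger file.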

How to Replace a Variable in a File Using SED


Want to know the tricks of replacing a variable in a file using the SED command?

This article will give you an overview of replacing a variable value in a file using SED. Before replacing a variable in a file using SED, you need to understand what SED is, the syntax of SED, and how SED works.

I’ll also show how to perform delete operations using SED. This comes after the variable value replacement part, so if that’s what you’re looking for, you can jump directly to it and skip the rest.

So, let’s begin the guide.

 

What is SED?

So, what is SED?

The SED command in Linux stands for Stream Editor. It performs searching, insertion, find-and-replace, and deletion operations. In the Linux world, SED is mainly popular for its find-and-replace functionality.

With the help of SED, coders can edit files without even opening them.

In a nutshell,

  • SED is a text stream editor. It can be used to do find and replace, insertion, and delete operations in Linux.

  • You can modify the files as per your requirements without having to open them.

  • SED is also capable of performing complex pattern matching.

 

Syntax of SED

Here we’ll see the syntax of SED used in a simple string replacement. This will help understand the command better.

So the syntax is:

sed -i 's/old-string/new-string/g' file_name

 

How SED Works

In the syntax, you provide the “new string” that you want to put in place of the “old string”. Of course, the old string itself needs to be entered as well.

Then, provide the file name in place of “file_name”; this is the file in which the old string will be found and replaced.

Here’s a quick example to clear the concept.

Suppose, we have a random text “Welcome to Linux Channel” in a text file called “file.txt”.

Now, we want to replace “Channel” with “Family”. How can we do that?

First, write the below-given command in the terminal to create the file.

cat > file.txt

Then type the following line and press Ctrl+D to save it:

Welcome to Linux Channel

Let’s alter “Channel” with “Family” now. So, go to the next line, and type:

sed -i 's/Channel/Family/g' file.txt

After running the command, to view the file again, type:

cat file.txt

You’ll see “Channel” has been replaced with “Family”. In this way, you can replace a string using the SED command. Let’s learn how to replace a variable using SED, now.
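Tying this back to the article’s title, replacing a shell variable’s value works the same way. The key detail is to use double quotes around the sed expression so the shell expands the variables; single quotes would pass $old through literally. The file name and variable names below are illustrative:

```shell
# Create a sample file to work on
printf 'Welcome to Linux Channel\n' > /tmp/sed-demo.txt

old="Channel"
new="Family"

# Double quotes let the shell substitute $old and $new before sed runs
sed -i "s/$old/$new/g" /tmp/sed-demo.txt

cat /tmp/sed-demo.txt   # prints: Welcome to Linux Family
```

If the variables may contain slashes (say, file paths), pick a different delimiter, e.g. sed -i "s|$old|$new|g".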

How to Create a Shell Script in Linux


Do you want to create a Shell script in your Linux system?

This guide will take you through how to create a shell script using multiple text editors, how to add comments, and how to use Shell variables.

But before heading over to creating a shell script, let’s understand what Shell scripting in Linux is.

What is Shell Scripting in Linux?

So, what’s Shell scripting?

A shell script is a program run by a Linux or Unix shell. Through shell scripting, you can write commands to be executed by the shell.

Lengthy and repetitive commands are usually combined into a simple command script. You can store this script and execute it whenever needed. 

Shell scripting in Linux makes programming effortless.

Ways of Creating a Simple Shell Script in Linux

Creating a simple shell script in Linux is very easy, and you can do it using multiple text editors. This tutorial shows how to create a shell script with two different methods: 1) using the default text editor, and 2) using the Vim text editor.

Method 1: Using the Default Text Editor

To create a shell script using the default text editor, just follow the steps given below.

Step 1: Create a text file with a “.sh” extension, for example “testing.sh”. Then type a simple script into it.
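Any small script will do for this step. A minimal illustrative testing.sh might look like this (the echoed text is arbitrary):

```shell
#!/bin/bash
# testing.sh - a minimal example script
echo "Hello from my first shell script!"
echo "Today is: $(date)"
```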

Step 2: Now, without changing the directory, open the terminal. Using the command below, give the created file executable permission.

chmod +x testing.sh

Step 3: Run the script in the terminal:

./testing.sh

This was a simple technique of creating a shell script in Linux using the default editor. Now, let’s look at the next method.

Method 2: Using the Vim Text Editor Tool

Vim is a text editor that can be used to create shell scripts on a Linux system. In case you don’t have it installed already, use this command to install Vim:

sudo apt install vim

Now follow the steps for creating a shell script using the tool.

Step 1: Open the terminal. Then create the bash file and open it in the editor by typing:

vim testing.sh

After the execution of the command, the editor will open with the new file.

SQLite Extraction of Oracle Tables Tools, Methods and Pitfalls


Introduction

The SQLite database is a wildly successful and ubiquitous software package that is mostly unknown to the larger IT community. Designed and coded by Dr. Richard Hipp, the third major revision of SQLite serves many users in market segments with critical requirements for software quality, which SQLite has met with compliance to the DO-178B avionics standard. In addition to a strong presence in aerospace and automotive, most major operating system vendors (including Oracle, Microsoft, Apple, Google, and Red Hat) include SQLite as a core OS component.

There are a few eccentricities that may trip up users from other RDBMS environments. SQLite is known as a “flexibly-typed” database, unlike Oracle, which rigidly enforces columnar datatypes; character values can be inserted into SQLite columns that are declared integer without error (although check constraints can strengthen SQLite type rigidity, if desired).

While many concurrent processes are allowed to read from a SQLite database, only one process is allowed write privilege at any time (applications requiring concurrent writers should tread carefully with SQLite). There is no network interface, and all connections are made through a filesystem; SQLite does not implement a client-server model. There is no “point in time recovery,” and backup operations are basically an Oracle 7-style ALTER DATAFILE BEGIN BACKUP that makes a transaction-consistent copy of the whole database. GRANT and REVOKE are not implemented in SQLite, which uses filesystem permissions for all access control.

There are no background processes, and newly-connecting clients may find themselves delayed and responsible for transaction recovery, statistics collection, or other administrative functions that are quietly performed in the background in this “zero-administration database.” Some history and architecture of SQLite can be found in audio and video records of Dr. Hipp's discussions.
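The flexible typing described above is easy to demonstrate with the sqlite3 command-line shell (the table and file names below are illustrative, and the snippet skips gracefully if sqlite3 is not installed):

```shell
# Skip gracefully when the sqlite3 CLI is absent
if ! command -v sqlite3 > /dev/null 2>&1; then
    echo "sqlite3 is not installed"
    exit 0
fi

rm -f /tmp/flex-demo.db
sqlite3 /tmp/flex-demo.db <<'SQL'
CREATE TABLE t (n INTEGER);
INSERT INTO t VALUES (42);
-- A character value in an INTEGER column: an error in Oracle, fine in SQLite
INSERT INTO t VALUES ('not a number');
SELECT n, typeof(n) FROM t;
SQL
```

The final SELECT shows one row stored as integer and one stored as text, both living happily in the same INTEGER column.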

Vulnerability Detection and Patching: A Survey Of The Enterprise Environment


Detecting vulnerabilities and managing the associated patching is challenging even in a small-scale Linux environment. Scale things up, and the challenge becomes almost insurmountable. There are approaches that help, but these approaches are unevenly applied.

In our survey, State of Enterprise Vulnerability Detection and Patch Management, we set out to investigate how large organizations handle the dual, linked security concerns of vulnerability detection and patch management.

The results produced interesting insights into the tools that organizations depend on to deal effectively with vulnerability and patch management at scale, how these tools are used, and which restrictions organizations face in their battle against threat actors. Download a copy of the report here.

Vulnerability management is an enterprise responsibility

Before we dive into the results of our survey, let’s take a quick look at why vulnerability management operations matter so much in large organizations.

Vulnerabilities are widespread and a major cybersecurity headache. In fact, vulnerabilities are such a critical problem that laws and regulations are in place to ensure that covered organizations adequately perform vulnerability management tasks – because the failure to do so can hurt a company’s customers.

Each industry has different rules that apply to it, with organizations that handle personal data, such as healthcare records and financial services firms, operating under the strictest rules. This has an impact on day-to-day vulnerability management operations – some organizations must act much faster and more thoroughly than others.

This is one of the points we explored in the survey, trying to understand how different industry compliance requirements affect vulnerability operations on the ground.

The survey

Early in 2021, we kicked off a survey with the intention to study three key factors in vulnerability and patch management operations. We examined patch deployment practices, how maintenance windows are handled, and tried to get a view into the overall level of security awareness of the organizations that responded.

The survey was advertised publicly to IT professionals around the world and it continues to run, even though we have published the initial results.

Live Patching Requires Reproducible Builds – and Containers Are the Answer


We know that live patching has real benefits because it significantly reduces the downtime associated with frequent patching. But live patching is relatively difficult to achieve without causing other problems, and for that reason it is not implemented as frequently as it could be. After all, the last thing sysadmins want is a live patch that crashes a system.

Reproducible builds are one of the tools that can help developers to implement live patching consistently and safely. In this article, I explain why reproducible builds matter for live patching, what exactly reproducible builds are, and how containers are coming to the rescue.

Live patching: a key threat management tool

Patching is a critical part of systems maintenance because patching fixes faulty and buggy code. More importantly, security teams rely on patching to plug security holes, and there is a real urgency to it. Waiting for a convenient maintenance window to patch is risky because it leaves an opportunity for hackers to take advantage of an exploit.

This creates a difficult conundrum: maintain high availability but run a security risk, or patch frequently but end up with frustrated stakeholders. Live patching bridges that gap. With live patching, the offending code is swapped out while a process is actively running, without restarting the application or service that depends on that process.

Implementing live patching isn’t easy

Live patching is not that straightforward to accomplish – the drop-in code must “fit” in a like-for-like manner, or all sorts of unwanted things can happen. Get it wrong, and the application – or entire server – will crash.

The code behind a running process usually comes from a binary executable file – a machine-readable block of code compiled from source code. A kernel, for example, has thousands of source files all compiled into a few binaries.

With live patching, the live patch code must fit in at an exact level. Yes, the binary file containing the patch code will be different from the binary file containing the bad code. Nonetheless, the new code must slot into place precisely and must depend on the same version of imported libraries. The live patch code must also be compiled using the same compiler options and flags. Bit endianness matters too – the binary file must be ordered in exactly the same way.

In principle, all this is achievable – but in practice, it is a challenge. For example, day-to-day system updates often impact libraries. Those libraries may then differ slightly, in turn producing slightly different binaries when code is compiled against them.

An Abridged Guide to the Enterprise Linux Landscape

Whether you are welcoming CentOS Stream or looking for alternatives, the CentOS community's recent decision to focus on CentOS Stream has forced many technical leaders to rethink their Enterprise Linux strategy. Behind that decision, the business landscape around Linux has shifted and expanded since its enterprise debut in the late 90s, when IBM invested $1 billion in its development.

Today, Linux comes in every shape and size imaginable, with the kernel running on tiny low-power computers and IoT devices, mobile phones, tablets, and laptops, all the way up to midrange and high-power mainframe servers.

Cutting through that expansive selection to understand which Linux distributions truly align with the needs of a business can lead to more frictionless deployments and successful execution while minimizing waste in maintenance cycles and optimizing overall cost.

This abridged guide to the Enterprise Linux landscape can give businesses an overview of which flavor (or flavors) of Linux will most adequately match their use cases.

For those looking for a more comprehensive guide, be sure to check out the Decision Maker’s Guide to Enterprise Linux.

Finding the Right Linux Flavor

Committing to a flavor can introduce many concerns. Beyond managing the deployments host-by-host, administrators must also consider the ecosystem components available to support the implementation at scale.

What mechanisms will be available for automatic patching? Can you optimize bandwidth by mirroring the distribution's repositories? Is remote desktop a concern? What about kernel version requirements? Linux kernel 4 contains optimizations that translate directly into dollars saved on cloud deployments; can you take advantage of them?

Are you looking at a container strategy, thinking of deploying your apps into Kubernetes or other multi-cloud setups? What about options for embedded Linux?

Nowadays there’s a preferred flavor of Linux for each of these concerns. A single flavor of Linux is really the Linux kernel surrounded by a curated suite of other free software. That other free software is what makes one flavor of Linux distinct from another.
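On a host you already run, you can see at a glance which flavor you are dealing with. A quick sketch (assumes a distribution that ships the standard /etc/os-release file, which all mainstream flavors do):

```shell
# Identify the distribution flavor and kernel version of the current host.
. /etc/os-release                 # defines NAME, ID, VERSION_ID, ...
echo "Flavor: $NAME (id: $ID)"
uname -r                          # kernel version, relevant for feature checks
```

Knowing the flavor and kernel version is the starting point for answering the questions above about patching mechanisms, repository mirroring, and kernel requirements.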

Systemd Service Strengthening

Introduction

In an age where hacker attacks are a daily occurrence, it is of fundamental importance to minimize the attack surface. Containerization is probably the best way to isolate a service exposed to the public, but this is not always possible, for several reasons. Think, for example, of a legacy application built around systemd: it could make the most of the capabilities provided by a systemd-based operating system, be managed via a systemd unit, automatically pull updates using a systemd timer, and so on.

For this reason, we are going to explain how to improve the security of a systemd service. But first, we need to step back for a moment. With its latest releases, systemd has implemented some interesting security features, especially around sandboxing. In this article we will show, step by step, how to strengthen services using specific directives, and how to check them with the tools the systemd suite provides.

Debugging

Systemd provides an interesting tool named systemd-analyze. This command analyzes the security and sandboxing settings of one or more specified services. It checks various security-related service settings, assigning each a numeric "exposure level" value depending on how important the setting is. It then calculates an overall exposure level for the whole unit, an estimate in the range 0.0…10.0 that tells us how exposed a service is security-wise.

Systemd Analyze

This allows us to check, step by step, the improvements applied to our systemd service. As you can see, several services are marked as UNSAFE, probably because not all applications apply the security features that systemd provides.

Getting Started

Let's start from a basic example. We want to create a systemd unit to start the command python3 -m http.server as a service:

[Unit]
Description=Simple Http Server
Documentation=https://docs.python.org/3/library/http.server.html

[Service]
Type=simple
ExecStart=/usr/bin/python3 -m http.server
ExecStop=/bin/kill -9 $MAINPID

[Install]
WantedBy=multi-user.target

Save the file and place it in the systemd unit directory used by your distribution (typically /etc/systemd/system/ for administrator-created units).

By checking the security exposure through systemd-analyze security we get the following result:
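From that baseline, strengthening typically means adding sandboxing directives to the [Service] section of the unit. The fragment below is a sketch: the directives are standard systemd options, but the exact selection is illustrative, not the article's final configuration, and should be tuned to what your service actually needs:

```ini
[Service]
Type=simple
ExecStart=/usr/bin/python3 -m http.server
# Sandboxing directives (illustrative selection; tune per service):
ProtectSystem=strict        # mount /usr, /boot, /etc read-only for this unit
ProtectHome=yes             # hide /home, /root and /run/user from the unit
PrivateTmp=yes              # give the unit its own private /tmp and /var/tmp
PrivateDevices=yes          # restrict access to physical devices
NoNewPrivileges=yes         # forbid privilege escalation via setuid/setgid
```

After editing the unit, run systemctl daemon-reload, restart the service, and re-run systemd-analyze security to watch the exposure level drop.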

eBPF for Advanced Linux Infrastructure Monitoring

A year has passed since the pandemic left us spending the better part of our days sheltering inside our homes. It has been a challenging time for developers, sysadmins, and entire IT teams, who began to juggle the task of monitoring and troubleshooting an influx of data within their systems and infrastructures as the world was forced online. To do their jobs properly, they have increasingly turned to free, open-source technologies like Linux, especially Ops professionals and sysadmins in charge of maintaining growing, complex environments. Engineers, too, are using more open-source technologies, largely because of the flexibility and openness they offer versus commercial offerings that come with high prices and stringent feature lock-in.

One emerging technology in particular - eBPF - has made its appearance in multiple projects, both commercial and open-source. Before discussing the community surrounding eBPF and its growth during the pandemic, it’s important to understand what it is and how it’s being utilized. eBPF, or extended Berkeley Packet Filter, was originally introduced as BPF back in 1992 in a paper by Lawrence Berkeley Laboratory researchers, as a rule-based mechanism to filter and capture network packets. Filters would be implemented to run inside a register-based virtual machine (VM), which itself would exist inside the Linux kernel. After several years of inactivity, BPF was extended to eBPF, featuring a full-blown VM to run small programs inside the Linux kernel. Since these programs run inside the kernel, they can be attached to a particular code path and executed whenever that path is traversed, making them perfect for building packet filtering, performance analysis, and monitoring applications.

Originally, it was not easy to create eBPF programs, as the programmer needed to know an extremely low-level language. However, the community around the technology has evolved considerably, creating tools and libraries that simplify and speed up developing and loading an eBPF program into the kernel. This was crucial for creating a large number of tools that can trace system and application activity down to a very granular level. The image that follows demonstrates this, showing the sheer number of tools that exist to trace various parts of the Linux stack.

How to set up a CrowdSec multi-server installation

Introduction

CrowdSec is an open-source and collaborative security solution built to secure Internet-exposed Linux services, servers, containers, or virtual machines with a server-side agent. It is a modernized version of Fail2ban, which was a great source of inspiration to the project founders.

CrowdSec is free (under an MIT License) and its source code is available on GitHub. The solution leverages a log-based IP behavior analysis engine to detect attacks. When the CrowdSec agent detects an aggression, it offers different types of remediation to deal with the IP behind it (access prohibition, captcha, 2FA authentication, etc.). The report is curated by the platform and, if legitimate, shared across the CrowdSec community so users can also protect their assets from this IP address.

A few months ago, we added some interesting features to CrowdSec with the v1.0.x release. One of the most exciting is the ability of a CrowdSec agent to act as an HTTP REST API to collect signals from other CrowdSec agents. It is the responsibility of this special agent to store and share the collected signals. We will call this special agent the LAPI server from now on.

Another feature worth noting is that mitigation no longer has to take place on the same server as detection. Mitigation is done using bouncers, which rely on the HTTP REST API served by the LAPI server.

Goals

In this article we’ll describe how to deploy CrowdSec in a multi-server setup with one server sharing signals.

CrowdSec Goals Infographic

Both server-2 and server-3 are meant to host services. You can take a look at our Hub to see which services CrowdSec can help you secure. Last but not least, server-1 is meant to host the following local services:

  • the local API needed by bouncers

  • the database, fed by the three local CrowdSec agents and the online CrowdSec blocklist service. As server-1 serves the local API, we will call it the LAPI server.

We chose a PostgreSQL backend for the CrowdSec database in order to allow high availability. This topic will be covered in future posts. If you are fine without high availability, you can skip step 2.

Develop a Linux command-line Tool to Track and Plot Covid-19 Stats

It’s been over a year and we are still fighting the pandemic in almost every aspect of our lives. Thanks to technology, we have various tools and mechanisms to track Covid-19 related metrics, which help us make informed decisions. This introductory-level tutorial discusses developing one such tool from scratch, entirely at the Linux command line.

We will start by introducing the most important parts of the tool: the APIs and the commands. We will use two APIs - the COVID19 API and the Quickchart API - and two key commands - curl and jq. In simple terms, the curl command is used for data transfer and the jq command to process JSON data.
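If jq is new to you, here is a minimal illustration (made-up data) of what it does:

```shell
# jq reads JSON on stdin and extracts whatever the filter expression names;
# -r prints the raw value without JSON quoting.
echo '{"Global": {"NewConfirmed": 561661}}' | jq -r '.Global.NewConfirmed'
# prints: 561661
```

The same pattern - pipe JSON into jq with a filter - is all the tool below needs.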

The complete tool can be broken down into two key steps:

1. Fetching (GET request) data from the COVID19 API and piping the JSON output to jq to extract only the global data (or, similarly, country-specific data).

$ curl -s --location --request GET 'https://api.covid19api.com/summary' | jq -r '.Global'

{
  "NewConfirmed": 561661,
  "TotalConfirmed": 136069313,
  "NewDeaths": 8077,
  "TotalDeaths": 2937292,
  "NewRecovered": 487901,
  "TotalRecovered": 77585186,
  "Date": "2021-04-13T02:28:22.158Z"
}

2. Storing the output of step 1 in variables and calling the Quickchart API with those variables to plot a chart, then piping the JSON output to jq to filter out only the link to our chart.
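The variable assignments themselves are straightforward; one way to do it (a sketch, using a saved copy of the step 1 JSON in place of a live API call) is:

```shell
# Capture each field of the step 1 summary in a shell variable.
# In the real tool, $summary would come from the curl call in step 1.
summary='{"NewConfirmed":561661,"TotalConfirmed":136069313,"NewDeaths":8077,"TotalDeaths":2937292,"NewRecovered":487901,"TotalRecovered":77585186,"Date":"2021-04-13T02:28:22.158Z"}'
newConf=$(echo "$summary" | jq -r '.NewConfirmed')
totConf=$(echo "$summary" | jq -r '.TotalConfirmed')
newDeath=$(echo "$summary" | jq -r '.NewDeaths')
totDeath=$(echo "$summary" | jq -r '.TotalDeaths')
newRecover=$(echo "$summary" | jq -r '.NewRecovered')
totRecover=$(echo "$summary" | jq -r '.TotalRecovered')
datetime=$(echo "$summary" | jq -r '.Date')
```

With those variables set, the Quickchart call can substitute them directly into its JSON payload.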

$ curl -s -X POST \
       -H 'Content-Type: application/json' \
       -d '{"chart": {"type": "bar", "data": {"labels": ["NewConfirmed ('''${newConf}''')", "TotalConfirmed ('''${totConf}''')", "NewDeaths ('''${newDeath}''')", "TotalDeaths ('''${totDeath}''')", "NewRecovered ('''${newRecover}''')", "TotalRecovered ('''${totRecover}''')"], "datasets": [{"label": "Global Covid-19 Stats ('''${datetime}''')", "data": ['''${newConf}''', '''${totConf}''', '''${newDeath}''', '''${totDeath}''', '''${newRecover}''', '''${totRecover}''']}]}}}' \
       https://quickchart.io/chart/create | jq -r '.url'

https://quickchart.io/chart/render/zf-be27ef29-4495-4e9a-9180-dbf76f485eaf

That’s it! Now we have our data plotted out in a chart:

Global Covid-19 Stats chart

FSF’s LibrePlanet 2021 Free Software Conference Is This Weekend, Online Only

On Saturday and Sunday, March 20th and 21st, 2021, free software supporters from all over the world will log in to share knowledge and experiences, and to socialize with others within the free software community. This year’s theme is “Empowering Users,” and the keynote speakers will be Julia Reda, Nathan Freitas, and Nadya Peek. Free Software Foundation (FSF) associate members and students attend gratis at the Supporter level.

You can see the schedule and learn more about the conference at https://libreplanet.org/2021/, and participants are encouraged to register in advance at https://u.fsf.org/lp21-sp.

The conference will also include workshops, community-submitted five-minute Lightning Talks, Birds of a Feather (BoF) sessions, and an interactive “exhibitor hall” and “hallway” for socializing.