SFTP Port Forwarding: Enabling Suppressed Functionality

Introduction

The SSH protocol enables three major classes of remote server activities: a) command execution (including a login shell), b) network forwarding and manipulation, and c) file transfer.

The OpenSSH maintainers have determined that sftp and scp have no legitimate use for port forwarding (via the -L and -R options). A flag to explicitly disable these features is unconditionally passed to the child SSH executable during file transfers with these utilities.

There may be users with a legitimate need for these features. An obvious subset are penetration testers tasked to verify that this capability is explicitly disabled on public SFTP servers.

Below are two techniques to enable these suppressed features, by either modifying strings in the sftp binary itself, or by redirection through shells that are able to easily edit the command line. Depending upon the capabilities of the platform, either technique might be required to achieve this goal.
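
As a taste of the shell-redirection approach, the hypothetical sketch below (not necessarily the exact procedure developed later) leans on sftp's -S option, which names the program to run in place of ssh. The wrapper name, the forwarded port, and the path to the real ssh binary are all assumptions for illustration.

#!/bin/sh
# ssh-fwd: hypothetical wrapper, invoked as:  sftp -S ./ssh-fwd user@host
# Rebuild the argument list that sftp hands to ssh, discarding the option
# that suppresses forwarding (matched loosely, however it is spelled),
# then prepend a local forward of our own before calling the real ssh.
for a in "$@"; do
    shift
    case "$a" in
        *ClearAllForwardings*) ;;        # drop the suppressing option
        *) set -- "$@" "$a" ;;           # keep everything else
    esac
done
exec /usr/bin/ssh -L 8080:127.0.0.1:80 "$@"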

Suppression Details

To begin, it is important to locate running processes of interest. The shell function below will reveal PIDs that match a shell pattern (and note this is not a regex). This runs under Debian dash (and most other common shells) and relies on BSD options to ps:

pps () { local a= b= c= IFS=$'\r'; ps ax | while read -r a
    do [ "$b" ] || c=1; for b; do case "$a" in *"$b"*) c=1;;
        esac; done; [ "$c" ] && printf '%s\n' "$a" && c=; done; }
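
For example, a session's processes might be located with an invocation such as:

pps sftp ssh

This should print the ps header line, followed by any process entries containing either pattern.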

A conventional SFTP session is launched, in order to examine the processes associated with it:

$ id
uid=1001(aturing) gid=1001(aturing) groups=1001(aturing)...

$ sftp aturing@sftp.victimandum.com
aturing@sftp.victimandum.com's password:
Connected to sftp.victimandum.com.
sftp>

We assume above that the local UNIX user has an account on the remote SFTP server of the same username.

Once the session is running, a local process search for the username reveals the child SSH process that is spawned by SFTP:

SoCal Linux Expo Back For 20th Anniversary

Now in its 20th year of supporting and promoting the FOSS community, SCaLE 20x – the Southern California Linux Expo – will be held at the Pasadena Convention Center March 9-12, 2023.

This 4-day annual event brings together the vibrant Open Source user community, tech industry leaders, developers, users and many more.  Session track themes have included security, developer, embedded, medical and legal to name a few.  The expert speakers have never failed to impress and inform.  Your biggest challenge will likely be trying to pick which session to attend.  One certainty: Saturday’s keynote by Arun Gupta will be standing room only.

Since my first SCaLE, way back to 3x, the expo floor has been my favorite hangout.  The expansive exhibit hall provides an opportunity to meet the people behind a favorite distribution or application.  There are compelling demonstrations of Open Source-based solutions, educational offerings and companies looking for top talent.  Oh, and SCaLE exhibitors always have the best swag.

Would you like a glimpse of the future of Linux/FOSS? SCaLE is a family-friendly event that welcomes all ages. SCaLE: The Next Generation, the track focused on kids and the work they do in OSS, is back for 20x. All sessions are delivered by K-12 students and highlight interesting work or projects these students have been working on. It also includes hands-on activities for younger attendees. Topics are expected to include security, video editing, big data and more. These young folks will leave you feeling confident in the future of FOSS.

Other co-located events include Ceph Days SoCal, Kubernetes Community Day, DevOpsDayLA, PostgreSQL @ SCaLE, SCaLE Kids and Embedded Apprentice Linux Engineer (E-ALE). Always popular, the Hands-On Beginner Linux Training is here for 20x. Book your spot early for this one.

Monitoring Oracle Servers With Checkmk

Databases are essential for many IT processes. Their performance and reliability depend on many factors, and it makes sense to use a dedicated tool that helps you stay on top of things. Monitoring your database with an external tool helps you identify performance issues proactively, but there are many factors to consider. With the wrong approach, you run the risk of missing valuable information and can also waste a lot of time configuring your database monitoring.

In this tutorial, I will give a quick guide on how to monitor Oracle Database with Checkmk, a universal monitoring tool for all kinds of IT assets. Oracle Database is one of the most common database management systems (DBMS) for relational databases, and Checkmk comes with great preconfigured Oracle monitoring, so it will only take you a few minutes to get started. This will not only ensure the best performance of your databases, but also give you the option to find optimization opportunities.

Preconditions

You need a Checkmk site up and running. For this article, I am using the Checkmk Free Edition version 2.1.0p19, which I installed on Ubuntu server (version 20.04). Checkmk runs on Linux, including RHEL, CentOS, Debian, and others, as well as in a container, or as a virtual appliance. You can download the latest Checkmk version for all platforms from the official Checkmk website and follow this video tutorial to take your first steps.

In this tutorial, I will use a simple Oracle server as an example. In my case, my Oracle database version 19.0 runs on a hardware server, and I use Rocky Linux version 9.0 as my operating system. I will show you how to configure and install the Checkmk agent. However, Checkmk can also monitor remote databases without the need to install an agent.

You don't need any previous experience with Oracle monitoring, as Checkmk takes over the collection of the most important monitoring services and also sets threshold values for warnings and critical states. However, you need access rights to create user accounts for the database you want to monitor; you will do this in the first step.

Step 1: Creating an Oracle user account for the monitoring

First, you need to create a user account that Checkmk will use to query the monitoring data from your database. In my case, I am using SQL*Plus and creating the user through the terminal. The procedure differs depending on which Oracle environment and tool you are using. You can read more details about this in the Oracle documentation.
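
As a rough illustration of this step from a shell, the user can be created with SQL*Plus in a heredoc; the username, password, and grants below are placeholders, so follow whatever privileges the Checkmk Oracle plugin documentation actually calls for:

# Hypothetical example: create a dedicated monitoring user from the shell.
# Run as an OS user with SYSDBA access; adjust the name, password, and grants.
sqlplus -S / as sysdba <<'EOF'
CREATE USER checkmk IDENTIFIED BY "a-strong-password";
GRANT CREATE SESSION TO checkmk;
GRANT SELECT ANY DICTIONARY TO checkmk;
EOF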

Fault-Tolerant SFTP scripting – Retry Failed Transfers Automatically

Introduction

The whole of modern networking is built upon an unreliable medium. Routing equipment has free license to discard, corrupt, reorder, or duplicate data which it forwards. The understanding of the IP layer in TCP/IP is that there are no guarantees of accuracy. No IP network can claim to be 100% reliable.

The TCP layer acts as a guardian atop IP, ensuring data that it produces is correct. This is achieved with a number of techniques that sometimes purposely lose data in order to determine network limits. As most might know, TCP provides a connection-based network with guaranteed delivery atop an IP connectionless network that can and does discard traffic at will.

How curious it is that our file transfer tools are not similarly robust in the face of broken TCP connections. The SFTP protocol resembles both its ancestors and peers in that no effort is made to recover from TCP errors that cause connection closure. There are tools to address failed transfers (reget and reput), but these are not triggered automatically in a regenerated TCP session (those requiring this property might normally turn to NFS, but this requires both privilege and architectural configuration). Users and network administrators alike might be rapt with joy should such tools suddenly become pervasive.

What SFTP is able to provide is a return status, an integer that signals success when its value is zero. It does not return status by default for file transfers, but only does so when called in batch mode. This return status can be captured by a POSIX shell, and the transfer retried when the status is non-zero. This check can even be done on Windows with Microsoft's port of OpenSSH with the help of Busybox (or even PowerShell, with restricted functionality). The POSIX shell script is deceptively simple, but uncommon. Let's change that.
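
To make the idea concrete before the full treatment, here is a minimal sketch (not the complete implementation discussed below) that retries a batch-mode transfer until sftp exits with zero, capped at a fixed number of attempts:

#!/bin/sh
# Assumes a batch file "put.sftp" containing the transfer commands,
# e.g. a single line:  put archive.tar.gz
tries=0
until sftp -b put.sftp aturing@sftp.victimandum.com
do
    tries=$((tries + 1))
    if [ "$tries" -ge 5 ]; then
        echo "transfer still failing after $tries attempts" >&2
        exit 1
    fi
    sleep 10    # pause before regenerating the TCP session
done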

Failure Detection with the POSIX Shell

The core implementation of SFTP fault tolerance is not particularly large, but batch mode assurance and standard input handling add some length and complexity, as demonstrated below in a Windows environment.

Installing LibreOffice On Slackware 15

Slackware has been one of my favorite GNU/Linux distributions for a very long time, especially since Version 8.0 came out, many moons back. The reason is that it embodies the "KISS" method of designing a distribution. "KISS" means, "Keep It Simple, Stupid!", and that's what the Slackware team has done since the distribution's inception. When Slackware 15.0 came out in February 2022, I celebrated like other "Slackers", and I'd been running the beta and release candidates (the then-"Slackware-current") since early 2021.

I've even used Slackware at work in a "Microsoft shop". Yes, it can be done, and it can be done well. To do so, I needed something compatible with Microsoft Office file formats. OpenOffice.org was the ticket back then, even in its Beta Build 638c days (yes, I've been using it for a long time!), and the tradition continues today, 21 years later, with LibreOffice. It is this office productivity suite that really makes using Free Software platforms (e.g. GNU/Linux, the BSDs) on general-purpose business computers possible.

Sadly, Slackware didn't include OpenOffice.org back then, and it doesn't include LibreOffice now. This is speculation on my part, but several years ago, Patrick Volkerding stopped including GNOME because it was too much of a pain to package and distribute for a project that doesn't have the resources of Red Hat, Debian, or Ubuntu. I suspect this may also be true for LibreOffice. Also, the binary packages from LibreOffice come in RPM and DEB format. This choice by the LibreOffice developers is quite understandable, as Red Hat- and Debian-based distros are by far the dominant presence on personal computers. That still leaves us "Slackers" out in the cold, though.

I realize that nowadays there are "SlackBuilds", analogous to BSD's "Ports" collection, and the people who maintain those are definitely to be thanked and appreciated (and I do). The reality is that those aren't always updated to the latest versions of applications, given time constraints. Remember that Slackware is a relatively small all-volunteer project, like OpenBSD. Also, I prefer to stay as up-to-date as possible.

So, what to do?

Fortunately, there is a way to install a fully-functional, latest-greatest, LibreOffice on our Slackware 15.0 computers and use it. The best part is that it's not difficult to do...at least, not now that you have this handy-dandy HOW-TO document to follow.
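
One commonly used approach (not necessarily the exact procedure this HOW-TO goes on to describe) is to convert the official RPM packages with Slackware's rpm2tgz tool and install the results with installpkg; the version number below is only an example:

# Hypothetical sketch: convert the LibreOffice RPMs to Slackware
# packages and install them (run the installpkg step as root).
tar xf LibreOffice_7.5.0_Linux_x86-64_rpm.tar.gz
cd LibreOffice_7.5.0*_Linux_x86-64_rpm/RPMS
for rpm in *.rpm; do rpm2tgz "$rpm"; done
installpkg *.tgz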

SQLite for Secrecy Management – Tools and Methods

Introduction

Secrets pervade enterprise systems. Access to critical corporate resources will always require credentials of some type, and this sensitive data is often inadequately protected. It is ripe both for erroneous exposure and malicious exploitation. Best practices are few, and often fail.

SQLite is a natural storage platform, approved by the U.S. Library of Congress as a long-term archival medium. “SQLite is likely used more than all other database engines combined.” The software undergoes extensive testing, as it has acquired DO-178B certification for reliability due to the needs of the avionics industry, and is currently used in the Airbus A350's flight systems. The need for SQLite emerged from a damage control application developed for the U.S. destroyer DDG-79 Oscar Austin. An Informix database was running under HP-UX on this vessel, and during ship power losses, the database would not always restart without maintenance, presenting physical risks for the crew. SQLite is an answer to that danger; when used properly, it will transparently recover from such crashes. Despite a small number of CVEs patched in CentOS 7 (CVE-2015-3414, CVE-2015-3415, CVE-2015-3416, CVE-2019-13734), few databases can match SQLite's reliability record, and none that are commercially prevalent.

SQLite specifically avoids any question of access control. It does not implement GRANT and REVOKE as found in other databases, and delegates permissions to the OS. Adapting it for sensitive data always requires strong security to be implemented upon it.

The free releases of CyberArk Conjur and Summon build a basic platform for secrecy management. These tools are somewhat awkward, as conjur requires a running instance of PostgreSQL, which brings an attack surface that is far larger than hoped. Slaving an enterprise to a free, centralized instance of conjur and PostgreSQL is a large risk, as CyberArk's documentation attests.

CyberArk summon, however, can be configured with custom backend providers, which have simple interfacing requirements. SQLite is a fit both as a summon backend and as a standalone secrecy provider.
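
A summon provider is simply an executable that receives a secret's identifier as its first argument and prints the secret's value on standard output. A minimal SQLite-backed sketch might look like the following; the database path and table schema are assumptions, and a production version would need to handle quoting in the identifier:

#!/bin/sh
# Hypothetical summon provider backed by SQLite.
# summon invokes this with the secret id as $1 and reads stdout.
db=/etc/summon/secrets.db           # assumed database location
secret=$(sqlite3 "$db" "SELECT value FROM secrets WHERE id = '$1';") || exit 1
[ -n "$secret" ] || exit 1          # unknown id: signal failure to summon
printf '%s\n' "$secret"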

Pwndrop on Linode

When I first ran across PwnDrop, I was intrigued by what the developers had in mind for it. For instance, if you're a white-hat hacker looking to share exploits safely with your client, you might use a service like PwnDrop. If you're a journalist communicating with, well, just about anyone who is trying to keep their identity secret, you might use a service like PwnDrop.

In this tutorial, we're going to look at how easy it is to set up and use in just a few minutes.

Prerequisites for PwnDrop in Docker

First things first, you’ll need a Docker server set up. Linode has made that process very simple and you can set one up for just a few bucks a month and can add a private IP address (for free) and backups for just a couple bucks more per month.

Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: https://www.linode.com/docs/guides/dns-manager/

You’ll also want a reverse proxy set up on your Docker Server so that you can do things like route traffic and manage SSLs on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: https://www.youtube.com/watch?v=7oUjfsaR0NU

Once you’ve got your Docker server set up, you can begin the process of setting up PwnDrop on that server.

There are 2 primary ways you can do this:

  1. In the command line via SSH.
  2. In Portainer via the Portainer dashboard.

We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.

Head over to http://your-server-ip-address:9000 and get logged into Portainer with the credentials we set up in our previous post/video.

On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.

This will bring up a page where you'll enter the name of the stack. Below that, you can copy and paste the following:

FileRun on Docker

You may want to set up a file server like FileRun for any number of reasons. The main reason, I would think, would be so you can have your own Google Drive alternative that is under your control instead of Google's.

FileRun claims to be "Probably the best File Manager in the world with desktop Sync and File Sharing," but I think you'll have to be the judge of that for yourself.

Just to be completely transparent here, I like FileRun, but there is a shortcoming that I hope they will eventually fix. That shortcoming is that there are some, in my opinion, very important settings that are locked away behind an Enterprise License requirement.

That aside, I really like the ease-of-use and flexibility of FileRun. So let's take a look at it.

Prerequisites for FileRun in Docker

First things first, you’ll need a Docker server set up. Linode has made that process very simple and you can set one up for just a few bucks a month and can add a private IP address (for free) and backups for just a couple bucks more per month.

Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: https://www.linode.com/docs/guides/dns-manager/

You’ll also want a reverse proxy set up on your Docker Server so that you can do things like route traffic and manage SSLs on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: https://www.youtube.com/watch?v=7oUjfsaR0NU

Once you’ve got your Docker server set up, you can begin the process of setting up FileRun on that server.

There are 2 primary ways you can do this:

  1. In the command line via SSH.
  2. In Portainer via the Portainer dashboard.

We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.

Head over to http://your-server-ip-address:9000 and get logged into Portainer with the credentials we set up in our previous post/video.

On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.

This will bring up a page where you'll enter the name of the stack. Below that, you can copy and paste the following:

Static Site Generation with Hugo

Hugo is quickly becoming one of the best ways to create a website. Hugo is a free and open source static website generator that allows you to build beautiful static websites with ease. Static websites are awesome because they take very few system resources to host. Compared to something like WordPress, which relies on databases, PHP, and more, static sites are simply HTML, CSS, and the occasional line of JavaScript. So static sites are perfect for simple blogs, documentation sites, portfolios, and more.
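
To give a feel for how little is involved, here is a rough quickstart sketch; the site name and theme are arbitrary examples, and on older Hugo releases the configuration file is named config.toml rather than hugo.toml:

# Create a new site, add an example theme, write a first post, and
# preview the result locally at http://localhost:1313
hugo new site mysite
cd mysite
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo "theme = 'ananke'" >> hugo.toml
hugo new posts/my-first-post.md
hugo server -D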

What is a Static Site?

Static websites are simply sites that consist of basic HTML and CSS files for each individual page. A static site can be easily created and published, as server requirements are small and very little server-side software is needed to publish them. You don’t need to know programming or database design to build a static website.

In the early days of the internet, most everything was static, but sites were bland and poorly designed. Also, if you wanted to make a site-wide change, such as a link in the footer, you’d need to go through every file for your website and make changes on a page-by-page basis. Maintaining a huge number of fixed pages is impractical without automated tools. However, with modern web template systems, this has changed.

Over the past few years, static sites have again become popular. This is due to advances in programming languages and libraries. Now, with static site generators, you can host blogs, large websites, and more, with the ability to make site-wide changes on the fly.

Advantages of Static

Static files are lightweight, making the site faster to load. Cost efficiency is another vital reason why companies tend to migrate to static sites. Below are some of the advantages of static sites over traditional sites based on content management systems and server-side scripting, like PHP, MySQL and others.

Speed

Because the pages are rendered ahead of time, there is little that can slow down page loading: your site’s content is served as an entirely pre-rendered web page, whereas on traditional sites the web page is built separately for every visitor. Better speed provides a better SEO ranking and better site performance as a whole.

Flexibility

Static websites have multiple options in terms of using frameworks for rendering. You’re free to choose any programming language and framework, from Ruby and JavaScript to Vue and React. This makes the build and maintenance smoother than for traditional sites. Also, static sites have fewer dependencies, so you can easily leverage your cloud infrastructure and migrate with little effort.

How to Use Sar (System Activity Reporter)

Overview

In this article, we're going to take a look at the System Activity Reporter, also known as the sar command. This command will help us with seeing a historical view of the performance of our server. You'll see examples of installing it, running it manually, and more. Let's get started!

Prerequisites

Before we do get started, there are a few quick things to mention. If your server is a production server, then I hope you've already installed all available updates. There are already articles within Linode's documentation that cover updating packages.

To get started, we'll first need to install the sar command, which is available in the sysstat package:

sudo apt update
sudo apt install sysstat

Installation of the sysstat package should be fairly fast.

However, having the sysstat package installed by itself isn't enough - we'll need to configure its defaults. We can use the nano text editor, for example, to edit the /etc/default/sysstat file:

sudo nano /etc/default/sysstat

The first change to make within this file is to enable stat collection:

ENABLED="true"

Save the file, and then we're all set with that file in particular.

Optionally, you could consider editing other configuration files that configure sar:

  • /etc/cron.d/sysstat

  • /etc/sysstat/sysstat

The first configures how often stats are collected; the second gives you even more options to fine-tune sar, which might be useful. Feel free to take a look at it.
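
For reference, on Debian and Ubuntu the collection schedule lives in /etc/cron.d/sysstat. The stock entry looks roughly like the first line below, and tightening the interval is just a matter of editing the cron schedule (the two-minute variant is only an illustration):

# Default collection entry in /etc/cron.d/sysstat (approximately)
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1

# Example change: collect every 2 minutes instead of every 10
*/2 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1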

The data file

List the contents of the /var/log/sysstat/ directory:

ls -l /var/log/sysstat/

sar collects data every ten minutes by default, so if ten minutes hasn't passed since you enabled stat collection, then wait a bit, and the data file should appear.

Running the sar command

Here's an example of sar in action:

sudo sar -u -f /var/log/sysstat/saNUM

Note: NUM in the example is a placeholder for the number next to your data file, which will actually be the same as the date, specifically the day of the month (for example, sa22 corresponds to the 22nd of the current month). The output will give you the overall performance for your server at a given time.
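
As a further illustration, the -s and -e options restrict the report to a time window; for example, to look at CPU data for the morning of the 22nd (the day number is just an example):

sudo sar -u -f /var/log/sysstat/sa22 -s 09:00:00 -e 12:00:00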

Continuing, let's look at a simpler example:

sar -u

This should give you the same output as before, but without having to specify the path to the data file.

Yet another example to show you is the

How to Use the rsync Command

Overview

rsync is one of my favorite utilities on the Linux command line, and block storage is one of my favorite features on Linode's platform, so in this article I get to combine the two: I'm going to show you how to use rsync to copy data from one server to another, in the form of a backup. What's really cool is that this example will utilize block storage.

Note: I'll be using the Nextcloud server that was set up in a previous article, but it doesn't really matter if it's Nextcloud - you can back up any server that you'd like.

Setting up our environment

On the Linode dashboard, I created an instance named "backup-server" to use as the example here. On your side, be sure to have a Linode instance ready to go in order to have a destination to copy your files to. Also, create a block storage volume to hold the backup files. If you don't already have block storage set up, you can check out other articles and videos on Linode's documentation and YouTube channel respectively, to see an overview of the process.

Again, in the examples, I'm going to be backing up a Nextcloud instance, but feel free to back up any server you may have set up - just be sure to update the paths accordingly to ensure everything matches your environment. In the Nextcloud video, I set up the data volume onto a block storage volume, so block storage is used at both ends.

First, let's create a new directory where we will mount our block storage volume on the backup server. I decided to use /mnt/backup-data:

sudo mkdir /mnt/backup-data

Since the backup server I used in the example stores backups for more than one Linode instance, I decided to have each server back up to a sub-directory within the /mnt/backup-data directory.

sudo mkdir /mnt/backup-data/nextcloud.learnlinux.cloud

Note: I like to name the sub-directories after the fully qualified domain name for that instance, but that is not required.

Continuing, let's make sure our local user (or a backup user) owns the destination directory:

sudo chown -R jay:jay /mnt/backup-data/nextcloud.learnlinux.cloud

After running that command, the user and group you specify will become the owner of the target directory, as well as everything underneath it (due to the -R option).

Note: Be sure to update the username, group name, and directory names to match your environment.
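
With the destination directory prepared, a typical push from the source server might look like the sketch below; the source path, user, and host names are assumptions based on the examples above, so adjust them to your environment:

# Run on the Nextcloud server: copy the data volume to the backup server.
# -a preserves permissions/ownership/timestamps, -v is verbose, and
# --delete removes destination files that no longer exist at the source.
rsync -av --delete /mnt/nextcloud-data/ \
    jay@backup-server:/mnt/backup-data/nextcloud.learnlinux.cloud/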

How to use Block Storage to Increase Space on Your Nextcloud Instance

Overview

In a previous article, I showed you how to build your very own Nextcloud server. In this article, we're going to extend the storage for our Nextcloud instance by utilizing block storage. To follow along, you'll either need your own Nextcloud server to extend, or perhaps you can add block storage to a different type of server you may control, which would mean you'd need to update the paths accordingly as we go along. Block storage is incredibly useful, so we'll definitely want to take advantage of this.

Let's begin!

Setting up the block storage volume

First, use SSH to log in to your Nextcloud instance:

ssh <username>@<your-nextcloud-ip>

If we execute df -h, we can see the current list of attached storage volumes:

df -h

One of the benefits of block storage is that you can have a smaller instance (but still have a bigger disk). Right now, unless you're working ahead, we won't have a block storage volume attached yet, so create one within the Linode dashboard.

You can do this by clicking on "Volumes" within the dashboard, and then you can get started with the process. Fill out each of the fields while creating the block storage device. But pay special attention to the region - you want to set this to be the same region that your Linode instance is in.

After creating the volume, you should see some example commands that give you everything you need to set up the volume. The first command, the one we will use to format the volume, can be copied and pasted directly into a command shell. For example, it might look similar to this:

sudo mkfs.ext4 "/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data"

Of course, that's just an example command; it's best to copy and paste the command you're provided within the Linode dashboard.

At this point, the volume will be formatted, but we'll need to mount it in order to start using it. The second command presented in the dashboard will end up creating a directory into which to mount the volume:

sudo mkdir "/mnt/nextcloud-data"

The third command will actually mount the new volume to your filesystem. Be sure to use the command from the dashboard; the one below is presented only as an example of what that generally looks like:

sudo mount "/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data" "/mnt/nextcloud-data"

Next, check the output of the df command and ensure the new volume is listed within the output:

df -h

Next, let's make sure we update /etc/fstab for the new volume, to ensure that it's automatically mounted every time the server starts up:
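
The dashboard shows the exact line to add; it generally looks something like the following (the device name, mount point, and options here are only an example):

# /etc/fstab entry for the block storage volume (example)
/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data /mnt/nextcloud-data ext4 defaults,noatime,nofail 0 2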

How To Install Nextcloud On An Ubuntu Server

Introduction, and Getting Started

Nextcloud is a powerful productivity platform that gives you access to some amazing features, such as collaborative editing, cloud file sync, private audio/video chat, email, calendar, and more! Best of all, Nextcloud is under your control and is completely customizable. In this article, we're going to be setting up our very own Nextcloud server on Linode. Alternatively, you can also spin up a Nextcloud server by utilizing the Linode marketplace, which you can use to set up Nextcloud in a single click. However, this article will walk you through the manual installation method. While this method has more steps, by the end you'll have built your very own Nextcloud server from scratch, which is not only a valuable learning experience - you'll also become intimately familiar with the process of setting up Nextcloud. Let's get started!

In order to install Nextcloud, we'll need a Linux instance to install it onto. That's the easy part - there's no shortage of Linux on Linode, so to get started we'll create a brand-new Ubuntu 20.04 Linode instance to serve as our base. Many of the commands we'll be using have changed in releases newer than Ubuntu 20.04, so while you might be tempted to start with a newer instance, these commands were all tested on Ubuntu 20.04. And considering that Ubuntu 20.04 is supported until April of 2025, it's not a bad choice at all.

Creating your instance

During the process of creating your new Linode instance, choose a region that's closest to you geographically (or close to your target audience). For the instance type, be sure to choose a plan with 2GB of RAM (preferably 4GB). You can always increase the plan later, should you need to do so. You can save some additional money by choosing an instance from the Shared CPU section. For the label, give the instance a name that matches its designated purpose. A good name might be something like "nextcloud", but if you have a domain for your instance, you can use that as the name as well.

Continuing, you can consider using tags, which are basically a name/value pair you can add to your instance. This is completely optional, but you can create whatever tags you need for your instance. For example, you could have a "production" tag, or maybe a "development" tag, depending on whether or not you intend to use the instance for production. Again, this is optional, and there's no right or wrong way to tag an instance. If in doubt, you can just leave this blank.

Next, the root password should be unique and, preferably, randomly generated. This password in particular is the password we will use to log into our instance, so make sure you remember it. SSH keys are preferred, and if you have one set up within your profile, you can check a box on this page to add it to your instance.

The Echo Command

In this article, we're going to look at the echo command, which is useful for showing text on the terminal, as well as the contents of variables. Let's get started!

Basic usage of the echo command

Basic usage of the echo command is very simple. For example:

echo "Hello World"

The output will be what you expect, it will echo Hello World onto the screen. You can also use echo to view the contents of a variable:

msg="Hello World"

echo $msg

This also works for built-in shell variables as well:

echo $HOME

Additional Examples

As with most Linux commands, there's definitely more that we can do with echo than what we've seen so far.

Audible Alerts

You can also sound an audible alert with echo:

echo -e "\aHello World"

The -e option allows you to change the format of the output while using echo.

echo -e "This is a\bLinux server."

The example used \b within the command, which inserts a backspace character, giving you the same behavior as actually pressing backspace. In the above example, the letter "a" will not print, because \b backspaces over it.

Truncating

The ability to truncate means you can remove something from the output. For example:

echo -e "This is a Linux\c server."

The above command will actually truncate everything right after the \c, which means we'll see the following output:

This is a Linux

Adding a new line

To force a new line to be created:

echo -e "This is a Linux\n server"

The output will end up becoming:

This is a Linux
 server.

Adding a tab character

To add a tab character to the output:

echo -e "This is a\t Linux\t server."

This will produce the following output:

This is a     Linux     server.

Redirecting output to a text file

Rather than showing the output on the terminal, we can instruct echo to instead send its output to a text file.

echo "Logfile started: $(date +'%D %T'$" > log.txt

Closing

The basics of the echo command were covered in this article. Of course, there's more options where that came from - but this should be more than enough to get you started!

Open Source Community to Gather in LA for SCALE 19x

The Southern California Linux Expo – SCALE 19x – returns to its regularly scheduled annual program this year from July 28-31 at the Hilton Los Angeles Airport hotel.

As this continent’s largest community-run Linux/FOSS expo, SCALE 19x continues a nearly two-decade tradition of bringing the latest Free/Open Source Software developments, DevOps, Security and related trends to the general public during the course of the four-day event. Whether you are interested in low level system tuning, how to scale and secure your applications, or how to use OSS at home - SCALE is for you.

Some of this year's highlights include keynotes by Internet pioneer Vint Cerf, who now serves as Chief Internet Evangelist for Google, and Demetris Cheatham, Senior Director, Diversity and Inclusion.

Along with over 100 speakers in sessions spanning the four-day event, SCALE 19x also brings about 100 exhibitors to the expo floor providing their latest software and other developments. In addition, co-located events return to SCALE 19x, which include sessions by IEEE SA Open, AWS, FreeBSD, PostgreSQL, and DevOps Day LA among others. More information on the co-located events can be found at https://www.socallinuxexpo.org/scale/19x/events

Sponsors – both long-time friends of the Expo and newcomers with whom we expect a long relationship – have lined up to support SCALE 19x. Amazon Web Services – AWS for short – leads off the Platinum List, along with Portworx and Mattermost.

Returning to the Hilton Los Angeles Airport hotel means that there’s one place to stay and attend sessions during the four-day Expo. The Hilton LAX offers a special deal for SCALE 19x attendees; to take advantage of the savings, visit https://book.passkey.com/event/50305242/owner/50954/home

And, of course, SCALE wouldn’t be SCALE without the attendees – registration for SCALE 19x ranges from an expo-only pass to an all-access SCALE Pass for the exhibit floor and speakers. To register, visit https://register.socallinuxexpo.org/reg6/

For more information, visit https://www.socallinuxexpo.org/scale/19x

How You Can Change the Cursor Theme on Your Ubuntu Desktop

Are you looking for an alternative to the default Yaru cursor theme on Ubuntu? This article will show you the procedure for changing and installing cursor themes on Ubuntu. So, read on and find out.

Change the Cursor Themes Using GNOME Tweak

To change the mouse pointer theme on Ubuntu, open the Software app and search for the GNOME Tweaks tool. GNOME Tweaks is one of the most widely used configuration tools for managing the GNOME desktop, so go ahead and install it.

After installing GNOME Tweaks, open the ‘Activities’ overview in the top-left corner, then find and open GNOME Tweaks. Once you open GNOME Tweaks, go to the Appearance option in the left pane and choose a different cursor theme from the drop-down menu.

Note: Since GNOME is the default desktop for Ubuntu, you can apply this method to other GNOME-based distributions as well, including Debian, CentOS, Fedora, SUSE Linux, and Red Hat Enterprise Linux.

5 Beautiful Cursor Themes for Ubuntu

There might not be many cursor themes installed by default, but you can always install more from the internet. Below are some of the most excellent cursor themes to choose from.

Oreo Cursors

Oreo offers colored cursors with cute animations. They come in 64 px and 32 px sizes, with HiDPI (High Dots Per Inch) display support for Linux desktops, and you can choose from more than 10 color variants. The theme includes the various states of a cursor within the cursor icon itself. If you find the Oreo Cursors attractive, you can get them here.

Bibata Cursors

Another favorite cursor theme is Bibata. Bibata Cursors is a modern-style cursor theme available for Ubuntu, and it comes in three different options: Classic, Ice, and Amber. Bibata supports HiDPI displays as well. Bibata themes are available with both rounded and sharp-edged icons. If you want Bibata Cursors for your Linux desktop, find them here.
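
Once you have downloaded a theme archive like one of these, it can also be installed and applied from the command line instead of through GNOME Tweaks; a rough sketch, with the archive and theme names as examples only:

# Unpack a downloaded cursor theme into your local icons directory,
# then select it with gsettings (theme name is an example).
mkdir -p ~/.icons
tar xf Bibata-Modern-Ice.tar.xz -C ~/.icons
gsettings set org.gnome.desktop.interface cursor-theme 'Bibata-Modern-Ice'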

Everything You Need to Know about Linux Input-Output Redirection

Are you looking for information about Linux input-output redirection? Then read on. So, what is redirection? Redirection is a Linux feature that lets you change the standard I/O devices. In Linux, when you enter a command as input, you receive an output; that's the basic workflow of Linux.

The standard input or stdin device to give commands is the keyboard and the standard output or stdout device is your terminal screen. With redirection, you can change the standard input/output. From this article, let’s find out how Linux input-output redirection works.

Standard Streams in Input-Output Redirection

The bash shell of Linux has three standard streams for input-output redirection: 1) Standard Input or stdin, 2) Standard Output or stdout, and 3) Standard Error or stderr.

The standard input stream is denoted as stdin (0). The bash shell receives input from stdin; the keyboard is used to give input. The standard output stream is denoted as stdout (1). The bash shell sends output to stdout, and the final output goes to the display screen. The standard error stream is denoted as stderr (2); error messages also go to the display screen by default. Here 0, 1, and 2 are called file descriptors (FD). In the following section, we’ll look into file descriptors in detail.

File Descriptors

In Linux, everything is a file. Directories, regular files, and even the devices are considered to be files. Each file has an associated number. This number is called File Descriptor or FD.

Interestingly, your terminal screen also has a definite File Descriptor. Whenever a particular program is executed, its output gets sent to your screen’s File Descriptor. Then, you can see the program output on the display screen. If the program output gets sent to your printer’s FD, the output would be printed.

0, 1, and 2 are used as file descriptors for stdin, stdout, and stderr files respectively.

Input Redirection

The ‘<’ sign is used for the input or stdin redirection. For example, Linux’s mail program sends emails from your Linux terminal.

You can type the email contents with the standard input device, the keyboard. However, if you want to send the contents of an existing file as the body of the email, use Linux’s input redirection feature. Below is the format for using the stdin redirection operator.

mail -s "Subject" to-address < Filename

This would feed the file to the mail program as the message body, and the email would then be sent to the recipient.

Output Redirection

The ‘>’ sign signifies the output redirection. Below is an example to help you understand its functions.
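
Here is one such illustration (the file names are arbitrary examples):

# Send the output of ls to a file instead of the screen
ls -l /etc > listing.txt

# Redirect only error messages (stderr, FD 2) to a separate file
ls /nonexistent 2> errors.txt

# Redirect both stdout and stderr to the same file
ls /etc /nonexistent > everything.txt 2>&1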

How to Use the VI Editor in Linux

If you’re searching for info related to the VI editor, this article is for you. So, what is the VI editor? VI is a screen-oriented text editor and the most popular one in the Linux world. The reasons for its popularity are 1) its availability on almost all Linux distros, 2) the fact that it works the same across multiple platforms, and 3) its user-friendly features. Currently, Vi Improved (Vim) is the most widely used advanced counterpart of VI.

To work on the VI text editor, you have to know how to use the VI editor in Linux. Let’s find it out from this article.

Modes of VI Text Editor

VI text editor works in two modes: 1) Command mode and 2) Insert mode. In command mode, the editor accepts commands from the user that act on the file. The VI editor usually starts in command mode; here, the words typed act as commands. So, you should be in command mode when issuing a command.

On the other hand, in the Insert mode, file editing is done. Here, the text is inserted into the file. So, you need to be in the insert mode to enter text. Just type ‘i’ to be in the insert mode. Use the Esc key to switch from insert mode to command mode in the editor. If you don’t know your current mode, press the Esc key twice. This takes you to the command mode.

Launch VI Text Editor 

First, you need to launch the VI editor to begin working on it. To launch the editor, open your Linux terminal and then type:

vi <existing-file> or vi <new-file>

And if you mention an existing file, VI would open it to edit. Alternatively, you’re free to create a completely new file.

VI Editing Commands

You need to be in the command mode to run editing commands in the VI editor. VI is case-sensitive. Hence, make sure you use the commands in the correct letter case. Also, make sure you type the right command to avoid undesired changes. Below are some of the essential commands to use in VI.

i – Inserts at cursor (gets into the insert mode)

a – Writes after the cursor (gets into the insert mode)

A – Writes at the ending of a line (gets into the insert mode)

o – Opens a new line (gets into the insert mode)

ESC – Terminates the insert mode

u – Undo the last change

U – Undo all changes of the entire line

D – Deletes the content of a line after the cursor

R – Overwrites characters from the cursor onwards

r – Replaces a character

s – Substitutes one character under the cursor and continues inserting

S – Substitutes a full line and starts inserting at the beginning of the line

Primer to Container Security

Containers have become a standard way of deploying microservices to the cloud. Containers are better than virtual machines in almost all ways except security, which may be the main barrier to their widespread adoption.

This article will provide a better understanding of container security and available techniques to secure them.

A Linux container can be defined as a process or a set of processes running in userspace that are isolated from the rest of the system by various kernel facilities.

Containers are great alternatives to virtual machines (VMs). Even though containers and virtual machines provide similar isolation benefits, they differ in that containers provide operating-system-level virtualization instead of hardware virtualization. This makes them lightweight and faster to start, and they consume less memory.

As multiple containers share the same kernel, the solution is less secure than VMs, where each VM has its own copy of the OS, libraries, dedicated resources, and applications. That makes VMs very secure, but their large storage footprint and reduced performance limit the total number of VMs that can run simultaneously on a server. Further, VMs take a lot of time to boot.

The introduction of microservice architecture has changed the way of developing software. Microservices allow the development of software in small self-contained independent services. This makes the application easier to scale and provides agility.

If a part of the software needs to be rewritten, it can easily be done by changing only that part of the code without interrupting any other service, which wasn't possible with a monolithic architecture.

Protection requirement use cases and solutions

1) Linux Kernel Features

a. Namespaces

Namespaces isolate the resources of processes running in one container from those of others. They partition the kernel resources for different processes: one set of processes in a separate namespace will see one set of resources, while another set of processes will see another. Processes in different namespaces see different process IDs, hostnames, user IDs, file names, names for network access, and some interprocess communication. Hence, each file system namespace has its own private mount table and root directory.
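
To see this isolation first-hand, the util-linux unshare tool can launch a shell in fresh namespaces; a small illustration (run with root privileges):

# Start a shell in new PID and mount namespaces; remounting /proc makes
# ps reflect the new PID namespace.
sudo unshare --fork --pid --mount-proc /bin/bash
# Inside the new namespace, only the shell and ps itself are visible:
ps aux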

Scrolling Up and Down in the Linux Terminal

Are you looking for the technique of scrolling through your Linux terminal? Brace yourself. This article is written for you. Today you’ll learn how to scroll up and down in the Linux terminal. So, let’s begin.

Why You Need to Scroll in Linux Terminal

But before going ahead and learning about up and down scrolling in the terminal, let's find out why scrolling in the Linux terminal is important. When you have a lot of output printed on your terminal screen, it helps to be able to control how your terminal behaves. You can clear the terminal at any time, which may make your work easier and quicker to complete. But what if you're troubleshooting an issue and need to look back at earlier output or a previously entered command? That's where scrolling up or down comes to the rescue.

Various shortcuts and commands allow you to perform scrolling in the Linux terminal whenever you want. So, for easy navigation in your terminal using the keyboard, read on.

How to Scroll Up and Down in Linux Terminal

In the Linux terminal, you can scroll up by page using the Shift + PageUp shortcut. And to scroll down in the terminal, use Shift + PageDown. To go up or down in the terminal by line, use Ctrl + Shift + Up or Ctrl + Shift + Down respectively.

Key Combinations Used in Scrolling

Following are some key combinations that are useful in scrolling through the Linux terminal. 

Ctrl+End: This allows you to scroll down to your cursor.

Ctrl+Page Up: This key combination lets you scroll up by one page.

Ctrl+Page Dn: This lets you scroll down by one page.

Ctrl+Line Up: To scroll up by one line, use this key combination.

Scrolling Up and Down with More Command

The more command allows you to view text files within the command prompt. For bigger files (for example, log files), it shows one screen at a time. The more command can also be used to scroll through the file. To advance the display one line at a time, press the Enter key. To advance a screenful at a time, use the Spacebar. To scroll backward, press ‘b’.
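
For example, to page through a long log file (the path is just an example):

more /var/log/syslog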

How to Disable Scrolling in the Terminal

To disable the scrollbar, follow the steps given in this section. First, press the Menu button in the top-right corner of the terminal window, then select Preferences. From the Profiles section in the sidebar, select the profile you’re currently using, then select the Scrolling option. Finally, uncheck ‘Show scrollbar’ to disable the scrollbar in the terminal. Your preference will be saved immediately.