Installing LibreOffice On Slackware 15

Slackware has been one of my favorite GNU/Linux distributions for a very long time, especially since Version 8.0 came out, many moons back. The reason is that it embodies the "KISS" method of designing a distribution. "KISS" means, "Keep It Simple, Stupid!", and that's what the Slackware team has done since the distribution's inception. When Slackware 15.0 came out in February 2022, I celebrated like other "Slackers", and I'd been running the beta and release candidates (the then-"Slackware-current") since early 2021.

I've even used Slackware at work in a "Microsoft shop". Yes, it can be done, and it can be done well. To do so, I needed something compatible with Microsoft Office file formats. OpenOffice.org was the ticket back then, even in its Beta Build 638c days (yes, I've been using it for a long time!), and the tradition continues 21 years later with today's LibreOffice. It is this office productivity suite that really makes using Free Software platforms (e.g., GNU/Linux, the BSDs) on general-purpose business computers possible.

Sadly, Slackware didn't include OpenOffice.org back then, and it doesn't include LibreOffice now. This is speculation on my part, but several years ago, Patrick Volkerding stopped including GNOME because it was too much of a pain to package and distribute for a project that doesn't have the resources of Red Hat, Debian, or Ubuntu. I suspect this may also be true for LibreOffice. Also, the binary packages from LibreOffice come in RPM and DEB format. This choice by the LibreOffice developers is quite understandable, as Red Hat- and Debian-based distros are by far the dominant presence on personal computers. That still leaves us "Slackers" out in the cold, though.

I realize that nowadays there are "SlackBuilds", analogous to BSD's "Ports" collection, and the people who maintain those are definitely to be thanked and appreciated (and I do). The reality is that those aren't always updated to the latest versions of applications, given time constraints. Remember that Slackware is a relatively small all-volunteer project, like OpenBSD. Also, I prefer to stay as up-to-date as possible.

So, what to do?

Fortunately, there is a way to install a fully-functional, latest-greatest, LibreOffice on our Slackware 15.0 computers and use it. The best part is that it's not difficult to do...at least, not now that you have this handy-dandy HOW-TO document to follow.

SQLite for Secrecy Management – Tools and Methods

Introduction

Secrets pervade enterprise systems. Access to critical corporate resources will always require credentials of some type, and this sensitive data is often inadequately protected. It is ripe for both accidental exposure and malicious exploitation. Best practices are few, and they often fail.

SQLite is a natural storage platform, approved by the U.S. Library of Congress as a long-term archival medium. As the project itself states, “SQLite is likely used more than all other database engines combined.” The software undergoes extensive testing, having acquired DO-178B certification for reliability to meet the needs of the avionics industry, and it is currently used in the Airbus A350's flight systems. The need for SQLite emerged from a damage control application aboard the U.S. destroyer DDG-79 Oscar Austin. An Informix database was running under HP-UX on this vessel, and after ship power losses, the database would not always restart without maintenance, presenting physical risks for the crew. SQLite is an answer to that danger; when used properly, it will transparently recover from such crashes. Despite a small number of CVEs patched in CentOS 7 (CVE-2015-3414, CVE-2015-3415, CVE-2015-3416, CVE-2019-13734), few databases can match SQLite's reliability record, and none that are commercially prevalent.

SQLite specifically avoids any question of access control. It does not implement GRANT and REVOKE as found in other databases, and instead delegates permissions to the OS. Adapting it for sensitive data therefore requires strong security measures to be layered on top of it.

The free releases of CyberArk Conjur and Summon build a basic platform for secrecy management. These tools are somewhat awkward, as conjur requires a running instance of PostgreSQL, which brings a far larger attack surface than one would hope. Tying an enterprise to a free, centralized instance of conjur and PostgreSQL is a large risk, as CyberArk's documentation attests.

CyberArk summon, however, can be configured with custom backend providers, which have simple interfacing requirements. SQLite is a fit both for summon and as a standalone secrecy provider.
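To make the idea concrete, here is a minimal sketch of a SQLite-backed provider, an illustration only, not CyberArk's shipped code. A summon provider is simply an executable that receives a secret's identifier as its first argument and prints the secret's value on stdout; the database path, schema, and secret names below are all assumptions made up for the example.

```shell
#!/bin/sh
# Sketch of a summon secrets provider backed by SQLite.
# Demo setup: a throwaway database holding one secret.
DB="$(mktemp -d)/secrets.db"
sqlite3 "$DB" 'CREATE TABLE secrets (id TEXT PRIMARY KEY, value TEXT);'
sqlite3 "$DB" "INSERT INTO secrets VALUES ('prod/db/password', 'hunter2');"

# Provider logic: summon would pass the identifier as "$1".
id='prod/db/password'
value="$(sqlite3 "$DB" "SELECT value FROM secrets WHERE id = '$id';")"
[ -n "$value" ] || { echo "secret not found: $id" >&2; exit 1; }
printf '%s\n' "$value"
```

In a real provider, the identifier should be validated or bound as a parameter rather than interpolated into the SQL string, and the database file itself must carry restrictive permissions, since, as noted above, SQLite delegates all access control to the filesystem.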

Pwndrop on Linode

When I first ran across PwnDrop, I was intrigued by what the developers had in mind for it. For instance, if you're a white-hat hacker looking to share exploits safely with your client, you might use a service like PwnDrop. If you're a journalist communicating with, well, just about anyone who is trying to keep their identity secret, you might use a service like PwnDrop.

In this tutorial, we're going to look at how easy it is to set up and use in just a few minutes.

Prerequisites for PwnDrop in Docker

First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple bucks more per month.

Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: https://www.linode.com/docs/guides/dns-manager/

You’ll also want a reverse proxy set up on your Docker server so that you can do things like route traffic and manage SSL certificates on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: https://www.youtube.com/watch?v=7oUjfsaR0NU

Once you’ve got your Docker server set up, you can begin the process of setting up your PwnDrop file-sharing server on that server.

There are 2 primary ways you can do this:

  1. In the command line via SSH.
  2. In Portainer via the Portainer dashboard.

We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.

Head over to http://your-server-ip-address:9000 and get logged into Portainer with the credentials we set up in our previous post/video.

On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.

This will bring up a page where you'll enter the name of the stack. Below that, you can then copy and paste the following:
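As a minimal sketch of what that stack definition might look like: it assumes the community image kgretzky/pwndrop and a data path of /usr/local/pwndrop/data inside the container. Verify both against the pwndrop documentation before relying on it, and adjust the published port to suit your reverse proxy:

```yaml
version: "3"

services:
  pwndrop:
    image: kgretzky/pwndrop:latest
    container_name: pwndrop
    restart: unless-stopped
    ports:
      # Map a host port for the reverse proxy to point at.
      - "8080:80"
    volumes:
      # Persist uploaded payloads and pwndrop's own config.
      - pwndrop_data:/usr/local/pwndrop/data

volumes:
  pwndrop_data:
```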

FileRun on Docker

You may want to set up a file server like FileRun for any number of reasons. The main reason, I would think, would be so you can have your own Google Drive alternative that is under your control instead of Google's.

FileRun claims to be "Probably the best File Manager in the world with desktop Sync and File Sharing," but I think you'll have to be the judge of that for yourself.

Just to be completely transparent here, I like FileRun, but there is a shortcoming that I hope they will eventually fix: some, in my opinion, very important settings are locked away behind an Enterprise License requirement.

That aside, I really like the ease-of-use and flexibility of FileRun. So let's take a look at it.

Prerequisites for FileRun in Docker

First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple bucks more per month.

Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: https://www.linode.com/docs/guides/dns-manager/

You’ll also want a reverse proxy set up on your Docker server so that you can do things like route traffic and manage SSL certificates on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: https://www.youtube.com/watch?v=7oUjfsaR0NU

Once you’ve got your Docker server set up, you can begin the process of setting up your FileRun file server on that server.

There are 2 primary ways you can do this:

  1. In the command line via SSH.
  2. In Portainer via the Portainer dashboard.

We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.

Head over to http://your-server-ip-address:9000 and get logged into Portainer with the credentials we set up in our previous post/video.

On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.

This will bring up a page where you'll enter the name of the stack. Below that, you can then copy and paste the following:
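As a sketch of what the stack might contain: FileRun needs a MariaDB database alongside the application container. The image names, FR_DB_* environment variables, and paths below follow FileRun's published Docker examples as I recall them, so verify them against FileRun's current documentation, and change the placeholder passwords:

```yaml
version: "3"

services:
  db:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: changeme-root
      MYSQL_DATABASE: filerun
      MYSQL_USER: filerun
      MYSQL_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/mysql

  filerun:
    image: filerun/filerun
    environment:
      FR_DB_HOST: db
      FR_DB_PORT: 3306
      FR_DB_NAME: filerun
      FR_DB_USER: filerun
      FR_DB_PASS: changeme
    depends_on:
      - db
    ports:
      # Map a host port for the reverse proxy to point at.
      - "8081:80"
    volumes:
      - filerun_html:/var/www/html
      - filerun_files:/user-files

volumes:
  db_data:
  filerun_html:
  filerun_files:
```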

Static Site Generation with Hugo

Hugo is quickly becoming one of the best ways to create a website. Hugo is a free and open source static website generator that allows you to build beautiful static websites with ease. Static websites are awesome because they take very few system resources to host. Compared to something like WordPress, which relies on databases, PHP, and more, static sites are simply HTML, CSS, and the occasional line of JavaScript. So static sites are perfect for simple blogs, documentation sites, portfolios, and more.

What is a Static Site?

Static websites are simply sites that consist of basic HTML and CSS files for each individual page. A static site can be easily created and published, as server requirements are small and very little server-side software is needed to publish them. You don’t need to know coding and database design to build a static website.

In the early days of the internet most everything was static, but sites were bland and poorly designed. Also, if you wanted to make a site-wide change, such as a link in the footer, you’d need to go through every file for your website and make changes on a page-by-page basis. Maintaining a huge number of fixed pages by hand is impractical without automated tools. However, with modern web template systems, this scenario is changing.
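To make the footer example concrete, here is a sketch of how a template system like Hugo solves it: the footer lives in one partial template that every page includes, so a single edit propagates site-wide. The file paths follow Hugo's layout conventions; the content itself is illustrative:

```html
<!-- layouts/partials/footer.html - the one file you edit -->
<footer>
  <a href="/about/">About</a> | © Example Site
</footer>

<!-- layouts/_default/baseof.html - every page's skeleton pulls it in -->
<!DOCTYPE html>
<html>
  <body>
    {{ block "main" . }}{{ end }}
    {{ partial "footer.html" . }}
  </body>
</html>
```

Change the link in footer.html once, rebuild, and every generated page picks it up.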

Over the past few years, static sites are again becoming popular. This is due to advances in programming languages and libraries. Now, with static site generators you can host blogs, large websites, and more with the ability to make site wide changes on the fly.

Advantages of Static

Static files are lightweight, making the site faster to load. Cost efficiency is another vital reason why companies tend to migrate to static sites. Below are some of the advantages of static sites over traditional sites based on content management systems and server-side scripting, like PHP, MySQL, and others.

Speed

Because your site’s content is served as an entirely pre-rendered web page, there is less that can go wrong while the page loads, whereas in traditional sites the web page is built separately for every visitor. Better speed provides a better SEO ranking and better site performance as a whole.

Flexibility

Static websites have multiple options in terms of frameworks for rendering. You’re free to choose any programming language and framework: Ruby, JavaScript, Vue, React, etc. This makes building and maintenance smoother than with traditional sites. Also, static sites have fewer dependencies, so you can easily leverage your cloud infrastructure and migrate easily.

How to Use Sar (System Activity Reporter)

Overview

In this article, we're going to take a look at the System Activity Reporter, also known as the sar command. This command gives us a historical view of our server's performance. You'll see examples of installing it, running it manually, and more. Let's get started!

Prerequisites

Before we do get started, there are a few quick things to mention. If your server is a production server, then I hope you've already installed all available updates. There are already articles within Linode's documentation that cover updating packages.

To get started, we'll first need to install the sar command, which is available in the sysstat package (the commands below assume a Debian- or Ubuntu-based server):

sudo apt update
sudo apt install sysstat

Installation of the sysstat package should be fairly fast.

However, having the sysstat package installed by itself isn't enough - we'll need to configure its defaults. We can use the nano text editor, for example, to edit the /etc/default/sysstat file:

sudo nano /etc/default/sysstat

The first change to make within this file is to enable stat collection:

ENABLED="true"

Save the file, and then we're all set with that file in particular.

Optionally, you could consider editing other configuration files that configure sar:

  • /etc/cron.d/sysstat

  • /etc/sysstat/sysstat

The first configures how often stats are collected; the second gives you even more options to fine-tune sar, which might be useful. Feel free to take a look at both.
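For reference, on Debian and Ubuntu systems the /etc/cron.d/sysstat file typically looks something like the following. The exact contents vary by release, so treat this as an illustration rather than something to copy:

```
# /etc/cron.d/sysstat (Debian/Ubuntu layout - contents vary by release)
# Collect a sample of activity data every 10 minutes
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
# Summarize the day's collected data just before midnight
59 23 * * * root command -v debian-sa1 > /dev/null && debian-sa1 60 2
```

The first entry is what produces the ten-minute samples; shorten its interval if you want finer-grained history.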

The data file

List the contents of the /var/log/sysstat/ directory:

ls -l /var/log/sysstat/

sar will run every ten minutes by default, so if ten minutes hasn't passed since you've enabled stat collecting, then wait a bit, and it should be present.

Running the sar command

Here's an example of sar in action:

sudo sar -u -f /var/log/sysstat/saNUM

Note: NUM in the example is a placeholder for the number next to your data file, which will actually be the same as the date, specifically the day of the month (for example, sa22 corresponds to the 22nd of the current month). The output will give you the overall performance for your server at a given time.

Continuing, let's look at a simpler example:

sar -u

This should give you the same output as before, but without waiting for the data file to be updated.

Yet another example to show you is the

How to Use the rsync Command

Overview

rsync is one of my favorite utilities on the Linux command line, and block storage is one of my favorite features on Linode's platform, so in this article I get to combine the two - because what I'm going to do is show you how to use rsync to copy data from one server to another, in the form of a backup. What's really cool about this is that this example will utilize block storage.

Note: I'll be using the Nextcloud server that was set up in a previous article, but it doesn't really matter if it's Nextcloud - you can back up any server that you'd like.

Setting up our environment

On the Linode dashboard, I created an instance named "backup-server" to use as the example here. On your side, be sure to have a Linode instance ready to go in order to have a destination to copy your files to. Also, create a block storage volume to hold the backup files. If you don't already have block storage set up, you can check out other articles and videos on Linode's documentation and YouTube channel respectively, to see an overview of the process.

Again, in the examples, I'm going to be backing up a Nextcloud instance, but feel free to back up any server you may have set up - just be sure to update the paths accordingly to ensure everything matches your environment. In the Nextcloud video, I set up the data volume onto a block storage volume, so block storage is used at both ends.

First, let's create a new directory where we will mount our block storage volume on the backup server. I decided to use /mnt/backup-data:

sudo mkdir /mnt/backup-data

Since the backup server I used in the example stores backups for more than one Linode instance, I decided to have each server back up to a sub-directory within the /mnt/backup-data directory.

sudo mkdir /mnt/backup-data/nextcloud.learnlinux.cloud

Note: I like to name the sub-directories after the fully qualified domain name for that instance, but that is not required.

Continuing, let's make sure our local user (or a backup user) owns the destination directory:

sudo chown -R jay:jay /mnt/backup-data/nextcloud.learnlinux.cloud

After running that command, the user and group you specify will become the owner of the target directory, as well as everything underneath it (due to the -R option).

Note: Be sure to update the username, group name, and directory names to match your environment.

How to use Block Storage to Increase Space on Your Nextcloud Instance

Overview

In a previous article, I showed you how to build your very own Nextcloud server. In this article, we're going to extend the storage for our Nextcloud instance by utilizing block storage. To follow along, you'll either need your own Nextcloud server to extend, or perhaps you can add block storage to a different type of server you may control, which would mean you'd need to update the paths accordingly as we go along. Block storage is incredibly useful, so we'll definitely want to take advantage of this.

Let's begin!

Setting up the block storage volume

First, use SSH to log in to your Nextcloud instance:

ssh user@your-nextcloud-ip

If we execute df -h, we can see the current list of attached storage volumes:

df -h

One of the benefits of block storage is that you can have a smaller instance but still have a bigger disk. Right now, unless you're working ahead, we won't have a block storage volume attached yet, so create one within the Linode dashboard.

You can do this by clicking on "Volumes" within the dashboard, and then you can get started with the process. Fill out each of the fields while creating the block storage device. But pay special attention to the region - you want to set this to be the same region that your Linode instance is in.

After creating the volume, you should see some example commands that give you everything you need to set up the volume. The first command, which formats the volume, can be copied and pasted directly into a command shell. For example, it might look similar to this:

sudo mkfs.ext4 "/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data"

Of course, that's just an example command; it's best to use the command provided by the Linode dashboard, so if you'd like to copy and paste, use the command you're given within the dashboard.

At this point, the volume will be formatted, but we'll need to mount it in order to start using it. The second command presented in the dashboard will end up creating a directory into which to mount the volume:

sudo mkdir "/mnt/nextcloud-data"

The third command will actually mount the new volume to your filesystem. Be sure to use the command from the dashboard, the one below is presented only as an example of what that generally looks like:

sudo mount "/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data" "/mnt/nextcloud-data"

Next, check the output of the df command and ensure the new volume is listed within the output:

df -h

Next, let's make sure we update /etc/fstab for the new volume, to ensure that it's automatically mounted every time the server starts up:
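The dashboard supplies the exact line to add; as an illustration only, an entry for this example volume might look like the following (the nofail option keeps the server bootable even if the volume is detached):

```
# /etc/fstab - example entry; use the device path shown in your dashboard
/dev/disk/by-id/scsi-0Linode_Volume_nextcloud-data /mnt/nextcloud-data ext4 defaults,noatime,nofail 0 2
```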

How To Install Nextcloud On An Ubuntu Server

Introduction, and Getting Started

Nextcloud is a powerful productivity platform that gives you access to some amazing features, such as collaborative editing, cloud file sync, private audio/video chat, email, calendar, and more! Best of all, Nextcloud is under your control and is completely customizable. In this article, we're going to be setting up our very own Nextcloud server on Linode. Alternatively, you can spin up a Nextcloud server through the Linode marketplace, which lets you set up Nextcloud in a single click. However, this article will walk you through the manual installation method. While this method has more steps, by the end you'll have built your very own Nextcloud server from scratch, which is not only a valuable learning experience - you'll also become intimately familiar with the process of setting up Nextcloud. Let's get started!

In order to install Nextcloud, we'll need a Linux instance to install it onto. That's the easy part - there's no shortage of Linux on Linode, so what we'll do in order to get started is create a brand-new Ubuntu 20.04 Linode instance to serve as our base. Some of the commands we'll be using have changed in releases newer than Ubuntu 20.04, so while you might be tempted to start with a newer instance, these commands were all tested on Ubuntu 20.04. And considering that Ubuntu 20.04 is supported until April of 2025, it's not a bad choice at all.

Creating your instance

During the process of creating your new Linode instance, choose a region that's closest to you geographically (or close to your target audience). For the instance type, be sure to choose a plan with 2GB of RAM (preferably 4GB). You can always increase the plan later, should you need to do so. You can save some additional money by choosing an instance from the Shared CPU section. For the label, give it a name that matches the designated purpose of the instance. A good name might be something like "nextcloud", but if you have a domain for your instance, you can use that as the name as well.

Continuing, you can consider using tags, which are basically a name-value pair you can add to your instance. This is completely optional, but you can create whatever tags you need for your instance. For example, you could have a "production" tag, or maybe a "development" tag, depending on whether or not you intend to use the instance for production. Again, this is optional, and there's no right or wrong way to tag an instance. If in doubt, you can just leave this blank.

Next, the root password should be unique and, preferably, randomly generated. This password in particular is going to be the password we use to log into our instance, so make sure you remember it. SSH keys are preferred, and if you have one set up within your profile, you can check a box on this page to add it to your instance.

The Echo Command

In this article, we're going to look at the echo command, which is useful for showing text on the terminal, as well as the contents of variables. Let's get started!

Basic usage of the echo command

Basic usage of the echo command is very simple. For example:

echo "Hello World"

The output is what you'd expect: it will echo Hello World onto the screen. You can also use echo to view the contents of a variable:

msg="Hello World"

echo $msg

This works for built-in shell variables as well:

echo $HOME

Additional Examples

As with most Linux commands, there's definitely more that we can do with echo than what we've seen so far.

Audible Alerts

You can sound an audible alert with echo as well:

echo -e "\aHello World"

The -e option enables escape sequences that change the format of echo's output. For example:

echo -e "This is a\bLinux server."

The example uses \b within the command, which emits a backspace - the same behavior as actually pressing the backspace key. In the above example, the letter "a" will not print, because \b backspaces over it.

Truncating

The ability to truncate means you can cut something off from the output. For example:

echo -e "This is a Linux\c server."

The above command will truncate everything after \c (including the trailing newline), which means we'll see the following output:

This is a Linux

Adding a new line

To force a new line to be created:

echo -e "This is a Linux\n server."

The output will end up becoming:

This is a Linux
 server.

Adding a tab character

To add a tab character to the output:

echo -e "This is a\t Linux\t server."

This will produce the following output:

This is a     Linux     server.

Redirecting output to a text file

Rather than showing the output on the terminal, we can instruct echo to instead send its output to a text file.

echo "Logfile started: $(date +'%D %T')" > log.txt

Closing

The basics of the echo command were covered in this article. Of course, there's more options where that came from - but this should be more than enough to get you started!

Open Source Community to Gather in LA for SCALE 19x

The Southern California Linux Expo – SCALE 19x – returns to its regularly scheduled annual program this year from July 28-31 at the Hilton Los Angeles Airport hotel.

As this continent’s largest community-run Linux/FOSS expo, SCALE 19x continues a nearly two-decade tradition of bringing the latest Free/Open Source Software developments, DevOps, Security and related trends to the general public during the course of the four-day event. Whether you are interested in low level system tuning, how to scale and secure your applications, or how to use OSS at home - SCALE is for you.

Some of this year's highlights include keynotes by Internet pioneer Vint Cerf, who now serves as Chief Internet Evangelist for Google, and Demetris Cheatham, Senior Director, Diversity and Inclusion.

Along with over 100 speakers in sessions spanning the four-day event, SCALE 19x also brings about 100 exhibitors to the expo floor providing their latest software and other developments. In addition, co-located events return to SCALE 19x, which include sessions by IEEE SA Open, AWS, FreeBSD, PostgreSQL, and DevOps Day LA among others. More information on the co-located events can be found at https://www.socallinuxexpo.org/scale/19x/events

Sponsors – both long-time friends of the Expo and newcomers with whom we expect a long relationship – have lined up to support SCALE 19x. Amazon Web Services – AWS for short – leads off the Platinum List, along with Portworx and Mattermost.

Returning to the Hilton Los Angeles Airport hotel means there’s one place to both stay and attend during the four-day Expo. The Hilton LAX offers a special deal for SCALE 19x attendees; to take advantage of the savings, visit https://book.passkey.com/event/50305242/owner/50954/home

And, of course, SCALE wouldn’t be SCALE without the attendees – registration for SCALE 19x ranges from an expo-only pass to an all-access SCALE Pass for the exhibit floor and speakers. To register, visit https://register.socallinuxexpo.org/reg6/

For more information, visit https://www.socallinuxexpo.org/scale/19x

How You Can Change the Cursor Theme on Your Ubuntu Desktop

Are you looking for an alternative to the default Yaru cursor theme on Ubuntu? This article will walk you through the procedure of changing and installing cursor themes on Ubuntu. So, read on and find out.

Change the Cursor Theme Using GNOME Tweaks

To change the mouse pointer theme on Ubuntu, open the Software app. Then, search for the GNOME Tweaks tool. GNOME Tweaks is one of the most widely used configuration tools for managing the GNOME desktop, so install it.

After installing GNOME Tweaks, navigate to the top-left ‘Activities’ overview. Go to GNOME Tweaks and open it. Once you open GNOME Tweaks, go to the Appearance option from the left pane. Choose a different cursor theme from the drop-down menu.

Note: Since GNOME is Ubuntu's default desktop environment, you can apply this method to other GNOME-based distributions as well, including Debian, CentOS, Fedora, SUSE Linux, and Red Hat Enterprise Linux.

5 Beautiful Cursor Themes for Ubuntu

There might not be plenty of cursor themes available out of the box, but you can always install one from the internet. Below are some of the most excellent cursor themes to choose from.

Oreo Cursors

Oreo offers colored cursors with cute animations. They come in 64 px and 32 px sizes, with HiDPI (High Dots Per Inch) display support for Linux desktops. You can get more than 10 color varieties of the cursors. The icon theme comprises the various states of a cursor within the cursor icon itself. If you find the Oreo Cursors attractive, you can get them here.

Bibata Cursors

Another favorite cursor theme is Bibata. Bibata is a modern-style cursor theme available for Ubuntu, and it comes in three different options: Classic, Ice, and Amber. Bibata supports HiDPI displays as well. Each of Bibata's themes comes with rounded and sharp-edged icon variants. If you want Bibata Cursors for your Linux desktop, find them here.

Everything You Need to Know about Linux Input-Output Redirection

Are you looking for information related to Linux input-output redirection? Then read on. So, what’s redirection? Redirection is a Linux feature that lets you change the standard I/O devices. In Linux, when you enter a command as input, you receive an output. That’s the basic workflow of Linux.

The standard input (stdin) device used to give commands is the keyboard, and the standard output (stdout) device is your terminal screen. With redirection, you can change the standard input/output. In this article, let’s find out how Linux input-output redirection works.

Standard Streams in Input-Output Redirection

The bash shell of Linux has three standard streams of input-output redirection, 1) Standard Input or Stdin, 2) Standard Output or Stdout, and 3) Standard Error or Stderr.

The standard input stream is denoted as stdin (0); the bash shell receives input from stdin, which by default is the keyboard. The standard output stream is denoted as stdout (1); the bash shell sends output to stdout, which by default ends up on the display screen. The standard error stream is denoted as stderr (2); error messages are sent to stderr, which also goes to the screen by default. Here 0, 1, and 2 are called file descriptors (FD). In the following section, we’ll look into file descriptors in detail.

File Descriptors

In Linux, everything is a file. Directories, regular files, and even the devices are considered to be files. Each file has an associated number. This number is called File Descriptor or FD.

Interestingly, your terminal screen also has a definite File Descriptor. Whenever a particular program is executed, its output gets sent to your screen’s File Descriptor. Then, you can see the program output on the display screen. If the program output gets sent to your printer’s FD, the output would be printed.

0, 1, and 2 are used as file descriptors for stdin, stdout, and stderr files respectively.
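A quick experiment, using throwaway file names, makes these numbers tangible: redirect stdout (1) and stderr (2) of the same command into separate files and see which message lands where.

```shell
# ls writes the listing of the existing directory to stdout (FD 1)
# and the complaint about the missing one to stderr (FD 2).
# "|| true" is only there because ls exits nonzero on the bad path.
ls /tmp /nonexistent-dir 1> out.log 2> err.log || true

cat out.log   # the /tmp listing (stdout)
cat err.log   # the error about /nonexistent-dir (stderr)
```

Because the two streams have distinct descriptors, each can be redirected independently - which is exactly what the 1> and 2> operators express.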

Input Redirection

The ‘<’ sign is used for the input or stdin redirection. For example, Linux’s mail program sends emails from your Linux terminal.

You can type the email contents with the standard input device, the keyboard. However, if you'd rather send the contents of a file as the message body, use Linux's input redirection feature. Below is the format for the stdin redirection operator.

mail -s "Subject" to-address < Filename

This would send the contents of the file as the body of the email to the recipient.

Output Redirection

The ‘>’ sign signifies output redirection. Below is an example to help you understand how it works.
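As a sketch of how ‘>’ behaves (the file name here is arbitrary), along with its companion ‘>>’, which appends rather than overwrites:

```shell
# '>' creates the file (or overwrites it if it already exists):
echo "first line" > report.txt

# '>>' appends to the file instead of overwriting it:
echo "second line" >> report.txt

cat report.txt
# first line
# second line
```

The distinction matters for things like log files, where you almost always want ‘>>’ so earlier entries survive.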

How to Use the VI Editor in Linux

If you’re searching for info related to the VI editor, this article is for you. So, what’s the VI editor? VI is a screen-oriented text editor and the most popular one in the Linux world. The reasons for its popularity are 1) its availability on almost all Linux distros, 2) consistent behavior across platforms, and 3) its user-friendly features. Currently, VI Improved, or VIM, is the most widely used advanced counterpart of VI.

To work with the VI text editor, you first have to know how to use it. Let’s find out in this article.

Modes of VI Text Editor

The VI text editor works in two modes: 1) command mode and 2) insert mode. In command mode, the editor takes the user’s keystrokes as commands that act on the file. VI usually starts in command mode, and the characters you type there act as commands. So, you should be in command mode when issuing a command.

On the other hand, file editing is done in insert mode: the text you type is inserted into the file. So, you need to be in insert mode to enter text. Just type ‘i’ to enter insert mode. Use the Esc key to switch from insert mode back to command mode. If you don’t know which mode you’re in, press Esc twice; this returns you to command mode.

Launch VI Text Editor 

First, you need to launch the VI editor to begin working with it. To launch the editor, open your Linux terminal and then type:

vi filename

or simply:

vi

If the file you name already exists, VI opens it for editing. Otherwise, you’re free to create a completely new file.

VI Editing Commands

You need to be in the command mode to run editing commands in the VI editor. VI is case-sensitive. Hence, make sure you use the commands in the correct letter case. Also, make sure you type the right command to avoid undesired changes. Below are some of the essential commands to use in VI.

i – Inserts at cursor (gets into the insert mode)

a – Writes after the cursor (gets into the insert mode)

A – Writes at the ending of a line (gets into the insert mode)

o – Opens a new line (gets into the insert mode)

ESC – Terminates the insert mode

u – Undo the last change

U – Undo all changes of the entire line

D – Deletes the content of a line after the cursor

R – Overwrites characters from the cursor onwards

r – Replaces a character

s – Substitutes the character under the cursor and continues to insert

S – Substitutes the full line and starts inserting at the beginning of the line

Primer to Container Security


Containers are considered the standard way of deploying microservices to the cloud. Containers are better than virtual machines in almost all respects except security, which may be the main barrier to their widespread adoption.

This article will provide a better understanding of container security and available techniques to secure them.

A Linux container can be defined as a process or a set of processes running in the userspace that is/are isolated from the rest of the system by different kernel tools.

Containers are great alternatives to virtual machines (VMs). Even though containers and virtual machines provide similar isolation benefits, they differ in that containers virtualize the operating system instead of the hardware. This makes them lightweight, faster to start, and less memory-hungry.

Because multiple containers share the same kernel, the solution is less secure than VMs, each of which has its own copy of the OS, libraries, dedicated resources, and applications. That makes VMs very secure, but their large storage footprint and reduced performance limit the total number of VMs that can run simultaneously on a server. Furthermore, VMs take a long time to boot.

The introduction of microservice architecture has changed the way of developing software. Microservices allow the development of software in small self-contained independent services. This makes the application easier to scale and provides agility.

If a part of the software needs to be rewritten, it can easily be done by changing only that part of the code without interrupting any other service, which wasn’t possible with a monolithic architecture.

Protection requirement use cases and solutions

1) Linux Kernel Features

a. Namespaces

Namespaces ensure that the resources of processes running in one container are isolated from those of others. They partition kernel resources among different processes: one set of processes in a separate namespace sees one set of resources, while another set of processes sees another. Processes in different namespaces see different process IDs, hostnames, user IDs, file names, network interfaces, and interprocess communication channels. For instance, each filesystem namespace has its own private mount table and root directory.
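A minimal way to observe namespaces on a running Linux system, with no container runtime required, is again via /proc:

```shell
# Each namespace a process belongs to is exposed as a symlink under
# /proc/<pid>/ns/. The inode number identifies the namespace, so two
# processes showing the same inode share that namespace.
readlink /proc/$$/ns/pid
readlink /proc/$$/ns/mnt
```

A process inside a container would show different inode numbers here than a process on the host.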

Scrolling Up and Down in the Linux Terminal


Are you looking for the technique of scrolling through your Linux terminal? Brace yourself. This article is written for you. Today you’ll learn how to scroll up and down in the Linux terminal. So, let’s begin.

Why You Need to Scroll in Linux Terminal

But before learning about up and down scrolling, let’s find out why scrolling in the Linux terminal matters. When a lot of output is printed on your terminal screen, it helps to be able to move through it. You can clear the terminal at any time, which may make your work easier and quicker. But if you’re troubleshooting an issue and need to see the output of a previously entered command, scrolling up or down comes to the rescue.

Various shortcuts and commands allow you to perform scrolling in the Linux terminal whenever you want. So, for easy navigation in your terminal using the keyboard, read on.

How to Scroll Up and Down in Linux Terminal

In the Linux terminal, you can scroll up by page using the Shift + PageUp shortcut. And to scroll down in the terminal, use Shift + PageDown. To go up or down in the terminal by line, use Ctrl + Shift + Up or Ctrl + Shift + Down respectively.

Key Combinations Used in Scrolling

Following are some key combinations that are useful in scrolling through the Linux terminal. 

Ctrl+End: This scrolls all the way down to the prompt, where your cursor is.

Ctrl+Page Up: This key combination lets you scroll up by one page.

Ctrl+Page Dn: This lets you scroll down by one page.

Ctrl+Shift+Up: To scroll up by one line, use this key combination.

Scrolling Up and Down with More Command

The more command lets you view text files in the terminal. For bigger files (for example, log files), it shows one screenful at a time. You can also scroll up and down within more: to advance the display one line at a time, press the Enter key; to advance a screenful at a time, press the Spacebar; and to scroll backward, press ‘b’.
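As a quick sketch, you can feed more any long stream; when its output goes to a pipe rather than a terminal, it simply passes the text through:

```shell
# Page through 200 numbered lines. Interactively, press Space for the
# next screenful, Enter for the next line, and 'b' to scroll back.
seq 1 200 | more
```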

How to Disable Scrolling in the Terminal

To disable the scrollbar, follow the steps given in this section. First, press the Menu button in the top-right corner of the window. Then select Preferences. From the Profiles section in the sidebar, select the profile you’re currently using, then select the Scrolling option. Finally, uncheck the Show scrollbar option to disable it in the terminal. Your preference is saved immediately.

Self-Hosted Static Homepages: Dashy Vs. Homer


Authors: Brandon Hopkins, Suparna Ganguly

Self-hosted homepages are a great way to manage your home lab or cloud services. If you’re anything like me, chances are you have a variety of Docker containers, media servers, and NAS portals all over the place, and simple bookmarks often aren’t enough to keep track of everything. With a self-hosted homepage, you can view everything you need from anywhere, and you can add integrations and other features to help you manage it all.

Dashy and Homer are two separate static homepage applications, used in home labs and on the cloud to help people organize and manage their services, Docker containers, and web bookmarks. This article will cover exactly what these self-hosted homepages have to offer.

Dashy

Dashy is a 100% free and open-source, self-hosted, highly customizable homepage app for your server with a strong focus on privacy. It offers an easy-to-use visual editor, widgets, status checking, themes, and many more features. Below are some of the features Dashy offers.

Live Demo: https://demo.dashy.to/

Customize

You can customize Dashy however you want to fit your use case. From the UI, you can choose from different layouts, show/hide components, adjust item sizes, switch themes, and much more. You can customize each area of your dashboard, and there are config options for a custom HTML header, footer, title, navbar links, and so on. If you don’t need something, just hide it!

Dashy offers multiple color themes, a UI color editor, and support for custom CSS. Since all of the properties use CSS variables, they are quite easy to override. In addition to themes, there is a host of icon options, such as Font Awesome, home lab icons, Material Design Icons, normal images, emojis, auto-fetched favicons, etc.
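Dashy is driven by a YAML configuration file (conf.yml). As a rough sketch of the shape of that file (the section name, service, URL, and theme below are made-up examples, not defaults):

```yaml
pageInfo:
  title: Home Lab              # dashboard title shown in the header
appConfig:
  theme: colorful              # one of Dashy's built-in themes
sections:
  - name: Media
    items:
      - title: Jellyfin
        url: http://192.168.1.10:8096
        icon: favicon          # auto-fetch the site's favicon
```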

Integrations

GIMP in a Pinch: Life after Desktop


So my Dell XPS 13 DE laptop running Ubuntu died on me today. Let’s just say I probably should not have attempted to be efficient and take a bath and work at the same time!

Unfortunately, as life always seems to go, you need something precisely when you don’t have it, and that is the case today. I have some pictures that I need to edit for a website, and I only know and use GIMP. I took a look at my PC inventory at home, and I had two options:

  1. Macbook Air: My roommate’s computer
  2. HP Chromebook 11: from a phase of my life when I attempted to streamline and simplify, which lasted two weeks

My roommate was using his computer, so that really left me with one option: the Chromebook. I also had no desire to learn another OS today, as I’ve done enough distro hopping in the last few months. I charged and booted up the Chromebook and started to figure out how I could get GIMP onto it. Interestingly enough, there are not many clear-cut options for running GIMP on an Android device. There was an option to run a Linux developer environment on the Chromebook, but it required 10GB of space, which I didn’t have. Therefore, option two was to find an app on the Google Play Store.

Typing GIMP brought me to an app called XGimp Image Editor from DMobileAndroid. I installed it and loaded an image, only to find this:

gimp-image-1

This is definitely nothing like GIMP, and it appeared to be very limited in functionality anyway. I could see why it had garnered a 1.4-star rating, as it is definitely not what someone expects when looking for something similar to GIMP.

So I took a look at the other options, and there was another app called GIMP from Userland Technologies. It costs $1.99, but it’s a one-time charge, and it seemed to be the only other option on the Play Store. The screenshots and description suggested that this would be the actual GIMP app I was using on my desktop, so I went ahead and downloaded it. Installation was relatively quick, and when I started running it, to my surprise, here is what I saw:

gimp-image-3

It appears that the application is basically a Linux desktop build that automatically launches the desktop version of GIMP. Therefore, it really is GIMP. Loading an image was also relatively easy, as it seamlessly connected to the folders on my Chromebook.

Geek Guide: Purpose-Built Linux for Embedded Solutions


The explosive growth of the Internet of Things (IoT) is just one of several trends that are fueling the demand for intelligent devices at the edge. Increasingly, embedded devices use Linux to leverage libraries and code as well as Linux OS expertise to deliver functionality faster, simplify ongoing maintenance, and provide the most flexibility and performance for embedded device developers.

This e-book looks at the various approaches to providing both Linux and a build environment for embedded devices and offers best practices on how organizations can accelerate development while reducing overall project cost throughout the entire device lifecycle.

Download PDF

How to Install and Uninstall KernelCare


In my previous article, I described what KernelCare is. In this article, I’m going to show you how to install and uninstall KernelCare, clear its cache, and cover other important details about the product. In case you don’t yet know it, here’s a short recap: KernelCare provides automated security updates to the Linux kernel, offering patches and error fixes for various Linux kernels.

So, if you are looking for anything similar, you have landed upon the right page. Let’s begin without further ado.

Prerequisites to Install KernelCare

Before installing KernelCare on your Linux system, ensure that you have one of the operating systems given below.

  • 64-bit RHEL/CentOS 5.x, 6.x, 7.x

  • CloudLinux 5.x, 6.x

  • Virtuozzo/PCS/OpenVZ 2.6.32

  • Debian 6.x, 7.x

  • Ubuntu 14.04

Note: If you already have KernelCare installed on your machine, it may be useful to know the currently installed version before installing again. To find the current version, run the following command as root:

/usr/bin/kcarectl --uname

Checking Kernel’s Compatibility with KernelCare

To check whether your current kernel is compatible with KernelCare, use the following command.

curl -s -L https://kernelcare.com/checker | python

Installing KernelCare

Run the following command to install KernelCare.

curl -s -L https://kernelcare.com/installer | bash

If you use an IP-based license, you don’t need to do anything more. However, if you use a key-based license, run the following command.

/usr/bin/kcarectl --register KEY

KEY is a registration key code string. It’s given to you when you sign up to purchase or to go through a trial of KernelCare. Let’s see an example.

[root@unixcop ~]# /usr/bin/kcarectl --register XXXXXXXXXXX

Server Registered

The above example shows a server being registered with a registration key code string.

If you get a “Key limit reached” error message, you first need to unregister a server (for example, one whose trial has ended). To do so, type:

kcarectl --unregister

Checking Whether the Patches Were Applied Successfully

To check whether the patches have been applied successfully, use the command given below.

/usr/bin/kcarectl --info

Now the software will check for new patches automatically every 4 hours.

If you want to run updates manually, run:

/usr/bin/kcarectl --update