Virtual Machine Startup Shells Closes the Digital Divide One Cloud Computer at a Time


Startup turns devices you probably already own - from smartphones and tablets to smart TVs and game consoles - into full-fledged computers.

Shells (shells.com), a new entrant in the virtual machine and cloud computing space, is excited to launch its new product, which gives users the freedom to code and create on nearly any device with an internet connection. Flexibility, ease of use, and competitive pricing are the focus for Shells, making it easy for a user to spin up their own virtual cloud computer in minutes. The company also offers multiple Linux distros (and continues to add more) to ensure users can have the computer they "want" and are most comfortable with.

The US-based startup Shells turns idle screens, including smart TVs, tablets, older or low-spec laptops, gaming consoles, smartphones, and more, into fully-functioning cloud computers. The company utilizes real computers, with Intel processors and top-of-the-line components, to send processing power into your device of choice. When a user accesses their Shell, they are essentially seeing the screen of the computer being hosted in the cloud - rather than relying on the processing power of the device they’re physically using.

Shells was designed to run seamlessly on a number of devices that most users likely already own; any device qualifies as long as it can open an internet browser or run one of Shells' dedicated applications for iOS or Android. Shells are always on and always up to date, ensuring speed and security while avoiding the need to constantly upgrade or buy new hardware.

Shells offers four tiers (Lite, Basic, Plus, and Pro) catering to casual users and professionals alike. Shells Pro targets the latter, offering a quad-core virtual CPU, 8GB of RAM, 160GB of storage, and unlimited access and bandwidth, making it a great option for software engineers, music producers, video editors, and other digital creatives.

Using your Shell for testing eliminates the worry associated with tasks or software that could potentially break the development environment on your main computer or laptop. Because Shells are running round the clock, users can compile on any device without overheating - and allow large compile jobs to complete in the background or overnight. Shells also enables snapshots, so a user can revert their system to a previous date or time. In the event of a major error, simply reinstall your operating system in seconds.

“What Dropbox did for cloud storage, Shells endeavors to accomplish for cloud computing at large,” says CEO Alex Lee. “Shells offers developers a one-stop shop for testing and deployment, on any device that can connect to the web. With the ability to use different operating systems, both Windows and Linux, developers can utilize their favorite IDE on the operating system they need. We also offer the added advantage of being able to utilize just about any device for that preferred IDE, giving devs a level of flexibility previously not available.”

“Shells is hyper focused on closing the digital divide as it relates to fair and equal access to computers - an issue that has been unfortunately exacerbated by the ongoing pandemic,” Lee continues. “We see Shells as more than just a cloud computing solution - it’s leveling the playing field for anyone interested in coding, regardless of whether they have a high-end computer at home or not.”

Follow Shells for more information on service availability, new features, and the future of “bring your own device” cloud computing:

Website: https://www.shells.com

Twitter: @shellsdotcom

Facebook: https://www.facebook.com/shellsdotcom

Instagram: https://www.instagram.com/shellscom

An Introduction to Linux Gaming thanks to ProtonDB


Video Games On Linux? 

In this article, we introduce and explain the newest compatibility feature for gaming, for all you dedicated video game fanatics.

Valve has released a new compatibility feature to advance Linux gaming, along with its own community of play testers and reviewers.

In recent years, great strides have been made toward making Linux and Unix systems more accessible to everyone. Now we come to a commonly asked question: can we play games on Linux? Of course! Well, almost. Let me explain.

Proton compatibility layer for Steam client 

With the rising popularity of Linux systems, Valve is getting ahead of the crowd yet again with Proton for its Steam client (the computer program that runs the games you purchase from Steam). Proton is a compatibility layer built on Wine and DXVK that lets Windows games run on Linux operating systems. Proton is backed by Valve itself and can easily be enabled on any Steam account for Linux gaming, through an integration called "Steam Play."

Lately, there has been a lot of controversy around rumors that Microsoft may someday release its own app store and restrict downloading software from the open internet. In response, many companies and software developers feel pressured to find a new "haven" for sharing content on the internet. Proton may be part of Valve's response, as the company works to make more of its games accessible to Linux users.

Activating Proton with Steam Play 

Proton is integrated into the Steam client through "Steam Play." To activate Proton, open your Steam client and click on Steam in the upper left corner, then click on Settings to open a new window.

Steam Client's settings window

 

From here, click on the Steam Play button at the bottom of the panel and check "Enable Steam Play for Supported Titles." Steam will then ask you to restart; click yes, and you are ready to play after the restart.

Your computer will now play all of Steam's whitelisted games seamlessly. But if you would like to try other games that are not guaranteed to work on Linux, also check "Enable Steam Play for All Other Titles."

What Happens if a Game has Issues?

Don't worry; this can and will happen with games that are not in Steam's whitelist. But there is help for you online, both on Steam and in Proton's growing community. Be patient and don't give up! There will always be a solution out there.

The Review of GUI LVM Tools


LVM (Logical Volume Manager) is a powerful storage management subsystem included in virtually every Linux distribution today. It provides users with a variety of valuable features to fit different requirements. The management tools that come with LVM are based on the command line interface, which is very powerful and well suited to automated/batch operations, but LVM's operations and configuration are quite complex because LVM itself is complex. So several software companies, including Red Hat, have released GUI-based LVM tools to help users manage LVM more easily. Let's review them here and see the similarities and differences between the individual tools.
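For context, here is a minimal sketch of the command-line workflow these GUI tools wrap (the device names /dev/sdb1 and /dev/sdc1 are illustrative):

pvcreate /dev/sdb1 /dev/sdc1             # initialize disks/partitions as physical volumes (PVs)
vgcreate vg_data /dev/sdb1 /dev/sdc1     # pool the PVs into a volume group (VG)
lvcreate -L 10G -n lv_home vg_data       # carve a 10GB logical volume (LV) out of the VG
mkfs.ext4 /dev/vg_data/lv_home           # put a filesystem on the LV as usual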

system-config-lvm (alternate name LVM GUI)

Provider: Red Hat

system-config-lvm is the first GUI LVM tool; it was originally released as part of Red Hat Linux, and it is also called "LVM GUI" simply because it was the first. Red Hat later created standalone installation packages (both RPM and DEB), so system-config-lvm can be used on other Linux distributions as well.

The main panel of system-config-lvm

system-config-lvm supports LVM-related operations only. Its user interface is divided into three parts. The left part is a tree view of disk devices and LVM devices (VGs); the middle part is the main view, which shows VG usage divided into LV and PV columns; the right part displays details of the selected object (PV/LV/VG). There are zoom in/zoom out buttons in the main view to control the display ratio, but this is not enough for displaying complex LVM information.

Different versions of system-config-lvm are not entirely consistent in how they organize devices. Some show both LVM devices and non-LVM devices (disks); others show LVM devices only. I have tried two versions: one shows only the LVM devices existing in the system (PV/VG/LV) and no other devices; the other can also display non-LVM disks, and a PV can be removed in its disk view.

The version which shows non-LVM disks

Supported operations

PV Operations

  • Delete PV
  • Migrate PV

VG Operations

  • Create VG
  • Append PV to VG/Remove PV from VG
  • Delete VG (Delete last PV in VG)

LV Operations

Boost Up Productivity in Bash – Tips and Tricks


Introduction

When you spend most of your day in the bash shell, it is not uncommon to waste time typing the same commands over and over again. This is pretty close to the definition of insanity.

Luckily, bash gives us several ways to avoid repetition and increase productivity.

Today, we will explore the tools we can leverage to optimize what I love to call “shell time”.

Aliases

Bash aliases are one way to define custom commands or override default ones.

You can consider an alias as a “shortcut” to your desired command with options included.

Many popular Linux distributions come with a set of predefined aliases.

Let's see the default aliases of Ubuntu 20.04. To do so, simply type "alias" and press [ENTER].

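On a stock Ubuntu 20.04 install, the output will look something like this:

alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'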

By simply issuing the command “l”, behind the scenes, bash will execute “ls -CF”.

It's as simple as that.

This is definitely nice, but what if we could specify our own aliases for the most used commands?! The answer is, of course we can!

One of the commands I use extremely often is “cd ..” to change the working directory to the parent folder. I have spent so much time hitting the same keys…

One day I decided it was enough and I set up an alias!

To create a new alias, type "alias" followed by the alias name (in my case I chose ".."), then "=", and finally the command you want an alias for, enclosed in single quotes.

Here is an example below.

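In other words (single quotes around the command, no spaces around the "="):

alias ..='cd ..'

Add that line to your ~/.bashrc to make the alias survive across sessions.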

Functions

Sometimes you will need to automate a complex command, perhaps one that accepts arguments as input. Under these constraints, aliases will not be enough to accomplish your goal, but no worries: there is always a way out!

Functions give you the ability to create complex custom commands which can be called directly from the terminal like any other command.

For instance, there are two consecutive actions I perform all the time: creating a folder and then cd-ing into it. To avoid the hassle of typing "mkdir newfolder" and then "cd newfolder", I have created a bash function called "mkcd", which takes the name of the folder to be created as an argument, creates the folder, and cds into it.

To declare a new function, we type the function name "mkcd", followed by "()", and then our complex command enclosed in curly brackets: "{ mkdir -vp "$@" && cd "$@"; }"
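Putting the pieces together, the declaration and a quick usage example look like this:

# create a folder (verbosely, with parent directories as needed) and change into it
mkcd () { mkdir -vp "$@" && cd "$@"; }

mkcd projects/demo    # makes projects/demo and leaves you inside it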

Case Study: Success of Pardus GNU/Linux Migration


In 2015, Eyüpsultan Municipality decided to use an open source operating system on its desktop computers.

The most important goal of the project was to ensure information security and reduce foreign dependency.

Based on the resulting research and analysis, a detailed migration plan was prepared.

As a first step, the licensed office software installed on all computers was removed, and LibreOffice was installed in its place.

Later, LibreOffice training was given to the municipal staff.


Meanwhile, preparations were made for the operating system migration.

It was decided to replace the existing licensed operating system with Pardus GNU/Linux, a distribution developed in Turkey.

The applications shipped with the Pardus GNU/Linux operating system were examined in detail, and unnecessary applications were removed.

A new ISO file was then created containing the applications used in Eyüpsultan Municipality.

This process automated the setup steps and reduced setup time.

While the project continued at full speed, the staff were again trained on LibreOffice and Pardus GNU/Linux.

After their training, the users took an exam.

The Pardus GNU/Linux operating system was installed on the computers of those who passed.

Those who failed were retrained and took the exam again.

As of 2016, 25% of the municipality's computers had been migrated.

Migration Project Implementation Steps

Analysis

A detailed inventory of all software and hardware products used in the institution was created. The analysis should go down to the level of departments, units, and individual personnel.

It should be evaluated whether extra costs will arise in the migration project.

Planning

A migration plan should be prepared and migration targets should be determined.

The duration of the migration should be calculated and the team that will carry out the migration should be determined.

Production

You can use an existing Linux distribution.

Or you can customize the distribution you will use according to your own preferences.

Making a customized ISO file will give you speed and flexibility.

It also helps you compensate for time lost to incorrect entries during setup.
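As a rough sketch, remastering an ISO can be as simple as unpacking it, adjusting the package set and preseed files, and rebuilding it (the paths and BIOS boot options below are illustrative; Pardus is Debian-based, so Debian-style tooling applies):

mkdir iso && sudo mount -o loop pardus.iso /mnt
cp -a /mnt/. iso/                  # unpack the original image
# ... customize packages, preseed answers, branding in iso/ ...
xorriso -as mkisofs -o pardus-custom.iso \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table iso/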

Test

Start using the ISO file you have prepared in a lab environment made up of the hardware you actually use.

Note any problems encountered during and after installation, and look for solutions.

BPF For Observability: Getting Started Quickly


How and Why for BPF

BPF is a powerful component in the Linux kernel and the tools that make use of it are vastly varied and numerous. In this article we examine the general usefulness of BPF and guide you on a path towards taking advantage of BPF’s utility and power. One aspect of BPF, like many technologies, is that at first blush it can appear overwhelming. We seek to remove that feeling and to get you started.

What is BPF?

BPF is the name, and it is no longer an acronym: it was originally Berkeley Packet Filter, then eBPF for Extended BPF, and now just BPF. BPF is a kernel and user-space observability scheme for Linux.

One description: BPF is a verified-to-be-safe, fast-to-switch-to mechanism for running code in Linux kernel space in reaction to events such as function calls, function returns, and tracepoints in kernel or user space.

To use BPF, one runs a program that is translated into instructions that will be run in kernel space. Those instructions may be interpreted or translated to native instructions; for most users the exact mechanism doesn't matter.

While in the kernel, the BPF code can perform actions in response to events: for example, creating stack traces, counting the events, or collecting counts into buckets for histograms.

Through this, BPF programs provide a fast, immensely powerful, and flexible means for deep observability of what is going on in the Linux kernel or in user space. Observability into user space from kernel space is possible, of course, because the kernel can control and observe code executing in user mode.
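As a taste of what this looks like in practice, here is a one-liner using the bpftrace front end (assuming bpftrace is installed); it attaches a kprobe and aggregates counts entirely in kernel space:

# count vfs_read() calls per process name until Ctrl-C
sudo bpftrace -e 'kprobe:vfs_read { @[comm] = count(); }'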

Running BPF programs amounts to having a user program make BPF system calls which are checked for appropriate privileges and verified to execute within limits. For example, in the Linux kernel version 5.4.44, the BPF system call checks for privilege with:

if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
        return -EPERM;

The BPF system call checks for a sysctl controlled value and for a capability. The sysctl variable can be set to one with the command

sysctl kernel.unprivileged_bpf_disabled=1

but to set it to zero you must reboot and make sure your system is not configured to set it to one at boot time.

Because BPF does its work in kernel space, significant time and overhead are saved by avoiding context switches and by not transferring large amounts of data back to user space.

Not all kernel functions can be traced, however. For example, if you were to try funccount-bpfcc '*_copy_to_user' you might get output like:

cannot attach kprobe, Invalid argument
Failed to attach BPF program b'trace_count_3' to kprobe b'_copy_to_user'

This is kind of mysterious. If you check the output from dmesg you would see something like:

A Linux Survey For Beginners


So you have decided to give the Linux operating system a try. You have heard it is a good, stable operating system with lots of free software, and you are ready to give it a shot. It is downloadable for free, so you get on the net, search for a copy, and you are in for a shock: there isn't one "Linux", there are many. Now you feel like a deer in the headlights. You want to make a wise choice but have no idea where to start. Unfortunately, this is where a lot of new Linux users give up. It is just too confusing.

The many versions of Linux are often referred to as “flavors” or distributions. Imagine yourself in an ice cream shop displaying 30+ flavors. They all look delicious, but it’s hard to pick one and try it. You may find yourself confused by the many choices but you can be sure you will leave with something delicious. Picking a Linux flavor should be viewed in the same way.

As with ice cream lovers, Linux users have their favorites, so you will hear people profess which is the "best". Of course, the best is the one that you conclude will fit your needs. That might not be the first one you try. According to linuxquestions.org there are currently 481 distributions, but you don't need to consider every one. The same source lists these distributions as "popular": Ubuntu, Fedora, Linux Mint, openSUSE, PCLinuxOS, Debian, Mageia, Slackware, CentOS, Puppy, Arch. Personally, I have tried only about five of these, and I have been a Linux user for more than 20 years. Today, I mostly use Fedora.

Many of these also have derivatives that are made for special purpose uses. For example, Fedora lists special releases for Astronomy, Comp Neuro, Design Suite, Games, Jam, Python Classroom, Security Lab, Robotics Suite. All of these are still Fedora, but the installation includes a large quantity of programs for the specific purpose. Often a particular set of uses can spawn a whole new distribution with a new name. If you have a special interest, you can still install the general one (Workstation) and update later.

Very likely one of these systems will suit you. Even within them there are subtypes and "window treatments" to customize your operating system. Gnome, Xfce, LXDE, and so on are different desktop environments available in all of the Linux flavors. Some try to look like Microsoft Windows, some try to look like a Mac. Some try to be original, lightweight, or graphically awesome. But that is best left for another article. You are running Linux no matter which of those you choose. If you don't like the one you chose, you can try another without losing anything. You should also know that some of these distributions are related, which can help simplify your choice.

 

Terminal Vitality

Terminal Vitality - Difference Engine

Ever since Douglas Engelbart flipped over a trackball and discovered a mouse, our interactions with computers have shifted from linguistics to hieroglyphics. That is, instead of typing commands at a prompt in what we now call a Command Line Interface (CLI), we click little icons and drag them to other little icons to guide our machines to perform the tasks we desire. 

Apple led the way to commercialization of this concept we now call the Graphical User Interface (GUI), replacing its pioneering and mostly keyboard-driven Apple // microcomputer with the original GUI-only Macintosh. After quickly responding with an almost unusable Windows 1.0 release, Microsoft piled on in later versions with the Start menu and push button toolbars that together solidified mouse-driven operating systems as the default interface for the rest of us. Linux, along with its inspiration Unix, had long championed many users running many programs simultaneously through an insanely powerful CLI. It thus joined the GUI party late with its likewise insanely powerful yet famously insecure X-Windows framework and the many GUIs such as KDE and Gnome that it eventually supported.


But for many years the primary role for X-Windows on Linux was gratifyingly appropriate given its name - to manage a swarm of xterm windows, each running a CLI. It's not that Linux is in any way incompatible with the Windows / Icon / Mouse / Pointer style of program interaction - the acronym this time being left as an exercise for the discerning reader. It's that we like to get things done. And in many fields where the progeny of Charles Babbage's original Analytical Engine are useful, directing the tasks we desire is often much faster through linguistics than by clicking and dragging icons.

 

A tiling window manager makes xterm overload more manageable

 

A GUI certainly made organizing many terminal sessions more visual on Linux, although not necessarily more practical. During one stint of my lengthy engineering career, I was building much software using dozens of computers across a network, and discovered the charms and challenges of managing them all through GNU's screen tool. Not only could a single terminal or xterm contain many command line sessions from many computers across the network, but I could also disconnect from them all as they went about their work, drive home, and reconnect to see how the work was progressing. This was quite remarkable in the early 1990s, when Windows 2 and Mac OS 6 ruled the world. It's rather remarkable even today.
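The core screen workflow is still just a handful of keystrokes (a sketch; the session name is arbitrary):

screen -S buildfarm     # start a named session; open more windows inside with Ctrl-a c
                        # detach with Ctrl-a d, drive home...
screen -r buildfarm     # ...then reattach exactly where you left off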

Bashing GUIs


Building A Dashcam With The Raspberry Pi Zero W


I've been playing around with the Raspberry Pi Zero W lately and having so much fun on the command line. For those uninitiated, it's a tiny ARM computer running Raspbian, a derivative of Debian. It has a 1 GHz processor that can be overclocked and 512 MB of RAM, in addition to wireless g and Bluetooth.


A few weeks ago I built a garage door opener with video and accessible via the net. I wanted to do something a bit different and settled on a dashcam for my brother-in-law's SUV.

I wanted the camera and Pi Zero W mounted on the dashboard and easy to remove. On boot it should autostart the RamDashCam (RDC), and there should also be 4 desktop scripts: dashcam.sh, startdashcam.sh, stopdashcam.sh, shutdownshutdown.sh. I also created a folder named video on the Desktop for the older video files. In addition, I needed a way to power the RDC when there is no power to the vehicle's USB ports. Lastly, I wanted its data accessible on the local LAN when the vehicle is at home.

Here is the parts list:

  1. Raspberry Pi Zero W kit (I got mine from Vilros.com)
  2. Raspberry Pi official camera
  3. Micro SD card, at least 32 gigs
  4. A 3D-printed case from thingiverse.com
  5. Portable charger, usually used to charge cell phones and tablets on the go
  6. Command strips (like double-sided tape that's easy to remove) or velcro strips

 

First I flashed the SD card with Raspbian, powered it up and followed the setup menu. I also set a static IP address.

Now to the fun stuff. Let's create a service so we can start and stop RDC via systemd. Using your favorite editor, navigate to /etc/systemd/system/, create "dashcam.service", and add the following:

[Unit]
Description=dashcam service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=forking
Restart=on-failure
RestartSec=1
User=pi
WorkingDirectory=/home/pi/Desktop
ExecStart=/bin/bash /home/pi/Desktop/startdashcam.sh

[Install]
WantedBy=multi-user.target

 

Now that that's complete, let's enable the service by running: sudo systemctl enable dashcam

I added these scripts to start and stop RDC on the Desktop so my brother-in-law doesn't have to mess around in the menus or command line. Remember to "chmod +x" these 4 scripts.

 

startdashcam.sh

#!/bin/bash

# remove files older than 3 days
find /home/pi/Desktop/video -type f -iname '*.flv' -mtime +3 -exec rm {} \;

# start dashcam service
sudo systemctl start dashcam

 

stopdashcam.sh
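A minimal sketch, assuming the stop script only needs to mirror the start script without the cleanup step:

#!/bin/bash

# stop dashcam service
sudo systemctl stop dashcam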

SeaGL – Seattle GNU/Linux Conference Happening This Weekend!


This Friday, November 13th, and Saturday, November 14th, from 9am to 4pm PST, the 8th annual SeaGL will be held virtually. This year features four keynotes and a mix of talks on FOSS tech, community, and history. SeaGL is absolutely free to attend and is being run with free software!

Additionally, we are hosting a pre-event career expo on Thursday, November 12th from 1pm to 5pm. Counselors will be available for 30 minute video sessions to provide resume reviews and career guidance.

Mission

The Seattle GNU/Linux conference (SeaGL) is a free, as in freedom and tea, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre/open source software, hardware, and culture.

SeaGL strives to be welcoming, enjoyable, and informative for professional technologists, newcomers, enthusiasts, and all other users of free software, regardless of their background knowledge; providing a space to bridge these experiences and strengthen the free software movement through mentorship, collaboration, and community.

Dates/Times

  • November 13th and 14th
  • Friday and Saturday
  • Main Event: 9am-4:30pm
  • TeaGL: 1-2:45pm, both days
  • Friday Social: 4:30-6pm
  • Saturday Party: 6-10pm
  • Pre-event Career Expo: 1-5pm, Thursday November 12th
  • All times in Pacific Timezone

Hashtags

- `#SeaGL2020`

- `#TeaGLtoasts`


Reference Links

Best contact: press@seagl.org

Hot Swappable Filesystems, as Smooth as Btrfs


Filesystems, like file cabinets or drawers, control how your operating system stores data. They also hold metadata, like filetypes, what is attached to the data, and who has access to that data.

Quite honestly, not enough people consider which filesystem to use for their computers.

Windows and macOS users have little reason to look into filesystems, because each has had one widely used filesystem since its inception: NTFS for Windows and HFS+ for macOS. For Linux users, there are plenty of different filesystem options to choose from. The current default in the Linux field is the Fourth Extended Filesystem, or ext4.

Currently there is discussion about changes in the Linux filesystem space. Much like the change of default init system to systemd a few years ago, there has been a push to change the default Linux filesystem to Btrfs. No, I'm not using slang or trying to insult you: Btrfs stands for the B-Tree filesystem. Many Linux users and sysadmins were not too happy with its initial changes. That could be because people are generally hesitant to change, or because the change may have been too abrupt. A friend once said, "I've learned that fear limits you and your vision. It serves as blinders to what may be just a few steps down the road for you." In this article I want to help ease the understanding of Btrfs and make the transition as smooth as butter. Let's go over a few things first.

What do Filesystems do?

Just to be clear, we can summarize what filesystems do and what they are used for. As mentioned before, filesystems control how data is stored after a program is no longer using it, how that data is accessed, where it is located, and what is attached to it. As a sysadmin, one of your many tasks and responsibilities is to maintain backups and manage filesystems. Partitioning helps separate different areas in business environments and is common practice for data retention. An example would be taking a 3TB hard disk and partitioning 1TB for your production environment, 1TB for your development environment, and 1TB for company documents and files. When accidents happen in a specific partition, only the data stored in that partition is affected, rather than the entire 3TB drive. A fun example would be a user testing a script in a development application that begins filling up disk space in the dev partition. Filling up a filesystem accidentally, whether from an application, a user's script, or anything else on the system, could bring an entire system to a halt. If data is split across separate partitions, only the affected partition fills up, so the production and company data partitions stay safe.
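To make this concrete, here is a minimal sketch of carving flexible "partitions" (subvolumes) out of a single Btrfs filesystem; /dev/sdb1 is an illustrative spare partition:

sudo mkfs.btrfs /dev/sdb1                      # create the filesystem
sudo mount /dev/sdb1 /mnt
sudo btrfs subvolume create /mnt/prod          # subvolumes share the same pool of space
sudo btrfs subvolume create /mnt/dev           # but can be mounted and snapshotted separately
sudo btrfs subvolume snapshot /mnt/prod /mnt/prod-backup   # instant safety copy before risky work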

How to Try Linux Without a Classical Installation


For many different reasons, you may not be able to install Linux on your computer.

Maybe you are not familiar with words like partitioning and bootloader, maybe you share the PC with your family, maybe you don't feel comfortable wiping your hard drive and starting over, or maybe you just want to see how it looks before proceeding with a full installation.

I know, it feels frustrating, but no worries, we have got you covered!

In this article, we will explore several ways to try Linux out without the hassle of a classical installation.

Choosing a distribution

In the Linux world, there are several distributions, which are quite different from one another.

Some are general-purpose operating systems; others are created with a specific use case in mind. That being said, I know how confusing this can be for a beginner.

If you are taking your first steps with Linux and are still not sure how and why to pick one distribution over another, there are several resources available online to help you.

A perfect example of these resources is the website https://distrochooser.de/, which walks you through a questionnaire to understand your needs and advises which distribution could be a good fit for your use case.

Once you have chosen your distribution, chances are high that it has a live CD image available for testing before installation. If so, below you can find several ways to "boot" your live CD ISO image.

MobaLiveCD

MobaLiveCD is an amazing open source application which lets you run a live Linux system on Windows with nearly zero effort.

Download the application from the official site's download page and run it.

It will present a screen where you can choose either a Linux Live CD ISO file or a bootable USB drive.


Click on "Run the LiveCD", select your ISO file, and select "No" when asked whether you want to create a hard disk image.


Your Linux virtual machine will boot up “automagically”.
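MobaLiveCD is essentially a friendly wrapper around the QEMU emulator, so if you already have QEMU available, a rough command-line equivalent (distro.iso being whichever image you downloaded) is:

qemu-system-x86_64 -m 2048 -cdrom distro.iso -boot d    # 2GB of RAM, boot from the CD image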

Slackware

How to Create EC2 Duplicate Instance with Ansible


Many companies like mine use AWS infrastructure as a service (IaaS) heavily. Sometimes we want to perform a potentially risky operation on an EC2 instance, and as long as we do not work with immutable infrastructure, it is imperative to be prepared for an instant revert.

One solution is a script that performs instance duplication, but in modern environments, where unification is essential, it is wiser to use widely known software instead of making up a custom script.

Here comes Ansible!

Ansible is a simple automation software. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. It is marketed as a tool for making complex changes like zero-downtime rolling patching, therefore we have used it for this straightforward snapshotting task.

Requirements

For this example we will only need Ansible; in my case it was version 2.9. Subsequent releases introduced a major change with collections, so let's stick with this version for simplicity.

Because we are working with AWS, we require a minimal set of permissions, which must allow us to:

  • Create AWS snapshots
  • Register images (AMI)
  • Start and stop EC2 instances

Environment preparation

Since I am forced to work on Windows, I have utilized a Vagrant instance. Please find the Vagrantfile content below.

We are launching a virtual machine with CentOS 7 and Ansible installed.

For security reasons, Ansible by default does not read configuration from a mounted location, therefore we have to explicitly point to the path /vagrant/ansible.cfg.

Listing 1. Vagrantfile for our research

Vagrant.configure("2") do |config|
  config.vm.box = "geerlingguy/centos7"
  config.vm.hostname = "awx"
  config.vm.provider "virtualbox" do |vb|
    vb.name = "AWX"
    vb.memory = "2048"
    vb.cpus = 3
  end
  config.vm.provision "shell", inline: "yum install -y git python3-pip"
  config.vm.provision "shell", inline: "pip3 install ansible==2.9.10"
  config.vm.provision "shell", inline: "echo 'export ANSIBLE_CONFIG=/vagrant/ansible.cfg' >> /home/vagrant/.bashrc"
end

First tasks

In the first lines of the playbook we specify a few meta values. Some of them, like name, hosts and tasks, are mandatory. Others provide auxiliary functions.

Listing 2. duplicate_ec2.yml playbook, first lines
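A playbook of this kind typically opens along these lines (a sketch with illustrative names, not the author's original listing):

---
- name: Duplicate an EC2 instance
  hosts: localhost        # the AWS modules talk to the API, no remote hosts needed
  connection: local
  gather_facts: false
  tasks:
    # snapshot, AMI registration, and instance launch tasks follow here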

TCP Analysis with Wireshark


Transmission control is an essential aspect of network activity and governs the behavior of many services we take for granted. When sending your emails or just browsing the web, you are relying on TCP to send and receive your packets in a reliable fashion. Thanks to two DARPA scientists, Vinton Cerf and Bob Kahn, who developed TCP/IP in the 1970s, we have a specific set of rules that define how we communicate over a network. When Vinton and Bob first conceptualized TCP/IP, they set up a basic network topology and a device that could interface between two other hosts.

Figure 1: Network A and Network B, connected by a gateway

In Figure 1 we have two networks connected by a single gateway. The gateway plays an essential role in the development of any network and bears the responsibility of routing data properly between these two networks.

Since the gateway must understand the addresses of each host on the network, it is necessary to have a standard format in every packet that arrives. Vince and Bob called this the internetwork header prefixed to the packet by the source host.

Internetwork header

The source and destination entries, along with the IP address, uniquely identify every host on the network so that the gateway can accurately forward packets.

The sequence number and byte count identify each packet sent from the source and account for all of the text within the segment. The receiver can use these to determine whether it has already seen the packet, and discard it if necessary.

The checksum is used to validate each packet being sent, to ensure error-free transmission. This checksum uses a pseudo-header and covers the data of the original TCP header, such as the source/destination entries, header length, and byte count.
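Wireshark's command-line sibling, tshark, makes it easy to watch these header fields in action (the interface and file names are illustrative):

sudo tshark -i eth0 -f "tcp port 80"                     # live-capture TCP traffic on port 80
tshark -r capture.pcap -Y tcp.analysis.retransmission    # show retransmissions flagged by TCP analysis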

How to Add a Simple Progress Bar in Shell Script


At times, we need to write shell scripts that are interactive, and the users executing them need to monitor the progress. For such requirements, we can implement a simple progress bar that gives an idea of how much of its task the script has completed.

To implement it, we only need to use the “echo” command with the following options and a backslash-escaped character.

-n : do not append a newline
-e : enable interpretation of backslash escapes
\r : carriage return (go back to the beginning of the line without printing a newline)

For the sake of understanding, we will use the "sleep 2" command to represent an ongoing task or step in our shell script. In a real scenario, this could be anything like downloading files, creating backups, or validating user input. Also, for this example we assume only four steps in our script, which is why we use 20, 40, 60, 80 (%) as progress indicators. This can be adjusted to the number of steps in a script. For instance, a script with three steps can be represented by 33, 66, 99 (%), and a script with ten steps by 10-90 (%).

The implementation looks like the following:

echo -ne '>>>                       [20%]\r'
# some task
sleep 2
echo -ne '>>>>>>>                   [40%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>            [60%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>>>>>>>>>>   [80%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>>>>>>>>>>>>>[100%]\r'
echo -ne '\n'

In effect, every time the "echo" command executes, it replaces the output of the previous "echo" command in the terminal, thus presenting a simple progress bar. The last "echo" command simply prints a newline (\n) to return the prompt to the user.

When executed, the script redraws the same line every two seconds: the row of arrows extends to the right as each step completes, the percentage counts up to 100%, and the prompt returns on a new line at the end.
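If your script has more steps than you care to draw by hand, the same idea generalizes into a small function; a sketch, assuming a fixed bar width of 26 characters:

#!/bin/bash

# print an in-place progress bar for step $1 of $2
progress() {
    local current=$1 total=$2 width=26
    local filled=$(( current * width / total ))
    local pct=$(( current * 100 / total ))
    local bar
    bar=$(printf '>%.0s' $(seq 1 "$filled"))
    printf '\r%-26s[%3d%%]' "$bar" "$pct"
}

total=5
for step in $(seq 1 "$total"); do
    sleep 2                 # some task
    progress "$step" "$total"
done
printf '\n'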

Ubuntu 20.10 “Groovy Gorilla” Arrives With Linux 5.8, GNOME 3.38, Raspberry Pi 4 Support


Just two days ago, Ubuntu marked the 16th anniversary of its first ever release, Ubuntu 4.10 “Warty Warthog,” which showed Linux could be a more user friendly operating system.

Back to now: after a six-month development cycle following the release of the current long-term Ubuntu 20.04 "Focal Fossa," Canonical has announced a new version, Ubuntu 20.10 "Groovy Gorilla," along with its seven official flavors: Kubuntu, Lubuntu, Ubuntu MATE, Ubuntu Kylin, Xubuntu, Ubuntu Budgie, and Ubuntu Studio.

Ubuntu 20.10 is a short-term or non-LTS release, which means it will be supported for 9 months, until July 2021. Though v20.10 may not seem a major release, it does come with a lot of exciting new features. So, let's see what Ubuntu 20.10 "Groovy Gorilla" has to offer:

New Features in Ubuntu 20.10 “Groovy Gorilla”


Ubuntu desktop for Raspberry Pi 4

Starting with one of the most important enhancements: Ubuntu 20.10 is the first Ubuntu release to feature desktop images for the Raspberry Pi 4. Yes, you can now download and run the Ubuntu 20.10 desktop on Raspberry Pi models with at least 4GB of RAM.

Both Server and Desktop images also support the new Raspberry Pi Compute Module 4. The 20.10 images may still boot on earlier models, but the new Desktop images are built only for the arm64 architecture and officially support only the Pi 4 variants with 4GB or 8GB of RAM.

Linux Kernel 5.8


Replacing the previous Linux kernel 5.4, Ubuntu 20.10 ships the new Linux kernel 5.8, dubbed "the biggest release of all time" by Linus Torvalds, as it contains the highest number of commits to date, over 17,595.

So it's obvious that Linux 5.8 brings numerous updates, new features, and hardware support. For instance: a kernel event notification mechanism, Thunderbolt support for Intel Tiger Lake and non-x86 systems, extended IPv6 Multi-Protocol Label Switching (MPLS) support, inline encryption hardware support, and initial support for booting POWER10 processors.

GNOME 3.38 Desktop Environment


Another key change in Ubuntu 20.10 is the latest version of the GNOME desktop environment, which enhances the visual appearance, performance, and user experience of Ubuntu.

One of my favorite features that GNOME 3.38 introduces is a much-needed separate “Restart” button in the System menu.

Among other enhancements, GNOME 3.38 also includes:

  • Better multi-monitor support
  • Revamped GNOME Screenshot app
  • Customizable App Grid with no “Frequent Apps” tab
  • Battery percentage indicator
  • New Welcome Tour app written in Rust
  • Core GNOME apps improvements

Share Wi-Fi hotspot Via QR Code

If you’re the person who wants to share the system’s Internet with other devices wirelessly, this feature of sharing Wi-Fi hotspot through QR code will definitely please you.

Thanks to GNOME 3.38, you can now turn your Linux system into a portable Wi-Fi hotspot by sharing a QR code with devices like laptops, tablets, and mobiles.

Add events in GNOME Calendar app

Tend to forget events? The pre-installed GNOME Calendar app now lets you add new events (birthdays, meetings, reminders, releases), which are displayed in the message tray. Instead of adding new events manually, you can also sync events from your Google, Microsoft, or Nextcloud calendars after adding the online accounts in the settings.

Active Directory Support

In the Ubiquity installer, Ubuntu 20.10 adds an optional feature to enable Active Directory (AD) integration. If you check the option, you'll be directed to configure AD by providing the domain, administrator, and password.

Tools and Software upgrade

Ubuntu 20.10 also updates its tools, software, and subsystems to new versions. These include:

  • glibc 2.32, GCC 10, LLVM 11
  • OpenJDK 11
  • rustc 1.41
  • Python 3.8.6, Ruby 2.7.0, PHP 7.4.9
  • perl 5.30
  • golang 1.13
  • Firefox 81
  • LibreOffice 7.0.2
  • Thunderbird 78.3.2
  • BlueZ 5.55
  • NetworkManager 1.26.2

Other enhancements to Ubuntu 20.10:

  • Nftables replaces iptables as default backend for the firewall
  • Better support for fingerprint login
  • Cloud images with KVM kernels boot without an initramfs by default
  • Snap pre-seeding optimizations for boot time improvements

The full release notes for Ubuntu 20.10 are also available to read.

How To Download Or Upgrade To Ubuntu 20.10

If you’re looking for a fresh installation of Ubuntu 20.10, download the ISO image available for several platforms such as Desktop, Server, Cloud, and IoT.

But if you're already using a previous version of Ubuntu, you can easily upgrade your system to Ubuntu 20.10. To upgrade, you must be on Ubuntu 20.04 LTS, as you cannot reach 20.10 directly from 19.10, 19.04, 18.10, 18.04, 17.04, or 16.04; you should first hop to v20.04 and then to the latest v20.10.

As Ubuntu 20.10 is a non-LTS version, and by design Ubuntu only notifies you of new LTS releases, you need to upgrade manually, either via the GUI using the built-in Software Updater tool or via the command line in the terminal.

For command line method, open terminal and run the following commands:

sudo apt update && sudo apt upgrade

sudo do-release-upgrade -d -m desktop

Or else, if you’re not a terminal-centric person, here’s an official upgrade guide using a GUI Software Updater.

Enjoy Groovy Gorilla!

Btrfs on CentOS: Living with Loopback


Introduction

The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capabilities, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, none can argue that btrfs is unfinished: many features are very new, and stability concerns remain for common functions.

Most of the intended goals of btrfs have been met. However, Red Hat famously cut btrfs support as of its 7.4 release and has allowed the code to stagnate in its backported kernel since that time. In a seeming juxtaposition, the Fedora project announced its intention to adopt btrfs as the default filesystem for variants of its distribution. SUSE has maintained btrfs support for its own distribution and the greater community for many years.

For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (i.e., deduplication, RAID, ext4 conversion) aren't really germane to minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn. Despite these features, there are many usage patterns that are not directly appropriate for btrfs: it is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.
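A minimal sketch of the loopback approach on stock CentOS (the file path and size are illustrative; lzo compression is chosen because older backported kernels predate zstd support):

truncate -s 10G /var/btrfs.img             # sparse backing file
mkfs.btrfs /var/btrfs.img
mkdir -p /mnt/btrfs
mount -o loop,compress=lzo /var/btrfs.img /mnt/btrfs    # transparent compression from day one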

How to Secure Your Website with OpenSSL and SSL Certificates


The Internet has become the number one resource for news, information, events, and all things social. As most people know, there are many ways to create a website of your own and capture your own piece of the internet to share your stories, ideas, or things you like with others. When doing so, it is important to make sure you stay protected on the internet, the same way you would in the real world. There are many steps to take to stay safe in the real world; in this article, however, we will be talking about staying secure on the web with an SSL certificate.

OpenSSL is a command line tool we can use as a type of "bodyguard" for our webservers and applications. It can be used for a variety of tasks related to HTTPS, such as generating private keys and CSRs (certificate signing requests). This article will break down what OpenSSL is, what it does, and give examples of how to use it to keep your website secure. Most online web/domain platforms provide SSL certificates for a fixed yearly price. This method, although it takes a bit of technical knowledge, can save you some money and keep you secure on the web.

* For example purposes we will use testmastersite.com for commands and examples

How this guide may help you:

  • Using OpenSSL to generate and configure CSRs
  • Understanding SSL certificates and their importance
  • Learn about certificate signing requests (CSRs)
  • Learn how to create your own CSR and private key
  • Learn about OpenSSL and its common use cases

Requirements

OpenSSL

The first thing to do is generate a 2048-bit RSA key pair on your machine. The pair I'm referring to comprises your private and public keys. You can use various tools online to do this, but for this example we will work with OpenSSL.
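Here is what that looks like with OpenSSL, generating the private key and a CSR in one step (the subject fields are placeholders to adapt):

openssl req -new -newkey rsa:2048 -nodes \
    -keyout testmastersite.key -out testmastersite.csr \
    -subj "/C=US/ST=State/L=City/O=Example Org/CN=testmastersite.com"

The .key file is your private key (keep it secret); the .csr file is what you submit to a certificate authority.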

What are SSL certificates and who cares?

According to GlobalSign.com, an SSL certificate is a small data file that digitally binds a cryptographic key to an organization's details. When installed on a webserver, it activates the padlock and the https protocol and allows secure connections from a web server to a browser. Let me break that down for you. An SSL certificate is like a bodyguard for your website. To confirm that a site is using SSL, you can typically check that the site's URL begins with https rather than http; the "s" stands for Secure.

  • Example SECURE Site: https://www.testmastersite.com/

Pretty Good Privacy (PGP) and Digital Signatures


If you have ever sent a plaintext confidential email to someone (most likely you have), have you ever wondered whether that mail could be tampered with or read by anyone in transit? If not, you should!

Any unencrypted email is like a postcard: it can be seen during its transit by anyone with the required skills (crackers/security hackers, corporations, governments).

In 1991, Phil Zimmermann, a free speech activist and anti-nuclear pacifist, developed Pretty Good Privacy (PGP), the first software available to the general public that utilized RSA (a public key cryptosystem, which we will discuss later) for email encryption and signing. Zimmermann, after having a friend post the program on the worldwide Usenet, was prosecuted by the U.S. government; he was later charged by the FBI with illegal weapon export, because encryption tools were considered as such (all charges were eventually dropped). Zimmermann later founded PGP Inc., which is now part of Symantec Corporation.

In 1997 PGP Inc. submitted a standardization proposal to the Internet Engineering Task Force. The standard was called OpenPGP and was defined in 1998 in the IETF document RFC 2440. The latest version of the OpenPGP standard is described in RFC 4880, published in 2007.

Nowadays there are many OpenPGP-compliant products; the most widespread is probably GnuPG (GNU Privacy Guard, or GPG for short), which has been developed by Werner Koch since 1999. GnuPG is free, open source, and available for several platforms. It is a command-line-only tool.
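For a flavor of day-to-day GnuPG, the common operations each map to a single command (the recipient address is illustrative and assumes keys have already been generated or imported):

gpg --full-generate-key                                   # create your key pair interactively
gpg --encrypt --recipient alice@example.com message.txt   # writes message.txt.gpg
gpg --decrypt message.txt.gpg                             # uses your private key
gpg --detach-sign message.txt                             # writes signature message.txt.sig
gpg --verify message.txt.sig message.txt                  # check the signature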

PGP is used for digital signatures, encryption (and decryption, obviously: nobody would use software that only encrypts!), compression, and Radix-64 conversion.

In this article, we will explain encryption and digital signatures.

So what is encryption, how does it work, and how does it benefit us?

Encryption (Confidentiality)

Encryption is the process of converting information into ciphertext, an unreadable form. A very simple example of encrypting text:

Hello this is Knownymous and this is a ciphertext.

Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.

If you read it carefully, you will notice that every letter of the English alphabet has been replaced by the letter 13 places after it in the alphabet, so 13 is the key needed to decrypt it. This scheme is known as a Caesar cipher (yes, the method is named after Julius Caesar).
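This particular 13-shift (ROT13) is easy to reproduce yourself; tr does it in one line:

echo 'Hello this is Knownymous and this is a ciphertext.' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# prints: Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.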

Since then, many encryption techniques (cryptography) have been developed, such as the Diffie-Hellman key exchange (DH) and RSA.

The techniques can be used in two ways: