It's another Tuesday and another excuse to sip some red while doing some live Linux and open-source experimentation. Yes, it's time for Cooking with Linux (without a Net), and on today's show, I'll show you how to edit a video using the Kdenlive video editor, how to trim said video, adjust audio, fade between clips and apply all sorts of fun effects. Then, I'll show you how to turn that masterpiece into a video format suitable for uploading to YouTube! All of it live, on camera, and without the benefit of post-production editing—therefore providing a high probability of falling flat on my face. Once we're done doing art, I'll try out ArcoLinux, another distribution you've probably never heard of, and I'll go through the installation for you. If it wasn't already obvious, this is a pre-recorded video of a live show.
News briefs for June 19, 2018.
Red Hat today launched Red Hat Process Automation Manager 7, which is "a comprehensive, cloud-native platform for developing business automation services and process-centric applications across hybrid cloud environments". This new release expands some key capabilities including cloud native application development, dynamic case management and low-code user experience. You can learn more and get started here.
The free, open-source Brackets editor, which focuses on web development and design, released version 1.13 this week. Linux Uprising reports that the new release features "the ability to open remote files, drag and drop support for the file tree view, an option to automatically update Brackets, and bug fixes". See also the release notes on GitHub for more info.
Qt announced the release of version 5.11.1 today. This release is the first patch release for the 5.11 series and doesn't include any new functionality, but it does provide more than 150 bug fixes and 700 important changes. See the Change Files page for details.
Today, June 19th, has been declared FreeBSD Day. Visit the website for information on ways you can help them celebrate this 25th anniversary.
Happy Birthday to It's FOSS! Visit the website for giveaways and more details on It's FOSS's 6th birthday celebration.
There's an effort under way to reduce and ultimately remove all system call invocations from within kernel space. Dominik Brodowski was leading this effort, and he posted some patches to remove a lot of instances from the kernel. Among other things, he said, these patches would make it easier to clean up and optimize the syscall entry points, and also easier to clean up the parts of the kernel that still needed to pretend to be in userspace, just so they could keep using syscalls.
The rationale behind these patches, as expressed by Andy Lutomirski, ultimately was to prevent user code from ever gaining access to kernel memory. Sharing syscalls between kernel space and user space made that impossible at the moment. Andy hoped the patches would go into the kernel quickly, without needing to wait for further cleanup.
Linus Torvalds had absolutely no criticism of these patches, and he indicated that this was a well desired change. He offered to do a little extra housekeeping himself with the kernel release schedule to make Dominik's tasks easier. Linus also agreed with Andy that any cleanup effort could wait—he didn't mind accepting ugly patches to update the syscall calling conventions first, and then accept the cleanup patches later.
Ingo Molnar predicted that with Dominik's changes, the size of the compiled kernel would decrease—always a good thing. But Dominik said no, and in fact he ran some quick numbers for Ingo and found that with his patches, the compiled kernel was actually a few bytes larger. Ingo was surprised but not mortified, saying the slight size increase would not be a showstopper.
This project is similar—although maybe smaller in scope—to the effort to get rid of the big kernel lock (BKL). In the case of the BKL, no one could figure out for years even how to begin to replace it, until finally folks decided to convert all BKL instances into identical local implementations that could be replaced piecemeal with more specialized and less heavyweight locks. After that, it was just a question of slogging through each one until finally even the most finicky instances were replaced with more specialized locking code.
Dominik seems to be using a similar technique now, in which areas of the kernel that still need syscalls can masquerade as user space, while areas of the kernel that are easier to fix get cleaned up first.
Note: if you're mentioned above and want to post a response above the comment section, send a message with your response text to firstname.lastname@example.org.
News briefs for June 18, 2018.
Feral Interactive announced this morning that Total War: WARHAMMER II is coming to Linux and macOS this year. You can view the trailer here. Pricing and system requirements will be announced closer to the release.
Starting today, Red Hat announced that "all new Red Hat-initiated open source projects that opt to use GPLv2 or LGPLv2.1 will be expected to supplement the license with the cure commitment language of GPLv3". The announcement notes that this development is the latest in "an ongoing initiative within the open source community to promote predictability and stability in enforcement of GPL-family licenses".
Linspire announced the release of 8.0 Alpha 1 yesterday. This release marks the beginning stages of the new Linspire release, scheduled for around Christmas, and is not intended for use in production environments. New features include an Ubuntu 18.04 base, a new GUI layout, kernel 4.15.0-23, MATE 1.20.1, Google Chrome 67 and more.
Yesterday marked the end of security support for Debian GNU/Linux 8 "Jessie", Softpedia News reports. If you haven't already done so, upgrade now.
Phoronix reports on features that didn't make it into the mainline Linux kernel 4.18. Work that isn't being mainlined includes Bcachefs, NOVA, Reiser4, WireGuard, LLVM Linux and more.
Want to distribute Python programs to your Python-less clients? PyInstaller is the answer.
If you're used to working with a compiled language, the notion that you would need to have a programming language around, not just for development but also for running an application, seems a bit weird. Just because a program was written in C doesn't mean you need a C compiler in order to run it, right?
But of course, interpreted and byte-compiled languages do require the original language, or a version of it, in order to run. True, Java programs are compiled, but they're compiled into bytecodes then executed by the JVM. Similarly, .NET programs cannot run unless the CLR is present.
Even so, many of the students in my Python courses are surprised to discover that if you want to run a Python program, you need to have the Python language installed. If you're running Linux, this isn't a problem. Python has come with every distribution I've used since 1995. Sometimes the Python version isn't as modern as I'd like, but the notion of "this computer can't run Python programs" isn't something I've had to deal with very often.
However, not everyone runs Linux, and not everyone's computer has Python on it. What can you do about that? More specifically, what can you do when your clients don't have Python and aren't interested in installing it? Or what if you just want to write and distribute an application in Python, without bothering your users with additional installation requirements?
In this article, I discuss PyInstaller, a cross-platform tool that lets you take a Python program and distribute it to your users, such that they can treat it as a standalone app. I also discuss what it doesn't do, because many people who think about using PyInstaller don't fully understand what it does and doesn't do.
Running Python Code
Like Java and .NET, Python programs are compiled into bytecodes, high-level commands that don't correspond to the instructions of any actual computer, but that reference something known as a "virtual machine". There are a number of substantial differences between Java and Python though. Python doesn't have an explicit compilation phase; its bytecodes are pretty high level and connected to the Python language itself, and the compiler doesn't do that much in terms of optimization. The correspondence between Python source code and the resulting bytecodes is basically one-to-one; you won't find the bytecode compiler doing fancy things like inlining code or optimizing loops.
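You can see this bytecode layer for yourself with the standard library's dis module. Here's a minimal example (the exact opcode names vary between CPython versions, so treat the listing below as illustrative):

```python
# Disassemble a small function to see the bytecodes CPython compiles it to.
import dis

def add(a, b):
    return a + b

# Prints a bytecode listing containing opcodes such as LOAD_FAST and
# RETURN_VALUE; the arithmetic opcode is BINARY_ADD on older CPython
# releases and BINARY_OP on 3.11 and newer.
dis.dis(add)
```

Notice how closely the listing tracks the source: load the two arguments, perform the addition, return the result. That's the one-to-one correspondence described above.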
It's another cartoon in need of a caption! You submit your caption, we choose three finalists, and readers vote for their favorite. The winning caption for this cartoon will appear in the August issue of Linux Journal.
To enter, simply type in your caption in the comments below or email us, email@example.com.
A script a day will allow you some freedom to play and build other useful and more complicated scripts. Every day, I attempt to make my life easier—by this I mean, trying to stop doing repetitive tasks. If a process is repeatable, it can be scripted and automated. The idea of automating everything is not new, but try automating a command on a remote host.
SSH is very flexible, and it comes with many options. My absolute favorite is its ability to let you run a command on a remote server by passing the -t flag. An example:

ssh -t firstname.lastname@example.org 'cat /etc/hosts'

This will ssh to webserver1.test.com, run cat /etc/hosts in your shell and return the output.
For efficiency, you could create an ssh-key pair. It's a simple process of creating a passwordless public and private keypair. To set this up, use ssh-keygen, and accept the defaults, ensuring you leave the passphrase blank:

ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/adam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): LEAVE BLANK
Enter same passphrase again:
Your identification has been saved in /home/adam/.ssh/id_rsa.
Your public key has been saved in /home/adam/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:jUxrQRObADE8ardXMT9UaoAcOcQPBEKGU622646P8ho email@example.com
The key's randomart image is:
+---[RSA 2048]----+
|B*++*Bo.=o       |
|.+.              |
|=*=              |
+----[SHA256]-----+
Once completed, copy the public key to the target server. To do this, use ssh-copy-id:

ssh-copy-id firstname.lastname@example.org
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/adam/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
email@example.com's password: ********

Number of key(s) added: 1
You will be asked for the password of the target server. If you have set this up correctly, you won't be asked for your password the next time you ssh to your target.
Execute the original example. It should be quicker now that you don't need to enter your password.
If you have a handful of servers and want to report the running kernel versions, you can run uname -r from the command line, but to do this on multiple devices, you'll need a script.
Start with a file with a list of your servers, called server.txt, and then run your script to iterate over each server and return the required information:
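A minimal sketch of such a script might look like the following (the function name report_kernels and the BatchMode option are my own illustrative choices, not part of the original article):

```shell
#!/bin/sh
# Read hostnames, one per line, from the file given as the first
# argument and print each host's running kernel version over ssh.
# Assumes the passwordless key setup from the previous section.
report_kernels() {
    while read -r host; do
        printf '%s: ' "$host"
        # BatchMode=yes makes ssh fail fast instead of prompting
        # for a password if the keys aren't in place.
        ssh -o BatchMode=yes "$host" uname -r
    done < "$1"
}

# usage: report_kernels server.txt
```

Run it against your server.txt, and you get one "hostname: kernel-version" line per server.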
News briefs for June 15, 2018.
Purism detailed some of its future plans for PureOS in a blog post this morning. The team is looking into Librem 5 specific-image builds, and besides the ARM64 architecture, they also are "researching usage of OSTree, Flatpak, and a couple of other new technologies to use by default in PureOS on the desktop and/or the phone". In addition, "PureOS is planning to host its own Flathub instance (dedicated to Freedom, of course) so upstream developers can just package their app and submit it to PureOS's flathub if they don't want to trouble themselves with system-wide dependencies." Also, part of Purism's plans for handling apps includes developing "an ethical app store that will provide users with an option to donate, 'pay what you want', or 'subscribe' (support as a patron) the apps you use".
Ars Technica reported this week that "a single person or group may have made as much as $90,000 over 10 months by spreading 17 malicious images that were downloaded more than 5 million times from Docker Hub." A user first complained of the backdoor in September, but nothing was done, and 14 more malicious images were submitted. See Kromtech's report for more details on the cryptojacking. And note that "despite the images being pulled from Docker Hub, many servers that installed the images may still be infected."
Samsung yesterday announced its new Chromebook Plus 2-in-1 convertible laptop, running the Linux-based ChromeOS. The Chromebook Plus "is equipped with a built-in pen and offers a light, thin and stylish design that delivers versatility, portability and a premium experience at a competitive price point". It will be available starting June 24 from Best Buy for $499.99.
Fedora 29 will fully support the FreeDesktop.org Boot Loader Specification, Phoronix reports. With this change Fedora hopes to "simplify the kernel installation process significantly and make it more consistent across the different architectures. This will also make it easier for automation tools to manage the bootloader menu options since it will just be a matter of adding, removing or editing individual BLS entry files in a directory."
News briefs for June 14, 2018.
openSUSE Leap 15, released two weeks ago, is now offering images for Raspberry Pis, Beagle Boards, Arndale board, CuBox-i computers, OLinuXino and more. See the openSUSE blog post for more information on how "makers can leverage openSUSE Leap 15 images for aarch64 and Armv7 on Internet of Things (IoT) and embedded devices" and for download links.
Intel yesterday announced yet another security vulnerability with its Core-based microprocessors. According to ZDNet, Lazy FP state restore "can theoretically pull data from your programs, including encryption software, from your computer regardless of your operating system." Note that Lazy State does not affect AMD processors.
Adblock Plus creators, eyeo, have introduced a beta Chrome extension called Trusted News, which "will use blockchain to help you verify whether a site is trustworthy", Engadget reports. It currently uses four established fact-checker sites, but "the eventual plan is to decentralize the database with the Ethereum blockchain and use game-like token mechanics to reward everyday users for submitting feedback while protecting against trolls."
Untangle yesterday released NG Firewall 14.0. New features include "enhanced support of SD-WAN networking architectures in order to reduce costs for businesses with distributed, branch and remote offices and enable fast and flexible deployment, while ensuring a consistent security posture."
It's like an extra-geeky episode of Cribs featuring single-board computers.
I'm a big fan of DIY projects and think that there is a lot of value in doing something yourself instead of relying on some third party. I mow my own lawn, change my own oil and do most of my own home repairs, and because of my background in system administration, you'll find all sorts of DIY servers at my house too. In the old days, geeks like me would have stacks of loud power-hungry desktop computers around and use them to learn about Linux and networking, but these days, VMs and cloud services have taken their place for most people. I still like running my own servers though, and thanks to the advent of these tiny, cheap computers like the Raspberry Pi series, I've been able to replace all of my home services with a lot of different small, cheap, low-power computers.
Occasionally, I'll hear people talk about how they have a Raspberry Pi or some other small computer lying around, but they haven't figured out quite what to do with it yet. And it always shocks me, because I have a house full of those small computers doing all sorts of things, so in this article, I describe my personal "Piventory"—an inventory of all of the little low-power computers that stay running around my house. So if you're struggling to figure out what to do with your own Raspberry Pi, maybe this article will give you some inspiration.
Primary NAS and Central Server
In "Papa's Got a Brand New NAS" I wrote about my search for a replacement for my rackmount server that acted as a Network-Attached Storage (NAS) for my house, along with a bunch of other services. Ultimately, I found that I could replace the whole thing with an ODroid XU4. Because of its octo-core ARM CPU, gigabit networking and high-speed USB3 port, I was able to move my hard drives over to a Mediasonic Probox USB3 disk array and set up a new low-power NAS that paid for itself in electricity costs.
In addition to a NAS, this server provides a number of backup services for my main server that sits in a data center. It acts as a backup mail server, authoritative DNS, and it also provides a VPN so I can connect to my home network from anywhere in the world—not bad for a little $75 ARM board.
Figure 1. Papa's New NAS
The Comprehensive and Progressive Agreement for Trans Pacific Partnership (CPTPP) is an enormous (roughly 6,000-page) treaty between Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore and Vietnam that was signed in Chile on March 8, 2018. So far, only Mexico and Japan have ratified it. CPTPP is almost identical to the original TPP, which included those 11 countries plus the United States. In early 2017, the US withdrew from the treaty, which its President had previously described as a "terrible deal".
CPTPP has many provisions of concern to the FOSS industries and communities in those countries. Open Source Industry Australia (OSIA) has raised a number of those issues with an Australian Senate committee's inquiry into CPTPP (see "CPTPP could still destroy the Australian FOSS industry" and "Submission to the Senate Standing Committee on Foreign Affairs, Defense & Trade regarding the 'Comprehensive & Progressive agreement for Trans Pacific Partnership'"). The figure below shows the likely consequences of one such provision, Art. 14.17 in the Electronic Commerce Chapter, which deals with transfer of or access to source code.
Linux Journal readers may be particularly concerned about one of those consequences: FOSS authors in the 11 CPTPP countries may lose the ability to use the courts to enforce the copyleft terms in licences such as the GPL.
To what extent that happens will depend on how each country decides two questions of legal interpretation: first, whether FOSS licences constitute "commercially negotiated contracts"; and second, how significant the omission of "enforcement" from the list of conditional actions in the provision may be.
At least some adverse consequences of Art. 14.17 are likely in any countries that ratify CPTPP regardless of the interpretation taken, and the risk of the more severe consequences in those countries seems grave.
News briefs for June 13, 2018.
BrowserStack this morning announced its enhanced open source program, which offers free testing of open source software on the BrowserStack Real Device Cloud. The press release states that "BrowserStack is doubling down on its support for open source projects with full and unlimited access to the BrowserStack platform and its capabilities. The goal is to empower open source developers with the tools and infrastructure necessary to test with speed, accuracy and scale." See the BrowserStack blog post "Supporting Open Source to Drive Community Innovation" for more on BrowserStack's commitment to open source.
Act now to stop the EU's web censorship plan. The Legal Affairs Committee of the European Parliament is voting on June 20 on the proposed reform of EU copyright rules. According to the Creative Commons story, "the final copyright directive will have deep and lasting effects on the ability to create and share, to access and use education and research, and to support and grow diverse content platforms and information services. As it stands now, the copyright reform—especially Article 13—is a direct threat to the open web." If you're in the EU, you can go to https://saveyourinternet.eu and ask Members of the European Parliament to delete Article 13 from the copyright directive.
The first official release of Qt for Python (Pyside2) is now available. It's based on Qt 5.11, and the project will follow the general Qt release schedule and versions. It's available for open-source and commercial Qt Development users. See the Qt blog post for more details and links to download packages.
Notepad++ is now available as a Snap package for Linux, It's FOSS reports. The package actually runs through Wine, but you don't need to set up Wine first. For Ubuntu users, Notepad++ is available in the Software Center.
Facebook has released its Sonar debugging tool to the Open Source community, ZDNet reports. Sonar was developed by Facebook engineers "to help them manage the social network, including the implementation of new features, bug hunting, and performance optimization." By releasing Sonar, the hope is to give programmers a tool to help accelerate app development and deployment.
Exploring the current state of musical Linux with interviews of developers of popular packages.
Linux is ready for prime time when it comes to music production. New offerings from Linux audio developers are pushing creative and technical boundaries. And, with the maturity of the Linux desktop and growth of standards-based hardware setups, making music with Linux has never been easier.
Linux always has had a place for musicians looking for inexpensive rigs to record and create music, but historically, it's been a pain to maintain. Digging through arcane documentation and deciphering man pages is not something that interests many musicians.
Loading up Linux is not as intimidating as it once was, and a helpful community is going strong. Beyond tinkering types looking for cheap beats, users range in experience and skill. Linux is still the underdog when it comes to its reputation for thin creative applications though.
Recently, musically inclined Linux developers have turned out a variety of new and updated software packages for both production and creative uses. From full-fledged DAWs (Digital Audio Workstations), to robust soft-synths and versatile effects platforms, the OSS audio ecosystem is healthy.
A surge in technology-focused academic music programs has brought a fresh crop of software-savvy musicians into the fold. The modular synth movement also has nurtured an interest in how sound is made and encouraged curiosity about the technology behind it.
One of the biggest hurdles in the past was the lack of core drivers for the wide variety of outboard gear used by music producers. With USB 2.0 and improvements in ALSA and JACK, more hardware became available for use. Companies slowly have opened their systems to third-party developers, allowing more low-level drivers to be built.
In terms of raw horsepower, the ubiquity of multicore processors and cheap RAM has enabled Linux to take advantage of powerful machines. In particular, the multithreading facilities the Linux kernel makes available to developers let audio packages offload DSP and UI work to separate cores. Beyond OS-level multithreading, music software developers have taken advantage of this in a variety of ways.
A well known API called Jack Audio Connection Kit (JACK) handles multiple inter-application connections as well as audio hardware communication with a multithreaded approach, enabling low latency with both audio DSP and MIDI connections.
Ardour has leveraged multithreaded processing for some time. In early versions, it was used to distribute audio processing and the main interface and OS interaction to separate cores. Now it offers powerful parallel rendering on a multitude of tracks with complex effects.
News briefs for June 12, 2018.
KDE released Plasma 5.13.0 today. The team has "spent the last four months optimising startup and minimising memory usage, yielding faster time-to-desktop, better runtime performance and less memory consumption. Basic features like panel popups were optimised to make sure they run smoothly even on the lowest-end hardware. Our design teams have not rested either, producing beautiful new integrated lock and login screen graphics." New features in Plasma 5.13 include Plasma Browser Integration, redesigned system settings, new look for lock and login screens, improved KWin graphics compositor and more. See the release announcement for links to download pages for live images, distro packages and source.
OpenGear announced its new NetOps Automation platform, which "provides a solution for automation of NetOps workflows, enabling the management of the network from a central location, and eliminating the need for human intervention on the data center floor or at the edge of the network". NetOps is currently available as a beta product for select customers, and will be generally available in Q4 2018.
There's a new open-source Raspberry Pi synthesizer called Zynthian, which is a "swiss army knife of synthesis, equipped with multiple engines, filters and effects", Geeky Gadgets reports. The synthesizer is completely hackable and "offers an open platform for Sound Synthesis based on the awesome Raspberry Pi mini PC and Linux". See the main website for a video demo and to order.
Wine development release 3.10 is now available. New features include swapchain support in Direct3D, updated Vulkan support, debugger support for Wow64 processes and more. See the announcement for more details and to download.
Devuan 2.0 ASCII has been released. Devuan is based on Debian Stretch, doesn't use systemd and it lets you choose between SysVinit and OpenRC init systems. With this release, Devuan provides various desktop environments, including Xfce, KDE, MATE, Cinnamon and LXQt. See the Devuan release notes and the It's FOSS post for more information on the distro.
Andiry Xu (working with Lu Zhang, Steven Swanson and others) posted patches for a new filesystem called NOVA (NOn-Volatile memory Accelerated). Normal RAM chips are wiped every time you turn off your computer. Non-volatile RAM retains its data across reboots. Their project targeted byte-addressable non-volatile memory chips, such as Intel's 3DXpoint DIMMs. Andiry said that the current incarnation of their code was able to do a lot already, but they still had a big to-do list, and they wanted feedback from the kernel people.
Theodore Y. Ts'o gave the patches a try, but he found that they wouldn't even compile without some fixes, which he posted in reply. Andiry said they'd adapt those fixes into their patches.
The last time NOVA made an appearance on the kernel mailing list was August 2017, when Steven made a similar announcement. This time around, they posted a lot more patches, including support for SysFS controls, Kconfig compilation options and a significant amount of documentation.
One of NOVA's main claims to fame, aside from supporting non-volatile RAM, is that it is a log-based filesystem. Other filesystems generally map out their data structures on disk and update those structures in place. This is good for saving seek-time on optical and magnetic disks. Log-based filesystems write everything sequentially, trailing old data behind them. The old data then can be treated as a snapshot of earlier states of the filesystem, or it can be reclaimed when space gets tight.
Log-based filesystems are not necessarily preferred for optical and magnetic drives, because sequential writes will tend to fragment data and slow things down. Non-volatile RAM is based on different technology that has faster seek-times, making a log-based approach a natural choice.
NOVA goes further than most log-based filesystems, which tend to have a single log for the whole filesystem, and instead maintains a separate log for each inode. Using the log data, NOVA can perform writes either in place like traditional filesystems or as copy-on-write (COW) operations, which keep the old version of a file until the new version has been written. This has the benefit of being able to survive catastrophic events like sudden power failures in the middle of doing a write, without corrupting the filesystem.
There were lots of responses to the patches from Andiry and the rest of his team. Most were bug reports and criticism, but no controversy. Everyone seemed to be interested in helping them get their code right so the patches could get into the main tree quickly.
It's now official: the RC1 pull request for Linux 4.18 will not host the nearly 15-year-old Lustre filesystem.

Greg Kroah-Hartman had grown weary of the Lustre development team failing to push cleaner, fixed-up code to the staging tree. The removal was committed on June 5, 2018, with the following notes:
The Lustre filesystem has been in the kernel tree for over 5 years now. While it has been an endless source of enjoyment for new kernel developers learning how to do basic coding style cleanups, as well as a semi-entertaining source of bewilderment from the vfs developers any time they have looked into the codebase to try to figure out how to port their latest api changes to this filesystem, it has not really moved forward into the "this is in shape to get out of staging" despite many half-completed attempts.
And getting code out of staging is the main goal of that portion of the kernel tree. Code should not stagnate, and it feels like having this code in staging is only causing the development cycle of the filesystem to take longer than it should. There is a whole separate out-of-tree copy of this codebase where the developers work on it, and then random changes are thrown over the wall at staging at some later point in time. This dual-tree development model has never worked, and the state of this codebase is proof of that.
So, let's just delete the whole mess. Now the lustre developers can go off and work in their out-of-tree codebase and not have to worry about providing valid changelog entries and breaking their patches up into logical pieces. They can take the time they have spent doing those types of housekeeping chores and get the codebase into a much better shape, and it can be submitted for inclusion into the real part of the kernel tree when ready.
Honestly, I do not blame him. The staging tree is primarily intended for unstable and less-than-mature code, which ideally should move to the mainline within a short time after further development. It's a temporary (that is, staging) location. It's not that I don't appreciate the Lustre filesystem; in fact, I once wrote about it for Linux Journal.
For those who are less familiar with this filesystem: Lustre (or Linux Cluster) is a distributed filesystem typically deployed in large-scale cluster computing environments. Lustre is designed both to perform well and to scale to tens of thousands of nodes and petabytes of storage. As alluded to above, a distributed filesystem allows access to files from multiple hosts sharing a computer network.
News briefs for June 11, 2018.
Andrew Hutton organized and ran the Linux Symposium for years (otherwise known as OLS). He is one of the people who helped put Linux on the map through his sheer determination, perseverance and enthusiasm for Linux. Several months ago, Andrew suffered a heart attack and now needs our help. Please remember, a donation of any amount helps tremendously.
Court orders Open Source Security, Inc., and Bradley Spengler to pay $259,900.50 to Bruce Perens' attorneys. See Bruce Perens' blog post for more details on the lawsuit against him, which sought $3 million "because they disagreed with my blog posts and Slashdot comments which expressed my opinions that their policies regarding distribution of their Grsecurity product could violate the GPL and lead to liability for breach of contract and copyright infringement."
The US now has the world's fastest supercomputer, named Summit, reclaiming its "speediest computer on earth" title from China and its Sunway TaihuLight system, OMG Ubuntu reports. And of course, the Summit, which boasts 200 petaflops at peak performance, runs Linux—RHEL to be exact. See the U.S. Department of Energy's Oak Ridge National Laboratory's post for more details.
Jarek Duda, who invented a compression technique called asymmetric numeral systems (ANS) a few years ago and dedicated it to the public domain, claims that Google is now seeking a patent that would give it broad rights over the use of ANS for video compression, Ars Technica reports. Google denies it's attempting to patent Duda's work, but "Duda says he suggested the exact technique Google is trying to patent in a 2014 email exchange with Google engineers"—a view largely endorsed by a preliminary ruling in February by European patent authorities.
ownCloud recently announced "the introduction of the Virtual File System within the ownCloud Desktop Client". This allows users to synchronize files with the end device only when needed, which requires significantly less local storage space and improves the ownCloud user experience. You can download it here.
Why open source needs an open geographic dataset.
Open source has won. The fact that free software now dominates practically every sector of computing (with the main exception of the desktop) is proof of that. But there is something even more important than the victory of open source itself, and that is the wider success of the underlying approach it embodies. People often forget just how radical the idea of open, collaborative development seemed when it appeared in the 1990s. Although it is true that this philosophy was the norm in the very earliest days of the field, that culture was soon forgotten with the rapid rise of commercial computing, which swept everything before it in the pursuit of handsome profits. There, a premium was placed on maintaining trade secrets and on excluding competitors. But the appearance of GNU and Linux, along with the other open software projects that followed, provided repeated proof that the older approach was better for reasons that are obvious upon reflection.
Open, collaborative development allows people to build on the work of others, instead of wastefully re-inventing the wheel, and it enables the best solutions to be chosen on technical, rather than commercial, grounds. The ability to work on areas of personal interest, rather than on those assigned by managers, encourages new talent to join projects in order to pursue their passions, while the non-discriminatory global reach of the open method means that the pool of contributors is much larger than for conventional approaches. However, none of those advantages is tied to software: they can be applied to many fields. And that is precisely what has happened in the last two decades, with the ideas underlying free software producing astonishing results elsewhere.
The built-in PHP debugger lets you execute PHP scripts step by step, moving sequentially through the lines of code. You can set breakpoints, watch loops as they execute and monitor the values of all variables while a script runs.
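The step-through behavior described above is not unique to Codelobster; any line-level debugger works by hooking into the interpreter's execution of each line. As a minimal sketch of that idea—in Python rather than PHP, and not Codelobster's actual implementation—here is how a trace hook can capture the line number and variable values at every step, much like a debugger's variable pane:

```python
import sys

watched = []

def trace_lines(frame, event, arg):
    """Record each executed line and a snapshot of local variables,
    mimicking what a step debugger displays at every stop."""
    if event == "line":
        watched.append((frame.f_lineno, dict(frame.f_locals)))
    return trace_lines

def count_up(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(trace_lines)   # install the hook
result = count_up(3)
sys.settrace(None)          # remove it again

# Each entry pairs a line number with the variable values seen there.
for lineno, local_vars in watched:
    print(lineno, local_vars)
```

A real debugger adds breakpoints and user interaction on top of exactly this kind of hook; the snapshots above show `total` accumulating 0, 1 and 2 across loop iterations.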
You can view HTML templates directly in the editor, highlight interesting elements on a page and explore the associated CSS styles. The HTML and CSS inspector works on the same principle as the well-known Firebug extension.
Other useful functions and features of the IDE include:
- Pair-highlighting of parentheses and tags—you'll never need to count parentheses or quotation marks; the editor takes care of it for you.
- Highlighting of blocks, selection and collapsing of code snippets, bookmarks to facilitate navigation on edited files, recognition and building of the complete structure of PHP projects—all of these functions ensure easy work with projects of any scale.
- Support for 17 user-interface languages, including English, German, Russian, Spanish, French and more.
- The program runs on Windows 7, 8 and 10, macOS and Linux, including Ubuntu, Fedora and Debian.
The professional version of the Codelobster IDE provides programmers with even more features.
For example, you can work with projects on a remote server with the use of the built-in FTP client. You can edit the selected files, preview the results and then synchronize the changes with the files on the hosting side.
In addition, the professional version includes an extensive set of plugins:
Some guidance along our road to greatness.
In a February 2018 post titled "Worth Saving", I said I'd like Linux Journal to be for technology what The New Yorker is for New York and National Geographic is for geography. In saying this, I meant it should be two things: 1) a magazine readers value enough not to throw away and 2) about much more than what the name says, while staying true to the name as well.
The only push-back I got was from a guy whose comment called both those model pubs "fanatically progressive liberal whatever" and said he hoped we're not "*planning* to emulate those tainted styles". I told him we weren't. And, in case that's not clear, I'm saying it here again. (For what it's worth, I think The New Yorker has some of the best writing anywhere, and I've hardly seen a National Geographic outside a doctor's office in decades.)
Another commenter asked, "Is there another publication that you'd offer up as an example to emulate?" I replied, "Three come quickly to mind: Scientific American, the late Dr. Dobb's and Byte. Just think of all three when they were at their best. I want Linux Journal to honor those and be better as well."
Scientific American is the only one of those three that's still alive. Alas, it's not what it once was: the most authoritative yet popular science magazine in the world—or at least, that's how it looked when my parents gave me a subscription when I was 12. Back then I wanted to read everything I could about science—when I wasn't beeping code to other ham radio operators from my bedroom or otherwise avoiding homework assignments.
Today, Scientific American is probably as close as it can get to that legacy ideal while surviving in the mainstream of magazine publishing—meaning it persists in print and digital form while also maintaining a constant stream of topical stories on its website.
That last thing is the main work of most magazines these days—or so it seems. As a result, there isn't much difference between Scientific American, Smithsonian, Wired, Ars Technica and Inverse. To demonstrate what I mean, here are stories from those five publications' websites. See if you can guess (without clicking on the links) where each one ran—and which one is a fake headline: