A Brief History of CPU Clock Speeds
In the last decade, the concept of a virtual machine (VM) has really come into prominence. Until roughly the mid-2000s, the benchmark of any good central processing unit (CPU) was its clock speed. Measured in hertz (Hz), clock speed is the number of cycles the CPU completes per second; a simple instruction may finish in a single cycle, while a complex one takes several. The very first computer I ever owned had an Intel Pentium processor, clocked at 60 MHz. From there, CPU clock speeds climbed steadily before peaking in the mid-2000s at around 4000 MHz.
There were many improvements in CPU architecture along the way, but the rule of thumb was that for a single-threaded task, clock speed was king. If your CPU had a faster clock speed than a competitor's, it could do the same job faster.
That all started to change when silicon manufacturing ran into diminishing returns. Squeezing more processing power onto ever-smaller chips required more electrical power. More power means more heat, along with a higher likelihood of calculation errors and electrical instability as components are packed physically closer together.
The upper limit of predictable mass-consumer clock speed seems to be around 5 GHz (5000 MHz). If you’re interested in a little history on the matter, check out this very interesting post about CPUs and clock speeds written in 2005, right before chipmakers ran into the big clock speed wall.
When chipmakers started to run into this limit, they took a novel approach: instead of trying to push a single processing core faster, they began combining multiple cores into a single physical chip. That approach has been in use ever since, and two-, four-, eight-, and sixteen-core CPUs are now common in consumer hardware.
The reality is that single-threaded tasks are rare, and a computer is often used to do many different things simultaneously. Thus, a handful of capable cores will do more useful work than a single very fast core that constantly has to switch between tasks.
Typical computer use consists of web browsing, some light multimedia work, and transferring files around a network, plus perhaps some audio and video conferencing. None of it is particularly demanding, so a computer with multiple processing cores will often sit idle or at partial load.
The Virtual Machine
With all of these cores sitting around, many people experimented with virtualization. Virtualization is, essentially, a software- or hardware-assisted method of isolating resources within a computer and allowing software to run inside that isolated area. The software itself functions the way it normally would, except that it interacts with an abstracted software layer instead of real physical hardware. Instead of addressing a real CPU, the VM provides a virtual CPU that passes work through to the host CPU with very little overhead. Because that translation is so cheap, the cost of running most workloads inside a VM is low.
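If you're curious whether your own machine supports this hardware-assisted virtualization, a Linux host exposes the relevant CPU flags in /proc/cpuinfo. This check is Linux-specific; on Windows, Task Manager's Performance tab shows a similar "Virtualization" field:

```shell
# Count the CPU flag lines advertising hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V; a result of 0 means the feature is
# absent or disabled in the firmware.
grep -E -c '(vmx|svm)' /proc/cpuinfo
```

A count above zero means your CPU can accelerate VMs in hardware.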
As I write this on my Windows 10 workstation, I have a browser window open with 7 tabs, one of which is playing music and video. I also have a game window minimized, a file browser, a handful of system tray components (printer driver, mouse driver, video driver), and a few online gaming services. I just launched Task Manager and I see that I’m using 10% CPU, 67% of my memory, 0% disk I/O and 0% Network. And I don’t even have a very powerful machine! It’s a budget quad-core, for crying out loud. Imagine how small the percentages would be if I had a maxed-out CPU and extra RAM!
Since my machine is mostly just sitting around waiting for me to challenge it, it’s perfect for use as a VM host.
The wonderful thing about VMs is that they exist in memory only while they are running; their storage is simply a large file saved on your hard drive. Since a VM has no physical component to speak of, you are free to interact with it in a completely virtual way. You may start it up, reboot it, turn it off, etc., without affecting your host machine. It works just like a real, physical computer, but the cost of interacting with it is very low.
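As a concrete sketch of that life cycle, VirtualBox (one popular hypervisor) ships a command-line tool called VBoxManage. The VM name "learning-vm" below is a placeholder, and these commands assume VirtualBox is already installed:

```shell
# Drive a VM's entire life cycle from the command line.
# "learning-vm" is a placeholder; substitute your own VM's name.
VBoxManage startvm "learning-vm" --type gui              # power it on
VBoxManage controlvm "learning-vm" acpipowerbutton       # ask the guest OS to shut down
VBoxManage snapshot "learning-vm" take "clean-state"     # save a restore point
VBoxManage snapshot "learning-vm" restore "clean-state"  # roll back a mistake (VM must be off)
VBoxManage unregistervm "learning-vm" --delete           # throw the whole machine away
```

None of these commands touch the host machine itself; the worst-case outcome is deleting the VM's disk file and starting over.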
I know many people who are afraid of experimenting with computers. I understand the mindset, and I sympathize. Computers have become more and more powerful and user-friendly, and along the way became “magical”.
When I first started learning about computers, it was on my uncle’s 386-DX using MS-DOS 3.2. I mostly played games on it, loaded from 5.25" floppy disks. It had its own magic then, but the need to interact with MS-DOS on the command line made me unafraid of it. Nothing happened on the computer without me telling it explicitly what to do. Go to a directory, copy a file, start an executable, etc.
A modern computer may as well be magic, for all that it can do. But it hides a lot from you. Gone are the days when you had to search for a special driver, research an accessory beforehand to see if it was compatible with your computer, or purchase a hardware device and open the computer to place it inside.
A VM is perfect for capturing that good old-fashioned computer hacker feeling. Since a VM can be created, destroyed, modified, or copied with extreme ease, the cost of making a mistake is near-zero except for your time.
A VM can therefore act as a stand-in for a complete computer, so there is almost no downside to using one for learning.
I am going to write my lesson plan around Linux installed inside of a VM. I will install it using a complete graphical user interface (GUI), but I intend to teach you exclusively from the command line interface (CLI). A GUI is nice for getting started, but most server work will be from the CLI.
Tools and Plans
A long-time fixture in the VM world is a product called VirtualBox. Version 4 was released as an open source product in 2010, and it continues to be developed that way. It is up-to-date and free (in both cost and philosophy), and because of its ubiquity it is very easy to find support.
I recommend going to the VirtualBox Download page, installing the latest version for your operating system, and starting to play around.
The end goal is to establish a small VM, install a very friendly version of Linux on it, and then interact with your new machine. Once you get over the mental hurdle of interacting with a machine inside of a machine (cue the Inception sound), you can begin to appreciate the utility of Linux as a platform for self-hosting software.
I have embedded a YouTube video below in which a very capable presenter walks through installing VirtualBox, creating a VM, and then installing Ubuntu Linux into that VM.
As you watch, please think of the process generally as an example of a new operating system being installed to a virtual machine inside a real physical machine. By all means follow the steps exactly, try it out, and get a feel for Linux. If you’ve never used it before, it will be a bit of a shock to your system but probably fun too. And remember, if you screw it up, just throw the virtual machine away! No harm, no foul.
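If you'd rather see the same steps as commands, the GUI workflow shown in the video can be sketched with VBoxManage. The name, sizes, and file path here are illustrative assumptions, not requirements:

```shell
# Create and configure a small VM from the command line (a sketch; all
# names and sizes below are arbitrary choices).
VBoxManage createvm --name "learning-vm" --ostype Ubuntu_64 --register
VBoxManage modifyvm "learning-vm" --cpus 2 --memory 2048 --vram 16
VBoxManage createmedium disk --filename "learning-vm.vdi" --size 20480  # ~20 GB
VBoxManage storagectl "learning-vm" --name "SATA" --add sata
VBoxManage storageattach "learning-vm" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium "learning-vm.vdi"
```

After this, the VM exists but has no operating system; attaching the Ubuntu ISO and booting it is what the rest of the video covers.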
After you have installed VirtualBox, visit Ubuntu Downloads and download the Ubuntu 20.04 LTS distribution in ISO format.
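Before using the ISO, it's worth verifying that the download is intact. Ubuntu publishes a SHA256SUMS file alongside each release; assuming you saved that file next to the ISO, the check looks like this:

```shell
# Verify the downloaded ISO against the SHA256SUMS file published on the
# same Ubuntu release page. --ignore-missing skips entries for files you
# didn't download; a corrupted download is reported as FAILED.
sha256sum -c --ignore-missing SHA256SUMS
```

If the command prints `OK` next to the ISO's filename, the image is safe to use.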
Then watch this video, which does an excellent job walking you through the process of installing Ubuntu inside of a new VirtualBox VM.
If you have any issues, find me on Twitter or email me at BowTiedDevil@protonmail.com.
Good luck and have fun!