Sunday 21 June 2015

OS explained

What does your operating system really do? Mike Bedford explains all in this in-depth guide to the brains behind your PC

The launch of a new version of Windows is always a major event in the world of PCs. The introduction of Windows 10 this year means the operating system is, once again, vying for our attention. Although it’s the most visible element of a computer, in some ways the operating system is the least well understood. It’s easy to view it simply as the means through which we interact with a PC - mainly to start and stop applications - but it’s actually much more than that. To give this unsung hero the respect it deserves, therefore, we’re going to consider what an operating system really is and what it does, much of it behind the scenes.


We’ll then delve into some real-world operating systems and see that Windows, in its various guises, is just the tip of the iceberg. This exploration will show something of the differences between operating systems, and we’ll also attempt to shed some light on that vexed question of whether some really are better than others. Understanding what makes things tick is always interesting, but this exposé of operating systems isn’t just to provide information - it could change your way of working. Windows isn’t the only option for PCs so, if you come to the conclusion that a different operating system would serve you better, there’s nothing to stop you either making the switch or installing an alternative alongside Microsoft’s own effort.

A FUNDAMENTAL REQUIREMENT


To see the fundamental role played by the operating system, and the huge benefit it bestows, we have to go back to the operation of the very first computers in the late 1940s. In particular, we need to look at some vital system software which performed a job that’s now the most important task of any operating system, even though the term operating system wouldn’t be coined for some time.

The primary job of a computer is to execute a program that comprises a series of instructions stored in memory. In the earliest computers, loading the program into memory in the first place was a laborious job that had to be done by entering each instruction as a binary number using switches on the computer’s front panel. This could only be done at a rate of roughly one instruction every 15 seconds, so entering even a short program would take several minutes, while a longer one could take hours. In addition, it was all too easy to make a mistake when entering a program manually. This was especially true since the instructions were just binary numbers, so errors wouldn’t be nearly as obvious as they would be when entering something more meaningful, such as text. Today, we’re used to being able to load a program with virtually no possibility of errors with just a click on an icon, even though many of those programs contain millions of instructions, so entering them manually would quite literally take months.

The first step in being able to load programs more quickly involved reading them from punch cards but, without some sort of software, the computer wouldn’t know how to read those punch cards - so the software to load applications from punch cards had to be entered manually. This may not sound like a huge advantage until we realise that those loader programs were designed to be very short. So entering a short load program - and using that to load larger application software from punch cards - did provide a major advantage. Suffice to say that being able to load application software from disk is integral to all modern operating systems.

MULTITASKING


While at the most basic level the operating system allows a program to be loaded from disk and executed, virtually all modern operating systems go far beyond this by implementing multitasking. This means several programs can run at the same time, or so it appears. What’s more, multitasking predates processors with multiple cores.

Multitasking first came to the fore with large mainframes and minicomputers, where users on perhaps dozens of terminals would each be able to run their programs and get timely responses. In computers with just one single-core processor, this was achieved by time-slicing, a technique that involved running each program for a fraction of a second before moving on to the next and eventually cycling back to the first. This scheduling was a key task for the operating system, and it sometimes involved maintaining several programs in memory simultaneously. Multitasking had to continue even when memory was full, something the operating system achieved by writing the state of some programs out to disk and loading them back into memory when their turn came around again.
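To make the idea concrete, here’s a minimal sketch of round-robin time-slicing along the lines described above. It isn’t how any real kernel is written - a real scheduler is driven by timer interrupts and saves each program’s full state - and the task list, quantum length and run_for() helper are all hypothetical.

```c
#include <stdio.h>

#define NUM_TASKS  3
#define QUANTUM_MS 10   /* hypothetical length of one time slice */

/* Hypothetical task descriptor: a real OS would also save registers,
   the stack pointer and so on each time it switched tasks. */
typedef struct {
    const char *name;
    int remaining_ms;   /* how much work the task still has to do */
} task_t;

/* Stand-in for actually letting a task run for 'ms' milliseconds. */
static void run_for(task_t *t, int ms) {
    t->remaining_ms -= ms;
    printf("ran %-8s for %2d ms, %3d ms left\n",
           t->name, ms, t->remaining_ms > 0 ? t->remaining_ms : 0);
}

int main(void) {
    task_t tasks[NUM_TASKS] = {
        { "editor", 30 }, { "mail", 20 }, { "backup", 50 }
    };
    int finished = 0;

    /* Round-robin loop: give each unfinished task one quantum, move on
       to the next, and keep cycling until every task has completed. */
    while (finished < NUM_TASKS) {
        for (int i = 0; i < NUM_TASKS; i++) {
            if (tasks[i].remaining_ms <= 0)
                continue;                       /* task already done */
            int slice = tasks[i].remaining_ms < QUANTUM_MS
                      ? tasks[i].remaining_ms : QUANTUM_MS;
            run_for(&tasks[i], slice);
            if (tasks[i].remaining_ms <= 0)
                finished++;
        }
    }
    return 0;
}
```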

Windows brought multitasking to the PC, but not to support lots of users. Instead, it allowed multiple applications to be open onscreen so that it was no longer necessary, for example, to close the word processor to read an email. Users could also leave power-hungry tasks such as media creation to work in the background while they carried out regular office-type tasks.

With the common availability of multicore processors, operating systems are now able to distribute tasks between cores. However, time-slicing is still an important technique as it means the number of simultaneous tasks isn’t limited by the number of cores.
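The point about running more tasks than there are cores is easy to demonstrate. The sketch below uses the Win32 GetSystemInfo and CreateThread calls to start more worker threads than the machine has logical processors; the worker function and the choice of 16 threads are arbitrary, and this is an illustration rather than a recommended threading pattern.

```c
#include <windows.h>
#include <stdio.h>

/* Trivial worker: burn some CPU time so the threads genuinely
   compete for the available cores. */
static DWORD WINAPI worker(LPVOID param) {
    int id = (int)(INT_PTR)param;
    volatile unsigned long long total = 0;
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        total += i;
    printf("thread %d finished\n", id);
    return 0;
}

int main(void) {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("logical processors: %lu\n", si.dwNumberOfProcessors);

    /* Deliberately start more threads than there are cores; the
       operating system time-slices them so all make progress. */
    enum { NUM_THREADS = 16 };
    HANDLE threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        threads[i] = CreateThread(NULL, 0, worker,
                                  (LPVOID)(INT_PTR)i, 0, NULL);

    WaitForMultipleObjects(NUM_THREADS, threads, TRUE, INFINITE);
    for (int i = 0; i < NUM_THREADS; i++)
        CloseHandle(threads[i]);
    return 0;
}
```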

FILE HANDLING


Another of the most fundamental facilities provided by the operating system is some means of file handling, something that’s essential to any PC user; even if you don’t drag and drop files much any more, you’ll need a file handler to download a photo or PDF to your PC. Here we’re thinking primarily of a means of seeing what files are present in the various drives and folders, moving them from one drive or folder to another, and copying, renaming and deleting them.

While these types of file-handling task are familiar to most computer users, none of this would be possible without a more basic yet essential element of any operating system, namely the file system. Without a file system, a program could write to a disk, but those bytes of information recorded on the disk wouldn’t be recognisable as a file to application software. Instead, so that all programs can share files, and the operating system can provide users with file-handling tools, it’s necessary to have a definition of the way files are stored and a set of low-level routines for accessing those files.

Computer files and file systems are so called because of their similarity to paper files and the systems used for storing them. Paper documents are put into labelled hanging files, these are put in the labelled drawers of a filing cabinet, and the filing cabinets themselves are labelled. This hierarchical structure makes it far easier to find particular documents. A computer file system works in much the same way. First it defines how a single file is separated from all the other data on a disk, usually by keeping a record - in a directory entry or an allocation table - of where on the disk the file starts and how long it is. This way, the file can be viewed as a separate entity, just as a sheet of paper would be in a manual filing system. Next, it defines a structure in which files can be placed together with similar files in a folder or directory (the terminology differs between operating systems), and these can be bundled together in higher-level folders and so forth.
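As a rough illustration of that hierarchy, here’s a small sketch of how a folder tree might be represented in memory. The structure and field names are entirely hypothetical - real file systems such as FAT, NTFS or ext4 each define their own on-disk formats - but it shows how folders can contain files and further folders.

```c
#include <stdio.h>

/* A simplified, in-memory picture of a hierarchical file system.
   The fields are illustrative only; real file systems use their own
   on-disk structures such as allocation tables or inodes. */
typedef struct node {
    const char   *name;
    int           is_directory;   /* 1 = folder, 0 = file */
    long          start_block;    /* where the file's data begins */
    long          size_bytes;     /* how long the file is */
    struct node  *children;       /* first entry inside a folder */
    struct node  *next;           /* next entry in the same folder */
} node;

/* Print the tree with indentation, much as a file manager does. */
static void list_tree(const node *n, int depth) {
    for (; n != NULL; n = n->next) {
        printf("%*s%s%s\n", depth * 2, "", n->name,
               n->is_directory ? "/" : "");
        if (n->is_directory)
            list_tree(n->children, depth + 1);
    }
}

int main(void) {
    node letter = { "letter1.doc", 0, 2048, 12000,  NULL,   NULL };
    node photo  = { "holiday.jpg", 0, 4096, 250000, NULL,   &letter };
    node docs   = { "Documents",   1, 0,    0,      &photo, NULL };
    node root   = { "C:",          1, 0,    0,      &docs,  NULL };
    list_tree(&root, 0);
    return 0;
}
```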

Our mention of low-level routines for accessing files brings us to another important element of an operating system. Even if the file system is defined, requiring each piece of software to include its own code for reading and writing data from files and carrying out other common file-handling tasks would involve a huge duplication of effort. For this reason, the operating system provides something called an Application Programming Interface (API), which allows a programmer to create or access a file using code that’s built into the OS. So, for example, to create a new file, instead of having to write dozens of lines of code, a Windows programmer can simply use the CreateFile function, providing details of the file to be created. The concept of the API goes far beyond the file system, providing programmers with easy access to all the major elements of the operating system.
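By way of example, here’s a minimal sketch of creating and writing a file through the Windows CreateFile API mentioned above. The filename is arbitrary and error handling is kept to a bare minimum; the point is simply that one call hands all the low-level file-system work over to the operating system.

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Ask the operating system to create a new file for writing.
       All the low-level file-system work happens inside the OS. */
    HANDLE h = CreateFileA(
        "example.txt",            /* name of the file to create  */
        GENERIC_WRITE,            /* we want to write to it      */
        0,                        /* no sharing while it's open  */
        NULL,                     /* default security attributes */
        CREATE_NEW,               /* fail if the file exists     */
        FILE_ATTRIBUTE_NORMAL,    /* ordinary file               */
        NULL);                    /* no template file            */

    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed, error %lu\n", GetLastError());
        return 1;
    }

    DWORD written = 0;
    const char msg[] = "Hello from the Windows API\r\n";
    WriteFile(h, msg, (DWORD)(sizeof msg - 1), &written, NULL);
    CloseHandle(h);
    return 0;
}
```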

LOOK AND FEEL


For some users, a new user interface might be the most significant benefit offered by one operating system compared to its rivals. More technically minded users, on the other hand, might be inclined to dismiss the way in which we interact with the operating system as largely irrelevant. We’d like to suggest that neither view is correct and, while the look and feel isn’t always the most important aspect of an operating system, developments here can bring about huge productivity benefits. A classic example was the introduction of the graphical user interface (GUI) to the world of PCs with the first version of Windows in 1985.

Before the days of Windows, PCs were shipped with the MS-DOS operating system, which provided a command-line user interface. When you switched on your machine, you’d be presented with an introductory message such as "Starting MS-DOS..." at the top of the screen and, below it, "C:\>". This was the so-called command-line prompt, and it indicated that the current directory (or folder in modern Windows terminology) was the root directory on the C: disk and that MS-DOS was awaiting your instruction. That instruction would be given as a textual command so, for example, if you wanted to run WordPerfect, a popular MS-DOS word processor, you’d type WP followed by the Enter key. However, WordPerfect wasn’t usually installed in the root directory, so you’d first have to change the directory by typing "cd \WP51" followed by Enter to change to the directory in which WordPerfect 5.1 was installed. Having to type these two commands may not have been a huge disadvantage compared to clicking on an icon, but other commands were a lot trickier.

For a start, you had to remember the names of all the commands and know how to use them. Some would become second nature, but for the less commonly used ones you’d probably end up typing Help so that MS-DOS would list the format for each of its dozens of commands. Even if you knew the syntax of the command, though, many instructions would be much longer than the example we’ve given, with the ever-present possibility of making an error. So to copy a file from a directory called Documents on floppy disk drive A: to a similarly named directory on the C: hard drive, for example, you needed to type something like: ‘copy a:\documents\letter1.wp c:\documents’. Needless to say, error messages such as “Could not find A:\documents\lettet1.wp”, resulting from a misspelled filename, were commonplace.

The advantages provided by a graphical user interface aren’t hard to appreciate, but that first version of Windows wasn’t actually an operating system in its own right. Instead, it was a graphical front end that ran within MS-DOS. So MS-DOS started up normally, even if Windows 1.0 was installed, and if you wanted to use Windows you’d first have to type ‘Win’ at the command prompt. In fact, Windows 2.0, 3.0, 3.1, 95, 98 and Me would all come and go before Windows became a fully fledged operating system with Windows XP in 2001 (although, for servers, Windows NT had made this breakthrough several years earlier).

COMPARING OPERATING SYSTEMS


Sometimes a new operating system is clearly a departure from its predecessor in having a different user interface. Windows 8 was a classic example in providing a markedly different look and feel to Windows 7 and, for that matter, to most versions of Windows that went before it. Other Windows upgrades provided a user interface that was little changed but offered some major new functionality. While the aesthetic differences that Windows XP brought were only skin deep, for example, this was the first mainstream version of Windows for which a 64-bit edition was available.

With most operating systems, Windows or otherwise, now providing support for 64-bit processors and - leaving aside the tile-based user interfaces of Windows 8 and 10 and the touch-focused design of Android - most having a very similar look and feel, what does differentiate operating systems? Do Linux, BSD or OS X (formerly Mac OS X), for example, really provide any benefits over their Windows counterparts, as supporters of these alternatives might claim?

To get some thoughts on this question, we spoke to Professor Timothy Roscoe, an expert in operating systems in the department of computer science at ETH Zurich. Roscoe explained that most of today’s mainstream operating systems are related to Unix in one way or another, suggesting that the difference might not be as fundamental as we might have thought.

“Linux, BSD, OS X, iOS, Android, Windows, and Solaris are really quite similar,” he said. “Linux, Solaris, and BSD obviously trace their lineage directly back to Unix; Android is Linux, OS X and iOS are the Mach microkernel with the BSD emulation subsystem moved into the kernel, and Windows is a descendant of VMS, which was DEC’s proprietary alternative to Unix back in their day,” he went on to explain.

Despite their common roots, in the almost quarter of a century since Linux first appeared as an independent Unix-like system, and even in the years since OS X arrived in 2001, there has been plenty of potential for divergence. We asked Roscoe if any significant differences have emerged that would affect the user experience. In particular, we wanted to know whether there were any major performance differences between mainstream operating systems.

“It's really impossible to say for a bunch of reasons”, he told us, explaining that it all depended on issues such as how performance is measured, the type of application, exact details of the hardware configuration, and whether we assume that users have the skills necessary for specialist tuning or tweaking of the operating system. He summarised by claiming that, “for any mainstream operating system, I suspect someone could come up with a plausible benchmark and hardware platform that demonstrates clear superiority over all others”.

Despite all this, for many people the choice of operating system will come down to something much more mundane, namely the availability of software. While exact figures are hard to come by, the differences certainly appear to be dramatic, with Microsoft recently claiming over 4 million available applications for Windows. Indications are that the corresponding figures for OS X and Linux are 650,000 and a few tens of thousands respectively. This wouldn’t be a major issue if most of those millions of Windows applications were obscure, specialist or rarely used, but some of the major productivity packages aren’t available if you abandon Windows. Even Microsoft Office, for example, hasn’t yet bridged the gap between Windows and Linux. Perhaps the surprise figure, though, is the 1.3 million apps that are available for Android. It may pale into insignificance next to Windows’ 4 million, but we shouldn’t lose sight of the fact that Windows has nearly 30 years under its belt, while Android is most definitely the new kid on the block, having been released less than seven years ago.

THE ROAD AHEAD


With the exception of the ever-changing user interface, and support for new hardware such as 64-bit processors a few years ago, it’s been suggested that the development of mainstream operating systems stalled many years ago. We asked Professor Roscoe if he agreed with that view and, in the main, he did. While recognising that support had been provided for new types of hardware such as networking, graphics, power and energy management and multicore processors, he said that the basic structure of Unix remains the same as it was in 1974. Given that most PC operating systems can trace their heritage back to Unix, that would seem to suggest that while hardware has changed out of all recognition over that time, our operating systems are still in the Dark Ages.

So are we destined to see further new operating systems that represent evolutionary rather than revolutionary changes, or are there moves afoot in the realm of operating system research that will genuinely bring us huge gains in the future? Again we asked Roscoe for his views. “It will happen in the server space first,” he said, before listing some of the basic trends he envisaged. In the main, these are support for a lot more cores, not all of which will be powered on at once, highly heterogeneous and specialised cores, and very large memories.

“Current designs for operating systems are probably OK on small devices like phones and PCs for the next few years”, Roscoe suggested, “but eventually the changes will percolate down from the larger systems, just as Unix and Windows did into Android, iOS, and the Windows Phone.” It looks like your Windows skills will stand you in good stead for some time to come, then.


THE STRANGE TALE OF BOOTING


Ever wondered why starting a PC is often referred to as booting? Perhaps you thought it was some sort of reference to kicking it into action, but the truth is stranger than that. It’s actually short for bootstrapping, a reference to the idea of pulling yourself up by your own bootstraps - or shoelaces, if you prefer a somewhat less American turn of phrase - which was seen as an apt analogy for the difficulty of loading software into a computer.

As we saw in the main part of this article, unless you do it manually, loading application software requires some sort of software which itself needs to be loaded into memory. Elsewhere we describe how the load program had to be loaded manually in the first computers, but this was something of an oversimplification, and bootstrapping sometimes described a more complicated multi-stage process.

If software had to be loaded manually, there was a clear benefit in making that software as short as possible. Unfortunately, short programs tend not to be too clever and might only be able to read the contents of a single punch card into memory, not the whole deck on which application programs were stored. However, that single punch card could hold a longer program than you’d want to enter manually each time the computer was switched on and might hold a more sophisticated load program that could handle a deck of cards.

Eventually, means were found of hard-wiring the initial load program so it didn’t have to be entered manually. However, because hard-wired software was expensive, multi-stage bootstrapping continued.

This isn’t just a history lesson, though, as much the same happens in modern PCs. When you first switch on your PC, it executes software called the BIOS (Basic Input/Output System), which is stored in non-volatile flash memory on the motherboard. Because flash memory is now comparatively cheap, the BIOS is a lot more sophisticated than the hard-wired load programs of old. First it tests the hardware to ensure it’s working correctly, and only if the PC passes this so-called POST (Power-On Self-Test) does the BIOS go on to load the operating system from the hard disk. From then on, the operating system is responsible for loading applications from disk, but the parallels with bootstrapping those early computers are clear to see.
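As a small illustration of that hand-over, a legacy BIOS checks for the boot signature bytes 0x55 and 0xAA at the end of the first 512-byte sector before executing the boot code that sector contains. The sketch below reads a disk image file (the filename is hypothetical) and performs the same check; note that modern UEFI firmware boots rather differently.

```c
#include <stdio.h>

/* Check whether the first 512-byte sector of a disk image ends with
   the 0x55 0xAA boot signature that a legacy BIOS looks for before
   handing control to the boot code in that sector. */
int main(void) {
    const char *path = "disk.img";   /* hypothetical disk image */
    unsigned char sector[512];

    FILE *f = fopen(path, "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fread(sector, 1, sizeof sector, f) != sizeof sector) {
        fprintf(stderr, "could not read a full 512-byte sector\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    if (sector[510] == 0x55 && sector[511] == 0xAA)
        printf("boot signature present: a BIOS would try to boot this\n");
    else
        printf("no boot signature: a BIOS would skip this device\n");
    return 0;
}
```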

HOW ‘QUICK AND DIRTY’ WON MICROSOFT THE CLONE WARS


A long time ago, back in 1980, IBM was the top dog in computing. However, other companies’ personal computers (as opposed to the big business mainframes) were starting to appear and were eating into IBM’s business. In response, the company came up with its platform-defining IBM Personal Computer, but time was tight and so it turned to a variety of external companies for the various elements.

While the processor came from Intel, the operating system was sourced from Microsoft. However, the company didn’t have the necessary code and so it looked around for someone who did. Its first efforts were to broker a deal between IBM and a company called Digital Research, but when this fell through Microsoft quickly purchased the rights to Seattle Computer Products’ Quick and Dirty Operating System (QDOS). Microsoft tidied up the code, and it launched with the IBM Personal Computer as PC DOS.

However, Bill Gates had cleverly bartered away the rights to perpetual royalty payments from IBM in return for the right to sell PC DOS, later renamed MS-DOS, to any computer manufacturer it wanted. Gates seems to have foreseen the huge rise in clone PC makers the IBM PC would create - largely because it was built from off-the-shelf parts, but also because of the high prices IBM was charging. Microsoft boomed off the back of licensing MS-DOS to these PC clone makers; it went on to create Windows in 1985, and the rest is history. That is why Microsoft and Windows are synonymous with the PC, while IBM has become something of a footnote.

BUNDLED UTILITIES


Most operating systems come bundled with a whole load of utilities that, strictly speaking, are separate applications rather than features of the OS. However, it would be splitting hairs to ignore them; indeed, they’ve been present since the very early days. Some of the MS-DOS commands, for example, actually caused separate programs to run, and bundled utilities have been a part of the Windows experience since the introduction of a clock and a calculator in Windows 1.0.

Operating systems increasingly include lots of separate programs, and much of what we consider to be core functionality is really provided by a separate application that runs under the host OS. Here we could include some of the most fundamental facilities such as the File Manager and Windows Explorer, plus a whole raft of peripheral applications such as Paint and WordPad. But the question of what is and what isn’t part of the operating system goes deeper than this. No longer is the operating system a single huge program, even if we strip away these bundled utilities. Because operating systems allow multiple programs to run concurrently, it makes sense for the operating system itself to be split into several smaller, more manageable programs.

MICROSOFT’S A-VERSION TO CHANGE


You might be surprised to learn that the number on the Windows box no longer matches up to the version number of the operating system itself. It all started well enough with Windows 1.0 back in 1985, and Windows 2.0 and 3.0 followed the logical pattern.

Windows 95 changed tack from a marketing point of view, but under the hood the name Windows 4.0 made perfect sense. Windows 98 was 4.1, while Windows XP was called 5.1 (as, technically speaking, it followed on from Windows 2000, which was NT 5.0).

Things started to go wrong with the introduction of Windows Vista in 2007, which was named version 6.0. The problem was that many of the mass of applications written during the lengthy XP years simply couldn’t cope with an operating system that wasn’t 5.x. It was this change that largely led to Vista’s now notorious compatibility problems with older software.

Microsoft decided not to make a bad situation worse, so with the launch of Windows 7, despite its official moniker, the version number was simply bumped to 6.1. This meant that any quick fixes that software developers had made should have continued to work. The thinking even continued with Windows 8, which is version 6.2, and Windows 8.1, which is 6.3.

Thankfully it looks like Windows 10 is wiping away all that confusion, sort of, with a little misdirection. The new operating system will be called version 10.0 by all accounts when it’s released. However, any old software that hasn’t been written specifically to work with the new operating system will be tricked into seeing the version number as 6.2 (Windows 8). Newer software will have the correct code to see the real version number and therefore be able to make full use of the new features. So Microsoft has finally fixed the problem, with a little smoke and mirrors.
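To see what this looks like from a program’s point of view, the sketch below uses the long-standing GetVersionEx call. On Windows 8.1 and later, the version it reports depends on whether the application declares its compatibility in a manifest, which is exactly the behaviour described above; treat this as an illustration rather than the recommended way to detect the Windows version.

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* GetVersionEx is the classic way programs asked which Windows
       version they were running on. It is now deprecated, and from
       Windows 8.1 onwards the answer depends on the application's
       manifest: without one, newer versions report themselves as 6.2. */
    OSVERSIONINFOA info;
    ZeroMemory(&info, sizeof info);
    info.dwOSVersionInfoSize = sizeof info;

    if (!GetVersionExA(&info)) {
        fprintf(stderr, "GetVersionEx failed, error %lu\n", GetLastError());
        return 1;
    }
    printf("reported version: %lu.%lu (build %lu)\n",
           info.dwMajorVersion, info.dwMinorVersion, info.dwBuildNumber);
    return 0;
}
```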