Saturday, 11 June 2016

Things We’ve Stopped Doing On The PC

Mark Pickavance looks at a collection of once-common activities, which have now been all but abandoned

Having been along for the ride, it’s often difficult to appreciate the sheer amount of change that has occurred over the past 30 years or so. Larger storage, faster processors, better technology and the internet have all come along and rewritten the PC playbook.

Because of this, there are activities that many PC owners did that they no longer do – and I’m not just talking about loving the colour beige.

Do you remember doing any of these, or are there some you’re sheepishly still doing?


Compressed Storage


About the time that Windows came along, there was a curious phase where the cost of storage and the size of drives got completely out of balance with what PC owners could actually afford. As people used their systems more and their collection of apps and data grew, they rapidly discovered that the hard drive they’d bought just wasn’t big enough.

Luckily, the power of the processors available at the time offered a potential solution in the form of compressed volumes, which first appeared with AddStor’s SuperStor, bundled with DR DOS 6.0.

The thinking behind these products was relatively simple but effective. A single large file was defined on the existing volume and then, via the app, presented to the OS as if it were another physical drive. Any data placed in that container was automatically compressed on write and decompressed when it was required.

How well this generally worked was, I recall, rather surprising. A document would often load faster from a compressed volume than from an uncompressed one, because less data had to be read from the slow drive and the processor could decompress it quickly.

The downside was that it was very difficult to work out how much extra space this would squeeze out of the storage you had, because some types of data compressed better than others. Zip files, for example, gained you nothing, as they were already compressed and couldn’t be shrunk any further.

Where it really shone was with documents that by their very nature contained lots of repeated bytes or sequences, like those typically found in word-processed content.
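You can see the effect with any general-purpose compressor. The sketch below uses Python’s zlib module rather than the proprietary algorithms Stacker and DoubleSpace employed, but the principle is the same: repetitive text shrinks dramatically, while data that is already compressed barely shrinks at all.

```python
# A rough demonstration, using zlib rather than the proprietary Stacker/DoubleSpace
# algorithms, of why some data compressed well on these volumes and some didn't.
import os
import zlib

# Word-processed text is full of repeated words and byte sequences.
document = ("Dear customer, thank you for your recent order. " * 200).encode("utf-8")

# Random bytes stand in for an already-compressed file, such as a .zip archive.
already_compressed = os.urandom(len(document))

for label, data in (("word-processed text", document),
                    ("zip-like data", already_compressed)):
    packed = zlib.compress(data)
    print(f"{label}: {len(data):,} bytes -> {len(packed):,} bytes "
          f"({len(packed) / len(data):.0%} of the original)")
```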

Pretty soon, everyone was using one of these compression tools, with the most popular being Stacker by Stac Electronics.

Disturbed by the popularity of this tool, Microsoft decided to put this functionality in MS-DOS 6.0 and developed ‘DoubleSpace’, which effectively did exactly what Stacker did. However, it later turned out that DoubleSpace infringed on two data compression patents that Stac owned, and litigation ensued. Eventually, Microsoft paid for this problem to go away – over $80 million, all told.

Within a few years, the necessity for compressed volumes had passed, as drive capacities and speeds increased and the cost per megabyte dropped.

Ten years later, very few people were using compressed volumes, and those who needed that type of technology used operating systems with the functionality baked in, rather than as an additional app. The last version of Windows to include it was Windows 98 SE, which had DriveSpace 3.

Drive compression, then, was a very popular solution that most PC owners in the 90s used, but it failed to stand the technological test of time.

Using KVMs


They still sell KVM (keyboard, video, mouse) hardware, so is my contention that this is a thing of the past valid?

It is, because the entire point of the KVM originally was to allow one monitor and one keyboard to be used to control multiple machines, and these days there are much better ways to do this than some horrific rat’s-nest wiring solution.

My first experience with a KVM was in the control of servers, where in the machine room there was only room for one screen, so it needed to be connected to multiple machines.

Thinking about it logically, surely it would have been better to put more than one input on the monitor? But the KVM came along, and by twisting a physical dial you could switch a single keyboard, display and, later, mouse between machines – initially just a couple of them.

There were, however, multiple problems with these devices, which made them less than a joy to operate – not least the amount of cabling that came with them. For each machine on the KVM there were at least three cables, and the VGA line in particular was thick and inflexible. As these boxes progressed from two to three and even more supported machines, the number of cables and the complexity of the wiring inside grew dramatically.

But that wasn’t the only problem, because the PC was never designed to have its keyboard disconnected and reconnected while it was running. The keyboard has a processor that’s powered from the PC and initialised when the machine is first turned on. Having it reset repeatedly could lead to problems: lost connections and spurious key sequences being sent to the attached system.

The solution to this was an electrically active KVM that kept the keyboard powered and thinking it was attached to a PC while its physical connections were redirected.

These improvements helped, but the other problem was that there was still a practical limit as to how many machines you could manage using one.

What really spelled the end of the KVM were two things: USB and remote desktop.

USB by its very nature doesn’t like having two or more hosts, so it didn’t work well in a KVM context. And with the advent of remote desktop tools, the necessity for IT staff to actually visit the machine room to make server adjustments diminished. They could make the same changes from their own PCs, for as many servers as they needed.

For those changes that needed a physical presence, USB mice and keyboards are built to be attached (and detached) at any time, and screens these days can have multiple inputs.

I’m sure there are plenty of people still using KVMs, but in reality they’re unreliable and overly complicated accessories that need to be consigned to history.

Fax


Before the letters page goes crazy with people declaring themselves as fax fans, I’ll point out that I’m just the messenger, so don’t shoot me.

Originally called telefacsimile, this technology was referred to as fax once it became popular with business in the late 70s. But the concept of sending images using a telephone line dates back to the 1920s.

These machines became the darlings of the print industry, enabling newspapers to put pictures from around the world on their front pages within hours of events breaking.

The breakthrough for the fax came in the 60s when Xerox developed a machine that was small enough to transport easily, and a decade later they started to invade offices all over the world.

After early analogue standards, eventually digital ones came along that took their cues from the speed increases that dial-up modems experienced.

In their ultimate incarnation, they used ISDN to reach transfer speeds of about 8KB/s, though both ends needed ISDN for that to work.

The problem with the fax was that it tied up an entire telephone line at both ends to send a poor-quality representation, usually in mono, and it took an age to do it.

Given the amazing technology we have these days, how is this still a thing? I’ve heard numerous arguments about why the fax still exists, most of which are utter rubbish when you analyse them. One is that businesses have confidence that a faxed message will always get through, whereas an email could easily be overlooked or deleted.

Much of this confidence seems to stem from the receipt the system gives you saying the fax was received at the other end, ignoring the reality that at that point there is no guarantee it was actually printed. A machine that has run out of paper or toner, or has had its memory cleared, undermines that argument further.

Also, because of the lack of effective error checking, the sender has no idea whether the critical information on the document made it across the transmission intact, or whether an omission would even be noticed by the recipient.

When you compare this with an email, where the contents are identical on arrival to how they were sent, the fax seems terribly ineffective.
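By way of illustration, here is a deliberately simple sketch of the kind of integrity check any digital transfer can apply. Email doesn’t use an explicit hash like this – its transport layers do the equivalent error checking behind the scenes – but an analogue fax offers nothing comparable.

```python
# A minimal sketch of digital integrity checking: if the hashes match, the received
# copy is byte-for-byte identical to what was sent. (This isn't how email is
# implemented, but its transport layers perform equivalent error detection;
# an analogue fax has nothing comparable.)
import hashlib

original = b"Please transfer 10,000 to account 12-34-56 78901234 by Friday."
received = original  # imagine this copy arriving at the far end

if hashlib.sha256(received).digest() == hashlib.sha256(original).digest():
    print("Received copy verified identical to the original.")
else:
    print("Transmission error detected - the contents differ.")
```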

There is only one situation where the fax still has sway, and that’s in a legal context. As silly as this is, in many countries, electronic signatures on contracts are not yet recognised by law, while faxed contracts with copies of signatures are.

This scenario is purely about the inability of the legal framework to address the changes of technology in a reasonable time, rather than a validation of a fax as being in any way superior to a dozen other methods of identity confirmation.

Unless you’re a lawyer or Japanese, if you’re still using fax technology with the excuse that ‘it’s a technology I understand’, then you really need to retire, along with your fax machine.

Desktop Publishing


In the 80s, probably the coolest thing you could do with a computer was desktop publishing. Only a few years before, producing typeset output was something only union-approved experts were capable of; then suddenly everyone got to try, simply by installing software.

This did, however, somewhat miss the point: the real skill of typesetting was never about throwing words at a page or using as many fonts as the system came with.

So for at least a decade, many people produced unreadable newssheets and posters and wrestled with the demons of hyphenation-justification tables, while the likes of Adobe and Quark made an absolute mint out of them.

And then, probably not before time, our interest in killing trees just to see our own page composition skills in action waned and the market for desktop publishing tools dried up.

Yes, there are some, like those who put together this publication, who use QuarkXPress or Adobe InDesign, but for the majority of people these are tools they’ll never own or even aspire to have.

At school, you might encounter Microsoft Publisher, but many people can just as easily make Microsoft Word produce very similar results if they need to create a pamphlet or poster.

QuarkXPress for the PC costs around £1,000, so it isn’t an application that many people would invest in on the off chance that they might need to publish something.

I haven’t had a desktop publishing application on my work PC since the Pentium II was popular, which tells you just how beyond that era we really are.

Defragmenting


Did you ever defragment your drives? Most people did at some point, because otherwise your system would become really sluggish and temperamental whenever you went to write a big file.

That’s because if a file can’t be written to successive disk sectors, it ends up in little pieces all over the disk, making both the write and the later retrieval of the data slower.

But actually, when I think back, defragging was also about what happens when you use 90% (or more) of a hard drive and the writing and reading of files becomes highly inefficient.
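To picture why a nearly full drive fragments so badly, here is a toy sketch – not a real filesystem allocator, just an illustration – of what happens when a new file has to be squeezed into whatever scattered free blocks remain.

```python
# A toy illustration (not a real filesystem allocator) of why a nearly full drive
# fragments: the remaining free blocks are scattered, so a new file gets split
# across whatever gaps are left and the heads must seek between them to read it back.
import random

random.seed(1)

DISK_BLOCKS = 100
disk = ["used"] * DISK_BLOCKS

# On a 90%-full drive, the free 10% is rarely one contiguous run.
for block in random.sample(range(DISK_BLOCKS), 10):
    disk[block] = "free"

# Write a new "file" that needs 8 blocks into the first free blocks we can find.
file_blocks = [i for i, state in enumerate(disk) if state == "free"][:8]
print("File stored in blocks:", file_blocks)

# Count how many separate runs of contiguous blocks (fragments) that produced.
fragments = 1 + sum(1 for a, b in zip(file_blocks, file_blocks[1:]) if b != a + 1)
print("Fragments:", fragments)
```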

Since then, there has been a multi-pronged approach to solving fragmentation that’s all but eliminated the need for defragging.

For starters, far fewer people run their systems anywhere near full, and operating systems try to be more organised when writing data, to reduce the build-up of fragmentation. Having more RAM to pre-organise data before it’s written has helped, and the cache on the drives themselves provides another buffer to aid the process.

All these developments helped minimise the impact of fragmentation and the necessity to run a defrag.

On most modern systems, the defrag tool is now entirely redundant, because they use SSDs that don’t have moving parts. It takes no longer to write or read data to an SSD in a fragmented or contiguous space, because there are no spinning disks or moving heads.

What’s much more important on these devices is that they write evenly to the whole capacity, not repeatedly favouring one location over another.
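This is the principle of wear levelling. Real SSD controllers are far more sophisticated than this, but a toy sketch captures the idea: direct each write to whichever block has been erased the least, so the wear spreads evenly across the whole device.

```python
# A toy sketch of the wear-levelling idea (real SSD controllers are far more
# sophisticated): send each write to whichever physical block has been erased
# the least, so no single block wears out ahead of the rest.
BLOCKS = 8
erase_counts = [0] * BLOCKS

def write_block(_data):
    """Pick the least-worn block for this write and record the erase."""
    target = erase_counts.index(min(erase_counts))
    erase_counts[target] += 1
    return target

for i in range(20):
    write_block(f"payload {i}")

# The erase counts end up spread almost perfectly evenly across the blocks.
print("Erase counts:", erase_counts)
```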

What’s slightly curious about this is that in Windows the defragmentation tool is still available for drives the system knows are SSDs, even if running it achieves nothing.

In fact, it’s worse than nothing, because rewriting gigabytes of data into sequential blocks reduces the life of your drive for no appreciable benefit.

If you have an SSD and defrag, please stop! And those with hard drives, make sure you really need to do this before starting, as it can take an age on a high-capacity drive.

Partitioning Hard Drives


When I first got into computers, ‘Winchester’ drives didn’t really exist for the home user, though they arrived soon enough once we’d fully exploited the humble floppy disk.

The first hard drive I owned was a 30MB (not a typo, 30MB!) Megafile for my Atari ST, and even with that little space on it, I partitioned it!

Why? Because the TOS operating system it used was a derivative of CP/M-68K, so it could only address a 16MB partition.

As systems evolved through the 80s and 90s, this was a problem that was stumbled into numerous times. FAT16 could initially only handle 32MB partitions. That grew to 2GB before FAT32 came along, supporting partitions of up to 16TB but with a maximum file size of only 4GB (minus one byte).
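Those limits fall straight out of the field widths the filesystems used, as a little back-of-envelope arithmetic confirms (the figures below assume traditional 512-byte sectors, and 4KB sectors for the 16TB FAT32 case).

```python
# Back-of-envelope arithmetic behind the partition and file size limits above.
SECTOR = 512  # bytes; the traditional sector size

# Early FAT16 under DOS: partition sizes limited by a 16-bit sector count.
print("Early FAT16 partition limit:", (2**16 * SECTOR) // 2**20, "MB")        # 32 MB

# Later FAT16: 16-bit cluster numbers with clusters of up to 32KB.
print("Later FAT16 partition limit:", (2**16 * 32 * 1024) // 2**30, "GB")     # 2 GB

# FAT32: a 32-bit sector count, so 2TB with 512-byte sectors or 16TB with 4KB sectors.
print("FAT32 partition limit (4KB sectors):", (2**32 * 4096) // 2**40, "TB")  # 16 TB

# FAT32 stores file sizes in a 32-bit field: 4GB minus one byte.
print("FAT32 maximum file size:", 2**32 - 1, "bytes")
```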

But even under FAT32, people were often encouraged to take a single hard drive and split it into logical volumes, so they could run more efficient allocation tables (smaller partitions meant smaller clusters and less wasted space) and also learn to keep the OS and data separate.

Part of this was to do with academic notions of organisation and also the influence of Unix on early computing, where symbolic links allow you to distribute parts of the file system around different drives.

However, if you look at what Microsoft did with MS-DOS and then Windows, you’ll see it ignored all this and put everything in a single partition and directory structure, with subdirectories as the only segmenting control. Even with Windows 10, it has never actually moved away from this model, and organising a Windows system to use different partitions for the system, apps and data is a real challenge for those who insist on doing it.

The problem with doing that is that by dividing a drive into multiple partitions, you’re assuming you know exactly how much of each you’re likely to fill, and rarely are people that accurate.

If you run out of space in the app partition but have plenty in the data area, you’ll need to resize the partitions to move the unused space around. If everything user-accessible is in a single partition, that just isn’t necessary.

The only solid argument these days for partitioning is if you wish to isolate multiple operating systems from each other, as in dual-booting. By giving each OS its own partition, you can hopefully stop each one making changes to file structures that the others wouldn’t care for.

But we’re talking about very technical things here that the average user wouldn’t understand or want to deal with. And even those who are technical have better things to do than mess with the partitioning of their drives when just letting the operating system allocate space logically works 99% of the time.

There are plenty of apps for adjusting partitions, and Windows itself has facilities to grow, shrink and even span them across multiple drives. But frankly, most people would rather spend their time using their computers than reorganising them at a partition level these days.

Connecting Printers Directly


For older readers, I’m going to use a rude word that only they would understand. Centronics! There, I said it. Early printers used either entirely proprietary connection methods or the dreaded RS232 serial interface, and then parallel printing came along with the Centronics connector.

This was a really horrible thing that wasn’t sufficiently standardised and gave many IT people headaches through its abysmal level of reliability.

Based on the parallel communication interface developed by Centronics in the 1970s, it was later standardised as IEEE 1284 (which added bi-directional modes), but it still had 36 pins on an overly complicated cable and only worked when it was in the mood.

What was really scary about this technology was that people actually started using it for other things, like scanners and even tape backup drives, because every PC had one.

As PC technology spread into businesses, they often whined about the cost of buying a printer for every PC and those abysmal cables to connect them. So in a very similar fashion to KVM hardware, office suppliers started offering Centronics switch boxes, so a printer could be easily connected to more than one computer. The number of wires in these was tremendous, even if they only switched between two computers. Ones that could handle more machines must have looked like early Bletchley Park experiments internally.

With so many extra connections, this already unreliable technology didn’t get any more robust.

Thankfully, USB came along eventually to save us from Centronics, but this was still promoting the idea of one PC equals one printer.

And then wi-fi came along, and while it appeared only on the most expensive printers at first, soon it was on even the cheapest. Today, HP’s Envy 4502 Wireless e-All-in-One inkjet costs less than £30, and it can operate without ever being physically connected to the PC that’s using it. This frees you up to put the printer where it best suits you, not on your desk next to your computer. Unless you have a PC with no wi-fi or no access to a wi-fi router, there’s no need to connect your printer to your computer directly.

For those with printers that predate the wired/wireless network revolution, there are now devices you can purchase that add this functionality locally to them.

The era of wiring up printers to computers is behind us, and the only reason for doing it is to avoid configuring the wi-fi properly.

Using Screensavers


A decade ago, if you went into an open-plan office at lunchtime, all you’d see was screensavers running. Often they’d be the ones that came by default with Windows, but occasionally you’d see ones people had installed themselves, like the fish tank.

These appeared in response to a problem with CRT monitors: if they were left displaying the same thing for prolonged periods, that image would be permanently burned into the screen.

Logically, you’d think that it would be better to have a blank screen and the monitor in power saving mode, but CRTs didn’t fire up immediately, so moving images were generally the approved solution.

However, modern LCD monitors don’t suffer from these problems, yet people still run screensavers on them. Why? Well, it does tell you that the computer is on, if you can’t see the power light, but other than that, it doesn’t serve much purpose.

It’s much better for the person paying the electricity bill that they go into power saving mode, and that’s the default that Windows uses.

While custom backgrounds are still popular, the notion of a screensaver is one we’ve moved beyond.

Those who are using them almost certainly aren’t saving their screen or their pocket, even if they can be a pleasant distraction.

Placing PCs On Desks


I’ve thought quite hard about this, and I’ve come to the conclusion that this all started with systems like the Commodore Pet, where the monitor and system were all in the same box.

The Apple Mac was the same, though the IBM PC did have the screen and system box as separate items. However, to use the PC, you needed to access the floppy drive slots, and the monitor needed a plinth to raise it up to a decent working height.

The idea that the computer would occupy your desk space also seemed to represent the notion that you didn’t need that work area for papers, because you had a computer.

Ironically, the reality was that many offices ended up buying extra workstation furniture so the desk wasn’t occupied by the PC. This model of the computer with a screen on top went on for a significant number of years, and systems like the iconic Amstrad PCW8512 reinforced the notion.

It wasn’t until the 90s that floor-standing systems started to become more popular, and people started to re-establish control of their desk space, with floating support arms for their displays and keyboards.

The only computers that are generally placed on the desk now are those, like nettop systems, that are inherently small, or machines for which there isn’t any leg space to put them underneath. Most powerful desktop systems are far too large to go on a desk, and they’re not designed to be placed in any orientation other than standing upright.

Placing computers on desks was an inherently silly idea and once people realised that, it soon went away.

Final Thoughts


I’m sure some of you reading this will be doing some of the things I’ve just mentioned, but generally they’re things of the past.

What we can take from this is that things we do now, like looking at screens or using keyboards, will, in time, become things people once did on computers.

It’s an evolution, and everything we do turns into a historical footnote eventually. Just because we hang grimly on to concepts and structures that we can relate to, that doesn’t preserve them for the next generation in perpetuity.

All systems are of the now, coloured by the past, with just an exciting hint of the future, and that’s as it should be.