Tuesday, 16 December 2014

Stone-age technology

Phones, tablets and processors always seem to be improving, yet some tech has barely advanced in decades. We look at the stone-age technologies that are in need of a bit of evolution

Voicemail


Does anyone actually like voicemail? Born from the dying embers of answering machines, it once seemed useful - but not any more. Often you’re forced to wade through menus, pressing random digits on your handset to hear a rambling message with no actionable points, which would have been far better expressed in an email - despite the limitations of that medium, as we’ll discuss below.

Voicemail has evolved somewhat over time. Voice messages can now be attached to emails, or automatically transcribed by services such as VoiceCloud. Google also offers its own Google Voice transcription service, if you happen to be one of the three people worldwide who use Google Chat.

But none of this is enough to give us hope. The best we can wish for is that in the future our various personal systems become so interconnected and clever that they’re aware if we’re available to take a call or not; and if we’re not, they can invite people to call back later, leave a brief message to be automatically transcribed, or just leave us in peace.


Printers


We’ve lost count of the false dawns surrounding the paperless office, the most recent being when the iPad came along. “I’ll carry all my documents with me on this marvellous A4-sized device,” executive after executive said as they justified their expensed purchase, “and in doing so I’ll save the environment!” While that certainly hasn’t happened, we have seen a falling demand for personal printing - much of it due to the growth in smartphone use rather than tablets. Phones, in tandem with Facebook and Flickr, are now the go-tos for sharing photos, and the need to print out directions has evaporated thanks to the phone in our pockets and the GPS device mounted on the dashboard of our cars.

Work-related printing isn’t showing any sign of disappearing, but that may yet come, as millennials gradually become the largest demographic group in the office: a MetaFacts study in 2012 found that 18- to 24-year-olds were far less likely to use physical print-outs than those over 54.

Email


The email standard we use today was created in 1982, but it builds on mainframe messaging systems dating back to the 1960s. And it shows: it’s a barebones service, suitable for a small community using basic terminal hardware. It wasn’t intended to grow into a global medium for professional and personal communications - and frankly, it’s a poor fit for the job.

For starters, there’s no confirmation when a message reaches its destination; no support for text formatting or embedded images; no privacy (unless encrypted by the sender, messages pass from server to server in plain text, which can be read by anybody in between); and no distinction between different types of message. There’s also little user authentication, which means any spammer can create a fake identity, fire off a million messages and vanish into the ether.
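
To see just how bare the standard is, here’s a minimal sketch using Python’s standard smtplib and email modules (the addresses and mail server are placeholders, not real systems). Everything the message carries is visible: a few headers the network simply takes on trust, and a plain-text body readable by every server it passes through.

```python
# A minimal sketch of a standard email (Python 3, standard library only).
# The addresses and SMTP host below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "anyone@example.com"   # nothing in the standard verifies this
msg["To"] = "someone@example.org"
msg["Subject"] = "Just headers and plain text"
msg.set_content("Unless the sender encrypts it, this body travels "
                "as readable text from server to server.")

# Hand the message to a relay; no confirmation of delivery ever comes back.
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```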

Over the years, there have been attempts to address these shortcomings: read-receipt systems, HTML email and endless anti-spam measures. The problem is, since these efforts are all grafted onto the standard, rather than being part of it, they can’t be relied upon to work predictably across the numerous clients and services in use. Hands up who’s ever received an email with messed-up formatting?
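
Read receipts illustrate the point: the usual mechanism is nothing more than an extra header bolted onto the message, which the receiving client is free to honour, prompt about or silently ignore. A sketch of the same idea, again with placeholder addresses:

```python
# A grafted-on fix: requesting a read receipt via the
# Disposition-Notification-To header. Nothing obliges the recipient's
# client to act on it - many simply ignore it.
from email.message import EmailMessage

receipt_request = EmailMessage()
receipt_request["From"] = "anyone@example.com"
receipt_request["To"] = "someone@example.org"
receipt_request["Subject"] = "Did this arrive?"
receipt_request["Disposition-Notification-To"] = "anyone@example.com"
receipt_request.set_content("A read receipt has been requested - "
                            "whether one comes back is another matter.")
```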

Email is now so deeply entrenched that it may be impossible to upgrade - but that doesn’t stop people from trying. Most recently, Google launched its Inbox app, which natively supports rich content and tries to help the user manage their mail by drawing on the sender’s broader online identity. It isn’t clear whether the world at large is eager to centralise yet more of its communications on Google’s servers, but frankly it’s likely to be a better experience than carrying on with the decrepit email standard.

Desktop PC


It’s always the CRT monitors you notice first when watching a 1990s film. Hairstyles may have shifted, clothing fashions come and gone, but it’s the flickering CRT monitor that really nails a film to its era with carbon-dating-like accuracy - more so, at least, than how old Tom Cruise looks in it.

Yet switch your attention to the PC attached to the monitor and it’s surprising just how much has remained the same. Tower desktop PCs have used the same basic design for nigh-on 20 years, and the story is much the same inside the case: the motherboard may be blue or black rather than green these days, but fundamentally we’re still plugging cards into slots and attaching hard disks via cables.

Why hasn’t the world moved on? The answer likely boils down to power and convenience. If you need maximum processing power, a tower case allows space for a meaty desktop chip, along with the hardware needed to deal with its heat generation and power consumption. It’s a similar story with graphics cards, with multiple GPUs now the norm in enthusiasts’ desktops.

And if you’ve ever needed to maintain a fleet of PCs, you’ll immediately recognise the convenience of being able to remove the side from a device and add more memory, replace a hard disk (or add a second one), and upgrade core components such as the processor and graphics card. In contrast, a laptop that falls short usually has to be replaced outright, with all the delays that can cause. The end result: we still think desktop PCs will be kicking around, going strong, for decades to come.

Cables


In the 1890s, Guglielmo Marconi perfected his first radio transmitter. Yet today our PCs still rely heavily on physical cabling, not only for power but also for communicating with peripherals and other devices.

There are many reasons to get shot of cables. They’re messy, as anyone who’s ever rummaged around the back of a computer desk will attest. They’re liable to become damaged or lost, serving as an unnecessary point of disconnection between otherwise compatible hardware. And the business of producing them is wasteful: it’s estimated there are two billion PCs in use worldwide, and a similar number of smartphones, so just think of the energy that’s expended producing USB cables.

Things are improving: Bluetooth is increasingly the norm for keyboards, mice, loudspeakers and other low-bandwidth accessories, and wireless printers are typical. But compared to USB 3 and Thunderbolt, even “fast” wireless technologies such as Wi-Fi Direct are slow to transfer large files, and lack the bandwidth needed to drive a Full HD or 4K screen. Similarly, wireless charging systems such as Qi can deliver enough juice to charge a phone or smartwatch placed directly onto a pad, but we’re some distance from powering a laptop from afar as you sit with it on the sofa.
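
The bandwidth gap is easy to quantify with some back-of-the-envelope arithmetic. The sketch below assumes uncompressed video at 24 bits per pixel and 60 frames per second; the comparison figures are theoretical peaks, not real-world benchmarks.

```python
# Rough arithmetic: raw bandwidth needed to drive a display, uncompressed.
def display_gbps(width, height, bits_per_pixel=24, fps=60):
    """Raw pixel data per second, in gigabits."""
    return width * height * bits_per_pixel * fps / 1e9

print(f"Full HD (1920x1080 @ 60Hz): {display_gbps(1920, 1080):.1f} Gbit/s")  # ~3.0
print(f"4K      (3840x2160 @ 60Hz): {display_gbps(3840, 2160):.1f} Gbit/s")  # ~11.9

# Theoretical peaks for comparison: USB 3.0 ~5 Gbit/s, Thunderbolt 2 ~20 Gbit/s,
# Wi-Fi Direct on typical 802.11n hardware ~0.25 Gbit/s.
```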

Initiatives are underway, however, to overcome these hurdles and get rid of cables once and for all. As we reported last month, chip giant Intel is working on a reference design for a fully wireless laptop, including technology that’s capable of communicating with peripherals and displays at gigabit speeds. We’re talking five years at the very least for wireless computing to become the standard, but after 120 years of being tied down by cables, that isn’t such a long wait.

Keyboards


The Qwerty keyboard was invented in the 1870s, and while it’s gained a few keys since - see “Alt Gr” and the mysterious “Scroll Lock” - its basic mode of operation hasn’t changed one jot. That’s shocking, because having to manually push a dedicated button for each character in your sentence is a horribly inefficient way to communicate. Project the keyboard onto a flat, cramped touchscreen, as on today’s smartphones and tablets, and it makes even less sense.

Yet the keyboard continues to chug along, principally because the obvious alternatives are far worse. Handwriting recognition can’t deliver either the accuracy or speed we demand, and while voice recognition is getting better, nobody wants to live in a world where we’re jabbering away at our smart devices all the time.

We’ve recently seen a few fresh approaches. On the mobile front, gestural typing systems such as Swype make typing more fluid, while in the physical realm one occasionally comes across innovative projects such as the FrogPad - a 20-key keyboard that sadly never reached commercial production. A shame, since its one-handed design would have meant an end to perpetually switching between keyboard and mouse.

What we’re really waiting for, though, is a move beyond the model of mapping single letters onto keys. One promising idea is the Microsoft Research Type-Hover-Swipe project, which detects gestures in the air above the keyboard as well as key presses. Such an approach could see us using a form of sign language to “type” in units of words and phrases rather than individual characters - allowing us to get down hundreds of words per minute by simply twiddling our fingers.

Batteries


If you’re using a gadget away from the mains, we can almost guarantee it’s thanks to lithium-ion (Li-ion) technology. The idea of Li-ion batteries was first proposed by Exxon engineer M Stanley Whittingham in the 1970s - but it took decades of research by academics at the University of Oxford, Bell Labs and the University of Texas before the idea took off. That’s because lithium isn’t exactly safe to keep around: leave it sitting out on the counter and it will burn down your house. Modern batteries therefore use certain compounds of lithium - this is where the “ion” comes in - that make them less dangerous.

The first commercial Li-ion batteries were released by Sony in 1991, and within two decades they dominated the market. They remain popular because they simply work: they don’t suffer from the memory effect, so you don’t need to run them down before recharging, and they self-discharge more slowly than nickel-cadmium cells.

However, lithium-ion batteries age poorly; capacity begins to deteriorate after only a year - whether or not the battery is in use - and the battery can become effectively useless after as little as three years. Improvements have been made over the years, as researchers tweak designs by using different metals and chemicals, with IHS iSuppli analyst Thomas McAlpine reporting a 7% annual improvement rate. But this isn’t enough to make a significant impact on our gadgets’ battery life - especially since larger screens add to power demands.
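
To put that 7% figure in perspective, a quick compound-growth sketch shows how slowly it adds up:

```python
# How slowly a 7% annual improvement compounds (a rough illustration).
import math

annual_gain = 0.07
years_to_double = math.log(2) / math.log(1 + annual_gain)

print(f"Capacity after five years of 7% gains: {1.07 ** 5:.2f}x the original")  # ~1.40x
print(f"Years needed to double capacity at 7% a year: {years_to_double:.1f}")   # ~10.2
```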

Happily, innovation is happening, both in lithium-ion technology and in other formats. A team of researchers at Stanford University is working on silicon nanotube Li-ion batteries; by using silicon in the anode - the negative electrode - researchers have managed to create cells offering ten times the capacity, alongside longer usable life. Meanwhile, the California Lithium Battery company is looking to pair silicon with graphene in the anode, saying the combination could offer triple the capacity of conventional Li-ion models.

Phone numbers


The idea of needing to memorise a ten-digit sequence merely to communicate with someone seems almost like a step back from the earliest days of the telephone, when an efficient corps of ladies would patch you through to your requested caller - and probably listen in on the juicier calls, a pleasure these days allegedly reserved for the NSA and GCHQ.

The days of phone numbers appear to be, well, numbered, with most people now opting to tap an avatar rather than dial the actual number. In fact, we’re heading towards a time when pressing “Tom” on your phone (or whatever we call our personal communications device) will engage location-awareness technology to opt for the cheapest or most convenient method by which to connect to him.

We wouldn’t be at all surprised if, within a couple of decades, phone numbers are seen as a relic of early technological development, in much the same way we now look at CRT monitors.