Friday, 30 January 2015

The Network Switch

Mark Pickavance gives a crash course on understanding your network's critical component and why it's so important to have a good one

When I first got involved in networking, most PCs communicated by means of a networking technology called Token Ring.

Its popularity stemmed from IBM's backing and from its status as the default networking choice for Novell NetWare. Initially it offered good performance, reliability and an easily implemented topology.

But it had a major flaw: it wasn't routable. By that I mean it was very difficult to have more than a certain number of users on a Token Ring network without requiring some expensive connecting hardware, and managing 100+ users was a bit of a nightmare. The Token Ring adapters and cables were also really expensive.


The alternative was thin Ethernet. This used a daisy-chained bus topology, which meant a single faulty PC in the chain could take down all the others around it; it was slower than Token Ring, though cards and cables were cheaper.

But (and this is why I'm telling you these things) it had a marvellous capability in that it could use routed protocols, like TCP/IP. That meant it was much more suitable for large networks where you could segment the network so that the traffic of PCs didn't propagate everywhere.

The key to this type of divide-and-conquer thinking was the Ethernet hub, a concept that eventually morphed into the network switch we know today.

But let's start by explaining how a hub and a switch differ, and why we no longer use hubs in general.

Hub Vs Switch


With the advent of the twisted-pair Ethernet that we're all familiar with today came hubs that allowed you to easily wire all the computers to each other.

These were the electronic equivalent of railway stations, where data packets would enter from a connected computer and then be distributed to all the others.

The hub wasn't intelligent in any way, so it was like all the computers were stood in a large open room, with everyone hearing all the conversations.

That's not wonderful from a security standpoint. But also, if we take that analogy further, when you get to a certain number of people, nobody can hear anything.

In practical terms, that point occurred with about 16 to 20 users on 10Mbit Ethernet, because the number of transmission retries started a failure snowball.

You could segment the network, running each set of users on a different subnet, but then you couldn't talk to a server on an alternative subnet to the one your system used. What was required was something smarter, because the basic Ethernet architecture relied on each PC analysing the packets of data as they arrived and then working out if they were destined for them.

The solution to this was the managed hub, where you could create subnets isolating groups of users from the traffic of others, while maintaining a routing table to allow data to spill into other subnets when required.
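To picture what those routing tables were actually doing, here is a minimal Python sketch (the department names, subnets and addresses are invented purely for illustration): if two machines share a subnet the traffic stays local, otherwise it has to be passed across via the routing table.

import ipaddress

# Hypothetical subnets for two departments, invented for illustration.
subnets = {
    "accounts": ipaddress.ip_network("192.168.1.0/24"),
    "sales": ipaddress.ip_network("192.168.2.0/24"),
}

def route(source_ip, dest_ip):
    """Decide whether traffic stays on its local subnet or must be routed."""
    src = ipaddress.ip_address(source_ip)
    dst = ipaddress.ip_address(dest_ip)
    for name, net in subnets.items():
        if src in net and dst in net:
            return f"delivered locally within {name}"
    return "passed to the routing table to reach the other subnet"

print(route("192.168.1.10", "192.168.1.20"))  # stays within accounts
print(route("192.168.1.10", "192.168.2.5"))   # has to cross into sales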

This arrangement worked well, provided the IT people responsible for it fully understood how it worked and the routing tables were entered correctly. In a big company, IT staff usually broke the network down by departments or offices, and it was necessary to document everything you did just in case a PC needed to be moved from one location to another.

While this provided a workable solution, it was rather hand-cranked, and what network admins really wanted was an automated traffic management device: the switch.

Switch Me On


The arrival of this technology in the mid-90s totally revolutionised Ethernet networks, because they went from being a major drain on time and resources to almost a fire-and-forget solution overnight.

The switch had two major advantages, the first of which was the traffic management features I've already mentioned. This functionality interrogated incoming packets, somewhat like an old telephone exchange where you asked the operator to connect you.

By doing this, the switch could determine where the packet was going and send it on its way. But unlike the hub, it would only send that packet down the wire in the direction of the target PC, and not to all attached computers.

The effect was like each PC was on its own on the network, getting all the bandwidth available, even if it was one of a hundred machines.
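As a rough illustration of the hub-versus-switch difference, the Python sketch below (port numbers and MAC addresses are invented) contrasts a hub, which floods every frame to all ports, with a switch that learns which MAC address lives on which port and then forwards to that port alone.

def hub_forward(ports, in_port, frame):
    # A hub simply repeats the frame out of every port except the one it arrived on.
    return [p for p in ports if p != in_port]

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # learned mapping of MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        # Learn where the sender lives.
        self.mac_table[src_mac] = in_port
        # If the destination is known, send to that port only; otherwise flood.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = Switch(ports=[1, 2, 3, 4])
print(sw.forward(1, "aa:aa", "bb:bb"))  # destination unknown: flooded to ports 2, 3, 4
print(sw.forward(2, "bb:bb", "aa:aa"))  # now learned: delivered to port 1 only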

If the switch had 16 10Mbit ports, it usually had, say, 200Mbit of backbone bandwidth through which the traffic could be directed, so the backbone didn't become overloaded by the numerous simultaneous conversations.

The snag (and it was a big one) was that this only worked flawlessly if all the PCs were randomly talking to each other, whereas in reality most PCs would generally talk to a very small selection of computers, namely the file servers.

This meant that a bottleneck existed between the server and the switch, as all the requests made by the PCs got squeezed into a single 10Mbit link that connected the server to the switch.
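A quick back-of-an-envelope calculation shows the scale of the squeeze; the port counts and speeds below are illustrative rather than taken from any particular product.

client_ports = 16
port_speed_mbit = 10
server_uplink_mbit = 10

total_client_demand = client_ports * port_speed_mbit  # 160 Mbit/s in the worst case
oversubscription = total_client_demand / server_uplink_mbit

print(f"Potential client demand: {total_client_demand} Mbit/s")
print(f"Server link oversubscribed by a factor of {oversubscription:.0f}")
print(f"Per-client share of the uplink: {server_uplink_mbit / client_ports:.2f} Mbit/s")
# If every client transfers at once, each one effectively sees well under 1Mbit/s.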

For this to work as intended, high-traffic areas like the switch-to-server pathway needed to be wider. Switch makers came up with two alternative means to achieve this, and both worked after a fashion.

Channel Bonding


The first methodology was to use channel bonding, where the servers were given multiple network adapters. Early switches didn't support channel-bonding technology, but what you could do was connect one switch to each adapter and segment the network thus.

That kept the demands on each adapter down, increased overall throughput, and PCs on either side could talk to ones on the other adapters via routing on the server.

The downside of doing this was that if the server had a technical problem, then not only did that service end, but any network devices (plotters, printers, modems) on the other segments also became unreachable.

A better solution was that offered by switches that supported channel bonding, where you could plug multiple adapters into one switch and have it logically stack the bandwidth by balancing the throughput on each channel.

I remember implementing this with dual adapters, and some IT managers even installed up to four LAN NICs to achieve greater performance.
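Conceptually, bonding just spreads frames across the member links. The toy Python sketch below uses a simple round-robin scheduler; real link aggregation (such as 802.3ad/LACP) typically hashes on addresses rather than rotating, so treat this as an illustration only.

from itertools import cycle

class BondedLink:
    """Toy model of channel bonding: traffic is spread across member links."""

    def __init__(self, links):
        self.links = links            # e.g. two NICs in the server
        self._next = cycle(links)     # naive round-robin scheduler

    def send(self, frame):
        link = next(self._next)
        return f"frame of {len(frame)} bytes sent via {link}"

bond = BondedLink(["eth0", "eth1"])
for payload in [b"request-1", b"request-2", b"request-3", b"request-4"]:
    print(bond.send(payload))
# Alternate frames go down alternate adapters, roughly doubling the available throughput.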

However, a much better direction lay in the creation of 'fat pipes', where the switch had special high-speed ports that were designed to link the servers with the backbone more directly. These were 100Mbit at first, but as the technology moved on, they became 1Gbit (optical, over fibre) and then even greater.

These fat links were also used to chain switches together, cascading their backbones so that they could handle 250+ users without choking on the amount of traffic generated by multiple servers and PCs.

These days, for big systems the standard user-facing ports on the switch are often 1 Gbit, and the inter-switch and server connections are 10Gbit, fibre or copper, though it is possible to get ones from the likes of Cisco that support 100Gbit and even faster.

Home Switches


I've so far talked about massive corporate networks, where the performance of the system is critical for the numerous users, but that isn't the problem that confronts most home or small office users.

In these environments, there are often fewer than ten simultaneous users, and therefore it's reasonable to ask if a switch is really necessary.

The majority of home users just use whatever switch functionality comes with their broadband router. That's usually a four-port device, and in many cases the ports are only 100Mbit Ethernet connections.

The irony of that specification is that if the router offers good N or AC class wi-fi, it's actually quicker to communicate through it using wireless than via the wired LAN.

Those wanting better performance need to make sure they have a router that supports gigabit speeds and/or an independent gigabit switch.

I say and/or, because realistically most of us don't have greater than 100Mbit broadband links, so communicating to the outside world is usually fine on 100Mbit LAN. Where the gigabit switch comes into play is when computers and servers or, in the home context, NAS boxes, want to talk at greater speeds.
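To see why gigabit matters for local transfers even when the broadband link is slower, here is a rough Python calculation of how long a 4GB file copy to a NAS takes at each speed. It ignores protocol overheads, so real-world times will be somewhat longer.

file_size_gb = 4
file_size_bits = file_size_gb * 8 * 1000**3  # decimal gigabytes, for simplicity

for name, speed_mbit in [("100Mbit Ethernet", 100), ("Gigabit Ethernet", 1000)]:
    seconds = file_size_bits / (speed_mbit * 1000**2)
    print(f"{name}: roughly {seconds / 60:.1f} minutes")

# 100Mbit: roughly 5.3 minutes; gigabit: roughly 0.5 minutes,
# before allowing for TCP/IP and file-sharing protocol overheads.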

I'm sure that some reading this will wonder why I'm promoting wired networks for home users, when running cables isn't easy in many UK homes.

Having used wi-fi, Ethernet cabling and even Powerline technology, I can say without fear of contradiction that wired gigabit Ethernet provides the most consistently high performance almost irrespective of the size and construction of the building. Therefore, if you really want to have the best network, then resorting to some cabling provides that, even if it is only in the critical connections between the most heavily trafficked routes.

Many people use wi-fi to connect their desktop PC to their broadband router, but they'd get much lower latency playing online games if they wired directly to it or via a switch.

Wired wins on speed, latency and reliability over any wireless technology yet devised.

Home Vs Business Switches


A visit to any switch maker will reveal a wide range of products, accounting for those with both shallow and deep pockets. Normally they're divided into home and business use, and sub-divided into small business and corporate ranges.

So other than the price and the massive number of ports that some business customers like, what is the difference between these solutions? Simply put, it's a feature fest, where home users generally get unmanaged switches with a limited backbone and automated responses to traffic. The cheaper ones don't support IPv6, and they're generally not sized or accessorised to fit into a rack system.

The business users get a much wider range of facilities and port configuration, allowing the switch to be tailored to the connectivity it is likely to encounter. There are also specialist products designed specifically for data centres, service distribution and per-office deployments. As managed devices, these interface to a central control system that IT staff can access, giving them the bigger picture of data traffic movement, allowing them to dynamically reorganise the flow to remove or negate bottlenecks.

They can also initiate fail-over modes, where malfunctioning routes or hardware are routed around to maintain system connectivity.

The very latest concepts for business networking are virtual networks, where hypervisors create the illusion of physical structures that only exist in software, dynamically maximising the performance of the hardware layers beneath them.

Having functionality like this doesn't come cheap, and where a home user might pick up a five-port gigabit switch for between £10 and £20, a managed business switch with 24 ports could range from £125 to easily more than £1,000. High-end Cisco switches designed for data centre use could easily run into tens of thousands, for those that want the ultimate in data flow control.

While I don't have a managed business switch, I can appreciate why even a home user might consider getting one.

Over the past few months I've been experiencing an intermittent fault, probably cable related, that causes my switch to restart unexpectedly. Finding it might prove challenging, given that I have at least 16 cables heading to it.

A managed switch would allow me to monitor the ports and report errors, so I could immediately identify the problem run and not need to test all of them individually.

I'm happy to accept that this probably doesn't justify the expense of a managed switch, but that doesn't preclude me being attracted to the idea of solving the problem using one.

Home Use Switches


Buying a small switch for home use can be challenging, especially if you're not sure exactly how much you'll need or use it.

If you have just a handful of computers that you want to wire, then a five-port switch is probably fine, although I do strongly recommend you at least go for one with gigabit and not just a 10/100Mbit design.

Those with greater network ambitions who want to cable numerous locations will need more ports and probably a secure location for the switch to live. I've mounted mine in the attic, but under the stairs or even in the garage are all acceptable places.

It needs to have good ventilation, power and access for cables to be run relatively easily. It should also be somewhere you can get to, in case you need to reboot the hardware, should it get confused or malfunction.

For most homes an eight-, 12- or 16-port unmanaged switch is fine, and I'd always recommend having at least three ports unused, should you want to add new locations at a later date. If you run out, you can simply add another switch and use a short 'patch' cable to connect the two together.

There are some relatively inexpensive 'smart' managed switches available, like the Netgear ProSafe GS724T (£125), but I'd only consider these if you need to manage your traffic and create virtualised LANs (VLANs). For most home users, these are overkill, but they're ideal for small businesses wanting to control their rapidly expanding networks.
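For a flavour of what a VLAN actually does, the Python sketch below (the port-to-VLAN assignments are invented) shows a switch refusing to forward traffic between ports that sit in different VLANs, which is how a smart switch keeps groups of users apart on shared hardware.

# Hypothetical port-to-VLAN assignment on an eight-port smart switch.
port_vlan = {1: 10, 2: 10, 3: 10, 4: 20, 5: 20, 6: 20, 7: 10, 8: 20}

def vlan_forward(in_port, out_port):
    """Forward only if both ports are members of the same VLAN."""
    if port_vlan[in_port] == port_vlan[out_port]:
        return f"forwarded: both ports are on VLAN {port_vlan[in_port]}"
    return "dropped: ports are on different VLANs (a router is needed to cross)"

print(vlan_forward(1, 2))  # same VLAN, delivered
print(vlan_forward(1, 4))  # different VLANs, blocked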

In general, the deployment of a switch for home use is relatively straightforward, only complicated by the vagaries of running wires through a typical UK home. If you can meet the challenge of running CAT6 cables through your house without incurring a huge redecorating cost, then wiring up a switch is certainly the easy part.

What would help immensely is if the quality of the switches that are incorporated into broadband routers improved dramatically, as did the number of ports they provided. Until that happens, there will still be a place for the switch in small and big networks alike.


Glossary


A collection of important switch-related terms that you might encounter if you are looking to purchase this technology.

ACL

A network that implements an access control list (ACL) is the sort created by those who handle sensitive information. At its simplest level, an ACL defines which IPs (and therefore which devices) can talk to each other, and in more complicated setups it can even track user-specific service requests and server responses.
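At its simplest, an ACL is little more than a list of permitted conversations. The Python sketch below (the addresses are invented) shows the sort of check applied before a packet is forwarded; anything not explicitly allowed is dropped.

# Illustrative ACL: only these source -> destination conversations are permitted.
acl = {
    ("192.168.1.10", "192.168.1.200"),  # workstation -> file server
    ("192.168.1.11", "192.168.1.200"),
}

def permit(src_ip, dst_ip):
    """Return True if the packet matches an allow rule; otherwise it is dropped."""
    return (src_ip, dst_ip) in acl

print(permit("192.168.1.10", "192.168.1.200"))  # True, forwarded
print(permit("192.168.1.50", "192.168.1.200"))  # False, dropped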

Auto MDI/MDIX Crossover

In the past, if you wanted to connect two switches to each other (a cascade), then you needed to use a crossed cable, wired differently from a standard Ethernet patch cable. These days switches either have specific cascade ports or, more likely, they automatically sense the other switch and adjust the port accordingly.

100 Base T

Each of the speed ratings has a different specification based on the wire used and how it's cabled. 100Mbit connections come in various types, like 100 Base TX and 100 Base T4. However, the ones that people generally use these days are 10 Base T, 100 Base T and 1000 Base T. These all use twisted-pair cabling, in the various CAT standards.

Bandwidth

The amount of network traffic that any part of the network can handle at any one time. The faster the communication and the handling of that data in the switch, the more bandwidth you have available. Communication over any piece of wire has a finite amount of bandwidth.

CAT5 and CAT6

Category 5 (or CAT5) is a cabling standard that was defined for carrying Ethernet over twisted pairs, originally up to frequencies of 100MHz. It was superseded by CAT5e (e for extended) and then CAT6.

Most people networking today will use CAT6 cable, designed for gigabit speeds and beyond. Using this, it is possible to have cable runs of up to 100m at gigabit speeds and 55m at 10Gbit.

Convergence

Having separate cables for computing, telephones and video security can be complicated, but these days it isn't necessary. Convergent networks aim to push all these services through the same cables, with switches built to distinguish the different types of traffic and manage it accordingly. Networks that carry more than just computing traffic are 'converged'.

Jumbo Frames

The speed of gigabit Ethernet is limited by the amount of header information that accompanies each data frame. One way around this is to increase the amount of data in each packet, reducing the proportion of packaging to data. Jumbo frames are a standard method of doing this, where the network hardware agrees to increase the frame size (usually to 9,000 bytes) from the standard 1,500 bytes.

This can make a substantial difference on big file transfers, if both the network adapter and switch support Jumbo frames.
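The saving is easy to quantify. The short Python calculation below compares the fixed per-frame cost (preamble, Ethernet header, frame check sequence and inter-frame gap) against 1,500-byte and 9,000-byte payloads.

overhead_bytes = 8 + 14 + 4 + 12  # preamble, Ethernet header, FCS, inter-frame gap

for payload in (1500, 9000):
    efficiency = payload / (payload + overhead_bytes)
    print(f"{payload}-byte frames: {efficiency:.1%} of the wire carries data")

# Standard frames: about 97.5% efficient; jumbo frames: about 99.6% efficient,
# and the switch and NIC handle a sixth as many frames for the same data.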

Packet

When data is moved across a network, it's organised by the system into manageable blocks or packets. These can vary in length and carry with them additional header information that is designed to help the routers they encounter direct them correctly towards their destination. The terms 'packet', 'frame' and 'datagram' are often used interchangeably, although strictly speaking they refer to different layers of the network.
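Purely as an illustration, the Python sketch below packs a tiny invented header (a destination identifier and a length) in front of some payload data, which is essentially what every layer of a real network stack does, albeit with far richer headers.

import struct

def make_packet(dest_id, payload):
    """Prefix the payload with a tiny made-up header: destination and length."""
    header = struct.pack("!HH", dest_id, len(payload))  # two bytes each, network byte order
    return header + payload

def parse_packet(packet):
    dest_id, length = struct.unpack("!HH", packet[:4])
    return dest_id, packet[4:4 + length]

pkt = make_packet(42, b"hello over the wire")
print(parse_packet(pkt))  # (42, b'hello over the wire')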

PoE

Distributing a network doesn't always fit perfectly with the power layout in a building, so Power over Ethernet (PoE) was devised. This subverts some of the wiring of the Ethernet cable to power a device at the other end, allowing small local switches to be deployed without needing new power sockets added at locations where they don't currently exist. Switches with PoE are built to send power as well as data over their ports.

QoS

With networks of all types, there is a danger that one user hogs all available bandwidth, souring the experience for everyone else. Quality of service (QoS) is technology that is designed to stop that happening and more fairly distribute the available resources.
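One common QoS approach is simply to serve higher-priority traffic first. The toy Python sketch below (the priority values are arbitrary) shows a voice frame jumping ahead of a bulk file transfer that arrived before it.

import heapq

queue = []
counter = 0  # tie-breaker so equal priorities keep their arrival order

def enqueue(priority, frame):
    global counter
    heapq.heappush(queue, (priority, counter, frame))
    counter += 1

# Lower number = higher priority (values chosen arbitrarily for illustration).
enqueue(5, "bulk file transfer chunk 1")
enqueue(5, "bulk file transfer chunk 2")
enqueue(1, "VoIP audio frame")

while queue:
    _, _, frame = heapq.heappop(queue)
    print("sending:", frame)
# The VoIP frame goes out first, despite arriving last.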

RJ45

This is the chosen connector for Ethernet and works with the wiring standards called CAT5, CAT5e and CAT6, which consist of four twisted pairs (eight wires), all of which are needed for gigabit operation. Switches generally come with RJ45 sockets, unless they use fibre, and the corresponding wall sockets are wired the same way.

VoIP

Voice over IP is a method by which telephony can be redirected over a digital computer network as if it were ordinary data. To maintain the quality of the audio in a call, it can be necessary to ring-fence the stream's bandwidth so that it isn't interrupted by other network traffic.