Sunday, 25 October 2015

How SSDs work

Before we start to explain what a Solid State Drive (SSD) is and how it works, we should first take a look at where we’ve come from.


Older hard drives used technology not too dissimilar to a record player, in that a rotating disc (platter) was read and written to by an arm (actuator) which moved back and forth across the platter’s surface. This meant that when your computer asked for some information, the actuator had to locate the data and read it. This all works well, but as a hard drive starts to age and data is constantly written to and deleted from it, not all the data ends up being stored in one continuous stream. Data soon has to be written into “best fit” gaps, so you can end up with bits of information spread across the drive – this is called fragmentation. When this happens you start to notice load times getting longer, as the actuator has to move over different parts of the platter to read the information. Running a defragmentation program can alleviate this, and most new operating systems do it in the background – but the fact remains there are still moving physical parts which can only move so fast.
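If it helps to see that in code, here is a rough sketch of why those extra actuator movements add up. The seek time and transfer rate are made-up, ballpark figures for illustration, not measurements of any real drive:

```python
# Toy illustration of why fragmentation hurts a spinning drive.
# The figures below are assumed, ballpark values for illustration only.

SEEK_TIME_MS = 9.0         # time for the actuator to move to a new spot on the platter
TRANSFER_MB_PER_S = 120.0  # sequential read speed once the head is in position

def read_time_ms(file_size_mb, fragments):
    """Estimate the time to read a file split into `fragments` pieces."""
    seek_cost = fragments * SEEK_TIME_MS                     # one extra seek per fragment
    transfer_cost = file_size_mb / TRANSFER_MB_PER_S * 1000  # reading the data itself
    return seek_cost + transfer_cost

print(read_time_ms(100, fragments=1))    # contiguous file: roughly 842 ms
print(read_time_ms(100, fragments=200))  # heavily fragmented: roughly 2633 ms
```

Every fragment costs the actuator another seek, and those milliseconds soon add up.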

Solid State Drives are completely different in that they don’t have any moving parts. Data on an SSD is saved in banks of memory called NAND flash. This NAND flash is non-volatile, meaning it continues to hold information while powered off (unlike your computer’s RAM, which loses its data when you power down your PC).

NAND flash is made up of what are called Floating Gate transistors (also called cells). Insulated within an oxide layer are two separate gates – the Control Gate above the Floating Gate – which both sit above the Channel (substrate). Electrons can move between the Control Gate and the Channel when a voltage is applied, with the direction depending on which side the voltage is applied to. To program (add data to) a cell, voltage is applied to the Control Gate, drawing electrons upwards from the Channel. The Floating Gate traps these electrons as they pass through on their way up towards the Control Gate, and this trapped charge is how the data is stored.

To delete this stored data, voltage is applied to the Channel side of the cell, which pulls the electrons out of the Floating Gate and back into the Channel.

These cells are checked quite regularly to see if they are holding data – this is done by applying a voltage to the Control Gate and measuring how much of it travels through to the Channel. If data is being stored in the Floating Gate, the voltage read at the Channel will be reduced, and the controller has its confirmation that data is stored there.
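To tie the program, erase and read steps together, here is a very rough toy model of a single cell. The class name and voltage numbers are invented purely for illustration; a real cell stores charge, not a Python flag:

```python
# Highly simplified model of a single Floating Gate cell.
# The voltage values here are invented for illustration only.

class FloatingGateCell:
    def __init__(self):
        self.trapped_electrons = False   # charge held on the Floating Gate

    def program(self):
        # Voltage on the Control Gate draws electrons up from the Channel;
        # the Floating Gate traps them on the way through.
        self.trapped_electrons = True

    def erase(self):
        # Voltage on the Channel side pulls the electrons back out again.
        self.trapped_electrons = False

    def holds_data(self, applied_volts=5.0):
        # Apply a voltage to the Control Gate and see how much reaches the Channel.
        # Trapped charge reduces what gets through, which is the tell-tale sign of data.
        channel_volts = applied_volts - (3.0 if self.trapped_electrons else 0.0)
        return channel_volts < 4.0

cell = FloatingGateCell()
cell.program()
print(cell.holds_data())   # True  - less voltage reaches the Channel, so data is stored
cell.erase()
print(cell.holds_data())   # False - the full voltage gets through, so the cell is empty
```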

All of this electrical activity does come at a cost. Over time the physical structure of the cell starts to degrade, and this is why SSDs have a finite lifespan – which is measured in Program / Erase (P/E) cycles.
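For a back-of-the-envelope feel for what that finite lifespan means in practice, here is a quick calculation. The capacity, P/E rating, daily writes and write amplification below are all assumed example values, not the spec of any particular drive:

```python
# Back-of-the-envelope endurance estimate using assumed example numbers.

capacity_gb = 256
pe_cycles = 3000           # order of magnitude often quoted for MLC NAND
writes_per_day_gb = 20     # how much data the user writes each day
write_amplification = 2.0  # extra internal writes from housekeeping (assumed)

total_writable_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_writable_gb / (writes_per_day_gb * 365)
print(f"~{total_writable_gb / 1000:.0f} TB of writes, roughly {lifetime_years:.0f} years at that rate")
```

On these assumed numbers the drive would wear out long after it became obsolete, but heavier workloads eat into the P/E budget much faster.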

To increase the capacity of NAND flash, manufacturers have been able to increase the number of bits that a cell can hold. A Single Level Cell (SLC) holds 1 bit of data per cell, while Multi Level Cells (MLC) can hold 2 or 3 bits per cell (some manufacturers have recently announced they are working on 4-bit cells).
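The catch is that every extra bit doubles the number of charge levels the controller has to tell apart in each cell, as this quick illustration shows:

```python
# Each extra bit per cell doubles the number of distinct charge levels,
# so the margin between levels shrinks and sensing gets harder.

for name, bits in [("SLC", 1), ("MLC (2-bit)", 2), ("MLC (3-bit)", 3), ("4-bit cell", 4)]:
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s) per cell -> {levels} voltage levels to distinguish")
```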

The advantage of MLC NAND over SLC NAND is quite obvious – more capacity in one place means lower manufacturing costs – resulting in lower-cost SSDs for the consumer. There is a trade-off however, as more bits per cell means more wear on the physical cell, and more monitoring of the cell is needed to work out exactly how much data it is storing – which in turn affects performance.

Sitting between this NAND Flash memory and your PC is the SSD controller – which manages all the calls and requests put forward to the SSD. It handles the fetching of the requested data, as well as the writing and deleting of new and old data.
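As a very loose sketch of the kind of bookkeeping this involves, here is a toy logical-to-physical page map in Python. The structure and names are invented for illustration; a real controller does this in firmware with far more going on:

```python
# Toy sketch of the logical-to-physical page mapping an SSD controller keeps.
# Names and structures are invented for illustration only.

class ToyController:
    def __init__(self, num_pages):
        self.mapping = {}                         # logical page -> physical page
        self.free_pages = list(range(num_pages))  # physical pages ready for use
        self.flash = {}                           # physical page -> stored data

    def write(self, logical_page, data):
        # NAND pages can't simply be overwritten in place: the controller writes
        # to a fresh page and updates the map, leaving the old copy as stale data.
        physical = self.free_pages.pop(0)
        self.flash[physical] = data
        self.mapping[logical_page] = physical

    def read(self, logical_page):
        return self.flash[self.mapping[logical_page]]

ssd = ToyController(num_pages=8)
ssd.write(0, "hello")
ssd.write(0, "hello again")   # lands on a new physical page; the old one is now stale
print(ssd.read(0))            # "hello again"
```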

The controller is also responsible for keeping the SSD in good health. It does this by performing processes such as Bad Block Mapping (where degraded NAND cells are marked as bad and taken out of use), Error Correction Code (ECC) checking, and Garbage Collection (which involves consolidating valid data so that blocks full of stale data can be erased and reused).
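The idea behind Garbage Collection can be sketched in a few lines too. Again, the block layout and page states are invented for illustration:

```python
# Rough sketch of garbage collection: valid pages are copied out of a mostly
# stale block so the whole block can be erased and reused.

def garbage_collect(block, spare_block):
    """block is a list of pages, each either ('valid', data) or the marker 'stale'."""
    live_pages = [page for page in block if page != "stale"]
    spare_block.extend(live_pages)   # rewrite the live data elsewhere
    block.clear()                    # erase the block so it can take new writes
    return len(live_pages)

block = [("valid", "photo part 3"), "stale", "stale", ("valid", "notes part 1")]
spare = []
moved = garbage_collect(block, spare)
print(f"{moved} valid pages copied out; the block is erased and ready for reuse")
```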

As stated – the controller is the link between the SSD and the host computer – and this typically uses the SATA (Serial ATA) interface. SATA was a big leap forward in performance from the previous IDE interfaces, but as technology has moved on, so too has our need for speed increased.

SATA is limited to a maximum throughput of 6Gbit/s (which, when overheads are factored in, equates to around 600MB/s) – and in a new PC it is becoming a bottleneck. A newer solution is to move away from SATA and utilise the PCI Express (PCIe) lanes on the PC. This involves mounting the SSD on a card and plugging it into an empty PCIe slot.
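If you are wondering where that ~600MB/s figure comes from, it is largely down to SATA’s 8b/10b line encoding: only 8 of every 10 bits sent over the link carry actual data. A quick sanity check:

```python
# Where the ~600MB/s ceiling comes from: SATA III signals at 6 Gbit/s but uses
# 8b/10b encoding, so only 8 of every 10 bits on the wire are data.

line_rate_gbit = 6.0
usable_fraction = 8 / 10   # 8b/10b encoding overhead
bits_per_byte = 8

max_mb_per_s = line_rate_gbit * 1000 * usable_fraction / bits_per_byte
print(f"{max_mb_per_s:.0f} MB/s theoretical ceiling")   # 600 MB/s
```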

PCIe SSDs are definitely faster than SATA SSDs, with test results showing sequential read speeds at least twice as fast (up to 1500MB/s) and sequential write speeds even better (up to 1800MB/s).
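To put those numbers into everyday terms, here is how long a big sequential read would take at each speed. The ~550MB/s SATA figure is an assumed typical value for comparison, not taken from any particular drive:

```python
# How the quoted sequential speeds feel in practice: time to read a 10 GB file.
# The SATA figure is an assumed typical value for comparison.

file_gb = 10
for label, mb_per_s in [("SATA SSD (~550 MB/s)", 550), ("PCIe SSD (~1500 MB/s)", 1500)]:
    seconds = file_gb * 1000 / mb_per_s
    print(f"{label}: about {seconds:.1f} seconds")
```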

Intel has been hard at work on a new technology for SSDs as well. Called 3D XPoint, it is set to drive their new Optane-branded drives. Details about the technology are still pretty closely guarded, but Intel has teased that the memory creates a 3D checkerboard-type structure in which data can be read and written in smaller sizes – which makes everything more efficient and allows for significant performance increases. They have announced claims of up to a 5-7x performance increase over current SSD devices. We will have to wait and see, as the first of their new drives aren’t due until well into 2016.

Start saving your pennies now so you’re ready for the next big leap forward.