We’ve all been told that one of the quickest and easiest ways to increase your computer’s performance is to add more RAM (Random Access Memory). This can be true, especially these days with resource-hungry operating systems and increasingly complex programs. But what exactly is RAM, and how does it work?
Computer memory has one task: storing information as either a 0 or a 1. It really is that simple. Think of your computer memory as a table with lots of paper cups on it. These cups represent the memory cells, and each one can be either empty (0) or full (1). The computer constantly scans all these cups (cells) to see whether they are full or empty, and it interprets all those 0s and 1s as data that tells it what to do and how to do it.
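To make the cup analogy concrete, here’s a toy sketch in Python. The row of eight “cups” and the particular pattern of full and empty ones are made up for illustration, but it shows how a string of 0s and 1s can be read back as a number or a character.

```python
# Eight "cups" (memory cells), each either empty (0) or full (1).
# Read together, the computer interprets the pattern as data -
# here, a single byte that happens to spell a character.

cups = [0, 1, 0, 0, 0, 0, 0, 1]              # one byte's worth of cells

bits = "".join(str(c) for c in cups)          # "01000001"
value = int(bits, 2)                          # 65
print(bits, "->", value, "->", chr(value))    # 01000001 -> 65 -> 'A'
```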
Memory cells, however, do leak. Going off our analogy, this would mean each paper cup has a small hole in the bottom through which it slowly (in computer terms) empties. This leaking is kept under control by the computer continuously interrogating the memory cells and replenishing them when needed; in the analogy, it’s as if each cup has its contents poured out and then poured straight back in. Just to be clear, the memory cells themselves don’t hold any information as such, only a charge, so each one is either empty (no charge, a 0) or full (charged, a 1).
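Here’s a deliberately simplified sketch of that leak-and-refresh cycle in Python. The charge levels and timings are made up for illustration; in real hardware the refresh happens automatically, many thousands of times a second.

```python
import random

FULL, THRESHOLD = 1.0, 0.5

cells = [FULL, 0.0, FULL, FULL, 0.0, FULL]   # charged "cups" hold a 1

def leak(cells):
    """Every cell slowly loses a little of its charge."""
    return [max(0.0, charge - random.uniform(0.05, 0.15)) for charge in cells]

def refresh(cells):
    """Read each cell and top it back up: anything still above the
    threshold is rewritten as a full 1, anything below as a 0."""
    return [FULL if charge >= THRESHOLD else 0.0 for charge in cells]

for tick in range(10):
    cells = leak(cells)
    if tick % 3 == 2:            # periodic refresh before the charge fades too far
        cells = refresh(cells)

print([1 if c >= THRESHOLD else 0 for c in cells])   # the stored bits survive
```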
The memory itself is made from a silicon wafer into which the memory cells are etched. They are laid out in a grid, lined up in rows and columns, and the intersection of a row and a column gives the memory address, just like a street address. Anyone who’s experienced the dreaded Windows blue screen of death will be familiar with what a memory address looks like. All of this happens at the microscopic level, so don’t try breaking open one of your old RAM modules to see it.
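As a rough sketch of that row-and-column addressing, here’s how an intersection maps to a single address and back again. The tiny 8×8 grid is made up for illustration; real DRAM chips have thousands of rows and columns per bank.

```python
ROWS, COLS = 8, 8   # a made-up, miniature memory grid

def address_of(row, col):
    """The intersection of a row and a column gives one unique address."""
    return row * COLS + col

def location_of(address):
    """Going the other way: split an address back into its row and column."""
    return divmod(address, COLS)

print(address_of(3, 5))    # 29
print(location_of(29))     # (3, 5)
```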
Now that we’ve discussed what makes up a memory module and how it basically functions, let’s delve a little deeper into how memory works, where we’ve come from and where we’re going.
In the 1990s, as computers started to get faster and more complex, we began to see the use of Synchronous Dynamic Random Access Memory (SDRAM), which allowed the computer memory to synchronise with other computer components. This let processes queue up while waiting for another process to finish. Pretty advanced for its day, but it had a limitation: the memory could only transfer data once per clock cycle. This was called Single Data Rate (SDR), and by the end of the 1990s SDR was becoming a bottleneck as other PC components got faster and faster.
Engineers came to the rescue with Double Data Rate (DDR), which allowed two data transfers per clock cycle (one on each edge of the clock signal), basically doubling the memory’s speed capability overnight. DDR ran at clock rates between 100 and 200 MHz and used a lower voltage than SDR. At this point computer memory was able to transfer at a rate of up to 400 MT/s (megatransfers per second). In terms of real-world data transfer, this represented a speed of up to 3.2GB/s (gigabytes per second). The largest DDR module available at this time was 1GB.
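If you’re wondering where these GB/s figures come from, here’s a rough sketch of the arithmetic in Python: peak bandwidth is the bus clock, times the number of transfers per clock (one for SDR, two for DDR), times the 8-byte width of a standard module’s data bus. The module grades listed are the common speed grades that line up with the peak figures quoted here and in the generations covered below.

```python
BUS_WIDTH_BYTES = 8   # a standard DIMM has a 64-bit (8-byte) data bus

def peak_bandwidth_gbs(bus_clock_mhz, transfers_per_clock=2):
    """Theoretical peak bandwidth of one module, in GB/s."""
    transfers_per_second = bus_clock_mhz * 1_000_000 * transfers_per_clock
    return transfers_per_second * BUS_WIDTH_BYTES / 1_000_000_000

# Bus clock (MHz) and transfers per clock for each example grade.
examples = {
    "SDR (PC133)": (133, 1),   # ~1.1 GB/s
    "DDR-400":     (200, 2),   # 3.2 GB/s
    "DDR2-800":    (400, 2),   # 6.4 GB/s
    "DDR3-1866":   (933, 2),   # ~14.9 GB/s
    "DDR4-3200":   (1600, 2),  # 25.6 GB/s
}

for name, (clock, tpc) in examples.items():
    print(f"{name}: {peak_bandwidth_gbs(clock, tpc):.1f} GB/s")
```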
As technology continued to explode in the years after 2000, DDR started to become too slow, and in 2003 DDR2 was announced. The bus clock speed of DDR2 rose to 200-533MHz while the voltage requirement decreased. DDR2 effectively runs at more than twice the speed of the original DDR, clocking in at rates of up to 1066 MT/s. The maximum size for a DDR2 module was 4GB, while the maximum data transfer rate was 6.4GB/s. The drawback with DDR2, as with all subsequent DDR technologies, is that the generations aren’t backward or forward compatible, meaning they can’t be mixed. Moving from DDR to DDR2 meant a system upgrade. The modules may look physically similar, but they run at different clock speeds, and if you look very closely you’ll notice they are notched differently, so a module won’t even plug into a slot from a different DDR generation.
The year 2007 quenched the thirst for even more speed with the introduction of DDR3. The bus clock doubled again to 400-1066MHz, and the voltage was decreased even further to 1.5 volts. Once again, the effective speed doubled over the previous DDR generation, with a maximum throughput of 2133 MT/s. Real-world data transfer rates more than doubled, reaching up to 14.9GB/s, and DDR3 module sizes of up to 128GB became available.
Interestingly, high-end graphics was one of the biggest drivers behind DDR3, as more complex and detailed graphics and games started to be developed.
DDR3 is the de facto standard these days if you are looking for computer memory, but early 2014 saw the first implementations of DDR4, coinciding with the release of the Intel Haswell-E processor, which required DDR4 to run. Samsung had already made a prototype DDR4 module as far back as 2011, but it took some time for the technology to reach the market.
DDR4 (with a bus clock between 800 and 1600 MHz) has a theoretical maximum module size of 512GB and a maximum data transfer rate of 25.6GB/s, though it will take some time (and may not happen at all) before we see anything approaching that size. At the time of writing, the largest DDR4 module available is 128GB, with a data transfer rate of up to 17GB/s. This falls short of the expected doubling in speed over the previous generation (DDR3). The potential is there; it just isn’t finding the market share the manufacturers were expecting.
Our need for faster memory seems to have hit a bit of a plateau, as our focus has now shifted more towards mobile technology, which utilises smaller, cheaper components with lower voltage requirements.
No plans for DDR5 have been made official yet, and probably won’t be until DDR4 finds favour and wider market acceptance.