Bryston BDA-2 and BDP-2

Over the past few years, one of the biggest changes in the audiophile world is that computer audio has become mainstream. Evidence of this can be found at all the major audio shows, from TAVES to CES to RMAF, to name a few. I was surprised to see so many exhibitors at recent audio shows demonstrate their equipment using a laptop and a DAC as their source components, sometimes even to the exclusion of the venerable turntable. It is sad to see the once ubiquitous CD player being slowly but surely supplanted.

This trend has put the spotlight on external drives that store the music, and thanks to technological advances and economies of scale, this category of products has seen sizes and prices continue to shrink to a level where these drives have become incredibly affordable. Currently, the most popular type of drive is undoubtedly the Hard Disk Drive, or HDD, because it offers the best bang for your buck in terms of storage capacity for the price. Today, many brands offer 1 to 3 terabyte HDDs for around $100 to $300.

HDDs come in different sizes and shapes, determined by the size and shape of the components used to build the device. The conventional HDD is built around one or more magnetic platters, which a spindle motor rotates at the required speed.

Solid State Drives and a Little History

Looking to the future, many pundits are forecasting that the HDD will soon face stiff competition from the more durable Solid State Drive, or SSD, which has already started making meaningful inroads into the market. Many people regard the SSD as a new development that has only recently emerged in the consumer sphere. However, the surprising fact of the matter is that SSD technology is more than half a century old.

The genesis of SSDs can be traced to the 1950s, when two technologies, namely core memory and the card capacitor read-only store, commonly referred to as auxiliary memory units, emerged during the age of vacuum tube computers. They were soon supplanted by drum storage, which was a lot cheaper to manufacture. Then, during the 1970s, we saw the debut of drives implemented in semiconductor memory for the supercomputers of the day from names like Cray, IBM and Amdahl. At the time these were built to order and carried astronomical prices, which kept them out of the consumer sphere.

Things changed in 1978 when Texas Memory Systems developed a 16 kilobyte RAM solid-state drive, which became the darling of oil companies as it helped them with seismic data acquisition. The very next year StorageTek designed a new kind of solid-state drive, and a few years later the PC-5000 was unveiled by Sharp. This caused quite a buzz because of its 128 kilobyte solid-state storage cartridge, which incorporated bubble memory. At the time this was considered a very high storage capacity for an SSD.

In 1986, Santa Clara Systems launched its BatRam 4 megabyte storage system, which could be expanded to 20 megabytes using add-on modules. Around a decade later, M-Systems developed a flash-based solid-state drive that could withstand extreme shock, temperatures and vibration and had a much longer mean time between failures (MTBF). This made these drives ideal for military and aerospace applications. In 2006 SanDisk acquired M-Systems and went on to become one of the major players in this segment.

Hard Disk Drive Versus Solid State Drive

The biggest difference between HDDs and SSDs is that the former are electromechanical devices that incorporate spinning disks and movable read and write heads, while the latter use microchips and totally eliminate the need for moving parts. This means that an SSD is less susceptible to physical shock, operates quietly and offers lower access time and latency. The transition from HDDs to SSDs should be quite smooth because both use the same interface (connector type), so switching from one to the other does not present any compatibility problems at the consumer level.

Types of Solid State Drives

When choosing an SSD, it would behove you to opt for one that uses flash memory, as these retain their data even without power. If your application requires a higher input/output rate and better reliability, you could consider enterprise flash drives (EFDs), which offer superior specifications to regular SSDs. The term EFD was coined by EMC at the beginning of 2008 to identify SSD producers that could provide drives with better than standard specifications. The caveat here is that there is no governing body overseeing the EFD standard, so any SSD manufacturer can claim the EFD moniker whether or not its drives actually offer better than standard specifications.

If you peek inside an SSD, the first major part you will find is the controller, the electronics that bridge the NAND memory components to the SSD's input/output interface. The controller is an embedded processor that executes firmware-level software, and it plays a major role in determining the performance of the SSD.

An SSD also contains a cache. Flash-based drives use a small amount of DRAM as a cache, similar to the cache in an HDD. While the drive is operating, a directory of block placement and wear-levelling data is also kept in the cache.
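
To picture what that "directory of block placement" looks like, here is a minimal sketch in Python of a logical-to-physical map of the kind a controller might keep in DRAM. The data structure and addresses are invented for illustration and do not describe any particular drive's firmware.

```python
# Sketch of the kind of "directory of block placement" mentioned above:
# a logical-to-physical map the controller keeps in DRAM while the drive
# is running. Addresses and structure are invented for illustration.

logical_to_physical = {}          # logical block address -> flash location

def write_block(lba, flash_location):
    """Record where a logical block was most recently written."""
    logical_to_physical[lba] = flash_location

def read_block(lba):
    """Look up where a logical block currently lives in flash."""
    return logical_to_physical.get(lba)

write_block(1000, ("die 0", "block 7", "page 3"))
print(read_block(1000))           # ('die 0', 'block 7', 'page 3')
```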

High performance SSDs also incorporate a capacitor or some form of battery. Its purpose is to maintain the integrity of the data in the cache, so that it can be flushed to the drive in the event of a power failure. The better SSDs carry enough reserve power to protect the cached data even through an outage that lasts a very long time.

The performance of an SSD usually scales with the number of parallel NAND flash chips used in the device. A single NAND chip is relatively slow because of its narrow (8 or 16 bit) asynchronous input/output interface and the high latency of basic input/output operations. When many NAND devices operate in parallel inside an SSD, the bandwidth scales and the high latencies can be concealed, as long as enough outstanding operations are pending and the load is evenly distributed between devices.
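
The scaling described above can be illustrated with a toy model. The sketch below assumes a hypothetical drive where each NAND die handles one 4 KiB request at a time with a fixed latency; the page size, latency figure and function names are assumptions chosen for the example, not specifications of any real SSD.

```python
# Toy model of SSD read throughput scaling with parallel NAND dies.
# All figures are illustrative assumptions, not specs of any real drive.

def effective_throughput_mb_s(num_dies, queue_depth,
                              page_kb=4, die_latency_us=50):
    """Aggregate throughput when each die serves one request at a time.

    Only as many dies as there are outstanding requests can be kept busy,
    so throughput scales with min(num_dies, queue_depth).
    """
    busy_dies = min(num_dies, queue_depth)
    per_die_mb_s = (page_kb / 1024) / (die_latency_us / 1_000_000)
    return busy_dies * per_die_mb_s

if __name__ == "__main__":
    for dies in (1, 4, 8, 16):
        for qd in (1, 8, 32):
            print(f"{dies:2d} dies, QD{qd:2d}: "
                  f"{effective_throughput_mb_s(dies, qd):7.1f} MB/s")
```

With one die and a queue depth of one, the model tops out at the latency-bound figure; with sixteen dies and a deep queue, the same per-die latency is hidden behind parallel work.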

The more affordable SSDs usually employ multi-level cell (MLC) flash memory, which is slower and not as reliable as single-level cell (SLC) flash. However, this can be mitigated, and in some cases even reversed, through smarter internal design of the SSD, for example by interleaving operations across chips and using better, more efficient algorithms.

SSDs that are based on volatile memory such as DRAM are characterized by faster data access, typically under 10 microseconds. These are used mainly to accelerate applications that would otherwise be held back by the latency of flash SSDs or traditional HDDs. DRAM-based SSDs usually utilize either an internal battery or an external AC/DC adapter and backup storage systems to make sure that data is retained even while no power is being supplied to the drive from external sources. In the event of a power outage the battery supplies power while all information is copied from the RAM to the back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation. These types of SSD are usually fitted with the same type of DRAM modules used in regular PCs and servers, which allows them to be swapped out and replaced with larger modules.

Because an SSD is made up of interconnected integrated circuits and an interface connector, there is a lot more flexibility in determining the shape of the device; it is not limited to the form factor of rotating media drives. Some solid-state storage solutions come in a larger chassis, perhaps even a rack-mount form factor, with numerous SSDs inside. They all connect to a common bus inside the chassis and connect outside the box with a single connector.

Comparing Solid State Drives to Hard Disk Drives

When comparing SSDs to HDDs you have to make certain allowances. Traditional HDD benchmarks are focused on the performance aspects where HDDs are weak, such as rotational latency and seek time. Since SSDs neither spin nor seek, they may show huge superiority in such tests. On the other hand, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. To get a more accurate comparison, you should test an SSD once it is filled to capacity with data, because a new and 'empty' drive is likely to show much better write performance during the test than it would after years of use.
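
As a rough illustration of the "fill it first, then measure" approach, the Python sketch below pre-fills a test file and then times synced random 4 KiB writes. The file path, sizes and operation count are arbitrary choices, it relies on Unix-style os.pwrite, and a serious benchmark would need to account for operating system and drive caches, so treat this only as a sketch of the idea.

```python
# Rough sketch of "fill it first, then measure". Paths, sizes and the use
# of os.pwrite/os.fsync are illustrative choices, not a rigorous benchmark
# (the OS page cache and drive caches still influence the numbers).

import os, random, time

TEST_FILE = "ssd_test.bin"        # hypothetical path on the drive under test
FILE_SIZE = 1 * 1024**3           # 1 GiB test region; a real test fills the drive
BLOCK = 4096                      # 4 KiB random writes

# Step 1: pre-fill the test region so we are not measuring a "fresh" drive.
with open(TEST_FILE, "wb") as f:
    for _ in range(FILE_SIZE // (1024**2)):
        f.write(os.urandom(1024**2))

# Step 2: time a burst of random 4 KiB writes, syncing each one to the drive.
fd = os.open(TEST_FILE, os.O_WRONLY)
start = time.perf_counter()
ops = 2000
for _ in range(ops):
    offset = random.randrange(0, FILE_SIZE // BLOCK) * BLOCK
    os.pwrite(fd, os.urandom(BLOCK), offset)
    os.fsync(fd)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{ops / elapsed:.0f} synced random 4 KiB writes per second")
```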

Among the advantages that SSDs have over their HDD counterparts is a faster start-up, because no spin-up is needed. SSDs also offer faster random access, because there is no seeking motion from the rotating disk platter, the read and write heads and the head-actuator mechanism of an HDD. They deliver more consistent read performance, because the physical location of the data is irrelevant, as well as faster boot and application launch times. SSDs are also less susceptible to the effects of file fragmentation because, unlike HDDs, they do not suffer the access-time penalty of the heads seeking data spread across many different locations on the disk.

SSDs are also generally a lot quieter in operation because, unlike HDDs, they have no moving parts. This is also why they run much cooler, consume less power, offer much higher mechanical reliability, endure greater shock, vibration and temperature ranges, and can operate at higher altitudes. SSDs also tend to have around double the data density of HDDs and eliminate the need to defragment the drive from time to time.

On the flip side, SSDs with flash memory have a relatively limited lifetime because each memory cell can endure only a finite number of program/erase (P/E) cycles. The life of the device can be extended by special file systems or firmware designs that mitigate this problem by spreading the writes over the entire device, a technique known as wear levelling.
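
A minimal sketch of the wear-levelling idea: when the controller needs a fresh block, it chooses the erased block with the fewest program/erase cycles so the wear spreads evenly. The data structures and numbers below are invented for illustration and are far simpler than real firmware.

```python
# Minimal illustration of wear levelling: always allocate the least-worn
# free block so that P/E cycles are spread across the whole device instead
# of concentrating on a few blocks. Everything here is invented for the example.

NUM_BLOCKS = 16
erase_counts = [0] * NUM_BLOCKS          # P/E cycles seen by each block
free_blocks = set(range(NUM_BLOCKS))     # blocks currently available

def allocate_block():
    """Return the least-worn free block (wear-levelled allocation)."""
    block = min(free_blocks, key=lambda b: erase_counts[b])
    free_blocks.remove(block)
    return block

def erase_block(block):
    """Erase a block, incrementing its P/E count, and return it to the pool."""
    erase_counts[block] += 1
    free_blocks.add(block)

# Simulate a workload that keeps rewriting data.
for _ in range(200):
    b = allocate_block()
    # ... data would be programmed into block b here ...
    erase_block(b)

print("P/E counts per block:", erase_counts)   # roughly even across blocks
```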

At the time of writing, HDDs sell at a lower cost per gigabyte than SSDs. That said, SSDs are closing the gap quickly and, at the current rate, are expected to become price competitive with HDDs within the next few years.

How All This Relates to Music

What does all this mean for music listeners? For starters, the ever-decreasing cost of storing data is proving to be a boost for the sales of high-resolution music files, especially 24-bit/192 kHz files, which, uncompressed, are roughly six and a half times larger than their CD-quality 16-bit/44.1 kHz counterparts. The shrinking size of external drives is also making it a lot easier and more convenient to store and carry around your music files. Today you can buy a 256 gigabyte thumb drive that can probably store your entire music file collection and fit in your pocket.
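
For readers curious where that "roughly six and a half times" figure comes from, the short calculation below works out the uncompressed stereo PCM data rate for each format; the function name is just a convenience for the example.

```python
# Back-of-the-envelope size of uncompressed stereo PCM audio.

def pcm_megabytes_per_minute(sample_rate_hz, bit_depth, channels=2):
    bytes_per_second = sample_rate_hz * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 1_000_000

cd_quality = pcm_megabytes_per_minute(44_100, 16)    # ~10.6 MB per minute
hi_res     = pcm_megabytes_per_minute(192_000, 24)   # ~69.1 MB per minute

print(f"16/44.1: {cd_quality:.1f} MB/min")
print(f"24/192 : {hi_res:.1f} MB/min")
print(f"ratio  : {hi_res / cd_quality:.1f}x")      # about 6.5x
```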

This trend is also expected to hasten the CD system’s journey into extinction. Just a couple of decades ago we marvelled at how the CD system made it so easy to store and access our music. We now live in an age where one jump drive that fits into the palm of our hand can contain the same digital music content as hundreds of CDs!

