For most organizations, solid-state drive (SSD) technology is still on the horizon rather than in the data center. But as costs have come down, the benefits of solid-state have become more compelling for some. SSD's appeal rests on two perceptions: higher performance than spinning disk drives, which experience generally supports, and the idea that solid-state is safer than an electro-mechanical spinning disk when it comes to the risk of data loss, a point that is not as clear-cut as it sounds, as will be discussed.
What is clear is that SSD is much faster for many kinds of operations, and that speed can make a real difference. For instance, a bank struggling with financial processing through its database system found that adding SSD eliminated the I/O bottlenecks behind its performance troubles.
As a consequence, vendors are pitching SSD solutions in the form of arrays and even as direct-attached storage. SSDs may also help with performance problems in heavily virtualized environments, where I/O contention often lurks beneath the surface. Solid-state storage can also speed backup jobs enough to fit them into available time windows.
Indeed, some believe SSDs will begin to replace high-performance SAS and FC drives, leaving spinning disk in a capacity role typified by today's SATA drives. There is, however, still a hefty price delta between SSD and spinning disk (see the posting at StorageReview). And the notion that no moving parts means no failures is not entirely true. Because reading and writing the same physical location over and over degrades the flash cells there, SSD makers implement a practice called wear-leveling, which introduces a degree of randomization so that the same physical address does not end up doing all the work.
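To see how wear-leveling spreads writes around, consider this toy flash translation layer. It is a minimal sketch, not real SSD firmware: the class and method names are hypothetical, and actual controllers track wear at much finer granularity. The idea it illustrates is the one above: repeated writes to one logical address are remapped to different, least-worn physical blocks.

```python
class WearLevelingFTL:
    """Toy flash translation layer illustrating wear-leveling.

    Hypothetical sketch: real SSD firmware is far more complex, but the
    core trick is the same remapping of logical to physical addresses.
    """

    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks   # wear per physical block
        self.free_blocks = set(range(num_physical_blocks))
        self.logical_to_physical = {}                   # remapping table

    def write(self, logical_block):
        # Retire the old physical copy; it must be erased before reuse,
        # and each erase adds wear to that block.
        old = self.logical_to_physical.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        # Pick the least-worn free block rather than reusing the same
        # physical address, so hot logical blocks don't burn out one cell.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.discard(target)
        self.logical_to_physical[logical_block] = target
        return target

ftl = WearLevelingFTL(num_physical_blocks=8)
# Hammer the same logical block six times; the writes land on
# different physical blocks instead of repeating one address.
placements = [ftl.write(0) for _ in range(6)]
print(placements)
print(max(ftl.erase_counts))  # wear stays spread out: no block erased twice
```

The takeaway is that the drive, not the host, decides where data physically lands, which is also why secure erasure is harder than on a spinning disk.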
This same write behavior means that erasure and disposal must be handled carefully at end of life to make sure the data is really gone. Another gotcha with SSDs is a phenomenon known as write amplification. It stems from the fact that flash memory cannot be overwritten in place: it must be erased before new data can be written. This not only adds cycles that HDD operations don't incur, it also means all those extra non-functional erase and write cycles contribute to the eventual degradation of the device.
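The arithmetic behind write amplification can be sketched with a simple counter. The page and block sizes below are assumptions for illustration, and the model shows the worst case, where updating one page forces the device to rewrite an entire erase block; real controllers use mapping and garbage collection to bring the ratio well below this.

```python
PAGE_SIZE = 4096          # assumed flash page size (bytes)
PAGES_PER_BLOCK = 64      # assumed pages sharing one erase block

class FlashWriteCounter:
    """Toy counter contrasting bytes the host writes with bytes the
    device must physically program, in the worst case where every
    page update forces a whole-block erase and rewrite."""

    def __init__(self):
        self.logical_bytes = 0    # what the host asked to write
        self.physical_bytes = 0   # what the device actually programmed

    def overwrite_page(self):
        # The host rewrites one page in place...
        self.logical_bytes += PAGE_SIZE
        # ...but flash can only erase whole blocks, so the device
        # relocates the block's other live pages, erases the block,
        # and programs everything again.
        self.physical_bytes += PAGES_PER_BLOCK * PAGE_SIZE

    def write_amplification(self):
        return self.physical_bytes / self.logical_bytes

wa = FlashWriteCounter()
for _ in range(100):
    wa.overwrite_page()
print(wa.write_amplification())  # 64.0: one page in, a whole block out
```

Every multiple above 1.0 represents erase and program cycles that an HDD would never perform, which is exactly the extra wear described above.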
The first use case for SSD tends to be cache, where it offers the greatest potential to speed up existing storage assets by boosting their net performance. However, organizations with heavy I/O requirements may want to invest in more SSD so that key applications and functions get a larger, dedicated performance boost.
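The caching use case can be sketched as a small LRU read cache standing in for an SSD tier in front of spinning disk. The names and sizes here are illustrative assumptions; the point is that hot blocks are served from the fast tier while cold reads fall through to the slow one.

```python
from collections import OrderedDict

class SSDReadCache:
    """Minimal LRU read cache, a stand-in for an SSD tier fronting
    spinning disk (names and capacity are illustrative)."""

    def __init__(self, capacity, backend_read):
        self.capacity = capacity
        self.backend_read = backend_read   # slow HDD read function
        self.cache = OrderedDict()         # block id -> data, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)     # fall through to disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

cache = SSDReadCache(capacity=2, backend_read=lambda b: f"data-{b}")
for block in [1, 2, 1, 1, 3, 1]:
    cache.read(block)
print(cache.hits, cache.misses)  # 3 3: repeat reads of block 1 hit the cache
```

Even a small fast tier absorbs the repeated reads, which is why cache is usually the cheapest way to get SSD-level latency out of an existing array.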
If you are still in "wait and see" mode, you are probably not alone. But at the least, it is good to know SSD is an option.