Three Areas That Can Go Wrong For Storage Administrators

Nick Jheng, Regional Manager, Middle East at Synology, points out that flash is not perfect, storage is corruptible, and total cost of ownership is a better yardstick for software than licensing costs alone.

Nick Jheng, Synology

Like every other area of the information technology industry, storage drives and backup, recovery and data management are going through cycles of innovation. The growing importance of cloud, software-defined management, persistent memory and flash technologies are among the areas that data centre administrators need to come to terms with. Evaluating them through the lens of total cost of ownership yields some important conclusions.

Hidden costs of software

Licenses for backup and recovery of data are available on a subscription basis and as perpetual licensing. At first glance, the cost of a subscription license, or a monthly license with an annual contract, appears more economical than a perpetual license. However, over the lifetime of the software, various additional costs accumulate, creating a need to look at the overall total cost of ownership.

Typically, as IT organisations use applications, they tend to buy additional support, maintenance and patching, and upgrade services, each of which carries an additional cost. Annual support services are typically in the range of 25% of the perpetual licensing fee. Taking VMware as an example, the total cost of ownership works out quite differently when computed on a per-CPU, per-socket basis versus a per-host basis.
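As a rough illustration of how these figures interact, the sketch below compares the multi-year cost of a perpetual license plus annual support against a straight subscription. The prices, the five-year horizon and the 25% support rate used here are assumptions for illustration only, not actual VMware pricing.

```python
def perpetual_tco(license_fee: float, years: int, support_rate: float = 0.25) -> float:
    """One upfront perpetual license fee plus annual support/maintenance,
    assumed here at roughly 25% of the license fee per year."""
    return license_fee + license_fee * support_rate * years


def subscription_tco(annual_fee: float, years: int) -> float:
    """A recurring annual subscription fee paid over the same period."""
    return annual_fee * years


# Hypothetical figures for illustration only.
years = 5
print(perpetual_tco(10_000, years))    # 22500.0 over five years
print(subscription_tco(4_000, years))  # 20000 over five years
```

With these assumed numbers the subscription still looks cheaper after five years, but the gap narrows the longer the software stays in service, which is why the comparison has to be made over the expected lifetime rather than at purchase time.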

Flash limitations

While the rapid read and write capability of solid-state drives (SSDs) is well known, there are limitations in their longer-term behaviour that every data centre administrator should be aware of. SSDs work by writing and erasing data in NAND blocks, the smallest units of data storage in an SSD, which have a limited lifespan of program/erase cycles. Data in a NAND block cannot be overwritten in place and must be erased first. As a result, the performance of an SSD varies over time and continues to degrade, giving the drive a limited lifespan.

Algorithms inside SSDs distribute usage of the NAND blocks so that the wear caused by erase operations is spread evenly across the whole drive. However, this can only be done in the background and requires a certain percentage of NAND blocks to be reserved for this back-and-forth movement of data. This is called over-provisioning, and it involves partitioning off and reserving a percentage of good NAND blocks for these operations.
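The sketch below is a toy model of the wear-levelling idea, assuming nothing about any vendor's actual firmware: each write is steered to the free block with the fewest erase cycles, so no single block wears out ahead of the rest.

```python
import heapq


class WearLeveler:
    """Toy wear-levelling model: every write goes to the free block with the
    fewest erase cycles, spreading erase wear evenly across the NAND."""

    def __init__(self, num_blocks: int):
        # Min-heap of (erase_count, block_id) so the least-worn block is always first.
        self.free = [(0, block) for block in range(num_blocks)]
        heapq.heapify(self.free)

    def write(self) -> int:
        erases, block = heapq.heappop(self.free)
        # Reusing a block means erasing it first, consuming one of its
        # limited program/erase cycles.
        heapq.heappush(self.free, (erases + 1, block))
        return block


leveler = WearLeveler(num_blocks=4)
print([leveler.write() for _ in range(8)])  # blocks are reused in rotation: [0, 1, 2, 3, 0, 1, 2, 3]
```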

Therefore, if the total capacity of an SSD is 1 Terabyte, after setup the administrator may find that the effective storage area is only around 950GB. As usage of the SSD progresses, this percentage of usable area continues to shrink, even as the drive works to keep it available for high-performance workloads.
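A minimal sketch of the arithmetic follows, assuming an over-provisioning ratio of roughly 5% to match the 1 Terabyte / 950GB example above; actual ratios vary by drive and firmware.

```python
def usable_capacity_gb(raw_capacity_gb: float, overprovision_pct: float) -> float:
    """Capacity left for user data after a slice of NAND blocks is reserved
    for wear levelling and background data movement."""
    return raw_capacity_gb * (1 - overprovision_pct / 100)


# Assumed ~5% reservation, matching a 1TB drive that exposes roughly 950GB.
print(usable_capacity_gb(1000, 5))  # 950.0
```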

Bad sectors and wipe-outs

Catastrophic data losses are often linked to a build-up of bad sectors on a traditional hard disk. Bad sectors accumulate on the surface of hard disks through wear and tear, physical impact, over-heating, and file-system errors, among other causes. As the number of bad sectors builds up, sequential writing and reading of data is disrupted, because alternative available blocks need to be found while the bad sectors are skipped. The process of skipping bad sectors and finding good sectors to write to is called remapping.

Hard disks with a higher number of bad sectors go through longer periods of remapping, which slows down access to data on the drive. Continuous remapping and a growing count of bad sectors are eventually followed by catastrophic data failure of one sort or another. Hard disk drives that have developed bad sectors are ten times more likely to fail than drives without bad sectors.
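The toy model below illustrates the remapping idea described above, independent of any particular drive firmware: bad sectors in a request are redirected to spare sectors, and each redirection adds a lookup that healthy sectors do not need, which is where the slowdown comes from. The sector and spare numbers are hypothetical.

```python
def remap_sectors(requested: list[int], bad: set[int], spares: list[int]) -> dict[int, int]:
    """Toy remapping model: each bad sector in a request is redirected to the
    next available spare sector, the way a drive skips failing sectors."""
    mapping, spare_iter = {}, iter(spares)
    for sector in requested:
        if sector in bad:
            mapping[sector] = next(spare_iter)  # extra redirection = slower access
        else:
            mapping[sector] = sector            # healthy sector, direct access
    return mapping


# Hypothetical layout: sectors 3 and 5 have gone bad, spare area starts at 1000.
print(remap_sectors(requested=[1, 2, 3, 4, 5], bad={3, 5}, spares=[1000, 1001]))
# {1: 1, 2: 2, 3: 1000, 4: 4, 5: 1001}
```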
