Post by habiba123820 on Nov 2, 2024 9:26:32 GMT
In this article I will consider three good options for using SSDs to speed up the storage subsystem.
Why not just assemble an array of SSDs - a little theory and reasoning on the topic
Most often, solid-state drives are considered simply as an alternative to HDDs, offering higher throughput and IOPS. However, such a one-to-one replacement is often too expensive (branded HP drives, for example, start at around $2,000), so projects fall back on ordinary SAS drives, or the fast drives are used only selectively.
In particular, it seems convenient to use an SSD for the system partition or for the database partition - the specific performance gains are covered in the relevant materials. Those same comparisons show that with regular HDDs the bottleneck is disk performance, whereas with SSDs the interface becomes the limiting factor. Therefore, replacing just one disk will not always give the same return as a comprehensive upgrade.
Servers use SSDs with a SATA interface, or the higher-performance SAS and PCI-E. Most server SSDs with a SAS interface on the market are sold under the HP, Dell, and IBM brands. Note that even branded servers can use drives from OEM manufacturers such as Toshiba, HGST (Hitachi), and others, which allows for the cheapest possible upgrade with similar characteristics.
With the widespread adoption of SSDs, a separate protocol was developed for accessing drives connected to the PCI-E bus - NVM Express (NVMe). The protocol was designed from scratch and significantly exceeds the capabilities of the usual SCSI and AHCI. NVMe is typically used by solid-state drives with PCI-E and U.2 (SFF-8639) interfaces, as well as some M.2 drives, which are more than twice as fast as regular SSDs. The technology is relatively new, but over time it will certainly take its place in the fastest disk systems.
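To quickly check whether a server already exposes NVMe devices, and which models they are, you can read the controller attributes from sysfs on Linux. The following is a minimal sketch, assuming a Linux host with the standard /sys/class/nvme layout; the attribute names (model, serial, firmware_rev) are those the kernel NVMe driver normally exposes, not anything vendor-specific.

import glob
import os

def list_nvme_controllers():
    """List NVMe controllers and their reported model/serial/firmware from sysfs."""
    controllers = []
    for ctrl_path in sorted(glob.glob("/sys/class/nvme/nvme*")):
        info = {"name": os.path.basename(ctrl_path)}
        for attr in ("model", "serial", "firmware_rev"):
            attr_file = os.path.join(ctrl_path, attr)
            if os.path.exists(attr_file):
                with open(attr_file) as f:
                    info[attr] = f.read().strip()
        controllers.append(info)
    return controllers

if __name__ == "__main__":
    for ctrl in list_nvme_controllers():
        print(ctrl)

On a machine without NVMe drives the script simply prints nothing, which is itself a useful answer when planning an upgrade.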
A little about DWPD and how this characteristic influences the choice of a specific model. DWPD (Drive Writes Per Day) indicates how many times the drive's full capacity can be rewritten each day over the warranty period, and it is the main endurance figure to compare when picking a model for write-heavy workloads.
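As a worked example of how a DWPD rating translates into total endurance, here is a simple calculation; the capacity, rating, and warranty figures are illustrative assumptions, not taken from any specific drive's datasheet.

# Illustrative endurance calculation from a DWPD rating (hypothetical figures).
capacity_gb = 800        # drive capacity
dwpd = 3                 # rated Drive Writes Per Day
warranty_years = 5       # warranty period the rating applies to

# Total bytes written the drive is rated for over its warranty, in TB
tbw = capacity_gb * dwpd * warranty_years * 365 / 1000

print(f"Rated endurance: {tbw:.0f} TB written over {warranty_years} years")
# For 800 GB at 3 DWPD over 5 years this comes to roughly 4380 TB.

Comparing this number against the expected daily write volume of your workload shows whether a cheaper low-DWPD model is sufficient or a write-intensive model is required.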
Thus, when replacing conventional disks with solid-state drives, it is logical to use MLC models in RAID 1, which will provide excellent speed with the same level of reliability.
It is sometimes argued that combining RAID with SSDs is not the best idea. The reasoning is that SSDs in a RAID array wear out synchronously, so at some point all the disks can fail at once, especially during an array rebuild. However, the situation is essentially the same with HDDs, except that damaged blocks on a magnetic surface will not even let you read the data, whereas a worn SSD typically remains readable.
The still-high cost of solid-state drives makes it worth considering alternative uses for them, beyond targeted replacement of individual disks or building storage systems entirely on SSDs.
Expanding RAID controller cache
The size and speed of the RAID controller's cache largely determine the speed of the array as a whole, and this cache can be expanded with an SSD. The technology resembles Intel's Smart Response solution.
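To see why cache size matters so much, here is a back-of-the-envelope calculation of average read latency as a function of the cache hit ratio. The latency figures are purely illustrative assumptions, not measurements of any particular controller or drive.

# Effective average read latency for a cached array (illustrative numbers).
cache_hit_latency_ms = 0.1    # assumed latency when a request is served from the SSD cache
hdd_miss_latency_ms = 8.0     # assumed latency when a request falls through to the HDD array

def effective_latency(hit_ratio):
    """Weighted average latency for a given cache hit ratio (0..1)."""
    return hit_ratio * cache_hit_latency_ms + (1 - hit_ratio) * hdd_miss_latency_ms

for hit_ratio in (0.5, 0.8, 0.95):
    print(f"hit ratio {hit_ratio:.0%}: ~{effective_latency(hit_ratio):.2f} ms average")
# A larger SSD cache keeps more of the working set, pushing the hit ratio
# (and therefore the average latency) toward the SSD figure.

The point of the example: going from a 50% to a 95% hit ratio cuts the average latency by roughly an order of magnitude, which is exactly the effect an SSD-backed controller cache is aiming for.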