== Standard levels ==
{{Main|Standard RAID levels}}
[[File:HuaweiRH2288HV2 (cropped).JPG|thumb|Storage servers with 24 hard disk drives each and built-in hardware RAID controllers supporting various RAID levels]]

Originally, there were five standard levels of RAID, but many variations have evolved, including several [[nested RAID levels|nested levels]] and many [[non-standard RAID levels|non-standard levels]] (mostly [[proprietary software|proprietary]]). RAID levels and their associated data formats are standardized by the [[Storage Networking Industry Association]] (SNIA) in the Common RAID Disk Drive Format (DDF) standard:<ref>{{cite web |url=http://www.snia.org/tech_activities/standards/curr_standards/ddf/ |title=Common RAID Disk Drive Format (DDF) standard |publisher=SNIA |work=SNIA.org |access-date=2012-08-26}}</ref><ref>{{cite web |url=http://www.snia.org/education/dictionary |title=SNIA Dictionary |publisher=SNIA |work=SNIA.org |access-date=2010-08-24}}</ref>

* '''[[RAID 0]]''' consists of block-level [[data striping|striping]], but no [[disk mirroring|mirroring]] or [[parity bit#RAID|parity]]. Assuming ''n'' fully used drives of equal capacity, the capacity of a RAID 0 volume matches that of a [[spanned volume]]: the total of the ''n'' drives' capacities. However, because striping distributes the contents of each file across all drives, the failure of any drive renders the entire RAID 0 volume inaccessible; typically all data is lost, and files cannot be recovered without a backup copy. (A sketch of the striping layout appears after this list.)
: By contrast, a spanned volume, which stores files sequentially, loses only the data stored on the failed drive and preserves the data on the remaining drives. Recovering the surviving files after a drive failure can still be challenging and often depends on the specifics of the filesystem; regardless, files that span onto or off the failed drive are permanently lost.
: On the other hand, the benefit of RAID 0 is that the [[throughput]] of read and write operations to any file is multiplied by the number of drives because, unlike spanned volumes, reads and writes are performed [[concurrency (computer science)|concurrently]].<ref name="Patterson_1994" /> The cost is increased vulnerability to drive failures: since the failure of any single drive in a RAID 0 setup causes the loss of the entire volume, the average failure rate of the volume rises with the number of attached drives. This makes RAID 0 a poor choice for scenarios requiring data reliability or fault tolerance.
* '''[[RAID 1]]''' consists of data mirroring, without parity or striping. Data is written identically to two or more drives, producing a "mirrored set" of drives; any read request can then be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its [[seek time]] and [[rotational latency]]), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of the throughputs of every drive in the set, just as for RAID 0; in practice, however, the read throughput of most RAID 1 implementations is slower than that of the fastest drive. Write throughput is always slower because every drive must be updated, so the slowest drive limits write performance. The array continues to operate as long as at least one drive is functioning.<ref name="Patterson_1994" /> (A mirroring sketch appears after this list.)
* '''[[RAID 2]]''' consists of bit-level striping with dedicated [[Hamming code|Hamming-code]] parity. All disk spindle rotation is synchronized and data is [[data striping|striped]] such that each sequential [[bit]] is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive.<ref name="Patterson_1994" /> This level is of historical significance only; although it was used on some early machines (for example, the [[Thinking Machines Corporation|Thinking Machines]] CM-2),<ref>{{Cite book |title=Structured Computer Organization |edition=6th |first=Andrew S. |last=Tanenbaum |page=95}}</ref> {{As of|2014|lc=yes}} it is not used by any commercially available system.<ref>{{Cite book |title=Computer Architecture: A Quantitative Approach |edition=4th |first1=John |last1=Hennessy |first2=David |last2=Patterson |year=2006 |page=362 |isbn=978-0123704900}}</ref>
* '''[[RAID 3]]''' consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential [[byte]] is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive.<ref name="Patterson_1994" /> Although implementations exist,<ref>{{cite web |url=http://www.freebsd.org/doc/handbook/geom-raid3.html |title=FreeBSD Handbook, Chapter 20.5 GEOM: Modular Disk Transformation Framework |access-date=2012-12-20}}</ref> RAID 3 is not commonly used in practice.
* '''[[RAID 4]]''' consists of block-level striping with dedicated parity. This level was previously used by [[NetApp]], but has since been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called [[RAID-DP]].<ref name="NetApp">{{cite web |url=http://www.netapp.com/us/library/technical-reports/tr-3298.html |title=RAID-DP: NetApp Implementation of Double Parity RAID for Data Protection |work=NetApp Technical Report TR-3298 |first1=Jay |last1=White |first2=Chris |last2=Lueth |date=May 2010 |access-date=2013-03-02}}</ref> The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one read operation does not have to span all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.<ref name="patterson" />
* '''[[RAID 5]]''' consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, and all drives but one must be present for the array to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks.<ref name="Patterson_1994" /> (An XOR-parity sketch appears after this list.) Like all single-parity concepts, large RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see "[[#Increasing rebuild time and failure probability|Increasing rebuild time and failure probability]]" section, below).<ref name="StorageForum" /> Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array.
* '''[[RAID 6]]''' consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced.<ref name="Patterson_1994" /> With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.<ref name="zdnet">{{cite news |url=http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805 |archive-url=https://web.archive.org/web/20100815215636/http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805 |url-status=dead |archive-date=August 15, 2010 |title=Why RAID 6 stops working in 2019 |work=[[ZDNet]] |date=22 February 2010}}</ref> RAID 10 also minimizes these problems.<ref name="UREs">{{cite web |url=http://www.techrepublic.com/blog/datacenter/how-to-protect-yourself-from-raid-related-unrecoverable-read-errors-ures/1752 |first=Scott |last=Lowe |title=How to protect yourself from RAID-related Unrecoverable Read Errors (UREs) |work=TechRepublic |date=2009-11-16 |access-date=2012-12-01}}</ref> (A dual-parity sketch appears after this list.)
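The round-robin layout behind RAID 0's block-level striping can be sketched in a few lines. The following fragment is a minimal illustration, not taken from any standard or product; the function name and the choice of four drives are assumptions made for the example.

<syntaxhighlight lang="python">
# Minimal sketch of RAID 0 block-level striping (illustrative only).
# Logical block i lands on drive i mod n at offset i // n, so consecutive
# blocks can be read and written on different drives concurrently.

def raid0_map(logical_block: int, n_drives: int) -> tuple[int, int]:
    """Return (drive index, block offset on that drive) for a logical block."""
    return logical_block % n_drives, logical_block // n_drives

# With four drives, logical blocks interleave as 0, 1, 2, 3, 0, 1, ...
for block in range(8):
    drive, offset = raid0_map(block, 4)
    print(f"logical block {block} -> drive {drive}, offset {offset}")
</syntaxhighlight>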
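Mirroring can be sketched the same way. Below, plain dictionaries stand in for block devices in a two-way RAID 1 set; this is a toy model for illustration, not a description of any real controller.

<syntaxhighlight lang="python">
# Minimal sketch of RAID 1 mirroring (illustrative only): every write goes
# to all mirrors, and any surviving mirror can service a read.

drives = [{}, {}]  # a two-way mirrored set; dicts stand in for block devices

def mirrored_write(block: int, data: bytes) -> None:
    for drive in drives:  # the slowest drive bounds write latency
        drive[block] = data

def mirrored_read(block: int) -> bytes:
    for drive in drives:  # the first mirror holding the block services the read
        if block in drive:
            return drive[block]
    raise OSError("all mirrors failed")

mirrored_write(0, b"payload")
drives[0].clear()                      # simulate losing one drive
assert mirrored_read(0) == b"payload"  # the remaining mirror still serves it
</syntaxhighlight>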
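The single parity used by RAID 4 and RAID 5 reduces to bitwise XOR: the parity block is the XOR of a stripe's data blocks, so any one lost block equals the XOR of the surviving blocks and the parity. The sketch below uses illustrative two-byte blocks.

<syntaxhighlight lang="python">
# Minimal sketch of single (XOR) parity as used by RAID 4/5 (illustrative).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"\x01\x02", b"\x0f\x10", b"\xaa\x55"]  # data blocks on three drives
parity = xor_blocks(stripe)                       # stored on a fourth drive

# Drive 1 fails: rebuild its block from the survivors plus the parity block.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
</syntaxhighlight>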
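RAID 6's second parity requires arithmetic in a [[finite field]], so that two simultaneous failures yield two independent equations. The sketch below computes the common P (XOR) and Q (GF(2<sup>8</sup>)-weighted) syndromes with generator 2 and reduction polynomial 0x11D, the parameters used by, for example, the Linux kernel's RAID 6 code; the two-failure reconstruction step (solving the pair of equations) is omitted for brevity, and all names are illustrative.

<syntaxhighlight lang="python">
# Minimal sketch of RAID 6 dual parity (illustrative only). P is plain XOR;
# Q weights each data block by a distinct power of the generator in GF(2^8),
# giving a second independent equation so any two lost blocks can be solved.

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8), reducing by x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    """Raise a to the n-th power in GF(2^8) by repeated multiplication."""
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def pq_parity(stripe: list[bytes]) -> tuple[bytes, bytes]:
    """Return the P (XOR) and Q (weighted) parity blocks for one stripe."""
    p = bytearray(len(stripe[0]))
    q = bytearray(len(stripe[0]))
    for i, block in enumerate(stripe):
        g = gf_pow(2, i)  # coefficient g^i for drive i, with generator g = 2
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g, byte)
    return bytes(p), bytes(q)

p, q = pq_parity([b"\x01\x02", b"\x0f\x10", b"\xaa\x55"])
</syntaxhighlight>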