Software-RAID HOWTO
Linas Vepstas, linas@linas.org v0.54, 21 November 1998
Korean translation by 최희철 (ironyjk@kldp.org), 1 March 2000
RAID stands for ''Redundant Array of Inexpensive Disks'', and
is meant to be a way of creating a fast and reliable disk-drive
subsystem out of individual disks. RAID can guard against disk
failure, and can also improve performance over that of a single
disk drive.
This document is a tutorial/HOWTO/FAQ for users of
the Linux MD kernel extension, the associated tools, and their use.
The MD extension implements RAID-0 (striping), RAID-1 (mirroring),
RAID-4 and RAID-5 in software. That is, with MD, no special hardware
or disk controllers are required to get many of the benefits of RAID.
- Preface
This document is copyrighted and GPL'ed by Linas Vepstas
(
linas@linas.org).
Permission to use, copy, distribute this document for any purpose is
hereby granted, provided that the author's / editor's name and
this notice appear in all copies and/or supporting documents; and
that an unmodified version of this document is made freely available.
This document is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY, either expressed or implied. While every effort
has been taken to ensure the accuracy of the information documented
herein, the author / editor / maintainer assumes NO RESPONSIBILITY
for any errors, or for any damages, direct or consequential, as a
result of the use of the information documented herein.
The translator accepts no responsibility for any mental or physical
damage caused by this absurd and irresponsible translation. ^^
(This being my first translation, there are a few(?) mistranslations in it.)
This document is under the GPL. Please send me mail about any
mistranslations, mistakes, or information that needs updating.
I call it a translation, but it is rather rough. I would like to do
a more detailed one, but there is much I don't know and much else
I want to do. ^^;
RAID, although designed to improve system reliability by adding
redundancy, can also lead to a false sense of security and confidence
when used improperly. This false confidence can lead to even greater
disasters. In particular, note that RAID is designed to protect against
*disk* failures, and not against *power* failures or *operator*
mistakes. Power failures, buggy development kernels, or operator/admin
errors can lead to damaged data that is not recoverable!
RAID is *not* a substitute for proper backup of your system.
Know what you are doing, test, be knowledgeable and aware!
- Q:
What is RAID?
A:
RAID stands for "Redundant Array of Inexpensive Disks",
and is meant to be a way of creating a fast and reliable disk-drive
subsystem out of individual disks. In the PC world, "I" has come to
stand for "Independent", where marketing forces continue to
differentiate IDE and SCSI. In its original meaning, "I" meant
"Inexpensive as compared to refrigerator-sized mainframe
3380 DASD", monster drives which made nice houses look cheap,
and diamond rings look like trinkets.
- Q:
What is this document?
A:
This document is a tutorial/HOWTO/FAQ for users of the Linux MD
kernel extension, the associated tools, and their use.
The MD extension implements RAID-0 (striping), RAID-1 (mirroring),
RAID-4 and RAID-5 in software. That is, with MD, no special
hardware or disk controllers are required to get many of the
benefits of RAID.
This document is NOT an introduction to RAID;
you must look elsewhere for that.
- Q:
What levels of RAID does the Linux kernel support?
A:
Striping (RAID-0) and linear concatenation are a part
of the stock 2.x series of kernels. This code is
of production quality; it is well understood and well
maintained. It is being used in some very large USENET
news servers.
RAID-1, RAID-4 & RAID-5 are a part of the 2.1.63 and greater
kernels. For earlier 2.0.x and 2.1.x kernels, patches exist
that will provide this function. Don't feel obligated to
upgrade to 2.1.63; upgrading the kernel is hard; it is *much*
easier to patch an earlier kernel. Most of the RAID user
community is running 2.0.x kernels, and that's where most
of the historic RAID development has focused. The current
snapshots should be considered near-production quality; that
is, there are no known bugs but there are some rough edges and
untested system setups. There are a large number of people
using Software RAID in a production environment.
RAID-1 hot reconstruction has been recently introduced
(August 1997) and should be considered alpha quality.
RAID-5 hot reconstruction will be alpha quality any day now.
A word of caution about the 2.1.x development kernels:
these are less than stable in a variety of ways. Some of
the newer disk controllers (e.g. the Promise Ultra's) are
supported only in the 2.1.x kernels. However, the 2.1.x
kernels have seen frequent changes in the block device driver,
in the DMA and interrupt code, in the PCI, IDE and SCSI code,
and in the disk controller drivers. The combination of
these factors, coupled to cheapo hard drives and/or
low-quality ribbon cables can lead to considerable
heartbreak. The ckraid tool, as well as
fsck and mount put considerable stress
on the RAID subsystem. This can lead to hard lockups
during boot, where even the magic alt-SysReq key sequence
won't save the day. Use caution with the 2.1.x kernels,
and expect trouble. Or stick to the 2.0.34 kernel.
- Q:
Where do I get the kernel patches?
A:
Software RAID-0 and linear mode are a stock part of
all current Linux kernels. Patches for Software RAID-1,4,5
are available from
http://luthien.nuclecu.unam.mx/~miguel/raid.
See also the quasi-mirror
ftp://linux.kernel.org/pub/linux/daemons/raid/
for patches, tools and other goodies.
- Q:
Are there other documents about RAID under Linux?
A:
- Q:
Who do I complain to about this document?
A:
Linas Vepstas slapped this thing together.
However, most of the information,
and some of the words, were supplied by the contributors
credited below.
Copyrights
- Copyright (C) 1994-96 Marc ZYNGIER
- Copyright (C) 1997 Gadi Oxman, Ingo Molnar, Miguel de Icaza
- Copyright (C) 1997, 1998 Linas Vepstas
- By copyright law, additional copyrights are implicitly held
by the contributors listed above.
Thanks all for being there!
- Q:
What is RAID? Why have I never used it?
A:
RAID is a way of combining multiple disk drives into a single
entity to improve performance and/or reliability. There are
a variety of different types and implementations of RAID, each
with its own advantages and disadvantages. For example, by
putting a copy of the same data on two disks (called
disk mirroring, or RAID level 1), read performance can be
improved by reading alternately from each disk in the mirror.
On average, each disk is less busy, as it is handling only
1/2 the reads (for two disks), or 1/3 (for three disks), etc.
In addition, a mirror can improve reliability: if one disk
fails, the other disk(s) have a copy of the data. Different
ways of combining the disks into one, referred to as
RAID levels, can provide greater storage efficiency
than simple mirroring, or can alter latency (access-time)
performance, or throughput (transfer rate) performance, for
reading or writing, while still retaining redundancy that
is useful for guarding against failures.
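The read-sharing arithmetic above (1/2 of the reads per disk for two
mirrors, 1/3 for three, and so on) can be sketched as a toy Python model.
This is illustration only, not part of any RAID tool; the function name
is made up for the example.

```python
# Toy model: round-robin read scheduling across N mirrored disks.
# Every disk holds a full copy of the data, so any disk can serve any read.

def distribute_reads(num_reads, num_disks):
    """Return a list with the number of reads served by each disk."""
    load = [0] * num_disks
    for i in range(num_reads):
        load[i % num_disks] += 1  # alternate between the mirrors
    return load

# With two disks, each handles 1/2 the reads; with three, 1/3 each.
print(distribute_reads(600, 2))  # [300, 300]
print(distribute_reads(600, 3))  # [200, 200, 200]
```

A real kernel scheduler also weighs head position and queue depth, but the
load-splitting principle is the same.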
Although RAID can protect against disk failure, it does
not protect against operator and administrator (human)
error, or against loss due to programming bugs (possibly
due to bugs in the RAID software itself). The net abounds with
tragic tales of system administrators who have bungled a RAID
installation, and have lost all of their data. RAID is not a
substitute for frequent, regularly scheduled backup.
RAID can be implemented
in hardware, in the form of special disk controllers, or in
software, as a kernel module that is layered in between the
low-level disk driver, and the file system which sits above it.
RAID hardware is always a "disk controller", that is, a device
to which one can cable up the disk drives. Usually it comes
in the form of an adapter card that will plug into a
ISA/EISA/PCI/S-Bus/MicroChannel slot. However, some RAID
controllers are in the form of a box that connects into
the cable in between the usual system disk controller, and
the disk drives. Small ones may fit into a drive bay; large
ones may be built into a storage cabinet with its own drive
bays and power supply.
The latest RAID hardware used with
the latest & fastest CPU will usually provide the best overall
performance, although at a significant price. This is because
most RAID controllers come with on-board DSP's and memory
cache that can off-load a considerable amount of processing
from the main CPU, as well as allow high transfer rates into
the large controller cache. Old RAID hardware can act as
a "de-accelerator" when used with newer CPU's: yesterday's
fancy DSP and cache can act as a bottleneck, and its
performance is often beaten by pure-software RAID and new
but otherwise plain, run-of-the-mill disk controllers.
RAID hardware can offer an advantage over pure-software
RAID, if it can make use of disk-spindle synchronization
and its knowledge of the disk-platter position with
regard to the disk head, and the desired disk-block.
However, most modern (low-cost) disk drives do not offer
this information and level of control anyway, and thus,
most RAID hardware does not take advantage of it.
RAID hardware is usually
not compatible across different brands, makes and models:
if a RAID controller fails, it must be replaced by another
controller of the same type. As of this writing (June 1998),
a broad variety of hardware controllers will operate under Linux;
however, none of them currently come with configuration
and management utilities that run under Linux.
Software-RAID is a set of kernel modules, together with
management utilities that implement RAID purely in software,
and require no extraordinary hardware. The Linux RAID subsystem
is implemented as a layer in the kernel that sits above the
low-level disk drivers (for IDE, SCSI and Paraport drives),
and the block-device interface. The filesystem, be it ext2fs,
DOS-FAT, or other, sits above the block-device interface.
Software-RAID, by its very software nature, tends to be more
flexible than a hardware solution. The downside is that it
of course requires more CPU cycles and power to run well
than a comparable hardware system. Of course, the cost
can't be beat. Software RAID has one further important
distinguishing feature: it operates on a partition-by-partition
basis, where a number of individual disk partitions are
ganged together to create a RAID partition. This is in
contrast to most hardware RAID solutions, which gang together
entire disk drives into an array. With hardware, the fact that
there is a RAID array is transparent to the operating system,
which tends to simplify management. With software, there
are far more configuration options and choices, tending to
complicate matters.
As of this writing (June 1998), the administration of RAID
under Linux is far from trivial, and is best attempted by
experienced system administrators. The theory of operation
is complex. The system tools require modification to startup
scripts. And recovery from disk failure is non-trivial,
and prone to human error. RAID is not for the novice,
and any benefits it may bring to reliability and performance
can be easily outweighed by the extra complexity. Indeed,
modern disk drives are incredibly reliable and modern
CPU's and controllers are quite powerful. You might more
easily obtain the desired reliability and performance levels
by purchasing higher-quality and/or faster hardware.
- Q:
What are the different RAID levels? Why so many? How do I tell them apart?
A:
The different RAID levels have different performance,
redundancy, storage capacity, reliability and cost
characteristics. Most, but not all levels of RAID
offer redundancy against disk failure. Of those that
offer redundancy, RAID-1 and RAID-5 are the most popular.
RAID-1 offers better performance, while RAID-5 provides
for more efficient use of the available storage space.
However, tuning for performance is an entirely different
matter, as performance depends strongly on a large variety
of factors, from the type of application, to the sizes of
stripes, blocks, and files. The more difficult aspects of
performance tuning are deferred to a later section of this HOWTO.
The following describes the different RAID levels in the
context of the Linux software RAID implementation.
- RAID-linear
is a simple concatenation of partitions to create
a larger virtual partition. It is handy if you have a number
of small drives, and wish to create a single, large partition.
This concatenation offers no redundancy, and in fact
decreases the overall reliability: if any one disk
fails, the combined partition will fail.
- RAID-1
is also referred to as "mirroring".
Two (or more) partitions, all of the same size, each store
an exact copy of all data, disk-block by disk-block.
Mirroring gives strong protection against disk failure:
if one disk fails, there is another with an exact copy
of the same data. Mirroring can also help improve
performance in I/O-laden systems, as read requests can
be divided up between several disks. Unfortunately,
mirroring is also the least efficient in terms of storage:
two mirrored partitions can store no more data than a
single partition.
- Striping
is the underlying concept behind all of
the other RAID levels. A stripe is a contiguous sequence
of disk blocks. A stripe may be as short as a single disk
block, or may consist of thousands. The RAID drivers
split up their component disk partitions into stripes;
the different RAID levels differ in how they organize the
stripes, and what data they put in them. The interplay
between the size of the stripes, the typical size of files
in the file system, and their location on the disk is what
determines the overall performance of the RAID subsystem.
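The way a RAID driver splits a partition into interleaved stripes can be
sketched with a toy Python mapping. This is a simplified model for
illustration, not the kernel's actual code; the function name and the
block-addressing scheme are assumptions of the example.

```python
# Toy model of striping: map a logical block number to
# (disk, block-on-that-disk) for an array of `ndisks` drives,
# with each stripe holding `stripe_blocks` consecutive blocks.

def locate(logical_block, ndisks, stripe_blocks):
    stripe_no = logical_block // stripe_blocks   # which stripe, array-wide
    offset = logical_block % stripe_blocks       # position inside the stripe
    disk = stripe_no % ndisks                    # stripes interleave round-robin
    block_on_disk = (stripe_no // ndisks) * stripe_blocks + offset
    return disk, block_on_disk

# Consecutive stripes land on consecutive disks:
for lb in range(8):
    print(lb, locate(lb, ndisks=2, stripe_blocks=2))
```

Running this shows that blocks 0-1 go to disk 0, blocks 2-3 to disk 1,
blocks 4-5 back to disk 0, and so on: exactly the interleaving the text
describes.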
- RAID-0
is much like RAID-linear, except that
the component partitions are divided into stripes and
then interleaved. Like RAID-linear, the result is a single
larger virtual partition. Also like RAID-linear, it offers
no redundancy, and therefore decreases overall reliability:
a single disk failure will knock out the whole thing.
RAID-0 is often claimed to improve performance over the
simpler RAID-linear. However, this may or may not be true,
depending on the characteristics of the file system, the
typical size of the file as compared to the size of the
stripe, and the type of workload. The ext2fs
file system already scatters files throughout a partition,
in an effort to minimize fragmentation. Thus, at the
simplest level, any given access may go to one of several
disks, and thus, the interleaving of stripes across multiple
disks offers no apparent additional advantage. However,
there are performance differences, and they are data,
workload, and stripe-size dependent.
- RAID-4
interleaves stripes like RAID-0, but
it requires an additional partition to store parity
information. The parity is used to offer redundancy:
if any one of the disks fail, the data on the remaining disks
can be used to reconstruct the data that was on the failed
disk. Given N data disks, and one parity disk, the
parity stripe is computed by taking one stripe from each
of the data disks, and XOR'ing them together. Thus,
the storage capacity of an (N+1)-disk RAID-4 array
is N, which is a lot better than mirroring (N+1) drives,
and is almost as good as a RAID-0 setup for large N.
Note that for N=1, where there is one data drive, and one
parity drive, RAID-4 is a lot like mirroring, in that
each of the two disks is a copy of each other. However,
RAID-4 does NOT offer the read-performance
of mirroring, and offers considerably degraded write
performance. In brief, this is because updating the
parity requires a read of the old parity, before the new
parity can be calculated and written out. In an
environment with lots of writes, the parity disk can become
a bottleneck, as each write must access the parity disk.
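The XOR parity computation described above can be demonstrated in a few
lines of Python. This is a toy demonstration of the principle, not the
driver's implementation; the stripe contents and function name are made up.

```python
# Toy RAID-4 parity: the parity stripe is the XOR of the data stripes,
# and XOR-ing the surviving stripes with the parity rebuilds a lost one.

def xor_stripes(stripes):
    out = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

data = [b"hello, world!", b"parity stripe", b"RAID-4 is ok!"]  # N data disks
parity = xor_stripes(data)  # what the dedicated parity disk would store

# Simulate losing disk 1 and rebuilding it from the survivors plus parity:
survivors = [data[0], data[2], parity]
rebuilt = xor_stripes(survivors)
assert rebuilt == data[1]
```

The same XOR property is why a write is expensive: changing one data stripe
means the parity stripe must be read, recomputed, and rewritten.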
- RAID-5
avoids the write-bottleneck of RAID-4
by alternately storing the parity stripe on each of the
drives. However, write performance is still not as good
as for mirroring, as the parity stripe must still be read
and XOR'ed before it is written. Read performance is
also not as good as it is for mirroring, as, after all,
there is only one copy of the data, not two or more.
RAID-5's principle advantage over mirroring is that it
offers redundancy and protection against single-drive
failure, while offering far more storage capacity when
used with three or more drives.
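The "alternately storing the parity stripe on each of the drives" idea can
be sketched as a toy layout table. The rotation shown here is one common
scheme (often called left-symmetric); it is an assumption of the example,
and the real md driver supports several layouts.

```python
# Toy sketch of RAID-5 rotating parity: for each stripe row, one disk
# holds parity ("P") and the rest hold data ("D"); the parity role
# rotates so no single disk becomes the write bottleneck.

def parity_disk(row, ndisks):
    """Disk holding parity for this stripe row (one possible rotation)."""
    return (ndisks - 1 - row) % ndisks

for row in range(4):
    p = parity_disk(row, ndisks=3)
    print(" ".join("P" if d == p else "D" for d in range(3)))
```

Printing the layout shows the "P" column walking across the three disks
row by row, in contrast to RAID-4, where one disk would be "P" in every row.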
- RAID-2 and RAID-3
are seldom used anymore, and
to some degree have been made obsolete by modern disk
technology. RAID-2 is similar to RAID-4, but stores
ECC information instead of parity. Since all modern disk
drives incorporate ECC under the covers, this offers
little additional protection. RAID-2 can offer greater
data consistency if power is lost during a write; however,
battery backup and a clean shutdown can offer the same
benefits. RAID-3 is similar to RAID-4, except that it
uses the smallest possible stripe size. As a result, any
given read will involve all disks, making overlapping
I/O requests difficult/impossible. In order to avoid
delay due to rotational latency, RAID-3 requires that
all disk drive spindles be synchronized. Most modern
disk drives lack spindle-synchronization ability, or,
if capable of it, lack the needed connectors, cables,
and manufacturer documentation. Neither RAID-2 nor RAID-3
are supported by the Linux Software-RAID drivers.
- Other RAID levels
have been defined by various
researchers and vendors. Many of these represent the
layering of one type of raid on top of another. Some
require special hardware, and others are protected by
patent. There is no commonly accepted naming scheme
for these other levels. Sometimes the advantages of these
other systems are minor, or at least not apparent
until the system is highly stressed. Except for the
layering of RAID-1 over RAID-0/linear, Linux Software
RAID does not support any of the other variations.
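The storage-capacity trade-offs of the levels described above can be
summarized with a back-of-the-envelope Python helper. This is a toy
calculator following the text (no redundancy for linear/RAID-0, one copy's
worth for mirrors, one disk's worth of parity for RAID-4/5), not a real
administration tool.

```python
# Usable capacity of n equal disks of size `disk_size` under each
# RAID level discussed above.

def usable_capacity(level, n_disks, disk_size):
    if level in ("linear", "0"):   # concatenation / striping: no redundancy
        return n_disks * disk_size
    if level == "1":               # mirroring: one copy's worth of space
        return disk_size
    if level in ("4", "5"):        # one disk's worth is consumed by parity
        return (n_disks - 1) * disk_size
    raise ValueError("unknown RAID level: %s" % level)

# Four 2.1-gig disks (sizes in megabytes):
for level in ("linear", "0", "1", "4", "5"):
    print(level, usable_capacity(level, n_disks=4, disk_size=2100))
```

For four disks this prints 8400M for linear and RAID-0, 2100M for a
four-way mirror, and 6300M for RAID-4/5, which is why RAID-5 is the
popular choice when both redundancy and capacity matter.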
- Q:
What is the best way to configure Software RAID?
A:
I keep rediscovering that file-system planning is one
of the more difficult Unix configuration tasks.
To answer your question, I can describe what we did.
We planned the following setup:
- two EIDE disks, 2.1 gig each.

  disk  partition  mount pt.   size   device
    1       1      /           300M   /dev/hda1
    1       2      swap         64M   /dev/hda2
    1       3      /home       800M   /dev/hda3
    1       4      /var        900M   /dev/hda4
    2       1      /root       300M   /dev/hdc1
    2       2      swap         64M   /dev/hdc2
    2       3      /home       800M   /dev/hdc3
    2       4      /var        900M   /dev/hdc4
-
Each disk is on a separate controller (& ribbon cable).
The theory is that a controller failure and/or
ribbon failure won't disable both disks.
Also, we might possibly get a performance boost
from parallel operations over two controllers/cables.
- Install the Linux kernel on the root (/ )
partition /dev/hda1 . Mark this partition as
bootable.
-
/dev/hdc1 will contain a ``cold'' copy of
/dev/hda1 . This is NOT a raid copy,
just a plain old copy-copy. It's there just in
case the first disk fails; we can use a rescue disk,
mark /dev/hdc1 as bootable, and use that to
keep going without having to reinstall the system.
You may even want to put /dev/hdc1 's copy
of the kernel into LILO to simplify booting in case of
failure.
The theory here is that in case of severe failure,
I can still boot the system without worrying about
raid superblock-corruption or other raid failure modes
& gotchas that I don't understand.
/dev/hda3 and /dev/hdc3 will be mirrored
as /dev/md0 .
/dev/hda4 and /dev/hdc4 will be mirrored
as /dev/md1 .
- we picked /var and /home to be mirrored,
and in separate partitions, using the following logic:
/ (the root partition) will contain
relatively static, non-changing data:
for all practical purposes, it will be
read-only without actually being marked &
mounted read-only.
/home will contain ''slowly'' changing
data.
/var will contain rapidly changing data,
including mail spools, database contents and
web server logs.
The idea behind using multiple, distinct partitions is
that if, for some bizarre reason,
whether it is human error, power loss, or an operating
system gone wild, corruption is limited to one partition.
In one typical case, power is lost while the
system is writing to disk. This will almost certainly
lead to a corrupted filesystem, which will be repaired
by fsck during the next boot. Although
fsck does its best to make the repairs
without creating additional damage during those repairs,
it can be comforting to know that any such damage has been
limited to one partition. In another typical case,
the sysadmin makes a mistake during rescue operations,
leading to erased or destroyed data. Partitions can
help limit the repercussions of the operator's errors.
-
Other reasonable choices for partitions might be
/usr or /opt . In fact, /opt
and /home make great choices for RAID-5
partitions, if we had more disks. A word of caution:
DO NOT put /usr in a RAID-5
partition. If a serious fault occurs, you may find
that you cannot mount /usr , and that
you want some of the tools on it (e.g. the networking
tools, or the compiler.) With RAID-1, if a fault has
occurred, and you can't get RAID to work, you can at
least mount one of the two mirrors. You can't do this
with any of the other RAID levels (RAID-5, striping, or
linear append).
So the complete answer to the question is:
- install the OS on disk 1, partition 1.
do NOT mount any of the other partitions.
- install RAID per instructions.
- configure md0 and md1 .
-
convince yourself that you know
what to do in case of a disk failure!
Discover sysadmin mistakes now,
and not during an actual crisis.
Experiment!
(we turned off power during disk activity —
this proved to be ugly but informative).
- do some ugly mount/copy/unmount/rename/reboot scheme to
move /var over to the /dev/md1 .
Done carefully, this is not dangerous.
- then, enjoy!
- Q:
What is the difference between the mdadd , mdrun , etc.
commands, and the raidadd , raidrun commands?
A:
The names of the tools have changed as of the 0.5 release of the
raidtools package. The md naming convention was used
in the 0.43 and older versions, while raid is used in
0.5 and newer versions.
- Q:
I want to use RAID-linear and RAID-0 with my stock 2.0.34 kernel.
Since no kernel patches are needed for RAID-linear and RAID-0,
I don't want to apply the raid patches. Where can I get
the raid-tools to manage these?
A:
This is a tough question, indeed, as the newest raid tools
package needs to have the RAID-1,4,5 kernel patches installed
in order to compile. I am not aware of any pre-compiled, binary
version of the raid tools that is available at this time.
However, experiments show that the raid-tools binaries, when
compiled against kernel 2.1.100, seem to work just fine
in creating a RAID-0/linear partition under 2.0.34. A brave
soul has asked for these, and I've temporarily
placed the binaries mdadd, mdcreate, etc.
at http://linas.org/linux/Software-RAID/ .
You must get the man pages, etc. from the usual raid-tools
package.
- Q:
Can I run RAID on the root partition?
Why can't I boot Linux directly from the md disks?
A:
Both LILO and Loadlin need a non-striped/non-mirrored partition
to read the kernel image from. If you want to stripe/mirror
the root partition (/ ),
then you'll want to create an unstriped/unmirrored partition
to hold the kernel(s).
Typically, this partition is named /boot .
Then you either use the initial ramdisk support (initrd),
or patches from Harald Hoyer
<
HarryH@Royal.Net>
that allow a striped partition to be used as the root
device. (These patches are now a standard part of recent
2.1.x kernels.)
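As a concrete illustration, a hypothetical /etc/lilo.conf fragment for this layout might look like the following. The device names are assumptions, and root=/dev/md0 works only if your kernel has the initrd support or the md-root patches mentioned above:

```
# /boot lives on a plain, non-RAID partition (here on /dev/hda),
# so LILO can read the kernel image from it at boot time.
boot=/dev/hda
image=/boot/vmlinuz
    label=linux
    root=/dev/md0      # requires initrd support or the md-root patches
    read-only
```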
There are several approaches that can be used.
One approach is documented in detail in the
Bootable RAID mini-HOWTO:
ftp://ftp.bizsystems.com/pub/raid/bootable-raid.
Alternately, use mkinitrd to build the ramdisk image,
see below.
Edward Welbon
<
welbon@bga.com>
writes:
- ... all that is needed is a script to manage the boot setup.
To mount an
md filesystem as root,
the main thing is to build an initial file system image
that has the needed modules and md tools to start md .
I have a simple script that does this.
- For boot media, I have a small cheap SCSI disk
(170MB I got it used for $20).
This disk runs on an AHA1452, but it could just as well be an
inexpensive IDE disk on the native IDE.
The disk need not be very fast since it is mainly for boot.
- This disk has a small file system which contains the kernel and
the file system image for
initrd .
The initial file system image has just enough stuff to allow me
to load the raid SCSI device driver module and start the
raid partition that will become root.
I then do an
echo 0x900 > /proc/sys/kernel/real-root-dev
(0x900 is for /dev/md0 )
and exit linuxrc .
The boot proceeds normally from there.
- I have built most support as a module except for the AHA1452
driver that brings in the
initrd filesystem.
So I have a fairly small kernel. The method is perfectly
reliable, I have been doing this since before 2.1.26 and
have never had a problem that I could not easily recover from.
The file systems even survived several 2.1.4[45] hard
crashes with no real problems.
At one time I had partitioned the raid disks so that the initial
cylinders of the first raid disk held the kernel and the initial
cylinders of the second raid disk held the initial file system
image; instead, I made the initial cylinders of the raid disks
swap, since they are the fastest cylinders
(why waste them on boot?).
- The nice thing about having an inexpensive device dedicated to
boot is that it is easy to boot from and can also serve as
a rescue disk if necessary. If you are interested,
you can take a look at the script that builds my initial
ram disk image and then runs
LILO .
http://www.realtime.net/~welbon/initrd.md.tar.gz
It is current enough to show the picture.
It isn't especially pretty and it could certainly build
a much smaller filesystem image for the initial ram disk.
It would be easy to make it more efficient.
But it uses LILO as is.
If you make any improvements, please forward a copy to me. 8-)
- Q:
I've heard that I can run mirroring over striping. Is this true?
Can I run mirroring over the loopback device?
A:
Yes, but not the reverse. That is, you can put a stripe over
several disks, and then build a mirror on top of this. However,
striping cannot be put on top of mirroring.
A brief technical explanation is that the linear and stripe
personalities use the ll_rw_blk routine for access.
The ll_rw_blk routine
maps disk devices and sectors, not blocks. Block devices can be
layered one on top of the other; but devices that do raw, low-level
disk accesses, such as ll_rw_blk , cannot.
Currently (November 1997) RAID cannot be run over the
loopback devices, although this should be fixed shortly.
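To make the workable direction (a mirror on top of stripe sets) concrete, here is a hedged sketch using the md tools discussed in this document. All device names are assumptions, and the `plan` wrapper only prints each command instead of running it:

```shell
# Mirroring over striping: two RAID-0 stripe sets, then a RAID-1
# mirror whose two components are the stripe sets themselves.
# Device names are assumptions; "plan" prints rather than executes.
plan() { echo "$@"; }

plan mdadd /dev/md0 /dev/sda1 /dev/sdb1   # first stripe set
plan mdrun -p0 /dev/md0                   # -p0: RAID-0 personality
plan mdadd /dev/md1 /dev/sdc1 /dev/sdd1   # second stripe set
plan mdrun -p0 /dev/md1
plan mdadd /dev/md2 /dev/md0 /dev/md1     # mirror across the two stripes
plan mdrun -p1 /dev/md2                   # -p1: RAID-1 personality
```

The reverse order, striping over the md mirrors, is exactly what the ll_rw_blk limitation described above rules out.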
- Q:
I have two small disks and three larger disks. Can I
concatenate the two small disks with RAID-0, and then create
a RAID-5 out of that set and the larger disks?
A:
Currently (November 1997), for a RAID-5 array, no.
Currently, one can do this only for a RAID-1 on top of the
concatenated drives.
- Q:
What is the difference between using RAID-1 and RAID-5
for a two-disk configuration?
A:
There is no difference in storage capacity. Nor can disks be
added to either array to increase capacity (see the question below for
details).
RAID-1 offers a performance advantage for reads: the RAID-1
driver uses distributed-read technology to simultaneously read
two sectors, one from each drive, thus doubling read performance.
The RAID-5 driver, although it contains many optimizations, does not
currently (September 1997) realize that the parity disk is actually
a mirrored copy of the data disk. Thus, it serializes data reads.
- Q:
How can I guard against a two-disk failure?
A:
Some of the RAID algorithms do guard against multiple disk
failures, but these are not currently implemented for Linux.
However, the Linux Software RAID can guard against multiple
disk failures by layering an array on top of an array. For
example, nine disks can be used to create three raid-5 arrays.
Then these three arrays can in turn be hooked together into
a single RAID-5 array on top. In fact, this kind of a
configuration will guard against a three-disk failure. Note that
a large amount of disk space is ''wasted'' on the redundancy
information.
For an NxN raid-5 array,
N=3, 5 out of 9 disks are used for parity (=55%)
N=4, 7 out of 16 disks
N=5, 9 out of 25 disks
...
N=9, 17 out of 81 disks (=~20%)
In general, an MxN array will use M+N-1 disks for parity.
The least amount of space is "wasted" when M=N.
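The parity counts quoted above follow directly from the M+N-1 formula; a small shell loop reproduces them for the square (M=N) case:

```shell
# For an NxN layered RAID-5 array, N+N-1 of the N*N disks hold parity.
for N in 3 4 5 9; do
    total=$((N * N))
    parity=$((N + N - 1))
    echo "N=$N: $parity out of $total disks used for parity"
done
# first line printed: N=3: 5 out of 9 disks used for parity
```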
Another alternative is to create a RAID-1 array with
three disks. Note that since all three disks contain
identical data, 2/3 of the space is ''wasted''.
- Q:
I'd like to understand how it'd be possible to have something
like fsck : if the partition hasn't been cleanly unmounted,
fsck runs and fixes the filesystem by itself more than
90% of the time. Since the machine is capable of fixing it
by itself with ckraid --fix , why not make it automatic?
A:
This can be done by adding lines like the following to
/etc/rc.d/rc.sysinit :
mdadd /dev/md0 /dev/hda1 /dev/hdc1 || {
ckraid --fix /etc/raid.usr.conf
mdadd /dev/md0 /dev/hda1 /dev/hdc1
}
or
mdrun -p1 /dev/md0
if [ $? -gt 0 ] ; then
ckraid --fix /etc/raid1.conf
mdrun -p1 /dev/md0
fi
Before presenting a more complete and reliable script,
let's review the theory of operation.
Gadi Oxman writes:
In an unclean shutdown, Linux might be in one of the following states:
|
|