http://www.vantagetech.com/faq/raid-5-recovery-faq.html
> "RAID 5 volume sets require a minimum of at least three hard drives to create and maintain a RAID 5 volume"
I am not certain, but I think one drive can be removed from the set while still keeping the saved data.
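The three-drive minimum in that quote follows from how RAID 5 spends capacity: one drive's worth of space across the set holds parity, so usable capacity is (n − 1) × drive size. A quick sketch (the function name and sizes are made up for illustration, not from any RAID tool):

```python
def raid5_usable_capacity(num_drives: int, drive_size_tb: float) -> float:
    """Usable capacity of a RAID 5 set: one drive's worth of space
    across the stripe is consumed by parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 requires at least three drives")
    return (num_drives - 1) * drive_size_tb

# Four 2 TB drives in RAID 5 give 6 TB of usable space.
print(raid5_usable_capacity(4, 2.0))
```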
Trouble
Noob Whisperer
- Joined: Nov 30, 2009
- Messages: 13,722
Yes.... that's the whole idea behind RAID 5 (to provide fault tolerance), although you'll probably see I/O slow to a crawl until you replace the failed drive and rebuild/reinitialize the set.
In that type of configuration it's a very good idea, if your hardware RAID card/firmware (software) supports it, to configure something called a "hot spare": a fourth hard drive just hanging out, ready to jump in for the failed drive.
Most, but I suppose not all, hardware RAID controllers support that setup.
Solution
- Thread Author
- #4
Thanks, Trouble. If the parity bit or data is on the failed drive, how come users can still fetch data until we rebuild with the new one? So every time it uses XOR logic to reconstruct the data?
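The reconstruction the question asks about really is just XOR: the parity block is the XOR of the data blocks in a stripe, so any one missing block equals the XOR of everything that survives. A toy sketch (block contents and the `xor_blocks` helper are invented for illustration; a real controller does this per-stripe in hardware, with parity rotated across drives):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

# One stripe: three data blocks, each living on its own drive.
d0 = b"hello---"
d1 = b"raid5!!!"
d2 = b"parity.."

# The parity block, stored on a fourth position in the stripe.
parity = xor_blocks([d0, d1, d2])

# The drive holding d1 fails. Reads of d1 are served by XORing
# the surviving data blocks with the parity block on the fly,
# which is why the array still works (slowly) while degraded.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```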
Trouble
Noob Whisperer
- Joined: Nov 30, 2009
- Messages: 13,722
Are you seriously asking me how come?
There must be thousands of webpages, courtesy of any search engine, that will provide you with that answer and I might add, likely far better than I could ever do.
Personally, I'm not sure I even remember how a striped set with parity actually works. I'm quite certain I knew all about it at one time, back when things like that were important to me.
Now, I'm afraid, like many other things..... I just take it for granted. Sort of like:
I don't know and understand how my refrigerator actually works but that's OK..... I still enjoy the cold beer it provides.