We have a customer with a RAID 5 array of 5 x 136 GB 10K SAS drives on a Modular Server. Over a week ago, one of the drives started getting timeout errors. The server automatically marked the drive as stale and unused and began rebuilding the array using a dedicated spare. However, it is now about 10 days later and the background task indicates the rebuild is only at 32% complete. Is it possible that such a small array could really take this long? Or do we need to take other action? The stale drive is still physically in the server and powered up, but it continues to get a timeout and reset event every minute or so.
What is the proper action here? We are hesitant to eject the stale drive before the rebuild completes, but perhaps the failing drive's constant timeouts are degrading the array's performance and slowing the rebuild. Any guidance, comments, or suggestions are greatly appreciated. Thank you.
Message was edited by: Tim Sagstetter

Edit: The rebuild is now proceeding at about 1% per day. At this rate, it will take months to complete.
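For anyone trying to sanity-check these numbers, here is a rough back-of-the-envelope estimate. Note the two figures in the post imply different rates: 32% in ~10 days averages ~3.2%/day (~3 more weeks), while the later ~1%/day observation implies roughly 68 more days. This sketch simply extrapolates from whatever progress rate you observe; the controller's own progress counter is the authoritative source.

```python
def rebuild_eta_days(percent_done: float, days_elapsed: float) -> float:
    """Estimate days remaining, assuming the observed average rate holds.

    percent_done: rebuild progress so far (0-100)
    days_elapsed: days since the rebuild started
    """
    if percent_done <= 0:
        raise ValueError("no progress observed yet")
    rate = percent_done / days_elapsed          # percent per day
    return (100.0 - percent_done) / rate

# 32% in 10 days -> ~3.2%/day average, roughly 3 more weeks at that rate
print(rebuild_eta_days(32, 10))

# At ~1%/day (the rate observed later), 68% remaining means ~68 more days
print(rebuild_eta_days(32, 32))
```

If the observed rate is dropping over time, a linear extrapolation like this is optimistic; a drive that is still in the array and timing out every minute could well be the reason the rate collapsed.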