Random stuff
Wednesday, June 13, 2012
Rumor has it that you cannot boot an operating system from a software array. Not exactly.
All modern boot loaders - the NT loader, Linux LILO, and GRUB - will happily boot from a RAID 1 (mirror).
Note, however, that some additional steps are needed to achieve boot redundancy. Since mirroring does not cover the Master Boot Record, you have to copy the MBR sector between the disks of a software mirror manually. If you skip this step, then when the primary (boot) drive dies, you are left with an unbootable system.
There is no irreversible data loss - you can still attach the surviving drive to a working PC and read it - but no automatic recovery is possible if the boot drive fails during operation, and a reboot is required.
Monday, January 30, 2012
Make a software RAID bootable?
Is it possible to create a bootable software RAID 0, RAID 5, or JBOD?
The answer is "no". It is impossible to start an operating system off a software RAID 0, RAID 5, or JBOD.
A hardware controller is required to make such a RAID bootable. You cannot start an operating system from a software RAID because the array is not readable until the operating system has finished loading, and the operating system itself is on the array.
Your only option is to start an operating system from a software RAID 1. To boot from the "second" drive of a software RAID 1, you may need to copy the master boot record first, because the master boot record is typically not subject to mirroring.
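The manual MBR copy can be sketched with dd. To keep this runnable without touching real disks, the sketch below works on scratch image files; the file names are illustrative only:

```shell
# Sketch: replicate the boot-code portion of the MBR from the primary
# disk to its mirror partner. Demonstrated on image files for safety.
dd if=/dev/zero of=disk1.img bs=512 count=4 2>/dev/null
printf 'BOOTCODE' | dd of=disk1.img conv=notrunc 2>/dev/null   # fake boot code
dd if=/dev/zero of=disk2.img bs=512 count=4 2>/dev/null

# Copy only the 446-byte boot-code area, leaving the partition table alone:
dd if=disk1.img of=disk2.img bs=446 count=1 conv=notrunc 2>/dev/null
cmp -n 446 disk1.img disk2.img && echo "boot code copied"
```

On a real mirror the equivalent would be something like `dd if=/dev/sda of=/dev/sdb bs=446 count=1` (or `bs=512 count=1` to copy the whole MBR sector when both disks are partitioned identically) - double-check the device names before running anything like this against real hardware.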
Tuesday, January 10, 2012
Complex RAID level: RAID 100
If you look RAID 100 up on Wikipedia, you find that it is considered a nested RAID level: a RAID 0 built from RAID 10 arrays. It follows that the on-disk contents of a RAID 100 are exactly the same as those of a RAID 10.
So it is possible to obtain a RAID 0 from a RAID 100 by removing one disk from each mirror set. From the RAID recovery point of view, RAID 100 recovery thus boils down to RAID 0 recovery, provided you can fetch a full set of RAID 0 member disks.
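The reduction is easy to see in a toy model. The sketch below (illustrative only, not any real tool's layout code) stripes blocks over two mirror pairs, then drops one disk from each pair and reassembles the data as a plain RAID 0 recovery would:

```python
# Toy model: RAID 100 / RAID 10 layout is a stripe over mirror pairs.
# Dropping one disk from each mirror pair leaves an ordinary RAID 0.

data = [f"block{i}" for i in range(8)]

# Two mirror pairs; stripe blocks across pairs, duplicate within each pair.
pairs = [[[], []], [[], []]]
for i, block in enumerate(data):
    pair = pairs[i % 2]
    pair[0].append(block)   # first disk of the mirror
    pair[1].append(block)   # identical copy on the second disk

# Keep only one disk per pair -> a plain RAID 0 stripe set.
raid0 = [pair[0] for pair in pairs]

# Reassemble by de-striping, exactly as a RAID 0 recovery would.
recovered = [raid0[i % 2][i // 2] for i in range(len(data))]
assert recovered == data
```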
Saturday, October 15, 2011
Linux and fashion
Everyone knows that Linux developers like and encourage a diversity of versions, supporting various mutually incompatible variants whether they are really useful or not.
It is enough to look at the md-raid superblock, which is known to have at least three "just slightly" different versions. Most of them differ only in the location of the superblock relative to the array contents (versions 0.90 and 1.0 put it near the end of the device, 1.1 at the very beginning, and 1.2 at 4 KB from the beginning). Such a difference can have unexpected consequences: say you want to recover a NAS manually following the instructions at www.nasrecovery.info, but Linux cannot assemble the array because it does not see a single superblock.
In my opinion, those who develop Linux are just a little obsessed with the location of the superblock. As a consequence, with enviable regularity another version with a new superblock location is released.
Wednesday, September 21, 2011
Want to build an array?
If you are going to build some big storage, think through the following:
- the array size you need
- redundancy
- performance
- whether you need to boot from the array
- how much money you are willing to spend.
If you decide to build your own RAID, have a look at the tips at www.raidtips.com.
Sunday, August 28, 2011
URE values - real or not?
When reading the technical specifications that vendors publish for their disks, one notices that the Unrecoverable Read Error (URE) rates given are often not realistic. The URE probability is widely used to back naive claims like "RAID 5 is dead by 2009" and to estimate the chance of a double failure in RAID 5. These calculations worry people building their own RAIDs.
In fact, the vendor URE figures seem to be very far off the mark. The technical documentation on Hitachi's official website quotes an interesting URE value for a 3 TB hard disk: 10^-14 errors per bit read. From this value, the probability of reading the drive from start to end without encountering a URE is
(1 - 10^-14)^(8 × 3 × 10^12) ≈ 0.79,
so the probability of the disk failing to read at least one sector along the way is about 20%.
In other words, if the vendor figure were accurate, a disk filled to capacity would have a non-negligible chance (about 20%) of not giving you all your data back. This is easily proven wrong by simple testing.
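The arithmetic above is easy to check, for example in Python:

```python
# Probability of reading a 3 TB disk end to end without a single URE,
# assuming the vendor-quoted rate of 10^-14 errors per bit read.
import math

bits = 8 * 3 * 10**12            # 3 TB expressed in bits
ure_rate = 1e-14                 # probability that any one bit fails to read

# (1 - 1e-14)**bits loses precision if computed naively;
# exp(n * log1p(-p)) is the numerically safe equivalent.
p_full_read = math.exp(bits * math.log1p(-ure_rate))
print(round(p_full_read, 2))     # → 0.79
print(round(1 - p_full_read, 2)) # → 0.21, i.e. about a 20% chance of a URE
```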
Monday, August 15, 2011
Random access time in RAIDs
The basic speed characteristics of a data storage device are:
- access time, defined as the delay between the moment a request is issued to the storage device and the moment the requested data begins to arrive;
- throughput, the sustained average transfer rate.
The access time of a regular disk consists of the time to position the read head over the track (the seek time) and the time the drive needs to bring a sector under the head (the rotational latency). No matter how many member disks a RAID 0 has, there always exists a sector that is simultaneously the furthest away from the head and not in the cache. For that sector the access time is the same as on a single drive. The only way to decrease access time is to switch to an SSD.
P.S. One can easily measure access time (and other performance characteristics) using the free benchmark software BenchMe.
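As a rough illustration of the seek-plus-rotation breakdown (the numbers below are typical ballpark values, not any particular drive's datasheet):

```python
# Rough access-time estimate for a rotating disk.
# Figures are typical ballpark values for a 7200 RPM desktop drive,
# not a specific model's specification.

rpm = 7200
avg_seek_ms = 8.5

# On average the needed sector is half a revolution away from the head.
avg_rotational_latency_ms = (60_000 / rpm) / 2   # 60,000 ms per minute

access_time_ms = avg_seek_ms + avg_rotational_latency_ms
print(f"{access_time_ms:.2f} ms")                # → 12.67 ms
```

Note that the rotational term depends only on spindle speed, which is why adding more disks to a RAID 0 cannot shrink it.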