
How to Create Different RAID Array Levels on Linux

In this tutorial, we'll guide you through creating a new redundant array of independent disks (RAID) array while leaving your existing RAID array untouched. Creating two separate RAID arrays improves your system's redundancy and performance for different data sets. This guide focuses on RAID 1, but the core principles can be applied to configure any RAID setup that suits your needs. Root (sudo) access is required to perform the operations outlined in this guide.

#What Are the RAID Levels and Their Disk Requirements?

RAID arrays come in different configurations designed to improve performance, redundancy, or both. Here are some key points to consider:

#RAID 0

  • Minimum number of drives: 2
  • Disk failure tolerance: No tolerance. If any drive fails, all data is lost.

#RAID 1

  • Minimum number of drives: 2
  • Disk failure tolerance: One drive failure can be tolerated without data loss, as the data is mirrored on the other drive.

#RAID 5

  • Minimum number of drives: 3
  • Disk failure tolerance: One drive failure can be tolerated. Data can be reconstructed from the information stored on the remaining drives.

#RAID 10

  • Minimum number of drives: 4
  • Disk Failure Tolerance: Each mirrored pair can tolerate up to one drive failure. Multiple drive failures can be tolerated if they are in different mirrored pairs.
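
Put together, these levels trade usable capacity for redundancy in different ways. As a rough illustration (assuming, for example, drives of 1000 GB each and the minimum drive counts listed above), the usable capacity works out as follows:

Command Line
# Rough usable-capacity estimate using the minimum drive counts above,
# assuming each drive is 1000 GB (example value only)
SIZE_GB=1000
echo "RAID 0  (2 drives): $((2 * SIZE_GB)) GB - no redundancy"
echo "RAID 1  (2 drives): $((1 * SIZE_GB)) GB - fully mirrored"
echo "RAID 5  (3 drives): $(((3 - 1) * SIZE_GB)) GB - one drive's worth of parity"
echo "RAID 10 (4 drives): $((4 / 2 * SIZE_GB)) GB - striped mirrors"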

#Instructions to Create New RAID Arrays

#Step 1: Prepare Your Drives

Identify the drives connected to your system and determine which are available for the creation of a RAID array. This involves ensuring the drives are unmounted and not part of any existing RAID configuration. Unmounting a drive will not cause data loss by itself; however, creating a RAID array on a drive will erase any data stored on it. Ensure you have backups if necessary.

  1. List all block devices.

    Use the “lsblk” command to list all block devices, their mount points, and their current usage. Look for devices that are not mounted or part of another RAID array.

    Command Line
    lsblk
    
  2. Unmount the drives.

    Make sure the drives you want to use are not mounted. If they are, unmount them using the “umount” command, replacing “/dev/sdX” with the chosen drive name:

    Command Line
    sudo umount /dev/sdX
    
  3. Check for existing RAID configurations.

    Use the following command to see the status of any existing RAID arrays. If the drives you want to use are part of an existing array, you must remove them before proceeding:

    Command Line
    cat /proc/mdstat 
    

    An example output would be:

    Output
    root@fphtaeqnyi-nxwtgsavpf:~# cat /proc/mdstat
     Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
     md0 : active raid1 nvme0n1p2[0] nvme1n1p1[1]
     937049088 blocks super 1.2 [2/2] [UU]
     [====>................] resync = 35.9% (336969216/937049088) finish=48.4min speed=206513K/sec
     bitmap: 5/7 pages [20KB], 65536KB chunk
    unused devices: <none>
    
  4. OPTIONAL - Clear previous RAID configurations.

    If a drive was previously part of a RAID array, you'll need to remove any remaining RAID metadata. Please note that this action will erase the RAID metadata and any existing file systems on the drive, potentially resulting in data loss. You may skip this step if you are using fresh drives or drives that have never been part of a RAID array. Use the following command, replacing "/dev/sdX" with the appropriate drive name:

    Command Line
    sudo mdadm --zero-superblock /dev/sdX 
    

    Repeat this step for each drive you wish to use in the new RAID array; a short sketch for preparing several drives at once follows this list.
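
If you are preparing several drives at once, a small shell loop can save some typing. The sketch below is only an example: the drive names "/dev/sdc" and "/dev/sdd" are placeholders, and "mdadm --examine" simply reports whether a leftover RAID superblock is present on each drive.

Command Line
# Placeholder drive names - adjust to match your system
for d in /dev/sdc /dev/sdd; do
    sudo umount "$d" 2>/dev/null || true   # ignore errors if the drive is not mounted
    sudo mdadm --examine "$d"              # reports any leftover RAID superblock
done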

#Step 2: Create the RAID Array

Once the drives have been prepared, the next step is to create the RAID array. The following commands are provided for the creation of different RAID array types.

#For RAID 0

Replace "mdX" with your desired RAID device name and replace "/dev/sdc" and "/dev/sd" with the appropriate drive names.

Command Line
sudo mdadm --create --verbose /dev/mdX --level=0 --raid-devices=2 /dev/sdc /dev/sdd 

#For RAID 1

Replace "mdX" with your desired RAID device name and replace "/dev/sdc" and "/dev/sdd" with the appropriate drive names.

Command Line
sudo mdadm --create --verbose /dev/mdX --level=1 --raid-devices=2 /dev/sdc /dev/sdd 

#For RAID 5

Replace "mdX" with your desired RAID device name and replace "/dev/sdc", "/dev/sdd", and "/dev/sde" with the appropriate drive names.

Command Line
sudo mdadm --create --verbose /dev/mdX --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde 

#For RAID 10

Replace "mdX" with your desired RAID device name and replace "/dev/sdc", "/dev/sdd", "/dev/sde", and "/dev/sdf" with the appropriate drive names.

Command Line
sudo mdadm --create --verbose /dev/mdX --level=10 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf 

The process will appear similar to this RAID 1 array example:

Output
root@fphtaeqnyi-nxwtgsavpf:~# sudo mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/nvme2n1 /dev/nvme3n1
mdadm: Note: this array has metadata at the start and
       may not be suitable as a boot device.  If you plan to
       store '/boot' on this device please ensure that
       your boot-loader understands md/v1.x metadata, or use
       --metadata=0.90
mdadm: size set to 3750606144K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
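
The initial synchronization of a new mirrored or parity array can take a while. If you want to watch the progress, one simple option is to poll "/proc/mdstat" every few seconds:

Command Line
watch -n 5 cat /proc/mdstat

Press Ctrl + C to exit the watch view; the resync continues in the background.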

#Step 3: Check the RAID Array Status

You can verify the status of the RAID array with the following command, replacing “/dev/mdX” with your RAID array device name:

Command Line
sudo mdadm --detail /dev/mdX 

This should return:

Output
root@fphtaeqnyi-nxwtgsavpf:~# sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 18 22:40:49 2024
     Raid Level : raid1
     Array Size : 3750606144 (3.49 TiB 3.84 TB)
  Used Dev Size : 3750606144 (3.49 TiB 3.84 TB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Intent Bitmap : Internal

    Update Time : Tue Jun 18 22:47:32 2024
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

Consistency Policy : bitmap

    Resync Status : 2% complete

           Name : fphtaeqnyi-nxwtgsavpf:1 (local to host fphtaeqnyi-nxwtgsavpf)
           UUID : cf674ea9:d9a5c4d9:a7e67de1:f380cf54
         Events : 79

    Number   Major   Minor   RaidDevice State
       0     259        5        0      active sync   /dev/nvme2n1
       1     259        6        1      active sync   /dev/nvme3n1
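
If you only want a quick one-line summary instead of the full report, a simple filter over the same command also works. This is just a convenience sketch; replace "/dev/mdX" with your RAID array device name:

Command Line
sudo mdadm --detail /dev/mdX | grep -E 'State|Resync'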

#Step 4: Save the RAID Configuration

  1. Create a variable to store the UUID of the new RAID array.

    This command extracts the UUID of the new RAID array and stores it in a variable named “NEW_UUID”. Replace “/dev/mdX” with your RAID array device name:

    Command Line
    NEW_UUID=$(sudo mdadm --detail /dev/mdX | grep 'UUID' | awk '{print $3}') 
    
  2. Append the new RAID array details to “mdadm.conf”. This command appends the new RAID array's details to the “mdadm.conf” file using the UUID stored in the “NEW_UUID” variable (a quick way to verify the entry is shown after this list):

    Command Line
    sudo mdadm --detail --scan | grep $NEW_UUID | sudo tee -a /etc/mdadm/mdadm.conf 
    

    You should see a similar output to:

    Output
    root@fphtaeqnyi-nxwtgsavpf:~# NEW_UUID=$(sudo mdadm --detail /dev/md1 | grep 'UUID' | awk '{print $3}')
    root@fphtaeqnyi-nxwtgsavpf:~# sudo mdadm --detail --scan | grep $NEW_UUID | sudo tee -a /etc/mdadm/mdadm.conf
    ARRAY /dev/md1 metadata=1.2 name=fphtaeqnyi-nxwtgsavpf:1 UUID=cf674ea9:d9a5c4d9:a7e67de1:f380cf54
    
  3. Update the “initramfs”. This command updates the initial RAM filesystem to include the new RAID configuration:

    Command Line
    sudo update-initramfs -u
    

    This returns:

    Output
    root@fphtaeqnyi-nxwtgsavpf:~# sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-5.15.0-107-generic
    W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast
    
  4. Assemble the RAID Array.

    This command ensures that the RAID array is correctly assembled on system startup:

    Command Line
    sudo mdadm --assemble --scan
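
If you would like to confirm that the new entry was written to “mdadm.conf”, a quick check such as the following should print an “ARRAY” line containing your array's UUID:

Command Line
sudo grep ARRAY /etc/mdadm/mdadm.conf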
    

#Step 5: Create a Filesystem on the RAID Array

  1. Create a filesystem on the RAID array (e.g., ext4).

    Use this command and replace “/dev/mdX” with your RAID array device name:

    Command Line
    sudo mkfs.ext4 /dev/mdX 
    

    You should see:

    Output
    root@fphtaeqnyi-nxwtgsavpf:~# sudo mkfs.ext4 /dev/md1
    mke2fs 1.46.5 (30-Dec-2021)
    Discarding device blocks: done
    Creating filesystem with 937651536 4k blocks and 234414080 inodes
    Filesystem UUID: 0311e850-c233-42c9-b180-74958a4a413d
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848, 512000000, 550731776, 64492544
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (262144 blocks): done
    Writing superblocks and filesystem accounting information: done
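
This guide uses ext4 as an example, but any Linux filesystem can be created on top of the md device. For instance, if you prefer XFS and the xfsprogs package is installed, the equivalent command would be:

Command Line
sudo mkfs.xfs /dev/mdX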
    

#Step 6: Mount the RAID Array

  1. Create a mount point with the following command, replacing “/mnt/mdX” with your desired mount point:

    Command Line
    sudo mkdir -p /mnt/mdX 
    
  2. To mount the RAID array, run this command, and replace “/dev/mdX” with your RAID array device name, and “/mnt/mdX” with your mount point:

    Command Line
    sudo mount /dev/mdX /mnt/mdX
    
  3. Edit the “/etc/fstab” file.

    Open the “/etc/fstab” file in a text editor to ensure the RAID array persists through reboots, using:

    Command Line
    sudo nano /etc/fstab 
    
  4. Add an entry to “/etc/fstab”.

    At the bottom of the “/etc/fstab” file, add the following line, replacing “/dev/mdX” with your RAID array device name, “/mnt/mdX” with your mount point, and “<filesystem>” with the filesystem type you created (for example, ext4); a note on referencing the filesystem by UUID instead follows this list:

    /dev/mdX /mnt/mdX <filesystem> defaults 0 2 
    

    It should look like this:

    GNU nano 6.2 /etc/fstab *
    
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/md0 during curtin installation
    #/dev/disk/by-id/md-uuid-b264bbd4-5581775e:bf63febe:d6f5d2f2 / ext4 defaults 0 1
    # /boot/efi was on /dev/nvme0n1p1 during curtin installation
    #/dev/disk/by-id/DFE-6AA7 /boot/efi vfat defaults 0 1
    /swap.img none swap sw 0 0
    /dev/md1 /mnt/md1 ext4 defaults 0 2
    
  5. Save and exit the editor.

    • Press Ctrl + X to exit.
    • Press Y to confirm you want to save the changes.
    • Press Enter to finalize the save.
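
As an optional refinement, the “/etc/fstab” entry can reference the filesystem by UUID rather than by device name, which is more robust if device names change between boots. The sketch below shows how you might look up the UUID and test your fstab entries without rebooting:

Command Line
sudo blkid /dev/mdX    # prints the filesystem UUID of the array
sudo mount -a          # attempts to mount anything in /etc/fstab not already mounted

An fstab line using the UUID would then take the form “UUID=<your-uuid> /mnt/mdX ext4 defaults 0 2”.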

#Step 7: Verify the RAID Array

  1. Check the status of the RAID array using the following command, replacing “/dev/mdX” with your RAID array device name:

    Command Line
    sudo mdadm --detail /dev/mdX 
    
    This should provide the following:
    Output
    root@fphtaeqnyi-nxwtgsavpf:/# sudo mdadm --detail /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Tue Jun 18 22:40:49 2024
         Raid Level : raid1
         Array Size : 3750606144 (3.49 TiB 3.84 TB)
      Used Dev Size : 3750606144 (3.49 TiB 3.84 TB)
         Raid Devices : 2
        Total Devices : 2
        Persistence : Superblock is persistent
    
        Intent Bitmap : Internal
    
        Update Time : Tue Jun 18 23:44:52 2024
              State : clean, resyncing
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
    Consistency Policy : bitmap
    
        Resync Status : 20% complete
    
               Name : fphtaeqnyi-nxwtgsavpf:1 (local to host fphtaeqnyi-nxwtgsavpf)
               UUID : cf674ea9:d9a5c4d9:a7e67de1:f380cf54
             Events : 718
    
        Number   Major   Minor   RaidDevice State
           0     259        5        0      active sync   /dev/nvme2n1
           1     259        6        1      active sync   /dev/nvme3n1
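
You can also confirm that the array is mounted and check its size and free space, for example with:

Command Line
df -h /mnt/mdX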
    

You have now completed all the steps to create a separate RAID array while leaving your existing RAID array untouched, and you can continue operations as normal.
