In the first part we installed the server with Proxmox and made sure that updates can be applied. Now we want to set up a software RAID 1 so that our data stays safe if one of the disks fails. The containers and VMs will run on the unmirrored disk and store their important data on the RAID. I chose a software RAID because I currently have no spare RAID controller for the test server and could not find a sponsor for one.
I also want to show that a software RAID works perfectly well and is a real option for a home server. A hardware RAID controller is of course the better solution and has clear advantages, especially when it comes to read and write speeds, but as I said, for my purposes this approach is good enough. With a RAID 1 the disks are mirrored, so every piece of data exists twice. The downside is that only half of the raw capacity is usable: with two 4 TB disks we buy 8 TB but can use only 4 TB, which makes the storage fairly expensive when you compare purchased capacity with usable capacity.
For write speeds a RAID 1 delivers roughly the performance of a single disk, because the data is written in one piece and does not have to be split across drives first. When reading very large files or many small ones you can in theory even gain performance, because both disks can be read in parallel. That is particularly interesting for videos or large photo collections, which stream nicely as a result.
First, let us get an overview of the disks that are currently in the system. I use the program fdisk for this; the parameter "-l" prints a list of all disks:
root@test-srv1:~# fdisk -l
Disk /dev/sda: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: ABCDB0DA-XXXX-XXXX-XXXX-E37ABCDA1905
Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 250069646 249019023 118.8G Linux LVM
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ABCD7AC5-XXXX-XXXX-XXXX-425DA9AEC742
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ABCD30AA-XXXX-XXXX-XXXX-E062ABCD63F8
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ABCDA647-XXXX-XXXX-XXXX-FE0ABCDA1B3
Disk /dev/mapper/pve-swap: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-root: 29.5 GiB, 31675383808 bytes, 61865984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
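Besides fdisk, the lsblk command gives a more compact overview of the disks and their partitions. This is purely optional and not part of the steps that follow, but handy for a quick check:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT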
For the software RAID we need the tool mdadm, which we install next:
root@test-srv1:~# apt install mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
mdadm
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 430 kB of archives.
After this operation, 1,174 kB of additional disk space will be used.
Get:1 http://ftp.debian.org/debian stretch/main amd64 mdadm amd64 3.4-4+b1 [430 kB]
Fetched 430 kB in 0s (1,209 kB/s)
Preconfiguring packages ...
Selecting previously unselected package mdadm.
(Reading database ... 54830 files and directories currently installed.)
Preparing to unpack .../mdadm_3.4-4+b1_amd64.deb ...
Unpacking mdadm (3.4-4+b1) ...
Setting up mdadm (3.4-4+b1) ...
Generating mdadm.conf... done.
update-initramfs: deferring update (trigger activated)
Generating grub configuration file ...
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23569: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23569: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23582: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23582: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23595: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23595: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23608: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23608: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23669: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23669: /usr/sbin/grub-probe
Found linux image: /boot/vmlinuz-4.15.18-13-pve
Found initrd image: /boot/initrd.img-4.15.18-13-pve
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23757: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23757: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23771: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23771: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23784: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23784: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23797: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 23797: /usr/sbin/grub-probe
Found linux image: /boot/vmlinuz-4.15.18-12-pve
Found initrd image: /boot/initrd.img-4.15.18-12-pve
Found linux image: /boot/vmlinuz-4.15.18-10-pve
Found initrd image: /boot/initrd.img-4.15.18-10-pve
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24056: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24056: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24096: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24096: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24109: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24109: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24122: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24122: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24135: /usr/sbin/grub-probe
File descriptor 3 (pipe:[85375]) leaked on vgs invocation. Parent PID 24135: /usr/sbin/grub-probe
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Processing triggers for systemd (232-25+deb9u11) ...
Processing triggers for man-db (2.7.6.1-2) ...
Processing triggers for initramfs-tools (0.130) ...
update-initramfs: Generating /boot/initrd.img-4.15.18-13-pve
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Now that mdadm is installed, its built-in help gives a first overview of the available modes:
root@test-srv1:~# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm --create device options...
Create a new array from unused devices.
mdadm --assemble device options...
Assemble a previously created array.
mdadm --build device options...
Create or assemble an array without metadata.
mdadm --manage device options...
make changes to an existing array.
mdadm --misc options... devices
report on or modify various md related devices.
mdadm --grow options device
resize/reshape an active array
mdadm --incremental device
add/remove a device to/from an array as appropriate
mdadm --monitor options...
Monitor one or more array for significant changes.
mdadm device options...
Shorthand for --manage.
Any parameter that does not start with '-' is treated as a device name
or, for --examine-bitmap, a file name.
The first such name is often the name of an md device. Subsequent
names are often names of component devices.
For detailed help on the above major modes use --help after the mode
e.g.
mdadm --assemble --help
For general help on options use
mdadm --help-options
root@test-srv1:~# mdadm --create --help
Usage: mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
This usage will initialise a new md array, associate some
devices with it, and activate the array. In order to create an
array with some devices missing, use the special word 'missing' in
place of the relevant device name.
Before devices are added, they are checked to see if they already contain
raid superblocks or filesystems. They are also checked to see if
the variance in device size exceeds 1%.
If any discrepancy is found, the user will be prompted for confirmation
before the array is created. The presence of a '--run' can override this
caution.
If the --size option is given then only that many kilobytes of each
device is used, no matter how big each device is.
If no --size is given, the apparent size of the smallest drive given
is used for raid level 1 and greater, and the full device is used for
other levels.
Options that are valid with --create (-C) are:
--bitmap= : Create a bitmap for the array with the given filename
: or an internal bitmap if 'internal' is given
--chunk= -c : chunk size in kibibytes
--rounding= : rounding factor for linear array (==chunk size)
--level= -l : raid level: 0,1,4,5,6,10,linear,multipath and synonyms
--parity= -p : raid5/6 parity algorithm: {left,right}-{,a}symmetric
--layout= : same as --parity, for RAID10: [fno]NN
--raid-devices= -n : number of active devices in array
--spare-devices= -x: number of spare (eXtra) devices in initial array
--size= -z : Size (in K) of each drive in RAID1/4/5/6/10 - optional
--data-offset= : Space to leave between start of device and start
: of array data.
--force -f : Honour devices as listed on command line. Don't
: insert a missing drive for RAID5.
--run -R : insist of running the array even if not all
: devices are present or some look odd.
--readonly -o : start the array readonly - not supported yet.
--name= -N : Textual name for array - max 32 characters
--bitmap-chunk= : bitmap chunksize in Kilobytes.
--delay= -d : bitmap update delay in seconds.
--write-journal= : Specify journal device for RAID-4/5/6 array
The array itself will later be created with a command of the form mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1. Before we can do that, each of the data disks needs a partition. For this I use cfdisk, which opens an interactive menu in which you create a new partition covering the whole disk and write the changes; shown here for /dev/sdd, and done in the same way for /dev/sdb and /dev/sdc:
root@test-srv1:~# cfdisk /dev/sdd
Syncing disks.
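cfdisk is interactive, so the individual menu steps are hard to reproduce in a text post. If you prefer a scriptable way to get the same result (one partition spanning the whole disk), something like sgdisk from the gdisk package works as well; this is only a sketch and was not part of my original setup:
apt install gdisk
sgdisk --zap-all /dev/sdb                       # wipe any existing partition table
sgdisk --new=1:0:0 --typecode=1:fd00 /dev/sdb   # one partition over the full disk, type "Linux RAID"
The type code fd00 marks the partition as Linux RAID; cfdisk offers the same choice in its type menu (alternatively 8300 for a plain Linux filesystem).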
After making these settings we restart the server with the reboot command. After the reboot we create the file system on the disks:
root@test-srv1:~# mkfs.ext4 /dev/sdb1
mke2fs 1.43.4 (31-Jan-2017)
/dev/sdb1 contains a ext4 file system
last mounted on Sun Mar 31 17:37:22 2019
Proceed anyway? (y,N) y
Creating filesystem with 976754385 4k blocks and 244195328 inodes
Filesystem UUID: ABCD7AC5-XXXX-XXXX-XXXX-425DA9AEC742
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
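The same call is then repeated for the other two partitions; the output looks essentially the same and is omitted here:
mkfs.ext4 /dev/sdc1
mkfs.ext4 /dev/sdd1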
Once the process has been repeated for /dev/sdc1 and /dev/sdd1, we can move straight on to creating the RAID array:
root@test-srv1:~# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=3907017540K mtime=Thu Jan 1 01:00:00 1970
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=3907017540K mtime=Thu Jan 1 01:00:00 1970
A look at /proc/mdstat shows that the array exists and that the initial synchronisation is running:
root@test-srv1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[1] sdb1[0]
3906886464 blocks super 1.2 [2/2] [UU]
[>....................] resync = 0.1% (6024512/3906886464) finish=357.8min speed=181666K/sec
bitmap: 30/30 pages [120KB], 65536KB chunk
unused devices: &lt;none&gt;
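The initial resync of two 4 TB disks takes several hours (the estimate above is roughly 358 minutes). If you want to follow the progress live, watch simply re-runs the command at a fixed interval; again purely optional:
watch -n 10 cat /proc/mdstat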
Once the synchronisation has finished, we can display all the interesting information about the RAID:
root@test-srv1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[1] sdb1[0]
3906886464 blocks super 1.2 [2/2] [UU]
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: &lt;none&gt;
root@test-srv1:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat May 4 22:35:57 2019
Raid Level : raid1
Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun May 5 06:01:43 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : test-srv1:0 (local to host test-srv1)
UUID : XXXXXXXX:XXXXXXXX:XXXXXXXX:XXXXXXXX
Events : 5428
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
root@test-srv1:~# mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : XXXXXXXX:XXXXXXXX:XXXXXXXX:XXXXXXXX
Name : test-srv1:0 (local to host test-srv1)
Creation Time : Sat May 4 22:35:57 2019
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 7813772943 (3725.90 GiB 4000.65 GB)
Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
Used Dev Size : 7813772928 (3725.90 GiB 4000.65 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=15 sectors
State : clean
Device UUID : XXXXXXXX:XXXXXXXX:XXXXXXXX:XXXXXXXX
Internal Bitmap : 8 sectors from superblock
Update Time : Sun May 5 06:01:43 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 9d7df450 - correct
Events : 5428
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
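One step I would recommend, even though it is not part of the transcript above: record the new array in /etc/mdadm/mdadm.conf (remember the warning during installation that it defines no arrays) and rebuild the initramfs, so the array is reliably assembled as /dev/md0 on every boot. Treat this as an optional addition:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u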
Now that the array is in place, we create an ext4 file system on /dev/md0, just as we would on a single disk:
root@test-srv1:~# mkfs.ext4 /dev/md0
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 976721616 4k blocks and 244187136 inodes
Filesystem UUID: ABC1234-XXXX-XXXX-XXXX-4321ABC1234
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information:
done
Finally we create a mount point and mount the new file system:
root@test-srv1:~# mkdir /media/raid1
root@test-srv1:~# mount /dev/md0 /media/raid1
So that the RAID is mounted automatically after every reboot, we add an entry to /etc/fstab, using the UUID that mkfs.ext4 reported above:
UUID=ABC1234-XXXX-XXXX-XXXX-4321ABC1234 /media/raid1 ext4 defaults 0 2
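To make sure the fstab entry is correct without rebooting, you can unmount the array and let mount re-read fstab; a quick sanity check, nothing more:
umount /media/raid1
mount -a
df -h /media/raid1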