Boot RAID status on ODA X7

Problem:

On our ODA, after a long time without a reboot and with a critical procedure coming up, we need to know whether, once the appliance is powered off and started again, the RAID array that holds the boot partitions will come up correctly.

SOLUTION:

To check this, we use the mdadm command against the devices "/dev/md0" and "/dev/md1", which are the arrays holding the boot partitions (RAID 1 mirrors), as shown in the output below.
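Before reviewing the full detail output, a quick overview of all md arrays can be obtained from /proc/mdstat; on a healthy two-disk mirror each array should show both members up ([UU]) and no resync in progress (a minimal check, output omitted here since it varies per system):

[root@node0-x7 ~]# cat /proc/mdstat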

## NODE 0
[root@node0-x7 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Thu May 3 20:48:11 2018
Raid Level : raid1
Array Size : 511936 (499.94 MiB 524.22 MB)
Used Dev Size : 511936 (499.94 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jun 24 01:00:04 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:0
UUID : f208c90f:1aeddba4:5aab5a39:da7f9f34
Events : 43

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
[root@node0-x7 ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Thu May 3 20:48:12 2018
Raid Level : raid1
Array Size : 467694592 (446.03 GiB 478.92 GB)
Used Dev Size : 467694592 (446.03 GiB 478.92 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Jun 27 09:29:54 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:1
UUID : ce4fb3e0:2af57fa0:7608ff49:cf4e9e5f
Events : 4171

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3

## NODE 1
[root@node1-x7 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Thu May 3 20:44:44 2018
Raid Level : raid1
Array Size : 511936 (499.94 MiB 524.22 MB)
Used Dev Size : 511936 (499.94 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Jun 26 15:27:01 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:0
UUID : 3d087a10:957b48ba:8f50c397:b5a34ea3
Events : 43

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
[root@node1-x7 ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Thu May 3 20:44:45 2018
Raid Level : raid1
Array Size : 467694592 (446.03 GiB 478.92 GB)
Used Dev Size : 467694592 (446.03 GiB 478.92 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Jun 27 09:28:56 2018
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:1
UUID : 74e59374:1639f352:fea3567d:5efacab3
Events : 4098

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3


In the command output, we need to see that the "Working Devices" field shows 2 and the "Failed Devices" field shows 0 for every array.
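To filter just the relevant fields, something like the following can be used (a minimal sketch; adjust the device list if your layout differs):

[root@node0-x7 ~]# mdadm --detail /dev/md0 /dev/md1 | grep -E 'State :|Working Devices|Failed Devices'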

This check must be performed on both nodes to be sure before the shutdown; a combined check is sketched below.
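To run it on both nodes from a single session, a small loop over SSH can be used (a sketch assuming root SSH access between the nodes and the hostnames shown above):

[root@node0-x7 ~]# for h in node0-x7 node1-x7; do echo "== $h =="; ssh root@$h "mdadm --detail /dev/md0 /dev/md1 | grep -E 'State :|Working Devices|Failed Devices'"; done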