List Pool
zpool list
Show detailed health status
zpool status rpool
Show all properties for the pool
zpool get all rpool
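zpool get also accepts a comma-separated list if you only need a few properties (the names below are standard pool properties):
zpool get size,capacity,health,fragmentation rpool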
Show I/O Stats
zpool iostat -v rpool
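An optional trailing interval (in seconds) makes zpool iostat keep printing updated stats until interrupted, e.g. every 5 seconds:
zpool iostat -v rpool 5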
List datasets
root@pve:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     20.7G  1.73T   139K  /rpool
rpool/ROOT                6.12G  1.73T   128K  /rpool/ROOT
rpool/ROOT/pve-1          6.12G  1.73T  6.12G  /
rpool/data                14.6G  1.73T   128K  /rpool/data
rpool/data/vm-100-disk-0  10.7G  1.73T  10.7G  -
rpool/data/vm-101-disk-0  3.85G  1.73T  3.85G  -
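Snapshots are hidden from zfs list by default; to show them as well:
root@pve:~# zfs list -t snapshot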
Replacing a failed disk
root@pve:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: resilvered 143G in 15h22m with 0 errors on Fri Oct 14 10:59:46 2016
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
            sdc2    ONLINE       0     0     0

errors: No known data errors
Assuming the failing disk is /dev/sdb2, first take the disk offline:
root@pve:~# zpool offline rpool /dev/sdb2
root@pve:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 143G in 15h22m with 0 errors on Fri Oct 14 10:59:46 2016
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            sda2    ONLINE       0     0     0
            sdb2    OFFLINE      0     0     0
            sdc2    ONLINE       0     0     0

errors: No known data errors
Replace the physical disk, then clone the partition table from a working disk, re-randomize the GUIDs, and reinstall GRUB before letting ZFS at the disk again:
root@pve:~# sgdisk --replicate=/dev/sdb /dev/sda
root@pve:~# sgdisk --randomize-guids /dev/sdb
root@pve:~# grub-install /dev/sdb
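Optionally verify that the partition table really made it onto the new disk before handing it back to ZFS (sgdisk -p just prints the table):
root@pve:~# sgdisk -p /dev/sdb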
Replace the disk in the ZFS pool
root@pve:~# zpool replace rpool /dev/sdb2
Check status:
root@pve:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Oct 17 13:02:00 2016
        147M scanned out of 298G at 4.46M/s, 19h1m to go
        47.3M resilvered, 0.05% done
config:

        NAME             STATE     READ WRITE CKSUM
        rpool            DEGRADED     0     0     0
          raidz1-0       DEGRADED     0     0     0
            sda2         ONLINE       0     0     0
            replacing-1  OFFLINE      0     0     0
              old        OFFLINE      0     0     0
              sdb2       ONLINE       0     0     0  (resilvering)
            sdc2         ONLINE       0     0     0

errors: No known data errors
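Resilvering can take many hours; a simple way to keep an eye on progress is to re-run the status command periodically, for example with watch:
watch -n 60 zpool status rpool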
Add a cache device to an existing pool
zpool add -f rpool cache sdc
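A cache (L2ARC) vdev can also be detached again at runtime if it is no longer wanted, using the same device name as above:
zpool remove rpool sdc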
Show all drives on the RAID controller:
./storcli64 /c0 show
Example output:
-------------------------------------------------------------------------------
EID:Slt DID State DG      Size Intf Med SED PI SeSz Model                   Sp
-------------------------------------------------------------------------------
252:0     8 Onln   0 931.0 GB  SATA SSD N   N  512B Samsung SSD 850 EVO 1TB U
252:1     9 Onln   0 931.0 GB  SATA SSD N   N  512B Samsung SSD 850 EVO 1TB U
-------------------------------------------------------------------------------
Set the drive offline:
./storcli64 /c0 /e252 /s1 set offline
Mark the drive as missing:
./storcli64 /c0 /e252 /s1 set missing
Spin the drive down for removal:
./storcli64 /c0 /e252 /s1 spindown
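Optionally blink the slot LED first to identify the right drive in the chassis (same controller/enclosure/slot addressing as above; 'stop locate' turns it off again):
./storcli64 /c0 /e252 /s1 start locate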
Then replace the disk/SSD. The rebuild starts automatically; its status can be checked with the following command:
./storcli64 /c0 /eall /sall show rebuild
Example output:
------------------------------------------------------------
Drive-ID     Progress% Status          Estimated Time Left
------------------------------------------------------------
/c0/e252/s0  -         Not in progress -
/c0/e252/s1  46        In progress     -
------------------------------------------------------------
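If the rebuild does not start by itself, it can be kicked off manually for the replaced slot (assuming the same enclosure/slot as above):
./storcli64 /c0 /e252 /s1 start rebuild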