Proxmox: Replacing the Boot Drive (512 -> 4k)

Here I describe how to replace an NVMe drive with a 512-byte block size with an NVMe drive with a 4K block size on Proxmox.

The usual approach, copying the partition table from the healthy disk to the new disk, does not work here because the block sizes differ.

nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme1n1          /dev/ng1n1            S340NX1K758869       SAMSUNG MZVLW256HEHP-000H1               1         123,83  GB / 256,06  GB    512   B +  0 B   CXB73H1Q
/dev/nvme0n1          /dev/ng0n1            23262P808102         WD PC SN740 SDDPNQD-512G-1006            1         512,11  GB / 512,11  GB      4 KiB +  0 B   HPS3

If the two disks have a different block size / format, copying the partition table ends like this:

sgdisk /dev/nvme1n1 -R /dev/nvme0n1
Caution! Secondary header was placed beyond the disk's limits! Moving the
header, but other problems may occur!

Warning! Secondary partition table overlaps the last partition by
375091290 blocks!
You will need to delete this partition or resize it in another utility.

Problem: partition 3 is too big for the disk.
Aborting write operation!
Aborting write of new partition table.

At this point one could reformat the new NVMe from 4K down to 512-byte sectors, and then everything would work the usual way.
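
If you wanted to take that route, nvme-cli could do it. A minimal sketch, assuming the drive exposes a 512-byte LBA format at all; the index behind --lbaf varies per model (check the id-ns listing first), and the format erases the entire drive:

nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
nvme format /dev/nvme0n1 --lbaf=0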

But instead we will simply create the partition table manually.

lsblk -b |grep nvme
nvme1n1     259:0    0  256060514304  0 disk 
├─nvme1n1p1 259:1    0       1031168  0 part 
├─nvme1n1p2 259:2    0    1073741824  0 part 
└─nvme1n1p3 259:3    0  254985707008  0 part 
nvme0n1     259:4    0  512110190592  0 disk 

fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 238,47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SAMSUNG MZVLW256HEHP-000H1              
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 19B7D4F0-6D1A-40BD-BA09-F3F05EF21647

Device           Start       End   Sectors   Size Type
/dev/nvme1n1p1      34      2047      2014  1007K BIOS boot
/dev/nvme1n1p2    2048   2099199   2097152     1G EFI System
/dev/nvme1n1p3 2099200 500118158 498018959 237,5G Solaris /usr & Apple ZFS

So we create these partitions on the new NVMe by hand. Note that fdisk now counts in 4096-byte sectors, so the sector numbers will differ from the old disk while the partition sizes stay the same.

fdisk /dev/nvme0n1

g to create a new empty GPT partition table

n to add a new partition

Partition 1: first sector at the default, then +1007K as the size.

Command (m for help): g
Created a new GPT disklabel (GUID: AAD85292-69F1-E449-93B8-0D047D913180).

Command (m for help): n
Partition number (1-128, default 1): 1
First sector (256-125026896, default 256): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (256-125026896, default 125026815): +1007K

Created a new partition 1 of type 'Linux filesystem' and of size 1 MiB.

p to print the partition table:

Command (m for help): p
Disk /dev/nvme0n1: 476,94 GiB, 512110190592 bytes, 125026902 sectors
Disk model: WD PC SN740 SDDPNQD-512G-1006           
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: AAD85292-69F1-E449-93B8-0D047D913180

Device         Start   End Sectors Size Type
/dev/nvme0n1p1   256   511     256   1M Linux filesystem

t to change the partition type:

Command (m for help): t
Selected partition 1
Partition type or alias (type L to list all): 4 
Changed type of partition 'Linux filesystem' to 'BIOS boot'.

And now the same again for the EFI System partition:

Command (m for help): n
Partition number (2-128, default 2):
First sector (512-125026896, default 512):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (512-125026896, default 125026815): +1G

Created a new partition 2 of type 'Linux filesystem' and of size 1 GiB.

Command (m for help): t
Partition number (1,2, default 2): 2
Partition type or alias (type L to list all): 1

Changed type of partition 'Linux filesystem' to 'EFI System'.

Command (m for help): p
Disk /dev/nvme0n1: 476,94 GiB, 512110190592 bytes, 125026902 sectors
Disk model: WD PC SN740 SDDPNQD-512G-1006
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: AAD85292-69F1-E449-93B8-0D047D913180

Device         Start    End Sectors Size Type
/dev/nvme0n1p1   256    511     256   1M BIOS boot
/dev/nvme0n1p2   512 262655  262144   1G EFI System

And then once more for the ZFS partition; here we take the rest of the disk:

Command (m for help): n
Partition number (3-128, default 3): 
First sector (262656-125026896, default 262656): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (262656-125026896, default 125026815): 

Created a new partition 3 of type 'Linux filesystem' and of size 475,9 GiB.

Command (m for help): t
Partition number (1-3, default 3): 
Partition type or alias (type L to list all): 157

Changed type of partition 'Linux filesystem' to 'Solaris /usr & Apple ZFS'.

Command (m for help): p
Disk /dev/nvme0n1: 476,94 GiB, 512110190592 bytes, 125026902 sectors
Disk model: WD PC SN740 SDDPNQD-512G-1006
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: AAD85292-69F1-E449-93B8-0D047D913180

Device          Start       End   Sectors   Size Type
/dev/nvme0n1p1    256       511       256     1M BIOS boot
/dev/nvme0n1p2    512    262655    262144     1G EFI System
/dev/nvme0n1p3 262656 125026815 124764160 475,9G Solaris /usr & Apple ZFS

Finally, a w to write everything to disk:

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Afterwards it should look like this:

lsblk |grep nvme
nvme1n1     259:0    0 238,5G  0 disk
├─nvme1n1p1 259:1    0  1007K  0 part
├─nvme1n1p2 259:2    0     1G  0 part
└─nvme1n1p3 259:3    0 237,5G  0 part
nvme0n1     259:4    0 476,9G  0 disk
├─nvme0n1p1 259:5    0     1M  0 part
├─nvme0n1p2 259:6    0     1G  0 part
└─nvme0n1p3 259:7    0 475,9G  0 part
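
As an aside: instead of the interactive fdisk session, the same layout could also be created non-interactively with sgdisk. A minimal sketch (the GPT type codes EF02, EF00 and BF01 correspond to BIOS boot, EFI System and Solaris /usr & Apple ZFS; sizes rounded to whole MiB):

sgdisk -Z /dev/nvme0n1
sgdisk -n1:0:+1M -t1:EF02 /dev/nvme0n1
sgdisk -n2:0:+1G -t2:EF00 /dev/nvme0n1
sgdisk -n3:0:0   -t3:BF01 /dev/nvme0n1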

Next we let ZFS rebuild the mirror. Running

zpool status -v

shows us the current pool status:

  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:01:26 with 0 errors on Mon Sep  9 00:45:54 2024
config:

	NAME                                 STATE     READ WRITE CKSUM
	rpool                                DEGRADED     0     0     0
	  mirror-0                           DEGRADED     0     0     0
	    4441236233527056282              UNAVAIL      0     0     0  was /dev/disk/by-id/nvme-eui.000000000000001000080d05000beead-part3
	    nvme-eui.002538b781b54256-part3  ONLINE       0     0     0

errors: No known data errors

Here we now replace the failed disk with the new one:

zpool replace -f rpool /dev/disk/by-id/ID-OF-OLD-DISK /dev/disk/by-id/ID-OF-NEW-DISK
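
The stable device IDs can be looked up beforehand, for example with:

ls -l /dev/disk/by-id/ | grep nvme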

In my example it is:

zpool replace -f rpool  /dev/disk/by-id/nvme-eui.000000000000001000080d05000beead-part3 /dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part3

With zpool status you can then watch the resilver process:

zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Sep 15 09:24:38 2024
        23.6G / 23.6G scanned, 5.11G / 23.6G issued at 374M/s
        7.11G resilvered, 21.64% done, 00:00:50 to go
config:

        NAME                                                         STATE     READ WRITE CKSUM
        rpool                                                        DEGRADED     0     0     0
          mirror-0                                                   DEGRADED     0     0     0
            replacing-0                                              DEGRADED     0     0     0
              4441236233527056282                                    UNAVAIL      0     0     0  was /dev/disk/by-id/nvme-eui.000000000000001000080d05000beead-part3
              nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part3  ONLINE       0     0     0  (resilvering)
            nvme-eui.002538b781b54256-part3                          ONLINE       0     0     0

errors: No known data errors

This takes a few seconds or minutes, depending on how much data is on the pool.
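
If you want to follow along, you can simply re-run the status periodically, for example with:

watch -n 5 zpool status rpool

Once the resilver has finished, the pool reports ONLINE again: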

  pool: rpool
 state: ONLINE
  scan: resilvered 35.4G in 00:01:13 with 0 errors on Sun Sep 15 09:25:51 2024
config:

        NAME                                                       STATE     READ WRITE CKSUM
        rpool                                                      ONLINE       0     0     0
          mirror-0                                                 ONLINE       0     0     0
            nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part3  ONLINE       0     0     0
            nvme-eui.002538b781b54256-part3                        ONLINE       0     0     0

errors: No known data errors

Now the ZFS is stable again, but we are still missing the boot sector and the boot volume.

Formatting partition 2:

proxmox-boot-tool format /dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2
UUID="" SIZE="1073741824" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Formatting '/dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.
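
A short note before the init step: this host boots via UEFI. On a host that boots in legacy/BIOS mode, newer versions of proxmox-boot-tool take, as far as I know, an extra grub argument for the init, roughly:

proxmox-boot-tool init /dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2 grub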

Initializing partition 2:

proxmox-boot-tool init /dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="CB47-56A2" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Mounting '/dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2' on '/var/tmp/espmounts/CB47-56A2'.
Installing systemd-boot..
Created "/var/tmp/espmounts/CB47-56A2/EFI/systemd".
Created "/var/tmp/espmounts/CB47-56A2/EFI/BOOT".
Created "/var/tmp/espmounts/CB47-56A2/loader".
Created "/var/tmp/espmounts/CB47-56A2/loader/entries".
Created "/var/tmp/espmounts/CB47-56A2/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/CB47-56A2/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/CB47-56A2/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/CB47-56A2/loader/random-seed successfully written (32 bytes).
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2'.
Adding '/dev/disk/by-id/nvme-WD_PC_SN740_SDDPNQD-512G-1006_23262P808102-part2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/CB47-56A2
	Copying kernel and creating boot-entry for 6.5.13-6-pve
	Copying kernel and creating boot-entry for 6.8.12-1-pve
	Copying kernel and creating boot-entry for 6.8.8-3-pve
WARN: /dev/disk/by-uuid/E532-4422 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/E533-9A1E
	Copying kernel and creating boot-entry for 6.5.13-6-pve
	Copying kernel and creating boot-entry for 6.8.12-1-pve
	Copying kernel and creating boot-entry for 6.8.8-3-pve
	Disabling upstream hook /etc/initramfs/post-update.d/systemd-boot
	Disabling upstream hook /etc/kernel/postinst.d/zz-systemd-boot
	Disabling upstream hook /etc/kernel/postrm.d/zz-systemd-boot

Displaying the boot volumes:

proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
CB47-56A2 is configured with: uefi (versions: 6.5.13-6-pve, 6.8.12-1-pve, 6.8.8-3-pve)
WARN: /dev/disk/by-uuid/E532-4422 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
E533-9A1E is configured with: uefi (versions: 6.5.13-6-pve, 6.8.12-1-pve, 6.8.8-3-pve)

Here we now see both working boot volumes as well as the missing one.

Now we refresh the kernels:

proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/CB47-56A2
	Copying kernel and creating boot-entry for 6.5.13-6-pve
	Copying kernel and creating boot-entry for 6.8.12-1-pve
	Copying kernel and creating boot-entry for 6.8.8-3-pve
WARN: /dev/disk/by-uuid/E532-4422 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/E533-9A1E
	Copying kernel and creating boot-entry for 6.5.13-6-pve
	Copying kernel and creating boot-entry for 6.8.12-1-pve
	Copying kernel and creating boot-entry for 6.8.8-3-pve

And then we clean up:

proxmox-boot-tool clean
Checking whether ESP 'CB47-56A2' exists.. Found!
Checking whether ESP 'E532-4422' exists.. Not found!
Checking whether ESP 'E533-9A1E' exists.. Found!
Sorting and removing duplicate ESPs..

And last but not least, it should then look like this:

proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
CB47-56A2 is configured with: uefi (versions: 6.5.13-6-pve, 6.8.12-1-pve, 6.8.8-3-pve)
E533-9A1E is configured with: uefi (versions: 6.5.13-6-pve, 6.8.12-1-pve, 6.8.8-3-pve)
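
As an optional final check you can start a scrub, which makes ZFS read through the whole mirror once:

zpool scrub rpool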
