Proxmox Ceph Rebuild tuning
To make Ceph run a rebuild at maximum speed, you can issue the following command on one of the Ceph servers:
ceph tell 'osd.*' injectargs --osd_max_backfills=40 --osd_recovery_max_active=40 --osd_mclock_profile=high_recovery_ops --osd_scrub_auto_repair=true --osd_mclock_override_recovery_settings=true
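As a quick sanity check you can ask a single OSD which values it is actually running with (osd.0 is only used as an example here) and then follow the recovery progress:
ceph tell osd.0 config get osd_max_backfills
ceph tell osd.0 config get osd_mclock_profile
# follow the recovery/backfill progress
watch -n 10 ceph -s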
The individual options are described in the Ceph documentation:
- osd_max_backfills
- The maximum number of backfills allowed to or from a single OSD. Note that this is applied separately for read and write operations.
- osd_recovery_max_active
- The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but they place an increased load on the cluster.
- osd_mclock_profile
- This sets the type of mclock profile to use for providing QoS based on operations belonging to different classes (background recovery, scrub, snaptrim, client op, osd subop). Once a built-in profile is enabled, the lower level mclock resource control parameters [reservation, weight, limit] and some Ceph configuration parameters are set transparently. Note that the above does not apply for the custom profile.
- osd_scrub_auto_repair
- Setting this to true will enable automatic PG repair when errors are found by scrubs or deep-scrubs. However, if more than osd_scrub_auto_repair_num_errors errors are found, a repair is NOT performed.
- osd_mclock_override_recovery_settings
- Setting this option will enable the override of the recovery/backfill limits for the mClock scheduler as defined by the osd_recovery_max_active_hdd, osd_recovery_max_active_ssd and osd_max_backfills options.
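Values injected with injectargs only change the running OSD daemons and are lost when an OSD restarts. If the tuning should survive restarts, the same options can also be stored persistently in the monitor config database; a minimal sketch using the generic osd section:
ceph config set osd osd_max_backfills 40
ceph config set osd osd_recovery_max_active 40
ceph config set osd osd_mclock_profile high_recovery_ops
ceph config set osd osd_scrub_auto_repair true
ceph config set osd osd_mclock_override_recovery_settings true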
To restore the default values:
ceph tell 'osd.*' injectargs --osd_max_backfills=1 --osd_recovery_max_active=0 --osd_mclock_profile=balanced --osd_scrub_auto_repair=false --osd_mclock_override_recovery_settings=false
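If the persistent overrides from the sketch above were used, they should be removed as well so the OSDs fall back to their defaults:
ceph config rm osd osd_max_backfills
ceph config rm osd osd_recovery_max_active
ceph config rm osd osd_mclock_profile
ceph config rm osd osd_scrub_auto_repair
ceph config rm osd osd_mclock_override_recovery_settings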