# PG_NOT_DEEP_SCRUBBED Warnings

## TLDR

Set the interval before warning to 14 days:

```sh
ceph config set global osd_deep_scrub_interval 1209600
```

## Problem

My Ceph cluster runs on cheap hardware and large OSDs take a while to deep-scrub, so I'm seeing Ceph health warnings:

```none
[WRN] PG_NOT_DEEP_SCRUBBED: 43 pgs not deep-scrubbed in time
    pg 11.71 not deep-scrubbed since 2022-11-20T23:06:43.607051+0100
    pg 7.78 not deep-scrubbed since 2022-11-21T18:39:17.628021+0100
```

## Default configuration

The default interval before warning is seven days:

```none
root@pve:~# ceph config show-with-defaults mgr.pve | grep osd_deep_scrub_interval
osd_deep_scrub_interval    604800.000000    default
```

604800 seconds = 7 days.

## Increase window to 14 days

I didn't want to spend more hours per day scrubbing, so for me the solution was to increase the window to 14 days (1209600 seconds):

```sh
ceph config set global osd_deep_scrub_interval 1209600
```

## Other notes

You can also adjust which hours of the day and which days of the week scrubbing happens (see the example after this list):

* [osd_scrub_begin_hour](https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_scrub_begin_hour)
* [osd_scrub_end_hour](https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_scrub_end_hour)
* [osd_scrub_begin_week_day](https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_scrub_begin_week_day)
* [osd_scrub_end_week_day](https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_scrub_end_week_day)
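
As a sketch of how you might use these (the hour values below are placeholders, not something I run myself), you could restrict scrubbing to an overnight window and then double-check the deep-scrub interval from the TLDR:

```sh
# Example values only: allow scrubbing from 23:00 to 06:00 local time on the
# OSD hosts. The hour options take 0-23, and a begin hour later than the end
# hour means the window wraps past midnight.
ceph config set global osd_scrub_begin_hour 23
ceph config set global osd_scrub_end_hour 6

# Confirm the deep-scrub warning interval set earlier took effect.
ceph config get global osd_deep_scrub_interval
```

Keep in mind these settings only gate when new scrubs may start, so narrowing the window too far can make the not-deep-scrubbed-in-time warnings worse rather than better.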