ceph health: HEALTH_ERR 1 scrub errors Possible data damage: 1 pg inconsistent
ceph -s
[root@emporerlinux ~]# ceph -s
  cluster:
    id:     48b33655-ebfb-4a58-a00f-8735d9eef2a3
    health: HEALTH_ERR
            1 scrub errors
            Possible data damage: 1 pg inconsistent

  services:
    mon: 3 daemons, quorum emporerlinux01,emporerlinux02,emporerlinux03 (age 7M)
    mgr: emporerlinux03(active, since 8M), standbys: emporerlinux02, emporerlinux01
    osd: 24 osds: 24 up (since 8M), 24 in (since 8M)

  data:
    pools:   5 pools, 129 pgs
    objects: 42.85k objects, 167 GiB
    usage:   497 GiB used, 20 TiB / 21 TiB avail
    pgs:     128 active+clean
             1   active+clean+inconsistent

  io:
    client:   938 B/s rd, 618 KiB/s wr, 0 op/s rd, 126 op/s wr
ceph health detail
[root@emporerlinux ~]# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
[ERR] OSD_SCRUB_ERRORS: 1 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent
    pg 4.d is active+clean+inconsistent, acting [22,23,21]
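Before repairing, it helps to see what the scrub actually found. The pg-repair documentation linked at the end of this post describes rados list-inconsistent-pg and rados list-inconsistent-obj, which dump the inconsistent PGs of a pool and the per-object scrub errors of a PG. A minimal sketch (the PG needs reasonably fresh deep-scrub data, otherwise the second command reports that no scrub information is available):

rados list-inconsistent-pg <pool-name>
rados list-inconsistent-obj 4.d --format=json-pretty

The JSON output names the damaged object and which replica failed its checksum, which tells you whether a single disk or something more systemic is at fault.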
ceph osd tree
[root@emporerlinux ~]# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
-1         20.95798  root default
-3          6.98599      host emporerlinux01
19    hdd   2.18320          osd.19                up   1.00000  1.00000
21    hdd   2.18320          osd.21                up   1.00000  1.00000
 0    ssd   0.43660          osd.0                 up   1.00000  1.00000
 5    ssd   0.43660          osd.5                 up   1.00000  1.00000
 7    ssd   0.43660          osd.7                 up   1.00000  1.00000
11    ssd   0.43660          osd.11                up   1.00000  1.00000
14    ssd   0.43660          osd.14                up   1.00000  1.00000
17    ssd   0.43660          osd.17                up   1.00000  1.00000
-7          6.98599      host emporerlinux02
18    hdd   2.18320          osd.18                up   1.00000  1.00000
22    hdd   2.18320          osd.22                up   1.00000  1.00000
 1    ssd   0.43660          osd.1                 up   1.00000  1.00000
 3    ssd   0.43660          osd.3                 up   1.00000  1.00000
 6    ssd   0.43660          osd.6                 up   1.00000  1.00000
 9    ssd   0.43660          osd.9                 up   1.00000  1.00000
12    ssd   0.43660          osd.12                up   1.00000  1.00000
15    ssd   0.43660          osd.15                up   1.00000  1.00000
-5          6.98599      host emporerlinux03
20    hdd   2.18320          osd.20                up   1.00000  1.00000
23    hdd   2.18320          osd.23                up   1.00000  1.00000
 2    ssd   0.43660          osd.2                 up   1.00000  1.00000
 4    ssd   0.43660          osd.4                 up   1.00000  1.00000
 8    ssd   0.43660          osd.8                 up   1.00000  1.00000
10    ssd   0.43660          osd.10                up   1.00000  1.00000
13    ssd   0.43660          osd.13                up   1.00000  1.00000
16    ssd   0.43660          osd.16                up   1.00000  1.00000
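The acting set [22,23,21] reported by ceph health detail is one hdd OSD per host, so pool 4 lives on the spinning disks. The primary (osd.22) logs the details of the scrub error; assuming a default non-containerized deployment with logs under /var/log/ceph/ (adjust for your setup, e.g. journalctl for containerized daemons), something like this shows the offending object:

grep ERR /var/log/ceph/ceph-osd.22.log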
ceph pg repair
[root@emporerlinux ~]# ceph pg repair 4.d
instructing pg 4.d on osd.22 to repair
[root@emporerlinux ~]# ceph osd repair 22
instructed osd(s) 22 to repair
[root@emporerlinux ~]# ceph osd repair 23
instructed osd(s) 23 to repair
[root@emporerlinux ~]# ceph osd repair 21
instructed osd(s) 21 to repair
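ceph pg repair instructs the primary of the PG (osd.22 here) to deep-scrub it and repair any replicas that disagree; the additional ceph osd repair calls against each OSD in the acting set are belt-and-braces rather than strictly necessary. The repair runs asynchronously, so watch for the PG to leave the inconsistent state, for example:

ceph pg ls inconsistent
watch -n 10 ceph -s

Recent releases can also fix simple scrub errors on their own if osd_scrub_auto_repair is enabled (ceph config set osd osd_scrub_auto_repair true), at the cost of masking how often such errors occur.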
ceph -s
[root@emporerlinux ~]# ceph -s
  cluster:
    id:     48b33655-ebfb-4a58-a00f-8735d9eef2a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum emporerlinux01,emporerlinux02,emporerlinux03 (age 7M)
    mgr: emporerlinux03(active, since 8M), standbys: emporerlinux02, emporerlinux01
    osd: 24 osds: 24 up (since 8M), 24 in (since 8M)

  data:
    pools:   5 pools, 129 pgs
    objects: 42.85k objects, 167 GiB
    usage:   496 GiB used, 20 TiB / 21 TiB avail
    pgs:     128 active+clean
             1   active+clean+scrubbing+deep+repair

  io:
    client:   937 B/s rd, 430 KiB/s wr, 1 op/s rd, 74 op/s wr
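The single active+clean+scrubbing+deep+repair PG is the repair still in flight; once it completes, all 129 PGs return to active+clean. To verify the object was actually rewritten rather than merely re-flagged, you can trigger another deep scrub of the PG and re-check health once it finishes:

ceph pg deep-scrub 4.d
ceph health detail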
https://docs.ceph.com/en/latest/rados/operations/pg-repair/#repairing-pg-inconsistencies