Monitoring cluster functions and state

Check the overall cluster status:

ceph -s

  cluster:
    id:     274350b4-7c7c-11f0-98aa-000c296e34b9
    health: HEALTH_WARN
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 2 daemons, quorum ceph01,cephosd01 (age 34m)
    mgr: ceph01.ubjkjj(active, since 37m), standbys: cephosd01.tfuidr
    osd: 1 osds: 1 up (since 72s), 1 in (since 33m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   27 MiB used, 6.0 GiB / 6.0 GiB avail
    pgs:
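For scripting, the health state can be pulled out of `ceph -s` (or out of its machine-readable form via `ceph -s -f json`). A minimal sketch; the pipeline below runs on a captured sample line so it works without a live cluster:

```shell
# On a live cluster you would run:
#   ceph -s | awk '/health:/ {print $2}'
# Here the same awk filter runs on a captured sample of the output above.
printf '    health: HEALTH_WARN\n' | awk '/health:/ {print $2}'
```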

Show details for any active health warnings:

ceph health detail

HEALTH_WARN OSD count 1 < osd_pool_default_size 3
[WRN] TOO_FEW_OSDS: OSD count 1 < osd_pool_default_size 3
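TOO_FEW_OSDS fires whenever the OSD count drops below osd_pool_default_size. A sketch of the check and of two common ways to clear it on a cephadm-managed cluster like this one; only the comparison itself runs offline here, the remediation commands are shown as comments:

```shell
# To clear the warning on a live cluster, either add OSDs:
#   ceph orch apply osd --all-available-devices
# or, for a single-node lab only, lower the default replica count:
#   ceph config set global osd_pool_default_size 1
# The health check itself is this comparison:
osd_count=1
pool_default_size=3
if [ "$osd_count" -lt "$pool_default_size" ]; then
  echo "TOO_FEW_OSDS: OSD count $osd_count < osd_pool_default_size $pool_default_size"
fi
```

Running a pool with size 1 leaves data unreplicated, so the second option is only appropriate for throwaway test clusters.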

Review recent background operations tracked by the mgr progress module:

ceph progress

[Complete]: Updating crash deployment (+1 -> 1) (38m)
    [============================]
[Complete]: Updating crash deployment (+1 -> 2) (36m)
    [============================]
[Complete]: Updating mgr deployment (+1 -> 2) (36m)
    [============================]
[Complete]: Updating mon deployment (+1 -> 2) (36m)
    [============================]
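Progress events can also be post-processed, e.g. to count how many have finished. A sketch; on a live cluster you would pipe `ceph progress` (or, in recent releases, `ceph progress json`) instead of the captured sample used below:

```shell
# On a live cluster:
#   ceph progress | grep -c '\[Complete\]'
# Offline demonstration against two captured lines of the output above:
printf '[Complete]: Updating crash deployment (+1 -> 1) (38m)\n[Complete]: Updating mgr deployment (+1 -> 2) (36m)\n' \
  | grep -c '\[Complete\]'
```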