First you need to have an image, so create one:
rbd create --size 1G volumes/myimage
The size takes a unit suffix: M, G, or T.
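As a sketch of the unit suffixes (the image name bigimage is made up for illustration), the same --size flag accepts megabyte, gigabyte, or terabyte values, and an existing image can later be grown with rbd resize:

```shell
# 2048M is equivalent to 2G; "bigimage" is a hypothetical image name
rbd create --size 2048M volumes/bigimage

# grow the image to 5 GiB; shrinking additionally requires --allow-shrink
rbd resize --size 5G volumes/bigimage
```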
rbd ls volumes | grep myimage
myimage
You can see the disk usage of an RBD image:
rbd du volumes/myimage
NAME PROVISIONED USED
myimage 1 GiB 0 B
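USED is 0 B because RBD images are thin-provisioned: space is allocated only as data is written. For more detail about the image itself (object size, features, and so on), rbd info is useful:

```shell
# show image metadata: size, object size, features, creation time, etc.
rbd info volumes/myimage
```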
Now you need a keyring on the client: either the admin keyring, or another client key with valid caps for the pool.
sudo cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQB368ZoDbNqHBAAxiCPkJybEpDCi5tiOSUUyw==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
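Instead of handing clients the full admin keyring, you can create a key restricted to RBD operations on the volumes pool. A sketch, with client.rbduser as a hypothetical user name:

```shell
# create (or fetch) a key limited to RBD operations on the "volumes" pool
ceph auth get-or-create client.rbduser \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes' \
    -o /etc/ceph/ceph.client.rbduser.keyring
```

Clients would then pass --id rbduser to rbd (or id=rbduser in the rbdmap file) instead of using admin.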
You also need a ceph.conf on the client:
sudo cat /etc/ceph/ceph.conf
# minimal ceph.conf for 52db4c7a-9186-11f0-8599-000c294a37f4
[global]
fsid = 52db4c7a-9186-11f0-8599-000c294a37f4
mon_host = [v2:192.168.11.21:3300/0,v1:192.168.11.21:6789/0] [v2:192.168.11.22:3300/0,v1:192.168.11.22:6789/0] [v2:192.168.11.23:3300/0,v1:192.168.11.23:6789/0] [v2:192.168.11.24:3300/0,v1:192.168.11.24:6789/0]
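With the keyring and ceph.conf in place, a quick sanity check that the client can actually reach the cluster (assuming the admin key from above):

```shell
# prints cluster health plus mon/mgr/osd status; fails if auth or mon_host is wrong
ceph -s
```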
Now you can use the rbd CLI to attach the image as a block device, as we expect from the Ceph RBD service:
rbd map volumes/myimage
/dev/rbd0
Now myimage is accessible as a block device:
lsblk /dev/rbd0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
rbd0 251:0 0 1G 0 disk
You can format a filesystem on it, like ext4 or XFS:
mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=32768 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
If you map another RBD image, it gets the next higher number, like rbd1.
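For example (otherimage is a hypothetical second image in the same pool):

```shell
rbd create --size 1G volumes/otherimage
rbd map volumes/otherimage     # typically returns the next free device, e.g. /dev/rbd1
rbd showmapped                 # lists all current image-to-device mappings
```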
Mount this formatted block device on a path:
mount /dev/rbd0 /mnt/
mount | grep mnt
/dev/rbd0 on /mnt type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
df -h /mnt/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 960M 51M 910M 6% /mnt
You can use this path normally, and you may append an entry to fstab to make the mount permanent at boot:
cat << EOF >> /etc/fstab
/dev/rbd0 /mnt/ xfs defaults,noatime,_netdev 0 0
EOF
mount -a
The fstab record only makes the path mount permanent; the RBD image needs something more, because after a reboot the mapping itself is gone. To make the image mapping permanent, add a record to /etc/ceph/rbdmap. fstab plus the rbdmap file is the key point!
cat /etc/ceph/rbdmap
# RbdDevice Parameters
volumes/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
Don't forget to enable the rbdmap service:
systemctl enable rbdmap.service
If you want to remove the image mapping, you can easily undo it: first unmount the path, then unmap the device:
umount /mnt
rbd unmap /dev/rbd0
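If you no longer need the image at all (not just the mapping), it can be deleted; remember to also drop its fstab and rbdmap records first:

```shell
# permanently deletes the image and all its data from the pool
rbd rm volumes/myimage
```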
In the real production world, the RBD service is usually consumed by other cloud services such as Kubernetes, or OpenStack Glance and Cinder.