ceph-rbd
Mounting an RBD image:
Create a pool
[ceph@serverb ~]$ ceph osd pool create emporer 32 32
pool 'emporer' created
[ceph@serverb ~]$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 88 GiB 76 GiB 12 MiB 12 GiB 13.66
TOTAL 88 GiB 76 GiB 12 MiB 12 GiB 13.66
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 23 GiB
cephfs_data 2 32 0 B 0 0 B 0 23 GiB
cephfs_metadata 3 32 27 KiB 22 168 KiB 0 23 GiB
emporer 8 32 0 B 0 0 B 0 23 GiB
Enable the rbd application on the pool:
[ceph@serverb ~]$ rbd pool init emporer
or, equivalently:
[ceph@serverb ~]$ ceph osd pool application enable emporer rbd
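To confirm the application tag took effect, the pool's application metadata can be queried (output omitted here):
[ceph@serverb ~]$ ceph osd pool application get emporer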
Create an image
[ceph@serverb ~]$ rbd create emporer/test1 --size=4G
Inspect it:
[ceph@serverb ~]$ rbd info emporer/test1
rbd image 'test1':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 19beed9c5034d
block_name_prefix: rbd_data.19beed9c5034d
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Tue Apr 11 16:21:12 2023
access_timestamp: Tue Apr 11 16:21:12 2023
modify_timestamp: Tue Apr 11 16:21:12 2023
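The features line matters for the kernel client: older kernels cannot handle object-map, fast-diff or deep-flatten, and rbd map typically fails with an "image uses unsupported features" error. If that happens, a common workaround is to disable the offending features on the image (adjust the list to whatever the error reports):
[ceph@serverb ~]$ rbd feature disable emporer/test1 object-map fast-diff deep-flatten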
[ceph@serverb ~]$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 88 GiB 76 GiB 13 MiB 12 GiB 13.66
TOTAL 88 GiB 76 GiB 13 MiB 12 GiB 13.66
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 23 GiB
cephfs_data 2 32 0 B 0 0 B 0 23 GiB
cephfs_metadata 3 32 27 KiB 22 168 KiB 0 23 GiB
emporer 8 32 198 B 5 36 KiB 0 23 GiB
[ceph@serverb ~]$ rbd ls -p emporer
test1
Create a cephx keyring file for the RBD client
[ceph@servera ~]$ ceph auth get-or-create client.rbduser1 \
> mon 'allow r' \
> osd 'allow rwx pool=emporer' \
> | tee /etc/ceph/ceph.client.rbduser1.keyring
[client.rbduser1]
key = AQBwoTdksXK+HRAAWyvho2ketqxiLOQmvnjJzA==
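To double-check which capabilities the new user ended up with, it can be queried from the cluster (output omitted here):
[ceph@servera ~]$ ceph auth get client.rbduser1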
Send it to the client
[ceph@servera ~]$ sudo scp /etc/ceph/ceph.client.rbduser1.keyring root@ceph-client:/etc/ceph/
The authenticity of host 'ceph-client (192.168.5.116)' can't be established.
ECDSA key fingerprint is SHA256:XxvxbpbZEBSU1/b54W7lb9BYkXGKznThQMaIxlNaILE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-client' (ECDSA) to the list of known hosts.
root@ceph-client's password:
ceph.client.rbduser1.keyring 100% 66 45.0KB/s 00:00
[root@ceph-client ~]# ll /etc/ceph/ceph.client.rbduser1.keyring
-rw-r--r--. 1 root root 66 Apr 13 02:34 /etc/ceph/ceph.client.rbduser1.keyring
[root@ceph-client ~]#
On the client:
Prerequisite: set up the Ceph yum repository, then install the client packages.
[root@ceph-client ~]# yum -y install ceph-common
[root@ceph-client ~]# modprobe rbd
[root@ceph-client ceph]# lsmod |grep rbd
rbd 110592 1
libceph 372736 1 rbd
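modprobe only loads the module for the running kernel; to have it loaded automatically after a reboot, one option (using systemd's modules-load.d mechanism) is:
[root@ceph-client ~]# echo rbd > /etc/modules-load.d/rbd.conf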
Edit /etc/ceph/rbdmap
[root@ceph-client ~]# cat /etc/ceph/rbdmap
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
[root@ceph-client ~]# vim /etc/ceph/rbdmap
[root@ceph-client ~]# cat /etc/ceph/rbdmap
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
emporer/test1 id=rbduser1,keyring=/etc/ceph/ceph.client.rbduser1.keyring
Each added line has the form pool/image id=<cephx user>,keyring=<path to the keyring file>, as done above for emporer/test1.
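Before relying on the service, the credentials and image spec can be sanity-checked by mapping the image manually and unmapping it again (a sketch using the same user and keyring as above):
[root@ceph-client ~]# rbd map emporer/test1 --id rbduser1 --keyring /etc/ceph/ceph.client.rbduser1.keyring   # prints the device path, e.g. /dev/rbd0
[root@ceph-client ~]# rbd unmap /dev/rbd0                                                                    # unmap again so rbdmap can take over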
Start the rbdmap service and enable it at boot:
[root@ceph-client ceph]# systemctl restart rbdmap.service
[root@ceph-client ceph]# systemctl status rbdmap.service
● rbdmap.service - Map RBD devices
Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; disabled; vendor preset: disabled)
Active: active (exited) since Thu 2023-04-13 03:26:02 EDT; 6s ago
Process: 35130 ExecStart=/usr/bin/rbdmap map (code=exited, status=0/SUCCESS)
Main PID: 35130 (code=exited, status=0/SUCCESS)
Apr 13 03:26:01 ceph-client systemd[1]: Starting Map RBD devices...
Apr 13 03:26:02 ceph-client systemd[1]: Started Map RBD devices.
[root@ceph-client ~]# systemctl enable rbdmap.service
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.
Check the block devices:
[root@ceph-client ceph]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 8.8G 0 rom /run/media/root/RHEL-8-3-0-BaseOS-x86_64
rbd0 252:0 0 4G 0 disk
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
[root@ceph-client ceph]#
Format and mount
[root@ceph-client ~]# mkfs -t xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=131072 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=1048576, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
Check the current mounts
[root@ceph-client ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 908M 0 908M 0% /dev
tmpfs tmpfs 939M 0 939M 0% /dev/shm
tmpfs tmpfs 939M 1.7M 937M 1% /run
tmpfs tmpfs 939M 0 939M 0% /sys/fs/cgroup
/dev/mapper/rhel-root xfs 19G 5.0G 14G 28% /
/dev/nvme0n1p1 xfs 1.1G 251M 813M 24% /boot
tmpfs tmpfs 188M 1.3M 187M 1% /run/user/42
tmpfs tmpfs 188M 3.6M 185M 2% /run/user/0
/dev/sr0 iso9660 9.5G 9.5G 0 100% /run/media/root/RHEL-8-3-0-BaseOS-x86_64
Create a mount point and mount the device
[root@ceph-client ~]# mkdir /mnt/rbd0
[root@ceph-client ~]# mount /dev/rbd0 /mnt/rbd0/
[root@ceph-client ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 908M 0 908M 0% /dev
tmpfs tmpfs 939M 0 939M 0% /dev/shm
tmpfs tmpfs 939M 1.7M 937M 1% /run
tmpfs tmpfs 939M 0 939M 0% /sys/fs/cgroup
/dev/mapper/rhel-root xfs 19G 5.0G 14G 28% /
/dev/nvme0n1p1 xfs 1.1G 251M 813M 24% /boot
tmpfs tmpfs 188M 1.3M 187M 1% /run/user/42
tmpfs tmpfs 188M 3.6M 185M 2% /run/user/0
/dev/sr0 iso9660 9.5G 9.5G 0 100% /run/media/root/RHEL-8-3-0-BaseOS-x86_64
/dev/rbd0 xfs 4.3G 64M 4.3G 2% /mnt/rbd0
[root@ceph-client ~]#
Add an entry to /etc/fstab
[root@ceph-client ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Jul 17 23:02:43 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=3fdba1d5-baa1-42b7-a102-dc405869f91f /boot xfs defaults 0 0
/dev/mapper/rhel-swap none swap defaults 0 0
/dev/rbd0 /mnt/rbd0 xfs defaults,_netdev 0 0
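Instead of waiting for a reboot, the new fstab entry can be checked right away by unmounting and letting mount re-read fstab (a quick sanity check):
[root@ceph-client ~]# umount /mnt/rbd0
[root@ceph-client ~]# mount -a
[root@ceph-client ~]# df -Th /mnt/rbd0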
Test:
[root@ceph-client ~]# cd /mnt/rbd0/
[root@ceph-client rbd0]# ls
[root@ceph-client rbd0]# touch 123
[root@ceph-client rbd0]# ls
123
Mapping status:
[root@ceph-client ~]# rbd showmapped
id pool namespace image snap device
0 emporer test1 - /dev/rbd0
Reboot to confirm that the image is mapped and mounted automatically.
Grow the image:
rbd resize emporer/test1 --size=5G
[root@ceph-client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 8.8G 0 rom
rbd0 252:0 0 5G 0 disk /mnt/rbd0
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
[root@ceph-client ~]#
[root@ceph-client ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 866M 0 866M 0% /dev
tmpfs tmpfs 896M 0 896M 0% /dev/shm
tmpfs tmpfs 896M 1.3M 894M 1% /run
tmpfs tmpfs 896M 0 896M 0% /sys/fs/cgroup
/dev/mapper/rhel-root xfs 17G 4.8G 13G 28% /
/dev/nvme0n1p1 xfs 1014M 240M 775M 24% /boot
/dev/rbd0 xfs 4.0G 62M 4.0G 2% /mnt/rbd0
tmpfs tmpfs 180M 1.2M 178M 1% /run/user/42
tmpfs tmpfs 180M 0 180M 0% /run/user/0
[root@ceph-client ~]#
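The block device is now 5G, but the XFS filesystem on it is still 4G. To use the extra space, the filesystem also has to be grown; for XFS this can be done online against the mount point (a quick sketch for the mount used above):
[root@ceph-client ~]# xfs_growfs /mnt/rbd0
[root@ceph-client ~]# df -Th /mnt/rbd0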
Summary:
1. Prerequisite: install the ceph-common package on the client.
2. Configure the rbdmap service, start it, and enable it at boot.
3. Create a dedicated keyring with only the capabilities the client needs; do not hand out the admin keyring.
4. The kernel client depends on the rbd kernel module, so load it with modprobe rbd.
5. RBD supports the usual enterprise features: thin provisioning, snapshots, clones, and online resizing.
6. After growing the image to 5G the block device is larger, but the filesystem is still 4G; the filesystem has to be grown separately (see the xfs_growfs sketch above), or the device can be managed with LVM.
7. RBD images are thin-provisioned: space is consumed only as data is actually written.
8. To unmap, roll back step by step: remove the fstab entry, stop I/O on the mount point, umount, rbd unmap, then stop the rbdmap service (see the teardown sketch after this list).
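A teardown sketch for point 8, using the image and mount point from this walkthrough:
[root@ceph-client ~]# vim /etc/fstab                          # remove the /dev/rbd0 line
[root@ceph-client ~]# vim /etc/ceph/rbdmap                    # remove the emporer/test1 line
[root@ceph-client ~]# umount /mnt/rbd0
[root@ceph-client ~]# rbd unmap /dev/rbd0
[root@ceph-client ~]# systemctl disable --now rbdmap.service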
Command reference:
rbd create pool-name/image-name --size <size>                create an RBD image
rbd ls pool-name                                             list the images in a pool
rbd info pool-name/image-name                                show detailed image information
rbd status pool-name/image-name                              show image status (current watchers)
rbd du pool-name/image-name                                  show image space usage (provisioned vs. used)
rbd resize pool-name/image-name --size <size>                resize an RBD image
rbd rm pool-name/image-name                                  delete an image
rbd cp pool-name/src-image-name pool-name/dest-image-name    copy an RBD image
rbd mv pool-name/src-image-name pool-name/new-image-name     rename an RBD image
rbd trash mv pool-name/image-name                            move an image to the trash
rbd trash rm pool-name/image-id                              permanently delete an image from the trash
rbd trash restore pool-name/image-id                         restore an image from the trash
rbd trash ls pool-name                                       list the images in the trash
rbd --help
[ceph@servera ~]$ rbd trash rm --pool emporer --image-id 1e6a2fce6df93
Removing image: 100% complete…done.
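The --image-id used above comes from the trash listing; a typical trash round-trip looks like this (a sketch, names matching this walkthrough):
[ceph@servera ~]$ rbd trash mv emporer/test1
[ceph@servera ~]$ rbd trash ls emporer                          # prints the image id and original name
[ceph@servera ~]$ rbd trash restore emporer/<image-id>          # or delete it for good: rbd trash rm --pool emporer --image-id <image-id>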