Steps to Manually Add a Ceph OSD
This article walks through the steps for manually adding an OSD to a Ceph cluster, with the actual commands and their output for each stage.
1. Ceph version
Manually adding an OSD is essentially performing by hand the same steps that ceph-deploy automates.
# ceph version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
2. Disk partitioning
With the bluestore backend, the small "ceph data" partition (vdc1) plays the role the journal partition used to play, while the "ceph block" partition (vdc2) is where the actual data is stored.
2.1 Creating the ceph data partition
# uuidgen
ff3db0d3-fd32-4b2d-8c35-1fb074e00cea
# sgdisk --new=1:0:+100M --change-name=1:"ceph data" --partition-guid=1:ff3db0d3-fd32-4b2d-8c35-1fb074e00cea --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc
The operation has completed successfully.
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# /usr/bin/udevadm settle --timeout=600
2.2 Creating the ceph block partition
# uuidgen
a44651fb-8904-4a86-adf6-541fefdf229e
# sgdisk --largest-new=2 --change-name=2:"ceph block" --partition-guid=2:a44651fb-8904-4a86-adf6-541fefdf229e --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc
The operation has completed successfully.
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# /usr/bin/udevadm settle --timeout=600
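The two sgdisk invocations above can be folded into a small helper. The sketch below (the function name `make_osd_partition_cmds` is made up for illustration) only prints the commands rather than running them, since executing them would repartition and destroy data on the target disk; the GUID type codes are the same ones used above.

```shell
# Dry-run sketch: print the sgdisk commands from steps 2.1 and 2.2 for an
# arbitrary device. Printing instead of executing, because the real
# commands rewrite the partition table of $dev.
make_osd_partition_cmds() {
    dev=$1
    data_uuid=$(uuidgen)    # partition GUID for "ceph data"
    block_uuid=$(uuidgen)   # partition GUID for "ceph block"
    echo "sgdisk --new=1:0:+100M --change-name=1:'ceph data'" \
         "--partition-guid=1:$data_uuid" \
         "--typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- $dev"
    echo "sgdisk --largest-new=2 --change-name=2:'ceph block'" \
         "--partition-guid=2:$block_uuid" \
         "--typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- $dev"
}
```

Generating the partition GUIDs with uuidgen up front, as the manual steps do, is what lets the later metadata files and the block symlink refer to the partitions by a stable identifier.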
2.3 Checking the partitions
# ls -lrt /dev/disk/by-partuuid/ | grep vdc
a44651fb-8904-4a86-adf6-541fefdf229e -> ../../vdc2
lrwxrwxrwx 1 root root 10 Jul 12 16:37 ff3db0d3-fd32-4b2d-8c35-1fb074e00cea -> ../../vdc1
3. Formatting /dev/vdc1 as xfs
# sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc
The operation has completed successfully.
# udevadm settle --timeout=600
# flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# udevadm settle --timeout=600
# mkfs -t xfs -f -i size=2048 -- /dev/vdc1
meta-data=/dev/vdc1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
4. Mounting a temporary directory (vdc1)
# mkdir /var/lib/ceph/tmp/mnt.DlWdC2
# mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.DlWdC2
# fsid=$(ceph-osd --cluster=ceph --show-config-value=fsid)
# cat << EOF > /var/lib/ceph/tmp/mnt.DlWdC2/ceph_fsid
> $fsid
> EOF
# echo "ff3db0d3-fd32-4b2d-8c35-1fb074e00cea" >> /var/lib/ceph/tmp/mnt.DlWdC2/fsid
# restorecon -R /var/lib/ceph/tmp/mnt.DlWdC2/magic
# cat << EOF > /var/lib/ceph/tmp/mnt.DlWdC2/block_uuid
> a44651fb-8904-4a86-adf6-541fefdf229e
> EOF
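The metadata files written above can be sketched as one function. Assumptions: the mount point and the two partition UUIDs from step 2 are passed in, and `populate_osd_dir` is a hypothetical helper name, not a ceph tool.

```shell
# Sketch: write the ceph_fsid, fsid and block_uuid metadata files that
# step 4 creates by hand. Assumes $mnt is the mounted vdc1 partition.
populate_osd_dir() {
    mnt=$1 cluster_fsid=$2 data_uuid=$3 block_uuid=$4
    echo "$cluster_fsid" > "$mnt/ceph_fsid"   # fsid of the whole cluster
    echo "$data_uuid"    > "$mnt/fsid"        # partition UUID of /dev/vdc1
    echo "$block_uuid"   > "$mnt/block_uuid"  # partition UUID of /dev/vdc2
}
```

In a real run the cluster fsid would come from `ceph-osd --cluster=ceph --show-config-value=fsid`, as shown above.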
5. Creating the block symlink (vdc2)
# ln -s /dev/disk/by-partuuid/a44651fb-8904-4a86-adf6-541fefdf229e /var/lib/ceph/tmp/mnt.DlWdC2/block
# echo bluestore >> /var/lib/ceph/tmp/mnt.DlWdC2/type
# /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.DlWdC2/
# /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.DlWdC2/
# umount /var/lib/ceph/tmp/mnt.DlWdC2/
# sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
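The symlink and type file can likewise be sketched as a helper (the name `link_osd_block` is made up; assumes `$mnt` is the temporary mount point from step 4):

```shell
# Sketch: create the block symlink and the type file from step 5.
link_osd_block() {
    mnt=$1 block_uuid=$2
    # block points at the data partition via its stable by-partuuid path
    ln -s "/dev/disk/by-partuuid/$block_uuid" "$mnt/block"
    echo bluestore > "$mnt/type"
    # Needs the ceph user to exist; silently skipped outside a ceph host.
    chown -R ceph:ceph "$mnt" 2>/dev/null || true
}
```

The final sgdisk call above then flips partition 1's type code to 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, the GUID that marks a ready "ceph data" partition, which is what udev keys off in the next step.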
6. Starting the OSD
Once the partition carries the right type code, udev takes care of mounting and activating the OSD automatically.
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/udevadm trigger --action=add --sysname-match vdc1
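The activation sequence can be wrapped in a function for reuse. A sketch (the name `activate_osd_partition` is made up): by default it only echoes the commands, since partprobe and udevadm need root and the real device; passing an empty third argument would run them for real.

```shell
# Sketch: the settle/partprobe/trigger sequence from step 6.
# $1 = whole device, $2 = data partition name, $3 = runner (default: echo,
# i.e. dry run; pass "" to actually execute).
activate_osd_partition() {
    dev=$1 part=$2 run=${3-echo}
    $run udevadm settle --timeout=600
    $run flock -s "$dev" partprobe "$dev"
    $run udevadm settle --timeout=600
    $run udevadm trigger --action=add --sysname-match "$part"
}
```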
7. OSD directory contents
# cd /var/lib/ceph/osd/ceph-4/
# ls -lrt
total 52
-rw-r--r-- 1 ceph ceph  37 Jul 12 16:43 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Jul 12 16:47 fsid
-rw-r--r-- 1 ceph ceph  21 Jul 12 16:47 magic
-rw-r--r-- 1 ceph ceph  37 Jul 12 16:49 block_uuid
lrwxrwxrwx 1 ceph ceph  58 Jul 12 16:50 block -> /dev/disk/by-partuuid/a44651fb-8904-4a86-adf6-541fefdf229e
-rw-r--r-- 1 ceph ceph  10 Jul 12 16:51 type
-rw------- 1 ceph ceph  56 Jul 12 16:57 keyring
-rw-r--r-- 1 ceph ceph   2 Jul 12 16:57 whoami
-rw-r--r-- 1 root root 384 Jul 12 16:57 activate.monmap
-rw-r--r-- 1 ceph ceph   8 Jul 12 16:57 kv_backend
-rw-r--r-- 1 ceph ceph   2 Jul 12 16:57 bluefs
-rw-r--r-- 1 ceph ceph   4 Jul 12 16:57 mkfs_done
-rw-r--r-- 1 ceph ceph   6 Jul 12 16:57 ready
-rw-r--r-- 1 ceph ceph   3 Jul 12 16:57 active
-rw-r--r-- 1 ceph ceph   0 Jul 12 16:57 systemd
ceph_fsid       -- fsid of the Ceph cluster
fsid            -- fsid of this OSD (also the partition UUID of /dev/vdc1)
magic           -- the magic string "ceph osd volume v026"
block_uuid      -- partition UUID of /dev/vdc2
block           -- symlink pointing at the /dev/vdc2 partition
type            -- storage backend type, here bluestore
keyring         -- the OSD's key
whoami          -- the OSD's id number
activate.monmap -- monmap used during activation
kv_backend      -- contains "rocksdb"
bluefs          -- contains "1"
mkfs_done       -- contains "yes"
ready           -- contains "ready"
active          -- contains "ok"
systemd         -- empty file
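A quick sanity check over this layout can be scripted. A sketch (the helper name `check_osd_dir` is made up; it only verifies the files the steps above create, not the ones ceph writes during activation):

```shell
# Sketch: verify an OSD data directory has the metadata files from
# steps 4-5 and that the backend type is bluestore.
check_osd_dir() {
    d=$1
    for f in ceph_fsid fsid block_uuid type whoami ready; do
        [ -e "$d/$f" ] || { echo "missing $f"; return 1; }
    done
    [ "$(cat "$d/type")" = bluestore ] || { echo "unexpected type"; return 1; }
    echo ok
}
```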
This completes the walkthrough of manually adding a Ceph OSD; trying the steps on a test machine is the best way to make them stick.