Ceph usage tips
Here is a collection of Ceph usage tips. Many people are not yet familiar with these, so this article is shared for reference; hopefully you will get something useful out of it. Let's take a look.
1. Setting cephx keys
If cephx authentication is enabled in Ceph, you can grant different permissions to different users.
# Create a key for the dummy client
$ ceph auth get-or-create client.dummy mon 'allow r' osd 'allow rwx pool=dummy'
[client.dummy]
        key = AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
$ ceph auth list
installed auth entries:
...
client.dummy
        key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
        caps: [mon] allow r
        caps: [osd] allow rwx pool=dummy
...
# Reassign the dummy client's permissions
$ ceph auth caps client.dummy mon 'allow rwx' osd 'allow rwx pool=dummy'
updated caps for client.dummy
$ ceph auth list
installed auth entries:
client.dummy
        key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
        caps: [mon] allow rwx
        caps: [osd] allow rwx pool=dummy
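The listing above can also be checked programmatically, for example from a deployment script that verifies a client's caps. Below is a minimal Python sketch; the `parse_auth_entry` helper is hypothetical (not part of any Ceph tooling) and assumes the entry layout shown in the `ceph auth list` output above.

```python
import re

def parse_auth_entry(listing: str, entity: str):
    """Return (key, caps dict) for `entity` from `ceph auth list` output,
    or None if the entity is not present."""
    lines = listing.splitlines()
    for i, line in enumerate(lines):
        if line.strip() != entity:
            continue
        key, caps = None, {}
        for raw in lines[i + 1:]:
            l = raw.strip()
            if l.startswith("key:"):
                key = l.split(":", 1)[1].strip()
            elif l.startswith("caps:"):
                m = re.match(r"caps:\s*\[(\w+)\]\s*(.*)", l)
                if m:
                    caps[m.group(1)] = m.group(2).strip()
            elif l:
                break  # next entity starts here
        return key, caps
    return None
```

For instance, `parse_auth_entry(output, "client.dummy")` would return the key string together with `{"mon": "allow r", "osd": "allow rwx pool=dummy"}` for the listing shown above.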
2. Finding where an RBD image is mapped
rbd showmapped only lists RBD devices mapped on the local machine. With many machines, if you have forgotten where an image was mapped, you would otherwise have to check each machine in turn. The listwatchers command solves this.
For images with format 1:
$ rbd info boot
rbd image 'boot':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.89ee.2ae8944a
        format: 1
$ rados -p rbd listwatchers boot.rbd
watcher=192.168.251.102:0/2550823152 client.35321 cookie=1
For images with format 2, it is slightly different:
[root@osd2 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 8192 kB in 2 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.13436b8b4567
        format: 2
        features: layering
[root@osd2 ceph]# rados -p myrbd listwatchers rbd_header.13436b8b4567
watcher=192.168.108.3:0/2292307264 client.5130 cookie=1
Here the image id reported by rbd info (the suffix of block_name_prefix) must be appended after rbd_header.
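The naming rule above can be captured in a few lines. A minimal Python sketch with a hypothetical `watcher_object` helper: for format 1 the object to query is `<image>.rbd`; for format 2 it is `rbd_header.<id>`, where `<id>` is the suffix of `block_name_prefix` (`rbd_data.<id>`).

```python
def watcher_object(image_name: str, block_name_prefix: str, image_format: int) -> str:
    """Return the RADOS object whose watchers reveal where an image is mapped."""
    if image_format == 1:
        # Format 1 clients watch the <image>.rbd header object.
        return image_name + ".rbd"
    # Format 2: block_name_prefix looks like "rbd_data.13436b8b4567";
    # clients watch rbd_header.<id> with the same id suffix.
    image_id = block_name_prefix.split(".", 1)[1]
    return "rbd_header." + image_id
```

With the two examples above, `watcher_object("boot", "rb.0.89ee.2ae8944a", 1)` yields `boot.rbd`, and `watcher_object("rbd1", "rbd_data.13436b8b4567", 2)` yields `rbd_header.13436b8b4567`, the objects passed to `rados listwatchers` in the sessions shown.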
3. Deleting a huge RBD image
Some blog posts mention that deleting a huge rbd image directly with rbd rm is very time-consuming (a long night's work). I gave this a try on Ceph 0.87 to see whether that is still the case; the details follow:
# Create a 1 PB image
[root@osd2 ceph]# time rbd create myrbd/huge-image -s 1024000000
real    0m0.353s
user    0m0.016s
sys     0m0.009s
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 976 TB in 256000000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1489.6b8b4567
        format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 2% complete...^\Quit (core dumped)
real    10m24.406s
user    18m58.335s
sys     11m39.507s
The image created above is 1 PB, which is perhaps simply too large: a direct rbd rm was still very slow (only 2% complete after ten minutes), so the following method was used instead:
[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:44:42.916826 7fdb4fd5a7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
real    0m0.192s
user    0m0.012s
sys     0m0.013s
Let's try a 1 TB image:
[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 1000 GB in 256000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.149c.6b8b4567
        format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 100% complete...done.
real    0m29.418s
user    0m52.467s
sys     0m32.372s
So a huge image should still be deleted using the following method:
format 1:
[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 976 TB in 256000000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.14a5.6b8b4567
        format: 1
[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rados -p myrbd ls | grep '^rb.0.14a5.6b8b4567' | xargs -n 200 rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:54:12.718211 7ffae55747e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
real    0m0.191s
user    0m0.015s
sys     0m0.010s
format 2:
[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000 --image-format=2
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 976 TB in 256000000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.14986b8b4567
        format: 2
        features: layering
[root@osd2 ceph]# rados -p myrbd rm rbd_id.huge-image
[root@osd2 ceph]# rados -p myrbd rm rbd_header.14986b8b4567
[root@osd2 ceph]# rados -p myrbd ls | grep '^rbd_data.14986b8b4567' | xargs -n 200 rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:59:26.043671 7f6b6923c7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
real    0m0.192s
user    0m0.016s
sys     0m0.010s
Note: if the image is empty, the xargs step is not needed; if it contains data, it is required.
So for images of 100 TB and above, it is best to delete the header objects first and only then run rbd rm.
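The object counts reported by `rbd info` above follow directly from the image size and the object order. A small Python sketch with a hypothetical `rbd_object_count` helper (assuming the default order of 22, i.e. 4 MB objects, and `-s` given in megabytes as in the sessions above):

```python
def rbd_object_count(size_mb: int, order: int = 22) -> int:
    """Number of RADOS data objects backing an image of `size_mb` megabytes.

    `order` is the object-size exponent: order 22 means 2**22-byte
    (4 MB) objects, matching the `order 22 (4096 kB objects)` lines
    in the rbd info output.
    """
    object_size = 1 << order                 # bytes per object
    size_bytes = size_mb * 1024 * 1024
    # Ceiling division: a partially covered tail still needs an object.
    return (size_bytes + object_size - 1) // object_size
```

This reproduces the figures above: `rbd_object_count(1024000000)` is 256000000 (the 976 TB image), `rbd_object_count(1024000)` is 256000 (the 1 TB image), and `rbd_object_count(8)` is 2 (the 8192 kB image from section 2), which is why deleting a fully populated petabyte-scale image one object at a time takes so long.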
4. Checking whether KVM/QEMU supports Ceph
$ sudo qemu-system-x86_64 -drive format=?
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
$ qemu-img -h
... ...
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
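If rbd appears in the list, the binary was built with Ceph support. To check this from a script, the output can be scanned for the "Supported formats:" line; below is a minimal Python sketch with a hypothetical `supports_rbd` helper.

```python
def supports_rbd(qemu_output: str) -> bool:
    """Return True if a `qemu-img -h` or `qemu-system-* -drive format=?`
    dump lists rbd among the supported formats."""
    for line in qemu_output.splitlines():
        if line.startswith("Supported formats:"):
            formats = line.split(":", 1)[1].split()
            return "rbd" in formats
    return False
```

Feeding it the text after `Supported formats:` above returns True; on a build without Ceph support, rbd would be absent from that line and the check would return False.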
The latest rpm or deb packages can be downloaded from http://ceph.com/packages/.
5. Serving NFS from a Ceph RBD image
This is a simple, practical way to provide storage. The steps are as follows:
# Install the NFS packages
[root@osd1 current]# yum install nfs-utils rpcbind
Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
Loading mirror speeds from cached hostfile
epel/metalink                                        | 5.5 kB     00:00
 * base: mirrors.cug.edu.cn
 * epel: mirrors.yun-idc.com
 * extras: mirrors.btte.net
 * rpmforge: ftp.riken.jp
 * updates: mirrors.btte.net
Ceph                                                 |  951 B     00:00
Ceph-noarch                                          |  951 B     00:00
base                                                 | 3.7 kB     00:00
ceph-source                                          |  951 B     00:00
epel                                                 | 4.4 kB     00:00
epel/primary_db                                      | 6.3 MB     00:01
extras                                               | 3.4 kB     00:00
rpmforge                                             | 1.9 kB     00:00
updates                                              | 3.4 kB     00:00
updates/primary_db                                   | 188 kB     00:00
69 packages excluded due to repository priority protections
Setting up Install Process
Package rpcbind-0.2.0-11.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.x86_64 1:1.2.3-54.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch         Version                Repository          Size
================================================================================
Updating:
 nfs-utils        x86_64       1:1.2.3-54.el6         base               326 k

Transaction Summary
================================================================================
Upgrade       1 Package(s)

Total download size: 326 k
Is this ok [y/N]: y
Downloading Packages:
nfs-utils-1.2.3-54.el6.x86_64.rpm                    | 326 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : 1:nfs-utils-1.2.3-54.el6.x86_64                            1/2
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.x86_64                            2/2
  Verifying  : 1:nfs-utils-1.2.3-54.el6.x86_64                            1/2
  Verifying  : 1:nfs-utils-1.2.3-39.el6.x86_64                            2/2

Updated:
  nfs-utils.x86_64 1:1.2.3-54.el6

# Create an image, then format and mount it
[root@osd1 current]# rbd create myrbd/nfs_image -s 1024000 --image-format=2
[root@osd1 current]# rbd map myrbd/nfs_image
/dev/rbd0
[root@osd1 current]# mkdir /mnt/nfs
[root@osd1 current]# mkfs.xfs /dev/rbd0
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd0              isize=256    agcount=33, agsize=8190976 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144000, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=128000, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@osd1 current]# mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier /mnt/nfs

# Add the export by appending one line to /etc/exports
[root@osd1 current]# vim /etc/exports
/mnt/nfs 192.168.108.0/24(rw,no_root_squash,no_subtree_check,async)
[root@osd1 current]# exportfs -r

At this point you also need to run service rpcbind start, then start NFS:

[root@osd1 current]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

Clients can now mount the export. On a client, check what is exported:

showmount -e 192.168.108.2

and then mount it:

mount -t nfs 192.168.108.2:/mnt/nfs /mnt/nfs
If the mount fails, try running service rpcbind start or service portmap start.
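To verify the export from a script rather than by eye, the output of showmount -e can be parsed. A minimal Python sketch with a hypothetical `exported_paths` helper; it assumes the usual two-column export-list format that showmount prints.

```python
def exported_paths(showmount_output: str) -> dict:
    """Parse `showmount -e <server>` output into {path: allowed_clients}."""
    exports = {}
    for line in showmount_output.splitlines():
        # Skip the "Export list for <server>:" banner line.
        if line.startswith("Export list"):
            continue
        parts = line.split()
        if len(parts) >= 2:
            exports[parts[0]] = parts[1]
    return exports
```

For the setup above, the output of showmount -e 192.168.108.2 would parse to `{"/mnt/nfs": "192.168.108.0/24"}`, confirming the export line added to /etc/exports took effect.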
That is all of "Ceph usage tips". Thanks for reading! Hopefully this has given you a working understanding, and the material shared here proves useful.