
Deploying a Ceph Cluster on CentOS 7

I. Environment Preparation

The IP address assigned to each node is shown in the table below:

Hostname      IP address
ceph-admin    192.168.0.210
node1         192.168.0.211
node2         192.168.0.212
node3         192.168.0.213

1. Hostnames

Set the hostname on each of the four nodes:

[root@localhost ~]# hostnamectl set-hostname ceph-admin

[root@localhost ~]# hostnamectl set-hostname node1

[root@localhost ~]# hostnamectl set-hostname node2

[root@localhost ~]# hostnamectl set-hostname node3

2. Disable the firewall and SELinux

This step is required on every node; ceph-admin is shown as the example.

[root@ceph-admin ~]# systemctl stop firewalld

[root@ceph-admin ~]# systemctl disable firewalld

[root@ceph-admin ~]# setenforce 0

[root@ceph-admin ~]# sed -i '7s/enforcing/disabled/' /etc/selinux/config
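A quick optional check that both changes took effect:

[root@ceph-admin ~]# getenforce //should report Permissive (Disabled after a reboot)

[root@ceph-admin ~]# systemctl is-active firewalld //should report inactive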

3. Configure /etc/hosts

This step is required on every node; ceph-admin is shown as the example.

[root@ceph-admin ~]# vi /etc/hosts

192.168.0.210 ceph-admin

192.168.0.211 node1

192.168.0.212 node2

192.168.0.213 node3

4. Set up passwordless SSH and push the key to the other nodes

[root@ceph-admin ~]# ssh-keygen

[root@ceph-admin ~]# ssh-copy-id node1

[root@ceph-admin ~]# ssh-copy-id node2

[root@ceph-admin ~]# ssh-copy-id node3

Test the passwordless login.

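For example, a quick check from ceph-admin (using the hostnames defined in /etc/hosts above):

[root@ceph-admin ~]# ssh node1 hostname

[root@ceph-admin ~]# ssh node2 hostname

[root@ceph-admin ~]# ssh node3 hostname

Each command should print the remote hostname without prompting for a password.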

5. Configure the YUM repositories

This step is required on every node; ceph-admin is shown as the example.

[root@ceph-admin ~]# vi /etc/yum.conf

keepcache=1 //enable package caching; set this on every node

[root@ceph-admin ~]# yum -y install wget curl net-tools bash-completion //install basic tools

[root@ceph-admin ~]# cd /etc/yum.repos.d/

[root@ceph-admin yum.repos.d]# mkdir backup

[root@ceph-admin yum.repos.d]# mv C* backup

[root@ceph-admin yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@ceph-admin yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Create the Ceph repository file (the EOF delimiter is quoted so that $basearch is written literally instead of being expanded by the shell):

[root@ceph-admin yum.repos.d]# cat << 'EOF' > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF

[root@ceph-admin yum.repos.d]# cat ceph.repo

[root@ceph-admin yum.repos.d]# yum update -y //update installed packages from the new repositories

6. Configure NTP time synchronization (chrony)

6.1. Server side: edit the configuration file

  vi /etc/chrony.conf

Modify line 22 (Allow NTP client access from local network) to define which clients may access the server; CIDR notation is supported, for example:

  allow 192.168/16

Uncomment line 29 (Serve time even if not synchronized to any NTP server) so the server keeps serving time even when it is not synchronized upstream:

  local stratum 10

Restart the chrony service on the server with systemctl restart chronyd.service.

6.2. Client side: edit the configuration file

  vi /etc/chrony.conf

6.3. Replace the server entries: delete the default ones and add the IP address of the time source server, in this format:

  server 192.168.0.210 iburst

6.4. Restart the chrony service on the client with systemctl restart chronyd.service.

6.5. Check the synchronization status on the client:

chronyc sources
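In the chronyc sources output, a line beginning with ^* in front of 192.168.0.210 means the client is synchronized to that server. For more detail on the current offset, chronyc tracking can also be used, for example on node1:

[root@node1 ~]# chronyc tracking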

7. Install the ceph-deploy tool on the management node (log in to ceph-admin)

[root@ceph-admin ~]# mkdir /etc/ceph

[root@ceph-admin ~]# yum -y install http://download.ceph.com/rpm-mimic/el7/noarch/ceph-deploy-2.0.1-0.noarch.rpm

[root@ceph-admin ~]# yum -y install python-setuptools

[root@ceph-admin ~]# yum install ceph-deploy -y

8. Create the cluster on the management node

[root@ceph-admin ~]# cd /etc/ceph

[root@ceph-admin ceph]# ceph-deploy new ceph-admin
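ceph-deploy new writes ceph.conf, ceph.mon.keyring, and a deployment log into the current directory. The generated ceph.conf looks roughly like the sketch below (the fsid is a randomly generated UUID and will differ):

[global]
fsid = <generated uuid>
mon_initial_members = ceph-admin
mon_host = 192.168.0.210
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx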

[root@ceph-admin ceph]# scp /etc/yum.repos.d/ceph.repo node1:/etc/yum.repos.d/

[root@ceph-admin ceph]# scp /etc/yum.repos.d/ceph.repo node2:/etc/yum.repos.d/

[root@ceph-admin ceph]# scp /etc/yum.repos.d/ceph.repo node3:/etc/yum.repos.d/

9. On every node, create the /etc/ceph directory and install ceph and ceph-radosgw (ceph-admin shown as the example)

[root@ceph-admin ceph]# mkdir /etc/ceph

[root@ceph-admin ceph]# yum clean all #required after changing the repositories, otherwise packages may not be found

[root@ceph-admin ceph]# yum -y install ceph ceph-radosgw --skip-broken #--skip-broken works around dependency problems but is not recommended
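Once the installation finishes, the installed release can be confirmed on each node:

[root@ceph-admin ceph]# ceph --version //should report a 13.2.x (mimic) release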

10. Create the mon component on ceph-admin: initialize the monitor and gather the keys

[root@ceph-admin ceph]# cd /etc/ceph

[root@ceph-admin ceph]# ceph-deploy mon create-initial ##initialize the monitor and collect the keys

[root@ceph-admin ceph]# ceph -s ##check the cluster status


11. Create the mgr

[root@ceph-admin ceph]# ceph-deploy mgr create ceph-admin

12. Add OSDs from the ceph-admin management node

12.1. Initialize the disks

[root@ceph-admin ceph]# ceph-deploy disk zap ceph-admin /dev/sdb

[root@ceph-admin ceph]# ceph-deploy disk zap node1 /dev/sdb #wipe the disk

[root@ceph-admin ceph]# ceph-deploy disk zap node2 /dev/sdb #wipe the disk

[root@ceph-admin ceph]# ceph-deploy disk zap node3 /dev/sdb #wipe the disk
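zap destroys all data on the target disk, so double-check the device name first; in this lab /dev/sdb is the spare data disk on every node. A quick way to verify from ceph-admin:

[root@ceph-admin ceph]# ssh node1 lsblk //confirm /dev/sdb is the unused data disk before zapping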

12.2. Create OSDs on the disks

[root@ceph-admin ceph]# ceph-deploy osd create --data /dev/sdb ceph-admin

[root@ceph-admin ceph]# ceph-deploy osd create --data /dev/sdb node1

[root@ceph-admin ceph]# ceph-deploy osd create --data /dev/sdb node2

[root@ceph-admin ceph]# ceph-deploy osd create --data /dev/sdb node3

[root@ceph-admin ceph]# ceph -s ##check the status; the new OSDs should show as up


View the OSD status information:

[root@ceph-admin ceph]# ceph osd tree

[root@ceph-admin ceph]# ceph osd stat


13. Create the RGW object storage gateway

[root@ceph-admin ceph]# ceph-deploy rgw create ceph-admin


On success, the gateway endpoint information is displayed.
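By default the mimic radosgw (civetweb) listens on port 7480, so the gateway can be checked with a plain HTTP request, assuming the default port has not been changed:

[root@ceph-admin ceph]# curl http://ceph-admin:7480 //an XML ListAllMyBucketsResult response means the RGW is answering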

14. From ceph-admin, push the configuration file and admin keyring to node1 and node2

[root@ceph-admin ceph]# ceph-deploy admin node1 node2

Common ceph-deploy error

[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite

[ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors

Cause: the ceph.conf file was modified on the admin node, but the updated file was never pushed to the other nodes, so it has to be pushed again.

Fix: ceph-deploy --overwrite-conf config push node1 node2 node3

  or: ceph-deploy --overwrite-conf mon create node1 node2 node3

15. On node1 and node2, give the admin keyring read permission

[root@node1 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring

[root@node2 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring

[root@node1 ~]# ceph -s

At this point, the cluster deployment is complete.


Ceph Expansion

1. From ceph-admin, add the ceph03 OSD to the cluster

[root@ceph-admin ceph]# ceph-deploy osd create --data /dev/sdb ceph03

[root@ceph-admin ceph]# ceph -s

2. From ceph-admin, add ceph03 as a mon in the cluster

[root@ceph-admin ceph]# ceph-deploy mon add ceph03

[root@ceph-admin ceph]# ceph -s

[root@ceph-admin ceph]# vi /etc/ceph/ceph.conf

mon_initial_members = ceph01,ceph02,ceph03 //add ceph03

mon_host = 192.168.100.101,192.168.100.102,192.168.100.103 //add 192.168.100.103

Push the configuration file to ceph01, ceph02, and ceph03:

[root@ceph-admin ceph]# cd /etc/ceph/

[root@ceph-admin ceph]# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03


Restart the mon service on all three nodes:

[root@ceph01 ~]# systemctl restart ceph-mon.target

If you are not sure which mon unit to restart, you can look it up with:

systemctl list-unit-files | grep mon

IV. OSD Data Recovery

1. Simulate a failure

Before simulating the failure, record the current OSD layout:

[root@ceph-admin ceph]# ceph osd tree

ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       2.99696 root default
-3       0.99899     host ceph01
 0   hdd 0.99899         osd.0       up  1.00000 1.00000
-5       0.99899     host ceph02
 1   hdd 0.99899         osd.1       up  1.00000 1.00000
-7       0.99898     host ceph03
 2   hdd 0.99898         osd.2       up  1.00000 1.00000

Take osd.2 out of the cluster:

[root@ceph-admin ceph]# ceph osd out osd.2


Remove osd.2 from the CRUSH map:

[root@ceph-admin ceph]# ceph osd crush remove osd.2

Delete osd.2's authentication key:

[root@ceph-admin ceph]# ceph auth del osd.2


Completely remove osd.2:

[root@ceph-admin ceph]# ceph osd rm osd.2

Restart the OSD service on ceph03:

[root@ceph03 ~]# systemctl restart ceph-osd.target


2. Restore the OSD to the cluster

[root@ceph03 ~]# df -hT //check the ceph OSD mount

Filesystem Type Size Used Avail Use% Mounted on

devtmpfs devtmpfs 1.9G 0 1.9G 0% /dev

tmpfs tmpfs 1.9G 0 1.9G 0% /dev/shm

tmpfs tmpfs 1.9G 22M 1.9G 2% /run

tmpfs tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup

/dev/sda3 xfs 17G 4.9G 13G 29% /

/dev/sda1 xfs 1014M 217M 798M 22% /boot

tmpfs tmpfs 378M 4.0K 378M 1% /run/user/988

/dev/sr0 iso9660 4.4G 4.4G 0 100% /run/media/ml/CentOS 7 x86_64

tmpfs tmpfs 378M 4.0K 378M 1% /run/user/42

tmpfs tmpfs 378M 60K 378M 1% /run/user/0

tmpfs tmpfs 1.9G 52K 1.9G 1% /var/lib/ceph/osd/ceph-2


[root@ceph03 ~]# cd /var/lib/ceph/osd/ceph-2

[root@ceph03 ceph-2]# more fsid //view the fsid

daf9b00a-7562-475e-8e6d-96455c808402


[root@ceph03 ceph-2]# ceph osd create daf9b00a-7562-475e-8e6d-96455c808402 //recreate the OSD id using the original fsid

[root@ceph03 ceph-2]# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring //re-add the auth key with the required capabilities

[root@ceph03 ceph-2]# ceph osd crush add 2 0.99899 host=ceph03 //0.99899 is the CRUSH weight, host= is the host name

[root@ceph03 ceph-2]# ceph osd in osd.2

[root@ceph03 ceph-2]# systemctl restart ceph-osd.target

[root@ceph03 ceph-2]# ceph osd tree


Recovery complete.

V. Common Ceph Maintenance Commands

1. Create mgr services

[root@ceph01 ceph]# ceph-deploy mgr create ceph01 ceph02 ceph03

2. Create pools

[root@ceph-admin ceph]# ceph osd pool create cinder 64

[root@ceph-admin ceph]# ceph osd pool create nova 64

[root@ceph-admin ceph]# ceph osd pool create glance 64

[root@ceph-admin ceph]# ceph osd pool ls
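The 64 in each create command is the number of placement groups (pg_num) for the pool; it can be verified afterwards, for example:

[root@ceph-admin ceph]# ceph osd pool get cinder pg_num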

3. Delete a pool

Change to the ceph-deploy working directory:

[root@ceph-admin ceph]# cd /etc/ceph

[root@ceph-admin ceph]# ceph osd pool rm cinder cinder --yes-i-really-really-mean-it //fails with a message saying pool deletion must be allowed first

[root@ceph-admin ceph]# ceph daemon mon.ceph-admin config set mon_allow_pool_delete true //allow pool deletion on the running monitor
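The ceph daemon command above only changes the setting in the running monitor. To make it persist across restarts, the same option can also be added to ceph.conf before pushing the config (a minimal sketch):

[mon]
mon_allow_pool_delete = true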

[root@ceph-admin ceph]# ceph-deploy --overwrite-conf admin ceph02 ceph03 //push the updated config and admin keyring to ceph02 and ceph03

[root@ceph-admin ceph]# systemctl restart ceph-mon.target //restart the mon service on each of the three nodes

[root@ceph-admin ceph]# ceph osd pool rm cinder cinder --yes-i-really-really-mean-it

pool 'cinder' removed //deletion confirmed

[root@ceph-admin ceph]# ceph osd pool ls


4. Rename a pool

[root@ceph01 ceph]# ceph osd pool rename nova nova1 //rename nova to nova1

[root@ceph01 ceph]# ceph osd pool ls


5. View Ceph command help

[root@ceph01 ceph]# ceph --help

[root@ceph01 ceph]# ceph osd --help


6. Configure the Ceph internal (public) network

[root@ceph-admin ceph]# vim /etc/ceph/ceph.conf

public network = 192.168.0.0/16 //add the public network segment

[root@ceph-admin ceph]# ceph-deploy --overwrite-conf admin ceph01 ceph02 ceph03 //push the config to ceph01, ceph02, and ceph03

[root@ceph-admin ceph]# systemctl restart ceph-mon.target //restart the mon service on every node

[root@ceph-admin ceph]# systemctl restart ceph-osd.target //restart the osd service on every node


VI. Building an Offline centos_ceph Package Repository

Gather all of the cached package files from /var/cache/yum on the three nodes and upload them to /opt/ceph_packages/.

Install the repository tooling:

[root@ceph01 opt]# yum -y install createrepo

[root@ceph01 opt]# cd ceph_packages

[root@ceph01 ceph_packages]# createrepo ./ //generate repository metadata for the packages in this directory
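To consume the offline repository on a machine without internet access, point a local repo file at the directory; the repo id and file name below (ceph-local) are only an example:

[root@ceph01 ceph_packages]# cat << 'EOF' > /etc/yum.repos.d/ceph-local.repo
[ceph-local]
name=Local Ceph and dependency packages
baseurl=file:///opt/ceph_packages/
enabled=1
gpgcheck=0
EOF

[root@ceph01 ceph_packages]# yum clean all && yum -y install ceph ceph-radosgw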