
How to do it...

For this recipe, we will configure a second Ceph cluster with Ceph nodes ceph-node5, ceph-node6, and ceph-node7. Chapter 1, Ceph – Introduction and Beyond, can be referenced for setting up this second Ceph cluster using Ansible, with nodes 5, 6, and 7 taking the place of 1, 2, and 3 in those recipes. Some highlights and changes that must be made before running the playbook on the secondary cluster are listed below:

  1. Your /etc/ansible/hosts file on each of your Ansible configuration nodes (ceph-node1 and ceph-node5) should look as follows:
        # Primary site (ceph-node1):
        [mons]
        ceph-node1
        ceph-node2
        ceph-node3
        [osds]
        ceph-node1
        ceph-node2
        ceph-node3

        # Secondary site (ceph-node5):
        [mons]
        ceph-node5
        ceph-node6
        ceph-node7
        [osds]
        ceph-node5
        ceph-node6
        ceph-node7
  2. Your cluster will require a distinct name; the default cluster name is ceph. Since our primary cluster is named ceph, our secondary cluster must be named something different. For this recipe, we will name the secondary cluster backup. We will need to edit the all.yml file on ceph-node5 to reflect this change prior to deploying, by setting the cluster option to backup:
        root@ceph-node5 group_vars # vim all.yml
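After the edit, the relevant portion of all.yml might look like the fragment below. This is a sketch: the surrounding keys and exact default comment vary between ceph-ansible versions.

```yaml
# group_vars/all.yml on ceph-node5 (secondary site)
# Default cluster name, shipped commented out:
#cluster: ceph
# Distinct name for the secondary cluster:
cluster: backup
```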

It is possible to mirror RBD images between two clusters of the same name. This requires changing the name of one of the clusters in the /etc/sysconfig/ceph file to a name other than ceph and then creating a symlink to the ceph.conf file.
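The symlink part of that workaround can be sketched with plain shell commands. The paths below are moved to a temporary directory so the sketch is safe to run anywhere; on a real node you would set the CLUSTER variable in /etc/sysconfig/ceph to the new name (for example, remote) and create the link inside /etc/ceph.

```shell
# Demonstration in a temp directory; /tmp/etc-ceph stands in for /etc/ceph.
mkdir -p /tmp/etc-ceph
touch /tmp/etc-ceph/ceph.conf

# Tools invoked with --cluster remote look for remote.conf, so link it
# back to the existing ceph.conf:
ln -sf /tmp/etc-ceph/ceph.conf /tmp/etc-ceph/remote.conf
readlink /tmp/etc-ceph/remote.conf
```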

  3. Run Ansible to install the second Ceph cluster with the distinct name of backup:
        root@ceph-node5 ceph-ansible # ansible-playbook site.yml
  4. When the playbook completes, set the Ceph environment variable to use the cluster name backup:
        # export CEPH_ARGS="--cluster backup"
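The CEPH_ARGS variable is prepended to the arguments of the Ceph command-line tools, so exporting it once avoids typing --cluster backup on every subsequent command in the session. A quick sanity check of the variable itself:

```shell
# Export once; every ceph/rbd invocation in this shell now targets the
# cluster named "backup".
export CEPH_ARGS="--cluster backup"

# e.g. `ceph health` now behaves like `ceph --cluster backup health`
echo "$CEPH_ARGS"
```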
  5. In each of the clusters, create a pool called data; this pool will be mirrored between the sites:
        root@ceph-node1 # ceph osd pool create data 64
        root@ceph-node5 # ceph osd pool create data 64
  6. Create the user client.local on the ceph cluster and give it rwx access to the data pool:
        root@ceph-node1 # ceph auth get-or-create client.local mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=data' \
        -o /etc/ceph/ceph.client.local.keyring --cluster ceph
  7. Create the user client.remote on the backup cluster and give it rwx access to the data pool:
        root@ceph-node5 # ceph auth get-or-create client.remote mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=data' \
        -o /etc/ceph/backup.client.remote.keyring --cluster backup
  8. Copy the Ceph configuration file from each of the clusters into the /etc/ceph directory of the corresponding peer cluster:
        root@ceph-node1 # scp /etc/ceph/ceph.conf \
        root@ceph-node5:/etc/ceph/ceph.conf
        root@ceph-node5 # scp /etc/ceph/backup.conf \
        root@ceph-node1:/etc/ceph/backup.conf
  9. Copy the keyrings for the users client.local and client.remote from each of the clusters into the /etc/ceph directory of the corresponding peer cluster:

        root@ceph-node1 # scp /etc/ceph/ceph.client.local.keyring \
        root@ceph-node5:/etc/ceph/ceph.client.local.keyring
        root@ceph-node5 # scp /etc/ceph/backup.client.remote.keyring \
        root@ceph-node1:/etc/ceph/backup.client.remote.keyring

We now have two Ceph clusters with a client.local and a client.remote user, copies of the peer cluster's configuration file in the /etc/ceph directory, and keyrings for the corresponding users on each peer cluster. In the next recipe, we will configure mirroring on the data pool.
