
How to do it...

For this recipe, we will configure a second Ceph cluster with the nodes ceph-node5, ceph-node6, and ceph-node7. Refer back to Chapter 1, Ceph – Introduction and Beyond, for setting up this second cluster with Ansible, substituting nodes 5, 6, and 7 for nodes 1, 2, and 3 in those recipes. The highlights and changes that must be made before running the playbook on the secondary cluster are as follows:

  1. Your /etc/ansible/hosts file on each of your Ansible configuration nodes (ceph-node1 and ceph-node5) should look as follows:
        # Primary site (ceph-node1):
        [mons]
        ceph-node1
        ceph-node2
        ceph-node3
        [osds]
        ceph-node1
        ceph-node2
        ceph-node3

        # Secondary site (ceph-node5):
        [mons]
        ceph-node5
        ceph-node6
        ceph-node7
        [osds]
        ceph-node5
        ceph-node6
        ceph-node7
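Before moving on, you can optionally confirm that Ansible can reach all of the secondary nodes; this is a quick sanity check and assumes passwordless SSH between the nodes is already configured, as in Chapter 1:

        root@ceph-node5 # ansible all -m ping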
  2. Your cluster will require a distinct name; the default cluster name is ceph. Since our primary cluster is named ceph, our secondary cluster must be named something different. For this recipe, we will name the secondary cluster backup. We will need to edit the all.yml file on ceph-node5 to reflect this change prior to deploying, by uncommenting the cluster setting and changing its value to backup:
        root@ceph-node5 group_vars # vim all.yml
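The exact contents of all.yml vary by ceph-ansible version, but after the edit the cluster setting should be uncommented and read similar to the following:

        # all.yml: cluster name changed from the default of ceph
        cluster: backup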

It is possible to mirror RBD images between two clusters that share the same name. This requires changing the name of one of the clusters in the /etc/sysconfig/ceph file to a name other than ceph, and then creating a symlink to the ceph.conf file.

  3. Run Ansible to install the second Ceph cluster with the distinct name of backup:
        root@ceph-node5 ceph-ansible # ansible-playbook site.yml
  4. When the playbook completes, set the CEPH_ARGS environment variable so that subsequent ceph commands in this shell use the cluster name backup:
        # export CEPH_ARGS="--cluster backup"
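With CEPH_ARGS exported, any ceph command run in this shell on ceph-node5 will target the backup cluster; a quick status check should confirm that it is talking to the monitors on ceph-node5, ceph-node6, and ceph-node7:

        root@ceph-node5 # ceph -s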
  5. On each of the clusters, create a pool called data; this pool will be mirrored between the sites:
        root@ceph-node1 # ceph osd pool create data 64
        root@ceph-node5 # ceph osd pool create data 64
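Here, 64 is the number of placement groups for the pool. You can verify that the pool was created on both sides by listing the pools on each cluster:

        root@ceph-node1 # ceph osd lspools
        root@ceph-node5 # ceph osd lspools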
  6. Create the user client.local on the ceph cluster and give it rwx access to the data pool:
        root@ceph-node1 # ceph auth get-or-create client.local \
          mon 'allow r' \
          osd 'allow class-read object_prefix rbd_children, allow rwx pool=data' \
          -o /etc/ceph/ceph.client.local.keyring --cluster ceph
  7. Create the user client.remote on the backup cluster and give it rwx access to the data pool:
        root@ceph-node5 # ceph auth get-or-create client.remote \
          mon 'allow r' \
          osd 'allow class-read object_prefix rbd_children, allow rwx pool=data' \
          -o /etc/ceph/backup.client.remote.keyring --cluster backup
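If you want to double-check the users and their capabilities, you can query each cluster's auth database:

        root@ceph-node1 # ceph auth get client.local --cluster ceph
        root@ceph-node5 # ceph auth get client.remote --cluster backup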
  8. Copy the Ceph configuration file from each of the clusters into the /etc/ceph directory of the corresponding peer cluster:
        root@ceph-node1 # scp /etc/ceph/ceph.conf \
          root@ceph-node5:/etc/ceph/ceph.conf
        root@ceph-node5 # scp /etc/ceph/backup.conf \
          root@ceph-node1:/etc/ceph/backup.conf
  9. Copy the keyrings for the users client.local and client.remote from each of the clusters into the /etc/ceph directory of the corresponding peer cluster:

        root@ceph-node1 # scp /etc/ceph/ceph.client.local.keyring \
          root@ceph-node5:/etc/ceph/ceph.client.local.keyring
        root@ceph-node5 # scp /etc/ceph/backup.client.remote.keyring \
          root@ceph-node1:/etc/ceph/backup.client.remote.keyring
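As an optional sanity check (assuming the two sites can reach each other's monitors over the network), each node should now be able to query its peer cluster with the copied configuration file and keyring; on ceph-node5 you may need to unset CEPH_ARGS first so it does not conflict with the explicit --cluster flag:

        root@ceph-node1 # ceph -s --cluster backup --name client.remote
        root@ceph-node5 # ceph -s --cluster ceph --name client.local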

We now have two Ceph clusters with a client.local and a client.remote user, copies of their peer's configuration file in the /etc/ceph directory, and keyrings for the corresponding users on each peer cluster. In the next recipe, we will configure mirroring on the data pool.
