Ceph Cookbook (Second Edition)
Vikhyat Umrao, Michael Hackett, Karan Singh
How to do it...
For this recipe, we will be configuring a second Ceph cluster with Ceph nodes ceph-node5, ceph-node6, and ceph-node7. Chapter 1, Ceph – Introduction and Beyond, can be referenced for setting up this second Ceph cluster using Ansible, with nodes 5, 6, and 7 taking the place of 1, 2, and 3 in those recipes. The highlights and changes that must be made before running the playbook on the secondary cluster are as follows:
- Your /etc/ansible/hosts file on each of your Ansible configuration nodes (ceph-node1 and ceph-node5) should look as follows:
#Primary site (ceph-node1):
[mons]
ceph-node1
ceph-node2
ceph-node3
[osds]
ceph-node1
ceph-node2
ceph-node3
#Secondary site (ceph-node5):
[mons]
ceph-node5
ceph-node6
ceph-node7
[osds]
ceph-node5
ceph-node6
ceph-node7
- Your cluster will require a distinct name; the default cluster name is ceph. Since our primary cluster is named ceph, our secondary cluster must be named something different. For this recipe, we will name the secondary cluster backup. Prior to deploying, we need to edit the all.yml file on ceph-node5 to reflect this change by setting the cluster name to backup:
root@ceph-node5 group_vars # vim all.yml
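As an illustration, after the edit the cluster name entry in all.yml might read as follows (this is only a sketch; the surrounding settings depend on your ceph-ansible version):
# group_vars/all.yml on ceph-node5 (illustrative excerpt)
cluster: backup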

It is possible to mirror RBD images between two clusters of the same name. This requires changing the name of one of the clusters in the /etc/sysconfig/ceph file to a name other than ceph, and then creating a symlink to the ceph.conf file.
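As a rough sketch of that workaround on an RPM-based install (the file location and the name remote here are illustrative assumptions, not part of this recipe):
# /etc/sysconfig/ceph on the nodes of the cluster being renamed
CLUSTER=remote
# Let tools find the existing configuration under the new name
ln -s /etc/ceph/ceph.conf /etc/ceph/remote.conf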
- Run Ansible to install the second Ceph cluster with the distinct name of backup:
root@ceph-node5 ceph-ansible # ansible-playbook site.yml
- When the playbook completes, set the Ceph environment variable to use the cluster name backup:
# export CEPH_ARGS="--cluster backup"
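With CEPH_ARGS exported on ceph-node5, plain ceph commands there will target the backup cluster; a quick sanity check is to confirm that both clusters report their status:
root@ceph-node5 # ceph -s    # CEPH_ARGS makes this equivalent to ceph -s --cluster backup
root@ceph-node1 # ceph -s    # primary cluster uses the default name ceph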
- In each of the clusters, create a pool called data; this pool will be mirrored between the sites:
root@ceph-node1 # ceph osd pool create data 64
root@ceph-node5 # ceph osd pool create data 64
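A quick optional check that the data pool now exists on both sides:
root@ceph-node1 # ceph osd pool ls --cluster ceph
root@ceph-node5 # ceph osd pool ls    # picks up --cluster backup from CEPH_ARGS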
- Create the user client.local on the ceph cluster and give it rwx access to the data pool:
root@ceph-node1 # ceph auth get-or-create client.local
mon 'allow r' osd 'allow class-read object_prefix rbd_children,
allow rwx pool=data' -o /etc/ceph/ceph.client.local.keyring
--cluster ceph
- Create the user client.remote on the backup cluster and give it rwx access to the data pool:
root@ceph-node5 # ceph auth get-or-create client.remote
mon 'allow r' osd 'allow class-read object_prefix rbd_children,
allow rwx pool=data' -o /etc/ceph/backup.client.remote.keyring
--cluster backup
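To confirm that both users exist with the expected capabilities, you can query them on their respective clusters:
root@ceph-node1 # ceph auth get client.local --cluster ceph
root@ceph-node5 # ceph auth get client.remote --cluster backup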
- Copy the Ceph configuration file from each of the clusters into the /etc/ceph directory of the corresponding peer cluster:
root@ceph-node1 # scp /etc/ceph/ceph.conf
root@ceph-node5:/etc/ceph/ceph.conf
root@ceph-node5 # scp /etc/ceph/backup.conf
root@ceph-node1:/etc/ceph/backup.conf
- Copy the keyrings for the user client.local and client.remote from each of the clusters into the /etc/ceph directory of the corresponding peer cluster:
root@ceph-node1 # scp /etc/ceph/ceph.client.local.keyring
root@ceph-node5:/etc/ceph/ceph.client.local.keyring
root@ceph-node5 # scp /etc/ceph/backup.client.remote.keyring
root@ceph-node1:/etc/ceph/backup.client.remote.keyring
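At this point, each node's /etc/ceph directory should hold its own cluster's configuration and keyring plus the peer's; for example, ceph-node1 should now contain ceph.conf, backup.conf, ceph.client.local.keyring, and backup.client.remote.keyring, alongside whatever else Ansible placed there. A simple listing confirms this:
root@ceph-node1 # ls /etc/ceph/
root@ceph-node5 # ls /etc/ceph/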
We now have two Ceph clusters with a client.local and a client.remote user, copies of their peer's configuration file in the /etc/ceph directory, and keyrings for the corresponding users on each peer cluster. In the next recipe, we will configure mirroring on the data pool.
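Before moving on, it can also be worth verifying that each site can reach its peer cluster with the copied files; the following is a sketch that assumes the default keyring search paths (CEPH_ARGS is overridden for the single command on ceph-node5 so that the explicit --cluster flag is unambiguous):
root@ceph-node1 # ceph -s --cluster backup --name client.remote
root@ceph-node5 # CEPH_ARGS="" ceph -s --cluster ceph --name client.local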