
How to do it…

A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on an odd number of monitors greater than one, for example 3 or 5, to form a quorum. Ceph uses the Paxos algorithm to maintain the quorum majority. You will notice that your Ceph cluster is currently showing HEALTH_WARN; this is because we have not yet configured any OSDs other than those on ceph-node1. By default, data in a Ceph cluster is replicated three times, across three different OSDs hosted on three different nodes.
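If you want to confirm these defaults on your own cluster before scaling out, the following is a minimal sketch using standard ceph CLI commands, assuming the client.admin keyring is available on ceph-node1:

```bash
# Show current monitor membership and quorum (only ceph-node1 at this point)
ceph mon stat

# Cluster health summary -- expected to show HEALTH_WARN until more OSDs join
ceph health detail

# Replication size of the default rbd pool (3 by default)
ceph osd pool get rbd size
```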

Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster and configure OSDs on ceph-node2 and ceph-node3:

  1. Add the Ceph hosts ceph-node2 and ceph-node3 to /etc/ansible/hosts:
  2. Verify that Ansible can reach the Ceph hosts mentioned in /etc/ansible/hosts:
  3. Run the Ansible playbook to scale up the Ceph cluster on ceph-node2 and ceph-node3 (a combined sketch of these three steps follows the list):
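The following is a minimal sketch of these three steps. It assumes your inventory uses the [mons] and [osds] groups and that ceph-ansible lives under /usr/share/ceph-ansible; adjust the group names and path to match your setup:

```bash
# Step 1: add ceph-node2 and ceph-node3 to the monitor and OSD groups
# (group names assumed; keep whatever groups your existing inventory uses)
cat /etc/ansible/hosts
# [mons]
# ceph-node1
# ceph-node2
# ceph-node3
#
# [osds]
# ceph-node1
# ceph-node2
# ceph-node3

# Step 2: check that all inventory hosts answer over SSH
ansible all -m ping

# Step 3: re-run the site playbook to deploy the new monitors and OSDs
cd /usr/share/ceph-ansible
ansible-playbook site.yml
```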

Once the playbook completes the Ceph cluster scale-out job and the play recap reports failed=0, it means that ceph-ansible has deployed the additional Ceph daemons in the cluster, as shown in the following screenshot.

You now have three more OSD daemons and one more monitor daemon running on ceph-node2, and three more OSD daemons and one more monitor daemon running on ceph-node3. In total, nine OSD daemons and three monitor daemons are now running across the three nodes:
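To double-check these daemon counts from the command line, the following sketch uses standard ceph status commands, run from any node that holds the admin keyring:

```bash
# Three monitors should now be in quorum
ceph mon stat

# Nine OSDs should be reported as up and in
ceph osd stat

# Tree view showing which OSDs live on which node
ceph osd tree
```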

  4. We were getting a too few PGs per OSD warning, so we increased the PG count of the default RBD pool from 64 to 128. Check the status of your Ceph cluster; at this stage, your cluster is healthy (a sketch of the relevant commands follows). PGs (placement groups) are covered in detail in Chapter 9, Ceph Under the Hood.
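If you prefer to make the PG change by hand rather than through the playbook variables, the following is a minimal sketch, assuming the warning relates to the default rbd pool mentioned above:

```bash
# Raise pg_num first, then pgp_num, for the rbd pool
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128

# Verify that the cluster now reports HEALTH_OK
ceph -s
```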