
How to do it…

A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster should run an odd number of monitors greater than one, for example, 3 or 5, so that a quorum can be formed; Ceph uses the Paxos algorithm to maintain the quorum majority. You will notice that your Ceph cluster currently shows HEALTH_WARN; this is because we have not configured any OSDs other than those on ceph-node1. By default, data in a Ceph cluster is replicated three times, on three different OSDs hosted on three different nodes.
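If you want to confirm both points from the command line before scaling out, the following is a minimal check run from ceph-node1; it assumes the default rbd pool created earlier in this setup and access to the ceph CLI as root:

```
# Overall cluster state; expect HEALTH_WARN while only ceph-node1 hosts OSDs
ceph -s

# Replication factor of the default rbd pool (3 by default)
ceph osd pool get rbd size
```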

Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster and configure OSDs on ceph-node2 and ceph-node3:

  1. Add the Ceph hosts ceph-node2 and ceph-node3 to /etc/ansible/hosts:
  2. Verify that Ansible can reach the Ceph hosts mentioned in /etc/ansible/hosts:
  3. Run the Ansible playbook to scale out the Ceph cluster on ceph-node2 and ceph-node3 (a combined sketch of steps 1 to 3 follows this list):
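The following is a minimal sketch of steps 1 to 3, run as root on ceph-node1. It assumes the ceph-ansible layout used earlier in this chapter, namely the [mons] and [osds] groups in /etc/ansible/hosts and the playbook directory under /usr/share/ceph-ansible; adjust the group names and path to match your environment:

```
# Step 1: the inventory should list all three nodes in the mons and osds groups
cat /etc/ansible/hosts
# [mons]
# ceph-node1
# ceph-node2
# ceph-node3
#
# [osds]
# ceph-node1
# ceph-node2
# ceph-node3

# Step 2: verify that Ansible can reach every host in the inventory
ansible all -m ping

# Step 3: rerun the site playbook to deploy the new monitor and OSD daemons
cd /usr/share/ceph-ansible
ansible-playbook site.yml
```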

Once the playbook completes the cluster scale-out job and the play recap shows failed=0, Ceph Ansible has deployed the additional Ceph daemons in the cluster, as shown in the following screenshot.

You now have three more OSD daemons and one more monitor daemon running on ceph-node2, and three more OSD daemons and one more monitor daemon running on ceph-node3, for a total of nine OSD daemons and three monitor daemons across the three nodes:
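If you prefer to confirm the new daemon counts from the command line rather than the screenshot, the following checks can be run from any monitor node; the node names are those used in this recipe:

```
# Three monitors should now be in quorum
ceph mon stat

# Nine OSDs, spread across ceph-node1, ceph-node2, and ceph-node3
ceph osd tree
ceph osd stat
```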

  4. We were getting a too few PGs per OSD warning, so we increased the placement group count of the default rbd pool from 64 to 128. Check the status of your Ceph cluster; at this stage, your cluster is healthy. PGs (placement groups) are covered in detail in Chapter 9, Ceph Under the Hood.
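A minimal sketch of this step, assuming the pool in question is the default rbd pool: pg_num and pgp_num are raised together, and the cluster status is checked afterwards.

```
# Raise the placement group counts for the rbd pool from 64 to 128
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128

# The cluster should report HEALTH_OK once the new PGs are created and peered
ceph -s
```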