Ceph Cookbook, by Karan Singh
Scaling up your Ceph cluster
At this point, we have a running Ceph cluster with one MON and three OSDs configured on ceph-node1. Now, we will scale up the cluster by adding ceph-node2 and ceph-node3 as MON and OSD nodes.
How to do it…
A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on more than one monitor, ideally an odd number such as 3 or 5, to form a quorum. It uses the Paxos algorithm to maintain quorum majority. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster:
- Add a public network address to the /etc/ceph/ceph.conf file on ceph-node1:
  public network = 192.168.1.0/24
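For context, here is a minimal sketch of how that line might sit in the [global] section of /etc/ceph/ceph.conf. The fsid, mon_initial_members, and mon_host entries below are placeholders standing in for whatever your existing deployment already contains; only the public network line is the addition from this step:

  [global]
  fsid = <existing cluster fsid>
  mon_initial_members = ceph-node1
  mon_host = <ceph-node1 address on the 192.168.1.0/24 network>
  public network = 192.168.1.0/24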
- From ceph-node1, use ceph-deploy to create a monitor on ceph-node2:
  # ceph-deploy mon create ceph-node2
- Repeat this step to create a monitor on ceph-node3:
  # ceph-deploy mon create ceph-node3
- Check the status of your Ceph cluster; it should show three monitors in the MON section:
  # ceph -s
  # ceph mon stat
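As an optional extra check, not part of the original recipe, you can inspect the monitor map and quorum membership directly; all three monitor nodes should appear in the quorum:

  # ceph mon dump
  # ceph quorum_status --format json-pretty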
You will notice that your Ceph cluster is currently showing HEALTH_WARN; this is because we have not configured any OSDs other than the ones on ceph-node1. By default, the data in a Ceph cluster is replicated three times, on three different OSDs hosted on three different nodes. Now, we will configure OSDs on ceph-node2 and ceph-node3:
- Use ceph-deploy from ceph-node1 to perform a disk list, disk zap, and OSD creation on ceph-node2 and ceph-node3:
  # ceph-deploy disk list ceph-node2 ceph-node3
  # ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
  # ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
  # ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
  # ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
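As an optional verification before moving on, you can list the OSD tree and summary; with three disks prepared on each of the two new nodes plus the three OSDs already on ceph-node1, you should see nine OSDs in total, all up and in:

  # ceph osd tree
  # ceph osd stat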
- Since we have added more OSDs, we should tune the pg_num and pgp_num values for the rbd pool to achieve a HEALTH_OK status for our Ceph cluster:
  # ceph osd pool set rbd pg_num 256
  # ceph osd pool set rbd pgp_num 256
Tip
Starting with the Ceph Hammer release, rbd is the only default pool that gets created. Ceph versions before Hammer create three default pools: data, metadata, and rbd.
- Check the status of your Ceph cluster; at this stage, your cluster will be healthy.
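A quick sanity check on the placement group numbers chosen above: a commonly used rule of thumb is (number of OSDs × 100) / replica count, rounded to a nearby power of two. With nine OSDs and the default replica count of 3, that gives 300, and 256 is the nearest power of two, which is why the recipe sets pg_num and pgp_num to 256. You can read the values back to confirm they were applied:

  # ceph osd pool get rbd pg_num
  # ceph osd pool get rbd pgp_num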