Learning Ceph (Second Edition)
Anthony D'Atri, Vaibhav Bhembre, Karan Singh
MONs
Of all the nomenclature and jargon within the Ceph ecosystem, Ceph MONs are perhaps the most misleadingly named. While MONs do monitor cluster status, they do much more besides: they act as arbiters, traffic cops, and physicians for the cluster as a whole. As with OSDs, a Ceph MON is, strictly speaking, a daemon process (ceph-mon) that communicates with peer MONs, OSDs, and users, maintaining and distributing various information vital to cluster operations. In practice, the term is also used to refer to the servers on which these daemons run, that is, monitor nodes, MON nodes, or simply mons.
As with all other Ceph components, MONs need to be distributed, redundant, and highly available while also ensuring strict data consistency at all times. MONs accomplish this by participating in a sophisticated quorum that utilizes an algorithm called Paxos. It is recommended to provision at least three for production clusters, and always an odd number, to avoid a problematic situation known as split brain, where network issues prevent some members from talking to each other, with the potential for more than one believing it is in charge and, worse yet, for data divergence. Readers familiar with other clustering technologies, such as Oracle Solaris Cluster, may already be familiar with some of these concepts.
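To make the odd-number recommendation concrete, here is a minimal Python sketch (illustrative arithmetic only, not part of Ceph) that computes the majority quorum size a Paxos-style group needs and how many MON failures each size can survive:

    # Illustrative sketch: why odd MON counts are recommended.
    # A Paxos-style quorum requires a strict majority of members.
    def quorum_size(mons: int) -> int:
        return mons // 2 + 1

    def tolerated_failures(mons: int) -> int:
        return mons - quorum_size(mons)

    for n in range(1, 8):
        print(f"{n} MONs: quorum={quorum_size(n)}, "
              f"tolerates {tolerated_failures(n)} failure(s)")

Note that four MONs tolerate no more failures than three, and six no more than five: the extra even member buys no additional fault tolerance, while adding one more way to lose quorum.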
Among the data managed by Ceph MONs are maps of OSDs, other MONs, placement groups, and the CRUSH map, which describes where data should be placed and found. MONs are thus distributors of this data: they distribute initial state and updates to each other, to Ceph OSDs, and to Ceph clients. Alert readers might ask at this point, "Hey, you said Ceph doesn't have a bottlenecked, centralized metadata store. Who are you trying to kid?"
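You can ask the MONs for these maps yourself. One way, sketched below under the assumption that the librados Python bindings are installed and that /etc/ceph/ceph.conf plus a client.admin keyring are available, is to send the monitors a mon_command requesting the OSD map:

    # Sketch: query the MONs for the current OSD map via the
    # librados Python bindings. Paths and credentials are
    # assumptions; adjust them for your own cluster.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'osd dump', 'format': 'json'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        osdmap = json.loads(outbuf)
        print('OSD map epoch:', osdmap['epoch'])
        print('OSDs known to the MONs:', len(osdmap['osds']))
    finally:
        cluster.shutdown()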
The answer is that while these maps may be considered a type of metadata, they are data concerning the Ceph cluster itself, not actual user data. The secret sauce here is CRUSH, which will be described in more detail later in this chapter. The CRUSH algorithm operates on the CRUSH map and the PG map so that both clients and the Ceph backend can independently determine where given data lives. Clients are thus kept up to date with all they need to perform their own calculations that direct them to their data within the cluster's constellation of OSDs. By enabling clients to dynamically determine where their data resides, Ceph scales without choke points or bottlenecks.
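The essence of that client-side calculation can be shown with a deliberately simplified Python sketch. This is not CRUSH itself (the real algorithm also weighs the CRUSH hierarchy, device weights, and failure domains); it only demonstrates the core idea that anyone holding the object name, the PG count, and the OSD list computes the same placement, with no lookup service involved:

    # Toy illustration of deterministic placement -- NOT the real
    # CRUSH algorithm, which also accounts for the CRUSH hierarchy,
    # device weights, and failure domains.
    import hashlib
    import random

    def object_to_pg(obj_name: str, pg_num: int) -> int:
        # Hash the object name down to a placement group.
        digest = hashlib.md5(obj_name.encode()).hexdigest()
        return int(digest, 16) % pg_num

    def pg_to_osds(pg: int, osds: list, replicas: int = 3) -> list:
        # Deterministically pick replica OSDs, seeding on the PG id
        # so every client arrives at the same set independently.
        rng = random.Random(pg)
        return rng.sample(osds, replicas)

    osds = list(range(12))          # hypothetical 12-OSD cluster
    pg = object_to_pg('my-object', pg_num=128)
    print('PG:', pg, '-> OSDs:', pg_to_osds(pg, osds))

Because the calculation is a pure function of the maps and the object name, a client never has to ask a central server where its data lives; it only needs current maps from the MONs.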