- Learning Ceph (Second Edition)
- Anthony D'Atri, Vaibhav Bhembre, Karan Singh
CephFS Metadata Server (MDS)
In order to manage and present data in the familiar form of a tree-organized filesystem, Ceph needs to store additional metadata to support the semantics users expect (a brief illustration follows the list):
- Permissions
- Hierarchy
- Names
- Timestamps
- Owners
- Mostly POSIX-compliant behavior (mostly)
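As a concrete illustration, CephFS surfaces some of this metadata to clients as ordinary POSIX attributes and as virtual extended attributes. A minimal sketch, assuming a CephFS filesystem is already mounted at the hypothetical path /mnt/cephfs:

```bash
# Ordinary POSIX metadata (owner, permissions, timestamps) is
# served to clients via the MDS.
stat /mnt/cephfs/somedir

# CephFS-specific virtual xattrs: recursive entry and byte counts
# for a directory subtree, maintained by the MDS.
getfattr -n ceph.dir.rentries /mnt/cephfs/somedir
getfattr -n ceph.dir.rbytes /mnt/cephfs/somedir
```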
Unlike legacy systems, the CephFS MDS is designed to facilitate scaling. It is important to note that actual file data does not flow through the MDS: as with RBD volumes, CephFS clients use RADOS to perform bulk data operations directly against a scalable number of distributed OSD storage daemons. In a loose sense, the MDS implements a control plane while RADOS implements the data plane; indeed, the metadata managed by the MDS itself resides on the OSDs via RADOS, alongside the payload file data.

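To make that relationship concrete, here is a minimal sketch of provisioning a CephFS filesystem. The pool names, the placement-group count of 64, and the filesystem name cephfs are hypothetical placeholders you would size and name for your own cluster:

```bash
# Both file data and MDS metadata live in ordinary RADOS pools on
# the OSDs; the pg count of 64 is a placeholder.
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Tie the two pools together as a filesystem. An MDS then manages
# the metadata pool while clients write file data directly to the
# data pool.
ceph fs new cephfs cephfs_metadata cephfs_data

# Confirm the filesystem and its pools.
ceph fs ls
```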
Note that MDS daemons are required only if you plan to use the CephFS file-based interface; the majority of clusters that provide only block and/or object user-facing services do not need to provision them at all. Note also that CephFS is best limited to use among servers (a B2B service, if you will) rather than B2C. Some Ceph operators have experimented with running NFS or Samba (SMB/CIFS) gateways to provide service directly to workstation clients, but this should be considered an advanced use case.
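For completeness, a hedged sketch of a server-side mount using the kernel CephFS client; the monitor host mon1, the CephX user admin, and the secret-file path are assumptions to adapt for your environment:

```bash
# Hypothetical monitor host, user name, and secret file; adjust all
# three for your own cluster.
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```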
Although CephFS is the oldest of Ceph's user-facing interfaces, it has not received as much user and developer attention as the RBD block service and the common RADOS core. In fact, CephFS was not considered production-ready until the Jewel release in early 2016, and as of this writing it still has certain limitations; notably, running multiple MDS daemons in parallel for scaling and high availability remains problematic. One can and should run multiple MDS daemons, but through the Kraken release only one can safely be active at a time; additional MDS instances should operate in a standby role, ready to take over if the primary fails. With the Luminous release, multiple active MDS instances are supported. Future releases are expected to continue improving the availability and scaling of the MDS service.
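The sketch below, assuming a filesystem named cephfs as created earlier, shows how one might inspect MDS state and, on Luminous or later, raise the number of active MDS daemons while standbys continue to cover failover:

```bash
# Show active and standby MDS daemons.
ceph mds stat

# Luminous and later: permit two active MDS daemons for the
# (hypothetical) filesystem named cephfs; remaining daemons stay
# in standby for failover.
ceph fs set cephfs max_mds 2
```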
For additional guidance, see http://docs.ceph.com/docs/master/cephfs/best-practices and http://docs.ceph.com/docs/master/cephfs/posix.