
Ceph

Ceph stands out from the storage solution crowd by virtue of its feature set. It was designed to overcome the limitations of existing storage systems, and it effectively replaces aging and expensive proprietary solutions. Ceph is economical because it is open source and software-defined, and because it runs on nearly any commodity hardware. Clients enjoy the flexibility of Ceph's multiple access modalities (block, object, and file) against a single backend.

Every Ceph component is reliable and supports high availability and scaling. A properly configured Ceph cluster is free from single points of failure and accepts an arbitrary mix of file types and sizes without performance penalties.

Being distributed, Ceph does not follow the traditional approach of placing and accessing data through centralized metadata. Instead, it introduces a new paradigm in which clients independently calculate the locations of their data, via the CRUSH algorithm, and then access storage nodes directly. This is a significant performance win for clients, as they need not queue up to receive data locations and payloads from a central metadata server. Moreover, data placement inside a Ceph cluster is transparent and automatic; neither clients nor administrators need to manually or consciously spread data across failure domains.
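To make this concrete, here is a minimal sketch using Ceph's Python librados bindings (the rados module that ships with Ceph). The pool name and configuration path below are placeholder assumptions; the point is that the client library itself computes the object's placement and talks to the responsible storage daemons directly, with no metadata server in the data path.

    import rados

    # Connect using the cluster configuration and the client keyring it
    # references; this path is a common default, not a requirement.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context against a pool; 'data' is a placeholder name.
    ioctx = cluster.open_ioctx('data')

    # The library hashes the object name and calculates its placement
    # client-side, then writes to and reads from the responsible OSDs
    # directly -- no central lookup occurs on the data path.
    ioctx.write_full('greeting', b'hello ceph')
    print(ioctx.read('greeting'))

    ioctx.close()
    cluster.shutdown()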

Ceph is self-healing and self-managing. In the event of disaster, when other storage systems cannot survive multiple failures, Ceph remains rock solid. Ceph detects and corrects failures at every level, managing component loss automatically and healing without impacting data availability or durability. Other storage solutions can provide reliability only at drive or node granularity.

Ceph also scales easily from a single server to thousands, and unlike many proprietary solutions, your initial investment at modest scale will not be discarded when you need to expand. A major advantage of Ceph over proprietary solutions is that you will have performed your last ever forklift upgrade: Ceph's redundant and distributed design allows individual components to be replaced or updated piecemeal in a rolling fashion. Neither components nor entire hosts need to be from the same manufacturer.

Examples of upgrades that the authors have performed on entire petabyte-scale production clusters, without clients skipping a beat, are as follows:

  • Migrate from one Linux distribution to another
  • Upgrade within a given Linux distribution, for example, RHEL 7.1 to RHEL 7.3
  • Replace all payload data drives
  • Update firmware
  • Migrate between journal strategies and devices
  • Repair hardware, including replacement of entire chassis
  • Expand capacity by swapping smaller drives for larger ones
  • Expand capacity by adding servers
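
Most of these boil down to the same rolling pattern: tell the cluster not to rebalance while components are briefly down, service one failure domain at a time, and wait for the cluster to settle before moving on. The following is a rough sketch of that pattern in Python, assuming the standard ceph CLI is available; the hostnames and the health check are simplified placeholders.

    import subprocess
    import time

    def ceph(*args):
        # Run a ceph CLI subcommand and return its stdout.
        return subprocess.run(['ceph', *args], check=True,
                              capture_output=True, text=True).stdout

    def pgs_settled():
        # Simplified check: no placement groups degraded or recovering.
        status = ceph('pg', 'stat')
        return 'degraded' not in status and 'recovering' not in status

    # The noout flag tells the cluster not to rebalance data away from
    # OSDs that are briefly down for maintenance.
    ceph('osd', 'set', 'noout')
    try:
        for host in ('node1', 'node2', 'node3'):  # placeholder hostnames
            # ... upgrade, reboot, or repair this host here ...
            # Touch only one failure domain at a time: wait for the
            # cluster to settle before moving on to the next host.
            while not pgs_settled():
                time.sleep(10)
    finally:
        # Re-enable normal rebalancing once maintenance is complete.
        ceph('osd', 'unset', 'noout')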

Unlike many RAID and other traditional storage solutions, Ceph is highly adaptable and does not require storage drives or hosts to be identical in type or size. A cluster that begins with 4TB drives can readily expand with 6TB or 8TB drives, either as replacements for smaller drives or in incrementally added servers. A single Ceph cluster can also contain a mix of storage drive types, sizes, and speeds, whether for differing workloads or to implement tiering that leverages cost-effective slower drives for bulk storage and faster drives for reads or caching.
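Mixed drive sizes work because CRUSH places data in proportion to each OSD's weight, and by convention an OSD's weight is its raw capacity in TiB. A trivial illustration of that proportionality, with example sizes:

    # CRUSH weights are conventionally the OSD's capacity in TiB, so
    # mixed drives receive data in proportion to size. Example values:
    drives_tib = {'osd.0': 3.64, 'osd.1': 5.46, 'osd.2': 7.28}  # ~4/6/8 TB
    total = sum(drives_tib.values())
    for osd, size in drives_tib.items():
        # An OSD's expected share of data tracks its share of capacity.
        print(f'{osd}: weight={size:.2f}  expected share={size / total:.1%}')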

While there are certain administrative conveniences to a uniform set of servers and drives, it is also quite feasible to mix and match server models, generations, and even brands within a cluster.
