
Consistent hashing

The solution is consistent hashing. Introduced as a term in 1997, consistent hashing was originally used as a means of routing requests among a large number of web servers. It's easy to see how the web could benefit from a hash mechanism that allows any node in the network to efficiently determine the location of an object, in spite of the constant shifting of nodes in and out of the network. This is the fundamental objective of consistent hashing.

How it works

With consistent hashing, the buckets are arranged in a ring that spans a predefined range of values; the exact range depends on the partitioner being used. Keys are hashed to produce a value that lies somewhere along the ring, and each node is assigned a range of those values, which is computed as follows:

Tip

The following examples assume the default Murmur3Partitioner is used. For more information on this partitioner, take a look at the documentation, which can be found here: http://docs.datastax.com/en/cassandra/3.x/cassandra/architecture/archPartitionerM3P.html

Therefore, for a five-node cluster, a ring with evenly distributed token ranges would look like this, presuming the default Murmur3Partitioner is used:

The primary replica for each key is assigned to a node based on its hashed value. Each node is responsible for the region of the ring between itself (inclusive) and its predecessor (exclusive).

This diagram represents data ranges (the letters) and the nodes (the numbers) that own those ranges. It may also be helpful to visualize this in table form, which may be more familiar to those who have used the nodetool ring command to view Cassandra's topology:
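As a rough illustration, the following Python sketch computes five evenly spaced tokens for such a cluster and prints them in a similar table form. It assumes the Murmur3Partitioner's full token range of -2^63 to 2^63 - 1; the node labels are placeholders, not values from the original example.

```python
# Sketch: evenly spaced tokens for a hypothetical five-node cluster,
# assuming the Murmur3Partitioner's token range of -2**63 to 2**63 - 1.
RING_MIN = -2**63
RING_SIZE = 2**64
NUM_NODES = 5

tokens = [RING_MIN + (i * RING_SIZE) // NUM_NODES for i in range(NUM_NODES)]

# Print a table similar in spirit to the output of `nodetool ring`:
# each node owns the range ending at (and including) its own token.
print(f"{'node':<6} {'range start (exclusive)':>24} {'range end (inclusive)':>24}")
for i, token in enumerate(tokens):
    predecessor = tokens[i - 1]  # Python's negative index wraps for the first node
    print(f"node{i + 1:<2} {predecessor:>24} {token:>24}")
```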

When Cassandra receives a key for either a read or a write, the same hash function is applied to the key to determine where it falls on the ring. Since all nodes in the cluster are aware of the other nodes' ranges, any node can handle a request for any other node's range. The node that receives the request is called the coordinator, and any node can act in this role. If a key does not belong to the coordinator's own range, the coordinator forwards the request to the replicas responsible for the correct range.
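To make that lookup concrete, here is a minimal sketch, reusing the placeholder tokens computed above: the owner of a hashed key is the first node token greater than or equal to the key's token, wrapping around past the top of the ring.

```python
import bisect

# Placeholder tokens for a five-node ring (see the previous sketch).
NODE_TOKENS = sorted(-2**63 + (i * 2**64) // 5 for i in range(5))

def owning_token(key_token):
    """Return the node token that owns key_token.

    Each node owns the ring segment from its predecessor's token
    (exclusive) up to its own token (inclusive), so the owner is the
    first node token >= key_token, wrapping to the lowest token when
    the key hashes above the highest node token.
    """
    idx = bisect.bisect_left(NODE_TOKENS, key_token)
    return NODE_TOKENS[idx % len(NODE_TOKENS)]

print(owning_token(42))          # owned by the first node token above zero
print(owning_token(2**63 - 1))   # above the highest token, so it wraps to the lowest
```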

Following our previous example, we can now examine how our names map to hash values using the Murmur3 algorithm. Once the values are computed, they can be matched to the range owned by one of the nodes in the cluster, as follows:
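The exact names and hash values from the earlier example are not reproduced here, but the idea can be sketched with the third-party mmh3 package and a few placeholder keys. Cassandra's Murmur3Partitioner keeps only the first 64-bit half of MurmurHash3_x64_128, and its Java implementation has minor quirks, so the values below illustrate the mapping rather than reproduce Cassandra's tokens exactly.

```python
import mmh3  # third-party package: pip install mmh3

# Placeholder keys; the names used earlier in the chapter would be
# hashed in exactly the same way.
names = ["alice", "bob", "carol", "dave", "eve"]

for name in names:
    # hash64 returns the two signed 64-bit halves of MurmurHash3_x64_128;
    # the first half is what a Murmur3-style partitioner would use as the
    # key's token.
    token = mmh3.hash64(name.encode("utf-8"))[0]
    print(f"{name:>6}: {token}")
```

Each of these tokens can then be matched against the node ranges from the previous sketch (for example, by passing it to owning_token) to see which node holds the primary replica for that key.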

The placement of these keys might be easier to understand by visualizing their position in the ring:

The hash value of the name keys determines their placement in the cluster

Now that you understand the basics of consistent hashing, let's turn our focus to the mechanism by which Cassandra assigns data ranges.
