
Showing off straightaway

Within the Hazelcast JAR file, there is a very useful ConsoleApp utility class (previously known as TestApp, a slightly deceptive name given that it can be used for operations beyond just testing). It is great in that it provides a simple text console for easy access to the distributed collections.

To fire this up, we need to run this class with the Hazelcast JAR file as the classpath. Alternatively, we can use the console scripts provided in the demo/ directory.

$ java -cp hazelcast-3.5.jar com.hazelcast.console.ConsoleApp

This should bring up a fair amount of verbose logging, but the reassuring section to look for, confirming that a cluster has been formed, is the following output:

Members [1] {
 Member [127.0.0.1]:5701 this
}

The output lets us know that a new cluster comprising a single node has been created; the current node is indicated by this. The instance was started using the default configuration built into the JAR, a copy of which can be found at bin/hazelcast.xml in the archive that we unpacked in the previous section. We will now be presented with a basic console prompt provided by the ConsoleApp class, as follows:

hazelcast[default] >

To get information about using the console, issue the help command. The response will be quite extensive, but it will be along the lines of the following output:

hazelcast[default] > help
Commands:
-- General commands
jvm
 //displays info about the runtime
who
 //displays info about the cluster
whoami
 //displays info about this cluster member
ns <string>
 //switch the namespace

-- Map commands
m.put <key> <value>
 //puts an entry to the map
m.remove <key>
 //removes the entry of given key from the map
m.get <key>
 //returns the value of given key from the map
m.keys
 //iterates the keys of the map
m.values
 //iterates the values of the map
m.entries
 //iterates the entries of the map
m.size
 //size of the map
m.clear
 //clears the map
m.destroy
 //destroys the map

We can now use various map manipulation commands such as m.put, m.get, and m.remove to interact with the default distributed map, as follows:

hazelcast[default] > m.put foo bar
null

hazelcast[default] > m.get foo
bar

hazelcast[default] > m.entries
foo : bar
Total 1

hazelcast[default] > m.remove foo
bar

hazelcast[default] > m.size
Size = 0

It is obvious that while the map has the potential of being distributed, any changes we made will be lost when the node shuts down, as we are only running a single-node instance. To avoid this, let's start up another node. As each node should be identical in its configuration, we repeat exactly the same process that we used to start the first node. This time, however, we should see two nodes in the startup logging, which lets us know that the new instance has successfully joined the existing cluster created by the console, as follows:

Members [2] {
 Member [127.0.0.1]:5701
 Member [127.0.0.1]:5702 this
}
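Nothing requires the second member to be another console; Hazelcast is just a Java library, so we could equally join the cluster from our own code. The following is only a rough sketch, assuming that the console's default namespace corresponds to a distributed map named default; the class name and the greeting key are invented purely for illustration:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class JoiningNode {
    public static void main(String[] args) {
        // Start a member with the default configuration; it should discover
        // the console's node via multicast and join its cluster
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Print the cluster view - we would expect to see two members here
        System.out.println(hz.getCluster().getMembers());

        // Access what we assume is the same distributed map ("default")
        // that the console manipulates
        IMap<String, String> map = hz.getMap("default");
        map.put("greeting", "hello");
        System.out.println(map.get("greeting"));
    }
}

Either way, the startup logging on both nodes should now report the two-member view shown above.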

If you don't see two nodes, it is possible that the network interface that Hazelcast selects by default doesn't support multicast. You can confirm that this is likely to be the case by checking which interface is associated with the IP address listed in the log and by looking for the following line:

WARNING: [127.0.0.1]:5702 [dev] [3.5] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!

To address this, the simplest solution at this stage is to disable the interface that is causing problems, if you are able to do so. Otherwise, copy the bin/hazelcast.xml configuration into the working directory and edit it to force Hazelcast to use a particular network interface. The following setting will definitely fix the issue, although it does skip a little further ahead:

<interfaces enabled="true">
  <interface>127.0.0.1</interface>
</interfaces>
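For context, the <interfaces> element sits inside the <network> section of hazelcast.xml, alongside the join configuration. A trimmed sketch of how that part of the copied file might look for a purely local setup (the port and multicast values shown here are the defaults rather than anything we need to change) is as follows:

<network>
  <port auto-increment="true">5701</port>
  <join>
    <multicast enabled="true"/>
  </join>
  <interfaces enabled="true">
    <interface>127.0.0.1</interface>
  </interfaces>
</network>

Hazelcast should pick up a hazelcast.xml placed in the working directory (or on the classpath) in preference to the copy built into the JAR, which is what makes this override take effect.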

Once we have successfully started the second node, we should immediately have access to the same data that we persisted on the first. Additionally, behind the scenes, Hazelcast will rebalance the cluster to take advantage of the new node, making it the owner of a number of partitions and ensuring that a backup copy of all the data is held across the two nodes. We should be able to confirm that this is happening by taking a closer look at the log entries being generated, as follows:

INFO: [127.0.0.1]:5701 [dev] [3.5] Re-partitioning cluster data... Migration queue size: 135

Hazelcast can handle new nodes appearing at pretty much any time without affecting its data. We can simulate a node failure by invoking the exit command on one of the test consoles to shut it down (Ctrl + C has the same effect). The data actually held on that node will be lost, but if we restart it, it will come back with all the previous data; this is because the other node keeps running and is able to reinitialize the restarted node from its backup copy of the cluster data. As we learned in the previous section, the standard backup count is 1 by default (we will configure this later on). So, as long as we don't suffer more node failures than the backup count in a short amount of time (before the cluster has had a chance to react and rebalance the data), we shall not encounter any overall data loss. Why not give this a try? Let's see if we can lose some data. After all, it's only held in-memory!

One sure-fire way to expose this issue is to create a cluster of many nodes (it is important to have more nodes than the backup count) and fail a number of them in quick succession. To try this, we can use the test console to create a map with a large number of entries, as follows:

hazelcast[default] > m.putmany 10000
size = 10000, 29585 evt/s, 23113 Kbit/s, 976 KB added

hazelcast[default] > m.size
Size = 10000

Then, quickly fail multiple nodes. We will get the following logging, indicating potential data loss, and we can confirm the extent of the loss by looking at the size of the map:

WARNING: [127.0.0.1]:5701 [dev] [3.5] Owner of partition is being removed! Possible data loss for partition[213].

hazelcast[default] > m.size
Size = 8250

The output tells us that we can add more nodes to a cluster quickly without affecting the overall data, but we have to give Hazelcast enough time to rebalance the cluster if we wish to remove nodes. We can see this rebalancing occurring in the logs of the remaining nodes as the partitions belonging to the failed node are reassigned. To find out when things have calmed down, a migration listener would give us more visibility of this process; that's a topic for later, though there is a small taste after the following log output:

INFO: [127.0.0.1]:5701 [dev] [3.5] Re-partitioning cluster data... Migration queue size: 181

INFO: [127.0.0.1]:5701 [dev] [3.5] All migration tasks have been completed, queues are empty.
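As that small taste (the listener API is covered properly later), the following sketch registers a migration listener that logs each partition migration as it starts and completes; the class name and the printed messages are just illustrative:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.MigrationEvent;
import com.hazelcast.core.MigrationListener;

public class MigrationWatcher {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Watch individual partitions being reassigned as the cluster rebalances
        hz.getPartitionService().addMigrationListener(new MigrationListener() {
            public void migrationStarted(MigrationEvent event) {
                System.out.println("Migration started: " + event);
            }
            public void migrationCompleted(MigrationEvent event) {
                System.out.println("Migration completed: " + event);
            }
            public void migrationFailed(MigrationEvent event) {
                System.out.println("Migration failed: " + event);
            }
        });
    }
}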

To cope with failure, we will need to understand our infrastructure's stability and set the backup count high enough to be able to handle the unexpected.
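Jumping ahead slightly, the backup count is a per-map setting in hazelcast.xml; a minimal sketch for the default map might look like the following, where the value of 2 is just an example rather than a recommendation:

<map name="default">
  <backup-count>2</backup-count>
</map>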

Tip

Downloading the example code

You can download the example code files for all the Packt books that you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
