- Getting Started with Hazelcast
- Mat Johns
Showing off straightaway
Within the Hazelcast JAR, there is the very useful utility class TestApp. The class name is a little deceptive as it can be used in more ways than just testing, but its greatest offering is that it provides a simple text console for easy access to distributed collections.
To fire this up, we need to run this class using the Hazelcast JAR as the classpath.
$ java -cp hazelcast-2.6.jar com.hazelcast.examples.TestApp
This should bring up a fair amount of verbose logging, but the reassuring section to look for, which shows that a cluster has been formed, is the following:
Members [1] {
    Member [127.0.0.1]:5701 this
}
This lets us know that a new cluster of one node has been created, with our node indicated by this. The configuration used to start up this instance is the default one built into the JAR; you can find a copy of it at bin/hazelcast.xml within the unpacked archive that we downloaded in the previous section. We should now be presented with a basic console interface prompt provided by the TestApp class.
hazelcast[default] >
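Incidentally, the same built-in defaults are what you get when embedding Hazelcast directly in your own Java application. The following is a minimal sketch (not taken from the book's example code; the class name StartNode is just illustrative) of starting a node programmatically. With the default settings, which leave multicast discovery enabled, it should find and join the TestApp node running on the same machine:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class StartNode {
    public static void main(String[] args) {
        // Programmatic defaults, broadly matching the hazelcast.xml bundled in the JAR
        Config config = new Config();
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Started node: " + hz.getCluster().getLocalMember());
    }
}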
To get lots of information about using the console, issue the help command. The response will be quite extensive, but will be along the lines of the following:
hazelcast[default] > help
Commands:
-- General commands
jvm                    //displays info about the runtime
who                    //displays info about the cluster
whoami                 //displays info about this cluster member
ns <string>            //switch the namespace
-- Map commands
m.put <key> <value>    //puts an entry to the map
m.remove <key>         //removes the entry of given key from the map
m.get <key>            //returns the value of given key from the map
m.keys                 //iterates the keys of the map
m.values               //iterates the values of the map
m.entries              //iterates the entries of the map
m.size                 //size of the map
m.clear                //clears the map
m.destroy              //destroys the map
We can now use the various map manipulation commands, such as m.put, m.get, and m.remove, to interact with the default distributed map.
hazelcast[default] > m.put foo bar
null
hazelcast[default] > m.get foo
bar
hazelcast[default] > m.entries
foo : bar
Total 1
hazelcast[default] > m.remove foo
bar
hazelcast[default] > m.size
Size = 0
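These console commands map directly onto the distributed map API, so the same interaction could be sketched in embedded Java code roughly as follows (a sketch only; hz is a HazelcastInstance started as before, and "default" is assumed to be the map name behind the console's default namespace):

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class MapCommands {
    static void demo(HazelcastInstance hz) {
        IMap<String, String> map = hz.getMap("default");
        System.out.println(map.put("foo", "bar"));  // null (no previous value)
        System.out.println(map.get("foo"));         // bar
        System.out.println(map.entrySet());         // [foo=bar]
        System.out.println(map.remove("foo"));      // bar (the removed value)
        System.out.println(map.size());             // 0
    }
}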
Obviously, while our map has the potential to be distributed, we are only running a single node, so any changes will be lost when that node shuts down. To avoid this, let's start up another node. As each node should be identical in its configuration, we repeat exactly the same process we used to start the first node; however, this time we should see two nodes in the startup logging, which lets us know that our example application has successfully joined the existing cluster created by the TestApp console.
Members [2] {
    Member [127.0.0.1]:5701
    Member [127.0.0.1]:5702 this
}
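If you would rather watch membership changes from code than from the logs, a hedged sketch of the embedded equivalent would be the following (ClusterWatcher is just an illustrative name):

import com.hazelcast.core.Cluster;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.MembershipEvent;
import com.hazelcast.core.MembershipListener;

public class ClusterWatcher {
    static void watch(HazelcastInstance hz) {
        Cluster cluster = hz.getCluster();
        System.out.println("Current members: " + cluster.getMembers());

        // Print a line whenever a node joins or leaves the cluster
        cluster.addMembershipListener(new MembershipListener() {
            public void memberAdded(MembershipEvent event) {
                System.out.println("Member joined: " + event.getMember());
            }
            public void memberRemoved(MembershipEvent event) {
                System.out.println("Member left: " + event.getMember());
            }
        });
    }
}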
If you don't see two nodes, it is possible that the network interface Hazelcast is selecting by default doesn't support multicast. You can further confirm that this is likely to be the case by checking the interface associated with the IP address listed in the logging, and by looking for the following warning from the second node:
WARNING: [127.0.0.1]:5702 [dev] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
To address this, the simplest solution at this stage is to disable the offending interface, if you are able to do so. Otherwise, copy the bin/hazelcast.xml configuration to our working directory and edit it to force Hazelcast onto a particular network interface, as in the following example; this will definitely fix the issue, but it does skip a little further ahead:
<interfaces enabled="true">
    <interface>127.0.0.1</interface>
</interfaces>
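With the edited copy of hazelcast.xml sitting in our working directory, we can point the console at it explicitly using the hazelcast.config system property (the default configuration lookup should also find a hazelcast.xml in the working directory, but being explicit avoids any doubt):

$ java -Dhazelcast.config=hazelcast.xml -cp hazelcast-2.6.jar com.hazelcast.examples.TestApp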
Once we have successfully started the second node, we should immediately have access to the same data that we persisted on the first. Additionally, behind the scenes, Hazelcast will be rebalancing the cluster to take advantage of the new node, making it the owner of a number of partitions, as well as creating backup copies so that the data is held on both nodes. We should be able to confirm that this is happening by taking a closer look at the log entries being generated.
INFO: [127.0.0.1]:5701 [dev] Re-partitioning cluster data... Immediate-Tasks: 271, Scheduled-Tasks: 0
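To see the effect of that re-partitioning from code, one rough approach (a sketch only; the exact package of the Partition classes is assumed here and may differ between Hazelcast versions) is to count how many of the 271 partitions each member currently owns:

import java.util.HashMap;
import java.util.Map;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;
import com.hazelcast.partition.Partition;

public class PartitionCount {
    static void print(HazelcastInstance hz) {
        // Tally partition ownership per member; after rebalancing the counts should be roughly even
        Map<Member, Integer> owned = new HashMap<Member, Integer>();
        for (Partition partition : hz.getPartitionService().getPartitions()) {
            Member owner = partition.getOwner();
            Integer count = owned.get(owner);
            owned.put(owner, count == null ? 1 : count + 1);
        }
        System.out.println(owned);
    }
}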
Hazelcast can handle new nodes appearing pretty much at any time without risk to its data. We can simulate a node failure by invoking the exit command on one of our test console nodes to shut it down (Ctrl + C has the same effect). The actual data held on that node will be lost, but if we were to restart it, it should come back with all the previous data. This is because the other node remained running and was able to reinitialize the failed node with the cluster's data as it started back up. As we learned in the previous section, the standard backup count is 1 by default (which we will look to configure later on), so as long as we don't have more node failures than the backup count in a short amount of time (before the cluster has had a chance to react and rebalance the data), we shall not encounter any overall data loss. Why not give this a try? Let's see if we can lose some data; after all, it's only held in-memory!
One sure-fire way to expose this issue is to create a cluster of many nodes (importantly having more nodes than the backup count) and fail a number of them in quick succession. To try this, we can use the test console to create a map with a large number of entries.
hazelcast[default] > m.putmany 10000
size = 10000, 1222 evt/s, 954 Kbit/s, 976 KB added
hazelcast[default] > m.size
Size = 10000
Then quickly fail multiple nodes. We should see the following logging, indicating potential data loss, and we can confirm the extent of the loss by looking at the size of our map:
WARNING: [127.0.0.1]:5701 [dev] Owner of partition is being removed! Possible data loss for partition[213].
hazelcast[default] > m.size
Size = 8250
This tells us that we can add more nodes to a cluster quite quickly without risking the overall data, but we have to allow Hazelcast enough time to rebalance the cluster if we remove nodes. We can see this rebalancing occurring in the logs of the remaining nodes, as partitions owned by the now dead node are reassigned. To find out when things have calmed down, we can use a migration listener to give us more visibility on this process, but that's a topic for later.
INFO: [127.0.0.1]:5701 [dev] Re-partitioning cluster data... Immediate-Tasks: 181, Scheduled-Tasks: 0
For the case of failure, we will need to understand our infrastructure's stability and set the backup count high enough to be able to handle a certain amount of unexpected failure.
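As a preview of the configuration covered later, the backup count is a per-map setting in hazelcast.xml; a hedged example of raising the default map's backups from one copy to two would look something like the following:

<map name="default">
    <backup-count>2</backup-count>
</map>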
Tip
Downloading the example code
You can download the example code files for all Packt Publishing books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.