
OpenDaylight Cookbook
Mathieu Lemay, Alexis de Talhouët, Jamie Goodyear, Rashmi Pujar, Mohamed El Serngawy, Yrineu Rodrigues

How to do it...

Perform the following steps:

  1. Create three VMs.

The repository mentioned in the Getting ready section provides a Vagrantfile that spawns VMs with the following network characteristics:

  • Adapter 1: NAT
  • Adapter 2: Bridge en0: Wi-Fi (AirPort)
  • Static IP address: 192.168.50.15X (X being the number of the node)
  • Adapter type: paravirtualized

Clone the repository and bring up the VMs:

$ git clone https://github.com/adetalhouet/cluster-nodes.git  
$ cd cluster-nodes
$ export NUM_OF_NODES=3
$ vagrant up

After a few minutes, make sure the VMs are running correctly by executing the following command in the cluster-nodes folder:

$ vagrant status 
Current machine states:
node-1 running (virtualbox)
node-2 running (virtualbox)
node-3 running (virtualbox)

This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run vagrant status NAME.

The credentials of the VMs are:

  • User: vagrant
  • Password: vagrant

We now have three VMs available at the following IP addresses (a quick reachability check is sketched after the list):

  • 192.168.50.151
  • 192.168.50.152
  • 192.168.50.153
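
Optionally, you can confirm that each node answers on the network before going further. The following is a minimal sketch (a hypothetical helper, not part of the cluster-nodes repository) that probes the SSH port each Vagrant box exposes:

# check_nodes.py - hypothetical helper (not part of the cluster-nodes
# repository) that verifies each VM answers on its SSH port.
import socket

NODES = ["192.168.50.151", "192.168.50.152", "192.168.50.153"]

for ip in NODES:
    try:
        # The Vagrant boxes run sshd, so an open port 22 is a
        # reasonable liveness signal.
        with socket.create_connection((ip, 22), timeout=5):
            print(f"{ip}: reachable")
    except OSError as exc:
        print(f"{ip}: NOT reachable ({exc})")
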
  2. Prepare the cluster deployment.

In order to deploy the cluster, we will use the cluster-deployer script provided by OpenDaylight:

$ git clone https://git.opendaylight.org/gerrit/integration/test.git 
$ cd test/tools/clustering/cluster-deployer/

You will need the following information:

  • Your VMs'/containers' IP addresses:

192.168.50.151, 192.168.50.152, 192.168.50.153

  • Their credentials (must be the same for all the VMs/containers):

vagrant/vagrant

  • The path to the distribution to deploy:

$ODL_ROOT

  • The cluster's configuration files, located under the templates/multi-node-test directory (see the sanity-check sketch after this listing):
$ cd templates/multi-node-test/ 
$ ls -1
akka.conf.template
jolokia.xml.template
module-shards.conf.template
modules.conf.template
org.apache.karaf.features.cfg.template
org.apache.karaf.management.cfg.template
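
Before running the deployment, it can help to sanity-check these prerequisites in one pass. Here is a minimal sketch (a hypothetical helper; the script name and checks are assumptions, not part of the cluster-deployer tooling) verifying that the distribution archive and the expected templates are in place:

# check_prereqs.py - hypothetical pre-deployment sanity check.
# Run from the cluster-deployer folder; paths mirror the recipe.
import os
import sys

DISTRIBUTION = "../../../../distribution-karaf-0.4.0-Beryllium.zip"
TEMPLATE_DIR = "templates/multi-node-test"
EXPECTED_TEMPLATES = [
    "akka.conf.template",
    "jolokia.xml.template",
    "module-shards.conf.template",
    "modules.conf.template",
    "org.apache.karaf.features.cfg.template",
    "org.apache.karaf.management.cfg.template",
]

errors = []
if not os.path.isfile(DISTRIBUTION):
    errors.append(f"distribution not found: {DISTRIBUTION}")
for name in EXPECTED_TEMPLATES:
    path = os.path.join(TEMPLATE_DIR, name)
    if not os.path.isfile(path):
        errors.append(f"missing template: {path}")

if errors:
    print("\n".join(errors))
    sys.exit(1)
print("all deployment prerequisites found")
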
  3. Deploy the cluster.

We are currently located in the cluster-deployer folder:

$ pwd 
test/tools/clustering/cluster-deployer

We need to create a temp folder so the deployment script can store its temporary files there:

$ mkdir temp 

Your directory tree should look like this:

$ tree 
.
├── cluster-nodes
├── distribution-karaf-0.4.0-Beryllium.zip
└── test
    └── tools
        └── clustering
            └── cluster-deployer
                ├── deploy.py
                ├── kill_controller.sh
                ├── remote_host.py
                ├── remote_host.pyc
                ├── restart.py
                ├── temp
                └── templates
                    └── multi-node-test
Now let's deploy the cluster using this command:

$ python deploy.py --clean --distribution=../../../../distribution-karaf-0.4.0-Beryllium.zip --rootdir=/home/vagrant --hosts=192.168.50.151,192.168.50.152,192.168.50.153 --user=vagrant --password=vagrant --template=multi-node-test 

If the process goes well, you should see deployment logs similar to those available at:

https://github.com/jgoodyear/OpenDaylightCookbook/tree/master/chapter1/chapter1-recipe8
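
Once the script completes, each node should eventually answer on port 8181, the port used by the Jolokia requests in the next step. Here is a minimal polling sketch (a hypothetical helper; Karaf can take a few minutes to boot, and the five-minute deadline is an assumption):

# wait_for_cluster.py - hypothetical helper that waits for port 8181
# (Jolokia/RESTCONF) to open on every node after deployment.
import socket
import time

NODES = ["192.168.50.151", "192.168.50.152", "192.168.50.153"]
DEADLINE = time.time() + 300  # assumed: give Karaf up to five minutes

pending = set(NODES)
while pending and time.time() < DEADLINE:
    for ip in list(pending):
        try:
            with socket.create_connection((ip, 8181), timeout=3):
                print(f"{ip}: port 8181 open")
                pending.discard(ip)
        except OSError:
            pass
    if pending:
        time.sleep(10)

if pending:
    raise SystemExit(f"nodes never came up: {sorted(pending)}")
print("all nodes are listening on 8181")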

  4. Verify the deployment.

Let's use Jolokia to read the cluster nodes' data stores.

First, request the network-topology shard of the config data store on node 1, located at 192.168.50.151:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore
{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore",
    "type": "read"
  },
  "status": 200,
  "timestamp": 1462739174,
  "value": {
    --[cut]--
    "FollowerInfo": [
      {
        "active": true,
        "id": "member-2-shard-topology-config",
        "matchIndex": -1,
        "nextIndex": 0,
        "timeSinceLastActivity": "00:00:00.066"
      },
      {
        "active": true,
        "id": "member-3-shard-topology-config",
        "matchIndex": -1,
        "nextIndex": 0,
        "timeSinceLastActivity": "00:00:00.067"
      }
    ],
    --[cut]--
    "Leader": "member-1-shard-topology-config",
    "PeerAddresses": "member-2-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.152:2550/user/shardmanager-config/member-2-shard-topology-config, member-3-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.153:2550/user/shardmanager-config/member-3-shard-topology-config",
    "RaftState": "Leader",
    --[cut]--
    "ShardName": "member-1-shard-topology-config",
    "VotedFor": "member-1-shard-topology-config",
    --[cut]--
  }
}

The result presents a lot of interesting information, such as the leader of the requested shard, shown under Leader. We can also see the current state (under active) of each follower of this particular shard, identified by its id. Finally, it provides the addresses of the peers; these are Akka addresses, as Akka is the framework used to wire the nodes together within the cluster.
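
The same request can also be scripted. The following sketch reuses only the URL and Basic authorization header shown above (admin/admin); the script itself is a hypothetical convenience, not part of the recipe:

# read_shard.py - hypothetical script querying a shard MBean via Jolokia.
import base64
import json
import urllib.request

NODE_IP = "192.168.50.151"
MEMBER = 1  # the digit in the shard name must match the node queried
MBEAN = (f"org.opendaylight.controller:Category=Shards,"
         f"name=member-{MEMBER}-shard-topology-config,"
         f"type=DistributedConfigDatastore")
url = f"http://{NODE_IP}:8181/jolokia/read/{MBEAN}"

req = urllib.request.Request(url)
# admin:admin, base64-encoded -> YWRtaW46YWRtaW4=
token = base64.b64encode(b"admin:admin").decode()
req.add_header("Authorization", f"Basic {token}")

with urllib.request.urlopen(req) as resp:
    value = json.load(resp)["value"]

print("Leader:   ", value["Leader"])
print("RaftState:", value["RaftState"])
for follower in value.get("FollowerInfo", []):
    print("Follower: ", follower["id"], "active:", follower["active"])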

By requesting the same shard on another peer, you will see similar information. For instance, for node 2, located at 192.168.50.152:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://192.168.50.152:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore

Make sure to update the digit after member- in the shard name, as it must match the node you're requesting:

{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore",
    "type": "read"
  },
  "status": 200,
  "timestamp": 1462739791,
  "value": {
    --[cut]--
    "Leader": "member-1-shard-topology-config",
    "PeerAddresses": "member-1-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.151:2550/user/shardmanager-config/member-1-shard-topology-config, member-3-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.153:2550/user/shardmanager-config/member-3-shard-topology-config",
    "RaftState": "Follower",
    --[cut]--
    "ShardName": "member-2-shard-topology-config",
    "VotedFor": "member-1-shard-topology-config",
    --[cut]--
  }
}

We can see the peers of this shard, and that this node voted for node 1 to be elected the shard leader.
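
To confirm the whole cluster agrees on the election, you can repeat the query against every node and check that exactly one of them reports a RaftState of Leader for this shard. A short hypothetical sketch building on the previous one:

# leader_check.py - hypothetical sketch: confirm exactly one leader
# for the topology-config shard across all three nodes.
import base64
import json
import urllib.request

NODES = {1: "192.168.50.151", 2: "192.168.50.152", 3: "192.168.50.153"}
token = base64.b64encode(b"admin:admin").decode()

states = {}
for member, ip in NODES.items():
    # The member digit in the shard name must match the node queried.
    mbean = (f"org.opendaylight.controller:Category=Shards,"
             f"name=member-{member}-shard-topology-config,"
             f"type=DistributedConfigDatastore")
    req = urllib.request.Request(f"http://{ip}:8181/jolokia/read/{mbean}")
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        states[ip] = json.load(resp)["value"]["RaftState"]

print(states)
leaders = [ip for ip, state in states.items() if state == "Leader"]
assert len(leaders) == 1, f"expected exactly one leader, got {leaders}"
print("shard leader:", leaders[0])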
