
Configuring compute node for Neutron

After we have configured the Neutron network node, we can go ahead and configure our compute nodes to use the Neutron networking service.

How to do it...

When the controller and Neutron network node are ready, we can configure the Nova-Compute node to use Neutron for networking. We will first configure Neutron's access to the message broker, and then configure Neutron to use the ML2 plugin with GRE tunneling segmentation.

Run the following commands on compute1!

  1. Disable reverse path filtering. Edit /etc/sysctl.conf to contain the following:
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
    and apply the new configuration:
    [root@compute1 ~]# sysctl -p
    
  2. Install the Neutron ML2 and Open vSwitch packages:
    [root@compute1 ~]# yum install -y openstack-neutron-ml2 openstack-neutron-openvswitch
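
    Before moving on, it can help to confirm that the kernel settings took effect and the packages actually landed. A quick check, run on the same compute node:

    ```shell
    # Verify reverse path filtering is disabled (both should print 0)
    sysctl net.ipv4.conf.all.rp_filter
    sysctl net.ipv4.conf.default.rp_filter

    # Confirm the Neutron ML2 and Open vSwitch packages are installed
    rpm -q openstack-neutron-ml2 openstack-neutron-openvswitch
    ```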
    

Configure message broker

Configure Neutron to use RabbitMQ message broker of the controller:

Tip

Remember to change 10.10.0.1 to your controller management IP.

[root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
[root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host 10.10.0.1
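
The openstack-config tool can also read values back, which is a quick way to confirm the broker settings were written before restarting any services:

```shell
# Read back the values just written; expect "rabbit" and the
# controller management IP (10.10.0.1 in this example)
openstack-config --get /etc/neutron/neutron.conf DEFAULT rpc_backend
openstack-config --get /etc/neutron/neutron.conf DEFAULT rabbit_host
```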

Configure Neutron service

  1. Configure Neutron to use Keystone as an authentication strategy:
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password password
    
  2. Now configure Neutron to use the ML2 plugin:
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
    [root@compute1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
    
  3. Configure the ML2 plugin to use GRE tunneling segmentation:
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.20.0.3
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    [root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
    
  4. Create bridges for the Neutron layer 2 and layer 3 agents. First, start and enable the Open vSwitch service:
    [root@compute1 ~]# systemctl start openvswitch
    [root@compute1 ~]# systemctl enable openvswitch
    
  5. After starting the Open vSwitch service, we can create the needed integration bridge:
    [root@compute1 ~]# ovs-vsctl add-br br-int
    
  6. Configure Nova to use Neutron Networking:
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
    [root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
    
  7. Create a symbolic link for ML2 Neutron plugin:
    [root@compute1 ~]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    
  8. Restart the Nova-Compute service:
    [root@compute1 ~]# systemctl restart openstack-nova-compute
    
  9. Start and enable Neutron Open vSwitch agent service:
    [root@compute1 ~]# systemctl start neutron-openvswitch-agent
    [root@compute1 ~]# systemctl enable neutron-openvswitch-agent
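
    With the agent running, a quick sanity check on the compute node confirms the pieces are in place:

    ```shell
    # The Open vSwitch agent should be active (running)
    systemctl status neutron-openvswitch-agent

    # br-int should be listed; once the agent connects to the
    # controller, a br-tun tunnel bridge is typically created as well
    ovs-vsctl show
    ```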
    

Creating Neutron networks

At this point, we should have the controller, Neutron network node, and compute1 configured for using Neutron networking. We can go ahead and create Neutron virtual networks needed for instances to be able to communicate with external public networks. We are going to create two layer 2 networks, one for the instances, and another to connect external networks.


Run the following commands on the controller node!

By default, networks are owned and managed by the Admin user under the Admin tenant, and can be shared for other tenants' use.

  1. Source Admin tenant credentials:
    [root@controller ~]# source keystonerc_admin
    
  2. Create an external shared network:
    [root@controller ~(keystone_admin)]# neutron net-create external-net --shared --router:external=True
    

    In this example, we allocate a range of IPs from our existing external physical network, 192.168.200.0/24, for instances to use when communicating with the Internet or with external hosts in the IT environment.

  3. Create a subnet in the newly created network:
    [root@controller ~(keystone_admin)]# neutron subnet-create external-net --name ext-subnet --allocation-pool start=192.168.200.100,end=192.168.200.200 --disable-dhcp --gateway 192.168.200.1 192.168.200.0/24
    

    The IP range ought to be routable on the external public network and must not overlap with existing configured networks. Chapter 7, Neutron Networking Service, discusses Neutron network planning further.

  4. Create a tenant network, which is an isolated network that instances use to communicate with each other:
    [root@controller ~(keystone_admin)]# neutron net-create tenant_net
    
    [root@controller ~(keystone_admin)]# neutron subnet-create tenant_net --name tenant_net_subnet --gateway 192.168.1.1 192.168.1.0/24
    
    [root@controller ~(keystone_admin)]# neutron router-create ext-router
    
    [root@controller ~(keystone_admin)]# neutron router-interface-add ext-router tenant_net_subnet
    
    [root@controller ~(keystone_admin)]# neutron router-gateway-set ext-router external-net
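
    Once the router is wired up, the result can be checked from the same admin shell with the standard neutron CLI listings:

    ```shell
    # Both networks and both subnets should appear
    neutron net-list
    neutron subnet-list

    # The router should show an external gateway on external-net
    neutron router-show ext-router
    ```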
    