Saturday, October 20, 2012

Installing OpenStack on VMware VMs

I do not have many physical machines and am not in a position to configure a public network, so I tried OpenStack on VMware virtual machines.
VMware Workstation 9, Ubuntu 12.04, OpenStack Essex.
The main references are the OpenStack Install and Deploy Manual from the official OpenStack docs and Installing OpenStack Essex (2012.1) on Ubuntu 12.04 from hastexo. During the installation, I made some modifications and cleared up some of the confusing parts.

Overview
There are two hosts: the first, controller, acts as the cloud controller for OpenStack; the second, compute, is an additional compute node. Each host has two NICs: eth0, NATed to the physical host with a static IP address in 192.168.1.0/24, serves as the public network interface for OpenStack; eth1, on a custom VMware network, 192.168.22.0/27, is used for the OpenStack VMs.
Use qemu for virtualization.


Pre-setup
Configure a static IP address for eth0 on both hosts.
Modify VMware's vmnetdhcp.conf by adding two host entries:
# added for openstack static ip
host controller {
    hardware ethernet $MAC_FOR_controller;
    fixed-address 192.168.1.3;
}
host compute {
    hardware ethernet $MAC_FOR_compute;
    fixed-address 192.168.1.4;
}
# end openstack
On the Windows host, restart the VMware DHCP service from the command line:
net stop vmnetdhcp
net start vmnetdhcp
Modify the configuration of each VM. In the *.vmx file in each VM's directory, make the following changes:
ethernet0.generatedAddress ==> rename to ethernet0.Address
ethernet0.addressType ==> change the value from "generated" to "static"
ethernet0.generatedAddressOffset ==> delete this line
Make sure to do this while the VMs are shut down.
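For example, after these edits the relevant lines in the controller's .vmx would look roughly like this (using the same MAC you put into vmnetdhcp.conf above):
ethernet0.addressType = "static"
ethernet0.Address = "$MAC_FOR_controller"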

Setup the controller node
1. Prepare your system.
Install NTP by issuing this command on the command line:
apt-get install ntp
Then, modify /etc/ntp.conf to contain the following lines:
server ntp.ubuntu.com iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Restart NTP by issuing the command
service ntp restart    
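You can check that NTP picked up the new configuration with:
ntpq -p
which should list ntp.ubuntu.com and the local clock (127.127.1.0) among the peers.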
Next, install tgt, which provides an iSCSI target (you need it if you want to try nova-volume):
apt-get install tgt
Then start it with
service tgt start
Given that we'll be running nova-compute on this machine as well, we'll also need the open-iscsi client. Install it with:
apt-get install open-iscsi open-iscsi-utils
We need to configure the network interfaces in /etc/network/interfaces to make sure they act as we expect:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
    address 192.168.22.1
    network 192.168.22.0
    netmask 255.255.255.0
    broadcast 192.168.22.255
Alternatively, eth1 can be left as a manual interface (iface eth1 inet manual) with an "up ifconfig eth1 up" line.
Install the bridge-utils and restart the network:
apt-get install bridge-utils
/etc/init.d/networking restart
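After the restart, confirm that both interfaces came up with the expected addresses:
ifconfig eth0
ifconfig eth1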
We'll also need RabbitMQ, an AMQP implementation that all OpenStack components use to communicate with each other, and memcached.
apt-get install rabbitmq-server memcached python-memcache
As we'll also want to run Qemu virtual machines on this very same host, we'll need Qemu and libvirt, which OpenStack uses to control virtual machines. Install these packages with:
apt-get install qemu libvirt-bin
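To make sure libvirt is running and can talk to QEMU, a quick sanity check is:
virsh -c qemu:///system list
which should print an empty (for now) list of domains.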

2. MySQL Database

We use MySQL; install and set it up as follows:
apt-get install -y mysql-server python-mysqldb
Be sure to provide a root password ($MYSQL_PASSWD) when installing mysql-server. Using an account without a password can cause a lot of confusing problems in OpenStack later.
When the package installation is done and you want other machines (read: OpenStack compute nodes) to be able to talk to that MySQL database too, open up /etc/mysql/my.cnf and change this line:
bind-address = 127.0.0.1
to:
bind-address = 0.0.0.0
Restart MySQL:
service mysql restart
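MySQL should now be listening on all interfaces, which you can confirm with:
netstat -ntlp | grep 3306
The output should show mysqld bound to 0.0.0.0:3306.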
Now create the user accounts in MySQL and grant them access to the corresponding databases:
$ mysql -uroot -p$MYSQL_PASSWD
mysql>CREATE DATABASE nova;
mysql>GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%' IDENTIFIED BY 'nova';
mysql>GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'localhost' IDENTIFIED BY 'nova';

mysql>CREATE DATABASE glance;
mysql>GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%' IDENTIFIED BY 'glance';
mysql>GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'localhost' IDENTIFIED BY 'glance';
mysql>CREATE DATABASE keystone;
mysql>GRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'%' IDENTIFIED BY 'keystone';
mysql>GRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'localhost' IDENTIFIED BY 'keystone';
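To verify the grants, try logging in over the network as one of the new users (replace 192.168.1.3 with your MySQL host's IP):
mysql -h 192.168.1.3 -u novadbadmin -pnova -e "SHOW DATABASES;"
The output should include the nova database.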
3. Keystone
Install the required packages:
apt-get install keystone python-keystone python-mysqldb python-keystoneclient
Then, open /etc/keystone/keystone.conf and make sure to set a value for admin_token. I used "openstack". 
Scroll down to the section starting with [sql]. Change it to match the database settings that we defined for Keystone in step 2:
[sql]
connection = mysql://keystonedbadmin:keystone@192.168.1.3/keystone
idle_timeout = 200
Be sure to replace 192.168.1.3 with the actual IP of your MySQL server. After you have made these changes, restart Keystone:
service keystone restart
Then make Keystone create its tables within the freshly created keystone database:
keystone-manage db_sync
The next step is to fill Keystone with actual data. You can use the script provided by hastexo, keystone_data.sh_.txt; it's courtesy of the Devstack project with some adaptations. Rename the file to keystone_data.sh. Be sure to set the admin password (the ADMIN_PASSWORD variable) and to set SERVICE_TOKEN to the value you specified for admin_token in keystone.conf earlier. Then just make the script executable and call it; if everything goes well, it should deliver a return code of 0.
ubuntu$ ./keystone_data.sh
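If you want a quick check that the data actually made it into Keystone, you can query it with the admin token (this assumes the Essex keystone client's --token/--endpoint style of authentication and the openstack token from keystone.conf):
keystone --token openstack --endpoint http://127.0.0.1:35357/v2.0/ tenant-list
This should list the tenants the script created.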
Last but not least, you'll also want to define endpoints in Keystone. Use the endpoints.sh._txt script for that; rename the script to endpoints.sh and make sure it's executable. It takes several parameters - a typical call would look like this:
ubuntu$ ./endpoints.sh -m 192.168.1.3 -u keystonedbadmin -D keystone -p keystone -K 192.168.1.3 -R RegionOne -E "http://localhost:35357/v2.0" -S 192.168.1.3 -T openstack
The values used have the following meanings:
-m 192.168.1.3 - the host where your MySQL database is running (as defined in step 2)
-u keystonedbadmin - the name of the keystone user that may access the mysql database (as defined in step 2)
-D keystone - the database that belongs to Keystone in MySQL (as defined in step 2)
-p keystone - the password of the keystone MySQL user to access the database (as defined in step 2)
-K 192.168.1.3 - the host where all your OpenStack services will initially run
-R RegionOne - the standard region for your endpoints; leave unchanged when following this howto.
-E "http://localhost:35357/v2.0" - the keystone endpoint for user authentication; leave unchanged when following this howto.
-S 192.168.1.3 - Should you wish to run Swift at a later point, put in the IP address of the swift-proxy server here.
-T openstack - the token you put into keystone.conf; use openstack when following this howto.
Replace the values above to match your setup (especially the values for the -K and -S parameters).
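If your keystone client supports it, you can list the endpoints that were just created to double-check them:
keystone --token openstack --endpoint http://127.0.0.1:35357/v2.0/ endpoint-list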

4. Glance
First, install the packages:
apt-get install glance glance-api glance-client glance-common glance-registry python-glance
When that is done, open /etc/glance/glance-api-paste.ini and scroll down to the end of the document. You'll see these three lines at its very end:
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
Fill in values here appropriate for your setup. If you used the keystone_data.sh script from this site, then your admin_tenant_name will be service and your admin_user will be glance. admin_password is the password you defined for ADMIN_PASSWORD in keystone_data.sh, so use the same value here, too. In this example, we'll use openstack.
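With the example values used in this guide, the last three lines of glance-api-paste.ini would then read:
admin_tenant_name = service
admin_user = glance
admin_password = openstack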

After this, open /etc/glance/glance-registry-paste.ini and scroll to that file's end, too. Adapt it in the same way you adapted /etc/glance/glance-api-paste.ini earlier.
Open /etc/glance/glance-registry.conf now and scroll down to the line starting with sql_connection. This is where we tell Glance to use MySQL; according to the MySQL configuration we created earlier, the sql_connection-line for this example would look like this:
sql_connection = mysql://glancedbadmin:glance@192.168.1.3/glance
It's important to use the machine's actual IP in this example and not 127.0.0.1! After this, scroll down until the end of the document and add these two lines:
[paste_deploy]
flavor = keystone
These two lines instruct the Glance Registry to use Keystone for authentication, which is what we want. Now we need to do the same for the Glance API. 
Open /etc/glance/glance-api.conf and add these two lines at the end of the document:
[paste_deploy]
flavor = keystone
Afterwards, you need to initially synchronize the Glance database by running these commands:
glance-manage version_control 0 
glance-manage db_sync
It's time to restart Glance now:
service glance-api restart
service glance-registry restart
Now, what's the best way to verify that Glance is working as expected? The glance command line utility can do that for us, but to work properly, it needs to know how we want to authenticate ourselves to Glance (and, subsequently, to Keystone). This is a very good moment to define four environment variables that we'll need continuously when working with OpenStack: OS_TENANT_NAME, OS_USERNAME, OS_PASSWORD and OS_AUTH_URL. Here's what they should look like in our example scenario:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL="http://localhost:5000/v2.0/"
The first three entries are identical to what you inserted into Glance's API configuration files earlier, and the entry for OS_AUTH_URL is mostly generic and should just work. After exporting these variables, you should be able to do:
glance index
and get no output at all in return (but the return code will be 0; check with echo $?). If that's the case, Glance is setup correctly and properly connects with Keystone. Now let's add our first image!
We'll be using an Ubuntu UEC image for this. Download one:
wget http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
Then add this image to Glance:
glance add name="Ubuntu 12.04 cloudimg amd64" is_public=true container_format=ovf disk_format=qcow2 < ubuntu-12.04-server-cloudimg-amd64-disk1.img
After this, if you do:
glance index
once more, you should be seeing the freshly added image.

5. Nova
Install all nova-related components:
apt-get install nova-api nova-cert nova-common nova-compute nova-compute-qemu nova-doc nova-network nova-objectstore nova-scheduler nova-volume nova-consoleauth novnc python-nova python-novaclient
Then, open /etc/nova/nova.conf and replace everything in there with these lines:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--allow_admin_api=true
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--s3_host=192.168.1.3
--ec2_host=192.168.1.3
--rabbit_host=192.168.1.3
--cc_host=192.168.1.3
--nova_url=http://192.168.1.3:8774/v1.1/
--routing_source_ip=192.168.1.3
--glance_api_servers=192.168.1.3:9292
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.22
--sql_connection=mysql://novadbadmin:nova@192.168.1.3/nova
--ec2_url=http://192.168.1.3:8773/services/Cloud
--keystone_ec2_url=http://192.168.1.3:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
--libvirt_type=qemu
--libvirt_use_virtio_for_bridges=true
--start_guests_on_host_boot=true
--resume_guests_state_on_host_boot=true
--vnc_enabled=true
--vncproxy_url=http://192.168.1.3:6080
--vnc_console_proxy_url=http://192.168.1.3:6080
# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=192.168.22.32/27
--floating_range=192.168.1.3/27
--network_size=32
--flat_network_dhcp_start=192.168.22.33
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose
--libvirt_use_virtio_for_bridges
--ec2_private_dns_show
--novnc_enabled=true
--novncproxy_base_url=http://192.168.1.3:6080/vnc_auto.html
--vncserver_proxyclient_address=192.168.1.3
--vncserver_listen=192.168.1.3
As you can see, many of the entries in this file are self-explanatory; the trickiest bit to get right is the network configuration part, which you can see at the end of the file. We're using Nova's FlatDHCP network mode; 192.168.22.32/27 is the fixed range from which our future VMs will get their IP addresses, starting with 192.168.22.33. Our flat interface is eth1 (nova-network will bridge this into a bridge named br100); our public interface is eth0. An additional floating range is defined at 192.168.1.3/27 (for those VMs that we want to have a 'public IP').
Attention: Every occurrence of 192.168.1.3 in this file refers to the IP of the machine I used while writing this guide. You need to replace it with the actual IP of the box you are running this on.
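One quick (if blunt) way to do that replacement, assuming you pasted the file above verbatim, is a sed one-liner; 192.168.1.10 here is just a made-up example IP:
sed -i 's/192\.168\.1\.3/192.168.1.10/g' /etc/nova/nova.conf   # substitute your controller's real IP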

After saving nova.conf, open /etc/nova/api-paste.ini in an editor and scroll down to the end of the file. Adapt it according to the changes you made to Glance's paste files in step 4. Use service as the tenant name and nova as the username.
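In other words, the end of /etc/nova/api-paste.ini should end up looking like this (openstack being the ADMIN_PASSWORD used throughout this guide):
admin_tenant_name = service
admin_user = nova
admin_password = openstack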

Then, restart all nova services to make the configuration file changes take effect:
for a in libvirt-bin nova-network nova-compute nova-cert nova-api nova-objectstore nova-scheduler nova-volume novnc nova-consoleauth; do service "$a" stop; done
for a in libvirt-bin nova-network nova-compute nova-cert nova-api nova-objectstore nova-scheduler nova-volume novnc nova-consoleauth; do service "$a" start; done
After this, some of the components might not be running correctly (they will not show up in ps -ea | grep nova). This is expected at this point and nothing to worry about. Do:
nova-manage db sync
Then perform another restart as above. Everything should now be running correctly.

The nova-manage db sync command above created all the tables Nova needs in MySQL. While we are at it, we can also create the network we want to use for our VMs in the Nova database. Do this:
nova-manage network create private --fixed_range_v4=192.168.22.32/27 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32 
Also, make sure that all files in /etc/nova belong to the nova user and the nova group:
chown -R nova:nova /etc/nova
Then, restart all nova-related services again; you should now see all of the nova-* processes when doing ps -ea | grep nova, and you should be able to use the numerous nova commands. For example,
nova list
should give you a list of all currently running VMs (none yet, so the list should be empty). And 
nova image-list
should show a list of the image you uploaded to Glance in the step before. If that's the case, Nova is working as expected and you can carry on with starting your first VM.
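At this point it is also worth checking that every service has registered with the scheduler:
nova-manage service list
A :-) in the State column means the service is alive; XXX means it has not checked in recently.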

6. Boot the first VM
Once Nova works as desired, starting your first own cloud VM is easy. As we're using an Ubuntu image for this example, which allows SSH-key based login only, we first need to store a public SSH key for our admin user in the OpenStack database. Upload the file containing your SSH public key onto the server (I'll assume the file is called id_rsa.pub) and do this:
nova keypair-add --pub_key id_rsa.pub key-controller
This will add the key to OpenStack Nova and store it under the name "key-controller". The only thing left to do after this is to fire up your VM. Find out what ID your Ubuntu image has; you can do this with:
nova image-list
When starting a VM, you also need to define the flavor it is supposed to use. Flavors are pre-defined hardware schemes in OpenStack with which you define what resources your newly created VM gets. OpenStack comes with five pre-defined flavors; you can get an overview of the existing flavors with
nova flavor-list
Flavors are referenced by their ID, not by their name. That's important for the actual command used to start your VM. The command's basic syntax is this:
nova boot --flavor ID --image Image-UUID --key_name key-name vm_name
Here's the command you would need to start that particular VM:
nova boot --flavor 1 --image 9bab7ce7-7523-4d37-831f-c18fbc5cb543 --key_name key-controller vm1
After hitting the Enter key, Nova will show you a summary with all important details concerning the new VM. After some seconds, issue the command:
nova show vm1
In the line with the private_network keyword, you'll see the IP address that Nova has assigned to this particular VM. As soon as the VM's status is ACTIVE, you should be able to log into it by issuing
ssh -i Private-Key ubuntu@IP
Of course Private-Key needs to be replaced with the path to your SSH private key, and IP needs to be replaced with the VM's actual IP. If you're using SSH agent forwarding, you can leave out the "-i" parameter altogether.
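For example, assuming the private key sits at ~/.ssh/id_rsa and nova show reported 192.168.22.34 (your VM's address will likely differ), the login would look like this:
ssh -i ~/.ssh/id_rsa ubuntu@192.168.22.34   # use the IP reported by nova show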

Other components are not covered here. For further information, please see the hastexo blog.

Setup the compute node
The compute node is relatively easy: just install some packages and apply a few configuration changes.

1. Preparations
Configure the network interfaces in the same way as on the controller (using this node's own addresses).
Install and configure NTP:
apt-get -y install ntp
Open /etc/ntp.conf and point it at the controller:
# Use Ubuntu's ntp server as a fallback.
#server ntp.ubuntu.com
server 192.168.1.3
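Then restart NTP so the change takes effect, just as on the controller:
service ntp restart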
Install packages:
apt-get -y install open-iscsi open-iscsi-utils bridge-utils
apt-get install -y nova-api  nova-common nova-compute nova-compute-qemu nova-network python-nova python-novaclient python-keystone python-keystoneclient 
Restart the network service:
/etc/init.d/networking restart

Open /etc/nova/api-paste.ini, scroll down to the end, and modify it as follows:
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
admin_tenant_name = service
admin_user = nova
admin_password = openstack
Open and modify /etc/nova/nova.conf. This is almost the same as the controller's configuration; just make the following modifications:
###### NOVNC CONSOLE
novnc_enabled=true
novncproxy_base_url=http://192.168.1.3:6080/vnc_auto.html
vncserver_proxyclient_address=192.168.1.4
vncserver_listen=192.168.1.4

Change the owner of the nova configuration directory:
chown -R nova:nova /etc/nova
Restart services:
for a in libvirt-bin nova-network  nova-compute nova-api ; do service "$a" stop; done
for a in libvirt-bin nova-network  nova-compute nova-api ; do service "$a" start; done
Everything is done! Use the following command to check the nova services:
nova-manage service list
You can see that the compute node has been added and that nova-compute and nova-network are running on it.
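The output should look roughly like the following (hostnames and timestamps are illustrative):
Binary           Host        Zone   Status     State  Updated_At
nova-scheduler   controller  nova   enabled    :-)    2012-10-20 10:00:00
nova-compute     controller  nova   enabled    :-)    2012-10-20 10:00:00
nova-network     controller  nova   enabled    :-)    2012-10-20 10:00:00
nova-compute     compute     nova   enabled    :-)    2012-10-20 10:00:01
nova-network     compute     nova   enabled    :-)    2012-10-20 10:00:01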




