OpenStack
Folsom, 2012.2
Nov 9, 2012
Table of Contents

1. Getting Started with OpenStack
    Why Cloud?
    What is OpenStack?
    Components of OpenStack
    Conceptual Architecture
    Logical Architecture
        Dashboard
        Compute
        Object Store
        Image Store
        Identity
        Network
        Block Storage
2. Introduction to OpenStack Compute
    Hypervisors
    Users and Tenants (Projects)
    Images and Instances
    System Architecture
    Block Storage and OpenStack Compute
3. Installing OpenStack Compute
    Compute and Image System Requirements
    Example Installation Architectures
        Co-locating services
    Service Architecture
    Installing OpenStack Compute on Debian
    Installing on Fedora or Red Hat Enterprise Linux 6
    Installing on openSUSE or SUSE Linux Enterprise Server
        SUSE Linux Enterprise Server
        openSUSE
    Installing on Ubuntu
        ISO Installation
        Scripted Installation
        Manual Installation on Ubuntu
    Installing on Citrix XenServer
4. Configuring OpenStack Compute
    Post-Installation Configuration for OpenStack Compute
        Setting Configuration Options in the nova.conf File
        Setting Up OpenStack Compute Environment on the Compute Node
        Creating Credentials
        Creating Certificates
        Enabling Access to VMs on the Compute Node
        Configuring Multiple Compute Nodes
        Determining the Version of Compute
        Diagnose your compute nodes
    General Compute Configuration Overview
    Example nova.conf Configuration Files
    Configuring Logging
    Configuring Hypervisors
    Configuring Authentication and Authorization
    Configuring Compute to use IPv6 Addresses
    Configuring Image Service and Storage for Compute
    Configuring Migrations
        KVM-Libvirt
        XenServer
    Configuring Resize
        XenServer
    Installing MooseFS as shared storage for the instances directory
        Installing the MooseFS metadata and metalogger servers
        Installing the MooseFS chunk and client services
        Access to your cluster storage
    Configuring Database Connections
    Configuring the Compute Messaging System
        Configuration for RabbitMQ
        Configuration for Qpid
        Common Configuration for Messaging
    Configuring the Compute API
    Configuring the EC2 API
    Configuring Quotas
5. Configuration: nova.conf
    File format for nova.conf
    List of configuration options
6. Identity Management
    Basic Concepts
        User management
        Service management
    Configuration File
        Sample Configuration Files
    Running
    Initializing Keystone
    Adding Users, Tenants, and Roles with python-keystoneclient
        Token Auth Method
        Password Auth Method
        Example usage
        Tenants
        Users
        Roles
        Services
    Configuring Services to work with Keystone
        Setting up credentials
        Setting up services
        Setting Up Middleware
7. Image Management
    Adding images
    Getting virtual machine images
        CirrOS (test) images
        Ubuntu images
        Fedora images
        OpenSUSE and SLES 11 images
        Rackspace Cloud Builders (multiple distros) images
    Tool support for creating images
        Oz (KVM)
        VMBuilder (KVM, Xen)
        BoxGrinder (KVM, Xen, VMWare)
        VeeWee (KVM)
    Creating raw or QCOW2 images
    Booting a test image
    Tearing down (deleting) Instances
    Pausing and Suspending Instances
        Pausing instance
        Suspending instance
    Select a specific host to boot instances on
    Select a specific zone to boot instances on
    Creating custom images
        Creating a Linux Image - Ubuntu & Fedora
    Creating a Windows Image
    Creating images from running instances with KVM and Xen
    Replicating images across multiple data centers
8. Instance Management
    Interfaces to managing instances
    Instance building blocks
    Creating instances
        Create Your Server with the nova Client
        Launch from a Volume
    Controlling where instances run
    Instance specific data
        Associating ssh keys with instances
        Insert metadata during launch
        Providing User Data to Instances
        Injecting Files into Instances
    Configuring instances at boot time
    Config drive
    Managing instance networking
        Manage Floating IP Addresses
        Manage Security Groups
    Manage Volumes
    Accessing running instances
    Stop and Start an Instance
        Pause and Unpause
        Suspend and Resume
    Change Server Configuration
        Commands Used
        Increase or Decrease Server Size
    Terminate an Instance
9. Hypervisors
    Selecting a Hypervisor
    Hypervisor Configuration Basics
    KVM
        Checking for hardware virtualization support
        Enabling KVM
        Specifying the CPU model of KVM guests
        Troubleshooting
    QEMU
        Tips and fixes for QEMU on RHEL
    Xen, XenAPI, XenServer and XCP
        Xen terminology
        XenAPI deployment architecture
        XenAPI pools
        Installing XenServer and XCP
        Xen Boot from ISO
        Further reading
    LXC (Linux containers)
    VMware ESX/ESXi Server Support
        Introduction
        Prerequisites
        Configure Tomcat to serve WSDL files
        VMWare configuration options
    PowerVM
        Introduction
        Configuration
    Hyper-V Virtualization Platform
        Hyper-V Configuration
        Configure NTP
        Configuring Hyper-V Virtual Switching
        Enable iSCSI Initiator Service
        Configuring Shared Nothing Live Migration
        Python Requirements
        Installing Nova-compute
        Configuring Nova.conf
        Preparing Images for use with Hyper-V
        Running Compute with Hyper-V
        Troubleshooting Hyper-V Configuration
10. Networking with nova-network
    Networking Options
    DHCP server: dnsmasq
    Metadata service
    Configuring Networking on the Compute Node
        Configuring Flat Networking
        Configuring Flat DHCP Networking
        Outbound Traffic Flow with Any Flat Networking
        Configuring VLAN Networking
        Cloudpipe - Per Project VPNs
    Enabling Ping and SSH on VMs
    Configuring Public (Floating) IP Addresses
        Private and Public IP Addresses
        Enabling IP forwarding
        Creating a List of Available Floating IP Addresses
        Adding a Floating IP to an Instance
        Automatically adding floating IPs
    Removing a Network from a Project
    Using multiple interfaces for your instances (multinic)
        Using the multinic feature
    Existing High Availability Options for Networking
    Troubleshooting Networking
11. Volumes
    Cinder Versus Nova-Volumes
    Managing Volumes
        Install nova-volume on the cloud controller
        Configuring nova-volume on the compute nodes
        Troubleshoot your nova-volume installation
        Troubleshoot your cinder installation
        Backup your nova-volume disks
    Volume drivers
        Ceph RADOS block device (RBD)
        IBM Storwize family and SVC volume driver
        Nexenta
        Using the XenAPI Storage Manager Volume Driver
        HP / LeftHand SAN
    Boot From Volume
12. Scheduling
    Filter Scheduler
    Filters
        AggregateInstanceExtraSpecsFilter
        AllHostsFilter
        AvailabilityZoneFilter
        ComputeCapabilitiesFilter
        ComputeFilter
        CoreFilter
        DifferentHostFilter
        ImagePropertiesFilter
        IsolatedHostsFilter
        JsonFilter
        RamFilter
        RetryFilter
        SameHostFilter
        SimpleCIDRAffinityFilter
    Costs and Weights
        nova.scheduler.least_cost.compute_fill_first_cost_fn
        nova.scheduler.least_cost.retry_host_cost_fn
        nova.scheduler.least_cost.noop_cost_fn
    Other Schedulers
        Chance Scheduler
        Multi Scheduler
        Simple Scheduler
    Host aggregates
13. System Administration
    Understanding the Compute Service Architecture
    Managing Compute Users
    Managing the Cloud
    Usage statistics
        Host usage statistics
        Instance usage statistics
    Using Migration
    Recovering from a failed compute node
    Recovering from a UID/GID mismatch
    Nova Disaster Recovery Process
14. OpenStack Interfaces
    About the Dashboard
        System Requirements for the Dashboard
        Installing the OpenStack Dashboard
        Configuring the Dashboard
        Validating the Dashboard Install
        How To Custom Brand The OpenStack Dashboard (Horizon)
        Launching Instances using Dashboard
    Overview of VNC Proxy
        About nova-consoleauth
        Typical Deployment
        Frequently asked questions about VNC access to VMs
15. Security Hardening
    Trusted Compute Pools
16. OpenStack Compute Automated Installations
    Deployment Tool for OpenStack using Puppet (dodai-deploy)
17. OpenStack Compute Tutorials
    Running Your First Elastic Web Application on the Cloud
        Part I: Setting Up as a TryStack User
        Part II: Starting Virtual Machines
        Diagnose your compute node
        Part III: Installing the Needed Software for the Web-Scale Scenario
        Running a Blog in the Cloud
18. Support
    Community Support
19. Troubleshooting OpenStack Compute
    Log files for OpenStack Compute
    Common Errors and Fixes for OpenStack Compute
    Manually reset the state of an instance
List of Figures

2.1. Base image state with no running instances
2.2. Instance creation from image and run time state
2.3. End state of image and volume after instance exits
4.1. KVM, FlatDHCP, MySQL, Glance, LDAP, and optionally sheepdog
4.2. KVM, Flat, MySQL, and Glance, OpenStack or EC2 API
4.3. KVM, Flat, MySQL, and Glance, OpenStack or EC2 API
4.4. MooseFS deployment for OpenStack
10.1. Flat network, all-in-one server installation
10.2. Flat network, single interface, multiple servers
10.3. Flat network, multiple interfaces, multiple servers
10.4. Flat DHCP network, multiple interfaces, multiple servers with libvirt driver
10.5. Flat DHCP network, multiple interfaces, multiple servers, network HA with XenAPI driver
10.6. Single adaptor hosts, first route
10.7. Single adaptor hosts, second route
10.8. VLAN network, multiple interfaces, multiple servers, network HA with XenAPI driver
10.9. Configuring Viscosity
10.10. multinic flat manager
10.11. multinic flatdhcp manager
10.12. multinic VLAN manager
10.13. High Availability Networking Option
11.1. Ceph-architecture.png
12.1. Filtering
12.2. Computing weighted costs
14.1. NoVNC Process
List of Tables

3.1. Hardware Recommendations
4.1. Description of nova.conf log file configuration options
4.2. Description of nova.conf file configuration options for hypervisors
4.3. Description of nova.conf configuration options for authentication
4.4. Description of nova.conf file configuration options for credentials (crypto)
4.5. Description of nova.conf file configuration options for LDAP
4.6. Description of nova.conf configuration options for IPv6
4.7. Description of nova.conf file configuration options for S3 access to image storage
4.8. Description of nova.conf file configuration options for live migration
4.9. Description of nova.conf configuration options for databases
4.10. Description of nova.conf configuration options for Remote Procedure Calls and RabbitMQ Messaging
4.11. Description of nova.conf configuration options for Tuning RabbitMQ Messaging
4.12. Remaining nova.conf configuration options for Qpid support
4.13. Description of nova.conf configuration options for Customizing Exchange or Topic Names
4.14. Description of nova.conf API related configuration options
4.15. Default API Rate Limits
4.16. Description of nova.conf file configuration options for EC2 API
5.1. Description of common nova.conf configuration options for the Compute API, RabbitMQ, EC2 API, S3 API, instance types
5.2. Description of nova.conf configuration options for databases
5.3. Description of nova.conf configuration options for IPv6
5.4. Description of nova.conf log file configuration options
5.5. Description of nova.conf file configuration options for nova- services
5.6. Description of nova.conf file configuration options for credentials (crypto)
5.7. Description of nova.conf file configuration options for policies (policy.json)
5.8. Description of nova.conf file configuration options for quotas
5.9. Description of nova.conf file configuration options for testing purposes
5.10. Description of nova.conf configuration options for authentication
5.11. Description of nova.conf file configuration options for LDAP
5.12. Description of nova.conf file configuration options for roles and authentication
5.13. Description of nova.conf file configuration options for EC2 API
5.14. Description of nova.conf file configuration options for VNC access to guest instances
5.15. Description of nova.conf file configuration options for networking options
5.16. Description of nova.conf file configuration options for live migration
5.17. Description of nova.conf file configuration options for compute nodes
5.18. Description of nova.conf file configuration options for bare metal deployment
5.19. Description of nova.conf file configuration options for hypervisors
5.20. Description of nova.conf file configuration options for console access to VMs on VMWare VMRC or XenAPI
5.21. Description of nova.conf file configuration options for S3 access to image storage
5.22. Description of nova.conf file configuration options for schedulers that use algorithms to assign VM launch on particular compute hosts
5.23. Description of nova.conf file configuration options for config drive features
5.24. Description of nova.conf file configuration options for volumes attached to VMs
6.1. Description of keystone.conf file configuration options for LDAP
9.1. Description of nova.conf file configuration options for hypervisors
11.1. List of configuration flags for Storwize storage and SVC driver
12.1. Description of Simple Scheduler configuration options
16.1. OSes supported
1. Getting Started with OpenStack
Why Cloud?
In data centers today, many computers are underutilized in both computing power and networking bandwidth. For example, a project may need a large amount of computing capacity to complete a computation, but no longer need that capacity once the computation is done. You want cloud computing when you want a service that is available on demand, with the flexibility to bring it up or down through automation or with little intervention. The phrase "cloud computing" is often represented with a diagram that contains a cloud-like shape indicating a layer where responsibility for service goes from user to provider. The cloud in these diagrams contains the services that afford computing power harnessed to get work done. Much like the electrical power we receive each day, cloud computing provides subscribers or users with access to a shared collection of computing resources: networks for transfer, servers for storage, and applications or services for completing tasks.

These are the compelling features of a cloud:

• On-demand self-service: Users can provision servers and networks with little human intervention.
• Network access: Any computing capabilities are available over the network. Many different devices are allowed access through standardized mechanisms.
• Resource pooling: Multiple users can access clouds that serve other consumers according to demand.
• Elasticity: Provisioning is rapid and scales out or in based on need.
• Metered or measured service: Just like utilities that are paid for by the hour, clouds optimize resource use and control it for the level of service or type of servers, such as storage or processing.

Cloud computing offers different service models depending on the capabilities a consumer may require:

• SaaS: Software as a Service. Provides the consumer the ability to use software in a cloud environment, such as web-based email.
• PaaS: Platform as a Service. Provides the consumer the ability to deploy applications through a programming language or tools supported by the cloud platform provider. An example of platform as a service is an Eclipse/Java programming platform provided with no downloads required.
• IaaS: Infrastructure as a Service. Provides infrastructure such as computer instances, network connections, and storage so that people can run any software or operating system.
When you hear terms such as public cloud or private cloud, these refer to the deployment model for the cloud. A private cloud operates for a single organization, but can be managed on-premise or off-premise. A public cloud has an infrastructure that is available to the general public or a large industry group and is likely owned by a cloud services company. The NIST also defines community cloud as shared by several organizations supporting a specific community with shared concerns.

Clouds can also be described as hybrid. A hybrid cloud can be a deployment model, as a composition of both public and private clouds, or a hybrid model for cloud computing may involve both virtual and physical servers.

What have people done with cloud computing? Cloud computing can help with large-scale computing needs or can drive consolidation efforts by virtualizing servers to make more use of existing hardware and potentially release old hardware from service. People also use cloud computing for collaboration because of its high availability through networked computers. Productivity suites for word processing, number crunching, email communications, and more are also available through cloud computing. Cloud computing also gives users access to additional storage, avoiding the need for extra hard drives on each user's desktop and enabling access to huge data storage capacity online in the cloud.

For a more detailed discussion of cloud computing's essential characteristics and its models of service and deployment, see http://www.nist.gov/itl/cloud/, published by the US National Institute of Standards and Technology.
What is OpenStack?
OpenStack is on a mission: to provide scalable, elastic cloud computing for both public and private clouds, large and small. At the heart of that mission is a pair of basic requirements: clouds must be simple to implement and massively scalable.

If you are new to OpenStack, you will undoubtedly have questions about installation, deployment, and usage. It can seem overwhelming at first, but don't fear: there are places to get information to guide you and to help resolve any issues you run into during the on-ramp process. Because the project is so new and constantly changing, be aware of the revision time for all information. If you are reading a document that is a few months old and you feel that it isn't entirely accurate, then please let us know through the mailing list at https://launchpad.net/~openstack or by filing a bug at https://bugs.launchpad.net/openstack-manuals/+filebug so it can be updated or removed.
Components of OpenStack
There are currently seven core components of OpenStack: Compute, Object Storage, Identity, Dashboard, Block Storage, Network and Image Service. Let's look at each in turn.

Object Store (codenamed "Swift") provides object storage. It allows you to store or retrieve files (but not mount directories like a fileserver). Several companies provide commercial storage services based on Swift. These include KT, Rackspace (from which Swift originated) and Internap. Swift is also used internally at many large companies to store their data.
Image (codenamed "Glance") provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute. While this service is technically optional, any cloud of significant size will require it.

Compute (codenamed "Nova") provides virtual servers on demand. Rackspace and HP provide commercial compute services built on Nova, and it is used internally at companies like Mercado Libre and NASA (where it originated).

Dashboard (codenamed "Horizon") provides a modular web-based user interface for all the OpenStack services. With this web GUI, you can perform most operations on your cloud, such as launching an instance, assigning IP addresses and setting access controls.

Identity (codenamed "Keystone") provides authentication and authorization for all the OpenStack services. It also provides a service catalog of the services within a particular OpenStack cloud.

Network (codenamed "Quantum") provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies.

Block Storage (codenamed "Cinder") provides persistent block storage to guest VMs. This project was born from code originally in Nova (the nova-volume service described below). In the Folsom release, both the nova-volume service and the separate volume service are available.

In addition to these core projects, there are also a number of "incubation" projects that are being considered for future inclusion in the OpenStack core.
Conceptual Architecture
The OpenStack project as a whole is designed to "deliver a massively scalable cloud operating system." To achieve this, each of the constituent services is designed to work together to provide a complete Infrastructure as a Service (IaaS). This integration is facilitated through public application programming interfaces (APIs) that each service offers (and in turn can consume). While these APIs allow each of the services to use another service, they also allow an implementer to switch out any service as long as the API is maintained. These are (mostly) the same APIs that are available to end users of the cloud. Conceptually, you can picture the relationships between the services as follows:
Dashboard ("Horizon") provides a web front end to the other OpenStack services Compute ("Nova") stores and retrieves virtual disks ("images") and associated metadata in Image ("Glance") Network ("Quantum") provides virtual networking for Compute. Block Storage ("Cinder") provides storage volumes for Compute. Image ("Glance") can store the actual virtual disk files in the Object Store("Swift") All the services authenticate with Identity ("Keystone") This is a stylized and simplified view of the architecture, assuming that the implementer is using all of the services together in the most common configuration. It also only shows the "operator" side of the cloud -- it does not picture how consumers of the cloud may actually use it. For example, many users will access object storage heavily (and directly).
Logical Architecture
As you can imagine, the logical architecture is far more complicated than the conceptual architecture shown above. As with any service-oriented architecture, diagrams quickly become "messy" when trying to illustrate all the possible combinations of service communications. The diagram below illustrates the most common architecture of an OpenStack-based cloud. However, as OpenStack supports a wide variety of technologies, it does not represent the only possible architecture.
This picture is consistent with the conceptual architecture above in that:

• End users can interact through a common web interface (Horizon) or directly with each service through its API.
• All services authenticate through a common source (facilitated through Keystone).
• Individual services interact with each other through their public APIs (except where privileged administrator commands are necessary).

In the sections below, we'll delve into the architecture for each of the services.
Dashboard
Horizon is a modular Django web application that provides an end user and administrator interface to OpenStack services.
As with most web applications, the architecture is fairly simple:

• Horizon is usually deployed via mod_wsgi in Apache. The code itself is separated into a reusable Python module containing most of the logic (interactions with various OpenStack APIs) and a presentation layer (making it easy to customize for different sites).
• A database (configurable as to which one). Because Horizon relies mostly on the other services for data, it stores very little data of its own.

From a network architecture point of view, this service needs to be accessible to customers as well as able to talk to each service's public APIs. If you wish to use the administrator functionality (i.e. for other services), it also needs connectivity to their Admin API endpoints (which should not be customer accessible).
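To make this concrete, the following is a minimal sketch of the kind of Django settings Horizon reads (typically in a local_settings.py file). The host address, Keystone URL and memcached location are placeholder assumptions, not values prescribed by this guide; the dashboard chapter later in this manual covers configuration in detail.

    # Hypothetical excerpt from Horizon's local_settings.py (a Django settings module).
    # OPENSTACK_HOST is a placeholder for wherever Keystone runs in your cloud.
    OPENSTACK_HOST = "127.0.0.1"
    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

    # Horizon keeps almost no state of its own; a cache backend for session data
    # is usually enough (memcached is shown here only as an example).
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        }
    }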
Compute
Nova is the most complicated and distributed component of OpenStack. A large number of processes cooperate to turn end user API requests into running virtual machines. Below is a list of these processes and their functions:

• nova-api accepts and responds to end user compute API calls. It supports the OpenStack Compute API, Amazon's EC2 API and a special Admin API (for privileged users to perform administrative actions). It also initiates most of the orchestration activities (such as running an instance) and enforces some policy (mostly quota checks).
• The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, and so on). The process by which it does so is fairly complex, but the basics are simple: accept actions from the queue and then perform a series of system commands (like launching a KVM instance) to carry them out while updating state in the database.
• nova-volume manages the creation, attaching and detaching of persistent volumes to compute instances (similar functionality to Amazon's Elastic Block Storage). It can use volumes from a variety of providers such as iSCSI or RADOS Block Device in Ceph. A new OpenStack project, Cinder, will eventually replace nova-volume functionality. In the Folsom release, nova-volume and the Block Storage service have similar functionality.
• The nova-network worker daemon is very similar to nova-compute and nova-volume. It accepts networking tasks from the queue and then performs tasks to manipulate the network (such as setting up bridging interfaces or changing iptables rules). This functionality is being migrated to Quantum, a separate OpenStack service. In the Folsom release, much of the functionality is duplicated between nova-network and Quantum.
• The nova-schedule process is conceptually the simplest piece of code in OpenStack Nova: it takes a virtual machine instance request from the queue and determines where it should run (specifically, which compute server host it should run on).
• The queue provides a central hub for passing messages between daemons. This is usually implemented with RabbitMQ today, but it could be any AMQP message queue (such as Apache Qpid). New to the Folsom release is support for ZeroMQ.
• The SQL database stores most of the build-time and run-time state for a cloud infrastructure. This includes the instance types that are available for use, instances in use, networks available and projects. Theoretically, OpenStack Nova can support any database supported by SQLAlchemy, but the only databases currently being widely used are sqlite3 (only appropriate for test and development work), MySQL and PostgreSQL.
• Nova also provides console services to allow end users to access their virtual instance's console through a proxy. This involves several daemons (nova-console, nova-novncproxy and nova-consoleauth).

Nova interacts with many other OpenStack services: Keystone for authentication, Glance for images and Horizon for the web interface. The Glance interactions are central: the API process can upload and query Glance, while nova-compute downloads images for use in launching instances.
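As an illustration of the API-driven workflow described above, here is a minimal sketch using the python-novaclient library to boot an instance. The credentials, auth URL, image name and flavor name are placeholder assumptions; instance management is covered in depth later in this guide.

    # A minimal sketch with python-novaclient (OpenStack Compute API).
    # Username, password, tenant, auth URL, image and flavor names are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client("demo", "secretword", "demo-tenant",
                         "http://127.0.0.1:5000/v2.0")

    image = nova.images.find(name="cirros-0.3.0-x86_64")   # assumes this image is registered
    flavor = nova.flavors.find(name="m1.tiny")
    server = nova.servers.create(name="test-vm", image=image, flavor=flavor)

    # nova-api hands the request to the queue, nova-schedule picks a host, and
    # nova-compute builds the VM, so the status moves from BUILD to ACTIVE.
    print(nova.servers.list())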
Object Store
The Swift architecture is very distributed to prevent any single point of failure and to scale horizontally. It includes the following components:

• Proxy server (swift-proxy-server) accepts incoming requests via the OpenStack Object API or just raw HTTP. It accepts files to upload, modifications to metadata, and container creation. In addition, it will also serve files or container listings to web browsers. The proxy server may utilize an optional cache (usually deployed with memcache) to improve performance.
• Account servers manage accounts defined with the object storage service.
• Container servers manage a mapping of containers (i.e. folders) within the object store service.
• Object servers manage actual objects (i.e. files) on the storage nodes.

There are also a number of periodic processes that perform housekeeping tasks on the large data store. The most important of these are the replication services, which ensure consistency and availability throughout the cluster. Other periodic processes include auditors, updaters and reapers.

Authentication is handled through configurable WSGI middleware (which will usually be Keystone).
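The sketch below shows what talking to the proxy server looks like from a client, using the python-swiftclient library. The auth URL, credentials, container and object names are placeholder assumptions.

    # A minimal sketch with python-swiftclient against the Swift proxy server.
    # Credentials, auth URL and object names are placeholder assumptions.
    import swiftclient

    conn = swiftclient.Connection(authurl="http://127.0.0.1:5000/v2.0",
                                  user="demo", key="secretword",
                                  tenant_name="demo-tenant", auth_version="2.0")

    conn.put_container("backups")                      # a container is a folder-like grouping
    conn.put_object("backups", "notes.txt", contents="hello from swift")
    headers, body = conn.get_object("backups", "notes.txt")
    print(body)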
Image Store
The Glance architecture has stayed relatively stable since the Cactus release. The biggest architectural change has been the addition of authentication, which was added in the Diablo release. Just as a quick reminder, Glance has four main parts to it:

• glance-api accepts Image API calls for image discovery, image retrieval and image storage.
• glance-registry stores, processes and retrieves metadata about images (size, type, etc.).
• A database to store the image metadata. Like Nova, you can choose your database depending on your preference (but most people use MySQL or SQLite).
• A storage repository for the actual image files. In the diagram above, Swift is shown as the image repository, but this is configurable. In addition to Swift, Glance supports normal filesystems, RADOS block devices, Amazon S3 and HTTP. Be aware that some of these choices are limited to read-only usage.

There are also a number of periodic processes that run on Glance to support caching. The most important of these are the replication services, which ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters and reapers.

As you can see from the diagram in the Conceptual Architecture section, Glance serves a central role to the overall IaaS picture. It accepts API requests for images (or image metadata) from end users or Nova components and can store its disk files in the object storage service, Swift.
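For a feel of the Image API that glance-api exposes, here is a minimal sketch using the python-glanceclient library. The endpoint and token are placeholders; in practice the token would come from Keystone, as shown in the Identity section below.

    # A minimal sketch with python-glanceclient talking to glance-api.
    # The endpoint and token are placeholders; a real token is issued by Keystone.
    from glanceclient import Client

    glance = Client('1', 'http://127.0.0.1:9292', token='AUTH_TOKEN')

    # glance-api answers the discovery call; glance-registry supplies the metadata.
    for image in glance.images.list():
        print("%s %s %s" % (image.name, image.disk_format, image.size))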
Identity
Keystone provides a single point of integration for OpenStack policy, catalog, token and authentication. Keystone handles API requests as well as providing configurable catalog, policy, token and identity services. Each Keystone function has a pluggable backend, which allows different ways to use the particular service. Most functions support standard backends like LDAP or SQL, as well as Key Value Stores (KVS). Most people will use this as a point of customization for their current authentication services.
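For example, the keystone command-line client exercises the catalog and identity services described above. A minimal sketch, assuming admin credentials are already exported; the tenant name is a placeholder:

$ keystone tenant-create --name demo
$ keystone user-list
$ keystone endpoint-list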
Network
Quantum provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Like many of the OpenStack services, Quantum is highly configurable due to its plug-in architecture. These plug-ins accommodate different networking equipment and software. As such, the architecture and deployment can vary dramatically. In the above architecture, a simple Linux networking plug-in is shown.

quantum-server accepts API requests and then routes them to the appropriate quantum plugin for action. Quantum plugins and agents perform the actual actions such as plugging and unplugging ports, creating networks or subnets and IP addressing. These plugins and agents differ depending on the vendor and technologies used in the particular cloud. Quantum ships with plugins and agents for: Cisco virtual and physical switches, Nicira NVP product, NEC OpenFlow products, Open vSwitch, Linux bridging and the Ryu Network Operating System. The common agents are L3 (layer 3), DHCP (dynamic host IP addressing) and the specific plug-in agent.

Most Quantum installations will also make use of a messaging queue to route information between the quantum-server and the various agents, as well as a database to store networking state for particular plugins. Quantum will interact mainly with Nova, where it will provide networks and connectivity for its instances.
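A minimal sketch of the workflow described above, using the quantum client with placeholder network names and addressing (the plug-in in use determines what actually happens behind these calls):

$ quantum net-create private
$ quantum subnet-create private 10.0.0.0/24
$ quantum net-list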
Block Storage
Cinder separates out the persistent block storage functionality that was previously part of OpenStack Compute (in the form of nova-volume) into its own service. The OpenStack Block Storage API allows for manipulation of volumes, volume types (similar to compute flavors) and volume snapshots.

cinder-api accepts API requests and routes them to cinder-volume for action. cinder-volume acts upon the requests by reading or writing to the Cinder database to maintain state, interacting with other processes (like cinder-scheduler) through a message queue, and acting directly upon block storage providing hardware or software. It can interact with a variety of storage providers through a driver architecture. Currently, there are drivers for IBM, SolidFire, NetApp, Nexenta, Zadara, Linux iSCSI and other storage providers.

Much like nova-scheduler, the cinder-scheduler daemon picks the optimal block storage provider node to create the volume on. Cinder deployments will also make use of a messaging queue to route information between the cinder processes, as well as a database to store volume state. Like Quantum, Cinder will mainly interact with Nova, providing volumes for its instances.
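As a hedged example of this API, a volume can be created and handed to an instance roughly as follows; the size, instance and volume identifiers, and device name are placeholders:

$ cinder create 10
$ cinder list
$ nova volume-attach <instance-id> <volume-id> /dev/vdc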
Hypervisors
OpenStack Compute requires a hypervisor, and Compute controls the hypervisors through an API server. The process for selecting a hypervisor usually means prioritizing and making decisions based on budget and resource constraints as well as the inevitable list of supported features and required technical specifications. The majority of development is done with the KVM and Xen-based hypervisors. Refer to http://wiki.openstack.org/HypervisorSupportMatrix for a detailed list of features and support across the hypervisors. With OpenStack Compute, you can orchestrate clouds using multiple hypervisors in different zones. The types of virtualization standards that may be used with Compute include:

KVM - Kernel-based Virtual Machine
LXC - Linux Containers (through libvirt)
QEMU - Quick EMUlator
UML - User Mode Linux
VMWare ESX/ESXi 4.1 update 1
Xen - Xen, Citrix XenServer and Xen Cloud Platform (XCP)
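When deciding between KVM and QEMU, one quick check is whether the compute node's CPU exposes hardware virtualization extensions; a result of 0 from the command below means KVM is not usable on that node. This is only a sketch, and the libvirt_type value shown is one possible choice:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

and in nova.conf, for example:

libvirt_type=kvm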
Note
Earlier versions of OpenStack used the term "project" instead of "tenant". Because of this legacy terminology, some command-line tools use --project_id when a tenant ID is expected. While the original EC2 API supports users, OpenStack Compute adds the concept of tenants. Tenants are isolated resource containers forming the principal organizational structure within the Compute service. They consist of a separate VLAN, volumes, instances, images, keys, and users. A user can specify which tenant he or she wishes to be known as by appending :project_id to his or her access key. If no tenant is specified in the API request, Compute attempts to use a tenant with the same ID as the user. For tenants, quota controls are available to limit the:

Number of volumes which may be created
Total size of all volumes within a project as measured in GB
Number of instances which may be launched
Number of processor cores which may be allocated
Publicly accessible IP addresses
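In practice, users of the native command-line clients usually select the tenant they act as through environment variables rather than by decorating the access key. A minimal sketch, assuming Keystone authentication; the endpoint, tenant, user and password values are placeholders:

$ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0
$ export OS_TENANT_NAME=project01
$ export OS_USERNAME=myuser
$ export OS_PASSWORD=mypassword
$ nova list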
service which provide persistent block storage as opposed to the ephemeral storage provided by the instance flavor. Here is an example of the life cycle of a typical virtual system within an OpenStack cloud to illustrate these concepts.
Initial State
The following diagram shows the system state prior to launching an instance. The image store fronted by the image service, Glance, has some number of predefined images. In the cloud there is an available compute node with available vCPU, memory and local disk resources. Plus there are a number of predefined volumes in the nova-volume service.
Launching an instance
To launch an instance the user selects an image, a flavor and optionally other attributes. In this case the selected flavor provides a root volume (as all flavors do) labeled vda in the diagram and additional ephemeral storage labeled vdb in the diagram. The user has also opted to map a volume from the nova-volume store to the third virtual disk, vdc, on this instance.
The OpenStack system copies the base image from the image store to the local disk, which is used as the first disk of the instance (vda). Using small images results in faster start up of your instances, as less data needs to be copied across the network. The system also
creates a new empty disk image to present as the second disk (vdb). The compute node attaches to the requested nova-volume using iSCSI and maps this to the third disk (vdc) as requested. The vCPU and memory resources are provisioned and the instance is booted from the first drive. The instance runs and changes data on the disks indicated in red in the diagram. There are many possible variations in the details of the scenario, particularly in terms of what the backing storage is and the network protocols used to attach and move storage. One variant worth mentioning here is that the ephemeral storage used for volumes vda and vdb in this example may be backed by network storage rather than local disk. The details are left for later chapters.
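The scenario above corresponds roughly to the following client commands; the image, flavor and volume identifiers are placeholders:

$ nova boot --image <image-id> --flavor m1.small test-instance
$ nova volume-attach test-instance <volume-id> /dev/vdc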
End State
Once the instance has served its purpose and is deleted, all state is reclaimed except the persistent volume. The ephemeral storage is purged. Memory and vCPU resources are released. And of course the image has remained unchanged throughout.
System Architecture
OpenStack Compute consists of several main components. A "cloud controller" contains many of these components; it represents the global state and interacts with all other components. An API Server acts as the web services front end for the cloud controller. The compute controller provides compute server resources and typically contains the compute service. The Object Store component optionally provides storage services. An auth manager provides authentication and authorization services when used with the Compute system, or you can use the Identity Service (keystone) as a separate authentication service. A volume controller provides fast and permanent block-level storage for the compute servers. A network controller provides virtual networks to enable compute servers to interact with each other and with the public network. A scheduler selects the most suitable compute controller to host an instance. OpenStack Compute is built on a shared-nothing, messaging-based architecture. You can run all of the major components on multiple servers, including a compute controller, volume controller, network controller, and object store (or image service). A cloud controller communicates with the internal object store via HTTP (Hypertext Transfer Protocol), but it communicates with a scheduler, network controller, and volume controller via AMQP (Advanced Message Queuing Protocol). To avoid blocking each component while waiting
for a response, OpenStack Compute uses asynchronous calls, with a call-back that gets triggered when a response is received. To achieve the shared-nothing property with multiple copies of the same component, OpenStack Compute keeps all the cloud system state in a database.
Ephemeral Storage
Ephemeral storage is associated with a single unique instance. Its size is defined by the flavor of the instance. Data on ephemeral storage ceases to exist when the instance it is associated with is terminated. Rebooting the VM or restarting the host server, however, will not destroy ephemeral data. In the typical use case an instance's root filesystem is stored on ephemeral storage. This is often an unpleasant surprise for people unfamiliar with the cloud model of computing. In addition to the ephemeral root volume, all flavors except the smallest, m1.tiny, provide an additional ephemeral block device varying from 20G for the m1.small through 160G for the m1.xlarge by default - these sizes are configurable. This is presented as a raw block device with no partition table or filesystem. Cloud aware operating system images may discover, format, and mount this device. For example, the cloud-init package included in Ubuntu's stock cloud images will format this space as an ext3 filesystem and mount it on /mnt. It is important to note this is a feature of the guest operating system. OpenStack only provisions the raw storage.
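For images that do not include cloud-init, the extra ephemeral device can be prepared manually inside the guest. A minimal sketch, assuming the device appears as /dev/vdb:

$ sudo mkfs.ext3 /dev/vdb
$ sudo mount /dev/vdb /mnt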
Volume Storage
Volume storage is independent of any particular instance and is persistent. Volumes are user created and, within quota and availability limits, may be of any arbitrary size. When first created, volumes are raw block devices with no partition table and no filesystem. They must be attached to an instance to be partitioned and/or formatted. Once this is done they may be used much like an external disk drive. Volumes may be attached to only one instance at a time, but may be detached and reattached to either the same or different instances. It is possible to configure a volume so that it is bootable and provides a persistent virtual instance similar to traditional non-cloud based virtualization systems. In this use case the resulting instance may still have ephemeral storage depending on the flavor selected, but the root filesystem (and possibly others) will be on the persistent volume and thus state will
be maintained even if the instance is shut down. Details of this configuration are discussed in the Boot From Volume section of this manual. Volumes do not provide concurrent access from multiple instances. For that you need either a traditional network filesystem like NFS or CIFS, or a cluster filesystem such as GlusterFS. These may be built within an OpenStack cluster or provisioned outside of it, but they are not features provided by the OpenStack software.
Table 3.1. Hardware Recommendations
Cloud Controller node (runs network, volume, API, scheduler and image services)
  Recommended hardware:
    Processor: 64-bit x86
    Memory: 12 GB RAM
    Disk space: 30 GB (SATA or SAS or SSD)
    Volume storage: two disks with 2 TB (SATA) for volumes attached to the compute nodes
    Network: one 1 GB Network Interface Card (NIC)
  Notes: Two NICs are recommended but not required. A quad core server with 12 GB RAM would be more than sufficient for a cloud controller node. 32-bit processors will work for the cloud controller node. The package repositories referred to in this guide do not contain i386 packages.

Compute nodes (runs virtual instances)
  Recommended hardware:
    Processor: 64-bit x86
    Memory: 32 GB RAM
    Disk space: 30 GB (SATA)
    Network: two 1 GB NICs
  Notes: Note that you cannot run 64-bit VM instances on a 32-bit compute node. A 64-bit compute node can run either 32- or 64-bit VMs, however. With 2 GB RAM you can run one m1.small instance on a node or three m1.tiny instances without memory swapping, so 2 GB RAM would be a minimum for a test-environment compute node. As an example, Rackspace Cloud Builders use 96 GB RAM for compute nodes in OpenStack deployments. Specifically for virtualization on certain hypervisors on the node or nodes running nova-compute, you need an x86 machine with an AMD processor with SVM extensions (also called AMD-V) or an Intel processor with VT (virtualization technology) extensions. For XenServer and XCP refer to the XenServer installation guide and the XenServer hardware compatibility list. For LXC, the VT extensions are not required. The packages referred to in this guide do not contain i386 packages.
Note
While certain parts of OpenStack are known to work on various operating systems, currently the only feature-complete, production-supported host environment is Linux.
Operating System: OpenStack currently has packages for the following distributions: CentOS, Debian, Fedora, RHEL, and Ubuntu. These packages are maintained by community members; refer to http://wiki.openstack.org/Packaging for additional links.
Note
The Folsom release of OpenStack Compute requires Ubuntu 12.04 or later, as the version of libvirt that ships with Ubuntu 11.10 does not function properly with OpenStack due to bug #1011863. Similarly, the Folsom release of OpenStack Compute requires Fedora 16 or later, as the version of libvirt that ships with Fedora 15 does not function properly with OpenStack due to the same bug.

Database: For OpenStack Compute, you need access to either a PostgreSQL or MySQL database, or you can install it as part of the OpenStack Compute installation process. For Object Storage, the container and account servers use SQLite, and you can install it as part of the installation process.

Permissions: You can install OpenStack Compute, the Image Service, or Object Storage either as root or as a user with sudo permissions if you configure the sudoers file to enable all the permissions.

Network Time Protocol: You must install a time synchronization program such as NTP. For Compute, time synchronization keeps your cloud controller and compute nodes talking to the same time server to avoid problems scheduling VM launches on compute nodes. For Object Storage, time synchronization ensures that object replication accurately updates objects when needed, so that the freshest content is served.
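As a hedged example on Ubuntu (package and service names differ on other distributions), NTP can be installed and restarted as follows; the controller hostname you point the nodes at is a placeholder:

$ sudo apt-get install ntp
$ sudo service ntp restart

Between the two commands, add a line such as "server cloud-controller.example.com" to /etc/ntp.conf so that compute nodes follow your cloud controller rather than only the public pool.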
This is an illustration of one possible multiple server installation of OpenStack Compute; virtual server networking in the cluster may vary.
An alternative architecture would be to add more messaging servers if you notice a lot of backlog in the messaging queue causing performance problems. In that case you would add an additional RabbitMQ server in addition to, or instead of, scaling up the database server. Your installation can run any nova- service on any server as long as the nova.conf is configured to point to the RabbitMQ server and the server can send messages to it. Multiple installation architectures are possible; here is another example illustration.
Co-locating services
While in a best-practice deployment, each OpenStack project's services would live on a different machine, this is not always practical. For example, in small deployments
there might be too few machines available, or a limited number of public IP addresses. Components from different OpenStack projects are not necessarily engineered to be able to be co-located; however, many users report success with a variety of deployment scenarios. The following is a series of pointers to be used when co-location of services from different OpenStack projects on the same machine is a must:

Ensure dependencies aren't in conflict. The OpenStack Continuous Integration team does attempt to ensure there is no conflict, so if you see issues during package installation, consider filing a bug.

Monitor your systems and ensure they are not overloaded. Some parts of OpenStack use a lot of CPU time (e.g. Swift Proxy Servers), while others are IO focused (e.g. Swift Object Server). Try to balance these so they complement each other.

Beware of security. Different parts of OpenStack assume different security models. For example, Swift assumes the storage nodes will be on a private network and does not provide additional security between nodes in the cluster.

Ensure the ports you are running the services on don't conflict. Most ports used by OpenStack are configurable.
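A simple way to check for port conflicts before co-locating another service is to list what is already listening on the machine, for example:

$ sudo netstat -tulpn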
Service Architecture
Because Compute has multiple services and many configurations are possible, here is a diagram showing the overall service architecture and communication systems between the services.
For the compute node(s) install the following packages:

nova-compute
nova-network
nova-api
Note
Because this manual takes active advantage of the "sudo" command, it is easiest to add your user to the sudo group on your Debian system, by running:
# usermod -a -G sudo "myuser"
then re-login. Otherwise you will have to replace every "sudo" call by running the command from the root account.
Getting Started with OpenStack Nova (Fedora 16/Diablo)
This page was originally written as instructions for getting started with OpenStack on Fedora 16, which includes the Diablo release.
Now you can declare the repository to libzypp with zypper ar.
# zypper ar http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/SLE_11_SP2/isv:B1-Systems:OpenStack:release:Folsom.repo
Adding repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' [done]
Repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/SLE_11_SP2/
After declaring the repository you have to update the metadata with zypper ref.
# zypper ref
[...]
Retrieving repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' metadata [done]
Building repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' cache [done]
All repositories have been refreshed.
You can list all available packages for OpenStack with zypper se openstack. You can install packages with zypper in PACKAGE.
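For example (the package name below is an assumption; confirm the exact names with the search command first):

# zypper se openstack
# zypper in openstack-nova-compute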
Warning
You have to apply the latest available updates for SLES11 SP2; without doing that it is not possible to run OpenStack on SLES11 SP2. For evaluation purposes you can request a free 60-day evaluation for SLES11 SP2 to gain access to updates. To verify that you use the correct Python interpreter, simply check the version. You should use at least Python 2.6.8.
# python --version Python 2.6.8
openSUSE
First of all you have to import the signing key of the repository.
# rpm --import http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/openSUSE_12.2/repodata/repomd.xml.key
Now you can declare the repository to libzypp with zypper ar.
# zypper ar http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/openSUSE_12.2/isv:B1-Systems:OpenStack:release:Folsom.repo
Adding repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' [done]
Repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/openSUSE_12.2/
After declaring the repository you have to update the metadata with zypper ref.
# zypper ref
[...]
Retrieving repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' metadata [done]
Building repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' cache [done]
All repositories have been refreshed.
You can list all available packages for OpenStack with zypper se openstack. You can install packages with zypper in PACKAGE.
Installing on Ubuntu
How you go about installing OpenStack Compute depends on your goals for the installation. You can use an ISO image, a scripted installation, or a manual step-by-step installation.
ISO Installation
Two ISO distributions are available for Essex:

See http://sourceforge.net/projects/stackops/files/ for download files and information, license information, and a README file. For documentation on the StackOps ISO, see http://docs.stackops.org. For free support, go to http://getsatisfaction.com/stackops.

See Installing Rackspace Private Cloud on Physical Hardware for download links and instructions for the Rackspace Private Cloud ISO. For documentation on the Rackspace Private Cloud, see http://www.rackspace.com/cloud/private.
Scripted Installation
You can download a script for a standalone install for proof-of-concept, learning, or development purposes for Ubuntu 11.04 at https://devstack.org.

1. Install Ubuntu 12.10 or RHEL/CentOS/Fedora 16: In order to correctly install all the dependencies, we assume a specific version of the OS to make it as easy as possible.

2. Download DevStack:
$ git clone git://github.com/openstack-dev/devstack.git
The devstack repo contains a script that installs OpenStack Compute, Object Storage, the Image Service, Volumes, the Dashboard and the Identity Service, and offers templates for configuration files plus data scripts.

3. Start the install:
$ cd devstack; ./stack.sh
It takes a few minutes; we recommend reading the well-documented script while it runs to learn more about what is going on.
# DATABASE
sql_connection=mysql://nova:[email protected]/nova
# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130
# RABBITMQ
rabbit_host=192.168.206.130
# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.206.130:9292
# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
fixed_range=192.168.100.0/24
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

[DEFAULT]
# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf
# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# VOLUMES
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=nova-volumes
volume_name_template=volume-%s
iscsi_helper=tgtadm
# DATABASE
sql_connection=mysql://nova:[email protected]/nova
# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130
# QPID
qpid_hostname=192.168.206.130
# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.206.130:9292
# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth100
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
fixed_range=192.168.100.0/24
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
Create a nova group, so you can set permissions on the configuration file:
$ sudo addgroup nova
The nova.conf file should have its owner set to root:nova and its mode set to 0640, since the file could contain your MySQL server's username and password. You also want to ensure that the nova user belongs to the nova group.
$ sudo usermod -g nova nova
$ chown -R username:nova /etc/nova
$ chmod 640 /etc/nova/nova.conf
You also need to populate the database with the network configuration information that Compute obtains from the nova.conf file.
$ nova-manage network create <network-label> <project-network> <number-of-networks-in-project> <addresses-in-each-network>
Here is an example of what this looks like with real values entered:
$ nova-manage db sync
$ nova-manage network create novanet 192.168.0.0/24 1 256
For this example, the number of IPs is /24, since that falls inside the /16 range that was set in fixed_range in nova.conf. Currently, there can only be one network, and this setup would use the maximum IPs available in a /24. You can choose values that let you use any valid amount that you would like. The nova-manage service assumes that the first IP address is your network (like 192.168.0.0), that the 2nd IP is your gateway (192.168.0.1), and that the broadcast is the very last IP in the range you defined (192.168.0.255). If this is not the case you will need to manually edit the networks table in the SQL database. When you run the nova-manage network create command, entries are made in the networks and fixed_ips tables. However, one of the networks listed in the networks table needs to be marked as bridge in order for the code to know that a bridge exists. The network in the Nova networks table is marked as bridged automatically for Flat Manager.
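To confirm what was written to the networks table, you can list the networks that nova-manage knows about; a minimal check:

$ nova-manage network list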
Creating Credentials
The credentials you will use to launch instances, bundle images, and perform all the other assorted API functions can be sourced from a single file, such as one called /creds/openrc. Here's an example openrc file you can download from the Dashboard in Settings > Project Settings > Download RC File.
#!/bin/bash
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
export OS_TENANT_ID=27755fd279ce43f9b17ad2d65d45b75c
export OS_USERNAME=vish
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_AUTH_USER=norm
export OS_AUTH_KEY=$OS_PASSWORD_INPUT
export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
export OS_AUTH_STRATEGY=keystone
You also may want to enable EC2 access for the euca2ools. Here is an example ec2rc file for enabling EC2 access with the required credentials.
export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
Lastly, here is an example openrc file that works with nova client and ec2 tools.
export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
export EC2_URL=${EC2_URL:-http://$SERVICE_HOST:8773/services/Cloud}
export EC2_ACCESS_KEY=${DEMO_ACCESS}
export EC2_SECRET_KEY=${DEMO_SECRET}
export S3_URL=http://$SERVICE_HOST:3333
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
Next, add these credentials to your environment prior to running any nova client commands or nova commands.
$ cat /root/creds/openrc >> ~/.bashrc
$ source ~/.bashrc
Creating Certificates
You can create certificates contained within pem files using these nova client commands, ensuring you have set up your environment variables for the nova client:
# nova x509-get-root-cert
# nova x509-create-cert
Note
These commands need to be run as root only if the credentials used to interact with nova-api have been put under /root/.bashrc. If the EC2 credentials have been put into another user's .bashrc file, then it is necessary to run these commands as that user.
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
Another common issue is that you cannot ping or SSH to your instances after issuing the euca-authorize commands. Something to look at is the number of dnsmasq processes that are running. If you have a running instance, check to see that TWO dnsmasq processes are running. If not, perform the following:
$ sudo killall dnsmasq
$ sudo service nova-network restart
If you get the instance not found message while performing the restart, that means the service was not previously running. You simply need to start it instead of restarting it:
$ sudo service nova-network start
For a multi-node install you only make changes to nova.conf and copy it to additional compute nodes. Ensure each nova.conf file points to the correct IP addresses for the respective services. By default, Nova sets the bridge device based on the setting in flat_network_bridge. Now you can edit /etc/network/interfaces with the following template, updated with your IP information.
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    address xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.xxx
    network xxx.xxx.xxx.xxx
    broadcast xxx.xxx.xxx.xxx
    gateway xxx.xxx.xxx.xxx
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers xxx.xxx.xxx.xxx
Restart networking:
$ sudo service networking restart
With nova.conf updated and networking set, configuration is nearly complete. First, bounce the relevant services to take the latest updates:
$ sudo service libvirtd restart
$ sudo service nova-compute restart
To avoid issues with KVM and permissions with Nova, run the following commands to ensure your VMs run optimally:
# chgrp kvm /dev/kvm
# chmod g+rwx /dev/kvm
If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs this iptables entry so that UEC images can get metadata info. On compute nodes, configure the iptables with this next step:
# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773
Lastly, confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query:
$ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       | 0       | 1  | osdemo02 | nova-network   | network   | 46064        | 0        | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       | 0       | 2  | osdemo02 | nova-compute   | compute   | 46056        | 0        | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       | 0       | 3  | osdemo02 | nova-scheduler | scheduler | 46065        | 0        | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       | 0       | 4  | osdemo01 | nova-compute   | compute   | 37050        | 0        | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       | 0       | 9  | osdemo04 | nova-compute   | compute   | 28484        | 0        | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       | 0       | 8  | osdemo05 | nova-compute   | compute   | 29284        | 0        | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
You can see that osdemo0{1,2,4,5} are all running nova-compute. When you start spinning up instances, they will allocate on any node that is running nova-compute from this list.
The output of this command will vary depending on the hypervisor. Example output when the hypervisor is Xen:
+----------------+-----------------+ | Property | Value | +----------------+-----------------+ | cpu0 | 4.3627 | | memory | 1171088064.0000 |
| memory_target | 1171088064.0000 | | vbd_xvda_read | 0.0 | | vbd_xvda_write | 0.0 | | vif_0_rx | 3223.6870 | | vif_0_tx | 0.0 | | vif_1_rx | 104.4955 | | vif_1_tx | 0.0 | +----------------+-----------------+
While the command should work with any hypervisor that is controlled through libvirt (e.g., KVM, QEMU, LXC), it has only been tested with KVM. Example output when the hypervisor is KVM:
+------------------+------------+ | Property | Value | +------------------+------------+ | cpu0_time | 2870000000 | | memory | 524288 | | vda_errors | -1 | | vda_read | 262144 | | vda_read_req | 112 | | vda_write | 5606400 | | vda_write_req | 376 | | vnet0_rx | 63343 | | vnet0_rx_drop | 0 | | vnet0_rx_errors | 0 | | vnet0_rx_packets | 431 | | vnet0_tx | 4905 | | vnet0_tx_drop | 0 | | vnet0_tx_errors | 0 | | vnet0_tx_packets | 45 | +------------------+------------+
Essex configuration using KVM, FlatDHCP, MySQL, Glance, LDAP, and optionally sheepdog, API is EC2
From gerrit.wikimedia.org, used with permission. Where you see parameters passed in, they are reading from Puppet configuration files. For example, a variable like <%= novaconfig["my_ip"] %> is for the puppet templates they use to deploy.
[DEFAULT]
verbose=True
auth_strategy=keystone
connection_type=libvirt
root_helper=sudo /usr/bin/nova-rootwrap
instance_name_template=i-%08x
daemonize=1
scheduler_driver=nova.scheduler.simple.SimpleScheduler
max_cores=200
my_ip=<%= novaconfig["my_ip"] %>
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
sql_connection=mysql://<%= novaconfig["db_user"] %>:<%= novaconfig["db_pass"] %>@<%= novaconfig["db_host"] %>/<%= novaconfig["db_name"] %>
image_service=nova.image.glance.GlanceImageService
s3_host=<%= novaconfig["glance_host"] %>
glance_api_servers=<%= novaconfig["glance_host"] %>:9292
rabbit_host=<%= novaconfig["rabbit_host"] %>
cc_host=<%= novaconfig["cc_host"] %>
network_host=<%= novaconfig["network_host"] %>
ec2_url=http://<%= novaconfig["api_host"] %>:8773/services/Cloud
ec2_dmz_host=<%= novaconfig["api_ip"] %>
dmz_cidr=<%= novaconfig["dmz_cidr"] %>
libvirt_type=<%= novaconfig["libvirt_type"] %>
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
flat_network_dhcp_start=<%= novaconfig["dhcp_start"] %>
dhcp_domain=<%= novaconfig["dhcp_domain"] %>
network_manager=nova.network.manager.FlatDHCPManager
flat_interface=<%= novaconfig["network_flat_interface"] %>
flat_injected=False
flat_network_bridge=<%= novaconfig["flat_network_bridge"] %>
fixed_range=<%= novaconfig["fixed_range"] %>
public_interface=<%= novaconfig["network_public_interface"] %>
routing_source_ip=<%= novaconfig["network_public_ip"] %>
node_availability_zone=<%= novaconfig["zone"] %>
zone_name=<%= novaconfig["zone"] %>
quota_floating_ips=<%= novaconfig["quota_floating_ips"] %>
multi_host=True
api_paste_config=/etc/nova/api-paste.ini
#use_ipv6=True
allow_same_net_traffic=False
live_migration_uri=<%= novaconfig["live_migration_uri"] %>
These represent configuration role classes used by the puppet configuration files to build out the rest of the nova.conf file.
ldap_base_dn => "dc=wikimedia,dc=org",
ldap_user_dn => "uid=novaadmin,ou=people,dc=wikimedia,dc=org",
ldap_user_pass => $passwords::openstack::nova::nova_ldap_user_pass,
ldap_proxyagent => "cn=proxyagent,ou=profile,dc=wikimedia,dc=org",
ldap_proxyagent_pass => $passwords::openstack::nova::nova_ldap_proxyagent_pass,
controller_mysql_root_pass => $passwords::openstack::nova::controller_mysql_root_pass,
puppet_db_name => "puppet",
puppet_db_user => "puppet",
puppet_db_pass => $passwords::openstack::nova::nova_puppet_user_pass,
# By default, don't allow projects to allocate public IPs; this way we can
# let users have network admin rights, for firewall rules and such, and can
# give them public ips by increasing their quota
quota_floating_ips => "0",
libvirt_type => $realm ? {
    "production" => "kvm",
    "labs" => "qemu",
},
db_host => $controller_hostname,
dhcp_domain => "pmtpa.wmflabs",
glance_host => $controller_hostname,
rabbit_host => $controller_hostname,
cc_host => $controller_hostname,
network_flat_interface => $realm ? {
    "production" => "eth1.103",
    "labs" => "eth0.103",
},
network_flat_interface_name => $realm ? {
    "production" => "eth1",
    "labs" => "eth0",
},
network_flat_interface_vlan => "103",
flat_network_bridge => "br103",
network_public_interface => "eth0",
network_host => $realm ? {
    "production" => "10.4.0.1",
    "labs" => "127.0.0.1",
},
api_host => $realm ? {
    "production" => "virt2.pmtpa.wmnet",
    "labs" => "localhost",
},
api_ip => $realm ? {
    "production" => "10.4.0.1",
    "labs" => "127.0.0.1",
},
fixed_range => $realm ? {
    "production" => "10.4.0.0/24",
    "labs" => "192.168.0.0/24",
},
dhcp_start => $realm ? {
    "production" => "10.4.0.4",
    "labs" => "192.168.0.4",
},
network_public_ip => $realm ? {
    "production" => "208.80.153.192",
    "labs" => "127.0.0.1",
},
dmz_cidr => $realm ? {
"production" => "208.80.153.0/22,10.0.0.0/8", "labs" => "10.4.0.0/24", }, controller_hostname => $realm ? { "production" => "labsconsole.wikimedia.org", "labs" => $fqdn, }, ajax_proxy_url => $realm ? { "production" => "http://labsconsole.wikimedia.org:8000", "labs" => "http://${hostname}.${domain}:8000", }, ldap_host => $controller_hostname, puppet_host => $controller_hostname, puppet_db_host => $controller_hostname, live_migration_uri => "qemu://%s.pmtpa.wmnet/system?pkipath=/var/lib/nova", zone => "pmtpa", keystone_admin_token => $keystoneconfig["admin_token"], keystone_auth_host => $keystoneconfig["bind_ip"], keystone_auth_protocol => $keystoneconfig["auth_protocol"], keystone_auth_port => $keystoneconfig["auth_port"],
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
Configuring Logging
You can use nova.conf configuration options to indicate where Compute will log events, to set the level of logging, and to customize log formats. To customize log formats for OpenStack Compute, use these configuration option settings.
(Type) Description

(StrOpt) Format string for %(asctime)s in log records. Default: %default
(StrOpt) (Optional) The directory to keep log files in (will be prepended to --logfile)
(StrOpt) (Optional) Name of log file to output to. If not set, logging will go to stdout.

log_format="%(asctime)s %(levelname)8s [%(name)s] %(message)s"
    (StrOpt) A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. Default: %default
logdir=<None>
    (StrOpt) Log output to a per-service log file in named directory
logfile=<None>
    (StrOpt) Log output to a named file
logfile_mode=0644
    (StrOpt) Default file mode used when creating log files
logging_context_format_string="%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s%(message)s"
    (StrOpt) format string to use for log messages with context
logging_debug_format_suffix="from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d"
    (StrOpt) data to append to log format when level is DEBUG
logging_default_format_string="%(asctime)s %(levelname)s %(name)s [-] %(instance)s%(message)s"
    (StrOpt) format string to use for log messages without context
logging_exception_prefix="%(asctime)s TRACE %(name)s %(instance)s"
    (StrOpt) prefix each line of exception output with this format
publish_errors=false
    (BoolOpt) publish error events
use_syslog=false
    (BoolOpt) Use syslog for logging
syslog_log_facility=LOG_USER
    (StrOpt) syslog facility to receive log lines
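Putting a few of these options together, a typical, hedged logging stanza in nova.conf might look like the following; the log directory shown is the conventional default rather than a requirement:

verbose=True
logdir=/var/log/nova
use_syslog=false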
Configuring Hypervisors
OpenStack Compute requires a hypervisor and supports several hypervisors and virtualization standards. Configuring and running OpenStack Compute to use a particular hypervisor takes several installation and configuration steps. The libvirt_type configuration option indicates which hypervisor will be used. Refer to ??? for more details. To customize hypervisor support in OpenStack Compute, refer to these configuration settings in nova.conf.
hyperv_attaching_volume_retry_count=10
hyperv_wait_between_attach_retry=5
(Type) Description

(StrOpt) Configures the guest CPU model exposed to the hypervisor. Valid options are: custom, host-model, host-passthrough, none. If the hypervisor is KVM or QEMU, the default value is host-model, otherwise the default value is none.
(StrOpt) Specify the guest CPU model exposed to the hypervisor. This configuration option is only applicable if libvirt_cpu_mode is set to custom. Valid options: one of the named models specified in /usr/share/libvirt/cpu_map.xml, e.g.: Westmere, Nehalem, Opteron_G3.
(StrOpt) Override the default disk prefix for the devices attached to a server, which is dependent on libvirt_type. (valid options are: sd, xvd, uvd, vd)
(BoolOpt) Inject the ssh public key at boot time
(StrOpt) Instance ephemeral storage backend format. Acceptable values are: raw, qcow2, lvm, default. If default is specified, then the use_cow_images flag is used instead of this one. Please note that the current snapshot mechanism in OpenStack Compute works only with instances backed with Qcow2 images.
(StrOpt) LVM Volume Group that is used for instance ephemerals, when you specify libvirt_images_type=lvm.
(BoolOpt) Inject the admin password at boot time, without an agent.
(BoolOpt) Use a separated OS thread pool to realize non-blocking libvirt calls
(StrOpt) Location where libvirt driver will store snapshots before uploading them to image service
(BoolOpt) Create sparse (not fully allocated) LVM volumes for instance ephemerals if you use LVM backend for them.
(StrOpt) Libvirt domain type (valid options are: kvm, lxc, qemu, uml, xen)
(StrOpt) Override the default libvirt URI (which is dependent on libvirt_type)
(StrOpt) The libvirt VIF driver to configure the VIFs.

libvirt_cpu_model=<None>
libvirt_disk_prefix=<None>
libvirt_inject_key=true
libvirt_images_type=default
libvirt_volume_drivers="iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver"
    (ListOpt) Libvirt handlers for remote volumes.
libvirt_wait_soft_reboot_seconds=120
    (IntOpt) Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window.
(BoolOpt) Used by Hyper-V
(BoolOpt) Indicates whether unused base images should be removed
(IntOpt) Unused unresized base images younger than this will not be removed
(IntOpt) Unused resized base images younger than this will not be removed
(StrOpt) Rescue ami image
(StrOpt) Rescue aki image
(Type) Description

(StrOpt) Rescue ari image
(StrOpt) Snapshot image format (valid options are: raw, qcow2, vmdk, vdi). Defaults to same as source image
(BoolOpt) Sync virtual and real mouse cursors in Windows VMs
(StrOpt) Name of Integration Bridge used by Open vSwitch
(BoolOpt) Use virtio for bridge interfaces
(StrOpt) VIM Service WSDL Location e.g. http://<server>/vimService.wsdl, due to a bug in vSphere ESX 4.1 default wsdl.
(FloatOpt) The number of times we retry on failures, e.g., socket error, etc. Used only if compute_driver is vmwareapi.VMWareESXDriver.
(StrOpt) URL for connection to VMWare ESX host. Required if compute_driver is vmwareapi.VMWareESXDriver.
(StrOpt) Password for connection to VMWare ESX host. Used only if compute_driver is vmwareapi.VMWareESXDriver.
(StrOpt) Username for connection to VMWare ESX host. Used only if compute_driver is vmwareapi.VMWareESXDriver.
(FloatOpt) The interval used for polling of remote tasks. Used only if compute_driver is vmwareapi.VMWareESXDriver.
(StrOpt) Physical ethernet adapter name for vlan networking
(StrOpt) PowerVM system manager type (ivm, hmc)
(StrOpt) PowerVM manager host or ip
(StrOpt) PowerVM VIOS host or ip if different from manager
(StrOpt) PowerVM manager user name
(StrOpt) PowerVM manager user password
(StrOpt) PowerVM image remote path. Used to copy and store images from Glance on the PowerVM VIOS LPAR.
(StrOpt) Local directory on the compute host to download glance images to.

vmware_vif_driver=nova.virt.vmwareapi.vif.VMWareVlanBridgeDriver
    (StrOpt) The VMWare VIF driver to configure the VIFs.
vmwareapi_api_retry_count=10
vmwareapi_host_ip=<None>
vmwareapi_host_password=<None>
vmwareapi_host_username=<None>
vmwareapi_task_poll_interval=5.0
ldap_cloudadmin=cn=cloudadmins,ou=Groups,dc=example,dc=com
    (StrOpt) cn for Cloud Admins
ldap_developer=cn=developers,ou=Groups,dc=example,dc=com
    (StrOpt) cn for Developers
ldap_itsec=cn=itsec,ou=Groups,dc=example,dc=com
    (StrOpt) cn for ItSec
ldap_netadmin=cn=netadmins,ou=Groups,dc=example,dc=com
    (StrOpt) cn for NetAdmins
ldap_password=changeme
    (StrOpt) LDAP password
ldap_project_subtree=ou=Groups,dc=example,dc=com
    (StrOpt) OU for Projects
ldap_schema_version=2
    (IntOpt) Current version of the LDAP schema
ldap_url=ldap://localhost
    (StrOpt) Point this at your ldap server
ldap_user_dn=cn=Manager,dc=example,dc=com
    (StrOpt) DN of admin user
ldap_user_id_attribute=uid
    (StrOpt) Attribute to use as id
ldap_user_modify_only=false
    (BoolOpt) Modify user attributes instead of creating/deleting
ldap_user_name_attribute=cn
    (StrOpt) Attribute to use as name
ldap_user_subtree=ou=Users,dc=example,dc=com
    (StrOpt) OU for Users
ldap_user_unit=Users
    (StrOpt) OID for Users
role_project_subtree=ou=Groups,dc=example,dc=com
    (StrOpt) OU for Roles
auth_driver=nova.auth.dbdriver.DbDriver
    (StrOpt) Driver that auth manager uses
credential_cert_file=cert.pem
    (StrOpt) Filename of certificate in credentials zip
credential_key_file=pk.pem
    (StrOpt) Filename of private key in credentials zip
credential_rc_file=%src
    (StrOpt) Filename of rc in credentials zip; %s will be replaced by name of the region (nova by default)
credential_vpn_file=nova-vpn.conf
    (StrOpt) Filename of certificate in credentials zip
credentials_template=$pybasedir/nova/auth/novarc.template
    (StrOpt) Template for creating users rc file
global_roles=cloudadmin,itsec
    (ListOpt) Roles that apply to all projects
superuser_roles=cloudadmin
    (ListOpt) Roles that ignore authorization checking completely
vpn_client_template=$pybasedir/nova/cloudpipe/client.ovpn.template
    (StrOpt) Template for creating users VPN file
To customize certificate authority settings for Compute, see these configuration settings in nova.conf.
project_cert_subject="/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s"
    (StrOpt) Subject for certificate for projects, %s for project, timestamp
use_project_ca=false
    (BoolOpt) Whether to use a CA for each project (tenant)
user_cert_subject="/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s"
    (StrOpt) Subject for certificate for users, %s for project, user, timestamp
To customize Compute and the Identity service to use LDAP as a backend, refer to these configuration settings in nova.conf.
Edit the nova.conf file on all nodes to set the use_ipv6 configuration option to True. Restart all nova- services. When using the command nova-manage network create you can add a fixed range for IPv6 addresses. You must specify public or private after the create parameter.
$ nova-manage network create public fixed_range num_networks network_size vlan_start vpn_start fixed_range_v6
You can set the IPv6 global routing prefix by using the fixed_range_v6 parameter. The default is fd00::/48. When you use FlatDHCPManager, the command uses the original value of fixed_range_v6. When you use VlanManager, the command creates prefixes of subnets by incrementing the subnet id. Guest VMs use this prefix for generating their IPv6 global unicast addresses. Here is a usage example for VlanManager:
$ nova-manage network create public 10.0.1.0/24 3 32 100 1000 fd00:1::/48
Note that vlan_start and vpn_start parameters are not used by FlatDHCPManager.
(Type) Description

(StrOpt) Default IPv6 gateway
(StrOpt) Backend to use for IPv6 generation
(BoolOpt) use IPv6
Configuring Migrations
Note
This feature is for cloud administrators only.

Migration allows an administrator to move a virtual machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine. There are two types of migration:

Migration (or non-live migration): In this case the instance will be shut down (and the instance will know that it has been rebooted) for a period of time in order to be moved to another hypervisor.

Live migration (or true live migration): Almost no instance downtime; it is useful when the instances must be kept running during the migration. There are two types of live migration:
Shared storage based live migration: In this case both hypervisors have access to shared storage.

Block live migration: For this type of migration, no shared storage is required.

The following sections describe how to configure your hosts and compute nodes for migrations using the KVM and XenServer hypervisors.
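Once the hosts are configured as described in the following sections, migrations are normally driven with the nova client. A hedged sketch, with the server and destination host names as placeholders:

$ nova migrate <server>
$ nova live-migration <server> <destination-host>
$ nova live-migration --block-migrate <server> <destination-host>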
KVM-Libvirt
Prerequisites

Hypervisor: KVM with libvirt

Shared storage: NOVA-INST-DIR/instances/ (e.g. /var/lib/nova/instances) has to be mounted by shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.

Instances: Instances can be migrated with iSCSI based volumes.
Note
Migrations done by the Compute service do not use libvirt's live migration functionality by default. Because of this, guests are suspended before migration and may therefore experience several minutes of downtime. See ??? for more details.
Note
This guide assumes the default value for instances_path in your nova.conf ("NOVA-INST-DIR/instances"). If you have changed the state_path or instances_path variables, please modify accordingly.
Note
You must specify vncserver_listen=0.0.0.0 or live migration will not work correctly. See ??? for more details on this option.

Example Nova Installation Environment

Prepare at least 3 servers; for example, HostA, HostB and HostC.

HostA is the "Cloud Controller", and should be running: nova-api, nova-scheduler, nova-network, nova-volume, and nova-objectstore.

HostB and HostC are the "compute nodes", running nova-compute.

Ensure that NOVA-INST-DIR (set with state_path in nova.conf) is the same on all hosts. In this example, HostA will be the NFSv4 server which exports NOVA-INST-DIR/instances, and HostB and HostC mount it.
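For orientation, the end state on HostB and HostC might look like the following /etc/fstab entry; the NFS version and mount options are assumptions to adapt to your environment, and the numbered steps below walk through the actual setup:

HostA:/ /var/lib/nova/instances nfs4 defaults 0 0

$ sudo mount -a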
1. Configure your DNS or /etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from one another.
$ ping HostA
$ ping HostB
$ ping HostC
2. Ensure that the UID and GID of your nova and libvirt users are identical between each of your servers. This ensures that the permissions on the NFS mount will work correctly.

3. Follow the instructions at the Ubuntu NFS HowTo to set up an NFS server on HostA, and NFS clients on HostB and HostC. Our aim is to export NOVA-INST-DIR/instances from HostA, and have it readable and writable by the nova user on HostB and HostC.

4. Using your knowledge from the Ubuntu documentation, configure the NFS server at HostA by adding a line to /etc/exports:
NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)
Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC. Then restart the NFS server.
$ /etc/init.d/nfs-kernel-server restart
$ /etc/init.d/idmapd restart
5. Set the 'execute/search' bit on your shared directory. On both compute nodes, make sure to enable the 'execute/search' bit to allow qemu to be able to use the images within the directories. On all hosts, execute the following command:
$ chmod o+x NOVA-INST-DIR/instances
Perform the same check at HostB and HostC, paying special attention to the permissions (nova should be able to write):
$ ls -ld NOVA-INST-DIR/instances/
$ df -k
Filesystem           1K-blocks      Used   Available Use% Mounted on
/dev/sda1            921514972   4180880   870523828   1% /
none                  16498340      1228    16497112   1% /dev
none                  16502856         0    16502856   0% /dev/shm
none                  16502856       368    16502488   1% /var/run
none                  16502856         0    16502856   0% /var/lock
none                  16502856         0    16502856   0% /lib/init/rw
HostA:               921515008 101921792   772783104  12% /var/lib/nova/instances ( <--- this line is important.)
Modify /etc/init/libvirt-bin.conf
before : exec /usr/sbin/libvirtd -d
after :  exec /usr/sbin/libvirtd -d -l
Modify /etc/default/libvirt-bin
before : libvirtd_opts=" -d"
after :  libvirtd_opts=" -d -l"
Restart libvirt. After executing the command, ensure that libvirt is successfully restarted:
$ stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
8. Configure your firewall to allow libvirt to communicate between nodes. Information about the ports used with libvirt can be found in the libvirt documentation. By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. As this guide has disabled libvirt auth, you should take good care that these ports are only open to hosts within your installation.
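As a hedged sketch of such a restriction (the source subnet is a placeholder, and the exact rules depend on how the rest of your firewall is organized):

# iptables -A INPUT -p tcp -s 192.168.206.0/24 --dport 16509 -j ACCEPT
# iptables -A INPUT -p tcp -s 192.168.206.0/24 --dport 49152:49261 -j ACCEPT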
9. You can now configure options for live migration. In most cases, you do not need to configure any options. The following chart is for advanced usage only.
The Compute service does not use libvirt's live migration by default because there is a risk that the migration process will never terminate. This can happen if the guest operating system dirties blocks on the disk faster than they can be migrated.
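If you do want libvirt to perform a true live migration, the libvirt driver exposes a live_migration_flag option. The following nova.conf sketch assumes the Folsom option name and flag list; verify it against your own nova.conf sample before relying on it:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE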
XenServer
Shared Storage
Prerequisites
Compatible XenServer hypervisors. For more information, please refer to the Requirements for Creating Resource Pools section of the XenServer Administrator's Guide.
Shared storage: an NFS export, visible to all XenServer hosts.
Note
Please check the NFS VHD section of the XenServer Administrator's Guide for the supported NFS versions.
In order to use shared storage live migration with XenServer hypervisors, the hosts must be joined to a XenServer pool. In order to create that pool, a host aggregate must be created with special metadata. This metadata will be used by the XAPI plugins to establish the pool.
1. Add an NFS VHD storage to your master XenServer, and set it as the default SR. For more information, please refer to the NFS VHD section of the XenServer Administrator's Guide.
2. Configure all the compute nodes to use the default SR for pool operations, by including:
sr_matching_filter=default-sr:true
in your nova.conf configuration files across your compute nodes.
3. Create a host aggregate:
$ nova aggregate-create <name-for-pool> <availability-zone>
The command will display a table which contains the id of the newly created aggregate. Now add special metadata to the aggregate, to mark it as a hypervisor pool
$ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
$ nova aggregate-set-metadata <aggregate-id> operational_state=created
At this point, the host is part of a XenServer pool. 4. Add additional hosts to the pool:
$ nova aggregate-add-host <aggregate-id> <compute-host-name>
Note
At this point the added compute node and the host will be shut down, in order to join the host to the XenServer pool. The operation will fail if any server other than the compute node is running or suspended on your host.
Block migration
Prerequisites
Compatible XenServer hypervisors. The hypervisors must support the Storage XenMotion feature. Please refer to the manual of your XenServer to make sure your edition has this feature.
Note
Note that you need to pass the extra option --block-migrate to the live migration command in order to use block migration.
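For example, a block migration is typically requested as follows; the server name and target host are placeholders for your own values:
$ nova live-migration --block-migrate <server> <target-host>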
Note
Note that block migration works only with EXT local storage SRs, and the server must not have any volumes attached.
Configuring Resize
Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. In order for this feature to work properly, some underlying virt layers may need further configuration; this section describes the required configuration steps for each hypervisor layer provided by OpenStack.
XenServer
To get resize to work with XenServer (and XCP) you need to:
Establish a root trust between all hypervisor nodes of your deployment: you can do so by generating an ssh key-pair (with ssh-keygen) and then ensuring that each of your dom0's authorized_keys files (located in /root/.ssh/authorized_keys) contains the public keys (found in /root/.ssh/id_rsa.pub) of the other dom0s; a sketch of these commands follows the SR commands below.
Provide a /images mountpoint to your hypervisor's dom0: dom0 space is at a premium, so creating a directory in dom0 is risky and almost surely bound to fail, especially when resizing big servers. The least you can do is symlink /images to your local storage SR. The instructions below work for an English-based installation of XenServer (and XCP) and in the case of an ext3-based SR (with which the resize functionality is known to work correctly).
sr_uuid=$(xe sr-list name-label="Local storage" params=uuid --minimal)
img_dir="/var/run/sr-mount/$sr_uuid/images"
mkdir -p "$img_dir"
ln -s $img_dir /images
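The root trust mentioned above can be established with a couple of commands. This is only a sketch, and OTHER_DOM0 is a placeholder for each of the other hypervisor hosts in your deployment:
# generate a key pair in dom0 (run once per hypervisor)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# append the public key to every other dom0's authorized_keys
cat /root/.ssh/id_rsa.pub | ssh root@OTHER_DOM0 "cat >> /root/.ssh/authorized_keys"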
One MooseFS master server, running the metadata service.
One MooseFS slave server, running the metalogger service.
For that particular walkthrough, we will use the following network schema:
10.0.10.15 for the MooseFS metadata server admin IP
10.0.10.16 for the MooseFS metadata server main IP
10.0.10.17 for the MooseFS metalogger server admin IP
10.0.10.18 for the MooseFS metalogger server main IP
10.0.10.19 for the MooseFS first chunkserver IP
10.0.10.20 for the MooseFS second chunkserver IP
In our deployment, both the MooseFS master and slave run their services inside a virtual machine; you just need to make sure to allocate enough memory to the MooseFS metadata server, since all the metadata is stored in RAM while the service runs.
1. Hosts entry configuration
Add the following entry to /etc/hosts:
10.0.10.16 mfsmaster
2. Required packages
Install the required packages by running the appropriate command for your distribution:
$ apt-get install zlib1g-dev python pkg-config
$ yum install make automake gcc gcc-c++ kernel-devel python26 pkg-config
3. User and group creation
Create the required user and group:
$ groupadd mfs && useradd -g mfs mfs
4. Download the sources
Go to the MooseFS download page and fill in the download form in order to obtain the URL for the package.
5. Extract and configure the sources
Extract the package and compile it:
$ tar -zxvf mfs-1.6.25.tar.gz && cd mfs-1.6.25
For the MooseFS master server installation, we exclude the mfschunkserver and mfsmount components from the compilation:
$ ./configure --prefix=/usr --sysconfdir=/etc/moosefs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
$ make && make install
6. Create configuration files
We will keep the default settings; for performance tuning, you can read the MooseFS official FAQ.
$ cd /etc/moosefs
$ cp mfsmaster.cfg.dist mfsmaster.cfg
$ cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
$ cp mfsexports.cfg.dist mfsexports.cfg
In /etc/moosefs/mfsexports.cfg, edit the second line in order to restrict access to our private network:
10.0.10.0/24 / rw,alldirs,maproot=0
7. Power up the MooseFS mfsmaster service
You can now start the mfsmaster and mfscgiserv daemons on the MooseFS metadata server (mfscgiserv is a web server which lets you see the MooseFS status in real time via a web interface):
$ /usr/sbin/mfsmaster start && /usr/sbin/mfscgiserv start
Open the following URL in your browser to see the MooseFS status page: http://10.0.10.16:9425
8. Power up the MooseFS metalogger service
$ /usr/sbin/mfsmetalogger start
2. Download the sources and configure them
For that setup we will retrieve the latest version of fuse to make sure every function will be available:
$ wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.1/fuse-2.9.1.tar.gz && tar -zxvf fuse-2.9.1.tar.gz && cd fuse-2.9.1
$ ./configure && make && make install
Installing the MooseFS chunk and client services
For installing both services, you can follow the same steps that were presented before (steps 1 to 4):
1. Hosts entry configuration
2. Required packages
3. User and group creation
4. Download the sources
5. Extract and configure the sources
Extract the package and compile it:
$ tar -zxvf mfs-1.6.25.tar.gz && cd mfs-1.6.25
For the MooseFS chunk server installation, we only exclude the mfsmaster component from the compilation:
$ ./configure --prefix=/usr --sysconfdir=/etc/moosefs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster
$ make && make install
6. Create configuration files
The chunk server configuration is relatively easy to set up. You only need to create, on every chunk server, the directories that will be used for storing the data of your cluster.
$ cd /etc/moosefs
$ cp mfschunkserver.cfg.dist mfschunkserver.cfg
$ cp mfshdd.cfg.dist mfshdd.cfg
Edit /etc/moosefs/mfshdd.cfg and add the directories you created to make them part of the cluster:
# mount points of HDD drives
#
#/mnt/hd1
#/mnt/hd2
#etc.
/mnt/mfschunks1
/mnt/mfschunks2
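The directories listed in mfshdd.cfg must exist and be writable by the mfs user; a minimal sketch, assuming the two example mount points above:
$ mkdir -p /mnt/mfschunks1 /mnt/mfschunks2
$ chown -R mfs:mfs /mnt/mfschunks1 /mnt/mfschunks2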
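On each compute node, the cluster is then mounted on the instances directory with mfsmount; a minimal sketch, assuming the mfsmaster hosts entry created in step 1:
$ mfsmount /var/lib/nova/instances -H mfsmaster
After mounting, the share appears in the output of the mount command: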
/dev/cciss/c0d0p1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
mfsmaster:9421 on /var/lib/nova/instances type fuse.mfs (rw,allow_other,default_permissions)
You can interact with it the way you would interact with a classical mount, using built-in Linux commands (cp, rm, etc.). The MooseFS client has several tools for managing the objects within the cluster (set replication goals, etc.). You can see the list of the available tools by running:
$ mfs <TAB> <TAB>
mfsappendchunks mfschunkserver mfsfileinfo mfsmount mfsrsetgoal mfssetgoal mfscgiserv mfsdeleattr mfsfilerepair mfsrgetgoal mfsrsettrashtime mfssettrashtime mfscheckfile mfsdirinfo mfsgeteattr mfsrgettrashtime mfsseteattr mfssnapshot
You can read the manual for every command. You can also see the online help.
Add an entry into the fstab file
In order to make sure the storage is mounted at boot, you can add an entry into /etc/fstab on both compute nodes:
mfsmount /var/lib/nova/instances fuse mfsmaster=mfsmaster,_netdev 0 0
(Type) Description (StrOpt) driver to use for database access (StrOpt) The SQLAlchemy connection string used to connect to the database (IntOpt) Verbosity of SQL debugging information. 0=None, 100=Everything (BoolOpt) Add python stack traces to SQL as comment strings (IntOpt) timeout before idle sql connections are reaped (IntOpt) maximum db connection retries during startup. (setting -1 implies an infinite retry count) (IntOpt) interval between retries of opening a sql connection (StrOpt) File name of clean sqlite db (StrOpt) the filename to use with sqlite (BoolOpt) If passed, use synchronous mode for sqlite
The following tables describe the rest of the options that can be used when RabbitMQ is used as the messaging system. You can configure the messaging communication for different installation scenarios as well as tune RabbitMQ's retries and the size of the RPC thread pool.
Table 4.10. Description of nova.conf configuration options for Remote Procedure Calls and RabbitMQ Messaging
Configuration option=Default value    Description
rabbit_host=localhost    IP address; Location of RabbitMQ installation.
rabbit_password=guest    String value; Password for the RabbitMQ server.
rabbit_port=5672    Integer value; Port where RabbitMQ server is running/listening.
rabbit_userid=guest    String value; User ID used for RabbitMQ connections.
rabbit_virtual_host=/
rabbit_retry_interval=1
rpc_thread_pool_size=1024
This next critical option points the compute nodes to the Qpid broker (server). Set qpid_hostname in nova.conf to be the hostname where the broker is running.
Note
The --qpid_hostname option accepts a value in the form of either a hostname or an IP address.
qpid_hostname=hostname.example.com
If the Qpid broker is listening on a port other than the AMQP default of 5672, you will need to set the qpid_port option:
qpid_port=12345
If you configure the Qpid broker to require authentication, you will need to add a username and password to the configuration:
qpid_username=username qpid_password=password
By default, TCP is used as the transport. If you would like to enable SSL, set the qpid_protocol option.
The following table lists the rest of the options used by the Qpid messaging driver for OpenStack Compute. These options are rarely needed.
qpid_reconnect_timeout=(Qpid default)
qpid_reconnect_limit=(Qpid default)
qpid_reconnect_interval_min=(Qpid default)
qpid_reconnect_interval_max=(Qpid default)
qpid_reconnect_interval=(Qpid default)
qpid_heartbeat
qpid_tcp_nodelay=True
volume_topic=volume    String value; Name of the topic that volume nodes listen on
Specifying Limits
Limits are specified using five values:
The HTTP method used in the API call, typically one of GET, PUT, POST, or DELETE.
A human readable URI that is used as a friendly description of where the limit is applied.
A regular expression. The limit will be applied to all URIs that match the regular expression and HTTP method.
A limit value that specifies the maximum count of units before the limit takes effect.
An interval that specifies the time frame to which the limit is applied. The interval can be SECOND, MINUTE, HOUR, or DAY.
Rate limits are applied in order, relative to the HTTP method, going from least to most specific. For example, although the default threshold for POSTs to */servers is 50 per day, one cannot POST to */servers more than 10 times within a single minute because the rate limit for any POST is 10 per minute.
Default Limits
OpenStack compute is normally installed with the following limits enabled:
To modify the limits, add a 'limits' specification to the [filter:ratelimit] section of the file. The limits are specified in the order HTTP method, friendly URI, regex, limit, and interval. The following example specifies the default rate limiting values:
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
Configuring Quotas
For tenants, quota controls are available to limit the following (flag and default shown in parentheses):
Number of volumes which may be created (volumes=10)
Total size of all volumes within a project as measured in GB (gigabytes=1000)
Number of instances which may be launched (instances=10)
Number of processor cores which may be allocated (cores=20)
Publicly accessible IP addresses (floating_ips=10)
Amount of RAM that can be allocated in MB (ram=512000)
Number of files that can be injected (injected_files=5)
Maximum size of injected files in bytes (injected_file_content_bytes=10240)
Number of security groups that may be created (security_groups=10)
Number of rules per security group (security_group_rules=20)
The defaults may be modified by setting the variable in nova.conf, then restarting the nova-api service. To modify a value for a specific project, use the nova-manage command. For example:
$ nova-manage project quota --project=1113f5f266f3477ac03da4e4f82d0568 --key=cores --value=40
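The same defaults can also be raised cloud-wide in nova.conf before restarting nova-api. The sketch below assumes the quota_-prefixed spellings of the flags listed above, so check them against your nova.conf sample:
quota_cores=40
quota_instances=20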
Alternately, quota settings are available through the OpenStack Dashboard in the "Edit Project" page.
5. Configuration: nova.conf
File format for nova.conf
Overview
The Compute service supports a large number of configuration options. These options are specified in a configuration file whose default location is /etc/nova/nova.conf. The configuration file is in INI format, with options specified as key=value pairs, grouped into sections. Almost all of the configuration options are in the DEFAULT section. Here's a brief example:
[DEFAULT]
debug=true
verbose=true

[trusted_computing]
server=10.3.4.2
StrOpt
String option. The value is an arbitrary string.
IntOpt
Integer option. The value must be an integer.
MultiStrOpt
String option. Same as StrOpt, except that it can be declared multiple times to indicate multiple values. Example:
ldap_dns_servers=dns1.example.org
ldap_dns_servers=dns2.example.org
ListOpt
List option. The value is a list of strings separated by commas.
FloatOpt
Floating-point option. The value must be a floating-point number.
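For example, a ListOpt takes a comma-separated list and a FloatOpt takes a floating-point number; the two options below are only illustrative values:
enabled_apis=ec2,osapi_compute,metadata
ram_allocation_ratio=1.5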
Important
Nova options should not be quoted.
Sections
Configuration options are grouped by section. The Compute config file supports the following sections.
[DEFAULT]
Almost all of the configuration options are organized into this section. If the documentation for a configuration option does not specify its section, assume that it should be placed in this one.
[conductor]
This section is used for options for configuring the nova-conductor service.
[trusted_computing]
This section is used for options that relate to the trusted computing pools functionality. Options in this section describe how to connect to a remote attestation service.
Variable substitution
The configuration file supports variable substitution. Once a configuration option is set, it can be referenced in later configuration values when preceded by $. Consider the following example where my_ip is defined and then $my_ip is used as a variable.
my_ip=10.2.3.4
glance_host=$my_ip
metadata_host=$my_ip
If you need a value to contain the $ symbol, escape it by doing $$. For example, if your LDAP DNS password was $xkj432, you would do:
ldap_dns_password=$$xkj432
The Compute code uses Python's string.Template.safe_substitute() method to implement variable substitution. For more details on how variable substitution is resolved, see Python documentation on template strings and PEP 292.
Whitespace
To include whitespace in a configuration value, use a quoted string. For example:
ldap_dns_password='a password with spaces'
Table 5.1. Description of common nova.conf configuration options for the Compute API, RabbitMQ, EC2 API, S3 API, instance types
Configuration option=Default value allow_resize_to_same_host=false (Type) Description (BoolOpt) Allow destination machine to match source for resize. Useful when testing in single-host environments. If you have separate configuration files for separate services, this flag is required on both nova-api and nova-compute. (StrOpt) File name for the paste.deploy config for nova-api (BoolOpt) whether to rate limit the Compute API (StrOpt) URL for the Zone's Auth API (StrOpt) To be written, found in /nova/ scheduler/filters/trusted_filter.py, related to FLAGS.trusted_computing.auth_blob. (StrOpt) AWS Access ID (StrOpt) AWS Access Key (IntOpt) Port for eventlet backdoor to listen (IntOpt) Interval to pull bandwidth usage info (StrOpt) Directory where nova binaries are installed (BoolOpt) Cache glance images locally (StrOpt) full class name for the Manager for cert (StrOpt) the topic cert nodes listen on (IntOpt) Found in /nova/compute/resource_tracker.py (StrOpt) The full class name of the Compute API class to use (StrOpt) the topic compute nodes listen on (MultiStrOpt) Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. The default files used are: [] String value; Driver to use for controlling virtualization. For convenience if the driver exists under the nove.virt namespace, nova.virt can be removed. There are 5 drivers in core openstack: fake.FakeDriver, libvirt.LibvirtDriver, baremetal.BareMetalDriver, xenapi.XenAPIDriver, vmwareapi.VMWareESXDriver. If nothing is specified the older connection_type mechanism will be used. Be aware that method will be removed after the Folsom release. libvirt, xenapi, hyperv, or fake; Value that indicates the virtualization connection type. Deprecated as of Folsom, will be removed in G release. (StrOpt) the topic console proxy nodes listen on (StrOpt) the main RabbitMQ exchange to connect to (BoolOpt) Print debugging output (StrOpt) Name of network to use to set access ips for instances (StrOpt) The default format a ephemeral_volume will be formatted with on creation. (StrOpt) default image to use, testing only (StrOpt) default instance type to use, testing only (StrOpt) the default project to use for OpenStack
aws_access_key_id=admin aws_secret_access_key=admin backdoor_port=<None> bandwith_poll_interval=600 bindir=$pybasedir/bin cache_images=true cert_manager=nova.cert.manager.CertManager cert_topic=cert claim_timeout_seconds=600 compute_api_class=nova.compute.api.API
compute_manager=nova.compute.manager.ComputeManager (StrOpt) full class name for the Manager for compute compute_topic=compute config_file=/etc/nova/nova.conf
compute_driver='nova.virt.connection.get_connection'
connection_type='libvirt' (Deprecated)
console_manager=nova.console.manager.ConsoleProxyManager (StrOpt) full class name for the Manager for console proxy console_topic=console control_exchange=nova debug=false default_access_ip_network_name=<None> default_ephemeral_format=<None> default_image=ami-11111 default_instance_type=m1.small default_project=openstack
(Type) Description (StrOpt) availability zone to use when user doesn't specify one (StrOpt) (BoolOpt) Whether to disable inter-process locks (StrOpt) the internal IP address of the EC2 API server (StrOpt) the IP of the ec2 api server (StrOpt) the path prefix used to call the EC2 API server (IntOpt) the port of the EC2 API server (StrOpt) the protocol to use when connecting to the EC2 API server (http, https) (BoolOpt) Enables strict validation for EC2 API server requests (StrOpt) To be written; Found in /nova/service.py (BoolOpt) When true, Compute creates a random password for the instance at create time. Users can get the password from the return value of API call for the instance creation (or through their Dashboard if the Dashboard returns the password visibly). Note that the password isn't stored anywhere, it is returned only once. (BoolOpt) If passed, use fake network devices and addresses (BoolOpt) If passed, use a fake RabbitMQ provider (BoolOpt) To be written; Found in /nova/common/ deprecated.py
firewall_driver=nova.virt.firewall.libvirt.IptablesFirewallDriver (StrOpt) Firewall driver (defaults to iptables) floating_ip_dns_manager=nova.network.dns_driver.DNSDriver (StrOpt) full class name for the DNS Manager for floating IPs glance_api_insecure=false glance_api_servers=$glance_host:$glance_port glance_host=$my_ip glance_num_retries=0 glance_port=9292 host=MGG2WEDRJM (BoolOpt) Allow to perform insecure SSL (https) requests to glance (ListOpt) A list of the glance API servers available to nova ([hostname|ip]:port) (StrOpt) default glance hostname or IP (IntOpt) Number retries when downloading an image from glance (IntOpt) default glance port (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. (StrOpt) Used for image caching; found in /nova/virt/ libvirt/utils.py (StrOpt) The service to use for retrieving and searching images. (StrOpt) To be written; found in /nova/compute/ manager.py (StrOpt) full class name for the DNS Zone for instance IPs
instance_dns_manager=nova.network.dns_driver.DNSDriver(StrOpt) full class name for the DNS Manager for instance IPs instance_usage_audit_period=month (StrOpt) time period to generate instance usages for. Time period must be hour, day, month or year
(Type) Description (StrOpt) To be written; found in /nova/openstack/ common/log.py (ListOpt) Host reserved for specific images (ListOpt) Images to run on isolated host (StrOpt) Directory to use for lock files (StrOpt) If this option is specified, the logging configuration file specified is used and overrides any other logging options specified. Please see the Python logging module documentation for details on logging configuration files. (StrOpt) Format string for %(asctime)s in log records. Default: %default (StrOpt) (Optional) The directory to keep log files in (will be prepended to --logfile) (StrOpt) (Optional) Name of log file to output to. If not set, logging will go to stdout. (StrOpt) A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. Default: %default (StrOpt) Log output to a per-service log file in named directory (StrOpt) Log output to a named file (StrOpt) Default file mode used when creating log files (ListOpt) Memcached servers or None for in process cache. (StrOpt) the IP address for the metadata API server (IntOpt) the port for the metadata API port (BoolOpt) Whether to log monkey patching
log_date_format=%Y-%m-%d %H:%M:%S log_dir=<None> log_file=<None> log_format= "%(asctime)s %(levelname)8s [%(name)s] %(message)s" logdir=<None> logfile=<None> logfile_mode=0644 memcached_servers=<None> metadata_host=$my_ip metadata_port=8775 monkey_patch=false
monkey_patch_modules=nova.api.ec2.cloud:nova.notifier.api.notify_decorator, (ListOpt) List of modules/decorators to monkey patch nova.compute.api:nova.notifier.api.notify_decorator my_ip=192.168.1.82 (StrOpt) IP address of this host; change my_ip to match each host when copying nova.conf files to multiple hosts. (StrOpt) The full class name of the network API class to use (StrOpt) Driver to use for network creation (StrOpt) Full class name for the Manager for network (StrOpt) The topic network nodes listen on (StrOpt) Availability zone of this node (ListOpt) These are image properties which a snapshot should not inherit from an instance (StrOpt) Default driver for sending notifications (StrOpt) kernel image that indicates not to use a kernel, but to use a raw disk image instead (ListOpt) Specify list of extensions to load when using osapi_compute_extension option with nova.api.openstack.compute.contrib.select_extensions (StrOpt) Base URL that will be presented to users in links to the OpenStack Compute API
network_api_class=nova.network.api.API network_driver=nova.network.linux_net network_manager=nova.network.manager.VlanManager network_topic=network node_availability_zone=nova non_inheritable_image_properties=['cache_in_nova', 'instance_uuid', 'user_id', 'image_type', 'backup_type', 'min_ram', 'min_disk'] notification_driver=nova.notifier.no_op_notifier null_kernel=nokernel osapi_compute_ext_list=
(Type) Description (StrOpt) Base URL that will be presented to users in links to glance resources (IntOpt) the maximum number of items returned in a single response from a collection resource (StrOpt) the path prefix used to call the OpenStack Compute API server (StrOpt) the protocol to use when connecting to the OpenStack Compute API server (http, https) (ListOpt) Specify list of extensions to load when using osapi_volume_extension option with nova.api.openstack.volume.contrib.select_extensions (IntOpt) Length of generated instance admin passwords (StrOpt) Directory where the nova python module is installed (BoolOpt) use durable queues in RabbitMQ (StrOpt) the RabbitMQ host (IntOpt) maximum retries with trying to connect to RabbitMQ (the default of 0 implies an infinite retry count) (StrOpt) the RabbitMQ password (IntOpt) the RabbitMQ port (IntOpt) how long to backoff for between retries when connecting to RabbitMQ (IntOpt) how frequently to retry connecting with RabbitMQ (BoolOpt) connect over SSL for RabbitMQ (StrOpt) the RabbitMQ userid (StrOpt) the RabbitMQ virtual host (IntOpt) Interval in seconds for reclaiming deleted instances (ListOpt) list of region=fqdn pairs separated by commas (BoolOpt) Whether to start guests that were running before the host rebooted. If enabled, this option causes guests assigned to the host to be restarted when novacompute starts, if they had been active on the host while nova-compute last ran. If such a guest is already found to be running, it is left untouched. (StrOpt) Command prefix to use for running commands as root. Note that the configuration file (and executable) used here must match the one defined in the sudoers entry from packagers, otherwise the commands are rejected. (StrOpt) hostname or IP for the instances to use when accessing the S3 API (StrOpt) hostname or IP for OpenStack to use when accessing the S3 API (IntOpt) port used when accessing the S3 API (StrOpt) the topic scheduler nodes listen on
osapi_volume_extension=nova.api.openstack.volume.contrib.standard_extensions (MultiStrOpt) osapi volume extension to load password_length=12 pybasedir=/usr/lib/python/site-packages rabbit_durable_queues=false rabbit_host=localhost rabbit_max_retries=0 rabbit_password=guest rabbit_port=5672 rabbit_retry_backoff=2 rabbit_retry_interval=1 rabbit_use_ssl=false rabbit_userid=guest rabbit_virtual_host=/ reclaim_instance_interval=0 region_list= resume_guests_state_on_host_boot=false
scheduler_manager=nova.scheduler.manager.SchedulerManager (StrOpt) full class name for the Manager for scheduler security_group_handler=nova.network.quantum.sg.NullSecurityGroupHandler (StrOpt) The full class name of the security group handler class service_down_time=60 (IntOpt) maximum time since last check-in for up service
(Type) Description (BoolOpt) Whether to (re-)start guests when the host reboots. If enabled, this option causes guests assigned to the host to be unconditionally restarted when novacompute starts. If the guest is found to be stopped, it starts. If it is found to be running, it reboots. (StrOpt) Top-level directory for maintaining nova's state (StrOpt) Stub network related code (StrOpt) syslog facility to receive log lines (BoolOpt) Whether to use cow images (BoolOpt) Log output to standard error (BoolOpt) Use syslog for logging. (BoolOpt) Print more verbose output (StrOpt) The full class name of the volume API class to use (StrOpt) the topic volume nodes listen on (StrOpt) image id used when starting up a cloudpipe VPN server (StrOpt) Suffix to add to project name for vpn key and secgroups (IntOpt) Number of seconds zombie instances are cleaned up.
state_path=$pybasedir stub_network=False syslog-log-facility=LOG_USER use_cow_images=true use_stderr=true use-syslog=false verbose=false volume_api_class=nova.volume.api.API volume_topic=volume vpn_image_id=0 vpn_key_suffix=-vpn zombie_instance_updated_at_window=172800
volume_manager=nova.volume.manager.VolumeManager (StrOpt) full class name for the Manager for volume
log_date_format=%Y-%m-%d %H:%M:%S
log_dir=<None>
log_file=<None>
log_format="%(asctime)s %(levelname)8s [%(name)s] %(message)s"
logdir=<None>
logfile=<None>
logfile_mode=0644
logging_context_format_string="%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s%(message)s"
logging_debug_format_suffix="from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d"
logging_default_format_string="%(asctime)s %(levelname)s %(name)s [-] %(instance)s%(message)s"
logging_exception_prefix="%(asctime)s TRACE %(name)s %(instance)s"
publish_errors=false
use_syslog=false
syslog_log_facility=LOG_USER
metadata_listen=0.0.0.0
(Type) Description (IntOpt) port for metadata api to listen (StrOpt) IP address for OpenStack API to listen (IntOpt) list port for osapi compute (StrOpt) IP address for OpenStack Volume API to listen (IntOpt) port for os volume api to listen (IntOpt) range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) (IntOpt) seconds between running periodic tasks (IntOpt) seconds between nodes reporting state to datastore (StrOpt) The messaging module to use, defaults to kombu. (StrOpt) Template string to be used to generate snapshot names (StrOpt) Template string to be used to generate instance names
project_cert_subject="/C=US/ST=California/O=OpenStack/ (StrOpt) Subject for certificate for projects, %s for project, OU=NovaDev/CN=project-ca-%.16s-%s" timestamp use_project_ca=false user_cert_subject="/C=US/ST=California/O=OpenStack/ OU=NovaDev/CN=%.16s-%.16s-%s" (BoolOpt) Whether to use a CA for each project (tenant) (StrOpt) Subject for certificate for users, %s for project, user, timestamp
(Type) Description (IntOpt) number of floating ips allowed per project (tenant) (IntOpt) number of volume gigabytes allowed per project (tenant) (IntOpt) number of bytes allowed per injected file (IntOpt) number of bytes allowed per injected file path (IntOpt) number of injected files allowed (IntOpt) number of instances allowed per project (tenant) (IntOpt) number of key pairs allowed per user (IntOpt) number of metadata items allowed per instance (IntOpt) megabytes of instance ram allowed per project (tenant) (IntOpt) number of security rules per security group (IntOpt) number of security groups per project (tenant) (IntOpt) number of volumes allowed per project (tenant) (IntOpt) number of seconds until a reservation expires (IntOpt) count of reservations until usage is refreshed
ldap_cloudadmin=cn=cloudadmins,ou=Groups,dc=example,dc=com (StrOpt) cn for Cloud Admins ldap_developer=cn=developers,ou=Groups,dc=example,dc=com (StrOpt) cn for Developers ldap_itsec=cn=itsec,ou=Groups,dc=example,dc=com ldap_password=changeme (StrOpt) cn for ItSec (StrOpt) LDAP password ldap_netadmin=cn=netadmins,ou=Groups,dc=example,dc=com (StrOpt) cn for NetAdmins
(Type) Description (StrOpt) OU for Projects (IntOpt) Current version of the LDAP schema (StrOpt) Point this at your ldap server (StrOpt) DN of admin user (StrOpt) Attribute to use as id (BoolOpt) Modify user attributes instead of creating/ deleting (StrOpt) Attribute to use as name (StrOpt) OU for Users (StrOpt) OID for Users (StrOpt) OU for Roles (StrOpt) Driver that auth manager uses (StrOpt) Filename of certificate in credentials zip (StrOpt) Filename of private key in credentials zip (StrOpt) Filename of rc in credentials zip %s will be replaced by name of the region (nova by default) (StrOpt) Filename of certificate in credentials zip (StrOpt) Template for creating users rc file (ListOpt) Roles that apply to all projects (ListOpt) Roles that ignore authorization checking completely (StrOpt) Template for creating users VPN file
ldap_project_subtree=ou=Groups,dc=example,dc=com ldap_schema_version=2 ldap_url=ldap://localhost ldap_user_dn=cn=Manager,dc=example,dc=com ldap_user_id_attribute=uid ldap_user_modify_only=false ldap_user_name_attribute=cn ldap_user_subtree=ou=Users,dc=example,dc=com ldap_user_unit=Users role_project_subtree=ou=Groups,dc=example,dc=com auth_driver=nova.auth.dbdriver.DbDriver credential_cert_file=cert.pem credential_key_file=pk.pem credential_rc_file=%src credential_vpn_file=nova-vpn.conf credentials_template=$pybasedir/nova/auth/ novarc.template global_roles=cloudadmin,itsec superuser_roles=cloudadmin vpn_client_template=$pybasedir/nova/cloudpipe/ client.ovpn.template
(Type) Description (StrOpt) DN of Users (StrOpt) DN of Users (StrOpt) Attribute to use as id (BoolOpt) Modify user attributes instead of creating/ deleting (StrOpt) Attribute to use as name (StrOpt) OU for Users (StrOpt) OID for Users (StrOpt) OU for Tenants (StrOpt) LDAP ObjectClass to use for Tenants (strOpt) Attribute to use as Tenant (strOpt) Attribute to use as Member (strOpt) OU for Roles (strOpt) LDAP ObjectClass to use for Roles (StrOpt) OU for Roles (StrOpt) Attribute to use as Role member (StrOpt) Attribute to use as Role
ldap_user_dn= "cn=Manager,dc=example,dc=com" ldap_user_objectClass= inetOrgPerson ldap_user_id_attribute= cn ldap_user_modify_only=false ldap_user_name_attribute= cn ldap_user_subtree= "ou=Users,dc=example,dc=com" ldap_user_unit= "Users" ldap_tenant_tree_dn="ou=Groups,dc=example,dc=com" ldap_tenant_objectclass= groupOfNames ldap_tenant_id_attribute= cn ldap_tenant_member_attribute= member ldap_role_tree_dn= "ou=Roles,dc=example,dc=com" ldap_role_objectclass= organizationalRole ldap_role_project_subtree= "ou=Groups,dc=example,dc=com" ldap_role_member_attribute= roleOccupant ldap_role_id_attribute= cn
(Type) Description (IntOpt) Number of minutes to lockout if triggered. (IntOpt) Number of minutes for lockout window.
Table 5.14. Description of nova.conf file configuration options for VNC access to guest instances
Configuration option=Default value    (Type) Description
novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html    (StrOpt) location of VNC console proxy, in the form "http://127.0.0.1:6080/vnc_auto.html"
vnc_enabled=true    (BoolOpt) enable VNC related features
vnc_keymap=en-us    (StrOpt) keymap for vnc
vncserver_listen=127.0.0.1    (StrOpt) IP address on which instance VNC servers should listen
vncserver_proxyclient_address=127.0.0.1    (StrOpt) the address to which proxy clients (like nova-xvpvncproxy) should connect
xvpvncproxy_base_url=http://127.0.0.1:6081/console    (StrOpt) location of nova XCP VNC console proxy, in the form "http://127.0.0.1:6081/console"
xvpvncproxy_host=0.0.0.0    (StrOpt) Address that the XCP VNC proxy should bind to
xvpvncproxy_port=6081    (IntOpt) Port that the XCP VNC proxy should bind to
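As an example, a compute node behind a noVNC proxy is typically configured along these lines; the address and proxy URL are placeholders for your own deployment:
vnc_enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=COMPUTE-HOST-IP
novncproxy_base_url=http://PROXY-HOST:6080/vnc_auto.html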
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver (StrOpt) Driver used to create ethernet devices. linuxnet_ovs_integration_bridge=br-int network_device_mtu=<None> networks_path=$state_path/networks public_interface=eth0 routing_source_ip=$my_ip send_arp_for_ha=false use_single_default_gateway=false auto_assign_floating_ip=false cnt_vpn_clients=0 create_unique_mac_address_attempts=5 default_floating_pool=nova
(Type) Description (StrOpt) domain to use for building the hostnames (BoolOpt) If True, skip using the queue and make local calls (IntOpt) Seconds after which a deallocated IP is disassociated (StrOpt) Fixed IP address block (BoolOpt) Whether to attempt to inject network setup into guest (StrOpt) FlatDhcp will bridge into this interface if set (StrOpt) Bridge for simple network instances (StrOpt) Dns for simple network (StrOpt) Floating IP address block (BoolOpt) If True, send a dhcp release on instance termination (StrOpt) Default IPv4 gateway (StrOpt) Indicates underlying L3 management library (BoolOpt) Default value for multi_host in networks (StrOpt) Network host to use for IP allocation in flat modes (IntOpt) Number of addresses in each private subnet (IntOpt) Number of networks to support (StrOpt) VLANs will bridge into this interface if set (IntOpt) First VLAN for private networks (StrOpt) Public IP for the cloudpipe VPN servers (IntOpt) First VPN port for private networks (StrOpt) Template for cloudpipe instance boot script (StrOpt) Netmask to push into openvpn config (StrOpt) Network to push into openvpn config (StrOpt) Instance type for vpn instances (StrOpt) Defaults to nova-network. Must be modified to nova.network.quantumv2.api.API indicate that Quantum should be used rather than the traditional nova-network networking model. (IntOpt) URL for connecting to the Quantum networking service. Indicates the hostname/IP and port of the Quantum server for your deployment. (StrOpt) Should be kept as default 'keystone' for all production deployments. (StrOpt) Tenant name for connecting to Quantum network services in admin context through the Keystone Identity service. (StrOpt) Username for connecting to Quantum network services in admin context through the Keystone Identity service. (StrOpt) Password for connecting to Quantum network services in admin context through the Keystone Identity service.
quantum_url=http://127.0.0.1:9696
quantum_auth_strategy=keystone quantum_admin_tenant_name=<None>
quantum_admin_username=<None>
quantum_admin_password=<None>
(Type) Description (StrOpt) Points to the keystone Identity server IP and port. This is the Identity (keystone) admin API server IP and port value, and not the Identity service API IP and port.
running_deleted_instance_poll_interval=30 running_deleted_instance_timeout=0
(Type) Description (StrOpt) Tilera command line program for Bare-metal driver (StrOpt) baremetal domain type (BoolOpt) Force backing images to raw format (ListOpt) Order of methods used to mount disk images (StrOpt) Template file for injected network (IntOpt) maximum number of possible nbd devices (IntOpt) time to wait for a NBD device coming up (MultiStrOpt) mkfs commands for ephemeral device. The format is <os_type>=<mkfs command>
tile_monitor=/usr/local/TileraMDE/bin/tile-monitor baremetal_type=baremetal force_raw_images=true img_handlers=loop,nbd,guestfs injected_network_template=$pybasedir/nova/virt/ interfaces.template max_nbd_devices=16 timeout_nbd=10 virt_mkfs=default=mkfs.ext3 -L %(fs_label)s -F %(target)s virt_mkfs=linux=mkfs.ext3 -L %(fs_label)s -F %(target)s virt_mkfs=windows=mkfs.ntfs --force --fast --label %(fs_label)s %(target)s
libvirt_cpu_model=<None>
libvirt_disk_prefix=<None>
libvirt_inject_key=true libvirt_images_type=default
libvirt_images_volume_group=None
(Type) Description (BoolOpt) Inject the admin password at boot time, without an agent. (BoolOpt) Use a separated OS thread pool to realize nonblocking libvirt calls (StrOpt) Location where libvirt driver will store snapshots before uploading them to image service (BoolOpt) Create sparse (not fully allocated) LVM volumes for instance ephemerals if you use LVM backend for them. (StrOpt) Libvirt domain type (valid options are: kvm, lxc, qemu, uml, xen) (StrOpt) Override the default libvirt URI (which is dependent on libvirt_type) (StrOpt) The libvirt VIF driver to configure the VIFs.
libvirt_volume_drivers="iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver, (ListOpt) Libvirt handlers for remote volumes. local=nova.virt.libvirt.volume.LibvirtVolumeDriver, fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver, rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver" libvirt_wait_soft_reboot_seconds=120 (IntOpt) Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window. (BoolOpt) Used by Hyper-V (BoolOpt) Indicates whether unused base images should be removed (IntOpt) Unused unresized base images younger than this will not be removed (IntOpt) Unused resized base images younger than this will not be removed (StrOpt) Rescue ami image (StrOpt) Rescue aki image (StrOpt) Rescue ari image (StrOpt) Snapshot image format (valid options are : raw, qcow2, vmdk, vdi). Defaults to same as source image (BoolOpt) Sync virtual and real mouse cursors in Windows VMs (StrOpt) Name of Integration Bridge used by Open vSwitch (BoolOpt) Use virtio for bridge interfaces (StrOpt) VIM Service WSDL Location e.g http://<server>/ vimService.wsdl, due to a bug in vSphere ESX 4.1 default wsdl. (FloatOpt) The number of times we retry on failures, e.g., socket error, etc. Used only if compute_driver is vmwareapi.VMWareESXDriver. (StrOpt) URL for connection to VMWare ESX host.Required if compute_driver is vmwareapi.VMWareESXDriver.
limit_cpu_features=false remove_unused_base_images=true remove_unused_original_minimum_age_seconds=86400 remove_unused_resized_minimum_age_seconds=3600 rescue_image_id=<None> rescue_kernel_id=<None> rescue_ramdisk_id=<None> snapshot_image_format=<None> use_usb_tablet=true libvirt integration libvirt_ovs_bridge=br-int libvirt_use_virtio_for_bridges=false VMWare integration vmwareapi_wsdl_loc=<None>
vmware_vif_driver=nova.virt.vmwareapi.vif.VMWareVlanBridgeDriver (StrOpt) The VMWare VIF driver to configure the VIFs. vmwareapi_api_retry_count=10
vmwareapi_host_ip=<None>
(Type) Description (StrOpt) Password for connection to VMWare ESX host. Used only if compute_driver is vmwareapi.VMWareESXDriver. (StrOpt) Username for connection to VMWare ESX host. Used only if compute_driver is vmwareapi.VMWareESXDriver. (FloatOpt) The interval used for polling of remote tasks. Used only if compute_driver is vmwareapi.VMWareESXDriver, (StrOpt) Physical ethernet adapter name for vlan networking (StrOpt) PowerVM system manager type (ivm, hmc) (StrOpt) PowerVM manager host or ip (StrOpt) PowerVM VIOS host or ip if different from manager (StrOpt) PowerVM manager user name (StrOpt) PowerVM manager user password (StrOpt) PowerVM image remote path. Used to copy and store images from Glance on the PowerVM VIOS LPAR. (StrOpt) Local directory on the compute host to download glance images to.
vmwareapi_host_username=<None>
vmwareapi_task_poll_interval=5.0
Table 5.20. Description of nova.conf file configuration options for console access to VMs on VMWare VMRC or XenAPI
Configuration option=Default value console_driver=nova.console.xvp.XVPConsoleProxy console_public_hostname=MGG2WEDRJM stub_compute=false console_vmrc_error_retries=10 console_vmrc_port=443 console_xvp_conf=/etc/xvp.conf console_xvp_conf_template=$pybasedir/nova/console/ xvp.conf.template console_xvp_log=/var/log/xvp.log console_xvp_multiplex_port=5900 console_xvp_pid=/var/run/xvp.pid xenapi_agent_path=usr/sbin/xe-update-networking (Type) Description (StrOpt) Driver to use for the console proxy (StrOpt) Publicly visible name for this console host (BoolOpt) Stub calls to compute worker for tests (IntOpt) number of retries for retrieving VMRC information (IntOpt) port for VMware VMRC connections (StrOpt) generated XVP conf file (StrOpt) XVP conf template (StrOpt) XVP log file (IntOpt) port for XVP to multiplex VNC connections on (StrOpt) XVP master process pid file (StrOpt) Specifies the path in which the xenapi guest agent should be located. If the agent is present, network configuration is not injected into the image. Used if compute_driver=xenapi.XenAPIDriver and flat_injected=True. (IntOpt) Maximum number of concurrent XenAPI connections. Used only if compute_driver=xenapi.XenAPIDriver. (StrOpt) URL for connection to XenServer/ Xen Cloud Platform. Required if compute_driver=xenapi.XenAPIDriver. (StrOpt) Password for connection to XenServer/Xen Cloud Platform. Used only if compute_driver=xenapi.XenAPIDriver.
xenapi_connection_concurrent=5
xenapi_connection_url=<None>
xenapi_connection_username=root
(Type) Description (StrOpt) Username for connection to XenServer/Xen Cloud Platform. Used only if compute_driver=xenapi.XenAPIDriver. (BoolOpt) Ensure compute service is running on host XenAPI connects to. (BoolOpt) Timeout in seconds for XenAPI login. (BoolOpt) Used to enable the remapping of VBD dev. (Works around an issue in Ubuntu Maverick). (StrOpt) Specify prefix to remap VBD dev to (ex. /dev/xvdb -> /dev/sdb). Used when xenapi_remap_vbd_dev=true. (StrOpt) Base path to the storage repository. (FloatOpt) The interval used for polling of coalescing vhds. Used only if compute_driver=xenapi.XenAPIDriver. (IntOpt) Max number of times to poll for VHD to coalesce. Used only if compute_driver=xenapi.XenAPIDriver.
Table 5.22. Description of nova.conf file configuration options for schedulers that use algorithms to assign VM launch on particular compute hosts
Configuration option=Default value scheduler_max_attempts=3 (Type) Description (IntOpt) Maximum number of attempts to schedule an instance before giving up and settting the instance to error (FloatOpt) Virtual CPU to Physical CPU allocation ratio (FloatOpt) virtual ram to physical ram allocation ratio (IntOpt) Amount of disk in MB to reserve for host/dom0 (IntOpt) Amount of memory in MB to reserve for host/ dom0 scheduler_host_manager=nova.scheduler.host_manager.HostManager (StrOpt) The scheduler host manager class to use
scheduler_available_filters=nova.scheduler.filters.all_filters (MultiStrOpt) Filter classes available to the scheduler which may be specified more than once. An entry of "nova.scheduler.filters.all_filters" maps to all filters included with nova. scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter (ListOpt) Which filter class names to use for filtering hosts when not specified in the request. compute_fill_first_cost_fn_weight=-1.0 (FloatOpt) How much weight to give the fill-first cost function. A negative value will reverse behavior: e.g. spread-first (FloatOpt) How much weight to give the retry host cost function. A negative value will reverse behavior: e.g. use multiple-times-retried hosts first
retry_host_cost_fn_weight=1.0
(Type) Description
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn (ListOpt) Which cost functions the LeastCostScheduler should use noop_cost_fn_weight=1.0 scheduler_driver=nova.scheduler.multi.MultiScheduler (FloatOpt) How much weight to give the noop cost function (StrOpt) Default driver to use for the scheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler (StrOpt) Driver to use for scheduling compute calls volume_scheduler_driver=nova.scheduler.chance.ChanceScheduler (StrOpt) Driver to use for scheduling volume calls scheduler_json_config_location= max_cores=16 max_gigabytes=10000 max_networks=1000 skip_isolated_core_check=true (StrOpt) Absolute path to scheduler configuration JSON file. (IntOpt) maximum number of instance cores to allow per host (IntOpt) maximum number of volume gigabytes to allow per host (IntOpt) maximum number of networks to allow per host (BoolOpt) Allow overcommitting vcpus on isolated hosts
(Type) Description (StrOpt) Hostname for the DFM server (IntOpt) Port number for the DFM server (StrOpt) Storage service to use for provisioning (StrOpt) Vfiler to use for provisioning (StrOpt) URL of the WSDL file for the DFM server (StrOpt) block size for volumes (blank=default,8KB) (StrOpt) IP address of Nexenta SA (IntOpt) Nexenta target portal port (StrOpt) Password to connect to Nexenta SA (IntOpt) HTTP port to connect to Nexenta REST API server (StrOpt) Use http or https for REST connection (default auto) (BoolOpt) flag to create sparse volumes (StrOpt) prefix for iSCSI target groups on SA (StrOpt) IQN prefix for iSCSI targets (StrOpt) User name to connect to Nexenta SA (StrOpt) pool on SA that will hold all volumes (StrOpt) Cluster name to use for creating volumes (StrOpt) IP address of SAN controller (BoolOpt) Execute commands locally instead of over SSH; use if the volume service is running on the SAN device (StrOpt) Username for SAN controller (StrOpt) Password for SAN controller (StrOpt) Filename of private key to use for SSH authentication (IntOpt) SSH port to use with SAN (BoolOpt) Use thin provisioning for SAN volumes? (StrOpt) The ZFS path under which to create zvols for volumes.
nexenta_target_prefix=iqn.1986-03.com.sun:02:novanexenta_user=admin nexenta_volume=nova san_clustername= san_ip= san_is_local=false san_login=admin san_password= san_private_key= san_ssh_port=22 san_thin_provision=true san_zfs_volume_base=rpool/
6. Identity Management
The default identity management system for OpenStack is the OpenStack Identity Service, code-named Keystone. Once Identity is installed, it is configured via a primary configuration file (etc/keystone.conf), possibly a separate logging configuration file, and by initializing data into Keystone using the command-line client.
Basic Concepts
The Identity service has two primary functions:
1. User management: keep track of users and what they are permitted to do
2. Service catalog: provide a catalog of what services are available and where their API endpoints are located
The Identity Service has several definitions which are important to understand.
User
A digital representation of a person, system, or service who uses OpenStack cloud services. The Identity service validates that incoming requests are being made by the user who claims to be making the call. Users have a login and may be assigned tokens to access resources. Users may be directly assigned to a particular tenant and behave as if they are contained in that tenant.
Credentials
Data that belongs to, is owned by, and generally only known by a user, which the user can present to prove they are who they are (since nobody else should know that data). Examples are: a matching username and password; a matching username and API key; yourself and a driver's license with a picture of you; a token that was issued to you that nobody else knows of.
Authentication
In the context of the Identity service, authentication is the act of confirming the identity of a user or the truth of a claim. The Identity service confirms that incoming requests are being made by the user who claims to be making the call by validating a set of claims that the user is making. These claims are initially in the form of a set of credentials (username and password, or username and API key). After initial confirmation, the Identity service issues the user a token, which the user can then provide to demonstrate that their identity has been authenticated when making subsequent requests.
Token
An arbitrary bit of text that is used to access resources. Each token has a scope which describes which resources are accessible with it. A token may be revoked at any time and is valid for a finite duration. While the Identity service supports token-based authentication in this release, the intention is for it to support additional protocols in the future. The intent is for it to be an integration service foremost, and not aspire to be a full-fledged identity store and management solution.
Tenant
A container used to group or isolate resources and/or identity objects. Depending on the service operator, a tenant may map to a customer, account, organization, or project.
Service
An OpenStack service, such as Compute (Nova), Object Storage (Swift), or Image Service (Glance). A service provides one or more endpoints through which users can access resources and perform (presumably useful) operations.
Endpoint
A network-accessible address, usually described by a URL, where a service may be accessed. If using an extension for templates, you can create an endpoint template, which represents the templates of all the consumable services that are available across the regions.
Role
A personality that a user assumes when performing a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges. In the Identity service, a token that is issued to a user includes the list of roles that user can assume. Services that are being called by that user determine how they interpret the set of roles a user has and which operations or resources each role grants access to.
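For example, the token issuance described above can be exercised directly against the Identity API. The endpoint, user, and tenant below are illustrative (they match the examples in the next section) and assume the v2.0 API:
$ curl -s -X POST http://127.0.0.1:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"passwordCredentials": {"username": "alice", "password": "mypassword123"}, "tenantName": "acme"}}'
The response contains a token id that can then be passed to other services in the X-Auth-Token header.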
User management
The three main concepts of Identity user management are:
Users
Tenants
Roles
A user represents a human user, and has associated information such as username, password and email. This example creates a user named "alice":
$ keystone user-create --name=alice --pass=mypassword123 [email protected]
A tenant can be thought of as a project, group, or organization. Whenever you make requests to OpenStack services, you must specify a tenant. For example, if you query the Compute service for a list of running instances, you will receive a list of all of the running instances in the tenant you specified in your query. This example creates a tenant named "acme":
$ keystone tenant-create --name=acme
Note
Because the term project was used instead of tenant in earlier versions of OpenStack Compute, some command-line tools use --project_id instead of --tenant-id or --os-tenant-id to refer to a tenant ID.
A role captures what operations a user is permitted to perform in a given tenant. This example creates a role named "compute-user":
$ keystone role-create --name=compute-user
Note
It is up to individual services such as the Compute service and Image service to assign meaning to these roles. As far as the Identity service is concerned, a role is simply a name. The Identity service associates a user with a tenant and a role. To continue with our previous examples, we may wish to assign the "alice" user the "compute-user" role in the "acme" tenant:
$ keystone user-list
+--------+---------+-------------------+--------+
|   id   | enabled |       email       |  name  |
+--------+---------+-------------------+--------+
| 892585 | True    | [email protected] | alice  |
+--------+---------+-------------------+--------+
$ keystone role-list
+--------+--------------+
|   id   |     name     |
+--------+--------------+
| 9a764e | compute-user |
+--------+--------------+
$ keystone tenant-list
+--------+------+---------+
|   id   | name | enabled |
+--------+------+---------+
| 6b8fd2 | acme | True    |
+--------+------+---------+
$ keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2
A user can be assigned different roles in different tenants: for example, Alice may also have the "admin" role in the "Cyberdyne" tenant. A user can also be assigned multiple roles in the same tenant.
The /etc/[SERVICE_CODENAME]/policy.json file controls what users are allowed to do for a given service. For example, /etc/nova/policy.json specifies the access policy for the Compute service, /etc/glance/policy.json specifies the access policy for the
Image service, and /etc/keystone/policy.json specifies the access policy for the Identity service. The default policy.json files in the Compute, Identity, and Image service recognize only the admin role: all operations that do not require the admin role will be accessible by any user that has any role in a tenant. If you wish to restrict users from performing operations in, say, the Compute service, you need to create a role in the Identity service and then modify /etc/nova/policy.json so that this role is required for Compute operations. For example, this line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes: if the user has any role in a tenant, they will be able to create volumes in that tenant.
"volume:create": [],
If we wished to restrict creation of volumes to users who had the compute-user role in a particular tenant, we would add "role:compute-user", like so:
"volume:create": ["role:compute-user"],
If we wished to restrict all Compute service requests to require this role, the resulting file would look like:
{ "admin_or_owner": [["role:admin"], ["project_id: %(project_id)s"]], "default": [["rule:admin_or_owner"]], "compute:create": ["role":"compute-user"], "compute:create:attach_network": ["role":"compute-user"], "compute:create:attach_volume": ["role":"compute-user"], "compute:get_all": ["role":"compute-user"], "admin_api": [["role:admin"]], "compute_extension:accounts": [["rule:admin_api"]], "compute_extension:admin_actions": [["rule:admin_api"]], "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]], "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]], "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]], "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]], "compute_extension:admin_actions:lock": [["rule:admin_api"]], "compute_extension:admin_actions:unlock": [["rule:admin_api"]], "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]], "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
92
Nov 9, 2012
Folsom, 2012.2
"compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]], "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]], "compute_extension:admin_actions:migrate": [["rule:admin_api"]], "compute_extension:aggregates": [["rule:admin_api"]], "compute_extension:certificates": ["role":"compute-user"], "compute_extension:cloudpipe": [["rule:admin_api"]], "compute_extension:console_output": ["role":"compute-user"], "compute_extension:consoles": ["role":"compute-user"], "compute_extension:createserverext": ["role":"compute-user"], "compute_extension:deferred_delete": ["role":"compute-user"], "compute_extension:disk_config": ["role":"compute-user"], "compute_extension:extended_server_attributes": [["rule:admin_api"]], "compute_extension:extended_status": ["role":"compute-user"], "compute_extension:flavorextradata": ["role":"compute-user"], "compute_extension:flavorextraspecs": ["role":"compute-user"], "compute_extension:flavormanage": [["rule:admin_api"]], "compute_extension:floating_ip_dns": ["role":"compute-user"], "compute_extension:floating_ip_pools": ["role":"computeuser"], "compute_extension:floating_ips": ["role":"compute-user"], "compute_extension:hosts": [["rule:admin_api"]], "compute_extension:keypairs": ["role":"compute-user"], "compute_extension:multinic": ["role":"compute-user"], "compute_extension:networks": [["rule:admin_api"]], "compute_extension:quotas": ["role":"compute-user"], "compute_extension:rescue": ["role":"compute-user"], "compute_extension:security_groups": ["role":"compute-user"], "compute_extension:server_action_list": [["rule:admin_api"]], "compute_extension:server_diagnostics": [["rule:admin_api"]], "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]], "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]], "compute_extension:users": [["rule:admin_api"]], "compute_extension:virtual_interfaces": ["role":"computeuser"], "compute_extension:virtual_storage_arrays": ["role":"computeuser"], "compute_extension:volumes": ["role":"compute-user"], "compute_extension:volumetypes": ["role":"compute-user"], "volume:create": ["role":"compute-user"], "volume:get_all": ["role":"compute-user"], "volume:get_volume_metadata": ["role":"compute-user"], "volume:get_snapshot": ["role":"compute-user"], "volume:get_all_snapshots": ["role":"compute-user"], "network:get_all_networks": ["role":"compute-user"], "network:get_network": ["role":"compute-user"], "network:delete_network": ["role":"compute-user"], "network:disassociate_network": ["role":"compute-user"], "network:get_vifs_by_instance": ["role":"compute-user"], "network:allocate_for_instance": ["role":"compute-user"], "network:deallocate_for_instance": ["role":"compute-user"], "network:validate_networks": ["role":"compute-user"],
93
Nov 9, 2012
Folsom, 2012.2
"network:get_instance_uuids_by_ip_filter": ["role":"compute-
"network:get_floating_ip": ["role":"compute-user"], "network:get_floating_ip_pools": ["role":"compute-user"], "network:get_floating_ip_by_address": ["role":"compute-user"], "network:get_floating_ips_by_project": ["role":"computeuser"], "network:get_floating_ips_by_fixed_address": ["role":"computeuser"], "network:allocate_floating_ip": ["role":"compute-user"], "network:deallocate_floating_ip": ["role":"compute-user"], "network:associate_floating_ip": ["role":"compute-user"], "network:disassociate_floating_ip": ["role":"compute-user"], "network:get_fixed_ip": ["role":"compute-user"], "network:add_fixed_ip_to_instance": ["role":"compute-user"], "network:remove_fixed_ip_from_instance": ["role":"computeuser"], "network:add_network_to_project": ["role":"compute-user"], "network:get_instance_nw_info": ["role":"compute-user"], "network:get_dns_domains": ["role":"compute-user"], "network:add_dns_entry": ["role":"compute-user"], "network:modify_dns_entry": ["role":"compute-user"], "network:delete_dns_entry": ["role":"compute-user"], "network:get_dns_entries_by_address": ["role":"compute-user"], "network:get_dns_entries_by_name": ["role":"compute-user"], "network:create_private_dns_domain": ["role":"compute-user"], "network:create_public_dns_domain": ["role":"compute-user"], "network:delete_dns_domain": ["role":"compute-user"] }
Service management
The two main concepts of Identity service management are:

Services
Endpoints

The Identity service also maintains a user that corresponds to each service (e.g., a user named nova, for the Compute service) and a special service tenant, which is called service. The commands for creating services and endpoints are described in a later section.
Configuration File
The Identity configuration file is an 'ini' file format with sections, extended from Paste, a common system used to configure Python WSGI-based applications. In addition to the paste config entries, general configuration values are stored under [DEFAULT], [sql], [ec2], and then drivers for the various services are included under their individual sections. The services include:

[identity] - the python module that backends the identity system
[catalog] - the python module that backends the service catalog
[token] - the python module that backends the token providing mechanisms
[policy] - the python module that drives the policy system for RBAC

The configuration file is expected to be named keystone.conf. When starting up Identity, you can specify a different configuration file to use with --config-file. If you do not specify a configuration file, keystone will look in the following directories for a configuration file, in order:

~/.keystone
~/
/etc/keystone
/etc

Logging is configured externally to the rest of Identity; the file specifying the logging configuration is set in the [DEFAULT] section of the keystone.conf file under log_config. If you wish to route all your logging through syslog, set the use_syslog=true option in the [DEFAULT] section. A sample logging file is available with the project in the directory etc/logging.conf.sample. Like other OpenStack projects, Identity uses the Python logging module, which includes extensive configuration options for choosing the output levels and formats.

In addition to this documentation page, you can check the etc/keystone.conf sample configuration files distributed with keystone for example configuration files for each server application.

For services that have a separate paste-deploy .ini file, the auth_token middleware can alternatively be configured in the [keystone_authtoken] section of the main config file, such as nova.conf. For example, in Nova, all middleware parameters can be removed from api-paste.ini like this:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
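The corresponding settings then live in nova.conf. A minimal sketch of what that [keystone_authtoken] section can look like (the host, port, and credential values below are illustrative placeholders, not taken from this guide):

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Sekr3tPass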
Note that middleware parameters in the paste config take priority; they must be removed in order to use the values in the [keystone_authtoken] section.
Running
To run Identity, simply start the services by using the command:
keystone-all
Invoking this command starts up two wsgi.Server instances, configured by the keystone.conf file as described above. One of these wsgi 'servers' is admin (the administration API) and the other is main (the primary/public API interface). Both of these run in a single process.
Initializing Keystone
keystone-manage is designed to execute commands that cannot be administered through the normal REST API. At the moment, the following calls are supported:

db_sync: Sync the database.
import_legacy: Import a legacy (pre-essex) version of the db.
export_legacy_catalog: Export service catalog from a legacy (pre-essex) db.
import_nova_auth: Load auth data from a dump created with keystone-manage.

Generally, the following is the first step after a source installation:
keystone-manage db_sync
--endpoint SERVICE_ENDPOINT: allows you to specify the keystone endpoint to communicate with. The default endpoint is http://localhost:35357/v2.0
--token SERVICE_TOKEN: your administrator service token.
Example usage
The keystone client is set up to expect commands in the general form of keystone command argument, followed by flag-like keyword arguments to provide additional (often optional) information. For example, the commands user-list and tenant-create can be invoked as follows:
# Using token auth env variables
export SERVICE_ENDPOINT=http://127.0.0.1:5000/v2.0/
export SERVICE_TOKEN=secrete_token
keystone user-list
keystone tenant-create --name=demo

# Using token auth flags
keystone --token=secrete --endpoint=http://127.0.0.1:5000/v2.0/ user-list
keystone --token=secrete --endpoint=http://127.0.0.1:5000/v2.0/ tenant-create --name=demo

# Using user + password + tenant_name env variables
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
keystone user-list
keystone tenant-create --name=demo

# Using user + password + tenant_name flags
keystone --username=admin --password=secrete --tenant_name=admin user-list
keystone --username=admin --password=secrete --tenant_name=admin tenant-create --name=demo
Tenants
Tenants are the high level grouping within Keystone that represent groups of users. A tenant is the grouping that owns virtual machines within Nova, or containers within Swift. A tenant can have zero or more users, users can be associated with more than one tenant, and each tenant-user pairing can have a role associated with it.
tenant-create
keyword arguments:
name
description (optional, defaults to None)
enabled (optional, defaults to True)

example:
keystone tenant-create --name=demo
tenant-delete
arguments: tenant_id
example:
keystone tenant-delete f2b7b39c860840dfa47d9ee4adffa0b3
tenant-enable
arguments: tenant_id
example:
keystone tenant-enable f2b7b39c860840dfa47d9ee4adffa0b3
tenant-disable
arguments: tenant_id
example:
keystone tenant-disable f2b7b39c860840dfa47d9ee4adffa0b3
Users
user-create
keyword arguments: name
user-delete
keyword arguments: user
example:
keystone user-delete f2b7b39c860840dfa47d9ee4adffa0b3
user-list
List users in the system, optionally filtered by a specific tenant (identified by tenant_id).
arguments: tenant_id (optional, defaults to None)
example:
keystone user-list
user-update-email
arguments: user_id, email
example:
keystone user-update-email 03c84b51574841ba9a0d8db7882ac645 "[email protected]"
user-enable
arguments: user_id
user-disable
arguments: user_id
example:
keystone user-disable 03c84b51574841ba9a0d8db7882ac645
user-update-password
arguments: user_id, password
example:
keystone user-update-password 03c84b51574841ba9a0d8db7882ac645 foo
Roles
role-create
arguments: name
example:
keystone role-create --name=demo
role-delete
arguments: role_id
example:
keystone role-delete 19d1d3344873464d819c45f521ff9890
role-list
example:
keystone role-list
role-get
arguments: role_id
example:
keystone role-get 19d1d3344873464d819c45f521ff9890
add-user-role
arguments: role_id, user_id, tenant_id
example:
keystone add-user-role \
    3a751f78ef4c412b827540b829e2d7dd \
    03c84b51574841ba9a0d8db7882ac645 \
    20601a7f1d94447daa4dff438cb1c209
remove-user-role
arguments: role_id, user_id, tenant_id
example:
keystone remove-user-role \
    19d1d3344873464d819c45f521ff9890 \
    08741d8ed88242ca88d1f61484a0fe3b \
    20601a7f1d94447daa4dff438cb1c209
Services
service-create
keyword arguments: name, type, description
service-list
arguments: none
example:
keystone service-list
service-get
arguments: service_id
example:
keystone service-get 08741d8ed88242ca88d1f61484a0fe3b
service-delete
arguments: service_id
example:
keystone service-delete 08741d8ed88242ca88d1f61484a0fe3b
Clients making calls to the service will pass in an authentication token. The Keystone middleware will look for and validate that token, taking the appropriate action. It will also retrieve additional information from the token, such as the user name and ID, the tenant name and ID, and roles. The middleware will pass that data down to the service as headers.
Setting up credentials
To ensure services that you add to the catalog know about the users, tenants, and roles, you must create an admin token and create service users. These sections walk through those requirements.
Admin Token
For a default installation of Keystone, before you can use the REST API, you need to define an authorization token. This is configured in the keystone.conf file under the section [DEFAULT]. In the sample file provided with the keystone project, the line defining this token is:

[DEFAULT]
admin_token = ADMIN

This configured token is a "shared secret" between keystone and other OpenStack services, and is used by the client to communicate with the API to create tenants, users, roles, etc.
Setting up services
Creating Service Users
To configure the OpenStack services with service users, we need to create a tenant for all the services, and then users for each of the services. We then assign those service users an Admin role on the service tenant. This allows them to validate tokens and authenticate and authorize other user requests.

Create a tenant for the services, typically named 'service' (however, the name can be whatever you choose):
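For example, a minimal sketch (the tenant name follows the convention above; substitute your own if you chose a different one):

keystone tenant-create --name=service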
This returns a UUID of the tenant - keep that, you'll need it when creating the users and specifying the roles. Create service users for nova, glance, swift, and quantum (or whatever subset is relevant to your deployment):
keystone user-create --name=nova \
                     --pass=Sekr3tPass \
                     --tenant_id=[the uuid of the tenant] \
                     --email=[email protected]
Repeat this for each service you want to enable. Email is a required field in keystone right now, but it is not used in relation to the service accounts. Each of these commands will also return a UUID of the user. Keep those to assign the Admin role.

For adding the Admin role to the service accounts, you'll need to know the UUID of the role you want to add. If you don't have it handy, you can look it up quickly with:
keystone role-list
Once you have it, assign the service users to the Admin role. This is all assuming that you've already created the basic roles and settings as described in the configuration section:
keystone user-role-add --tenant_id=[uuid of the service tenant] \
                       --user=[uuid of the service account] \
                       --role=[uuid of the Admin role]
Defining Services
Keystone also acts as a service catalog to let other OpenStack systems know where relevant API endpoints exist for OpenStack Services. The OpenStack Dashboard, in particular, uses this heavily - and this must be configured for the OpenStack Dashboard to properly function. The endpoints for these services are defined in a template, an example of which is in the project as the file etc/default_catalog.templates. Keystone supports two means of defining the services, one is the catalog template, as described above - in which case everything is detailed in that template. The other is a SQL backend for the catalog service, in which case after keystone is online, you need to add the services to the catalog:
keystone service-create --name=nova \
                        --type=compute \
                        --description="Nova Compute Service"
keystone service-create --name=ec2 \
                        --type=ec2 \
                        --description="EC2 Compatibility Layer"
keystone service-create --name=glance \
                        --type=image \
                        --description="Glance Image Service"
keystone service-create --name=keystone \
                        --type=identity \
                        --description="Keystone Identity Service"
keystone service-create --name=swift \
                        --type=object-store \
                        --description="Swift Service"
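When using the SQL catalog backend, each service also needs its endpoints registered with keystone endpoint-create. A minimal sketch for the Identity service itself (the region name and URLs are placeholders for your deployment; the service ID comes from the corresponding service-create output):

keystone endpoint-create --region RegionOne \
                         --service_id=[uuid of the keystone service] \
                         --publicurl=http://127.0.0.1:5000/v2.0 \
                         --internalurl=http://127.0.0.1:5000/v2.0 \
                         --adminurl=http://127.0.0.1:35357/v2.0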
Setting Up Middleware
Keystone Auth-Token Middleware
The Keystone auth_token middleware is a WSGI component that can be inserted in the WSGI pipeline to handle authenticating tokens with Keystone.
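A minimal sketch of wiring this filter into a service's paste pipeline (the pipeline and application names here are illustrative placeholders; the filter_factory line matches the filter definitions shown elsewhere in this chapter):

[pipeline:public_api]
pipeline = authtoken service_app

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory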
4. Restart swift services.
5. Verify that the Identity service, Keystone, is providing authentication to Object Storage (Swift).
$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN stat
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator

[filter:s3token]
paste.filter_factory = keystone.middleware.s3_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_port = 5000
service_host = 127.0.0.1
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
auth_token = ADMIN
admin_token = ADMIN
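These filters take effect only once they are referenced from the proxy server's pipeline. One common ordering for a keystone-enabled proxy is sketched below; adjust it to the filters actually present in your proxy-server.conf:

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server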
2. You can then access your Swift installation directly via the S3 API; here's an example with the `boto` library:
import boto
import boto.s3.connection

connection = boto.connect_s3(
    aws_access_key_id='<ec2 access key for user>',
    aws_secret_access_key='<ec2 secret access key for user>',
    port=8080,
    host='localhost',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
(being defined in /etc/openldap/schema/inetorgperson.ldiff). You would only need two LDAP fields: CN and SN. The CN field will be used for the bind call, and is the ID field for the user object.

Configuring Tenants
OpenStack tenants are also collections. They are instances of the object class groupOfNames (defined in /etc/openldap/schema/core.ldiff). In order to bind tenants to users, the user's DN should be indicated in the tenant's member attribute.

Configuring Roles

Roles will be stored in the organizationalRole LDAP object class, defined in /etc/openldap/schema/core.ldiff. The assignment is indicated via the user's DN in the roleOccupant attribute.

2. Setting up Keystone

The [ldap] stanza in the keystone.conf file allows you to specify the parameters related to the LDAP backend. Supported values are:

url
user
password
suffix
use_dumb_member
user_tree_dn
user_objectclass
user_id_attribute
user_name_attribute
tenant_tree_dn
tenant_objectclass
tenant_id_attribute
tenant_name_attribute
tenant_member_attribute
role_tree_dn
role_objectclass
role_id_attribute
role_member_attribute

Here is a typical set-up:
[ldap]
url = ldap://localhost
tree_dn = dc=exampledomain,dc=com
user_tree_dn = ou=Users,dc=exampledomain,dc=com
role_tree_dn = ou=Roles,dc=exampledomain,dc=com
tenant_tree_dn = ou=Groups,dc=exampledomain,dc=com
user = dc=Manager,dc=exampledomain,dc=com
password = freeipa4all
backend_entities = ['Tenant', 'User', 'UserRoleAssociation', 'Role']
suffix = cn=exampledomain,cn=com

[identity]
driver = keystone.identity.backends.ldap.Identity
Overriding default attributes

The default object classes and attributes are intentionally simplistic. They reflect the common standard objects according to the LDAP RFCs. By default, the user name in the Identity service is queried against the LDAP SN (Surname) attribute type and the tenant name in the Identity service will be queried against the LDAP OU (Organizational Unit) attribute type. However, in a live deployment, the correct attributes can be overridden to support a pre-existing, more complex schema. These can be changed through the user_name_attribute, user_id_attribute and tenant_name_attribute configuration options in keystone.conf. For example, you can configure the Identity service to use the CN (Common Name) instead of SN.

As a more detailed example, in the user object, the objectClass posixAccount from RFC2307 is very common. If this is the underlying objectclass, then the uid field should probably be uidNumber and the username field either uid or cn. To change these two fields, the corresponding entries in the Keystone configuration file would be:
[ldap]
user_id_attribute = uidNumber
user_name_attribute = cn
(Type) Description
(IntOpt) Current version of the LDAP schema
(StrOpt) Point this at your ldap server
(StrOpt) LDAP User
(StrOpt) OU for Users
(StrOpt) DN of Users
(StrOpt) DN of Users
(StrOpt) Attribute to use as id
(BoolOpt) Modify user attributes instead of creating/deleting
(StrOpt) Attribute to use as name
(StrOpt) OU for Users
(StrOpt) OID for Users
(StrOpt) OU for Tenants
(StrOpt) LDAP ObjectClass to use for Tenants
(StrOpt) Attribute to use as Tenant
(StrOpt) Attribute to use as Member
(StrOpt) Attribute to use as tenant name
(StrOpt) OU for Roles
(StrOpt) LDAP ObjectClass to use for Roles
(StrOpt) OU for Roles
(StrOpt) Attribute to use as Role member
(StrOpt) Attribute to use as Role

user_tree_dn = "ou=Users,dc=example,dc=com"
user_dn = "cn=Manager,dc=example,dc=com"
user_objectClass = inetOrgPerson
user_id_attribute = cn
user_modify_only = false
user_name_attribute = sn
user_subtree = "ou=Users,dc=example,dc=com"
user_unit = "Users"
tenant_tree_dn = "ou=Groups,dc=example,dc=com"
tenant_objectclass = groupOfNames
tenant_id_attribute = cn
tenant_member_attribute = member
tenant_name_attribute = ou
role_tree_dn = "ou=Roles,dc=example,dc=com"
role_objectclass = organizationalRole
role_project_subtree = "ou=Groups,dc=example,dc=com"
role_member_attribute = roleOccupant
role_id_attribute = cn
It should be noted that when using this option an admin tenant/role relationship is required. The admin user is granted access to the 'Admin' role on the 'admin' tenant.
7. Image Management
You can use OpenStack Image Service for discovering, registering, and retrieving virtual machine images. The service includes a RESTful API that allows users to query VM image metadata and retrieve the actual image with HTTP requests, or you can use a client class in your Python code to accomplish the same tasks.

VM images made available through OpenStack Image Service can be stored in a variety of locations, from simple file systems to object-storage systems like the OpenStack Object Storage project, or even use S3 storage either on its own or through an OpenStack Object Storage S3 interface. The backend stores that OpenStack Image Service can work with are as follows:

OpenStack Object Storage - OpenStack Object Storage is the highly-available object storage project in OpenStack.
Filesystem - The default backend that OpenStack Image Service uses to store virtual machine images is the filesystem backend. This simple backend writes image files to the local filesystem.
S3 - This backend allows OpenStack Image Service to store virtual machine images in Amazon's S3 service.
HTTP - OpenStack Image Service can read virtual machine images that are available via HTTP somewhere on the Internet. This store is read-only.

This chapter assumes you have a working installation of the Image Service, with a working endpoint and users created in the Identity service, plus you have sourced the environment variables required by the nova client and glance client.
Adding images
glance image-create
Use the glance image-create command to add a new virtual machine image to glance, and use glance image-update to modify properties of an image that has already been uploaded. The image-create command takes several optional arguments, but you should specify a name for your image using the --name flag, as well as the disk format with --disk_format and the container format with --container_format. Pass in the file via standard input or using the --file option. For example:
$ glance image-create --name myimage --disk_format=raw --container_format=bare < /path/to/file.img
or
$ glance image-create --name myimage --disk_format=raw --container_format=bare --file /path/to/file.img
Disk format
The --disk_format flag specifies the format of the underlying disk image. Virtual appliance vendors have different formats for laying out the information contained in a virtual machine disk image. The following are valid disk formats:

raw: This is an unstructured disk image format.
qcow2: A disk format supported by the QEMU emulator that can expand dynamically and supports copy-on-write.
vhd: This is the VHD disk format, a common disk format used by virtual machine monitors from VMWare, Xen, Microsoft, VirtualBox, and others.
vmdk: This common disk format is used by the Compute service's VMware API.
iso: An archive format typically used for the data contents of an optical disc (e.g. CDROM, DVD).
vdi: A disk format supported by the VirtualBox virtual machine monitor and the QEMU emulator.
aki: An Amazon kernel image.
ari: An Amazon ramdisk image.
ami: An Amazon machine image.
Container format
The --container_format flag indicates whether the virtual machine image is in a file format that also contains metadata about the actual virtual machine. Note that the container format string is not currently used by the Compute service, so it is safe to simply specify bare as the container format if you are unsure. The following are valid container formats:

bare: This indicates there is no container or metadata envelope for the image.
ovf: This is the OVF container format, a standard for describing the contents of a virtual machine appliance.
aki: Use this format when the disk format is set to aki.
ari: Use this format when the disk format is set to ari.
ami: Use this format when the disk format is set to ami.
Image metadata
You can associate metadata with an image using the --property key=value argument to glance image-create or glance image-update. For example:
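A minimal sketch of tagging an existing image with properties (the image name and property values below are illustrative placeholders, not taken from the original text):

$ glance image-update myimage --property architecture=x86_64 --property hypervisor_type=kvm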
If the following properties are set on an image, and the ImagePropertiesFilter scheduler filter is enabled (which it is by default), then the scheduler will only consider compute hosts that satisfy these properties:

architecture: The CPU architecture that must be supported by the hypervisor, e.g. x86_64, arm. Run uname -m to get the architecture of a machine.
hypervisor_type: The hypervisor type. Allowed values include: xen, qemu, kvm, lxc, uml, vmware, hyperv, powervm.
vm_mode: The virtual machine mode. This represents the host/guest ABI (application binary interface) used for the virtual machine. Allowed values are:
    hvm: Fully virtualized. This is the mode used by QEMU and KVM.
    xen: Xen 3.0 paravirtualized.
    uml: User Mode Linux paravirtualized.
    exe: Executables in containers. This is the mode used by LXC.

The following metadata properties are specific to the XenAPI driver:

auto_disk_config: A boolean option. If true, the root partition on the disk will be automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format.
os_type: The operating system installed on the image, e.g. linux, windows. The XenAPI driver contains logic that will take different actions depending on the value of the os_type parameter of the image. For example, for images where os_type=windows, it will create a FAT32-based swap partition instead of a Linux swap partition, and it will limit the injected hostname to less than 16 characters.

The following metadata properties are specific to the VMware API driver:

vmware_adaptertype: Indicates the virtual SCSI or IDE controller used by the hypervisor. Allowed values: lsiLogic, busLogic, ide
vmware_ostype: A VMware GuestID which describes the operating system installed in the image. This will be passed to the hypervisor when creating a virtual machine. See thinkvirt.com for a list of valid values. If this is not specified, it will default to otherGuest.
vmware_image_version: Currently unused; set it to 1.
Ubuntu images
Canonical maintains an official set of Ubuntu-based images. These images use ubuntu as the login user. If your deployment uses QEMU or KVM, we recommend using the images in QCOW2 format. The most recent version of the 64-bit QCOW2 image for Ubuntu 12.04 is precise-server-cloudimg-amd64-disk1.img.
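As a sketch of how such an image can be fetched and registered (the URL assumes the standard cloud-images.ubuntu.com layout; the glance flags follow the conventions used elsewhere in this guide):

$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
$ glance image-create --name="Ubuntu 12.04 cloudimg amd64" --is-public=true \
    --container-format=ovf --disk-format=qcow2 < precise-server-cloudimg-amd64-disk1.img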
Fedora images
The Fedora project maintains prebuilt Fedora JEOS (Just Enough OS) images for download at http://berrange.fedorapeople.org/images . A 64-bit QCOW2 image for Fedora 16, f16-x86_64-openstack-sda.qcow2, is available for download.
Oz (KVM)
Oz is a command-line tool that has the ability to create images for common Linux distributions. Rackspace Cloud Builders uses Oz to create virtual machines; see rackerjoe/oz-image-build on Github for their Oz templates. For an example from the Fedora Project wiki, see Building an image with Oz.
VeeWee (KVM)
VeeWee is often used to build Vagrant boxes, but it can also be used to build KVM images. See the doc/definition.md and doc/template.md VeeWee documentation files for more details.
Note
QCOW2 images are only supported with the KVM and QEMU hypervisors.

As an example, this section will describe how to create a CentOS 6.2 image. 64-bit ISO images of CentOS 6.2 can be downloaded from one of the CentOS mirrors. This example uses the CentOS netinstall ISO, which is a smaller ISO file that downloads packages from the Internet as needed.
This shows that VNC displays :0 and :1 are in use. In this example, we will use VNC display :2. Also, we want a temporary file to send power signals to the VM instance. We default to /tmp/file.mon, but make sure it doesn't exist yet. If it does, use a different file name for the MONITOR variable defined below:
$ IMAGE=centos-6.2.img
$ ISO=CentOS-6.2-x86_64-netinstall.iso
$ VNCDISPLAY=:2
$ MONITOR=/tmp/file.mon
$ sudo kvm -m 1024 -cdrom $ISO -drive file=${IMAGE},if=virtio,index=0 \
    -boot d -net nic -net user -nographic -vnc ${VNCDISPLAY} \
    -monitor unix:${MONITOR},server,nowait
On Ubuntu-based virtual machines, the account is called "ubuntu". On Fedora-based virtual machines, the account is called "ec2-user".

You can change the name of the account used by cloud-init by editing the /etc/cloud/cloud.cfg file and adding a line with a different user. For example, to configure cloud-init to put the key in an account named "admin", edit the config file so it has the line:
user: admin
Note
Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with - (hyphen). Make sure it's http: not http; and authorized_keys not authorized-keys.
Note
The above script only retrieves the ssh public key from the metadata server. It does not retrieve user data, which is optional data that can be passed by the user when requesting a new instance. User data is often used for running a custom script when an instance comes up. As the OpenStack metadata service is compatible with version 2009-04-04 of the Amazon EC2 metadata service, consult the Amazon EC2 documentation on Using Instance Metadata for details on how to retrieve user data.
In the example above, /dev/loop0 is available for use. Associate it to the image using losetup, and expose the partitions as device files using kpartx, as root:
# IMAGE=centos-6.2.img
# losetup /dev/loop0 $IMAGE
# kpartx -av /dev/loop0
If the image has, say, three partitions (/boot, /, swap), there should be one new device created per partition:
$ ls -l /dev/mapper/loop0p*
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/mapper/loop0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/mapper/loop0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/mapper/loop0p3
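Before you can edit files in the image, the root partition has to be mounted somewhere. A sketch of that step, reconstructed by analogy with the qemu-nbd instructions below (this assumes the second partition, loop0p2, holds the root file system), as root:

# mkdir /mnt/image
# mount /dev/mapper/loop0p2 /mnt/image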
You can now modify the files in the image by going to /mnt/image. When done, unmount the image and release the loop device, as root:
# umount /mnt/image
# losetup -d /dev/loop0
Note
If nbd has already been loaded with max_part=0, you will not be able to mount an image if it has multiple partitions. In this case, you may need to first unload the nbd kernel module, and then load it. To unload it, as root:
# rmmod nbd
Connect your image to one of the network block devices (e.g., /dev/nbd0, /dev/nbd1). In this example, we use /dev/nbd3. As root:
# IMAGE=centos-6.2.img
# qemu-nbd -c /dev/nbd3 $IMAGE
If the image has, say, three partitions (/boot, /, swap), there should be one new device created per partition:
$ ls -l /dev/nbd3*
brw-rw---- 1 root disk 43, 48 2012-03-05 15:32 /dev/nbd3
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/nbd3p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/nbd3p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/nbd3p3
Note
If the network block device you selected was already in use, the initial qemu-nbd command will fail silently, and the /dev/nbd3p{1,2,3} device files will not be created.

To mount the second partition, as root:
# mkdir /mnt/image
# mount /dev/nbd3p2 /mnt/image
You can now modify the files in the image by going to /mnt/image. When done, unmount the image and release the network block device, as root:
# umount /mnt/image
# qemu-nbd -d /dev/nbd3
Check that adding the image was successful (Status should be ACTIVE when the operation is complete):
$ nova image-list
In general, you need to use an ssh keypair to log in to a running instance, although some images have built-in accounts created with associated passwords. However, since images are often shared by many users, it is not advised to put passwords into the images. Nova therefore supports injecting ssh keys into instances before they are booted. This allows a user to log in securely to the instances that he or she creates. Generally the first thing that a user does when using the system is create a keypair.

Keypairs provide secure authentication to your instances. As part of the first boot of a virtual image, the private key of your keypair is added to the authorized_keys file of the login account. Nova generates a public and private key pair, and sends the private key to the user. The public key is stored so that it can be injected into instances.

Run (boot) a test instance:
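If you have not created a keypair yet, a minimal sketch of doing so with nova keypair-add (the name test matches the --key_name used in the boot command below, and test.pem is reused by the ssh example later in this section):

$ nova keypair-add test > test.pem
$ chmod 600 test.pem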
$ nova boot --image cirros-0.3.0-x86_64 --flavor m1.small --key_name test my-first-server
Here's a description of the parameters used above:

--image: the name or ID of the image we want to launch, as shown in the output of nova image-list
--flavor: the name or ID of the size of the instance to create (number of vcpus, available RAM, available storage). View the list of available flavors by running nova flavor-list
--key_name: the name of the key to inject in to the instance at launch.

Check the status of the instance you launched:
$ nova list
The instance will go from BUILD to ACTIVE in a short time, and you should be able to connect via ssh as the 'cirros' user, using the private key you created. If your ssh keypair fails for some reason, you can also log in with the default cirros password: cubswin:)
$ ipaddress=... # Get IP address from "nova list"
$ ssh -i test.pem -l cirros $ipaddress
The 'cirros' user is part of the sudoers group, so you can escalate to 'root' via the following command when logged in to the instance:
$ sudo -i
Warning
Pausing and Suspending instances only apply to KVM-based hypervisors and XenServer/XCP hypervisors.

Pause/Unpause: Stores the content of the VM in memory (RAM).
Suspend/Resume: Stores the content of the VM on disk.

It can be interesting for an administrator to suspend instances if maintenance is planned, or if the instances are not frequently used. Suspending an instance frees up memory and vCPUs, while pausing keeps the instance running in a "frozen" state. Suspension could be compared to a "hibernation" mode.
Pausing instance
To pause an instance:
nova pause $server-id
Suspending instance
To suspend an instance:
nova suspend $server-id
Note
The --force_hosts scheduler hint has been replaced with --availability_zone in the Folsom release.
$ nova boot --image <uuid> --flavor m1.tiny --key_name test --availability_zone nova:server2
There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it very easy by providing the cloud-init package, which can be used to take care of the instance configuration at the time of launch. cloud-init handles importing ssh keys for password-less login, setting the hostname, etc. The instance acquires the instance-specific configuration from nova-compute by connecting to a metadata interface running on 169.254.169.254.

While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing the keys etc. by running a set of commands at boot time from rc.local.

The process used for Ubuntu and Fedora is largely the same with a few minor differences, which are explained below. In both cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We are using the machine called client1 as explained in the chapter on Installation and Configuration for this purpose.

The approach explained below will give you disk images that represent a disk without any partitions. Nova-compute can resize such disks (including resizing the file system) based on the instance type chosen at the time of launching the instance. These images cannot have a bootable flag and hence it is mandatory to have associated kernel and ramdisk images. These kernel and ramdisk images need to be used by nova-compute at the time of launching the instance.

However, we have also added a small section towards the end of the chapter about creating bootable images with multiple partitions that can be used by nova to launch an instance without the need for kernel and ramdisk images. The caveat is that while nova-compute can resize such disks at the time of launching the instance, the file system size is not altered and hence, for all practical purposes, such disks are not resizable.
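Once the kernel and ramdisk have been extracted from the installed system, they are registered as separate glance images and linked to the machine image through its properties. A minimal sketch (the file names and image names below are illustrative placeholders; the kernel_id and ramdisk_id values come from the preceding two commands):

glance image-create --name="sample-kernel" --disk-format=aki --container-format=aki < vmlinuz-2.6.38-7-server
glance image-create --name="sample-ramdisk" --disk-format=ari --container-format=ari < initrd.img-2.6.38-7-server
glance image-create --name="sample-image" --disk-format=ami --container-format=ami \
    --property kernel_id=[uuid of the kernel image] \
    --property ramdisk_id=[uuid of the ramdisk image] < serverfinal.img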
OS Installation
Download the iso file of the Linux distribution you want installed in the image. The instructions below are tested on Ubuntu 11.04 Natty Narwhal 64-bit server and Fedora 14 64-bit. Most of the instructions refer to Ubuntu. The points of difference between Ubuntu and Fedora are mentioned wherever required.
wget http://releases.ubuntu.com/natty/ubuntu-11.04-server-amd64.iso
Boot a KVM instance with the OS installer ISO in the virtual CD-ROM. This will start the installation process. The command below also sets up a VNC display on :0.
sudo kvm -m 256 -cdrom ubuntu-11.04-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic -vnc :0
Connect to the VM through VNC (use display number :0) and finish the installation. For example, where 10.10.10.4 is the IP address of client1:
vncviewer 10.10.10.4:0
During the installation of Ubuntu, create a single ext4 partition mounted on /. Do not create a swap partition. In the case of Fedora 14, the installation will not progress unless you create a swap partition. Please go ahead and create a swap partition. After finishing the installation, relaunch the VM by executing the following command.
sudo kvm -m 256 -drive file=server.img,if=scsi,index=0 -boot c -net nic -net user -nographic -vnc :0
At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image. At the minimum, for Ubuntu you may run the following commands
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openssh-server cloud-init
Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance coming up as an interface other than eth0.
sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules
Shut down the virtual machine and proceed with the next steps.
Observe the name of the loop device (/dev/loop0 in our setup) when $filepath is the path to the mounted .raw file. Now we need to find out the starting sector of the partition. Run:
sudo fdisk -cul /dev/loop0
Make a note of the starting sector of the /dev/loop0p1 partition, i.e. the partition whose ID is 83. This number should be multiplied by 512 to obtain the correct offset in bytes. In this case: 2048 x 512 = 1048576.

Unmount the loop0 device:
sudo losetup -d /dev/loop0
Now mount only the partition (/dev/loop0p1) of server.img which we had previously noted down, by adding the -o parameter with the previously calculated value:
sudo losetup -f -o 1048576 server.img
sudo losetup -a
Make a note of the mount point of our device (/dev/loop0 in our setup) when $filepath is the path to the mounted .raw file. Copy the entire partition to a new .raw file:
sudo dd if=/dev/loop0 of=serverfinal.img
Now we have our ext4 filesystem image, i.e. serverfinal.img. Unmount the loop0 device:
sudo losetup -d /dev/loop0
Tweaking /etc/fstab
You will need to tweak /etc/fstab to make it suitable for a cloud instance. Nova-compute may resize the disk at the time of launch of instances based on the instance type chosen.
This can make the UUID of the disk invalid. Hence we have to use the file system label as the identifier for the partition instead of the UUID.

Loop mount serverfinal.img by running:
sudo mount -o loop serverfinal.img /mnt
Edit /mnt/etc/fstab and modify the line for mounting the root partition (which may look like the following):
UUID=e7f5af8d-5d96-45cc-a0fc-d0d1bde8f31c / ext4 errors=remount-ro 0 1
to
LABEL=uec-rootfs / ext4 defaults 0 0
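For the label to resolve, the file system inside serverfinal.img also has to carry that label. One way to set it (a sketch; tune2fs operates directly on the ext4 image file):

sudo tune2fs -L uec-rootfs serverfinal.img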
Note
The above script only retrieves the ssh public key from the metadata server. It does not retrieve user data, which is optional data that can be passed by the user when requesting a new instance. User data is often used for running a custom script when an instance comes up.
As the OpenStack metadata service is compatible with version 2009-04-04 of the Amazon EC2 metadata service, consult the Amazon EC2 documentation on Using Instance Metadata for details on how to retrieve user data.
Now, we have all the components of the image ready to be uploaded to the OpenStack imaging server.
For Fedora, the process will be similar. Make sure that you use the right kernel and initrd files extracted above. The uec-publish-image command returns the prompt back immediately. However, the upload process takes some time and the images will be usable only after the process is complete. You can keep checking the status using the command nova image-list as mentioned below.
Bootable Images
You can register bootable disk images without associating kernel and ramdisk images. When you do not want the flexibility of using the same disk image with different kernel/ ramdisk images, you can go for bootable disk images. This greatly simplifies the process of bundling and registering the images. However, the caveats mentioned in the introduction
to this chapter apply. Please note that the instructions below use server.img and you can skip all the cumbersome steps related to extracting the single ext4 partition.
glance image-create name="My Server" --is-public=true --container-format=ovf --disk-format=raw < server.img
Image Listing
The status of the images that have been uploaded can be viewed by using the nova image-list command. The output should look like this:
nova image-list
+----+---------------------------------------------+--------+
| ID | Name                                        | Status |
+----+---------------------------------------------+--------+
| 6  | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7  | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd  | ACTIVE |
| 8  | ttylinux-uec-amd64-12.1_2.6.35-22_1.img     | ACTIVE |
+----+---------------------------------------------+--------+
OpenStack presents the disk using a VIRTIO interface while launching the instance. Hence the OS needs to have drivers for VIRTIO. By default, the Windows Server 2008 ISO does not have the drivers for VIRTIO, so download a virtual floppy drive containing VIRTIO drivers from the following location http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/ and attach it during the installation.

Start the installation by running:
sudo kvm -m 2048 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio -boot d \
    -drive file=virtio-win-0.1-22.iso,index=3,media=cdrom \
    -net nic,model=virtio -net user -nographic -vnc :0
When the installation prompts you to choose a hard disk device you won't see any devices available. Click on "Load drivers" at the bottom left and load the drivers from A:\i386\Win2008.

After the installation is over, boot into it once and install any additional applications you need to install and make any configuration changes you need to make. Also ensure that RDP is enabled as that would be the only way you can connect to a running instance of
Windows. Windows firewall needs to be configured to allow incoming ICMP and RDP connections. For OpenStack to allow incoming RDP connections, use security group rules to open up port 3389. Shut down the VM and upload the image to OpenStack:
glance image-create name="My WinServer" --is-public=true --container-format= ovf --disk-format=raw < windowsserver.img
ii  qemu         0.14.0~rc1+noroms-0ubuntu4~ppalucid1   dummy transitional pacakge from qemu to qemu
ii  qemu-common  0.14.0~rc1+noroms-0ubuntu4~ppalucid1   qemu common functionality (bios, documentati
ii  qemu-kvm     0.14.0~rc1+noroms-0ubuntu4~ppalucid1   Full virtualization on i386 and amd64 hardwa
Images can only be created from running instances if Compute is configured to use qcow2 images, which is the default setting. You can explicitly enable the use of qcow2 images by adding the following line to nova.conf:
use_cow_images=true
Write data to disk

Before creating the image, we need to make sure we are not missing any buffered content that wouldn't have been written to the instance's disk. In order to resolve that, connect to the instance and run sync, then exit.

Create the image

In order to create the image, we first need to obtain the server ID:
+-----+------------+--------+--------------------+
| ID  | Name       | Status | Networks           |
+-----+------------+--------+--------------------+
| 116 | Server 116 | ACTIVE | private=20.10.0.14 |
+-----+------------+--------+--------------------+
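The snapshot itself is then created with nova image-create, giving the server ID from the listing above and a name for the new image. A sketch (the image name mirrors the one that appears in the listing further down):

nova image-create 116 Image-116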
The command will then perform the image creation (by creating a qemu snapshot) and will automatically upload the image to your repository.
Note
The image that will be created will be flagged as "Private" (for glance: --is-public=False). Thus, the image will be available only for the tenant.

Check image status

After a while the image will turn from a "SAVING" state to an "ACTIVE" one.
nova image-list
+----+---------------------------------------------+--------+
| ID | Name                                        | Status |
+----+---------------------------------------------+--------+
| 20 | Image-116                                   | ACTIVE |
| 6  | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7  | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd  | ACTIVE |
| 8  | ttylinux-uec-amd64-12.1_2.6.35-22_1.img     | ACTIVE |
+----+---------------------------------------------+--------+
Create an instance from the image

You can now create an instance based on this image as you normally do for other images:
nova boot --flavor 1 --image 20 New_server
Troubleshooting

Mainly, it wouldn't take more than 5 minutes in order to go from a "SAVING" to the "ACTIVE" state. If this takes longer than five minutes, here are several hints:

- The feature doesn't work while you have attached a volume (via nova-volume) to the instance. Thus, you should detach the volume first, create the image, and then re-attach the volume.
- Make sure the version of qemu you are using is not older than version 0.14. Older versions produce an "unknown option -s" error in nova-compute.log.
- Look into nova-api.log and nova-compute.log for extra information.
Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -c CHUNKSIZE, --chunksize=CHUNKSIZE
                        Amount of data to transfer per HTTP write
  -d, --debug           Print debugging information
  -D DONTREPLICATE, --dontreplicate=DONTREPLICATE
                        List of fields to not replicate
  -m, --metaonly        Only replicate metadata, not images
  -l LOGFILE, --logfile=LOGFILE
                        Path of file to log to
  -s, --syslog          Log to syslog instead of a file
  -t TOKEN, --token=TOKEN
                        Pass in your authentication token if you have one. If
                        you use this option the same token is used for both
                        the master and the slave.
  -M MASTERTOKEN, --mastertoken=MASTERTOKEN
                        Pass in your authentication token if you have one.
                        This is the token used for the master.
  -S SLAVETOKEN, --slavetoken=SLAVETOKEN
                        Pass in your authentication token if you have one.
                        This is the token used for the slave.
  -v, --verbose         Print more verbose output
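The livecopy subcommand copies images directly from one glance server to another; its invocation, sketched here by analogy with the load subcommand shown below, takes the two servers as arguments:

glance-replicator livecopy fromserver:port toserver:port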
fromserver:port: the location of the master glance instance.
toserver:port: the location of the slave glance instance.

Take a copy of the fromserver, and dump it onto the toserver. Only images visible to the user running the replicator will be copied if glance is configured to use the Identity service (keystone) for authentication. Only images active on fromserver are copied across. The copy is done "on-the-wire", so there are no large temporary files on the machine running the replicator to clean up.
server:port: the location of the glance instance.
path: a directory on disk to contain the data.

Do the same thing as livecopy, but dump the contents of the glance server to a directory on disk. This includes metadata and image data. Depending on the size of the local glance repository, the resulting dump may consume a large amount of local storage. Therefore, we recommend you use the size command first to determine the size of the resulting dump.
load: Load a directory created by the dump command into a glance server
glance-replicator load server:port path
server:port: the location of the glance instance.
path: a directory on disk containing the data.

Load the contents of a local directory into glance. The dump and load commands are useful when replicating across two glance servers where a direct connection across the two glance hosts is impossible or too slow.
fromserver:port: the location of the master glance instance.
toserver:port: the location of the slave glance instance.

The compare command will show you the differences between the two servers, which is effectively a dry run of the livecopy command.
server:port: the location of the glance instance.

The size command will tell you how much disk space is going to be used by image data in either a dump or a livecopy. Note that this provides the raw number of bytes that would be written to the destination; it has no information about the redundancy costs associated with glance-registry back-ends that use replication for redundancy, such as Swift or Ceph.
8. Instance Management
Instances are the running virtual machines within an OpenStack cloud. The Images and Instances section of the Introduction to OpenStack Compute chapter provides a high level overview of instances and their life cycle. This chapter deals with the details of how to manage that life cycle.
Nova CLI
The nova command provided by the OpenStack python-novaclient package is the basic command line utility for users interacting with OpenStack. This is available as a native package for most modern Linux distributions, or the latest version can be installed directly using pip, the python package installer:
sudo pip install -e git+https://github.com/openstack/python-novaclient.git#egg=python-novaclient
Full details for nova and other CLI tools are provided in the OpenStack CLI Guide. What follows is the minimal introduction required to follow the CLI example in this chapter. In the case of a conflict the OpenStack CLI Guide should be considered authoritative (and a bug filed against this section).

In order to function the nova CLI needs to know four things:

Authentication URL. This can be passed as the --os_auth_url flag or using the OS_AUTH_URL environment variable.
Tenant (sometimes referred to as project) name. This can be passed as the --os_tenant_name flag or using the OS_TENANT_NAME environment variable.
User name. This can be passed as the --os_username flag or using the OS_USERNAME environment variable.
Password. This can be passed as the --os_password flag or using the OS_PASSWORD environment variable.

For example, if you have your Keystone identity management service running on the default port (5000) on host keystone.example.com and want to use the nova cli as the user "demouser" with the password "demopassword" in the "demoproject" tenant, you can export the following values in your shell environment or pass the equivalent command line args (presuming these identities already exist):
export OS_AUTH_URL="http://keystone.example.com:5000/v2.0/"
export OS_USERNAME=demouser
export OS_PASSWORD=demopassword
export OS_TENANT_NAME=demoproject
If you are using the Horizon web dashboard, you can easily download credential files like this with the correct values for your particular implementation.
Compute API
OpenStack provides a RESTful API for all functionality. Complete API documentation is available at http://docs.openstack.org/api. The OpenStack Compute API documentation refers to instances as "servers". The nova cli can be made to show the API calls it is making by passing it the --debug flag:
# nova --debug list
connect: (10.0.0.15, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 10.0.0.15:5000\r\nContent-Length: 116\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n{"auth": {"tenantName": "demoproject", "passwordCredentials": {"username": "demouser", "password": "demopassword"}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Vary: X-Auth-Token
header: Date: Thu, 13 Sep 2012 20:27:36 GMT
header: Transfer-Encoding: chunked
connect: (128.52.128.15, 8774)
send: u'GET /v2/fa9dccdeadbeef23ae230969587a14bf/servers/detail HTTP/1.1\r\nHost: 10.0.0.15:8774\r\nx-auth-project-id: demoproject\r\nx-auth-token: deadbeef9998823afecc3d552525c34c\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-bf313e7d-771a-4c0b-ad08-c5da8161b30f
header: Content-Type: application/json
header: Content-Length: 15
header: Date: Thu, 13 Sep 2012 20:27:36 GMT
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
Images
In OpenStack the base operating system is usually copied from an "image" stored in the Glance image service. This is the most common case and results in an ephemeral instance which starts from a known templated state and loses all accumulated state on shutdown. It is also possible in special cases to put an operating system on a persistent "volume" in the Nova-Volume or Cinder volume system. This gives a more traditional persistent system that accumulates state which is preserved across restarts. To get a list of available images on your system run:
$ nova image-list
+--------------------------------------+------------------------------+--------+--------------------------------------+
| ID | Name | Status | Server |
+--------------------------------------+------------------------------+--------+--------------------------------------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 12.04 cloudimg amd64 | ACTIVE | |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 12.10 cloudimg amd64 | ACTIVE | |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | ACTIVE | |
+--------------------------------------+------------------------------+--------+--------------------------------------+
The displayed image attributes are:

ID: the automatically generated UUID of the image.

Name: a free-form, human-readable name given to the image.

Status: the status of the image. ACTIVE images are available for use.

Server: for images that are created as snapshots of a running instance, this is the UUID of the instance the snapshot derives from; for uploaded images it is blank.
Flavors
Virtual hardware templates are called "flavors" in OpenStack. The default install provides five flavors. Flavors can be managed by admin users by default (this is itself configurable: access may be delegated to other users by redefining the access controls for "compute_extension:flavormanage" in /etc/nova/policy.json on the compute-api server). To get a list of available flavors on your system, run:
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | | True | {} |
| 2 | m1.small | 2048 | 10 | 20 | | 1 | | True | {} |
| 3 | m1.medium | 4096 | 10 | 40 | | 2 | | True | {} |
| 4 | m1.large | 8192 | 10 | 80 | | 4 | | True | {} |
| 5 | m1.xlarge | 16384 | 10 | 160 | | 8 | | True | {} |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
The nova flavor-create command allows authorized users to create new flavors (an example follows the list of flavor elements below). Additional flavor manipulation commands can be shown with the command nova help | grep flavor. Flavors define a number of elements:
Name: a descriptive name. xx.size_name is conventional, though not required; some third party tools may rely on it.

Memory_MB: virtual machine memory in megabytes.

Disk: virtual root disk size in gigabytes. This is an ephemeral disk that the base image is copied into. When booting from a persistent volume it is not used. The "0" size is a special case which uses the native base image size as the size of the ephemeral root volume.

Ephemeral: specifies the size of a secondary ephemeral data disk. This is an empty, unformatted disk and exists only for the life of the instance.

Swap: optional swap space allocation for the instance.

VCPUs: number of virtual CPUs presented to the instance.

RXTX_Factor: optional property that allows created servers to have a different bandwidth cap than that defined in the network they are attached to. This factor is multiplied by the rxtx_base property of the network. The default value is 1.0 (that is, the same as the attached network).

Is_Public: Boolean value defining whether the flavor is available to all users or private to the tenant it was created in. Defaults to True.

extra_specs: additional optional restrictions on which compute nodes the flavor can run on. This is implemented as key/value pairs that must match against the corresponding key/value pairs on compute nodes. It can be used to implement things like special resources (for example, flavors that can only run on compute nodes with GPU hardware).
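For example, a command of the following form creates a new public flavor with 4096 MB of memory, a 20 GB root disk, and 2 VCPUs (the name and ID here are illustrative placeholders, not defaults shipped with OpenStack):

$ nova flavor-create m1.custom 6 4096 20 2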
Creating instances
Create Your Server with the nova Client
Procedure 8.1. To create and boot your server with the nova client:
1. Issue the following command. In the command, specify the server name, flavor ID, and image ID:
$ nova boot myUbuntuServer --image "3afe97b2-26dc-49c5-a2cc-a2fc8d80c001" --flavor 6
The command returns a list of server properties. The status field indicates whether the server is being built or is active. A status of BUILD indicates that your server is being built.
+-------------------------+--------------------------------------+
| Property | Value |
+-------------------------+--------------------------------------+
| OS-DCF:diskConfig | AUTO |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | ZbaYPZf6r2an |
| config_drive | |
| created | 2012-07-27T19:59:31Z |
| flavor | 8GB Standard Instance |
| hostId | |
| id | d8093de0-850f-4513-b202-7979de6c0d55 |
| image | Ubuntu 11.10 |
| metadata | {} |
| name | myUbuntuServer |
| progress | 0 |
| status | BUILD |
| tenant_id | 345789 |
| updated | 2012-07-27T19:59:31Z |
| user_id | 170454 |
+-------------------------+--------------------------------------+
2. Copy the server ID value from the id field in the output. You use this ID to get details for your server to determine if it built successfully. Copy the administrative password value from the adminPass field. You use this value to log into your server.
When booting from a volume, the --block-device-mapping flag to nova boot takes a value of the form dev_name=id:type:size:delete-on-terminate, with the following fields:

dev_name: A device name where the volume will be attached in the system at /dev/dev_name. This value is typically vda.

id: The ID of the volume to boot from, as shown in the output of nova volume-list.

type: This is either snap, which means that the volume was created from a snapshot, or anything other than snap (a blank string is valid). In the example above, the volume was not created from a snapshot, so we will leave this field blank in our example below.

size (GB): The size of the volume, in GB. It is safe to leave this blank and have the Compute service infer the size.

delete-on-terminate: A boolean to indicate whether the volume should be deleted when the instance is terminated. True can be specified as True or 1. False can be specified as False or 0.
Note
Because of bug #1008622, you must specify an image when booting from a volume, even though the image will not be used. The following example attempts to boot from the volume with ID 13 and will not delete it on terminate. Replace the --image flag with a valid image on your system, and the --key-name with a valid keypair name:
$ nova boot --image f4addd24-4e8a-46bb-b15d-fae2591f1a35 --flavor 2 --key-name mykey \ --block-device-mapping vda=13:::0 boot-from-vol-test
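Running nova keypair-add and redirecting its output to a file, for example:

$ nova keypair-add mykey > mykey.pem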
will create a key named mykey, which you can associate with instances. Save the file mykey.pem to a secure location, as it will allow root access to instances the mykey key is associated with.
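Alternatively, supplying an existing public key (a sketch; the exact flag spelling varies between novaclient versions, so check nova help keypair-add):

$ nova keypair-add --pub_key mykey.pub mykey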
will upload the existing public key mykey.pub and associate it with the name mykey. You will need to have the matching private key to access instances associated with this key.
When viewing the server information, you can see the metadata included on the metadata line:
$ nova show smallimage2
+------------------------+---------------------------------------------------------------+
| Property | Value |
+------------------------+---------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-05-16T20:48:23Z |
| flavor | m1.small |
| hostId | de0c201e62be88c61aeb52f51d91e147acf6cf2012bb57892e528487 |
| id | 8ec95524-7f43-4cce-a754-d3e5075bf915 |
| image | natty-image |
| key_name | |
| metadata | {u'description': u'Small test image', u'creator': u'joecool'} |
| name | smallimage2 |
| private network | 172.16.101.11 |
| progress | 0 |
| public network | 10.4.113.11 |
| status | ACTIVE |
| tenant_id | e830c2fbb7aa4586adf16d61c9b7e482 |
| updated | 2012-05-16T20:48:35Z |
| user_id | de3f4e99637743c7b6d27faca4b800a9 |
+------------------------+---------------------------------------------------------------+
A common example is automatically configuring an instance to register with a Puppet or Chef server. When launching instances in an OpenStack cloud, two technologies work together to support automated configuration of instances at boot time: user data and cloud-init.
User data
User data is the mechanism by which a user can pass information contained in a local file to an instance at launch time. The typical use case is to pass something like a shell script or a configuration file as user data. User data is sent using the --user-data /path/to/filename option when calling nova boot. The following example creates a text file and then sends the contents of that file as user data to the instance.
$ echo "This is some text" > myfile.txt $ nova boot --user-data ./myfile.txt --image myimage myinstance
The instance can retrieve user data by querying the metadata service, using either the OpenStack metadata API or the EC2 compatibility API:
$ curl http://169.254.169.254/2009-04-04/user-data This is some text $ curl http://169.254.169.254/openstack/2012-08-10/user_data This is some text
Note that the Compute service treats user data as a blob. While the example above used a text file, user data can be in any format.
Cloud-init
To do something useful with the user data, the virtual machine image must be configured to run a service on boot that retrieves the user data from the metadata service and takes some action based on the contents of the data. The cloud-init package was designed to do exactly this. In particular, cloud-init is compatible with the Compute metadata service as well as the Compute config drive.

Note that cloud-init is not an OpenStack technology. Rather, it is a package that is designed to support multiple cloud providers, so that the same virtual machine image can be used in different clouds without modification. Cloud-init is an open source project, and the source code is available on Launchpad. It is maintained by Canonical, the company which runs the Ubuntu project. All Ubuntu cloud images come pre-installed with cloud-init. However, cloud-init is not designed to be Ubuntu-specific, and has been successfully ported to Fedora.

We recommend installing cloud-init on images that you create to simplify the task of configuring your instances on boot. Even if you do not wish to use user data to configure instance behavior at boot time, cloud-init provides useful functionality such as copying the public key to an account (the ubuntu account by default on Ubuntu instances, the ec2-user account by default on Fedora instances). If you do not have cloud-init installed, you will need to manually configure your image to retrieve the public key from the metadata service on boot and copy it to the appropriate account.
Sending a shell script as user data has a similar effect to writing an /etc/rc.local script: it will be executed very late in the boot sequence as root.
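For example, a minimal sketch (the image and instance names are placeholders; this assumes the image has cloud-init installed):

$ cat > userdata.sh <<'EOF'
#!/bin/bash
# Executed by cloud-init as root late in boot; writes a marker file as a simple check.
echo "user data ran at $(date)" > /root/user-data-ran
EOF
$ nova boot --user-data ./userdata.sh --image myimage --flavor 1 myinstance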
Cloud-config format
Cloud-init supports a YAML-based config format that allows the user to configure a large number of options on a system. User data that begins with #cloud-config will be interpreted by cloud-init as cloud-config format.
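For example, a minimal cloud-config sketch (the image and instance names are placeholders, and the package name assumes a Debian- or Ubuntu-style image) that sets the hostname and installs a package:

$ cat > cloud-config.txt <<'EOF'
#cloud-config
hostname: demo-node
packages:
 - apache2
EOF
$ nova boot --user-data ./cloud-config.txt --image myimage --flavor 1 myinstance

The cloud-config fragments that follow configure a Puppet agent and a Chef client, respectively.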
agent:
  server: "puppetmaster.example.org"
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIICCTCCAXKgAwIBAgIBATANBgkqhkiG9w0BAQUFADANMQswCQYDVQQDDAJjYTAe
    Fw0xMDAyMTUxNzI5MjFaFw0xNTAyMTQxNzI5MjFaMA0xCzAJBgNVBAMMAmNhMIGf
    MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCu7Q40sm47/E1Pf+r8AYb/V/FWGPgc
    b014OmNoX7dgCxTDvps/h8Vw555PdAFsW5+QhsGr31IJNI3kSYprFQcYf7A8tNWu
    1MASW2CfaEiOEi9F1R3R4Qlz4ix+iNoHiUDTjazw/tZwEdxaQXQVLwgTGRwVa+aA
    qbutJKi93MILLwIDAQABo3kwdzA4BglghkgBhvhCAQ0EKxYpUHVwcGV0IFJ1Ynkv
    T3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwDwYDVR0TAQH/BAUwAwEB/zAd
    BgNVHQ4EFgQUu4+jHB+GYE5Vxo+ol1OAhevspjAwCwYDVR0PBAQDAgEGMA0GCSqG
    SIb3DQEBBQUAA4GBAH/rxlUIjwNb3n7TXJcDJ6MMHUlwjr03BDJXKb34Ulndkpaf
    +GAlzPXWa7bO908M9I8RnPfvtKnteLbvgTK+h+zX1XCty+S2EQWk29i2AdoqOTxb
    hppiGMp0tT5Havu4aceCXiy2crVcudj3NFciy8X66SoECemW9UYDCb9T5D0d
    -----END CERTIFICATE-----
chef:
  install_type: "packages"
  server_url: "https://chefserver.example.com:4000"
  node_name: "your-node-name"
  environment: "production"
  validation_name: "yourorg-validator"
  validation_key: |
    -----BEGIN RSA PRIVATE KEY-----
    YOUR-ORGS-VALIDATION-KEY-HERE
    -----END RSA PRIVATE KEY-----
  run_list:
    - "recipe[apache2]"
    - "role[db]"
  initial_attributes:
    apache:
      prefork:
        maxclients: 100
      keepalive: "off"
Config drive
Introduction
OpenStack can be configured to write metadata to a special configuration drive that will be attached to the instance when it boots. The instance can retrieve any information that would normally be available through the metadata service by mounting this disk and reading files from it.

One use case for the config drive is to pass networking configuration (e.g., IP address, netmask, gateway) when DHCP is not being used to assign IP addresses to instances. The instance's IP configuration can be transmitted using the config drive, which can be mounted and accessed before the instance's network settings have been configured.

The config drive can be used by any guest operating system that is capable of mounting an ISO 9660 or VFAT file system. This functionality should be available on all modern operating systems. In addition, an image that has been built with a recent version of the cloud-init package will be able to automatically access metadata passed via the config drive. The current version of cloud-init as of this writing (0.7.1) has been confirmed to work with Ubuntu, as well as Fedora-based images such as RHEL.

If an image does not have the cloud-init package installed, the image must be customized to run a script that mounts the config drive on boot, reads the data from the drive, and takes appropriate action such as adding the public key to an account. See below for details on how data is organized on the config drive.
To enable the config drive for an instance, pass the --config-drive=true flag to the nova boot command, for example:

$ nova boot --config-drive=true --image my-image-name --key-name mykey --flavor 1 --user-data ./my-user-data.txt myinstance --file /etc/network/interfaces=/home/myuser/instance-interfaces --file known_hosts=/home/myuser/.ssh/known_hosts --meta role=webservers --meta essential=false
You can also configure the Compute service to always create a config drive by setting the following option in /etc/nova/nova.conf:
force_config_drive=true
Note
As of this writing, there is no mechanism for an administrator to disable use of the config drive if a user passes the --config-drive=true flag to the nova boot command.
Note
The cirros 0.3.0 test image does not have support for the config drive. Support will be added in version 0.3.1. If your guest operating system does not use udev, then the /dev/disk/by-label directory will not be present. The blkid command can be used to identify the block device that corresponds to the config drive. For example, when booting the cirros image with the m1.tiny flavor, the device will be /dev/vdb:
# blkid -t LABEL="config-2" -odevice /dev/vdb
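Once the device has been identified, the config drive can be mounted like any other filesystem and its files read directly (a sketch; the mount point is arbitrary):

# mkdir -p /mnt/config
# mount /dev/vdb /mnt/config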
Note the effect of the --file /etc/network/interfaces=/home/myuser/instance-interfaces argument passed to the original nova boot command. The contents of this file are contained in the openstack/content/0000 file on the config drive, and the path is specified as /etc/network/interfaces in the meta_data.json file.
User data
The files openstack/2012-08-10/user_data, openstack/latest/user_data, ec2/2009-04-04/user-data, and ec2/latest/user-data, will only be present if the --user-data flag was passed to nova boot and will contain the contents of the user data file passed as the argument.
For legacy reasons, the config drive can be configured to use VFAT format instead of ISO 9660. It is unlikely that you would require VFAT format, since ISO 9660 is widely supported across operating systems. However, if you wish to use the VFAT format, add the following line to /etc/nova/nova.conf instead:
config_drive_format=vfat
A floating IP address can be allocated to the current tenant from a pool with the nova floating-ip-create command:

$ nova floating-ip-create nova
+--------------+-------------+----------+------+
| Ip | Instance Id | Fixed Ip | Pool |
+--------------+-------------+----------+------+
| 50.56.12.232 | None | None | nova |
+--------------+-------------+----------+------+
The floating IP address has been reserved, and can now be associated with an instance with the nova add-floating-ip command. For this example, we'll associate this IP address with an image called smallimage.
$ nova add-floating-ip smallimage 50.56.12.232
After the command completes, you can confirm that the IP address has been associated by running the nova floating-ip-list and nova list commands.
$ nova floating-ip-list
+--------------+--------------------------------------+------------+------+
| Ip | Instance Id | Fixed Ip | Pool |
+--------------+--------------------------------------+------------+------+
| 50.56.12.232 | 542235df-8ba4-4d08-90c9-b79f5a77c04f | 10.4.113.9 | nova |
+--------------+--------------------------------------+------------+------+
$ nova list
+--------------------------------------+------------+--------+--------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------+--------+--------------------------------------------------------+
| 4bb825ea-ea43-4771-a574-ca86ab429dcb | tinyimage2 | ACTIVE | public=10.4.113.6; private=172.16.101.6 |
| 542235df-8ba4-4d08-90c9-b79f5a77c04f | smallimage | ACTIVE | public=10.4.113.9, 50.56.12.232; private=172.16.101.9 |
+--------------------------------------+------------+--------+--------------------------------------------------------+
The first table shows that the 50.56.12.232 is now associated with the smallimage instance ID, and the second table shows the IP address included under smallimage's public IP addresses.
The address can later be disassociated from the instance with the nova remove-floating-ip command. After that command completes, you can confirm that the IP address is no longer associated by running the nova floating-ip-list and nova list commands.
$ nova floating-ip-list
+--------------+-------------+----------+------+
| Ip | Instance Id | Fixed Ip | Pool |
+--------------+-------------+----------+------+
| 50.56.12.232 | None | None | nova |
+--------------+-------------+----------+------+
$ nova list
+--------------------------------------+------------+--------+------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------+--------+------------------------------------------+
| 4bb825ea-ea43-4771-a574-ca86ab429dcb | tinyimage2 | ACTIVE | public=10.4.113.6; private=172.16.101.6 |
| 542235df-8ba4-4d08-90c9-b79f5a77c04f | smallimage | ACTIVE | public=10.4.113.9; private=172.16.101.9 |
+--------------------------------------+------------+-------+-----------------------------------------+
You can now de-allocate the floating IP address, returning it to the pool so that it can be used by another tenant.
$ nova floating-ip-delete 50.56.12.232
In this example, 50.56.12.232 was the only IP address allocated to this tenant. Running nova floating-ip-list after the de-allocation is complete will return no results.
In this example, the default security group has been modified to allow HTTP traffic on the instance by permitting TCP traffic on Port 80.
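A rule like that is added to the default group with the nova secgroup-add-rule command; for example, to permit TCP port 80 from any address:

$ nova secgroup-add-rule default tcp 80 80 0.0.0.0/0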
| secure1 | Test security group |
+---------+---------------------+
$ nova secgroup-list
+---------+---------------------+
| Name | Description |
+---------+---------------------+
| default | default |
| secure1 | Test security group |
+---------+---------------------+
Security groups can be deleted with nova secgroup-delete. The default security group cannot be deleted. The default security group contains these initial settings:

All traffic originating from the instances (outbound traffic) is allowed.

All traffic destined to instances (inbound traffic) is denied.

All instances inside the group are allowed to talk to each other.
Note
You can add extra rules to the default security group, but rules are ingress only at this time; the default egress behaviour cannot be changed by adding rules. In the following example, the group secure1 is deleted. When you view the security group list, it no longer appears.
$ nova secgroup-delete secure1 $ nova secgroup-list +---------+-------------+ | Name | Description | +---------+-------------+ | default | default | +---------+-------------+
Note
It is not possible to change the default outbound behaviour.

Every security group rule is a policy which allows you to specify inbound connections that are allowed to access the instance, by source address, destination port, and IP protocol (TCP, UDP or ICMP). Currently, IPv6 and protocols other than these cannot be managed with security rules, so such traffic is permitted by default. To manage it, you can deploy a firewall in front of your OpenStack cloud to control other types of traffic. The command requires the following arguments for both TCP and UDP rules:

<secgroup> ID of security group.
<ip_proto> IP protocol (icmp, tcp, udp).

<from_port> Port at start of range.

<to_port> Port at end of range.

<cidr> CIDR for address range.

For ICMP rules, instead of specifying a begin and end port, you specify the allowed ICMP code and ICMP type:

<secgroup> ID of security group.

<ip_proto> IP protocol (with icmp specified).

<ICMP_code> The ICMP code.

<ICMP_type> The ICMP type.

<cidr> CIDR for the source address range.
Note
Entering "-1" for both code and type indicates that all ICMP codes and types should be allowed.
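For example, to allow all ICMP traffic from any address into the default security group (using the argument order described above):

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0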
In order to allow any IP address to ping an instance inside the default security group (Code 0, Type 8 for the ECHO request):
$ nova secgroup-add-rule default icmp 0 8 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 80 | 80 | 0.0.0.0/0 | |
| icmp | 0 | 8 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
In order to delete a rule, you need to specify the exact same arguments you used to create it:

<secgroup> ID of security group.

<ip_proto> IP protocol (icmp, tcp, udp).

<from_port> Port at start of range.

<to_port> Port at end of range.

<cidr> CIDR for address range.
$ nova secgroup-delete-rule default tcp 80 80 0.0.0.0/0
Manage Volumes
Depending on the setup of your cloud provider, you may be given an endpoint for managing volumes directly, or volume management may be provided as an extension of the Compute API. In either case, you can use the nova CLI to manage volumes with the commands listed below; an example workflow follows the list.
volume-attach            Attach a volume to a server.
volume-create            Add a new volume.
volume-delete            Remove a volume.
volume-detach            Detach a volume from a server.
volume-list              List all the volumes.
volume-show              Show details about a volume.
volume-snapshot-create   Add a new snapshot.
volume-snapshot-delete   Remove a snapshot.
volume-snapshot-list     List all the snapshots.
volume-snapshot-show     Show details about a snapshot.
volume-type-create       Create a new volume type.
volume-type-delete       Delete a specific volume type.
volume-type-list         Print a list of available 'volume types'.
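For example, a typical sketch of that workflow (the volume name, size, instance name, and device path are placeholders, and the flag spelling --display_name versus --display-name varies between client versions) creates a 10 GB volume and attaches it to a running instance:

$ nova volume-create --display_name myvolume 10
$ nova volume-list
$ nova volume-attach myinstance <volume-id> /dev/vdc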
Commands Used
This process uses the following commands:

nova resize*
nova rebuild
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 |
| 2 | m1.small | 2048 | 10 | 20 | | 1 | 1.0 |
| 3 | m1.medium | 4096 | 10 | 40 | | 2 | 1.0 |
| 4 | m1.large | 8192 | 10 | 80 | | 4 | 1.0 |
| 5 | m1.xlarge | 16384 | 10 | 160 | | 8 | 1.0 |
+----+-----------+-----------+------+-----------+------+-------+-------------+
In this example, we'll take a server originally configured with the m1.tiny flavor and resize it to m1.small.
$ nova show acdfb2c4-38e6-49a9-ae1c-50182fc47e35
+------------------------+----------------------------------------------------------+
| Property | Value |
+------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-05-09T15:47:48Z |
| flavor | m1.tiny |
| hostId | de0c201e62be88c61aeb52f51d91e147acf6cf2012bb57892e528487 |
| id | acdfb2c4-38e6-49a9-ae1c-50182fc47e35 |
| image | maverick-image |
| key_name | |
| metadata | {} |
| name | resize-demo |
| private network | 172.16.101.6 |
| progress | 0 |
| public network | 10.4.113.6 |
| status | ACTIVE |
| tenant_id | e830c2fbb7aa4586adf16d61c9b7e482 |
| updated | 2012-05-09T15:47:59Z |
+------------------------+----------------------------------------------------------+
Use the resize command with the server's ID (6beefcf7-9de6-48b3-9ba9-e11b343189b3) and the ID of the desired flavor (2):
$ nova resize 6beefcf7-9de6-48b3-9ba9-e11b343189b3 2
When the resize operation is completed, the status displayed is VERIFY_RESIZE. This prompts the user to verify that the operation has been successful; to confirm:
$ nova resize-confirm 6beefcf7-9de6-48b3-9ba9-e11b343189b3
However, if the operation has not worked as expected, you can revert it by doing:
$ nova resize-revert 6beefcf7-9de6-48b3-9ba9-e11b343189b3
Terminate an Instance
When you no longer need an instance, use the nova delete command to terminate it. You can use the instance name or the ID string. You will not receive a notification indicating that the instance has been deleted, but if you run the nova list command, the instance will no longer appear in the list. In this example, we will delete the instance tinyimage, which is experiencing an error condition.
$ nova list
+--------------------------------------+------------+--------+---------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------+--------+---------------------------------------------+
| 30ed8924-f1a5-49c1-8944-b881446a6a51 | tinyimage | ERROR | public=10.4.113.11; private=172.16.101.11 |
| 4bb825ea-ea43-4771-a574-ca86ab429dcb | tinyimage2 | ACTIVE | public=10.4.113.6; private=172.16.101.6 |
| 542235df-8ba4-4d08-90c9-b79f5a77c04f | smallimage | ACTIVE | public=10.4.113.9; private=172.16.101.9 |
+--------------------------------------+------------+--------+---------------------------------------------+
$ nova delete tinyimage
$ nova list
+--------------------------------------+------------+--------+---------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------+--------+---------------------------------------------+
| 4bb825ea-ea43-4771-a574-ca86ab429dcb | tinyimage2 | ACTIVE | public=10.4.113.6; private=172.16.101.6 |
| 542235df-8ba4-4d08-90c9-b79f5a77c04f | smallimage | ACTIVE | public=10.4.113.9; private=172.16.101.9 |
+--------------------------------------+------------+--------+---------------------------------------------+
9. Hypervisors
This section assumes you have a working installation of OpenStack Compute and want to select a particular hypervisor or run with multiple hypervisors. Before you try to get a VM running within OpenStack Compute, be sure you have installed a hypervisor and used the hypervisor's documentation to run a test VM and get it working.
Selecting a Hypervisor
OpenStack Compute supports many hypervisors, which may make it difficult to choose one unless you are already familiar with a particular product. Most installations use only a single hypervisor; however, as of the Folsom release, it is possible to use the ComputeFilter and ImagePropertiesFilter to schedule to different hypervisors within the same installation. The following links provide additional information for choosing a hypervisor. Refer to http://wiki.openstack.org/HypervisorSupportMatrix for a detailed list of features and support across the hypervisors.

Here is a list of the supported hypervisors with links to a relevant web site for configuration and use:

KVM - Kernel-based Virtual Machine. The virtual disk formats that it supports are inherited from QEMU, since it uses a modified QEMU program to launch the virtual machine. The supported formats include raw images, qcow2, and VMware formats.

LXC - Linux Containers (through libvirt), used to run Linux-based virtual machines.

QEMU - Quick EMUlator, generally only used for development purposes.

UML - User Mode Linux, generally only used for development purposes.

VMWare ESX/ESXi 4.1 update 1, runs VMWare-based Linux and Windows images through a connection with the ESX server.

Xen - XenServer, Xen Cloud Platform (XCP), used to run Linux or Windows virtual machines. You must install the nova-compute service in a para-virtualized VM.

PowerVM - Server virtualization with IBM PowerVM, used to run AIX, IBM i and Linux environments on IBM POWER technology.

Hyper-V - Server virtualization with Microsoft's Hyper-V, used to run Windows, Linux, and FreeBSD virtual machines. Runs nova-compute natively on the Windows virtualization platform.
Here are the nova.conf options that are used to configure the compute node.
libvirt_cpu_model=<None>
libvirt_disk_prefix=<None>
libvirt_inject_key=true
libvirt_images_type=default
Configuration option=Default value    (Type) Description

...fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver, rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver, sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver"
libvirt_wait_soft_reboot_seconds=120    (IntOpt) Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window.
limit_cpu_features=false    (BoolOpt) Used by Hyper-V
remove_unused_base_images=true    (BoolOpt) Indicates whether unused base images should be removed
remove_unused_original_minimum_age_seconds=86400    (IntOpt) Unused unresized base images younger than this will not be removed
remove_unused_resized_minimum_age_seconds=3600    (IntOpt) Unused resized base images younger than this will not be removed
rescue_image_id=<None>    (StrOpt) Rescue ami image
rescue_kernel_id=<None>    (StrOpt) Rescue aki image
rescue_ramdisk_id=<None>    (StrOpt) Rescue ari image
snapshot_image_format=<None>    (StrOpt) Snapshot image format (valid options are: raw, qcow2, vmdk, vdi). Defaults to same as source image
use_usb_tablet=true    (BoolOpt) Sync virtual and real mouse cursors in Windows VMs

libvirt integration
libvirt_ovs_bridge=br-int    (StrOpt) Name of Integration Bridge used by Open vSwitch
libvirt_use_virtio_for_bridges=false    (BoolOpt) Use virtio for bridge interfaces

VMWare integration
vmwareapi_wsdl_loc=<None>    (StrOpt) VIM Service WSDL Location e.g. http://<server>/vimService.wsdl, due to a bug in vSphere ESX 4.1 default wsdl.
vmware_vif_driver=nova.virt.vmwareapi.vif.VMWareVlanBridgeDriver    (StrOpt) The VMWare VIF driver to configure the VIFs.
vmwareapi_api_retry_count=10    (FloatOpt) The number of times we retry on failures, e.g., socket error, etc. Used only if compute_driver is vmwareapi.VMWareESXDriver.
vmwareapi_host_ip=<None>    (StrOpt) URL for connection to VMWare ESX host. Required if compute_driver is vmwareapi.VMWareESXDriver.
vmwareapi_host_password=<None>    (StrOpt) Password for connection to VMWare ESX host. Used only if compute_driver is vmwareapi.VMWareESXDriver.
vmwareapi_host_username=<None>    (StrOpt) Username for connection to VMWare ESX host. Used only if compute_driver is vmwareapi.VMWareESXDriver.
vmwareapi_task_poll_interval=5.0    (FloatOpt) The interval used for polling of remote tasks. Used only if compute_driver is vmwareapi.VMWareESXDriver.
(StrOpt) Physical ethernet adapter name for vlan networking

PowerVM integration
powervm_mgr_type    (StrOpt) PowerVM system manager type (ivm, hmc)
powervm_mgr    (StrOpt) PowerVM manager host or ip
(StrOpt) PowerVM VIOS host or ip if different from manager
powervm_mgr_user    (StrOpt) PowerVM manager user name
powervm_mgr_passwd    (StrOpt) PowerVM manager user password
powervm_img_remote_path    (StrOpt) PowerVM image remote path. Used to copy and store images from Glance on the PowerVM VIOS LPAR.
powervm_img_local_path    (StrOpt) Local directory on the compute host to download glance images to.
KVM
KVM is configured as the default hypervisor for Compute.
Note
There are several sections about hypervisor selection in this document. If you are reading this document linearly, you do not want to load the KVM module prior to installing nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, setting the correct permissions on the /dev/kvm device node. To enable KVM explicitly, add the following configuration options to /etc/nova/nova.conf:
compute_driver=libvirt.LibvirtDriver libvirt_type=kvm
The KVM hypervisor supports the following virtual machine image formats:

Raw

QEMU Copy-on-write (qcow2)

VMWare virtual machine disk format (vmdk)

The rest of this section describes how to enable KVM on your system. You may also wish to consult distribution-specific documentation:

Fedora: Getting started with virtualization from the Fedora project wiki.

Ubuntu: KVM/Installation from the Community Ubuntu documentation.

Debian: Virtualization with KVM from the Debian handbook.

RHEL: Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide.

openSUSE: Installing KVM from the openSUSE Virtualization with KVM manual.

SLES: Installing KVM from the SUSE Linux Enterprise Server Virtualization with KVM manual.
If you are running on Ubuntu, use the kvm-ok command to check if your processor has VT support, it is enabled in the BIOS, and KVM is installed properly, as root:
# kvm-ok
In the case that KVM acceleration is not supported, Compute should be configured to use a different hypervisor, such as QEMU or Xen. On distributions that don't have kvm-ok, you can check if your processor has VT support by looking at the processor flags in the /proc/cpuinfo file. For Intel processors, look for the vmx flag, and for AMD processors, look for the svm flag. A simple way to check is to run the following command and see if there is any output:
$ egrep '(vmx|svm)' --color=always /proc/cpuinfo
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the above command produced no output, you may need to reboot your machine, enter the system BIOS, and enable the VT option.
Enabling KVM
KVM requires the kvm and either kvm-intel or kvm-amd modules to be loaded. This may have been configured automatically on your distribution when KVM is installed. You can check that they have been loaded using lsmod, as follows, with expected output for Intel-based processors:
$ lsmod | grep kvm
kvm_intel             137721  9
kvm                   415459  1 kvm_intel
The following sections describe how to load the kernel modules for Intel-based and AMD-based processors if they were not loaded automatically by your distribution's KVM installation process.
Intel-based processors
If your compute host is Intel-based, run the following as root to load the kernel modules:
# modprobe kvm # modprobe kvm-intel
Add the following lines to /etc/modules so that these modules will load on reboot:
kvm kvm-intel
AMD-based processors
If your compute host is AMD-based, run the following as root to load the kernel modules:
# modprobe kvm # modprobe kvm-amd
Add the following lines to /etc/modules so that these modules will load on reboot:
kvm kvm-amd
Host passthrough
If your nova.conf contains libvirt_cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. The difference from host-model is that,
instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best possible performance, and can be important to some applications which check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to a host with an exactly matching CPU.
Custom
If your nova.conf contains libvirt_cpu_mode=custom, you can explicitly specify one of the supported named models using the libvirt_cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf should contain:
libvirt_cpu_mode=custom libvirt_cpu_model=Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
If your nova.conf contains libvirt_cpu_mode=none, then libvirt will not specify any CPU model at all. It will leave it up to the hypervisor to choose the default model. This setting is equivalent to the Compute service behavior prior to the Folsom release.
Troubleshooting
Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in /var/log/nova/nova-compute.log
libvirtError: internal error no supported architecture for os type 'hvm'
This is a symptom that the KVM kernel modules have not been loaded.

If you cannot start VMs after installation without rebooting, it is possible that the permissions are not correct. This can happen if you load the KVM module before you have installed nova-compute. To check the permissions, run ls -l /dev/kvm to see whether the group is set to kvm. If not, run sudo udevadm trigger.
QEMU
From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment. The typical use cases for QEMU are:

Running on older hardware that lacks virtualization support.

Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.

KVM requires hardware support for acceleration. If hardware support is not available (e.g., if you are running Compute inside of a VM and the hypervisor does not expose the
required hardware support), you can use QEMU instead. KVM and QEMU have the same level of support in OpenStack, but KVM will provide better performance. To enable QEMU:
compute_driver=libvirt.LibvirtDriver libvirt_type=qemu
For some operations you may also have to install the guestmount utility:
$> sudo apt-get install guestmount
The QEMU hypervisor supports the following virtual machine image formats:

Raw

QEMU Copy-on-write (qcow2)

VMWare virtual machine disk format (vmdk)
Note
The second command, setsebool, may take a while.
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
$> setsebool -P virt_use_execmem on
$> sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
$> sudo service libvirtd restart
The above connection details are used by the OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer
management console, to your XenServer or XCP box. Note that these settings are generally unique to each hypervisor host, as the use of the host internal management network IP address (169.254.0.1) will cause features such as live-migration to break.

OpenStack with XenAPI supports the following virtual machine image formats:

Raw

VHD (in a gzipped tarball)

It is possible to manage Xen using libvirt. This would be necessary for any Xen-based system that isn't using the XCP toolstack, such as SUSE Linux or Oracle Linux. Unfortunately, this is not well-tested or supported as of the Essex release. To experiment using Xen through libvirt, add the following configuration options to /etc/nova/nova.conf:
compute_driver=libvirt.LibvirtDriver libvirt_type=xen
The rest of this section describes Xen, XCP, and XenServer, the differences between them, and how to use them with OpenStack. Xen's architecture is different from KVM's in important ways, and we discuss those differences and when each might make sense in your OpenStack cloud.
Xen terminology
Xen is a hypervisor. It provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by Xen.org, a cross-industry organization. Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you're not clear which tool stack you are using. Make sure you know what tool stack you want before you get started.

Xen Cloud Platform (XCP) is an open source (GPLv2) tool stack for Xen. It is designed specifically as a platform for enterprise and cloud computing, and is well integrated with OpenStack. XCP is available both as a binary distribution, installed from an ISO, and from Linux distributions, such as xcp-xapi in Ubuntu. The current versions of XCP available in Linux distributions do not yet include all the features available in the binary distribution of XCP.

Citrix XenServer is a commercial product. It is based on XCP, and exposes the same tool stack and management API. As an analogy, think of XenServer being based on XCP in the way that Red Hat Enterprise Linux is based on Fedora. XenServer has a free version (which is very similar to XCP) and paid-for versions with additional features enabled. Citrix provides support for XenServer, but as of July 2012, they do not provide any support for XCP. For a comparison between these products see the XCP Feature Matrix.

Both XenServer and XCP include Xen, Linux, and the primary control daemon known as xapi. The API shared between XCP and XenServer is called XenAPI. OpenStack usually refers to XenAPI, to indicate that the integration works equally well on XCP and XenServer.
Sometimes, a careless person will refer to XenServer specifically, but you can be reasonably confident that anything that works on XenServer will also work on the latest version of XCP. Read the XenAPI Object Model Overview for definitions of XenAPI specific terms such as SR, VDI, VIF and PIF.
Key things to note:

The hypervisor: Xen.

Domain 0: runs xapi and some small pieces from OpenStack (some xapi plugins and network isolation rules). The majority of this is provided by XenServer or XCP (or yourself using Kronos).

OpenStack domU: The nova-compute code runs in a paravirtualized virtual machine, running on the host under management. Each host runs a local instance of nova-compute. It will often also be running nova-network (depending on your network mode). In this case, nova-network is managing the addresses given to the tenant VMs through DHCP.

Nova uses the XenAPI Python library to talk to xapi, and it uses the Host Internal Management Network to reach from the domU to dom0 without leaving the host.

Some notes on the networking:

The above diagram assumes FlatDHCP networking (the DevStack default).

There are three main OpenStack networks: Management traffic (RabbitMQ, MySQL, etc.), Tenant network traffic (controlled by nova-network) and Public traffic (floating IPs, public API end points).

Each network that leaves the host has been put through a separate physical network interface. This is the simplest model, but it's not the only one possible. You may choose to isolate this traffic using VLANs instead, for example.
XenAPI pools
Before OpenStack 2012.1 ("Essex"), all XenServer machines used with OpenStack were standalone machines, usually only using local storage.
However in 2012.1 and later, the host-aggregates feature allows you to create pools of XenServer hosts (configuring shared storage is still an out of band activity). This move will enable live migration when using shared storage.
Create a paravirtualized virtual machine that can run the OpenStack compute code.

Install and configure nova-compute in the above virtual machine.

For further information on how to perform these steps, look at how DevStack performs the last three steps when doing developer deployments. For more information on DevStack, take a look at the DevStack and XenServer Readme. More information on the first step can be found in the XenServer multi-tenancy protection doc. More information on how to install the XenAPI plugins can be found in the XenAPI plugins Readme.
Set the uuid and configuration. Even if an NFS mount point isn't local storage, you must specify "local-storage-iso".
# xe sr-param-set uuid=[iso sr uuid] other-config:i18n-key=local-storage-iso
Now, make sure the host-uuid from "xe pbd-list" equals the uuid of the host you found earlier
# xe pbd-list sr-uuid=[iso sr uuid]
You should now be able to add images via the OpenStack Image Registry, with disk_format=iso, and boot them in OpenStack Compute.
glance add name=fedora_iso disk_format=iso container_format=bare < Fedora-16x86_64-netinst.iso
Further reading
Here are some of the resources available to learn more about Xen:

Citrix XenServer official documentation: http://docs.vmd.citrix.com/XenServer.

What is Xen? by Xen.org: http://xen.org/files/Marketing/WhatisXen.pdf.
Xen Hypervisor project: http://xen.org/products/xenhyp.html.

XCP project: http://xen.org/products/cloudxen.html.

Further XenServer and OpenStack information: http://wiki.openstack.org/XenServer.
LXC (Linux containers)

Note
Some OpenStack Compute features may be missing when running with LXC as the hypervisor. See the hypervisor support matrix for details. To enable LXC, ensure the following options are set in /etc/nova/nova.conf on all hosts running the nova-compute service.
compute_driver=libvirt.LibvirtDriver libvirt_type=lxc
On Ubuntu 12.04, enable LXC support in OpenStack by installing the nova-compute-lxc package.
Prerequisites
You will need to install the following software:

python-suds: This software is needed by the nova-compute service. If it is not installed, the nova-compute service shuts down with the message: "Unable to import suds".

SSH server

Tomcat server

On Ubuntu, these packages can be installed by running (as root):
# apt-get install python-suds openssh-server tomcat6
PowerVM
Introduction
The PowerVM compute driver connects to an Integrated Virtualization Manager (IVM) to perform PowerVM Logical Partition (LPAR) deployment and management. The driver supports file-based deployment using images from Glance.
Note
Hardware Management Console (HMC) is not yet supported. For more detailed information about the PowerVM Virtualization system, refer to the IBM Redbook publication: IBM PowerVM Virtualization Introduction and Configuration.
Configuration
To enable the PowerVM compute driver, add the following configuration options to /etc/nova/nova.conf:
compute_driver=nova.virt.powervm.PowerVMDriver
powervm_mgr_type=ivm
powervm_mgr=powervm_hostname_or_ip_address
powervm_mgr_user=padmin
powervm_mgr_passwd=padmin_user_password
powervm_img_remote_path=/path/to/remote/image/directory
powervm_img_local_path=/path/to/local/image/directory/on/compute/host
Windows Clustering Services are not needed for functionality within the OpenStack infrastructure. The use of the Windows Server 2012 platform is recommended for the best experience; it is the platform for active development. The following Windows platforms have been tested as compute nodes:

Windows Server 2008r2: Both Server and Server Core with the Hyper-V role enabled (Shared Nothing Live migration is not supported using 2008r2)

Windows Server 2012: Server and Core (with the Hyper-V role enabled), and Hyper-V Server
Hyper-V Configuration
The following sections discuss how to prepare the Windows Hyper-V node for operation as an OpenStack Compute node. Unless stated otherwise, any configuration information should work for both the Windows 2008r2 and 2012 platforms.

Local Storage Considerations

The Hyper-V compute node needs to have ample storage for storing the virtual machine images running on the compute nodes. You may use a single volume for all, or partition it into an OS volume and a VM volume. This is up to the individual deploying the node to decide.
Configure NTP
Network time services must be configured to ensure proper operation of the Hyper-V compute node. To set network time on your Hyper-V host, you will need to run the following commands:
C:\net stop w32time
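The remaining steps typically configure an NTP peer and restart the time service; a sketch, assuming the public pool.ntp.org servers are reachable (substitute your own time source):

C:\w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
C:\net start w32time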
To use live migration, the following requirements must be met:

Hyper-V 2012 RC or Windows Server 2012 RC with the Hyper-V role enabled

A Windows domain controller with the Hyper-V compute nodes as domain members

The instances_path command line option/flag needs to be the same on all hosts

The openstack-compute service deployed with the setup must run with domain credentials. You can set the service credentials with:
C:\sc config openstack-compute obj="DOMAIN\username" password="password"
How to setup live migration on Hyper-V

To enable shared nothing live migration, run the three PowerShell commands below on each Hyper-V host:
PS C:\> Enable-VMMigration
PS C:\> Set-VMMigrationNetwork IP_ADDRESS
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
Note
Please replace the IP_ADDRESS with the address of the interface which will provide the virtual switching for nova-network.

Additional Reading

Here's an article that clarifies the various live migration options in Hyper-V: http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html
"Python Requirements">
Python

Python 2.7.3 must be installed prior to installing the OpenStack Compute Driver on the Hyper-V server. Download and then install the MSI for Windows here: http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi Install the MSI accepting the default options. The installation will put Python in C:/python27.

Setuptools
You will require pip to install the necessary Python module dependencies. The installer will install under the C:\python27 directory structure. Setuptools for Python 2.7 for Windows can be downloaded from here: http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe

Python Dependencies

The following packages need to be downloaded and manually installed onto the Compute Node:

MySQL-python: http://codegood.com/download/10/

pywin32: Download and run the installer from the following location: http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe

greenlet: Select the link below: http://www.lfd.uci.edu/~gohlke/pythonlibs/ You will need to scroll down to the greenlet section for the following file: greenlet-0.4.0.win32-py2.7.exe Click on the file to initiate the download. Once the download is complete, run the installer.

The following Python packages need to be installed via easy_install or pip. Run the following, replacing PACKAGE_NAME with each of the packages below:
C:\c:\Python27\Scripts\pip.exe install PACKAGE_NAME
netaddr
paste
paste-deploy
prettytable
python-cinderclient
python-glanceclient
python-keystoneclient
repoze.lru
routes
sqlalchemy
simplejson
warlock
webob
wmi
Installing Nova-compute
Using git on Windows to retrieve source

Git can be used to download the necessary source code. The installer to run Git on Windows can be downloaded here: http://code.google.com/p/msysgit/downloads/list?q=full+installer+official+git

Download the latest installer. Once the download is complete, double click the installer and follow the prompts in the installation wizard. The defaults should be acceptable for the needs of this document. Once installed, you may run the following to clone the Nova code.
C:\git.exe clone https://github.com/openstack/nova.git
Configuring Nova.conf
The nova.conf file must be placed in C:\etc\nova for running OpenStack on Hyper-V. Below is a sample nova.conf for Windows:
[DEFAULT]
verbose=true
force_raw_images=false
auth_strategy=keystone
fake_network=true
vswitch_name=openstack-br
logdir=c:\openstack\
state_path=c:\openstack\
lock_path=c:\openstack\
instances_path=e:\Hyper-V\instances
policy_file=C:\Program Files (x86)\OpenStack\nova\etc\nova\policy.json
api_paste_config=c:\openstack\nova\etc\nova\api-paste.ini
rabbit_host=IP_ADDRESS
glance_api_servers=IP_ADDRESS:9292
image_service=nova.image.glance.GlanceImageService
sql_connection=mysql://nova:passwd@IP_ADDRESS/nova
instances_shared_storage=false
limit_cpu_features=true
compute_driver=nova.virt.hyperv.driver.HyperVDriver
volume_api_class=nova.volume.cinder.API
Networking Options
This section offers a brief overview of each concept in networking for Compute. With the Folsom release, you can choose either to install and configure nova-network for networking between VMs, or to use the Networking service (quantum) for networking. Refer to the Network Administration Guide to configure Compute networking options with Quantum.

For each VM instance, Compute assigns a private IP address. (Currently, Compute with nova-network only supports Linux bridge networking, which allows the virtual interfaces to connect to the outside network through the physical interface.) The network controller with nova-network provides virtual networks to enable compute servers to interact with each other and with the public network. Currently, Compute with nova-network supports three kinds of networks, implemented in three Network Manager types:

Flat Network Manager

Flat DHCP Network Manager

VLAN Network Manager

The three kinds of networks can co-exist in a cloud system. However, since you can't yet select the type of network for a given project, you cannot configure more than one type of network in a given Compute installation.
Note
All of the networking options require network connectivity to be already set up between OpenStack physical nodes. OpenStack will not configure any physical network interfaces. OpenStack will automatically create all network bridges (i.e., br100) and VM virtual interfaces.

All machines must have a public and internal network interface (controlled by the options: public_interface for the public interface, and flat_interface and vlan_interface for the internal interface with flat / VLAN managers).

The internal network interface is used for communication with VMs; it shouldn't have an IP address attached to it before OpenStack installation (it serves merely as a fabric where the actual endpoints are VMs and dnsmasq). Also, the internal network interface must be put in promiscuous mode, because it will have to receive packets whose target MAC address is of the guest VM, not of the host.

All the network managers configure the network using network drivers, e.g. the Linux L3 driver (l3.py and linux_net.py), which makes use of iptables, route and other
network management facilities, and also of libvirt's network filtering facilities. The driver isn't tied to any particular network manager; all network managers use the same driver. The driver usually initializes (creates bridges and so on) only when the first VM lands on this host node.

All network managers operate in either single-host or multi-host mode. This choice greatly influences the network configuration. In single-host mode, there is just one instance of nova-network, which is used as a default gateway for VMs and hosts a single DHCP server (dnsmasq), whereas in multi-host mode every compute node has its own nova-network. In any case, all traffic between VMs and the outer world flows through nova-network. There are pros and cons to both modes; read more in Existing High Availability Options.

Compute makes a distinction between fixed IPs and floating IPs for VM instances. Fixed IPs are IP addresses that are assigned to an instance on creation and stay the same until the instance is explicitly terminated. By contrast, floating IPs are addresses that can be dynamically associated with an instance. A floating IP address can be disassociated and associated with another instance at any time. A user can reserve a floating IP for their project.

In Flat Mode, a network administrator specifies a subnet. The IP addresses for VM instances are grabbed from the subnet, and then injected into the image on launch. Each instance receives a fixed IP address from the pool of available addresses. A system administrator may create the Linux networking bridge (typically named br100, although this is configurable) on the systems running the nova-network service. All instances of the system are attached to the same bridge, configured manually by the network administrator.
Note
The configuration injection currently only works on Linux-style systems that keep networking configuration in /etc/network/interfaces.

In Flat DHCP Mode, OpenStack starts a DHCP server (dnsmasq) to hand out IP addresses to VM instances from the specified subnet, in addition to manually configuring the networking bridge. IP addresses for VM instances are grabbed from a subnet specified by the network administrator. Like Flat Mode, all instances are attached to a single bridge on the compute node. In addition, a DHCP server is running to configure instances (depending on single-/multi-host mode, alongside each nova-network). In this mode, Compute does a bit more configuration in that it attempts to bridge into an ethernet device (flat_interface, eth0 by default). It will also run and configure dnsmasq as a DHCP server listening on this bridge, usually on IP address 10.0.0.1 (see DHCP server: dnsmasq). For every instance, nova allocates a fixed IP address and configures dnsmasq with the MAC/IP pair for the VM; i.e. dnsmasq doesn't take part in the IP address allocation process, it only hands out IPs according to the mapping done by nova. Instances receive their fixed IPs by doing a DHCPDISCOVER. These IPs are not assigned to any of the host's network interfaces, only to the VM's guest-side interface.

In any setup with flat networking, the hosts running nova-network are responsible for forwarding traffic from the private network configured with the fixed_range configuration option in nova.conf. Such hosts need to have br100 configured and physically connected to any other nodes that are hosting VMs. You must set the flat_network_bridge option or create networks with the bridge parameter in order to avoid raising an error. Compute nodes have iptables/ebtables entries created per project and instance to protect against IP/MAC address spoofing and ARP poisoning.
Note
In single-host Flat DHCP mode you will be able to ping VMs via their fixed IP from the nova-network node, but you will not be able to ping them from the compute nodes. This is expected behavior.

VLAN Network Mode is the default mode for OpenStack Compute. In this mode, Compute creates a VLAN and bridge for each project. For multiple-machine installations, VLAN Network Mode requires a switch that supports VLAN tagging (IEEE 802.1Q). The project gets a range of private IPs that are only accessible from inside the VLAN. In order for a user to access the instances in their project, a special VPN instance (code named cloudpipe) needs to be created. Compute generates a certificate and key for the user to access the VPN and starts the VPN automatically. It provides a private network segment for each project's instances that can be accessed via a dedicated VPN connection from the Internet. In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP server is started for each VLAN to hand out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. OpenStack Compute creates the Linux networking bridges and VLANs when required.
Note
With the default Compute settings, once a virtual machine instance is destroyed, it can take some time for the IP address associated with the destroyed instance to become available for assignment to a new instance. The force_dhcp_release=True configuration option, when set, causes the Compute service to send out a DHCP release packet when it destroys a virtual machine instance. The result is that the IP address assigned to the instance is immediately released. This configuration option applies to both Flat DHCP mode and VLAN Manager mode. Use of this option requires the dhcp_release program. Verify that this program is installed on all hosts running the nova-compute service before enabling this option. This can be checked with the which command, and will return the complete path if the program is installed. As root:
# which dhcp_release
/usr/bin/dhcp_release
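For example, once dhcp_release is confirmed to be present, enabling the behavior described above is a single line in nova.conf:

force_dhcp_release=True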
The behavior of dnsmasq can be customized by creating a dnsmasq configuration file. Specify the config file using the dnsmasq_config_file configuration option. For example:
dnsmasq_config_file=/etc/dnsmasq-nova.conf
See the high availability section for an example of how to change the behavior of dnsmasq using a dnsmasq configuration file. The dnsmasq documentation has a more comprehensive dnsmasq configuration file example. Dnsmasq also acts as a caching DNS server for instances. You can explicitly specify the DNS server that dnsmasq should use by setting the dns_server configuration option in /etc/nova/nova.conf. The following example would configure dnsmasq to use Google's public DNS server:
dns_server=8.8.8.8
Dnsmasq logging output goes to the syslog (typically /var/log/syslog or /var/log/messages, depending on Linux distribution). The dnsmasq logging output can be useful for troubleshooting if VM instances boot successfully but are not reachable over the network. A network administrator can run nova-manage fixed reserve --address=x.x.x.x to specify the starting point IP address (x.x.x.x) to reserve with the DHCP server, replacing the flat_network_dhcp_start configuration option that was available in Diablo. This reservation only affects which IP address the VMs start at, not the fixed IP addresses that the nova-network service places on the bridges.
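As a quick illustration (the address below is only a placeholder), the reservation command is run as root:

# nova-manage fixed reserve --address=10.0.0.30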
Metadata service
Introduction
The Compute service uses a special metadata service to enable virtual machine instances to retrieve instance-specific data. Instances access the metadata service at http://169.254.169.254. The metadata service supports two sets of APIs: an OpenStack metadata API and an EC2-compatible API. Each of the APIs is versioned by date. To retrieve a list of supported versions for the OpenStack metadata API, make a GET request to
http://169.254.169.254/openstack
For example:
$ curl http://169.254.169.254/openstack
2012-08-10
latest
To retrieve a list of supported versions for the EC2-compatible metadata API, make a GET request to
http://169.254.169.254
If you write a consumer for one of these APIs, always attempt to access the most recent API version supported by your consumer first, then fall back to an earlier version if the most recent one is not available.
For example:
$ curl http://169.254.169.254/openstack/2012-08-10/meta_data.json
{"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": {"priority": "low", "role": "webserver"}, "public_keys": {"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"}, "name": "test"}
Here is the same content after having run through a JSON pretty-printer:
{ "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": { "priority": "low", "role": "webserver" }, "name": "test", "public_keys": { "mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI +USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/ 3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n" }, "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38" }
Instances also retrieve user data (passed as the user_data parameter in the API call or by the --user_data flag in the nova boot command) through the metadata service, by making a GET request to:
http://169.254.169.254/openstack/2012-08-10/user_data
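As a sketch of the producer side (the image ID, flavor and file name here are only placeholders), user data can be supplied at boot time and then fetched from the URL above from inside the instance:

$ nova boot --image <image-id> --flavor 1 --user_data ./my-user-data.txt test-userdata

Then, from within the running instance:

$ curl http://169.254.169.254/openstack/2012-08-10/user_data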
For example, to list the EC2-compatible metadata items:
$ curl http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups

$ curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
ami

$ curl http://169.254.169.254/2009-04-04/meta-data/placement/
availability-zone

$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
0=mykey
Instances can retrieve the public SSH key (identified by keypair name when a user requests a new instance) by making a GET request to:
http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
For example:
$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova
Instances can retrieve user data by making a GET request to:
http://169.254.169.254/2009-04-04/user-data
For example:
$ curl http://169.254.169.254/2009-04-04/user-data
#!/bin/bash
echo 'Extra user data here'
Warning
The metadata_host configuration option must be an IP address, not a hostname.
Note
The default Compute service settings assume that the nova-network service and the nova-api service are running on the same host. If this is not the case, you must make the following change in the /etc/nova/nova.conf file on the host running the nova-network service: Set the metadata_host configuration option to the IP address of the host where the nova-api service is running.
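For example, if the nova-api service runs on 192.168.0.10 (an address chosen here purely for illustration), the nova.conf on the nova-network host would carry:

metadata_host=192.168.0.10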
--network_manager=nova.network.manager.FlatDHCPManager
    Flat networking with DHCP. You must set a bridge using the flat_network_bridge option.

--network_manager=nova.network.manager.VlanManager
    VLAN networking with DHCP. This is the default if no network manager is defined in nova.conf.

When you issue the nova-manage network create command, it uses the settings from the nova.conf configuration file. Use the following command to create the subnet that your VMs will run on:
nova-manage network create private 192.168.0.0/24 1 256
When using the XenAPI compute driver, the OpenStack services run in a virtual machine. This means networking is significantly different when compared to the networking with the libvirt compute driver. Before reading how to configure networking using the XenAPI compute driver, you may find it useful to read the Citrix article on Understanding XenServer Networking and the section of this document that describes XenAPI and OpenStack.
Note
When configuring Flat Networking, failing to enable flat_injected can prevent guest VMs from receiving their IP information at boot time.
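For reference, a minimal nova.conf sketch for plain Flat (non-DHCP) networking with injection enabled might look like the following; the fixed range shown is an arbitrary example and not part of the original text:

network_manager=nova.network.manager.FlatManager
flat_network_bridge=br100
fixed_range=192.168.0.0/24
flat_injected=True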
The host running nova-network needs to have br100 configured and talking to any other nodes that are hosting VMs. With either of the Flat Networking options, the default gateway for the virtual machines is set to the host which is running nova-network. Set the compute node's external IP address to be on the bridge and add eth0 to that bridge. To do this, edit your network interfaces configuration to look like the following example:
# The loopback network interface
auto lo
iface lo inet loopback

# Networking for OpenStack Compute
auto br100
iface br100 inet dhcp
        bridge_ports    eth0
        bridge_stp      off
        bridge_maxwait  0
        bridge_fd       0
Next, restart networking to apply the changes:

$ sudo /etc/init.d/networking restart

For an all-in-one development setup, this diagram represents the network setup.
For multiple compute nodes with a single network adapter, which you can use for smoke testing or a proof of concept, this diagram represents the network setup.
For multiple compute nodes with multiple network adapters, this diagram represents the network setup. You may want to use this setup for separate admin and data traffic.
Figure 10.4. Flat DHCP network, multiple interfaces, multiple servers with libvirt driver
Be careful when setting up --flat_interface. If you specify an interface that already has an IP address it will break, and if this is the interface you are connecting through with SSH, you cannot fix it unless you have IPMI/console access. In FlatDHCP mode, the setting for --network_size should be the number of IPs in the entire fixed range. If you are doing a /12 in CIDR notation, then this number would be 2^20 or 1,048,576 IP addresses. That said, it will take a very long time for you to create your initial network, as an entry for each IP will be created in the database.

If you have an unused interface on your hosts (e.g. eth2) that has connectivity with no IP address, you can simply tell FlatDHCP to bridge into the interface by specifying flat_interface=<interface> in your configuration file. The network host will automatically add the gateway IP to this bridge. If this is the case for you, edit your nova.conf file to contain the following lines:
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
network_manager=nova.network.manager.FlatDHCPManager
fixed_range=10.0.0.0/8
flat_network_bridge=br100
flat_interface=eth2
flat_injected=False
public_interface=eth0
You can also add the unused interface to br100 manually and not set flat_interface. Integrate your network interfaces to match this configuration.
Figure 10.5. Flat DHCP network, multiple interfaces, multiple servers, network HA with XenAPI driver
Here is an extract from a nova.conf file in a system running the above setup:
network_manager=nova.network.manager.FlatDHCPManager
xenapi_vif_driver=nova.virt.xenapi.vif.(XenAPIBridgeDriver or XenAPIOpenVswitchDriver)
flat_interface=eth1
flat_network_bridge=xenbr2
public_interface=eth3
multi_host=True
dhcpbridge_flagfile=/etc/nova/nova.conf
fixed_range=10.0.0.0/24
force_dhcp_release=True
send_arp_for_ha=True
flat_injected=False
firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
You should notice that flat_interface and public_interface refer to the network interface on the VM running the OpenStack services, not the network interface on the hypervisor. Secondly, flat_network_bridge refers to the name of the XenAPI network that you wish to carry your instance traffic, i.e. the network to which the VMs will be attached. You can either specify the bridge name, such as xenbr2, or the name-label, such as vmbr. Specifying the name-label is very useful in cases where your networks are not uniform across your XenServer hosts.

When you have a limited number of network cards on your server, it is possible to use networks isolated using VLANs for the public and VM network traffic. For example, if you have two XenServer networks xapi1 and xapi2 attached on VLAN 102 and 103 on eth0, respectively, you could use these for eth1 and eth3 on your VM, and pass the appropriate one to flat_network_bridge.

When using XenServer, it is best to use the firewall driver written specifically for XenServer. This pushes the firewall rules down to the hypervisor, rather than running them in the VM that is running nova-network.
Next, the host on which nova-network is configured acts as a router and forwards the traffic out to the Internet.
Warning
If you're using a single interface, then that interface (often eth0) needs to be set into promiscuous mode for the forwarding to happen correctly. This does not appear to be needed if you're running with physical hosts that have and use two interfaces.
Note
The terms network and subnet are often used interchangeably in discussions of VLAN mode. In all cases, we are referring to a range of IP addresses specified by a subnet (e.g., 172.16.20.0/24) that are on the same VLAN (layer 2 network).

Running in VLAN mode is more complex than the other network modes. In particular:
IP forwarding must be enabled.
The hosts running nova-network and nova-compute must have the 8021q kernel module loaded.
Your networking switches must support VLAN tagging.
Your networking switches must be configured to enable the specific VLAN tags you specify in your Compute setup.
You will need information about your networking setup from your network administrator to configure Compute properly (e.g., netmask, broadcast, gateway, ethernet device, VLAN IDs).

The network_manager=nova.network.manager.VlanManager option specifies VLAN mode, which happens to be the default networking mode. The bridges that are created by the network manager will be attached to the interface specified by vlan_interface; the example configuration uses the eth0 interface, which is the default. The fixed_range option is a CIDR block which describes the IP address space for all of the instances: this space will be divided up into subnets. This range is typically a private network. The example configuration uses the private range 172.16.0.0/12. The network_size option refers to the default number of IP addresses in each network, although this can be overridden at network creation time. The example configuration uses a network size of 256, which corresponds to a /24 network.

Networks are created with the nova-manage network create command. Here is an example of how to create a network consistent with the above example configuration options, as root:
# nova-manage network create --label=example-net --fixed_range_v4=172.16.169.0/24 --vlan=169 --bridge=br169 --project_id=a421ae28356b4cc3a25e1429a0b02e98 --num_networks=1
This creates a network called example-net associated with tenant a421ae28356b4cc3a25e1429a0b02e98. The subnet is 172.16.169.0/24 with a VLAN tag of 169 (the VLAN tag does not need to match the third byte of the address, though it is a useful convention to remember the association). This will create a bridge interface device called br169 on the host running the nova-network service. This device will appear in the output of an ifconfig command.

Each network is associated with one tenant. As in the example above, you may (optionally) specify this association at network creation time by using the --project_id flag, which corresponds to the tenant ID. Use the keystone tenant-list command to list the tenants and corresponding IDs that you have already created.

Instead of manually specifying a VLAN, bridge, and project id, you can create many networks at once and have the Compute service automatically associate these networks with tenants as needed, as well as automatically generating the VLAN IDs and bridge interface names. For example, the following command would create 100 networks, from 172.16.100.0/24 to 172.16.199.0/24. (This assumes the network_size=256 option has been set in nova.conf, though this can also be specified by passing --network_size=256 as a flag to the nova-manage command.)
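A command of roughly the following form would accomplish this (shown here as a sketch, built from the option list below rather than quoted from the original); run as root:

# nova-manage network create --num_networks=100 --fixed_range_v4=172.16.100.0/24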
The nova-manage network create command supports many configuration options, which are displayed when called with the --help flag:
Usage: nova-manage network create <args> [options]

Options:
  -h, --help                            show this help message and exit
  --label=<label>                       Label for network (ex: public)
  --fixed_range_v4=<x.x.x.x/yy>         IPv4 subnet (ex: 10.0.0.0/8)
  --num_networks=<number>               Number of networks to create
  --network_size=<number>               Number of IPs per network
  --vlan=<vlan id>                      vlan id
  --vpn=VPN_START                       vpn start
  --fixed_range_v6=FIXED_RANGE_V6       IPv6 subnet (ex: fe80::/64)
  --gateway=GATEWAY                     gateway
  --gateway_v6=GATEWAY_V6               ipv6 gateway
  --bridge=<bridge>                     VIFs on this network are connected to this bridge
  --bridge_interface=<bridge interface> the bridge is connected to this interface
  --multi_host=<'T'|'F'>                Multi host
  --dns1=<DNS Address>                  First DNS
  --dns2=<DNS Address>                  Second DNS
  --uuid=<network uuid>                 Network UUID
  --fixed_cidr=<x.x.x.x/yy>             IPv4 subnet for fixed IPS (ex: 10.20.0.0/16)
  --project_id=<project id>             Project id
  --priority=<number>                   Network interface priority
In particular, flags to the nova-manage network create command can be used to override settings from nova.conf:

--network_size       Overrides the network_size configuration option
--bridge_interface   Overrides the vlan_interface configuration option

To view a list of the networks that have been created, as root:
# nova-manage network list
To modify an existing network, use the nova-manage network modify command, as root:
# nova-manage network modify --help
Usage: nova-manage network modify <args> [options]

Options:
  -h, --help                  show this help message and exit
  --fixed_range=<x.x.x.x/yy>  Network to modify
  --project=<project name>    Project name to associate
  --host=<host>               Host to associate
  --disassociate-project      Disassociate Network from Project
  --disassociate-host         Disassociate Host from Project
Note that a network must first be disassociated from a project using the nova-manage network modify command before it can be deleted. Creating a network will automatically cause the Compute database to populate with a list of available fixed IP addresses. You can view the list of fixed IP addresses and their associations with active virtual machines by running the following, as root:
# nova-manage fixed list
If users need to access the instances in their project across a VPN, a special VPN instance (code named cloudpipe) needs to be created as described in the section titled Cloudpipe Per Project VPNs.
To have the 8021q kernel module loaded on boot, add the following line to /etc/modules:
8021q
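To load the module immediately without rebooting, as root:

# modprobe 8021q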
Here is an example of settings from /etc/nova/nova.conf for a host configured to run nova-network in VLAN mode:
network_manager=nova.network.manager.VlanManager
vlan_interface=eth0
fixed_range=172.16.0.0/12
network_size=256
In certain cases, the network manager may not properly tear down bridges and VLANs when it is stopped. If you attempt to restart the network manager and it does not start, check the logs for errors indicating that a bridge device already exists. If this is the case, you will likely need to tear down the bridge and VLAN devices manually. It is also advisable to kill any remaining dnsmasq processes. The following commands stop the service, manually tear down the bridge and VLAN from the previous example, kill any remaining dnsmasq processes, and start the service up again, as root:
# stop nova-network
# vconfig rem vlan169
# ip link set br169 down
# brctl delbr br169
# killall dnsmasq
# start nova-network
Figure 10.8. VLAN network, multiple interfaces, multiple servers, network HA with XenAPI driver
Here is an extract from a nova.conf file in a system running the above setup:
network_manager=nova.network.manager.VlanManager
xenapi_vif_driver=nova.virt.xenapi.vif.(XenAPIBridgeDriver or XenAPIOpenVswitchDriver)
vlan_interface=eth1
public_interface=eth3
multi_host=True
force_dhcp_release=True
send_arp_for_ha=True
flat_injected=False
firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
You should notice that vlan_interface refers to the network interface on the hypervisor and the network interface on the VM running the OpenStack services. As before, public_interface refers to the network interface on the VM running the OpenStack services.

With VLAN networking and the XenAPI driver, the following things happen when you start a VM: First, the XenServer network is attached to the appropriate physical interface (PIF) and VLAN, unless the network already exists. When the VM is created, its VIF is attached to that network. The 'OpenStack domU', i.e. where nova-network is running, acts as a gateway and DHCP server for this instance. The domU does this for multiple VLAN networks, so it has to be attached to a VLAN trunk; for this reason it must have an interface on the parent bridge of the VLAN bridge where VM instances are plugged.

To help understand VLAN networking with the XenAPI further, here are some important things to note:

A physical interface (PIF), identified either by (A) the vlan_interface flag or (B) the bridge_interface column in the networks database table, will be used for creating a XenServer VLAN network. The VLAN tag is found in the vlan column, still in the networks table, and by default the first tag is 100. VIFs for VM instances within this network will be plugged into this VLAN network. You won't see the bridge until a VIF is plugged into it.

The 'OpenStack domU', i.e. the VM running the nova-network node, will not be plugged into this network; since it acts as a gateway for multiple VLAN networks, it has to be attached to a VLAN trunk. For this reason it must have an interface on the parent bridge of the VLAN bridge where VM instances are plugged. For example, if vlan_interface is eth0 it must be plugged into xenbr0; if eth1, into xenbr1; and so on.

Within the OpenStack domU, 'ip link' is then used to configure VLAN interfaces on the 'trunk' port. Each of these VLAN interfaces is associated with a dnsmasq instance, which distributes IP addresses to instances. The lease file for dnsmasq is constantly updated by nova-network, thus ensuring VMs get the IP address specified by the layer-3 network driver (nova IPAM or Melange).
With this configuration, VM instances should be able to get the IP address assigned to them from the appropriate dnsmasq instance, and should be able to communicate without any problem with other VMs on the same network and with their gateway.

The point above about the trunk port probably needs some more explanation. With Open vSwitch, we don't really have distinct bridges for different VLANs; even if they appear as distinct bridges to Linux and XenServer, they are actually the same OVS instance, which runs a distinct 'fake bridge' for each VLAN. The 'real' bridge is the 'parent' of the fake one. You can easily navigate fake and real bridges with ovs-vsctl.

This discussion refers to Open vSwitch only, for a specific reason: the fake/parent mechanism automatically implies that ports which are not on a fake bridge are trunk ports. This does not happen with the Linux bridge, where a packet forwarded on a VLAN interface does not get back into the xenbrX bridge for ethX. For this reason, with XenAPI, you must use Open vSwitch when running VLAN networking with network HA (i.e. multi-host) enabled. On XenServer 6.0 and later, Open vSwitch is the default network stack. When using VLAN networking with XenAPI and Linux bridge, the default networking stack on XenServer prior to version 6.0, you must run the network node on a VM on a XenServer that does not host any nova-compute controlled instances.
2. Creating the server configuration template
Create a configuration file for OpenVPN and save it under /etc/openvpn/server.conf:
port 1194
proto udp
dev tap0
up "/etc/openvpn/up.sh br0"
down "/etc/openvpn/down.sh br0"
script-security 3 system

persist-key
persist-tun

ca ca.crt
cert server.crt
key server.key  # This file should be kept secret
dh dh1024.pem
ifconfig-pool-persist ipp.txt

server-bridge VPN_IP DHCP_SUBNET DHCP_LOWER DHCP_UPPER

client-to-client
keepalive 10 120
comp-lzo

max-clients 1

user nobody
group nogroup

persist-key
persist-tun

status openvpn-status.log

verb 3
mute 20
3. Create the network scripts
The next step is to create the two scripts that are run when the network components start up and shut down. Save them as /etc/openvpn/up.sh and /etc/openvpn/down.sh respectively:
/etc/openvpn/up.sh
#!/bin/sh
# Openvpn startup script.
BR=$1
DEV=$2
MTU=$3
/sbin/ifconfig $DEV mtu $MTU promisc up
/sbin/brctl addif $BR $DEV
/etc/openvpn/down.sh
#!/bin/sh
# Openvpn shutdown script
BR=$1
DEV=$2
/usr/sbin/brctl delif $BR $DEV
/sbin/ifconfig $DEV down
4. Edit the network interface configuration file
Update /etc/network/interfaces accordingly (we tear down the main interface and enable the bridged interface):
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        down ifconfig $IFACE down

auto br0
iface br0 inet dhcp
        bridge_ports eth0
5. Edit the rc.local file
The next step consists of updating the /etc/rc.local file. We will ask our image to retrieve the payload, decrypt it, and use both the key and CRL for our OpenVPN service:
/etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

####### These lines go at the end of /etc/rc.local #######
. /lib/lsb/init-functions

echo Downloading payload from userdata
wget http://169.254.169.254/latest/user-data -O /tmp/payload.b64

echo Decrypting base64 payload
openssl enc -d -base64 -in /tmp/payload.b64 -out /tmp/payload.zip

mkdir -p /tmp/payload
echo Unzipping payload file
unzip -o /tmp/payload.zip -d /tmp/payload/

# if the autorun.sh script exists, run it
if [ -e /tmp/payload/autorun.sh ]; then
    echo Running autorun.sh
    cd /tmp/payload
    chmod 700 /etc/openvpn/server.key
    sh /tmp/payload/autorun.sh

    if [ ! -e /etc/openvpn/dh1024.pem ]; then
        openssl dhparam -out /etc/openvpn/dh1024.pem 1024
    fi
else
    echo rc.local : No autorun script to run
fi
The called script (autorun.sh) is a script which mainly parses the network settings of the running instance in order to set up the initial routes. Your instance is now ready to be used as a cloudpipe image. In the next step, we will upload that instance to Glance.
Make sure the instance has been uploaded to the Glance repository:
$ nova image-list
+--------------------------------------+---------------+--------+--------------------------------------+
| ID                                   | Name          | Status | Server                               |
+--------------------------------------+---------------+--------+--------------------------------------+
| 0bfc8fd3-1590-463b-b178-bce30be5ef7b | cloud-pipance | ACTIVE | fb93eda8-4eb8-42f7-b53c-91c6d83cface |
+--------------------------------------+---------------+--------+--------------------------------------+
Public : Yes
Update /etc/nova/nova.conf
Some settings need to be added to the /etc/nova/nova.conf file in order to make nova able to use our image:
/etc/nova/nova.conf
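The settings in question generally point nova at the cloudpipe image and VPN defaults; as a rough, non-authoritative sketch (the image ID is taken from the nova image-list output above, and the option values are assumptions rather than text from this guide):

vpn_image_id=0bfc8fd3-1590-463b-b178-bce30be5ef7b
use_project_ca=True
cnt_vpn_clients=5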
+----------------------------------+---------+---------+
| id                               | name    | enabled |
+----------------------------------+---------+---------+
| 071ffb95837e4d509cb7153f21c57c4d | stone   | True    |
| 520b6689e344456cbb074c83f849914a | service | True    |
| d1f5d27ccf594cdbb034c8a4123494e9 | admin   | True    |
| dfb0ef4ab6d94d5b9e9e0006d0ac6706 | demo    | True    |
+----------------------------------+---------+---------+
+----------------------------------+------------+-------------+---------------+
| Project Id                       | Public IP  | Public Port | Internal IP   |
+----------------------------------+------------+-------------+---------------+
| d1f5d27ccf594cdbb034c8a4123494e9 | 172.17.1.3 | 1000        | 192.168.22.34 |
+----------------------------------+------------+-------------+---------------+
The output basically shows our instance is started. Nova will create the necessary rules for our cloudpipe instance (icmp and OpenVPN port) :
ALLOW 1194:1194 from 0.0.0.0/0
ALLOW -1:-1 from 0.0.0.0/0
VPN Access
In VLAN networking mode, the second IP in each private network is reserved for the cloudpipe instance. This gives a consistent IP to the instance so that nova-network can create forwarding rules for access from the outside world. The network for each project is given a specific high-numbered port on the public IP of the network host. This port is automatically forwarded to 1194 on the VPN instance. If specific high-numbered ports do not work for your users, you can always allocate and associate a public IP to the instance, and then change the vpn_public_ip and vpn_public_port in the database. Rather than using the database directly, you can also use nova-manage vpn change [new_ip] [new_port].
In the following sections we present both ways of using cloudpipe: first using a configuration file for clients without a graphical interface, then for clients using a graphical interface.

Connect to your cloudpipe instance without an interface (CLI)
1. Generate your certificates
Start by generating a private key and a certificate for your project:
$ nova x509-create-cert
2. Create the OpenVPN configuration file
The following template, which can be found under nova/cloudpipe/client.ovpn.template, contains the necessary instructions for establishing a connection:
# NOVA user connection
# Edit the following lines to point to your cert files:
cert /path/to/the/cert/file
key /path/to/the/key/file

ca cacert.pem

client
dev tap
proto udp

remote $cloudpipe-public-ip $cloudpipe-port
resolv-retry infinite
nobind

# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup
comp-lzo

# Set log file verbosity.
verb 2

keepalive 10 120
ping-timer-rem
persist-tun
persist-key
Update the file accordingly. In order to get the public IP and port of your cloudpipe instance, you can run the following command:
$ nova cloudpipe-list
+----------------------------------+------------+-------------+---------------+
| Project Id                       | Public IP  | Public Port | Internal IP   |
+----------------------------------+------------+-------------+---------------+
| d1f5d27ccf594cdbb034c8a4123494e9 | 172.17.1.3 | 1000        | 192.168.22.34 |
+----------------------------------+------------+-------------+---------------+
3. Start your OpenVPN client
Depending on the client you are using, make sure to save the configuration file in the directory where the client expects it, along with the certificate file and the private key. Usually, the file is saved under /etc/openvpn/clientconf/client.conf.

Connect to your cloudpipe instance using an interface
1. Download an OpenVPN client
In order to connect to the project's network, you will need an OpenVPN client for your computer. Here are several clients:
For Ubuntu: OpenVPN, network-manager-openvpn, kvpnc (for Kubuntu), gopenvpn
For Mac OS X: OpenVPN (official client), Viscosity, Tunnelblick
For Windows: OpenVPN (official client)
2. Configure your client
In this example we will use Viscosity, but the same settings apply to any client. Start by filling in the public IP and the public port of the cloudpipe instance. This information can be found by running:
$ nova cloudpipe-list
+----------------------------------+------------+-------------+---------------+
| Project Id                       | Public IP  | Public Port | Internal IP   |
+----------------------------------+------------+-------------+---------------+
| d1f5d27ccf594cdbb034c8a4123494e9 | 172.17.1.3 | 1000        | 192.168.22.34 |
+----------------------------------+------------+-------------+---------------+
Figure 10.9. Configuring Viscosity
You can now save the configuration and establish the connection!
Once the job has been run, nova cloudpipe-list should not return anything; but if the cloudpipe instance is respawned too quickly, the following error could be encountered:
ERROR nova.rpc.amqp Returning exception Fixed IP address 192.168.22.34 is already in use.
In order to resolve that issue, log into the MySQL server and update the IP address status:
(mysql) use nova;
(mysql) SELECT * FROM fixed_ips WHERE address='192.168.22.34';
+---------------------+---------------------+------------+---------+-----+---------------+------------+-------------+-----------+--------+----------+----------------------+------+
| created_at          | updated_at          | deleted_at | deleted | id  | address       | network_id | instance_id | allocated | leased | reserved | virtual_interface_id | host |
+---------------------+---------------------+------------+---------+-----+---------------+------------+-------------+-----------+--------+----------+----------------------+------+
| 2012-05-21 12:06:18 | 2012-06-18 09:26:25 | NULL       | 0       | 484 | 192.168.22.34 | 13         | 630         | 0         | 0      | 1        | NULL                 | NULL |
+---------------------+---------------------+------------+---------+-----+---------------+------------+-------------+-----------+--------+----------+----------------------+------+
(mysql) UPDATE fixed_ips SET allocated=0, leased=0, instance_id=NULL WHERE address='192.168.22.34';
(mysql) SELECT * FROM fixed_ips WHERE address='192.168.22.34';
+---------------------+---------------------+------------+---------+-----+---------------+------------+-------------+-----------+--------+----------+----------------------+------+
| created_at          | updated_at          | deleted_at | deleted | id  | address       | network_id | instance_id | allocated | leased | reserved | virtual_interface_id | host |
+---------------------+---------------------+------------+---------+-----+---------------+------------+-------------+-----------+--------+----------+----------------------+------+
| 2012-05-21 12:06:18 | 2012-06-18 09:26:25 | NULL       | 0       | 484 | 192.168.22.34 | 13         | NULL        | 0         | 0      | 1        | NULL                 | NULL |
+---------------------+---------------------+------------+---------+-----+---------------+------------+-------------+-----------+--------+----------+----------------------+------+
Cloudpipe-related files
Nova stores cloudpipe keys in /var/lib/nova/keys. Certificates are stored in /var/lib/nova/CA. Credentials are stored in /var/lib/nova/CA/projects/.

Automate the cloudpipe image installation
You can automate the image creation by downloading that script and running it from inside the instance: Get the script from GitHub.
Note
These commands need to be run as root only if the credentials used to interact with nova-api have been put under /root/.bashrc. If the EC2 credentials have been put into another user's .bashrc file, then it is necessary to run these commands as that user.

Using the nova command-line tool:
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
Using euca2ools:
$ euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default
$ euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
If you still cannot ping or SSH your instances after issuing the nova secgroup-add-rule commands, look at the number of dnsmasq processes that are running. If you have a running instance, check to see that TWO dnsmasq processes are running. If not, perform the following as root:
# killall dnsmasq
# service nova-network restart
Restart the nova-network service if you change nova.conf while the service is running.
pool of floating IPs you define. This configuration is also necessary to make source_groups work if the VMs in the source group have floating IPs.
Enabling IP forwarding
By default, IP forwarding is disabled on most Linux distributions. The floating IP feature requires IP forwarding to be enabled in order to work.
Note
IP forwarding only needs to be enabled on the nodes running the nova-network service. In multi_host mode, make sure to enable it on all compute nodes; otherwise, enable it only on the node running the nova-network service.

You can check whether forwarding is enabled by running the following command:
$ cat /proc/sys/net/ipv4/ip_forward 0
Or using sysctl
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
In this example, IP forwarding is disabled. You can enable it on the fly by running the following command:
$ sysctl -w net.ipv4.ip_forward=1
or
$ echo 1 > /proc/sys/net/ipv4/ip_forward
In order to make the change permanent, edit /etc/sysctl.conf and update the IP forwarding setting:
net.ipv4.ip_forward = 1
Save the file and run the following command to apply the changes:
$ sysctl -p
It is also possible to update the setting by restarting the network service. Here's an example for Ubuntu:
$ /etc/init.d/procps.sh restart
The following nova-manage commands apply to floating IPs:
nova-manage floating list: List the floating IP addresses in the pool.
nova-manage floating create --pool=[pool name] --ip_range=[CIDR]: Create specific floating IPs for either a single address or a subnet.
nova-manage floating delete [cidr]: Remove floating IP addresses using the same parameters as the create command.
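For example, to populate a pool (the pool name and address range below are illustrative only), as root:

# nova-manage floating create --pool=nova --ip_range=68.99.26.170/31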
If the instance no longer needs a public address, remove the floating IP address from the instance and de-allocate the address:
$ nova remove-floating-ip 12 68.99.26.170
$ nova floating-ip-delete 68.99.26.170
Note that if this option is enabled and all of the floating IP addresses have already been allocated, the nova boot command will fail with an error.
Now every time you spawn a new instance, it gets two IP addresses from the respective DHCP servers:
$ nova list
+-----+------------+--------+------------------------------------------+
| ID  | Name       | Status | Networks                                 |
+-----+------------+--------+------------------------------------------+
| 124 | Server 124 | ACTIVE | network2=20.20.0.3; private=20.20.10.14  |
+-----+------------+--------+------------------------------------------+
Note
Make sure to power up the second interface on the instance, otherwise the latter won't be reachable via its second IP. Here is an example of how to set up the interfaces within the instance (this is the configuration that needs to be applied inside the image):
/etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
Note
If the Virtual Network Service Quantum is installed, it is possible to specify the networks to attach to the respective interfaces by using the --nic flag when invoking the nova boot command:
$ nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=<id of first network> --nic net-id=<id of second network> test-vm1
HA Option 1: Multi-host
To eliminate the network host as a single point of failure, Compute can be configured to allow each compute host to do all of the networking jobs for its own VMs. Each compute host does NAT, DHCP, and acts as a gateway for all of its own VMs. While there is still a single point of failure in this scenario, it is the same point of failure that applies to all virtualized systems.

This setup requires adding an IP on the VM network to each host in the system, and it implies a little more overhead on the compute hosts. It is also possible to combine this with option 4 (HW Gateway) to remove the need for your compute hosts to gateway. In that hybrid version they would no longer gateway for the VMs and their responsibilities would only be DHCP and NAT. The resulting layout for the new HA networking option looks like the following diagram:
In contrast with the earlier diagram, all the hosts in the system are running the nova-compute, nova-network and nova-api services. Each host does DHCP and NAT for public traffic for the VMs running on that particular host. In this model every compute host requires a connection to the public internet, and each host is also assigned an address from the VM network where it listens for DHCP traffic. The nova-api service is needed so that it can act as a metadata server for the instances.

To run in HA mode, each compute host must run the following services:
nova-compute
nova-network
nova-api-metadata or nova-api

If the compute host is not an API endpoint, use the nova-api-metadata service. The nova.conf file should contain:
multi_host=True
If a compute host is also an API endpoint, use the nova-api service. Your enabled_apis option will need to contain metadata, as well as additional options depending on the API services. For example, if it supports compute requests, volume requests, and EC2 compatibility, the nova.conf file should contain:
multi_host=True
enabled_apis=ec2,osapi_compute,osapi_volume,metadata
The multi_host option must be in place for network creation and nova-network must be run on every compute host. These multi-host networks will send all network-related commands to the host that the VM is on. You need to set the configuration option enabled_apis such that it includes metadata in the list of enabled APIs.
Note
You must specify the multi_host option on the command line when creating fixed networks. For example:
# nova-manage network create --fixed_range_v4=192.168.0.0/24 --num_networks=1 --network_size=256 --multi_host=T --label=test
HA Option 2: Failover
The folks at NTT labs came up with an ha-linux configuration that allows for a four-second failover to a hot backup of the network host. Details on their approach can be found in the following post to the openstack mailing list: https://lists.launchpad.net/openstack/msg02099.html

This solution is definitely an option, although it requires a second host that essentially does nothing unless there is a failure. Also, four seconds can be too long for some real-time applications. To enable this HA option, your nova.conf file must contain the following option:
send_arp_for_ha=True
See https://bugs.launchpad.net/nova/+bug/782364 for details on why this option is required when configuring for failover.
HA Option 3: Multi-nic
Recently, nova gained support for multi-nic. This allows us to bridge a given VM into multiple networks, which gives us some more options for high availability. It is possible to set up two networks on separate VLANs (or even separate ethernet devices on the host) and give the VMs a NIC and an IP on each network. Each of these networks could have its own network host acting as the gateway. In this case, the VM has two possible routes out. If one of them fails, it has the option of using the other one. The disadvantage of this approach is that it offloads management of failure scenarios to the guest. The guest needs to be aware of multiple networks and have a strategy for switching between them. It also doesn't help with floating IPs. One would have to set up a floating IP associated with each of the IPs on the private networks to achieve some type of redundancy.
1. Create a dnsmasq configuration file (e.g., /etc/dnsmasq-nova.conf) that contains the IP address of the external gateway. If running in FlatDHCP mode, assuming the IP address of the hardware gateway was 172.16.100.1, the file would contain the line:
dhcp-option=option:router,172.16.100.1
If running in VLAN mode, a separate router must be specified for each network. The networks are identified by the --label argument when calling nova-manage network create to create the networks, as documented in the Configuring VLAN Networking subsection. Assuming you have three VLANs labeled red, green, and blue, with corresponding hardware routers at 172.16.100.1, 172.16.101.1 and 172.16.102.1, the dnsmasq configuration file (e.g., /etc/dnsmasq-nova.conf) would contain the following:
dhcp-option=tag:'red',option:router,172.16.100.1
dhcp-option=tag:'green',option:router,172.16.101.1
dhcp-option=tag:'blue',option:router,172.16.102.1
3. Configure the hardware gateway to forward metadata requests to a host that's running the nova-api service with the metadata API enabled. The virtual machine instances access the metadata service at 169.254.169.254 port 80. The hardware gateway should forward these requests to a host running the nova-api service, on the port specified by the metadata_port config option in /etc/nova/nova.conf, which defaults to 8775. Make sure that the list in the enabled_apis configuration option in /etc/nova/nova.conf contains metadata in addition to the other APIs. An example that contains the EC2 API, the OpenStack compute API, the OpenStack volume API, and the metadata service would look like:
enabled_apis=ec2,osapi_compute,osapi_volume,metadata
4. Ensure you have set up routes properly so that the subnet that you use for virtual machines is routable.
Troubleshooting Networking
Can't reach floating IPs
If you aren't able to reach your instances via the floating IP address, make sure the default security group allows ICMP (ping) and SSH (port 22), so that you can reach the instances:
$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
Ensure the NAT rules have been added to iptables on the node that nova-network is running on, as root:
# iptables -L -nv
-A nova-network-OUTPUT -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
# iptables -L -nv -t nat
-A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170
Check that the public address (in this example, 68.99.26.170) has been added to your public interface. You should see the address in the listing when you enter "ip addr" at the command prompt.
$ ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
    inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
    inet 68.99.26.170/32 scope global eth0
    inet6 fe80::82b:2bf:fe1:4b2/64 scope link
       valid_lft forever preferred_lft forever
Note that you cannot SSH to an instance with a public IP from within the same server as the routing configuration won't allow it. You can use tcpdump to identify if packets are being routed to the inbound interface on the compute host. If the packets are reaching the compute hosts but the connection is failing, the issue may be that the packet is being dropped by reverse path filtering. Try disabling reverse path filtering on the inbound interface. For example, if the inbound interface is eth2, as root:
# sysctl -w net.ipv4.conf.eth2.rp_filter=0
If this solves your issue, add the following line to /etc/sysctl.conf so that the reverse path filter will be disabled the next time the compute host reboots:
net.ipv4.conf.eth2.rp_filter=0
Disabling firewall
To help debug networking issues with reaching VMs, you can disable the firewall by setting the following option in /etc/nova/nova.conf:
firewall_driver=nova.virt.firewall.NoopFirewallDriver
We strongly recommend you remove the above line to re-enable the firewall once your networking issues have been resolved.
In the second terminal, also on the host running nova-network, use tcpdump to monitor DNS-related traffic on the bridge interface. As root:
# tcpdump -K -p -i br100 -v -vv udp port 53
In the third terminal, SSH inside of the instance and generate DNS requests by using the nslookup command:
$ nslookup www.google.com
The symptoms may be intermittent, so try running nslookup multiple times. If the network configuration is correct, the command should return immediately each time. If it is not functioning properly, the command will hang for several seconds. If the nslookup command sometimes hangs, and there are packets that appear in the first terminal but not the second, then the problem may be due to filtering done on the bridges. Try to disable filtering, as root:
# sysctl -w net.bridge.bridge-nf-call-arptables=0
# sysctl -w net.bridge.bridge-nf-call-iptables=0
# sysctl -w net.bridge.bridge-nf-call-ip6tables=0
If this solves your issue, add the following line to /etc/sysctl.conf so that these changes will take effect the next time the host reboots:
net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
for a period of time. Some users have reported success with loading the vhost_net kernel module as a workaround for this issue (see bug #997978). This kernel module may also improve network performance on KVM. To load the kernel module, as root:
# modprobe vhost_net
Note that loading the module will have no effect on instances that are already running.
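To have the module loaded automatically on subsequent boots of an Ubuntu or Debian host (a sketch; other distributions use different mechanisms), add it to /etc/modules as root:

# echo vhost_net >> /etc/modules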
11. Volumes
Cinder Versus Nova-Volumes
You now have two options in terms of Block Storage. Currently (as of the Folsom release) both are nearly identical in terms of functionality, APIs, and even the general theory of operation. Keep in mind, however, that Nova-Volumes is deprecated and will be removed at the release of Grizzly. See the Cinder section of the Folsom Install Guide for Cinder-specific information.
Managing Volumes
Nova-volume is the service that allows you to give extra block-level storage to your OpenStack Compute instances. You may recognize this as a similar offering from Amazon EC2 known as Elastic Block Storage (EBS). However, nova-volume is not the same implementation that EC2 uses today. Nova-volume is an iSCSI solution that employs the use of the Logical Volume Manager (LVM) for Linux. Note that a volume may only be attached to one instance at a time. This is not a shared storage solution like a SAN or NFS to which multiple servers can attach.

Before going any further, let's discuss the nova-volume implementation in OpenStack. The nova-volume service exposes LVM volumes to the compute nodes that run instances via iSCSI. Thus, there are two components involved:
1. lvm2, which works with a volume group (VG) called "nova-volumes" (refer to http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux) for further details)
2. open-iscsi, the iSCSI implementation which manages iSCSI sessions on the compute nodes

Here is what happens from the volume creation to its attachment:
1. The volume is created via nova volume-create, which creates an LV in the volume group (VG) "nova-volumes".
2. The volume is attached to an instance via nova volume-attach, which creates a unique iSCSI IQN that is exposed to the compute node.
3. The compute node that runs the instance now has an active iSCSI session and a new local storage device (usually a /dev/sdX disk).
4. libvirt uses that local storage as storage for the instance; the instance gets a new disk (usually a /dev/vdX disk).

For this particular walkthrough, there is one cloud controller running the nova-api, nova-scheduler, nova-objectstore, nova-network and nova-volume services. There are two additional compute nodes running nova-compute. The walkthrough uses a custom partitioning scheme that carves out 60GB of space and labels it as LVM. The network is a /28 .80-.95, and FlatManager is the NetworkManager setting for OpenStack Compute (Nova).
Please note that the network mode doesn't interfere at all with the way nova-volume works, but networking must be set up for nova-volume to work. Please refer to Networking for more details.

To set up Compute to use volumes, ensure that nova-volume is installed along with lvm2. The guide is split into four parts:
Installing the nova-volume service on the cloud controller.
Configuring the "nova-volumes" volume group on the compute nodes.
Troubleshooting your nova-volume installation.
Backing up your nova volumes.
On RHEL and derivatives, the nova-volume service should already be installed.

Configure Volumes for use with nova-volume
The openstack-nova-volume service requires an LVM volume group called nova-volumes to exist. If you do not already have LVM volumes on hand, but have free drive space, you will need to create an LVM volume before proceeding. Here is a short rundown of how you would create an LVM volume from free drive space on your system. Start off by issuing an fdisk command against the drive with the free space:
$ fdisk /dev/sda
Once in fdisk, perform the following commands:
1. Press n to create a new disk partition.
2. Press p to create a primary disk partition.
3. Press 1 to denote it as the 1st disk partition.
4. Either press ENTER twice to accept the default of the 1st and last cylinder (to convert the remainder of the hard disk to a single disk partition), or press ENTER once to accept the default of the 1st cylinder and then choose how big you want the partition to be by specifying +size[K,M,G], e.g. +5G or +6700M.
5. Press t and select the new partition that you have created.
6. Enter 8e to change your new partition to type 8e, i.e. the Linux LVM partition type.
7. Press p to display the hard disk partition setup. Please take note that the first partition is denoted as /dev/sda1 in Linux.
8. Press w to write the partition table and exit fdisk upon completion.

Refresh your partition table to ensure your new partition shows up, and verify with fdisk. We then inform the OS about the partition table update:
$ partprobe
$ fdisk -l
You should see your new partition in this listing. Here is how you can set up partitioning during the OS install to prepare for this nova-volume configuration:
root@osdemo03:~# fdisk -l
Device Boot      Start       End      Blocks  Id  System
/dev/sda1   *        1     12158       97280  83  Linux
/dev/sda2        12158     24316    97655808  83  Linux
/dev/sda3        24316     24328    97654784  83  Linux
/dev/sda4        24328     42443   145507329   5  Extended
/dev/sda5        24328     32352    64452608  8e  Linux LVM
/dev/sda6        32352     40497    65428480  8e  Linux LVM
/dev/sda7        40498     42443    15624192  82  Linux swap / Solaris
Now that you have identified a partition that has been labeled for LVM use, perform the following steps to configure LVM and prepare it for nova-volumes. You must name your volume group nova-volumes or things will not work as expected:
$ pvcreate /dev/sda5
$ vgcreate nova-volumes /dev/sda5
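To confirm that the volume group was created as expected, you can inspect it with standard LVM tooling, for example:

$ vgdisplay nova-volumes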
Note
If you are using KVM as your hypervisor, then the actual device name in the guest will be different than the one specified in the nova volume-attach command. You can specify a device name to the KVM hypervisor, but the actual means of attaching to the guest is over a virtual PCI bus. When the guest sees a new device on the PCI bus, it picks the next available name (which in most cases is /dev/vdc) and the disk shows up there on the guest.

Installing and configuring the iSCSI initiator

Remember that every compute node will act as the iSCSI initiator while the server running nova-volumes will act as the iSCSI target. So, before going further, make sure your nodes can communicate with your nova-volumes server. If you have a firewall running on it, make sure that port 3260 (TCP) accepts incoming connections.

First install the open-iscsi package on the initiators, that is, on the compute nodes only:
$ apt-get install open-iscsi
Then, on the nova-volume controller (the iSCSI target), start tgt, which is installed as a dependency of the volume package:
$ service tgt start
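If a firewall is running on the nova-volumes server, the rule below is a minimal sketch of how the iSCSI port could be opened with iptables; adapt it to whatever firewall tooling you actually use:

$ iptables -A INPUT -p tcp --dport 3260 -j ACCEPT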
Start nova-volume and create volumes

You are now ready to fire up nova-volume and start creating volumes!
$ service nova-volume start
Once the service is started, log in to your controller and ensure you've properly sourced your novarc file. One of the first things you should do is make sure that nova-volume is checking in as expected. You can do so using nova-manage:
$ nova-manage service list
If you see a smiling nova-volume in there, you are looking good. Now create a new volume:
$ nova volume-create --display_name myvolume 10
--display_name sets a readable name for the volume, while the final argument refers to the size of the volume in GB. You should get some output similar to this:
+----+-----------+--------------+------+-------------+-------------+
| ID | Status    | Display Name | Size | Volume Type | Attached to |
+----+-----------+--------------+------+-------------+-------------+
| 1  | available | myvolume     | 10   | None        |             |
+----+-----------+--------------+------+-------------+-------------+
You can view the status of the volume's creation using nova volume-list. Once the status is available, it is ready to be attached to an instance:
$ nova volume-attach 857d70e4-35d5-4bf6-97ed-bf4e9a4dcf5a 1 /dev/vdb
The first argument refers to the instance you will attach the volume to, the second is the volume ID, and the third is the mountpoint on the compute node that the volume will be attached to. Compute generates a non-conflicting device name if one is not passed to attach_volume and ensures that a volume isn't already attached there. With that, the compute node which runs the instance performs an iSCSI connection and creates a session. You can ensure that the session has been created by running:
$ iscsiadm -m session
If you do not get any errors, you can log in to the instance and see if the new space is there. KVM changes the device name, since it does not consider the volume to be the same type of device as the instance's local disks: nova-volume disks are designated as /dev/vdX devices, while local ones are named /dev/sdX. You can check the volume attachment by running:
$ dmesg | tail
From there you should see the new disk. Here is the output from fdisk -l:
Disk /dev/vda: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 000000000
Disk /dev/vda doesn't contain a valid partition table

Disk /dev/vdb: 21.5 GB, 21474836480 bytes     <-- Here is our new volume!
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 000000000
Use fdisk to partition the new disk (in this example, /dev/vdb):

1. Press n to create a new disk partition.
2. Press p to create a primary disk partition.
3. Press 1 to designate it as the first disk partition.
4. Press ENTER twice to accept the defaults for the first and last cylinder, converting the remainder of the hard disk to a single disk partition.
5. Press t, then select the new partition you made.
6. Type 83 to change your new partition to type 83, i.e. Linux partition type.
7. Press p to display the hard disk partition setup. Please take note that the first partition is denoted as /dev/vdb1 in your instance.
8. Press w to write the partition table and exit fdisk upon completion.
9. Lastly, make a file system on the partition and mount it.
$ mkfs.ext3 /dev/vdb1
$ mkdir /extraspace
$ mount /dev/vdb1 /extraspace
Your new volume has now been successfully mounted, and is ready for use! The commands are pretty self-explanatory, so play around with them and create new volumes, tear them down, attach and reattach, and so on.
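If you want the mount to persist across reboots inside the guest, a classic /etc/fstab entry can be added; this is only a sketch and assumes the mount point used above:

/dev/vdb1  /extraspace  ext3  defaults  0  0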
This error happens when the compute node is unable to resolve the nova-volume server name. You can either add a DNS record for the server if you have a DNS server, or add an entry for it to the /etc/hosts file of the nova-compute node.

ERROR "No route to host"
iscsiadm: cannot make connection to 172.29.200.37: No route to host
iscsiadm: cannot make connection to 172.29.200.37
This error could be caused by several things, but it means only one thing: open-iscsi is unable to establish communication with your nova-volumes server. The first thing you can do is run a telnet session in order to see if you are able to reach the nova-volume server. From the compute node, run:
$ telnet $ip_of_nova_volumes 3260
If the session times out, check the server firewall, or try to ping it. You could also run a tcpdump session, which may provide extra information:
$ tcpdump -nvv -i $iscsi_interface dst $ip_of_nova_volumes and port 3260
"Lost connectivity between nova-volumes and node-compute ; how to restore a clean state ?" Network disconnection can happens, from an "iSCSI view", losing connectivity could be seen as a physical removal of a server's disk. If the instance runs a volume while you loose the network between them, you won't be able to detach the volume. You would encounter several errors. Here is how you could clean this : First, from the nova-compute, close the active (but stalled) iSCSI session, refer to the volume attached to get the session, and perform the following command :
$ iscsiadm -m session -r $session_id -u
For example, to free volume 9, close the session number 9. The cloud controller is unaware of the iSCSI session closing, and will keep the volume state as in-use:
+----+-----------+--------------+------+-------------+--------------------------------------+
| ID | Status    | Display Name | Size | Volume Type | Attached to                          |
+----+-----------+--------------+------+-------------+--------------------------------------+
| 9  | in-use    | New Volume   | 20   | None        | 7db4cb64-7f8f-42e3-9f58-e59c9a31827d |
+----+-----------+--------------+------+-------------+--------------------------------------+
You now have to inform the cloud controller that the disk can be used. Nova stores the volume information in the "volumes" table, so you will have to update four fields in the database Nova uses (e.g. MySQL). First, connect to the database:
$ mysql -uroot -p$password nova
Using the volume id, you will have to run the following SQL queries:
update volumes set mountpoint=NULL where id=9;
update volumes set status="available" where status="error_deleting";
update volumes set attach_status="detached" where id=9;
update volumes set instance_id=0 where id=9;
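To double-check the row before and after these updates, a simple query such as the following can be used (volume id 9 as in the example above):

select id, status, attach_status, mountpoint, instance_id from volumes where id=9;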
If you now run nova volume-list again from the cloud controller, you should see the volume reported as available:
+----+-----------+--------------+------+-------------+-------------+
| ID | Status    | Display Name | Size | Volume Type | Attached to |
+----+-----------+--------------+------+-------------+-------------+
| 9  | available | New Volume   | 20   | None        |             |
+----+-----------+--------------+------+-------------+-------------+
While this should all be handled for you by your installer, it can go wrong. If you're having trouble creating volumes and this directory does not exist, you should see an error message in the cinder-volume log indicating that the volumes_dir doesn't exist, and the message should tell you exactly what path it was looking for.

Persistent tgt include file

Along with the volumes_dir mentioned above, the iSCSI target driver also needs to be configured to look in the correct place for the persist files. This is a simple entry in /etc/tgt/conf.d, and you should have created it when you went through the install guide. If you haven't, or you're running into issues, verify that you have a file /etc/tgt/conf.d/cinder.conf (for nova-volume, this will be /etc/tgt/conf.d/nova.conf). If the file is not there, you can create it easily by doing the following:
sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/ cinder.conf"
No sign of create call in the cinder-api log

This is most likely going to be a minor adjustment to your nova.conf file. Make sure that your nova.conf has the following entry:
volume_api_class=nova.volume.cinder.API
And make certain that you explicitly set enabled_apis, as the default will include osapi_volume:
enabled_apis=ec2,osapi_compute,metadata
This lets you back up a volume without data corruption, because data will not be manipulated during the process of creating the snapshot volume itself. Remember that volumes created through nova volume-create exist as LVM logical volumes.

Before creating the snapshot, ensure that you have enough space to save it. As a precaution, you should have at least twice as much space as the potential snapshot size. If insufficient space is available, there is a risk that the snapshot could become corrupted.

Use the following command to obtain a list of all volumes:
$ lvdisplay
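To confirm that the volume group has enough free space for the snapshot, you can also check the group itself with standard LVM tooling, for example:

$ vgs nova-volumes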
In this example, we will refer to a volume called volume-00000001, which is a 10GB volume. This process can be applied to all volumes, no matter their size. At the end of the section, we will present a script that you can use to create scheduled backups. The script itself builds on what we discuss here. First, create the snapshot; this can be achieved while the volume is attached to an instance:
$ lvcreate --size 10G --snapshot --name volume-00000001-snapshot /dev/nova-volumes/volume-00000001
We indicate to LVM that we want a snapshot of an already existing volume with the --snapshot option. The command includes the size of the space reserved for the snapshot volume, the name of the snapshot, and the path of an already existing volume (in most cases, the path will be /dev/nova-volumes/$volume_name). The size doesn't have to be the same as that of the volume being snapshotted; the size parameter designates the space that LVM will reserve for the snapshot volume. As a precaution, the size should be the same as that of the original volume, even if we know the whole space is not currently needed by the snapshot.

We now have a full snapshot, and it only took a few seconds! Run lvdisplay again to verify the snapshot. You should now see your snapshot:
--- Logical volume ---
LV Name                /dev/nova-volumes/volume-00000001
VG Name                nova-volumes
LV UUID                gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
LV Write Access        read/write
LV snapshot status     source of /dev/nova-volumes/volume-00000026-snap [active]
LV Status              available
# open                 1
LV Size                15,00 GiB
Current LE             3840
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:13

--- Logical volume ---
LV Name                /dev/nova-volumes/volume-00000001-snap
VG Name                nova-volumes
LV UUID                HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
LV Write Access        read/write
LV snapshot status     active destination for /dev/nova-volumes/volume-00000001
LV Status              available
# open                 0
LV Size                15,00 GiB
Current LE             3840
COW-table size         10,00 GiB
COW-table LE           2560
Allocated to snapshot  0,00%
Snapshot chunk size    4,00 KiB
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:14
2- Partition table discovery

If we want to exploit that snapshot with the tar program, we first need to mount our partition on the nova-volumes server. kpartx is a small utility which performs partition table discovery and maps the partitions. It can be used to view partitions created inside the instance. Without the partitions created inside the instance mapped, we won't be able to see their content and create efficient backups.
$ kpartx -av /dev/nova-volumes/volume-00000001-snapshot
If no errors are displayed, it means the tool was able to find the partition table and map it. If kpartx is not installed, on a Debian-based distribution you can install it with apt-get install kpartx. You can easily check the partition table map by running the following command:
$ ls /dev/mapper/nova*
You should now see a partition called nova--volumes-volume--00000001--snapshot1. If you created more than one partition on that volume, you should see several partitions accordingly, for example nova--volumes-volume--00000001--snapshot2, nova--volumes-volume--00000001--snapshot3, and so forth. We can now mount the partition, as shown below.
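The mount command itself is a one-liner; this is a minimal sketch that assumes the /mnt mount point used in the rest of this section:

$ mount /dev/mapper/nova--volumes-volume--00000001--snapshot1 /mnt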
If there are no errors, you have successfully mounted the partition. You should now be able to directly access the data that was created inside the instance. If you receive a message asking you to specify a partition, or if you are unable to mount it (despite a well-specified filesystem), there could be two causes: you didn't allocate enough space for the snapshot, or kpartx was unable to discover the partition table. Allocate more space to the snapshot and try the process again.

3- Use tar in order to create archives

Now that the volume has been mounted, you can create a backup of it:
$ tar --exclude={"lost+found","some/data/to/exclude"} -czf volume-00000001.tar.gz -C /mnt/ /backup/destination
This command will create a tar.gz file containing the data, and the data only. This ensures that you do not waste space by backing up empty sectors.

4- Checksum calculation

You should always have the checksum for your backup files. The checksum is a unique identifier for a file. When you transfer that same file over the network, you can run another checksum calculation. If the checksums are different, this indicates that the file is corrupted; thus, the checksum provides a method to ensure your file has not been corrupted during its transfer. The following command runs a checksum for our file and saves the result to a file:
$ sha1sum volume-00000001.tar.gz > volume-00000001.checksum
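On the destination side, the same checksum file can be used to verify the transferred archive (assuming both files sit in the same directory):

$ sha1sum -c volume-00000001.checksum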
Be aware that sha1sum should be used carefully, since the time required for the calculation is directly proportional to the file's size. For files larger than ~4-6 gigabytes, and depending on your CPU, the process may take a long time.

5- After-work cleaning

Now that we have an efficient and consistent backup, the following commands will clean up the file system.

1. Unmount the volume: umount /mnt
2. Delete the partition mapping: kpartx -dv /dev/nova-volumes/volume-00000001-snapshot
3. Remove the snapshot: lvremove -f /dev/nova-volumes/volume-00000001-snapshot

And voila :) You can now repeat these steps for every volume you have.

6- Automate your backups

Because you can expect that more and more volumes will be allocated to your nova-volume service, you may want to automate your backups. The script referenced here will assist you with this task. The script performs the operations from the previous example, but also provides a mail report and runs the backup based on the backups_retention_days setting. It is meant to be launched from the server which runs the nova-volumes component. Here is an example of a mail report:
Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days

The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G

The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G

---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds
The script also provides the ability to SSH to your instances and run a mysqldump inside them. In order to make this work, ensure that the connection via the project's nova keys is enabled. If you don't want to run the mysqldumps, you can turn off this functionality by adding enable_mysql_dump=0 to the script.
Volume drivers
The default nova-volume behaviour can be altered by using different volume drivers that are included in the Nova codebase. To set the volume driver, use the volume_driver flag. The default is as follows:
volume_driver=nova.volume.driver.ISCSIDriver
iscsi_helper=tgtadm
If you are using KVM or QEMU as your hypervisor, the Compute service can be configured to use Ceph's RADOS block devices (RBD) for volumes. Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can scale to the exabyte level and beyond, runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. As a result of its open source nature, this portable storage platform may be installed and used in public or private clouds.
Figure 11.1. Ceph architecture
RADOS?
You can easily get confused by the terminology: Ceph? RADOS?

RADOS, the Reliable Autonomic Distributed Object Store, is an object store. RADOS takes care of distributing the objects across the whole storage cluster and replicating them for fault tolerance. It is built from three major components:

Object Storage Device (OSD): the storage daemon, the RADOS service, the location of your data. You must have this daemon running on each server of your cluster. For each OSD you can have an associated hard disk. For performance purposes it is usually better to pool your hard disks with RAID arrays, or with LVM or btrfs pooling; with that, you will have one daemon running per server. By default, three pools are created: data, metadata and RBD.

Meta-Data Server (MDS): this is where the metadata is stored. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you are not using the Ceph File System, you do not need a metadata server.
Monitor (MON): this lightweight daemon handles all the communication with external applications and clients. It also provides a consensus mechanism for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup you will run at least three ceph-mon daemons on separate servers, because quorum decisions are made by majority vote and therefore require an odd number of monitors.

Ceph developers recommend using btrfs as the file system for the storage. Using XFS is also possible and might be a better alternative for production environments. Neither Ceph nor btrfs is ready for production, and it could be really risky to put them together; this is why XFS is an excellent alternative to btrfs. The ext4 file system is also compatible but doesn't take advantage of all the power of Ceph.
Note
We recommend configuring Ceph to use the XFS file system in the near term, and btrfs in the long term once it is stable enough for production. See ceph.com/docs/master/rec/filesystem/ for more information about usable file systems.
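As a minimal, hedged illustration of what enabling the RBD backend looks like in nova.conf (the driver class and pool flag follow the Folsom-era Nova tree and should be verified against your installed release; the pool name "nova" is just an assumption):

volume_driver=nova.volume.driver.RBDDriver
rbd_pool=nova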
Note
You should make sure that the compute nodes have iSCSI network access to the Storwize family or SVC system.
Note
Make sure the compute node running the nova-volume management driver has SSH network access to the storage system. To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role. If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are san_login and san_password, respectively.
If you are using the SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading the public key: select the "choose file" option in the Storwize family or SVC management GUI under "SSH public key". Alternatively, you may associate the SSH public key using the command line interface; details can be found in the Storwize and SVC documentation. The private key should be provided to the driver using the san_private_key configuration flag.
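A minimal key-generation sketch using the standard OpenSSH tooling (any equivalent method also works):

$ ssh-keygen -t rsa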
The command prompts for a file to save the key pair. For example, if you select 'key' as the filename, two files will be created: key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key. The command also prompts for a passphrase, which should be empty. The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface.
Configuring options for the Storwize family and SVC driver in nova.conf
The following options apply to all volumes and cannot be changed for a specific volume.
Flag descriptions, with defaults and footnote markers as noted (the footnotes follow below):

Management IP or host name
Management port (default: 22)
Management login username (san_login)
Management login password (san_password)
Management login SSH private key (san_private_key)
Pool name for volumes
Volume virtualization type b
Initial physical allocation (storwize_svc_vol_rsize) c
Space allocation warning threshold b
Enable or disable volume auto expand d
Volume grain size b in KB
Enable or disable Real-time Compression e
Enable or disable Easy Tier f
FlashCopy timeout (storwize_svc_flashcopy_timeout, optional) g
a The authentication requires either a password (san_password) or an SSH private key (san_private_key). One must be specified. If both are specified, the driver will use only the SSH private key.
b More details on this configuration option are available in the Storwize family and SVC command line documentation under the mkvdisk command.
c The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation size for thin-provisioned volumes, or if set to -1, the driver creates fully allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
d Defines whether thin-provisioned volumes can be auto expanded by the storage system; a value of True means that auto expansion is enabled, a value of False disables auto expansion. Details about this option can be found in the autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
e Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
f Defines whether Easy Tier is used for the volumes created with OpenStack. Details on Easy Tier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
g The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time the driver will wait for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
Nexenta
The NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast network storage arrays. NexentaStor is based on OpenSolaris and uses ZFS as a disk management system. NexentaStor can serve as a storage node for OpenStack and provide block-level volumes for the virtual servers via the iSCSI protocol.

The Nexenta driver allows you to use a Nexenta Storage Appliance to store Nova volumes. Every Nova volume is represented by a single zvol in a predefined Nexenta volume. For every new volume the driver creates an iSCSI target and iSCSI target group that are used to access it from compute hosts.

To use Nova with the Nexenta Storage Appliance, you should:

set volume_driver=nova.volume.nexenta.volume.NexentaDriver
set the --nexenta_host flag to the hostname or IP of your NexentaStor
set --nexenta_user and --nexenta_password to the username and password of a user with all necessary privileges on the appliance, including access to the REST API
set --nexenta_volume to the name of the volume on the appliance that you would like to use in Nova, or create a volume named nova (it will be used by default)

The Nexenta driver has a number of tunable flags. Some of them you might want to change:

nexenta_target_prefix defines the prefix that will be prepended to the volume id to form the target name on Nexenta
nexenta_target_group_prefix defines the prefix for target groups

nexenta_blocksize can be set to the size of the blocks in newly created zvols on the appliance, with a suffix; for example, the default 8K means 8 kilobytes

nexenta_sparse is a boolean and can be set to use sparse zvols to save space on the appliance

Some flags that you might want to keep at their default values:

nexenta_rest_port is the port where Nexenta listens for REST requests (the same port where the NMV works)

nexenta_rest_protocol can be set to http or https, but the default is auto, which makes the driver try HTTP and switch to HTTPS in case of failure

nexenta_iscsi_target_portal_port is the port to connect to Nexenta over iSCSI
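Putting the required flags together, a minimal nova.conf sketch might look like the following; the host, credentials and volume name are placeholders:

volume_driver=nova.volume.nexenta.volume.NexentaDriver
nexenta_host=192.168.1.100
nexenta_user=admin
nexenta_password=secret
nexenta_volume=nova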
Operation
The admin uses the nova-manage command detailed below to add flavors and backends. One or more nova-volume service instances will be deployed per availability zone. When a nova-volume instance is started, it will create storage repositories (SRs) to connect to the backends available within that zone. All nova-volume instances within a zone can see all the available backends. These instances are completely symmetric and hence should be able to service any create_volume request within the zone.
Configuration
Set the following configuration options for the nova volume service: (nova-compute also requires the volume_driver configuration option.)
--volume_driver="nova.volume.xensm.XenSMDriver" --use_local_volumes=False
The backend configurations that the volume driver uses need to be created before starting the volume service.
$ nova-manage sm flavor_create <label> <description>
$ nova-manage sm flavor_delete <label>
$ nova-manage sm backend_add <flavor label> <SR type> [config connection parameters]
$ nova-manage sm backend_delete <backend-id>

Note: SR type and config connection parameters are in keeping with the XenAPI Command Line Interface. http://support.citrix.com/article/CTX124887
Example: For the NFS storage manager plugin, the steps below may be used.
$ nova-manage sm flavor_create gold "Not all that glitters"
$ nova-manage sm flavor_delete gold
$ nova-manage sm backend_add gold nfs name_label=mybackend server=myserver serverpath=/local/scratch/myname
$ nova-manage sm backend_remove 1
To configure and use a SolidFire cluster with Cinder, modify your cinder.conf file similarly to how you would a nova.conf:
volume_driver=cinder.volume.solidfire.SolidFire
iscsi_ip_prefix=172.17.1.*   # the prefix of your SVIP
san_ip=172.17.1.182          # the address of your MVIP
san_login=sfadmin            # your cluster admin login
san_password=sfpassword      # your cluster admin password
HP / LeftHand SAN
HP/LeftHand SANs are optimized for virtualized environments with VMware ESX and Microsoft Hyper-V, though the OpenStack integration provides additional support for various other virtualized environments (Xen, KVM, OpenVZ, etc.) by exposing the volumes via iSCSI to connect to the instances.

The HpSanISCSIDriver allows you to use an HP/LeftHand SAN that supports the Cliq interface. Every supported volume operation translates into a Cliq call in the backend.

To use Nova with an HP/LeftHand SAN, you should set the following required parameters in nova.conf:

set volume_driver=nova.volume.san.HpSanISCSIDriver
set the san_ip flag to the hostname or VIP of your Virtual Storage Appliance (VSA)
set san_login and san_password to the username and password of the SSH user with all necessary privileges on the appliance
set san_ssh_port=16022; the default is set to 22, but the default for the VSA is usually 16022
set san_clustername to the name of the cluster on which the associated volumes will be created

Some of the optional settings, with their default values:

san_thin_provision=True; set it to False to disable thin provisioning
san_is_local=False; this is almost always False for this driver. Setting it to True will try to run the Cliq commands locally instead of over SSH

A consolidated nova.conf sketch with these flags appears after this list.
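Here is that sketch; the IP, credentials and cluster name are placeholders:

volume_driver=nova.volume.san.HpSanISCSIDriver
san_ip=10.0.0.10
san_login=stackuser
san_password=secret
san_ssh_port=16022
san_clustername=vsa-cluster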
For Xen this will be the hypervisor hostname. This can be done either through Cliq or the Centralized Management Console.
The command arguments are:

dev_name: A device name where the volume will be attached in the system at /dev/dev_name. This value is typically vda.

id: The ID of the volume to boot from, as shown in the output of nova volume-list.

type: This is either snap, which means that the volume was created from a snapshot, or anything other than snap (a blank string is valid). In the example above, the volume was not created from a snapshot, so we will leave this field blank in our example below.

size (GB): The size of the volume, in GB. It is safe to leave this blank and have the Compute service infer the size.

delete_on_terminate: A boolean to indicate whether the volume should be deleted when the instance is terminated. True can be specified as True or 1. False can be specified as False or 0.
Note
Because of bug #1008622, you must specify an image when booting from a volume, even though this image will not be used. The following example will attempt to boot from the volume with ID=13; it will not delete on terminate. Replace the --image flag with a valid image on your system, and --key_name with a valid keypair name:
$ nova boot --image f4addd24-4e8a-46bb-b15d-fae2591f1a35 --flavor 2 --key_name mykey --block_device_mapping vda=13:::0 boot-from-vol-test
12. Scheduling
Compute uses the nova-scheduler service to determine how to dispatch compute and volume requests. For example, the nova-scheduler service determines which host a VM should launch on. The term "host" in the context of filters means a physical node that has a nova-compute service running on it. The scheduler is configurable through a variety of options. Compute is configured with the following default scheduler options:
scheduler_driver=nova.scheduler.multi.MultiScheduler
volume_scheduler_driver=nova.scheduler.chance.ChanceScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
compute_fill_first_cost_fn_weight=-1.0
Compute is configured by default to use the Multi Scheduler, which allows the admin to specify different scheduling behavior for compute requests versus volume requests. The volume scheduler is configured by default as a Chance Scheduler, which picks a host at random that has the nova-volume service running. The compute scheduler is configured by default as a Filter Scheduler, described in detail in the next section. In the default configuration, this scheduler will only consider hosts that are in the requested availability zone (AvailabilityZoneFilter), that have sufficient RAM available (RamFilter), and that are actually capable of servicing the request (ComputeFilter). From the resulting filtered list of eligible hosts, the scheduler will assign a cost to each host based on the amount of free RAM (nova.scheduler.least_cost.compute_fill_first_cost_fn), will multiply each cost value by -1 (compute_fill_first_cost_fn_weight), and will select the host with the minimum cost. This is equivalent to selecting the host with the maximum amount of RAM available.
Filter Scheduler
The Filter Scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created. This Scheduler can only be used for scheduling compute requests, not volume requests, i.e. it can only be used with the compute_scheduler_driver configuration option.
Filters
When the Filter Scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the costs and weight section.
Figure 12.1. Filtering
The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters that will be available for use by the scheduler. The default setting specifies all of the filters that are included with the Compute service:
scheduler_available_filters=nova.scheduler.filters.all_filters
This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called myfilter.MyFilter and you wanted to use both the built-in filters and your custom filter, your nova.conf file would contain:
scheduler_available_filters=nova.scheduler.filters.all_filters scheduler_available_filters=myfilter.MyFilter
The scheduler_default_filters configuration option in nova.conf defines the list of filters that will be applied by the nova-scheduler service. As mentioned above, the default filters are:
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
AggregateInstanceExtraSpecsFilter
Matches properties defined in an instance type's extra specs against admin-defined properties on a host aggregate. See the host aggregates section for documentation on how to use this filter.
AllHostsFilter
This is a no-op filter: it does not eliminate any of the available hosts.
AvailabilityZoneFilter
Filters hosts by availability zone. This filter must be enabled for the scheduler to respect availability zones in requests.
ComputeCapabilitiesFilter
Matches properties defined in an instance type's extra specs against compute capabilities.
ComputeFilter
Filters hosts by flavor (also known as instance type) and image properties. The scheduler will check to ensure that a compute host has sufficient capabilities to run a virtual machine instance that corresponds to the specified flavor. If the image has properties specified, this filter will also check that the host can support them. The image properties that the filter checks for are:

architecture: describes the machine architecture required by the image. Examples are i686, x86_64, arm, and powerpc.

hypervisor_type: describes the hypervisor required by the image. Examples are xen, kvm, qemu, and xenapi.

vm_mode: the virtual machine mode describes the hypervisor application binary interface (ABI) required by the image. Examples are 'xen' for the Xen 3.0 paravirtual ABI, 'hvm' for the native ABI, 'uml' for the User Mode Linux paravirtual ABI, and exe for the container virt executable ABI.

In general, this filter should always be enabled.
CoreFilter
Only schedule instances on hosts if there are sufficient CPU cores available. If this filter is not set, the scheduler may overprovision a host based on cores (i.e., the virtual cores running on instances may exceed the physical cores). This filter can be configured to allow a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio configuration option in nova.conf. The default setting is:
cpu_allocation_ratio=16.0
With this setting, if there are 8 vCPUs on a node, the scheduler will allow up to 128 vCPUs worth of instances to be run on that node. To disallow vCPU overcommitment set:
cpu_allocation_ratio=1.0
DifferentHostFilter
Schedule the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_host as the key and a list of instance uuids as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
ImagePropertiesFilter
Filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, and virtual machine mode. E.g., an instance might require a host that runs an ARM-based processor and QEMU as the hypervisor. An image can be decorated with these properties using:
glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
IsolatedHostsFilter
Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration options. For example:
isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
JsonFilter
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:

=, <, >, in, <=, >=, not, or, and

The filter supports the following variables:

$free_ram_mb, $free_disk_mb, $total_usable_ram_mb, $vcpus_total, $vcpus_used

Using the nova command-line tool, use the --hint flag:
$ nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1
RamFilter
Only schedule instances on hosts if there is sufficient RAM available. If this filter is not set, the scheduler may overprovision a host based on RAM (i.e., the RAM allocated by virtual machine instances may exceed the physical RAM).
This filter can be configured to allow a fixed amount of RAM overcommitment by using the ram_allocation_ratio configuration option in nova.conf. The default setting is:
ram_allocation_ratio=1.5
With this setting, if there is 1GB of free RAM, the scheduler will allow instances up to 1.5GB in size to be run on that host.
RetryFilter
Filter out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter will prevent the scheduler from retrying that host for the service request. This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than zero.
SameHostFilter
Schedule the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using same_host as the key and a list of instance uuids as the value. This filter is the opposite of the DifferentHostFilter. Using the nova command-line tool, use the --hint flag:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
SimpleCIDRAffinityFilter
Schedule the instance based on a host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:

build_near_host_ip: The first IP address in the subnet (e.g., 192.168.1.1)

cidr: The CIDR that corresponds to the subnet (e.g., /24)
Using the nova command-line tool, use the --hint flag. For example, to specify the IP subnet 192.168.1.1/24
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
The Filter Scheduler takes the hosts that remain after the filters have been applied and applies one or more cost functions to each host to get numerical scores for each host. Each cost score is multiplied by a weighting constant specified in the nova.conf config
file. The weighting constant configuration option is the name of the cost function, with the _weight string appended. Here is an example of specifying a cost function and its corresponding weight:
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
compute_fill_first_cost_fn_weight=-1.0
Multiple cost functions can be specified in the least_cost_functions configuration option, separated by commas. For example:
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn,nova.scheduler.least_cost.noop_cost_fn
compute_fill_first_cost_fn_weight=-1.0
noop_cost_fn_weight=1.0
If there are multiple cost functions, then the weighted cost scores are added together. The scheduler selects the host that has the minimum weighted cost. The Compute service comes with three cost functions:
nova.scheduler.least_cost.compute_fill_first_cost_fn
This cost function calculates the amount of free memory (RAM) available on the node. Because the scheduler minimizes cost, if this cost function is used as a weight of +1, by doing:
compute_fill_first_cost_fn_weight=1.0
then the scheduler will tend to "fill up" hosts, scheduling virtual machine instances to the same host until there is no longer sufficient RAM to service the request, and then moving to the next node. If the user specifies a weight of -1 by doing:
compute_fill_first_cost_fn_weight=-1.0
then the scheduler will favor hosts that have the most amount of available RAM, leading to a "spread-first" behavior.
nova.scheduler.least_cost.retry_host_cost_fn
This cost function adds additional cost for retrying scheduling a host that was already used for a previous scheduling attempt. The normal method of using this function is to set retry_host_cost_fn_weight to a positive value, so that hosts which consistently encounter build failures will be used less often.
nova.scheduler.least_cost.noop_cost_fn
This cost function returns 1 for all hosts. It is a "no-op" cost function (i.e., it does not do anything to discriminate among hosts). In practice, this cost function is never used.
Other Schedulers
While an administrator is likely to only need to work with the Filter Scheduler, Compute comes with other schedulers as well, described below.
Chance Scheduler
The Chance Scheduler (nova.scheduler.chance.ChanceScheduler) randomly selects from the lists of filtered hosts. It is the default volume scheduler.
Multi Scheduler
The Multi Scheduler nova.scheduler.multi.MultiScheduler holds multiple subschedulers, one for nova-compute requests and one for nova-volume requests. It is the default top-level scheduler as specified by the scheduler_driver configuration option.
Simple Scheduler
The Simple Scheduler (nova.scheduler.simple.SimpleScheduler) implements a naive scheduler that tries to find the least loaded host (i.e., implements a "spread-first" algorithm). It can schedule requests for both nova-compute and nova-volume. The Simple Scheduler supports the following configuration options:
max_gigabytes=10000
Host aggregates
Overview
Host aggregates are a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators. Host aggregates started out as a way to use Xen hypervisor resource pools, but have been generalized to provide a mechanism to allow administrators to assign key-value pairs to groups of machines. Each node can be in multiple aggregates, each aggregate can have multiple key-value pairs, and the same key-value pair can be assigned to multiple aggregates. This information can be used in the scheduler to enable advanced scheduling, to set up Xen hypervisor resource pools, or to define logical groups for migration.
Command-line interface
The nova command-line tool supports the following aggregate-related commands:

nova aggregate-list: Print a list of all aggregates.
nova aggregate-create <name> <availability-zone>: Create a new aggregate named <name> in availability zone <availability-zone>. Returns the ID of the newly created aggregate.
nova aggregate-delete <id>: Delete an aggregate with id <id>.
nova aggregate-details <id>: Show details of the aggregate with id <id>.
nova aggregate-add-host <id> <host>: Add host with name <host> to aggregate with id <id>.
nova aggregate-remove-host <id> <host>: Remove the host with name <host> from the aggregate with id <id>.
nova aggregate-set-metadata <id> <key=value> [<key=value> ...]: Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
nova aggregate-update <id> <name> [<availability_zone>]: Update the aggregate's name and optionally its availability zone.
nova host-list: List all hosts by service.
nova host-update --maintenance [enable | disable]: Put/resume a host into/from maintenance.
Note
These commands are only accessible to administrators. If the username and tenant you are using to access the Compute service do not have the admin role, or have not been explicitly granted the appropriate privileges, you will see one of the following errors when trying to use these commands:
ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler. Add the following line to /etc/nova/nova.conf on the host that runs the nova-scheduler service to enable host aggregates filtering, as well as the other filters that are typically enabled:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
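For the example that follows, an aggregate carrying the ssd=true key-value pair needs to exist and contain the hosts in question. Here is a sketch using the commands listed earlier; the aggregate name is hypothetical, the aggregate ID is assumed to be 1, and node1/node2 are the hosts referred to later:

$ nova aggregate-create fast-io nova
$ nova aggregate-set-metadata 1 ssd=true
$ nova aggregate-add-host 1 node1
$ nova aggregate-add-host 1 node2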
Next, we use the nova flavor-create command to create a new flavor called ssd.large with an ID of 6, 8GB of RAM, 80GB root disk, and 4 vCPUs.
$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 6  | ssd.large | 8192      | 80   | 0         |      | 4     | 1           | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
Once the flavor has been created, we specify one or more key-value pairs that must match the key-value pairs on the host aggregates. In this case, there's only one key-value pair, ssd=true. Setting a key-value pair on a flavor is done using the nova-manage instance_type set_key command.
# nova-manage instance_type set_key --name=ssd.large --key=ssd --value=true
Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.
$ nova flavor-show ssd.large
+----------------------------+-------------------+
| Property                   | Value             |
+----------------------------+-------------------+
| OS-FLV-DISABLED:disabled   | False             |
| OS-FLV-EXT-DATA:ephemeral  | 0                 |
| disk                       | 80                |
| extra_specs                | {u'ssd': u'true'} |
| id                         | 6                 |
| name                       | ssd.large         |
| os-flavor-access:is_public | True              |
| ram                        | 8192              |
| rxtx_factor                | 1.0               |
| swap                       |                   |
| vcpus                      | 4                 |
+----------------------------+-------------------+
Now, when a user requests an instance with the ssd.large flavor, the scheduler will only consider hosts with the ssd=true key-value pair. In this example, that would only be node1 and node2.
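To exercise the new flavor, a boot request might look like the following sketch; the image UUID and instance name are placeholders:

$ nova boot --image <image-uuid> --flavor ssd.large ssd-test-instance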
API Server
At the heart of the cloud framework is an API server. This API server makes command and control of the hypervisor, storage, and networking programmatically available to users, realizing the definition of cloud computing. The API endpoints are basic HTTP web services which handle authentication, authorization, and basic command and control functions using various API interfaces under the Amazon, Rackspace, and related models. This enables API compatibility with multiple existing tool sets created for interaction with offerings from other vendors. This broad compatibility prevents vendor lock-in.
Message Queue
A messaging queue brokers the interaction between compute nodes (processing), volumes (block storage), the networking controllers (software which controls network infrastructure), API endpoints, the scheduler (determines which physical hardware to allocate to a virtual resource), and similar components. Communication to and from the cloud controller is by HTTP requests through multiple API endpoints. A typical message passing event begins with the API server receiving a request from a user. The API server authenticates the user and ensures that the user is permitted to issue the subject command. Availability of objects implicated in the request is evaluated and, if available, the request is routed to the queuing engine for the relevant workers. Workers continually listen to the queue based on their role, and occasionally their type hostname. When such listening produces a work request, the worker takes assignment of the task and begins its execution. Upon completion, a response is dispatched to the queue which is received by the API server and relayed to the originating user. Database entries are queried, added, or removed as necessary throughout the process.
Compute Worker
Compute workers manage computing instances on host machines. Through the API, commands are dispatched to compute workers to:

Run instances
Terminate instances
Reboot instances
Attach volumes
Detach volumes
Get console output
Network Controller
The Network Controller manages the networking resources on host machines. The API server dispatches commands through the message queue, which are subsequently processed by Network Controllers. Specific operations include:

Allocating fixed IP addresses
Configuring VLANs for projects
Configuring networks for compute nodes
Volume Workers
Volume Workers interact with iSCSI storage to manage LVM-based instance volumes. Specific functions include:

Creating volumes
Deleting volumes
Establishing Compute volumes

Volumes may easily be transferred between instances, but may be attached to only a single instance at a time.
and password, set as environment variables for convenience, and then you have the ability to send commands to your cloud on the command line.

To install python-novaclient, download the tarball from http://pypi.python.org/pypi/python-novaclient/2.6.3#downloads and then install it in your favorite Python environment.
$ curl -O http://pypi.python.org/packages/source/p/python-novaclient/python-novaclient-2.6.3.tar.gz
$ tar -zxvf python-novaclient-2.6.3.tar.gz
$ cd python-novaclient-2.6.3
$ sudo python setup.py install
Now that you have installed the python-novaclient, confirm the installation by entering:
$ nova help
usage: nova [--debug] [--os-username OS_USERNAME] [--os-password OS_PASSWORD]
            [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL]
            [--os-region-name OS_REGION_NAME] [--service-type SERVICE_TYPE]
            [--service-name SERVICE_NAME] [--endpoint-type ENDPOINT_TYPE]
            [--version VERSION] <subcommand> ...
In return, you will get a listing of all the commands and parameters for the nova command-line client. By setting up the required parameters as environment variables, you can fly through these commands on the command line. You can pass options such as --os-username on the nova command line, or set them as environment variables:
$ export OS_USERNAME=joecool
$ export OS_PASSWORD=coolword
$ export OS_TENANT_NAME=coolu
Using the Identity Service, you are supplied with an authentication endpoint, which nova recognizes as the OS_AUTH_URL.
$ export OS_AUTH_URL=http://hostname:5000/v2.0
$ export NOVA_VERSION=1.1
For administrators, the standard pattern for executing a nova-manage command is:
$ nova-manage category command [args]
For example, to obtain a list of all projects:

nova-manage project list

Run without arguments to see a list of available command categories:

nova-manage

You can also run with a category argument such as service to see a list of all commands in that category:

nova-manage service
Usage statistics
The nova command-line tool can provide some basic statistics on resource usage for hosts and instances. For more sophisticated monitoring, see the Ceilometer project, which is currently under development. You may also wish to consider installing tools such as Ganglia or Graphite if you require access to more detailed data.
Use the nova host-describe command to retrieve a summary of resource usage of all of the instances running on the host. The "cpu" column is the sum of the virtual CPUs of all of the instances running on the host, the "memory_mb" column is the sum of the memory (in MB) allocated to the instances running on the host, and the "disk_gb" column is the sum of the root and ephemeral disk sizes (in GB) of the instances running on the host.
Note that these values are computed using only information about the flavors of the instances running on the hosts. This command does not query the CPU usage, memory usage, or hard disk usage of the physical host.
$ nova host-describe c2-compute-01
+---------------+----------------------------------+-----+-----------+---------+
| HOST          | PROJECT                          | cpu | memory_mb | disk_gb |
+---------------+----------------------------------+-----+-----------+---------+
| c2-compute-01 | (total)                          | 24  | 96677     | 492     |
| c2-compute-01 | (used_max)                       | 2   | 2560      | 0       |
| c2-compute-01 | (used_now)                       | 4   | 7168      | 0       |
| c2-compute-01 | f34d8f7170034280a42f6318d1a4af34 | 2   | 2560      | 0       |
+---------------+----------------------------------+-----+-----------+---------+
Use the nova usage-list command to get summary statistics for each tenant:
$ nova usage-list
Usage from 2012-10-10 to 2012-11-08:
+----------------------------------+-----------+--------------+-----------+---------------+
| Tenant ID                        | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
+----------------------------------+-----------+--------------+-----------+---------------+
| 0eec5c34a7a24a7a8ddad27cb81d2706 | 8         | 240031.10    | 468.81    | 0.00          |
| 92a5d9c313424537b78ae3e42858fd4e | 5         | 483568.64    | 236.12    | 0.00          |
| f34d8f7170034280a42f6318d1a4af34 | 106       | 16888511.58  | 9182.88   | 0.00          |
+----------------------------------+-----------+--------------+-----------+---------------+
Using Migration
Before starting migrations, review the Configuring Migrations section. Migration provides a scheme to move running instances from one OpenStack Compute server to another. Use the feature as described below. First, look at the running instances to get the ID of the instance you wish to migrate.
# nova list
+--------------------------------------+------+--------+-----------------+
| ID                                   | Name | Status | Networks        |
+--------------------------------------+------+--------+-----------------+
| d1df1b5a-70c4-4fed-98b7-423362f2c47c | vm1  | ACTIVE | private=a.b.c.d |
| d693db9e-a7cf-45ef-a7c9-b3ecb5f22645 | vm2  | ACTIVE | private=e.f.g.h |
+--------------------------------------+------+--------+-----------------+
Second, look at information associated with that instance - our example is vm1 from above.
# nova show d1df1b5a-70c4-4fed-98b7-423362f2c47c
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
...
| OS-EXT-SRV-ATTR:host                | HostB                                |
...
| flavor                              | m1.tiny                              |
| id                                  | d1df1b5a-70c4-4fed-98b7-423362f2c47c |
| name                                | vm1                                  |
| private network                     | a.b.c.d                              |
| status                              | ACTIVE                               |
...
+-------------------------------------+--------------------------------------+
In this example, vm1 is running on HostB. Third, select the server to migrate instances to.
# nova-manage service list
HostA  nova-scheduler  enabled  :-)  None
HostA  nova-volume     enabled  :-)  None
HostA  nova-network    enabled  :-)  None
HostB  nova-compute    enabled  :-)  None
HostC  nova-compute    enabled  :-)  None
In this example, HostC can be picked because nova-compute is running on it. Fourth, ensure that HostC has enough resources for the migration.
# nova-manage service describe_resource HostC
HOST             PROJECT  cpu  mem(mb)
HostC(total)              16   32232
HostC(used_now)           13   21284
HostC(used_max)           13   21284
HostC            p1       5    10240
HostC            p2       5    10240
.....
cpu: the number of cpus
mem(mb): total amount of memory (MB)
hdd: total amount of NOVA-INST-DIR/instances (GB)
The 1st line shows the total amount of resources the physical server has. The 2nd line shows the currently used resources. The 3rd line shows the maximum used resources. The 4th line and below show the resources used per project. Finally, use the nova live-migration command to migrate the instances.
# nova live-migration bee83dd3-5cc9-47bc-a1bd-6d11186692d0 HostC
Migration of bee83dd3-5cc9-47bc-a1bd-6d11186692d0 initiated.
Make sure the instances are migrated successfully with nova list. If instances are still running on HostB, check the log files (source and destination nova-compute, and nova-scheduler) to determine why.
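For example, a quick check might look like the following (a hedged sketch; log file locations assume the default /var/log/nova paths):

$ nova show d1df1b5a-70c4-4fed-98b7-423362f2c47c | grep OS-EXT-SRV-ATTR:host
$ tail -n 50 /var/log/nova/nova-compute.log
$ tail -n 50 /var/log/nova/nova-scheduler.log

Run the tail commands on the source and destination compute hosts and on the scheduler host.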
Note
While the nova command is called live-migration, under the default Compute configuration options the instances are suspended before migration. See the Configuring Migrations section for more details.
First, review the status of the host using the nova database; some of the important information is highlighted below. This example converts an EC2 API instance ID into an OpenStack ID; if you used the nova commands, you can substitute the ID directly. You can find the credentials for your database in /etc/nova/nova.conf.
SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
*************************** 1. row ***************************
 created_at: 2012-06-19 00:48:11
 updated_at: 2012-07-03 00:35:11
 deleted_at: NULL
...
         id: 5561
...
power_state: 5
   vm_state: shutoff
...
   hostname: at3-ui02
       host: np-rcc54
...
       uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
...
 task_state: NULL
...
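To see every instance that was on the failed host rather than a single row, a query along the following lines can be used (a sketch; the column names are taken from the row shown above, and the deleted flag is assumed to be 0 for live records):

SELECT uuid, hostname, vm_state, power_state FROM instances WHERE host = 'np-rcc54' AND deleted = 0;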
Recover the VM
Armed with the information of VMs on the failed host, determine which compute host the affected VMs should be moved to. In this case, the VM will move to np-rcc46, which is achieved using this database command:
UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';
Next, if using a hypervisor that relies on libvirt (such as KVM), it is a good idea to update the libvirt.xml file (found in /var/lib/nova/instances/[instance ID]). The important changes to make are to change the DHCPSERVER value to the IP address of the compute host that is the VM's new home, and to update the VNC IP if it isn't already 0.0.0.0. Next, reboot the VM:
$ nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06
In theory, the above database update and nova reboot command are all that is required to recover a VM from a failed host. However, if further problems occur, consider looking at recreating the network filter configuration using virsh, restarting the nova services, or updating the vm_state and power_state in the nova database.
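As a hedged sketch of those follow-up actions, run on the new compute host (service names and database credentials are assumptions that vary by distribution and deployment):

# service nova-compute restart
# virsh list --all
# mysql -u nova -p nova -e "UPDATE instances SET vm_state='active', power_state=1 WHERE uuid='3f57699a-e773-4650-a443-b4b37eed5a06';"

The last command forcibly resets the row for the recovered VM; power_state 1 corresponds to RUNNING.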
Repeat the steps for the libvirt-qemu owned files if those also needed to change. Restart the services. Following this, you can run the find command (see the sketch below) to verify that all files are using the correct identifiers.
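A minimal sketch of such a check, assuming the default state directory and the nova and libvirt-qemu owners mentioned above:

$ sudo find /var/lib/nova -not -user nova -not -user libvirt-qemu -ls

Any files listed by this command are still owned by an unexpected user and should be revisited.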
In this section, we will review managing your cloud after a disaster, and how to easily back up the persistent storage volumes, which is another approach when you face a disaster. Even apart from the disaster scenario, backups are mandatory. While the Diablo release includes the snapshot functions, both the backup procedure and the utility also apply to the Cactus release. For reference, you can find a DRP definition here: http://en.wikipedia.org/wiki/Disaster_Recovery_Plan.
Before going further, and in order to prevent the admin from making fatal mistakes, note that the instances won't be lost: since no "destroy" or "terminate" command has been invoked, the files for the instances remain on the compute node. The plan is to perform the following tasks, in that exact order. Any extra step would be dangerous at this stage:
1. Get the current relation from a volume to its instance, since we will recreate the attachment (a minimal sketch for this step follows the list).
2. Update the database in order to clean the stalled state. (After that, we won't be able to perform the first step.)
3. Restart the instances (that is, go from a "shutdown" to a "running" state).
4. After the restart, reattach the volumes to their respective instances.
5. This step, which is not mandatory, consists of SSHing into the instances in order to reboot them.
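For the first step, a minimal sketch (assuming MySQL as the nova database backend and using the same volumes-table columns as the queries below) is to dump the current attachments into a working file, which is then reused by the reattach script later in this section:

$ mysql -u nova -p nova --skip-column-names -e \
  "SELECT id, instance_id, mountpoint FROM volumes WHERE attach_status = 'attached';" \
  | tr '\t' ' ' > /tmp/volumes_list

Each line of the resulting file contains a volume ID, an instance ID, and a mount point, separated by spaces.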
mysql> use nova;
mysql> update volumes set mountpoint=NULL;
mysql> update volumes set status="available" where status <> "error_deleted";
mysql> update volumes set attach_status="detached";
mysql> update volumes set instance_id=0;
Now, when running nova volume-list, all volumes should be available.

Instances Restart

We need to restart the instances. This can be done via a simple nova reboot $instance. At that stage, depending on your image, some instances will completely reboot and become reachable, while others will stop at the "plymouth" stage. DO NOT reboot the stopped ones a second time (see the fourth step below); whether an instance hangs here depends on whether you added an /etc/fstab entry for that volume or not. Images built with the cloud-init package will remain in a pending state, while others will skip the missing volume and start. (More information is available on help.ubuntu.com.) But remember that the idea of this stage is only to ask nova to reboot every instance, so that the stored state is preserved.

Volume Attachment

After the restart, we can reattach the volumes to their respective instances. Now that nova has restored the right status, it is time to perform the attachments via nova volume-attach. Here is a simple snippet that uses the file we created:
#!/bin/bash
# Reattach each volume listed in $volumes_tmp_file, the file created earlier
# (one "volume instance mount_point" entry per line).
while read line; do
    volume=`echo $line | cut -f 1 -d " "`
    instance=`echo $line | cut -f 2 -d " "`
    mount_point=`echo $line | cut -f 3 -d " "`
    echo "ATTACHING VOLUME FOR INSTANCE - $instance"
    nova volume-attach $instance $volume $mount_point
    sleep 2
done < $volumes_tmp_file
At this stage, instances which were pending on the boot sequence (plymouth) will automatically continue their boot and restart normally, while the ones which had already booted will see the volume.

SSH into instances

If some services depend on the volume, or if a volume has an entry in fstab, it can be a good idea to simply restart the instance. This restart needs to be made from the instance itself, not via nova. So, we SSH into the instance and perform a reboot:
$ shutdown -r now
Voila! You have successfully recovered your cloud. Here are some suggestions:

Use the errors=remount option in the fstab file, which will prevent data corruption: the system will lock any write to the disk if it detects an I/O error. This option should be set both on the nova-volume server (the one which performs the iSCSI connection to the SAN) and in the instances' fstab files (see the example fstab entry after these suggestions).

Do not add the entry for the SAN's disks to the nova-volume server's fstab file. Some systems will hang on that step, which means you could lose access to your cloud controller. In order to re-establish the session manually, you would run the iSCSI login command before performing the mount.
For your instances, if the whole /home/ directory is on the disk, do not completely empty /home before mapping the disk onto it; instead, leave the user's directory in place with the user's bash files and the authorized_keys file. This allows you to connect to the instance even without the volume attached, if you allow connections only via public keys.
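For reference, an illustrative fstab entry inside an instance might look like this (the device name and mount point are assumptions; note that for ext3/ext4 the option is actually spelled errors=remount-ro):

/dev/vdb  /mnt/data  ext4  defaults,errors=remount-ro  0  2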
C- Scripted DRP
You can download from here a bash script which performs these five steps. The "test mode" allows you to perform that whole sequence for only one instance. To reproduce the power loss, connect to the compute node which runs that same instance and close the iSCSI session. Do not detach the volume via nova volume-detach; instead, manually close the iSCSI session. In the following example, the iSCSI session is number 15 for that instance:
$ iscsiadm -m session -u -r 15
Do not forget the flag -r; otherwise, you will close ALL sessions.
3. Restart and run the Apache server after completing the configuration below. Install the OpenStack Dashboard, as root:
# apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard
# yum install -y memcached mod-wsgi openstack-dashboard
Next, modify the variable CACHE_BACKEND in /etc/openstack-dashboard/local_settings.py to match the values set in /etc/memcached.conf (Ubuntu/Debian) or /etc/sysconfig/memcached.conf (Fedora/RHEL). Open /etc/openstack-dashboard/local_settings.py and look for this line:
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
Note
The address and port in the new value need to be equal to the ones set in /etc/memcached.conf (Ubuntu/Debian) or /etc/sysconfig/memcached.conf (Fedora/RHEL). If you change the memcached settings, restart the Apache web server for the changes to take effect.
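For example, on Ubuntu you can check the listen address and port that memcached is actually using like this (a hedged example; the stock Ubuntu defaults are -l 127.0.0.1 and -p 11211, matching the CACHE_BACKEND value above):

$ grep -E '^-(l|p)' /etc/memcached.conf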
Note
This guide has selected memcache as a session store for OpenStack Dashboard. There are other options available, each with benefits and drawbacks. Refer to the OpenStack Dashboard Session Storage section for more information.
Note
To change the time zone, you can use either the dashboard or edit the following parameter in /etc/openstack-dashboard/local_settings.py:
TIME_ZONE = "UTC"
and a Firefox browser. Once you connect to the Dashboard with the URL, you should see a login window. Enter the credentials for users you created with the Identity Service, Keystone. For example, enter "admin" for the username and "secretword" as the password.
Once you know where to make the appropriate changes, it's super simple. Step-by-step:
1. Create a graphical logo with a transparent background. The text "TGen Cloud" in this example is actually rendered via .png files of multiple sizes created with a graphics program. Use a 200×27 image for the logged-in banner graphic, and a 365×50 image for the login screen graphic.
2. Set the HTML title (shown at the top of the browser window) by adding the following line to /etc/openstack-dashboard/local_settings.py:
SITE_BRANDING = "Example, Inc. Cloud"
3. Upload your new graphic files to:
/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/
4. Create a new CSS stylesheet (we'll call ours custom.css) in the directory:
/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/css/
5. Edit your CSS file using the following as a starting point for customization, which simply overrides the Ubuntu customizations made in the ubuntu.css file. Change the colors and image file names as appropriate, though the relative directory paths should be the same.
/*
 * New theme colors for dashboard that override the defaults:
 *  dark blue: #355796 / rgb(53, 87, 150)
 *  light blue: #BAD3E1 / rgb(186, 211, 225)
 *
 * By Preston Lee <[email protected]>
 */
h1.brand {
  background: #355796 repeat-x top left;
  border-bottom: 2px solid #BAD3E1;
}
h1.brand a {
  background: url(../img/my_cloud_logo_small.png) top left no-repeat;
}
#splash .login {
  background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center 35px;
}
#splash .login .modal-header {
  border-top: 1px solid #BAD3E1;
}
.btn-primary {
  background-image: none !important;
  background-color: #355796 !important;
  border: none !important;
  box-shadow: none;
}
.btn-primary:hover,
.btn-primary:active {
  border: none;
  box-shadow: none;
  background-color: #BAD3E1 !important;
  text-decoration: none;
}
7. Add a line to include your new stylesheet pointing to custom.css (the new line is the third link shown below):
...
<link href='{{ STATIC_URL }}bootstrap/css/bootstrap.min.css' media='screen' rel='stylesheet' />
<link href='{{ STATIC_URL }}dashboard/css/{% choose_css %}' media='screen' rel='stylesheet' />
<link href='{{ STATIC_URL }}dashboard/css/custom.css' media='screen' rel='stylesheet' />
...
8. Restart Apache just for good measure. On Ubuntu/Debian:
sudo service apache2 restart
On Fedora/RHEL:
sudo service httpd restart
9. Reload the dashboard in your browser and fine-tune your CSS as appropriate. You're done!
Select IP protocol TCP and enter 22 in "From Port" and "To Port", with CIDR 0.0.0.0/0. This opens port 22 for requests from any IP. If you want to allow requests only from a particular range of IP addresses, provide it in the CIDR field. Select IP protocol ICMP and enter -1 in "From Port" and "To Port", with CIDR 0.0.0.0/0. This allows ping from any IP. If you want to allow ping requests only from a particular range of IP addresses, provide it in the CIDR field.
Adding Keypair
Next, add a keypair. Once a keypair is added, the private key is downloaded. This key can be used to SSH to the launched instance.
Launching Instance
Click Images & Snapshots and launch a required instance from the list of images available.
Click launch on the required image. Provide a Server Name, select the flavor, the keypair added above and the default security group. Provide the number of instances required. Once these details are provided, click Launch Instance.
Once the status is Active, the instance is ready and we can ping and SSH to the instance.
3. Use the ssh-add command to ensure that the keypair is known to SSH:
$ ssh-add MyKey.pem
4. Copy the IP address from the MyFirstInstance. 5. Use the SSH command to make a secure connection to the instance:
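A hedged example of that command (the login user depends on the image; Ubuntu cloud images use "ubuntu", and <instance_ip> stands for the address copied in step 4):

$ ssh -i MyKey.pem ubuntu@<instance_ip>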
You should see a prompt asking "Are you sure you want to continue connecting (yes/no)?" Type yes and you have successfully connected.
Figure 14.1. NoVNC Process
About nova-consoleauth
Both client proxies leverage a shared service to manage token authentication called nova-consoleauth. This service must be running for either proxy to work. Many proxies of either type can be run against a single nova-consoleauth service in a cluster configuration. The nova-consoleauth shared service should not be confused with nova-console, which is a XenAPI-specific service that is not used by the most recent VNC proxy architecture.
Typical Deployment
A typical deployment will consist of the following components:
One nova-consoleauth process. Typically this runs on the controller host.
One or more nova-novncproxy services. This supports browser-based noVNC clients. For simple deployments, this service typically will run on the same machine as nova-api, since it proxies between the public network and the private compute host network.
One or more nova-xvpvncproxy services. This supports the special Java client discussed in this document. For simple deployments, this service typically will run on the same machine as nova-api, since it proxies between the public network and the private compute host network.
One or more compute hosts. These compute hosts must have correctly configured configuration options, as described below.
Specify 'novnc' to retrieve a URL suitable for pasting into a web browser. Specify 'xvpvnc' for a URL suitable for pasting into the Java client. So to request a web browser URL:
$ nova get-vnc-console [server_id] novnc
to libvirt only. For multi-host libvirt deployments this should be set to a host management IP on the same network as the proxies.
Note
If you intend to support live migration, you cannot specify a specific IP address for vncserver_listen, because that IP address will not exist on the destination host. The result is that live migration will fail and the following error will appear in the libvirtd.log file in the destination host:
error: qemuMonitorIORead:513 : Unable to read from monitor: Connection reset by peer
If you wish to support live migration in your deployment, you must specify a value of 0.0.0.0 for vncserver_listen.

vncserver_proxyclient_address - Defaults to 127.0.0.1. This is the address of the compute host that nova will instruct proxies to use when connecting to instance vncservers. For all-in-one XenServer domU deployments this can be set to 169.254.0.1. For multi-host XenServer domU deployments this can be set to a dom0 management IP on the same network as the proxies. For multi-host libvirt deployments this can be set to a host management IP on the same network as the proxies.

novncproxy_base_url=[base url for client connections] - This is the public base URL to which clients will connect. "?token=abc" will be added to this URL for the purposes of auth. When using the system as described in this document, an appropriate value is "http://$SERVICE_HOST:6080/vnc_auto.html" where SERVICE_HOST is a public hostname.

xvpvncproxy_base_url=[base url for client connections] - This is the public base URL to which clients will connect. "?token=abc" will be added to this URL for the purposes of auth. When using the system as described in this document, an appropriate value is "http://$SERVICE_HOST:6081/console" where SERVICE_HOST is a public hostname.
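Putting these options together, an illustrative nova.conf fragment for a multi-host libvirt deployment might look like this (the addresses and hostnames are placeholders, not defaults):

vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.2
novncproxy_base_url=http://PUBLIC_HOSTNAME:6080/vnc_auto.html
xvpvncproxy_base_url=http://PUBLIC_HOSTNAME:6081/console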
Then, to create a session, first request an access URL using python-novaclient and then run the client. To retrieve the access URL:
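For example, mirroring the novnc request shown earlier, the Java-client variant is:

$ nova get-vnc-console [server_id] xvpvnc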
To run client:
$ java -jar VncViewer.jar [access_url]
nova-novncproxy (novnc)
You will need the novnc package installed, which contains the nova-novncproxy service. As root:
# apt-get install novnc
The configuration option parameter should point to your nova.conf configuration file, which includes the message queue server address and credentials. By default, nova-novncproxy binds on 0.0.0.0:6080. In order to connect the service to your nova deployment, add the following two configuration options to your nova.conf file:

vncserver_listen=0.0.0.0

This configuration option allows you to specify the address for the VNC service to bind on; make sure it is assigned one of the compute node interfaces. This address will be the one used by your domain file:

<graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
Note
In order to have live migration working, make sure to use the 0.0.0.0 address.

vncserver_proxyclient_address=127.0.0.1

This is the address of the compute host that nova will instruct proxies to use when connecting to instance vncservers.
Note
The previous vnc proxy implementation, called nova-vncproxy, has been deprecated.
Then, paste the URL into your web browser. Additionally, you can use the OpenStack Dashboard (codenamed Horizon), to access browser-based VNC consoles for instances.
Note that novncproxy_base_url and xvpvncproxy_base_url use a public IP; this is the URL that is ultimately returned to clients, who generally will not have access to your private network. Your PROXYSERVER must be able to reach vncserver_proxyclient_address, as that is the address over which the VNC connection will be proxied. See "Important nova-compute Options" for more information.

Q: My noVNC does not work with recent versions of web browsers. Why?

A: Make sure you have python-numpy installed, which is required to support a newer version of the WebSocket protocol (HyBi-07+). Also, if you are using Diablo's nova-vncproxy, note that support for this protocol is not provided.

Q: How do I adjust the dimensions of the VNC window image in horizon?

A: These values are hard-coded in a Django HTML template. To alter them, you must edit the template file _detail_vnc.html. The location of this file will vary based on Linux distribution. On Ubuntu 12.04, the file can be found at /usr/share/pyshared/horizon/dashboards/nova/templates/nova/instances_and_volumes/instances/_detail_vnc.html. Modify the width and height parameters:
<iframe src="{{ vnc_url }}" width="720" height="430"></iframe>
server - Hostname or IP address of the host that runs the attestation service.
port - HTTPS port for the attestation service.
server_ca_file - Certificate file used to verify the attestation server's identity.
api_url - The attestation service URL path.
auth_blob - An authentication blob, which is required by the attestation service.
Add the following lines to /etc/nova/nova.conf in the DEFAULT and trusted_computing sections to enable scheduling support for Trusted Compute Pools, and edit the details of the trusted_computing section based on the details of your attestation service.
[DEFAULT]
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter

[trusted_computing]
server=10.1.71.206
port=8443
server_ca_file=/etc/nova/ssl.10.1.71.206.crt
# If using OAT v1.5, use this api_url:
api_url=/AttestationService/resources
# If using OAT pre-v1.5, use this api_url:
#api_url=/OpenAttestationWebServices/V1.0
auth_blob=i-am-openstack
Restart the nova-compute and nova-scheduler services after making these changes.
A user can request that their instance runs on a trusted host by specifying a trusted flavor when invoking the nova boot command.
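A hypothetical example, where "m1.trusted" stands for a flavor that the cloud administrator has associated with the trusted-host requirement (the flavor name and image ID are assumptions):

$ nova boot --image <image_id> --flavor m1.trusted --key_name mykeypair trusted-instance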
Features
Manage installation, uninstallation and testing of a software.
Support deployment on multiple machines.
Support target machines in different network segments.
Provide web UI to facilitate user operations.
Provide REST API to make it possible to integrate it with other tools.
Support parallel installation of software components.
OSes supported
Table 16.1. OSes supported
OpenStack Folsom (Compute, Glance, Swift, Keystone); OpenStack Essex (Nova with ...); supported across Ubuntu 10.10, 11.04, 11.10, and 12.04.
Glossary
dodai-deploy server - The server on which the dodai-deploy services are started.
Node - The machine that is the target of installation of software such as Nova, Glance, Swift, etc.
Proposal - The set of configurations which describes how to install a piece of software. The configurations include "Node config", "Config item", "Software config", and "Component config".
Node config - A configuration that describes which component is to be installed on a node.
Config item - A variable which can be used in the content of software config and component config.
Software config - A configuration that describes the content of a configuration file for all components.
Component config - A configuration that describes the content of a configuration file for only one component.
Installation
The $home in the following sections is the path of the home directory of dodai-deploy.

1. Download dodai-deploy. Execute the following commands on the dodai-deploy server and all the nodes.
$ sudo apt-get install git -y
$ git clone https://github.com/nii-cloud/dodai-deploy.git
$ cd dodai-deploy
2. Set up the dodai-deploy server. Execute the following commands on the dodai-deploy server to install the necessary software and modify its settings.
$ sudo $home/setup-env/setup.sh server
3. Set up nodes. Execute the following commands on all the nodes to install the necessary software and modify its settings.
The $server in the above command is the fully qualified domain name (fqdn) of the dodai-deploy server. You can confirm the fqdn with the following command.
$ sudo hostname -f
After the nodes are set up, the system time of the nodes should be synchronized with the dodai-deploy server.

4. Set up a storage device for Swift. You must set up a storage device before swift is installed. You should execute the commands for a physical device or for a loopback device on all nodes on which a swift storage server is to be installed. For a physical device, use the following command.
$ sudo $home/setup-env/setup-storage-for-swift.sh physical $storage_path $storage_dev
For example,
$ sudo $home/setup-env/setup-storage-for-swift.sh physical /srv/node sdb1
For a loopback device, for example,
$ sudo $home/setup-env/setup-storage-for-swift.sh loopback /srv/node sdb1 4
5. Create a volume group for nova-volume. You must create a volume group before nova-volume is installed. You should execute the commands for a physical device or for a loopback device on the node on which nova-volume is to be installed. For a physical device, use the following command.
$ sudo $home/setup-env/create-volume-group.sh physical $volume_group_name $device_path
For example,
$ sudo $home/setup-env/create-volume-group.sh physical nova-volumes /dev/sdb1
For a loopback device, for example,
$ sudo $home/setup-env/create-volume-group.sh loopback nova-volumes /root/volume.data 4
6. Start servers. Execute the following command on the dodai-deploy server to start the web server and job server.
$ sudo $home/script/start-servers production
You can stop the web server and job server with the following command.
$ sudo $home/script/stop-servers
Using web UI
You can find step-by-step guidance at http://$dodai_deploy_server:3000/.
Notes
1. SSH login to a nova instance after the nova test. An instance is started during the test of nova. After the test, you can log in to the instance by executing the following commands. For OpenStack Nova Diablo,
2. Glance should be installed before using nova, because nova depends on glance in the default settings. In /etc/nova/nova.conf the value of the image_service setting is nova.image.glance.GlanceImageService.

3. Change the Linux setting net.ipv4.ip_forward to 1 on the machine where nova-network will be installed, before installing nova, with the following command.
$ sudo sysctl -w net.ipv4.ip_forward=1
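To make the setting persist across reboots, you can also add it to the standard sysctl configuration file, for example:

$ echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf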
Next, create a file named openrc to contain your TryStack credentials, such as:
You can always retrieve your username and password from https://trystack.org/dash/ api_info/ after logging in with Facebook. Okay, you've created the basic scaffolding for your cloud user so that you can get some images and run instances on TryStack with your starter set of StackDollars. You're rich, man! Now to Part II!
and look for the images available in the text that returns. Look for the ID value.
+----+--------------------------------------+--------+--------+
| ID | Name                                 | Status | Server |
+----+--------------------------------------+--------+--------+
| 12 | natty-server-cloudimg-amd64-kernel   | ACTIVE |        |
| 13 | natty-server-cloudimg-amd64          | ACTIVE |        |
| 14 | oneiric-server-cloudimg-amd64-kernel | ACTIVE |        |
| 15 | oneiric-server-cloudimg-amd64        | ACTIVE |        |
+----+--------------------------------------+--------+--------+
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | N/A       | 0    | 1     |             |
| 2  | m1.small  | 2048      | 20   | N/A       | 0    | 1     |             |
| 3  | m1.medium | 4096      | 40   | N/A       | 0    | 2     |             |
| 4  | m1.large  | 8192      | 80   | N/A       | 0    | 4     |             |
| 5  | m1.xlarge | 16384     | 160  | N/A       | 0    | 8     |             |
+----+-----------+-----------+------+-----------+------+-------+-------------+
Create a keypair to launch the image, in a directory where you run the nova boot command later.
$ nova keypair-add mykeypair > mykeypair.pem
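SSH refuses private key files with open permissions, so restrict the file before using it (a standard step, not specific to TryStack):

$ chmod 600 mykeypair.pem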
Create a security group that enables public IP access for the webserver that will run WordPress for you. You can also enable port 22 for SSH.
$ nova secgroup-create openpub "Open for public"
$ nova secgroup-add-rule openpub icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule openpub tcp 22 22 0.0.0.0/0
Next, with the ID value of the server you want to launch and the ID of the flavor you want to launch, use your credentials to start up the instance with the identifier you got by looking at the image list.
$ nova boot --image 15 --flavor 2 --key_name mykeypair --security_groups openpub testtutorial
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| accessIPv4   |                                      |
| accessIPv6   |                                      |
| adminPass    | StuacCpAr7evnz5Q                     |
| config_drive |                                      |
| created      | 2012-03-21T20:31:40Z                 |
| flavor       | m1.small                             |
| hostId       |                                      |
| id           | 1426                                 |
| image        | oneiric-server-cloudimg-amd64        |
| key_name     | testkey2                             |
| metadata     | {}                                   |
| name         | testtut                              |
| progress     | 0                                    |
| status       | BUILD                                |
| tenant_id    | 296                                  |
| updated      | 2012-03-21T20:31:40Z                 |
| user_id      | facebook521113267                    |
| uuid         | be9f80e8-7b20-49e8-83cf-fa059a36c9f8 |
+--------------+--------------------------------------+
Now you can look at the state of the running instances by using nova list.
+------+---------+--------+----------------------+
| ID   | Name    | Status | Networks             |
+------+---------+--------+----------------------+
| 1426 | testtut | ACTIVE | internet=8.22.27.251 |
+------+---------+--------+----------------------+
The instance goes from launching to running in a short time, and you should be able to connect via SSH. Look at the IP addresses so that you can connect to the instance once it starts running.
+--------------------------------------+------+--------+------------------------+
| ID                                   | Name | Status | Networks               |
+--------------------------------------+------+--------+------------------------+
| 50191b9c-b26d-4b61-8404-f149c29acd5a | test | ACTIVE | local-net=192.168.4.35 |
+--------------------------------------+------+--------+------------------------+
+------------------+------------+
| Property         | Value      |
+------------------+------------+
| cpu0_time        | 9160000000 |
| memory           | 524288     |
| memory-actual    | 524288     |
| memory-rss       | 178040     |
| vda_errors       | -1         |
| vda_read         | 3146752    |
| vda_read_req     | 202        |
| vda_write        | 1024       |
| vda_write_req    | 1          |
| vnet0_rx         | 610        |
| vnet0_rx_drop    | 0          |
| vnet0_rx_errors  | 0          |
| vnet0_rx_packets | 7          |
| vnet0_tx         | 0          |
| vnet0_tx_drop    | 0          |
| vnet0_tx_errors  | 0          |
| vnet0_tx_packets | 0          |
+------------------+------------+
Part III: Installing the Needed Software for the Web-Scale Scenario
Basically launch a terminal window from any computer, and enter:
$ ssh -i mykeypair.pem ubuntu@<instance_ip>
On this particular image, the 'ubuntu' user has been set up as part of the sudoers group, so you can escalate to 'root' via the following command:
$ sudo -i
The WordPress package will extract into a folder called wordpress in the same directory where you downloaded latest.tar.gz. Next, enter "exit" and disconnect from this SSH session.
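For reference, the download and extraction step mentioned above typically looks like this on the instance (the wordpress.org URL is the standard location of the latest release tarball):

$ wget http://wordpress.org/latest.tar.gz
$ tar -xzf latest.tar.gz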
18. Support
Online resources aid in supporting OpenStack, and community members are willing and able to answer questions and help with suspected bugs. We are constantly improving and adding to the main features of OpenStack, but if you have any problems, do not hesitate to ask. Here are some ideas for supporting OpenStack and troubleshooting your existing installations.
Community Support
Here are some places you can locate others who want to help.
More is being added all the time, so be sure to check back often. You can find the search box in the upper right hand corner of any OpenStack wiki page.
have a DHCP server, and an ami-tiny image doesn't support interface injection, so you cannot connect to it. The fix for this type of problem is to use an Ubuntu image, which should obtain an IP address correctly with FlatManager network settings. To troubleshoot other possible problems with an instance, such as one that stays in a spawning state, first check your instances directory for the i-ze0bnh1q directory to make sure it has the following files:
libvirt.xml
disk
disk-raw
kernel
ramdisk
console.log (once the instance actually starts you should see a console.log)
Check the file sizes to see if they are reasonable. If any are missing, zero-length, or very small, then nova-compute has somehow not completed the download of the images from the objectstore. Also check nova-compute.log for exceptions; sometimes they don't show up in the console output. Next, check the /var/log/libvirt/qemu/i-ze0bnh1q.log file to see if it exists and has any useful error messages in it. Finally, from the instances/i-ze0bnh1q directory, try virsh create libvirt.xml and see if you get an error there.
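A hedged sketch of those checks, assuming the default /var/lib/nova/instances and /var/log/nova locations:

$ ls -l /var/lib/nova/instances/i-ze0bnh1q/
$ grep -i error /var/log/nova/nova-compute.log | tail
$ less /var/log/libvirt/qemu/i-ze0bnh1q.log
$ cd /var/lib/nova/instances/i-ze0bnh1q && sudo virsh create libvirt.xml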
You can also use the --active parameter to force the instance back into an active state instead of an error state, for example:
$ nova reset-state --active c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
Note
The version of the nova client that ships with Essex on most distributions does not support the reset-state command. You can download a more recent version of the nova client from PyPI. The package name is python-novaclient, which can be installed using a Python package tool such as pip.