Category Archives: OpenStack

Nova V2.1 API Plans in Kilo

Completing port of code to V2.1 API

With Juno now closed to new features we’ve started looking at what we plan to do in the Kilo development cycle. As mentioned in a previous post, most of the V2 API has been implemented in V2.1, with the exception of the networking APIs. The first and highest priority will be to complete the V2.1 API and verify that it is equivalent to the V2 API:

  • Finish porting the missing parts of the V2 API to V2.1. This is primarily networking support and a handful of patches which did not merge in the Juno cycle before feature freeze.
  • Continue tempest testing of the V2.1 API using the V2 API tempest tests. Testing so far has already found some bugs and there will be some work to ensure we have adequate coverage of the V2 API.
  • Encourage operators to start testing the V2.1 API so we have further verification that V2.1 is equivalent to the V2 API. It should also give us a better feeling for how much impact strong input validation will have on current users of the V2 API.

Microversions

Support for making both backwards compatible and non-backwards compatible changes using microversions is probably the second most important Nova API feature to be developed in Kilo. Microversions work by allowing a client to request a specific version of the Nova API. Each time a change is made to the Nova API that is visible to a client, the version of the API is incremented. This allows the client to detect when new API features are available, and control when they want to adapt their programs to backwards incompatible changes. By default, when a client makes a request of the V2.1 API it will behave as the V2 API. However, if it supplies a header like:

X-OS-Compute_Version: 2.214

then the V2.1 API code will behave as the 2.214 version of the API. There will also be the ability to query the server for the versions it supports. Although there was broad support for using a microversions technique, the community was unable to come to a consensus on the details of how microversions would be implemented. A high priority early in the Kilo cycle will be to get agreement on the implementation details of microversions. In addition to the development work required in the Nova API to support microversions, we will also need to add microversion functionality to novaclient.
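
As a sketch of the idea (using the example header above; the final implementation details are still to be agreed in Kilo), the server-side negotiation might look like:

```python
# Illustrative sketch only: the header name and version bounds follow
# the example above, and the final Kilo implementation may differ.
DEFAULT_VERSION = (2, 1)    # no header sent: behave like the V2 API
MAX_VERSION = (2, 214)      # newest microversion this server knows about

def negotiate_version(headers):
    """Return the (major, minor) API version a request should be served with."""
    raw = headers.get("X-OS-Compute_Version")
    if raw is None:
        return DEFAULT_VERSION
    major, minor = (int(part) for part in raw.split("."))
    requested = (major, minor)
    if requested > MAX_VERSION:
        raise ValueError("version %s is not supported" % raw)
    return requested
```

Legacy clients that send no header get V2-compatible behaviour, while newer clients can opt in to later semantics one version at a time.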

API Policy Cleanup

The policy authorisation checks are currently spread between the API, compute and db layers. The deeper into the Nova internals the policy checks are made, the more work there is to unwind in the case of an authorisation failure. Since the Icehouse development cycle we have been progressively moving the policy checks from the lower layers of Nova up into the Nova API. The draft nova specification for this work is here.

The second part of the policy changes is to have a policy for each method. Currently the API gives operators fairly inconsistent control over how resources can be accessed. Sometimes permissions can be set on a per-plugin basis, and at other times at individual-method granularity. Rather than add this flexibility to plugins on an ad hoc basis, we will be adding it to all methods on all plugins. The draft nova specification for this work is here.
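
The per-method idea can be sketched as a lookup keyed by one rule per method. The rule names and role lists below are invented for illustration and are not Nova’s actual policy file:

```python
# Hypothetical per-method policy table; rule names and roles are
# illustrative only, not Nova's real policy configuration.
POLICY = {
    "compute_extension:servers:show": ["admin", "member"],
    "compute_extension:servers:delete": ["admin"],
}

def is_authorized(rule, user_roles):
    """Return True if any of the user's roles satisfies the named rule."""
    allowed = POLICY.get(rule, [])
    return any(role in allowed for role in user_roles)
```

With one rule per method, an operator can, for example, let members show servers while reserving delete for admins, without touching the plugin code.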

Reducing technical debt in the V2/V2.1 API Code

Like many other areas of Nova, the API code has over time accumulated a reasonable amount of technical debt. The major areas we will look at addressing in Kilo are:

  • Maximise sharing of unittest code
  • General unittest code cleanup. Poorly structured unittests make it more difficult to add new tests as well as make it harder to debug test failures.
  • API samples test infrastructure improvements. The api sample tests are currently very slow to execute and there is significant overhead when updating them due to API changes. There are also gaps in test coverage, both in terms of APIs covered and full verification of the responses for those that do have existing tests.
  • Documentation generation for the Nova API is currently a very manual and error prone process. We need to automate as much of this process as possible and can use the new jsonschema input validation to help do this.
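
On the last point, a rough sketch of how jsonschema could drive documentation generation; the schema and output format here are illustrative only, not Nova’s actual tooling:

```python
def params_from_schema(schema):
    """Walk a jsonschema-style dict and produce (name, type, required)
    rows that could feed generated API documentation. A sketch only."""
    required = set(schema.get("required", []))
    return [
        (name, spec.get("type", "any"), name in required)
        for name, spec in sorted(schema.get("properties", {}).items())
    ]

# Example: a cut-down, hypothetical schema for a create-server request.
create_server = {
    "properties": {
        "name": {"type": "string"},
        "flavorRef": {"type": "string"},
    },
    "required": ["name"],
}
```

Because the same schema already drives input validation, documentation generated this way cannot drift out of sync with what the API actually accepts.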

Feedback

The planning for the work that will be done in Kilo is still ongoing and the API team welcomes any feedback from users, operators and other developers. The etherpad where work items for Kilo can be proposed is here. Note that the focus of this etherpad is on infrastructure improvements to the API code, not new API features.

The Nova API team also holds meetings every Friday at 00:00UTC in the #openstack-meeting channel on freenode. Anyone interested in the future development direction of the Nova API is welcome to join.

Nova V2.1 API

Early in 2014 there was a very long discussion on the openstack-dev mailing list about the future of the Nova V3 API development. There were two main concerns. The first was the willingness and ability of users to port their applications from the V2 to the V3 API. The second was the level of maintenance required to keep two Nova APIs up to date, since it was becoming increasingly clear that we would not be able to deprecate the V2 API in only 2-4 cycles. As part of this discussion I wrote a document describing the problems with the V2 API and why the V3 API was developed. It also covered some ideas on how to minimise the dual maintenance overhead of supporting two REST APIs. This document describes most of the differences for clients between the V2 and V3 API.

During the Juno design summit, the development cycle and the Nova mid cycle update there were further discussions around these ideas.

Not long after, the community finally reached consensus on the first part of the work required: a V2.1 API which is implemented using the original V3 API code. The details of the work carried out in Juno are described in the nova specification.

In short, from a client point of view, the V2.1 API looks exactly the same as the original V2 API with the following exceptions:

  • Strong input validation. In many cases the V2 API code does not properly verify the data passed to it. This can lead to clients sending data to the REST API which is silently ignored because there is a typo in the request. The V2.1 API, which relies primarily on jsonschema, is very strict about the data it will accept and will reject a request if it receives bad data. Client applications will need to be fixed before using the V2.1 API if they have been sending invalid data to Nova.
  • No XML support. The V2 XML API is not widely used and was marked as deprecated in Juno. The V2.1 API has no support for XML, only for JSON.
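
The effect of strict input validation can be sketched in a few lines. V2.1 actually achieves this with jsonschema (for example by disallowing additional properties); the field names below are illustrative only:

```python
def validate_strict(body, allowed_keys):
    """Reject any request containing keys outside the schema, instead
    of silently ignoring them as the V2 code often does. A sketch of
    the idea; the real V2.1 code uses jsonschema."""
    unknown = set(body) - set(allowed_keys)
    if unknown:
        raise ValueError("unexpected fields: %s" % ", ".join(sorted(unknown)))

# A well-formed request passes silently:
validate_strict({"name": "vm1", "imageRef": "abc"}, {"name", "imageRef"})
```

A typo such as "imageref" would be silently dropped by the V2 API, leaving the client with a server booted from the wrong image; the strict validator rejects the request outright instead.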

From an operator’s point of view:

  • The V2.1 API can be deployed alongside the original V2 API code. By default the V2 API is exposed on /v2 and the V2.1 API on /v2.1/. This may make it easier for users to test and transition their applications over time rather than all at once when the OpenStack software is upgraded. The V2.1 API is, however, not enabled by default in Juno.
  • The number of extensions has been reduced. A number of extensions in the original V2 code are dummy or minimal extensions which were only added because adding a new extension was the only way to signal to a client that new functionality was available. In these cases the V2.1/V3 API code removes the extra extension, incorporates the newer functionality into the original extension and enables it by default. From the perspective of clients the extra extensions are still visible if the functionality is enabled, so no changes are required on the client side.

Because of the late acceptance of the V2.1 specification we have not been able to merge all of the required patches to implement the V2.1 API in Juno. However, there is support for the equivalent of most of the V2 API, with the exception of networking. It is expected that the remaining patches will be completed soon after Kilo opens. I will cover the V2.1 work, and discussions on how we plan to handle backwards incompatible API changes, in a future article.

Nova API extensions not to be ported to V3

Some of the API extensions which exist in the Nova V2 API are not being ported to the V3 API. The general guideline is that if the extension is just acting as a proxy for another OpenStack service and that same functionality can be requested by the client to that other service, then the extension is not required for V3. An example of this is the os-volumes extension where identical functionality can be requested directly through Cinder.

The list of extensions proposed not to be ported to the V3 API are:

  • baremetal_nodes.py
  • cloudpipe.py
  • cloudpipe_update.py
  • createserverext.py
  • extended_virtual_interfaces_net.py
  • floating_ip_dns.py
  • floating_ip_pools.py
  • floating_ips_bulk.py
  • floating_ips.py
  • networks_associate.py
  • os_networks.py
  • os_tenant_networks.py
  • security_group_default_rules.py
  • security_group_rules.py (except for servers extension part)
  • virtual_interfaces.py
  • volumes.py (except for volume attach part)

If you believe that any of these extensions really does need to be ported to the V3 API then please send an email to the OpenStack development mailing list. There is an existing discussion thread on the mailing list here.

The canonical list of the extensions which will be ported and those which won’t is on the OpenStack Etherpad.

Swift Authentication using Keystone

I’m rather new to OpenStack and have had some difficulty understanding how authentication to Swift using Keystone, along with the ACLs, works, so I thought I’d detail some of what I’ve learnt over the last couple of days. This is probably nothing new to more experienced OpenStack users/administrators but may be of some help to newbies. I have been working with the Essex release, so some of the information may be out of date if you try to use it with Folsom, which was released last week. Disclaimer: some of what is below may well be incorrect; please let me know if it is and I’ll update it. This is just what I’ve worked out from looking at various guides, forum posts, log files, code and experimentation.

This blog has some useful information about Swift. One part which was particularly useful was a clarification of terminology between Swift and Keystone which will help in translating between documentation for the different components.

  • A tenant in keystone is an account in swift.
  • A user in keystone is also a user in swift.
  • A role in keystone is a group in swift.

Note that if you are using Horizon to create users, then a project in horizon is equivalent to a tenant.

If you haven’t already done so then follow the instructions in the Essex administration guide to enable Swift authentication using Keystone.

Check that Keystone/Swift Authentication is working

In /etc/swift/proxy-server.conf you will have set operator_roles to something like:

operator_roles = admin, swiftoperator

A user must be in one of these roles in order to change the ACLs for containers belonging to a Swift account. Remember that in Keystone the role that a user has is a per-tenant property, and that a user does not necessarily have any role in a specific tenant. Specifically, if you create a user in Horizon and set which project (tenant) it belongs to, this does not add the user to a role for that tenant (AFAICT). You can add a user to a role for a specific tenant in keystone by doing the following (you will need the appropriate keystone privileges):

$ keystone user-role-add --user_id user-id --tenant_id tenant-id --role_id role-id

The user, tenant and role ids can be listed using the following keystone commands:

$ keystone user-list
$ keystone role-list
$ keystone tenant-list

In Essex you cannot view the roles a user is in using Horizon, but the following command will list them:

$ keystone role-list --user user-id --tenant tenant-id

At the end of the instructions to enable Keystone authentication for Swift is a command to verify that Swift is properly using Keystone for authentication.

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN stat

In the command above, -U admin:admin specifies the tenant and user information to pass to keystone at http://localhost:5000/v2.0 to retrieve a token, which is then sent in a second request to the Swift proxy server. The token is used by the server to determine if the user/tenant combination has permission to run the stat command. Because in the example above (and many other examples) the user and tenant have the same name there is some ambiguity, but note that the -U parameter is of the form -U keystone_tenant:keystone_user and not the other way around. Also, the parameter to -K should be the password for the user account, not some other key or id (which is used in some other examples around).
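
To make the two-step flow concrete, the JSON body POSTed to Keystone’s /v2.0/tokens endpoint looks roughly like the sketch below. Note how the tenant and the user are separate fields, matching the tenant:user order of -U:

```python
import json

def token_request_body(tenant, user, password):
    """Build the JSON body sent to Keystone's /v2.0/tokens endpoint
    for the credentials given by -U tenant:user -K password.
    A sketch of the request format, not the swift client's code."""
    return json.dumps({
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": user,
                "password": password,
            },
        }
    })
```

The token Keystone returns is then sent to the Swift proxy in an X-Auth-Token header on the second request.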

If the swift command fails to retrieve a token from the Keystone server, say because of an incorrect password, then you’ll see a response like this:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
Auth GET failed: http://localhost:5000/v2.0/tokens 401 Not Authorized

If it fails because the swift command successfully retrieved a token from Keystone but the specified user is not in a role for the specified tenant with sufficient privileges to run the command (in this case, not in operator_roles), then you’ll see a response like this:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
Account HEAD failed: http://127.0.0.1:8080/v1/AUTH_0bd26c26d3ac42f2886d327d9c8249aa 403 Forbidden

or even this if the user and password are valid, but the tenant is not:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
There is no object-store endpoint on this auth server.

Note the URL in the error message, which indicates that the error occurred when attempting to connect to the Swift server, not the Keystone server. To run the stat command the user will need to be in a role listed in operator_roles in proxy-server.conf. If the command runs successfully you will see something like:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
Account: AUTH_0bd26c26d3ac42f2886d327d9c8249aa
Containers: 2
Objects: 1
Bytes: 104
Accept-Ranges: bytes
X-Trans-Id: tx993e0b8ad1ee43f6ae437292eef2da44

Returned is a summary of the information for the account (the tenant in Keystone-speak) in Swift.

Upload and download a file to Swift

Rather than having to specify the authentication URL, tenant, user and password every time, they can be set using environment variables:

OS_AUTH_URL=http://localhost:5000/v2.0
OS_USERNAME=<username>
OS_TENANT_NAME=<tenant_name>
OS_PASSWORD=<password>

or, equivalently, using the ST_* variables:

ST_AUTH=http://localhost:5000/v2.0
ST_USER=<tenant_name>:<username>
ST_KEY=<user_password>

Until the ACLs are set, only a user in a role listed in operator_roles will be able to upload or download a file, or list the containers or their contents. So first, as a user that has a role for the tenant in operator_roles, upload a file:

$ swift upload test_container test_file
test_file

The container specified will be created automatically if it does not exist. To download a file:

$ swift download test_container test_file
test_file

To list the containers for an account (tenant):

$ swift list
test_container

To list the contents of a container:

$ swift list test_container
test_file

Setting Swift ACLs

ACLs can only be set on containers, not on individual objects. In order to view the ACLs on a container you can stat the container with a user that has a role in operator_roles for that tenant:

$ swift stat test_container
Account: AUTH_0bd26c26d3ac42f2886d327d9c8249aa
Container: test_container
Objects: 1
Bytes: 0
Read ACL:
Write ACL:

Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: txf87368f68d2a46cb93e2141554328924

Note: if you stat a container with a user that does not have a role in the operator role list but does have read privileges on the container, it will show you empty ACLs even if they are not empty.

An empty ACL means that only a user with a role in the operator role list is able to read or write objects in the container or list the objects it contains. To set the read ACL so that users in the role example_role can download objects from a container:

$ swift post -r "example_role" test_container

Note that this will overwrite the current read ACL value. To set the read ACL so that user Test1 in the account example_account (tenant, in Keystone-speak) can download objects:

$ swift post -r "example_account:Test1" test_container

It appears to be necessary to always specify an account/user combination; you cannot just specify an account (tenant) or a username. You can combine the above two examples by separating the ACLs with a ‘,’ character:

$ swift post -r "example_role,example_account:Test1" test_container

Read privileges alone do not allow a user to list the contents of a container. To allow this add the .rlistings directive. This will allow any user with read privileges for the container to also retrieve an object list for the container.

$ swift post -r "example_role,example_account:Test1,.rlistings" test_container
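
A small parser sketch (mirroring the string format described above, not Swift’s internal code) shows how such an ACL value breaks down into its three kinds of elements:

```python
def parse_acl(acl):
    """Split a Swift container read ACL string into role names,
    (account, user) pairs and directives such as .rlistings.
    A sketch of the string format only, not Swift's implementation."""
    roles, users, directives = [], [], []
    for element in acl.split(","):
        element = element.strip()
        if element.startswith("."):
            directives.append(element)      # e.g. .rlistings
        elif ":" in element:
            account, user = element.split(":", 1)
            users.append((account, user))   # account:user pair
        elif element:
            roles.append(element)           # bare role name
    return roles, users, directives
```

Keeping the three kinds of elements straight makes it easier to predict what a given ACL actually grants: roles match any user holding that role, pairs match one specific user, and directives modify behaviour.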

Write privileges are managed in a similar manner, although the .rlistings directive is not required.

$ swift post -w "example_role" test_container

In my version of Swift, even if you have write privileges on a container, an error is returned about not having permission to create a container for the file, even if the container already exists. The file is, however, uploaded correctly. The latest definitions of ACLs for Swift are available in this document, although it doesn’t seem to quite match my experiences. It could just be that it’s for the later release of Swift included in Folsom.