Nova V2.1 API Plans in Kilo

Completing port of code to V2.1 API

With Juno now closed to new features, we’ve started looking at what we plan to do in the Kilo development cycle. As mentioned in a previous post, most of the V2 API had been implemented in V2.1 with the exception of the networking APIs. The first and highest priority will be to complete the V2.1 API and verify that it is equivalent to the V2 API:

  • Finish porting the missing parts of the V2 API to V2.1. This is primarily networking support and a handful of patches which did not merge in the Juno cycle before feature freeze.
  • Continue tempest testing of the V2.1 API using the V2 API tempest tests. Testing so far has already found some bugs and there will be some work to ensure we have adequate coverage of the V2 API.
  • Encourage operators to start testing the V2.1 API so we have further verification that V2.1 is equivalent to the V2 API. It should also give us a better feel for how much impact strong input validation will have on current users of the V2 API.


Support for making both backwards-compatible and non-backwards-compatible changes using microversions is probably the second most important Nova API feature to be developed in Kilo. Microversions work by allowing a client to request a specific version of the Nova API. Each time a change is made to the Nova API that is visible to a client, the version of the API is incremented. This allows the client to detect when new API features are available, and to control when to adapt their programs to backwards-incompatible changes. By default, when a client makes a request of the V2.1 API it will behave as the V2 API. However, if it supplies a header like:

X-OS-Compute-Version: 2.214

then the V2.1 API code will behave as the 2.214 version of the API. There will also be the ability to query the server about which versions are supported. Although there was broad support for using a microversions technique, the community was unable to come to a consensus on the details of how microversions would be implemented. A high priority early in the Kilo cycle will be to get agreement on the implementation details of microversions. In addition to the development work required in the Nova API to support microversions, we will also need to add microversion functionality to novaclient.
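The client side of this might be sketched as follows. This is a hypothetical illustration only: the header name is taken from the example above, the function names are my own, and the real mechanics are still being agreed on.

```python
# Hypothetical sketch only: the header name follows the example in this
# post and all function names are invented; the real microversion
# mechanics were still under discussion at the time of writing.
DEFAULT_VERSION = (2, 1)  # with no header, V2.1 behaves like the V2 API

def request_headers(version=None):
    """Build the headers a client might send to pin an API microversion."""
    headers = {"Accept": "application/json"}
    if version is not None:
        headers["X-OS-Compute-Version"] = "%d.%d" % version
    return headers

def feature_available(required, server_version):
    """A client can detect new features by comparing version tuples."""
    return server_version >= required
```

A client would presumably first ask the server for its supported version range, then send the highest version it understands.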

API Policy Cleanup

The policy authorisation checks are currently spread between the API, compute and db layers. The deeper into the Nova internals the policy checks are made, the more work there is to unwind in the case of an authorisation failure. Since the Icehouse development cycle we have progressively been moving the policy checks from the lower layers of Nova up into the Nova API. The draft nova specification for this work is here.

The second part of the policy changes is to have a policy for each method. Currently the API gives operators fairly inconsistent control over how resources can be accessed. Sometimes permissions can be set on a per-plugin basis, and at other times at the granularity of individual methods. Rather than adding this flexibility to plugins on an ad hoc basis, we will be adding per-method policies to all methods on all plugins. The draft nova specification for this work is here.

Reducing technical debt in the V2/V2.1 API Code

Like many other areas of Nova, the API code has over time accumulated a reasonable amount of technical debt. The major areas we will look at addressing in Kilo are:

  • Maximise sharing of unittest code
  • General unittest code cleanup. Poorly structured unittests make it more difficult to add new tests as well as make it harder to debug test failures.
  • API samples test infrastructure improvements. The API samples tests are currently very slow to execute and there is significant overhead when updating them due to API changes. There are also gaps in test coverage, both in terms of the APIs covered and full verification of the responses for those that do have existing tests.
  • Documentation generation for the Nova API is currently a very manual and error-prone process. We need to automate as much of this process as possible, and can use the new jsonschema input validation to help do this.


The planning for the work that will be done in Kilo is still ongoing and the API team welcomes any feedback from users, operators and other developers. The etherpad where work items for Kilo can be proposed is here. Note that the focus of this etherpad is on infrastructure improvements to the API code, not new API features.

The Nova API team also holds meetings every Friday at 00:00 UTC in the #openstack-meeting channel on Freenode. Anyone interested in the future development direction of the Nova API is welcome to join.

Nova V2.1 API

Early in 2014 there was a very long discussion on the openstack-dev mailing list about the future of the Nova V3 API development. There were two main concerns. The first was the willingness and ability of users to port their applications from the V2 to the V3 API. The second was the level of maintenance required to keep two Nova APIs up to date, since it was becoming increasingly clear that we would not be able to deprecate the V2 API in only 2-4 cycles. As part of this discussion I wrote a document describing the problems with the V2 API and why the V3 API was developed. It also covered some ideas on how to minimise the dual maintenance overhead of supporting two REST APIs. This document describes most of the differences for clients between the V2 and V3 API.

During the Juno Design Summit, the development cycle and the Nova mid-cycle update, there were further discussions around these ideas.

Not long after, the community finally reached consensus on the first part of the work required to implement a V2.1 API, which is implemented using the original V3 API code. The details of the work being carried out in Juno are described in the nova specification.

In short, from a client point of view, the V2.1 API looks exactly the same as the original V2 API with the following exceptions:

  • Strong input validation. In many cases the V2 API code does not properly verify the data passed to it. This can lead to clients sending data to the REST API which is silently ignored because of a typo in the request. The V2.1 API, which relies primarily on jsonschema, is very strict about the data it will accept and will reject a request containing bad data. Client applications that have been sending invalid data to Nova will need to be fixed before they use the V2.1 API.
  • No XML support. The V2 XML API is not widely used and was marked as deprecated in Juno. The V2.1 API has no support for XML, only for JSON.
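The strong input validation point above can be illustrated with a toy example. This is not Nova code, and the key names are invented; it only shows the behavioural difference.

```python
# Toy illustration, not Nova code: how strict validation changes the
# handling of a typo'd key. The allowed key names here are invented.
ALLOWED_KEYS = {"name", "imageRef", "flavorRef"}

def v2_style(body):
    # Old behaviour: unknown keys (e.g. a typo like "falvorRef")
    # are silently dropped, so the client never learns of the mistake.
    return {k: v for k, v in body.items() if k in ALLOWED_KEYS}

def v21_style(body):
    # V2.1-style behaviour: unknown keys cause the whole request
    # to be rejected rather than partially honoured.
    unknown = set(body) - ALLOWED_KEYS
    if unknown:
        raise ValueError("invalid keys: %s" % ", ".join(sorted(unknown)))
    return dict(body)
```

In the silent case the typo'd flavour is simply never applied; in the strict case the client gets an immediate error and can fix the request.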

From an operator’s point of view:

  • The V2.1 API can be deployed simultaneously alongside the original V2 API code. By default the V2 API is exposed on /v2 and the V2.1 API on /v2.1/. This may make it easier for users to test and transition their applications over time, rather than all at once when the OpenStack software is upgraded. The V2.1 API is, however, not enabled by default in Juno.
  • The number of extensions has been reduced. A number of extensions in the original V2 code are dummy or minimalistic extensions which were only added because adding a new extension was the only way to signal to a client that new functionality was available. In these cases the V2.1/V3 API code removes the extra extension, incorporates the newer functionality into the original extension, and enables it by default. Note that from the perspective of clients, they still see the extra extensions if the functionality is enabled, so no changes are required on the client side.

Because of the late acceptance of the V2.1 specification we have not been able to merge all of the required patches to implement the V2.1 API in Juno. However, most of the equivalent of the V2 API is supported, with the exception of networking. It is expected that the remaining patches will be completed soon after Kilo opens. I will cover the V2.1 work and discussions on how we plan to handle backwards-incompatible API changes in a future article.

Nova API extensions not to be ported to V3

Some of the API extensions which exist in the Nova V2 API are not being ported to the V3 API. The general guideline is that if the extension is just acting as a proxy for another OpenStack service and that same functionality can be requested by the client to that other service, then the extension is not required for V3. An example of this is the os-volumes extension where identical functionality can be requested directly through Cinder.

The list of extensions proposed not to be ported to the V3 API are:

  • (except for servers extension part)
  • (except for volume attach part)

If you believe that any of these extensions really do need to be ported to the V3 API then please send an email to the OpenStack development mailing list. There is an existing discussion thread on the mailing list here.

The canonical list of the extensions which will be ported and those which won’t is on the OpenStack Etherpad.

Swift Authentication using Keystone

I’m rather new to OpenStack and have had some difficulty understanding how authentication to Swift using Keystone, along with the ACLs, works, so I thought I’d detail some of what I’ve learnt over the last couple of days. This is probably nothing new to more experienced OpenStack users/administrators but may be of some help to newbies. I have been working with the Essex release, so some of the information may be out of date if you try to use it with Folsom, which was released last week. Disclaimer: some of what is below may well be incorrect; please let me know if it is and I’ll update it. This is just what I’ve worked out from looking at various guides, forum posts, log files, code and experimentation.

This blog has some useful information about Swift. One part which was particularly useful was a clarification of terminology between Swift and Keystone which will help in translating between documentation for the different components.

  • A tenant in keystone is an account in swift.
  • A user in keystone is also a user in swift.
  • A role in keystone is a group in swift.

Note that if you are using Horizon to create users, then a project in horizon is equivalent to a tenant.

If you haven’t already done so then follow the instructions in the Essex administration guide to enable Swift authentication using Keystone.

Check that Keystone/Swift Authentication is working

In /etc/swift/proxy-server.conf you will have set operator_roles to something like:

operator_roles = admin, swiftoperator

A user must be in one of these roles in order to change the ACLs for containers belonging to a Swift account. Remember that in Keystone the role a user has is a per-tenant property, and a user does not necessarily have any role in a specific tenant. Specifically, if you create a user in Horizon and set which project (tenant) it belongs to, this does not add the user to a role for that tenant (AFAICT). You can add a user to a role for a specific tenant in keystone by doing the following (you will need the appropriate keystone privileges):

$ keystone user-role-add --user_id user-id --tenant_id tenant-id --role_id role-id

The user, tenant and role ids can be listed using the following keystone commands:

$ keystone user-list
$ keystone role-list
$ keystone tenant-list

In Essex you cannot view the roles a user is in using Horizon, but the following command will list them:

$ keystone role-list --user user-id --tenant tenant-id

At the end of the instructions to enable Keystone authentication for Swift is a command to verify that Swift is properly using Keystone for authentication.

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN stat

In the command above, -U admin:admin specifies the tenant and user information to pass to keystone at http://localhost:5000/v2.0 to retrieve a token, which is then sent in a second request to the Swift proxy server. The token is used by the server to determine if the user/tenant combination has permission to run the stat command. Because in the example above (and many other examples) the user and tenant have the same name there is some ambiguity, but note that the -U parameter is of the form -U keystone_tenant:keystone_user and not the other way around. Also, the parameter to -K should be the password for the user account, not some other key or id (which is used in some other examples around).

If the swift command fails to retrieve a token from the Keystone server, say because of an incorrect password, then you’ll see a response like this:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
Auth GET failed: http://localhost:5000/v2.0/tokens 401 Not Authorized

If it fails because the swift command successfully retrieved a token from Keystone, but the user specified is not in a role for the specified tenant with sufficient privileges to run the command (in this case, not in operator_roles), then you’ll see a response like this:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
Account HEAD failed: 403 Forbidden

or even this if the user and password are valid, but the tenant is not:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
There is no object-store endpoint on this auth server.

Note the URL in the error message, which indicates that the error occurred when attempting to connect to the Swift server and not the Keystone server. To run the stat command the user will need to be in a role listed in operator_roles in proxy-server.conf. If the command runs successfully you will see something like:

$ swift -V 2 -A http://localhost:5000/v2.0 -U admin:admin -K ADMIN_PASSWORD stat
Account: AUTH_0bd26c26d3ac42f2886d327d9c8249aa
Containers: 2
Objects: 1
Bytes: 104
Accept-Ranges: bytes
X-Trans-Id: tx993e0b8ad1ee43f6ae437292eef2da44

Returned is a summary of the information for the account (the tenant in Keystone-speak) in Swift.

Upload and download a file to Swift

Rather than having to specify the authentication URL, tenant, user and password every time, they can be set using environment variables:


Until the ACLs are set, only a user in a role listed in operator_roles will be able to upload or download a file, or list the containers or their contents. So first, as a user that has a role listed in operator_roles for the tenant, upload a file:

$ swift upload test_container test_file

The container specified will be created automatically if it does not exist. To download a file:

$ swift download test_container test_file

To list the containers for an account (tenant):

$ swift list

To list the contents of a container:

$ swift list test_container

Setting Swift ACLs

ACLs can only be set on containers, not on individual objects. In order to view the ACLs on a container you can stat the container with a user that has a role in operator_roles for that tenant:

$ swift stat test_container
Account: AUTH_0bd26c26d3ac42f2886d327d9c8249aa
Container: test_container
Objects: 1
Bytes: 0
Read ACL:
Write ACL:

Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: txf87368f68d2a46cb93e2141554328924

Note: If you stat a container with a user that does not have a role in the operator role list, but does have read privileges on the container it will show you empty ACLs even if they are not empty.

An empty ACL means that only a user with a role listed in operator_roles is able to read or write objects in the container, or list the objects in it. To set the read ACL so that users in the role example_role can download objects from a container:

$ swift post -r "example_role" test_container

Note that this will overwrite the current read ACL value. In order to set the read ACL so user Test1 in the account example_account (tenant in Keystone speak) can download objects:

$ swift post -r "example_account:Test1" test_container

It appears to be necessary to always specify an account/user combination. You cannot just specify an account (tenant) or a username. You can combine the above two examples by separating the ACLs with a ',' character:

$ swift post -r "example_role,example_account:Test1" test_container

Read privileges alone do not allow a user to list the contents of a container. To allow this add the .rlistings directive. This will allow any user with read privileges for the container to also retrieve an object list for the container.

$ swift post -r "example_role,example_account:Test1,.rlistings" test_container

Write privileges are managed in a similar manner, although the .rlistings directive is not required.

$ swift post -w "example_role" test_container
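The ACL strings used above follow a simple comma-separated grammar with three kinds of entry: a role, an account:user pair, and the .rlistings directive. A toy parser (my own sketch, not code from swift) makes the structure explicit:

```python
# Sketch only, not swift code: parse an ACL string of the form
# "role,account:user,.rlistings" into its three kinds of entry.
def parse_acl(acl):
    roles, users, rlistings = [], [], False
    for entry in acl.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if entry == ".rlistings":
            rlistings = True               # readers may also list objects
        elif ":" in entry:
            account, user = entry.split(":", 1)
            users.append((account, user))  # one specific account's user
        else:
            roles.append(entry)            # any user in this role
    return roles, users, rlistings
```

For example, "example_role,example_account:Test1,.rlistings" grants access to the role, to that one user, and allows readers to list the container.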

In my version of Swift, even if you have write privileges to a container, an error is returned about not having permission to create the container, even if it already exists. The file is, however, uploaded correctly. The latest definitions of ACLs for Swift are available in this document, although it doesn’t seem to quite match my experience. It could just be that it’s for the later release of Swift included in Folsom.

Filtering calls with Asterisk

I mentioned on Google+ that I get Asterisk to filter my calls during times when it’s inconvenient to answer the phone and someone asked me to post the details. I’m definitely not an Asterisk expert so there’s probably a better way of doing this.

The PSTN line is answered through a SPA3102 and it is configured not to automatically make the phone on the FXS port ring on incoming calls. There is an option in the advanced settings on the PSTN Line tab on the SPA3102 web config interface that allows you to do this:

Ring Thru Line 1: No

This means that the phone on the FXS port does not ring at all unless the call gets through the filtering in Asterisk and Asterisk tells it to ring.

Below is the relevant excerpt from the extensions.conf file.

; Whitelist various phone numbers
exten => s,n,GotoIf($["${CALLERID(number)}" = "0403XXXXXX"]?ring-all-phones,s,1)

; Check to see if we want to block all calls currently
exten => s,n,GotoIf($[${DB(phonecontrol/state)} = "block"]?out_of_hours,1)

; Check to see if its the right time period to accept calls
exten => s,n,GotoIfTime(9:00-23:00|mon-fri|*|*?ring-all-phones,s,1)
exten => s,n,GotoIfTime(11:00-21:00|sat-sun|*|*?ring-all-phones,s,1)

; Check to see if we want to accept all calls regardless of the time
exten => s,n,GotoIf($[${DB(phonecontrol/state)} = "accept"]?ring-all-phones,s,1)

exten => s,n,Goto(out_of_hours,1)

; Message about not accepting calls
exten => out_of_hours,1,Background(custom/out_of_hours)
exten => out_of_hours,n,WaitExten(5)
exten => out_of_hours,n,Goto(1)

; Ring phone anyway (1)
exten => 1,1,Goto(ring-all-phones,s,1)

; Leave voicemail (2)
exten => 2,1,VoiceMail(3000@default,u)
exten => 2,n,Hangup

The phonecontrol/state DB entry, which controls whether or not I want to override the time-based filtering of calls, is toggled through a web interface.

Controlling the house lighting via MQTT

The lights and some other electrical devices in our new house are controlled by a C-Bus system. Essentially this means that rather than the light switches switching the power to the lights directly, they instead sit on a bus which is connected to relays that control the power to individual lights. This makes it easy to have smart switches which can control multiple lights and do a series of tasks (e.g. dim some lights, pull down a projector screen, etc.). The most interesting part for me is that when we had the C-Bus system installed we also had an Ethernet interface module for the system installed, so we can talk to it directly from any of our other computers.

C-Gate is a program which mediates access to the C-Bus interface so multiple programs can access it simultaneously, and fortunately, although it was written for Windows, it’s written in Java and runs fine on Linux. The input/output format is not particularly nice for programmatic control, so I ended up writing some scripts that allow for synchronisation of state between the C-Gate server and an MQTT server.

I already use MQTT as a mechanism to communicate data about power usage in the house. Incidentally, I’m also now using the open source MQTT implementation Mosquitto, which for me has been a drop-in replacement for a proprietary version. MQTT can provide a nice uniform interface for apps which insulates them from the details of how data is transferred to and from backend systems. It avoids a bunch of work when the backends change.

I have one perl script which listens for state changes (for example caused by someone pressing a physical light switch) from the C-Bus system and updates the state in MQTT under a simple hierarchy:


And another one which listens for changes in a similar hierarchy in MQTT and sends those changes to the C-Bus system:


The same hierarchy is not used for both, to reduce the problem of race conditions and loops occurring. Light numbers are defined by the physical C-Bus setup.
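The topic layout might look something like the sketch below. The original listings were lost from this post, so only the set_state topic (which appears in the mosquitto_pub example later) is certain; the state topic name is my guess.

```python
# Sketch of the two MQTT hierarchies described above. Only the set_state
# topic is shown elsewhere in this post; the state topic name is a guess.
# Keeping commands and state reports in separate hierarchies means each
# bridge script can republish one side without re-triggering the other.
def command_topic(light_num):
    # Written to by controllers (web page, command line) to change a light
    return "lights/%d/set_state" % light_num

def state_topic(light_num):
    # Written to by the C-Bus bridge when the real light state changes
    return "lights/%d/state" % light_num
```

With a single shared hierarchy, the bridge would see its own republished messages and could end up in an update loop.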


This makes command line control of the lights very straightforward (as long as you know what number a light has been assigned):

mosquitto_pub -h stitch -t lights/<light_num>/set_state -m 255

but I wanted something a bit more user-friendly. So using a bit of javascript, php and a very useful, but slightly hacked version of phpMQTT, I put together a dodgy web page which shows the state of all the lights and exhaust fans in the house as well as allowing us to control them.


So what’s next on the list to work on?

  • Display the state of the lights and allow control of them through an image of the floorplan of the house
  • Add other inputs such as water and gas usage, which computers are currently on and being used, alarm sensors etc into MQTT
  • Add temperature and humidity sensors in all the rooms in the house as well as outside
  • Experiment with little agent programs that sit around monitoring the data from the MQTT server and try to do smart things – eg warn us when we leave lights or appliances on, perhaps even proactively turn them off, warn us when there has been an unusual pattern of electricity/gas/water usage, open windows when its too hot inside and the temperature outside has dropped below the inside temperature, etc

Wireless Ambient Orb

I’ve been tracking our household electricity usage live for a while. We have an LCD display, but it’s not something we remember to check very often to make sure that everything that should be turned off is off.

I noticed some cheap RGB LED strips on Deal Extreme and thought I’d make my own ambient orb. I dug out an old Arduino I wasn’t using and found some information from this site on how to control the strip using a Darlington array. I added a perl script to bridge from the microbroker, which receives the power usage information, translating it to a colour for the ambient orb to display.

Ambient Orb

At what is our normal minimum power usage the orb glows blue and as the power usage increases turns green, yellow, orange, and then red. This makes it pretty easy to see at a glance when leaving the house or going to bed if the household power usage is about right. After a bit of testing I added purple at the end for when Kelly turns on the kettle and the toaster at the same time :-)

I’ve been interested in playing with XBees for a while, so rather than get an 802.11b wifi shield for communication I bought an Arduino XBee shield and a couple of XBees. It turned out to be pretty easy to set up the XBees, and I think I’ll end up with a little mesh network at home with both sensors and display devices like ambient orbs.


I found some really cheap giant USB-driven plastic keys on eBay. They just light up white when pressed, but were easy to disassemble so I could put the LED strips and Arduino inside instead.




The white plastic does a better job of diffusing the LED light than the photo above shows.

Now Kelly wants an orb of her own, so I’m helping her make a smaller and cheaper version using an Arduino Pro Mini 328 instead of an Arduino Duemilanove.


About a week after planting the seeds we have little seedlings appearing :-) All the cucumber seeds have sprouted, as well as a couple of the tomato plants. So far there is no sign of life from the eggplant or cherry tomato seeds.

Apparently it’s a bit too cold to plant the seedlings outside yet, and warming the soil a bit can also help. So I’ve put down some black plastic where we’re planning on planting the seedlings when they’re ready. I dug some organic fertiliser into the ground and we picked up some pea straw for mulch, so we’re ready once the seedlings have matured enough to go outside.

Practicing photography

The weather forecast for Saturday was clear and sunny, so Kelly and I decided to take Alyssa out to see if we could get some good high-resolution outdoor photos of her. Most of the photos we have of her are low-resolution ones taken with our iPhones. So I got out my D70s and 50mm f/1.8 portrait lens and we headed out to Tusmore Park, near where I grew up. It’s a really nice green grassy park with a good playground, creek and tall trees.

By the time we had arrived at the park Alyssa had fallen asleep in her car seat, so we laid her down on the grass until she woke up. Although I took almost 200 photos, my favourite photo of the set was taken right near the beginning when she was still asleep on the grass.


I like it so much I’m thinking of getting a large canvas print done. When she woke up and realised she was in the park with a playground she was very happy! The lens has such a narrow depth of field that taking photos of her in focus while on the swing was quite difficult.


Same problem with the slide; although this action shot is not framed well, I love the expression it captured on her face.


I think this one would have been really nice if I’d rotated the camera 90 degrees like the one after it. I think she’s lit really well in these two photos, which might be because of the light-coloured pool floor reflecting light up from below her.



I think this one is pretty cute as we didn’t realise she was able to climb up steps that high:


Although it would have been much nicer if I’d framed it like the following which shows how tall the trees in the background are.


It turned out to be a lot cloudier than we expected, so the light wasn’t as nice as we were hoping for. I’m really pleased with how some of the photos turned out and I learned quite a bit, so next time there is a bright sunny day on the weekend we’ll be out at a park again.

Growing our own vegies

We’ve been thinking of growing some of our own vegetables for a while now, and this weekend we finally got around to buying some seeds. It’s still a bit cold to plant anything outside, but we’re using a couple of egg cartons in the kitchen window to start the seedlings, which should be ready when the weather warms up.

We have a row each of tomatoes, cucumber, cherry tomatoes and eggplant. If these sprout ok in a couple of weeks we’ll start another lot of the same.