2016-01-03

sphinx: ignoring sys.path?

I'm a big fan of using Sphinx for documentation and I use it for my Pyslet Python module. For some time now I've had an infuriating problem with generating documentation that I've been putting off solving on the assumption that it is caused by some weird and hard-to-discover configuration issue.

The problem is that whenever I run sphinx to build my package documentation it uses the currently installed version of the package and not the local copy I'm working on. This is particularly annoying because I have various versions of Python installed on my Mac (going back some years) and sphinx always seems to run with the native Python that came with XCode (Python 2.7.10 at the time of writing) whereas typing "python" at the terminal uses one of my older custom builds. I guess I'd always thought there was something weird going on and just relied on the workaround which was to run:

$ /usr/bin/python setup.py install

...every time I wanted to rebuild the docs. That forced the latest version of the package to be installed into the Python interpreter that was going to be run by sphinx.

I had already edited my sphinx conf.py file as follows:

sys.path.insert(0, os.path.abspath('..'))

That should have been enough to put the path to the working copy ahead of anything installed in site-packages in the search order but, alas, it just didn't seem to work. I checked that sys.path was correct; I even started the interpreter on the command line to ensure the working copies were getting loaded after this modification. They were. It seemed inexplicable!

I finally solved the mystery today, and thought I'd blog the answer in case anyone else does the same stupid thing I did.

Earlier today I added a new module to my package and building the docs gave me an ImportError. Previously I'd put it down to some strange site-specific import hook interfering with sys.path, but an ImportError for a module that definitely existed in my working copy suggested that sys.path was being ignored completely. I got so frustrated that I edited the autodoc.py module to check that sys.path was set correctly just before it calls __import__ and then I set a breakpoint to see if I could see what was happening. I tried setting sys.path to a single directory, the one with the working version of my package in it. Still the import brought up the version installed in site packages. How is this possible?

When you import a module with a name like "package.modA" you're actually (sort of) importing the package first and not just the module. As a result, if the package has already been imported and you execute "import package.modB", Python will ignore sys.path because it already knows where the package is. This was exactly what was going wrong for me...
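
You can see the caching in action from the interpreter. This is just a sketch, the package and directory names are made up for illustration:

import sys

import package.modA        # 'package' is located via the current sys.path

sys.path.insert(0, '/path/to/working/copy')
import package.modB        # sys.path is NOT consulted for 'package' again:
                           # Python reuses sys.modules['package'] and looks
                           # for modB inside the already-imported package

print(sys.modules['package'].__file__)   # still the originally imported copy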

The Solution

My package contains a trivial module called 'info' which contains a few strings that I use to describe the package both within the package itself and also in setup.py. At some point I must have gone in to the conf.py I use for Sphinx and optimised away some of the redundant text by importing the strings directly from my package...

#
# conf.py
# 
import pyslet.info as info

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))

# ... [snip]

# General information about the project.
project = info.title_name
copyright = info.copyright

Notice that conf.py imports my 'info' module before I change sys.path; as a result it gets imported from site packages and every future import from my pyslet package will come from site packages too. The solution is simple, and sanity has been restored:

#
# conf.py
# 

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
import pyslet.info as info

# ... [snip]

# General information about the project.
project = info.title_name
copyright = info.copyright

And now my documentation builds correctly from the working copy. I cannot believe that I'm the first person to trip over this issue. A bit of Googling (after the fact) reveals, for example, the CodeChat conf.py, which uses a similar technique to mine.

Happy New Year!

2015-10-11

Pyslet goes https

After months of being too busy to sort this out I have finally moved the Pyslet website to SSL. This is a quick post to explain how I've done this.

Firstly, I've wanted to do this for a while because I want to use the above website to host a web version of the QTI migration tool, and encouraging users to upload their precious assessment materials to a plain old HTTP URL would (and should) have proved a challenge. I saw an advert for free SSL certificates for open source projects from GlobalSign so in a rush of enthusiasm I applied and got my certificate. There's a checklist of rules that the site must comply with to be eligible (see previous link) which I'll summarise here:

  1. OSI license: Pyslet uses the BSD 3-Clause License: check!
  2. Actively maintained: well, Pyslet is a spare-time activity but I'm going to give myself a qualified tick here.
  3. Not used for commercial purposes: the Pyslet website is just a way of hosting demos of Pyslet in action, no adverts, no 'monetization' of any kind: check!
  4. Must get an A rating with GlobalSign's SSL Checker...

That last one is not quite as easy as you might think. Here's what I did to make it happen; I'll assume you have already done some openssl magic and have applied for and received your crt file.

  • Download the intermediate certificate chain file from GlobalSign here; the default one for SHA-256 Orders was the correct one for me.
  • Put the following files into /var/www/ssl (your location may vary):

    www.pyslet.org.key
    www.pyslet.org.crt
    globalsign-intermediate.crt

    The first one is the key I originally created with:

    # generate an encrypted 2048-bit RSA key
    openssl genrsa -des3 -out www.pyslet.org.key.encrypted 2048
    # create the certificate signing request to send to GlobalSign
    openssl req -new -key www.pyslet.org.key.encrypted -out www.pyslet.org.csr
    # strip the passphrase so Apache can read the key unattended
    openssl rsa -in www.pyslet.org.key.encrypted -out www.pyslet.org.key

    The second file is the certificate I got from GlobalSign themselves. The third one is the intermediate certificate I downloaded above.

  • Set permissions (as root):
    chown -R root:root /var/www/ssl/*.key
    chmod 700 /var/www/ssl/*.key
  • Add a virtual host to Apache's httpd.conf (suitable for Apache/2.2.31):
    Listen 443
    
    <VirtualHost *:443>
        ServerName www.pyslet.org
        SSLEngine on
        
        SSLCertificateFile /var/www/ssl/www.pyslet.org.crt
        SSLCertificateKeyFile /var/www/ssl/www.pyslet.org.key
        SSLCertificateChainFile /var/www/ssl/globalsign-intermediate.crt
        
        SSLCompression off
        SSLProtocol all -SSLv3 -SSLv2
        SSLCipherSuite AES128+EECDH:AES128+EDH    
        SSLHonorCipherOrder on
        
    #   Rest of configuration goes here....
    
    </VirtualHost>

This is a relatively simple configuration designed to get an A rating while not worrying too much about compatibility with really old browsers.
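
Before pointing the SSL Checker at the site it's worth a quick sanity check from the command line; something like this (the hostname here is mine, substitute your own):

apachectl configtest
openssl s_client -connect www.pyslet.org:443 -servername www.pyslet.org

The s_client output shows the certificate chain the server actually presents, which is the first thing the checker will complain about if the intermediate certificate is missing.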

2015-02-16

Accessing the ESA Sentinel Mission Data with Python and OData

I've had a couple of enquiries now about how to access the OData feeds on the ESA Sentinel mission science data hub. Sentinel 1 is the first of a new group of satellites in the Copernicus programme to monitor the Earth. That's about all I know, I'm afraid. This data is not pretty desktop pictures (though doubtless there are some pretty pictures buried in there somewhere) but raw scientific data from instruments currently orbiting the Earth.

The source code described here is available in the samples directory on GitHub. You must be using the latest Pyslet from master for this script to work: it relies on the metadata override technique described below.


The data hub advertises access to the data through OData (version 1) but my Python library, Pyslet, was not able to access the feeds properly: hence the enquiries.

Turns out that the data feeds use a concept called containment in OData. The model of OData is one of entity sets (think SQL tables) with relations between them modelled by navigation properties. There's one particular use case that doesn't work very well in this scenario but seems popular. Given an entity (think table row or record) people want to add arbitrary key-value pairs. The ESA's data model does this by creating 'sub-tables' which define collections of attributes that hang off each entity. The attribute name is the key in these collections. This doesn't really work in OData v1 (or v2) because these attribute values should still be entities in their own right and therefore they need a unique key and an entity set definition to contain them.
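
To make that concrete, each product hangs its metadata off a navigation property, so you end up with URLs something like these (illustrative only, the attribute name 'Satellite' is made up):

/dhus/odata/v1/Products('8bf64ff9-f310-4027-b31f-8e95dd9bbf82')/Attributes
/dhus/odata/v1/Products('8bf64ff9-f310-4027-b31f-8e95dd9bbf82')/Attributes('Satellite')

The attribute's name acts as its key within the collection, but OData v1 still expects every one of those attribute values to live in a single Attributes entity set with a globally unique key.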

This isn't the only schema I've seen that attempts to do something like this either; SAP have published a similar schema suggesting that some early Java tools exposed OData this way.

The upshot is that you get nasty errors when you try and load these services with Pyslet. It complains of a rather obscure condition concerning (possibly multiple) unbound principals. When I wrote that error message, I didn't expect anyone to ever actually see it.

There's a proper way to do containment in earlier versions of OData, described in Containment is Coming with OData v4 which explains how to use composite keys. As the name of the article suggests though, this is written with hindsight after a better solution has been found for this use case in OData v4.

The fix for the ESA data feed is to download and edit a copy of the advertised metadata to get around the errors reported by Pyslet and then to initialise your OData client using this modified schema instead. It isn't a perfect fix: as far as Pyslet knows those attributes really are unique and do reside in their own entity set, but it doesn't really matter for the purposes of using the OData client. You can navigate and formulate queries without tripping over data inconsistencies.
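
The core of the technique looks something like this. It's only a sketch from memory of the Pyslet API of the time (the metadata file name is illustrative, and I'm assuming the metadata keyword argument to LoadService that was added to master for this purpose); see the sample script itself for the working code:

import pyslet.odata2.client as client
import pyslet.odata2.metadata as edmx

# load the locally fixed-up copy of the service metadata
# (file name illustrative; the sample ships with one ready-made)
doc = edmx.Document()
with open('scihub_metadata.xml', 'rb') as f:
    doc.Read(f)

# initialise the client from the local metadata instead of the
# document advertised by the service itself
c = client.Client()
c.LoadService('https://scihub.esa.int/dhus/odata/v1/', metadata=doc)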

I've written a little script that I've added to Pyslet's sample code directory to illustrate the technique, along with a fixed up metadata file. The result is a little UNIX-style utility for downloading products from the ESA data hub:

$ ./download.py --help
Usage: download.py [options]

Options:
  -h, --help            show this help message and exit
  -u USER, --user=USER  user name for basic auth credentials
  -p PASSWORD, --password=PASSWORD
                        password for basic auth credentials
  -v                    increase verbosity of output up to 3x
  -c, --cert            download and trust site certificate

The data is available via https and requires a user name and password (you'll have to register on the data hub site but it's free to do so). To make it easier to set up the trust aspect I've added a -c option to download the site certificate and store it. If you don't have the site certificate you'll get an error like this:

ERROR:root:scihub.esa.int: closing connection after error failed to build secure connection to scihub.esa.int

Subsequent downloads verify that the site certificate hasn't changed: a bit like the way ssh offers to store a fingerprint the first time you connect to a remote host. Only use the -c option if you trust the network you are running on (you can use Firefox or some other 'trusted' browser to download the certificate too of course).

The password is optional, if you don't provide it you'll be prompted to enter it using Python's getpass function for privacy.
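
The prompting is nothing more exotic than the standard library's getpass module; for example:

import getpass

password = getpass.getpass("Password: ")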

You pass the product identifiers as command line arguments, here is an example of a successful first-time run:

$ ./download.py -c -u swl10 8bf64ff9-f310-4027-b31f-8e95dd9bbf82
Password: 
ERROR:root:Entity set Attributes has more than one unbound principal
dropping multiplicity of Attribute_Node to 0..1.  Continuing
ERROR:root:Entity set Attributes has more than one unbound principal
dropping multiplicity of Attribute_Product to 0..1.  Continuing
S1A_EW_GRDM_1SDH_20150207T084156_20150207T084218_004515_0058AE_3051 150751068

After running this command I had a scihub.esa.int.crt file (from the -c option) and a 150MB zip file downloaded to the current directory.

If you run with -vv to provide a bit more information you can see the OData magic in operation:

./download.py -vv -u swl10 8bf64ff9-f310-4027-b31f-8e95dd9bbf82
Password: 
INFO:root:Sending request to scihub.esa.int
INFO:root:GET /dhus/odata/v1/ HTTP/1.1
INFO:root:Connected to scihub.esa.int with DHE-RSA-AES256-SHA, TLSv1/SSLv3, key length 256
INFO:root:Finished Response, status 401
INFO:root:Resending request to: https://scihub.esa.int/dhus/odata/v1/
INFO:root:Sending request to scihub.esa.int
INFO:root:GET /dhus/odata/v1/ HTTP/1.1
INFO:root:Connected to scihub.esa.int with DHE-RSA-AES256-SHA, TLSv1/SSLv3, key length 256
INFO:root:Finished Response, status 200
WARNING:root:Entity set Attributes has an unbound principal: Nodes
WARNING:root:Entity set Attributes has an unbound principal: Products
ERROR:root:Entity set Attributes has more than one unbound principal
dropping multiplicity of Attribute_Node to 0..1.  Continuing
ERROR:root:Entity set Attributes has more than one unbound principal
dropping multiplicity of Attribute_Product to 0..1.  Continuing
INFO:root:Sending request to scihub.esa.int
INFO:root:GET /dhus/odata/v1/Products('8bf64ff9-f310-4027-b31f-8e95dd9bbf82') HTTP/1.1
INFO:root:Connected to scihub.esa.int with DHE-RSA-AES256-SHA, TLSv1/SSLv3, key length 256
INFO:root:Finished Response, status 200
S1A_EW_GRDM_1SDH_20150207T084156_20150207T084218_004515_0058AE_3051 150751068
INFO:root:Sending request to scihub.esa.int
INFO:root:GET /dhus/odata/v1/Products('8bf64ff9-f310-4027-b31f-8e95dd9bbf82')/$value HTTP/1.1
INFO:root:Connected to scihub.esa.int with DHE-RSA-AES256-SHA, TLSv1/SSLv3, key length 256
INFO:root:Finished Response, status 200

As you can see, the fixed up metadata still generates error messages but these are no longer critical and the client is able to interact with the service.

I was given this product identifier as an example of something small to test with. I haven't researched what the data actually represents but the resulting zip file does contain a 'quick_look' image.

2015-01-19

Yosemite Spotlight issues with HP drivers: check your console

I recently imported a bunch of email into Outlook for OS X and was disappointed that I was unable to search its contents. Outlook uses Apple's native spotlight search so, in theory, all I need to do is wait for spotlight to churn through the new material and I should be done. Hours passed and nothing seemed to happen.

The first thing I tried was to simply force a re-index of my hard-disk. I just added my main drive to the list of places to exclude from spotlight searching and then after waiting a minute or two (for superstitious reasons) I removed that item again and sat back waiting for the inevitable slowdown as mdworker launches into life and starts scanning all my data.

Nothing.

The next step I took was to try to figure out how to see if Spotlight was actually doing any indexing at all. There's no simple control panel or dashboard view of the indexing process. The only way I could find was to press command-space and search for something. It should show the indexing progress-bar if it is indexing (but if it is complete you'll see nothing). I was still getting nowhere and now I'd lost the ability to search for anything.

Check the Console

In these situations it is always worth checking the console. I don't mean the terminal, just the console utility app that spools system messages onto your screen and allows you to see what is happening on your Mac. There's a handy search box (which doesn't use Spotlight!) at the top which filters the current day's logs. Putting just 'md' in that box was enough to filter out all the other stuff, enabling me to see a constant stream of output from Spotlight's indexing application: mdworker.

Here's a sample:

Jan 12 23:06:45 LernaBookPro com.apple.xpc.launchd[1] (com.apple.mdworker.bundles[24329]): Could not find uid associated with service: 0: Undefined error: 0 1422
Jan 12 23:06:45 LernaBookPro com.apple.xpc.launchd[1] (com.apple.mdworker.bundles): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.

You don't need to be an expert to see that this is some type of unexpected condition, and the second line tells me that the resulting process exited straight away. True to its word, every 10 seconds I got a pair of lines like this in my console log. Interestingly, the problem had been going on for some time; when I searched back through my logs I realised that Spotlight had probably not been indexing properly for ages. I have a feeling that things like new mail arriving in your inbox get indexed through some other mechanism and this can mask the fact that the long running indexer has not made a proper index of your hard disk. But that's just a hunch.

Check the web

Armed with this information I was able to find a thread on the internet that clearly dealt with the problem I was having: [...] com.apple.mdworker.bundles pollute logs with errors. Clearly the person who posted this thread didn't think (or perhaps realise) that Spotlight had actually failed and was chasing up unexpected slowness.

Interestingly, from this thread it is clear that the last number in the log entry is a numeric user id. In the case of the poster this was 502, which is typical of the range Apple uses for real users you add to your machine from the control panel. I think they start at 501. If you delete a user from your machine but leave lots of data lying around that is owned by that user then Yosemite seems to have trouble and kills the Spotlight indexer.

The user id causing trouble for me is 1422, which is outside of this range so although the remedy might be similar the origin of my problem is different. I put 1422 into my search and found this thread: 27 inch iMac suddenly running very slowly where the person erases their disk and re-installs (yeah, that fixed it!).

Now use the Terminal

With the clues in the first thread it seems like I need to find some files on my disk owned by user id 1422. The terminal can do that using the Unix find command:

cd /
# search the whole disk for files owned by the orphaned user id
sudo find . -user 1422

This type of thing takes ages, especially if you have a large backup drive.

The Culprit

Turns out that the files owned by user 1422 are all part of the HP printer drivers. These may have been inherited from a previous installation, I'm not sure, but either way there were files from HP in /Library and inside the application bundles themselves that were identified this way. I had to use chown -R to change those to root ownership instead (I'm not sure what they are supposed to be).
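
Something along these lines; the path here is purely illustrative, re-own whatever the find command turned up on your system:

# illustrative path only: re-own the HP driver files that find turned up
sudo chown -R root:wheel "/Library/Printers/hp"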

You'll also find a few files in /private/var/ with paths similar to:

/private/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/0/com.apple.Spotlight/1422

The exact names will be different but the critical thing is the end part, which is a directory created just for Spotlight. It is named after the missing user and it has its ownership set to this non-existent user. It is actually these latter files which are causing the problem: Spotlight is trying to create an index for that user but is surprised when it finds the user doesn't exist. Just removing this directory isn't enough though because Spotlight will re-index your disk and as soon as it finds a file owned by 1422 again it will create this folder in /private/var and grind to a halt again. You must remove or re-own everything that Spotlight might see: a real hassle if you have a large backup drive because of the way Time Machine works. I've solved that problem by just excluding my backup drive from Spotlight.

FWIW, unix systems are usually very tolerant of non-existent user ids. Many archive programs will restore files from other machines and systems and, if run as root, will update their ownership to match the original ownership before the files were archived. On networks of Unix workstations that share a user directory this is useful because you can tar up files on one machine and transfer them to another and all the ownership information comes across too. On a personal machine this is less useful and perhaps even dangerous, hence the '--insecure' option on tar.

I considered removing and reinstalling the HP software but I'm not convinced that it isn't a problem with the installer itself. It works fine on the machine I upgraded from 10.8, through 10.9, to 10.10 (after these fixes) but I noticed that on a different machine, one that came with Mavericks and was upgraded to Yosemite, I had to re-install the driver even though I used the migration assistant to set it up from its predecessor (running 10.8), which did have the HP drivers installed. I struggled to find the right download for my HP Officejet 6310 and perhaps now I know why!

Restart

I couldn't find any way of telling mdworker to give up on user id 1422. Removing its special directory didn't seem to help, so I assume that somewhere a process has a cache of that information and the only way I could figure to get it going again was a restart.

Conclusion

If you are using Yosemite and have an HP printer or scanner check your console just in case Spotlight has died for you too. Do battle with the terminal. Restart. Enjoy Spotlight indexing and smoother performance from your Mac.

2014-11-24

LTI Tools: Putting Cookies in the Frame

Last year I ran into a problem with IMS LTI in some versions of Internet Explorer. It turns out that my LTI tool was assuming it was OK to save cookies in the browser but IE was assuming it wasn't. Chuck has written about this briefly on his blog and the advice given there is basically now best practice and documented in the specification itself. See LTI, Frames and Cookies – Oh MY!

So I just spent the day trying to implement something like this ready for my QTI migration tool to become an LTI tool rather than a desktop application and it was much harder than I thought it would be.

Firstly, why is this happening? Well, if your tool is launched in a frame inside the tool consumer then you are probably falling into the definition of 'third party' as far as cookies are concerned. There is clearly a balance to be struck between privacy and utility here. If you think about it, framed pages probably know who framed them, so something innocuous like an advert for an online store could use cookies to piece together a browser history of every site you visited that included their ad. That's why, when you visit your favourite online store after browsing some informational site to do product research, the store seems to already know what you want to buy.

Cynics will see a battle between companies like Google and Amazon who make money by encouraging you to find products online (and buy them) and Microsoft who are more into the business of selling software and services, especially to businesses who may not appreciate having their purchasing department gamed. Perhaps it is no wonder that Amazon found themselves in court in Seattle having to defend their technical implementation of P3P.

It turns out that IE can/could be persuaded to accept your tool's cookie if your tool publishes a privacy policy which IE isn't able to parse. I don't want to appear too dismissive but I simply cannot understand how decisions about privacy can be codified into policies that computers can then read and execute with any degree of usefulness. Stackoverflow has some advice on how to do it 'properly' if you are that way inclined, however.


For the rest of us, we're going to have to get used to the idea that cookies may not be accepted and follow Chuck's advice to work around the issue by opening a new window. It isn't just IE now anyway; by default Safari will block cookies in this situation too. Yes, the solution is horrible, but how horrible?

Some time ago I wrote some Python classes to help implement basic LTI. That felt like a good starting point for my new LTI tool. Here's Safari, pointing at the IMS LTI test harness which I've just primed with the URL of my tool: http://localhost:8081/launch

In the next shot, I've scrolled down to where the action is going to take place. I've opened Safari's developer tools so we can see something of what is happening with cookies.

So what happens when I hit "Press to Launch"? It all happens rather quickly but the frame POSTs to my launch page, which then redirects to a cookie test page. That page realises that the cookies didn't stick and shows a form which autosubmits, popping up a new window with my tool content in it. The content is not very interesting but here's how my browser looked immediately after.

There's a real risk that this window will be blocked by a pop-up blocker. Indeed, Chrome does seem to block this type of pop-up window. In fact, Chrome allows the original frame to use cookies, so on that browser the pop-up doesn't need to fire; I only tripped the blocker during simulated tests. Still, it is clear that you do need to consider that LTI tools may neither have access to cookies nor be able to pop up a new window automatically. To cover this case my app has a button that allows you to open the window manually instead. If I change tabs back to the Tool Consumer page you'll see how my pop-up sequence has left the original page/frame.

Notice that Safari now shows my two cookies. These cookies were set by the new window, not by the content of this frame, even though they show up in this view. They show me that my tool has not only displayed its content but has successfully created a cookie-trackable session.


The code required to make this happen is more complex than I thought. I've tried to represent my session construction logic in diagrammatic form. Each box represents a request. The top line contains the incoming cookie values ('-' means missing) and the page name. The next line indicates the values of some key query parameters used to pass Session information, Return URL and whether or not a pop-up Window has been opened. The final line shows the outgoing cookies. If you follow the right-hand path from top to bottom you see a hit to the home page of the tool (aka the launch page) with no cookies. It sets the values 0/A and redirects to the test page, which confirms that cookies work, outputs a new session identifier (B) and redirects to the content. The left-turns in the sequence show the paths taken when cookies that were set are blocked by the browser.

Why does the session get rewritten from value A to B during the sequence? Just a bit of paranoia. Exposing session identifiers in URLs is considered bad form because they can be easily leaked. Having taken the risk of passing the initial session identifier through the query string we regenerate it to prevent any form of session hijacking or fixation attacks. This is quite a complex issue and is related to other weaknesses such as cross-site request forgery. I found Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet very useful reading!
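
To give a flavour of the logic without the full code, here is a minimal sketch of the launch and cookie-test steps. It is framework-agnostic pseudo-Python: the request/response objects, handler names and template name are all made up for illustration, and the real code deals with many more cases:

import uuid

def new_session_id():
    # illustrative: a random, unguessable identifier
    return uuid.uuid4().hex

def rotate_session(old_sid):
    # illustrative: issue a replacement identifier, invalidating the old one
    return uuid.uuid4().hex

def launch(request, response):
    # the POST from the tool consumer arrives here, probably in a frame
    sid_a = new_session_id()                 # value 'A' in the diagram
    response.set_cookie('sid', sid_a)
    # redirect to the test page, passing the session in the query string
    # because we don't yet know whether the cookie will stick
    response.redirect('/ctest?sid=' + sid_a)

def ctest(request, response):
    if request.get_cookie('sid') == request.query['sid']:
        # cookies work: regenerate the session (A becomes B) because the
        # identifier was exposed in the URL and could have been leaked
        sid_b = rotate_session(request.query['sid'])
        response.set_cookie('sid', sid_b)
        response.redirect('/content')
    else:
        # cookies blocked: return a page with an auto-submitting form that
        # opens the tool in a new window, plus a manual button in case a
        # pop-up blocker intervenes
        response.render('window_popup.html', sid=request.query['sid'])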


I will check the code for this into my Pyslet GitHub project shortly. If you want a preview please get in touch or comment on this post. The code is too complex to show in the blog post itself.

2014-11-14

Basic Authentication, SSL and Pyslet's HTTP/OData client

Pyslet is my Python package for Standards in Learning Education and Training and represents a packaging up of the core of my QTI migration script code in a form that makes it easier for other developers to use. Earlier this year I released Pyslet to PyPi and moved development to Github to make it easier for people to download, install and engage with the source code.

Note: this article updated 2017-05-24 with code correction (see comments for details).

Warning: The code in this article will work with the latest Pyslet master from Github, and with any distribution on or later than pyslet-0.5.20141113. At the time of writing the version on PyPi has not been updated!

A recent issue that came up concerns Pyslet's HTTP client. The client is the base class for Pyslet's OData client. In my own work I often use this client to access OData feeds protected with HTTP's basic authentication but I've never properly documented how to do it. There are two approaches...

The simplest way, and the way I used to do it, is to override the client object itself and add the Authorization header at the point where each request is queued.

from pyslet.http.client import Client

class MyAuthenticatedClient(Client):

    def __init__(self, credentials=None, **kwargs):
        # store some credentials in the client
        super(MyAuthenticatedClient, self).__init__(**kwargs)
        self.credentials = credentials

    def queue_request(self, request):
        # add in the authorization credentials
        if (self.credentials is not None and
                not request.has_header("Authorization")):
            request.set_header('Authorization',
                               str(self.credentials))
        # always queue the request, with or without credentials
        super(MyAuthenticatedClient, self).queue_request(request)

This works OK but it forces the issue a bit and will result in the credentials being sent to all URLs, which you may not want. The credentials object should be an instance of pyslet.http.auth.BasicCredentials which takes care of correctly formatting the header. Here is some sample code to create that object:

from pyslet.http.auth import BasicCredentials
from pyslet.rfc2396 import URI

credentials = BasicCredentials()
credentials.userid = "user@example.com"
credentials.password = "secretPa$$word"
credentials.protectionSpace = URI.from_octets(
    'https://www.example.com/mypage').get_canonical_root()

With the above code, str(credentials) returns the string: 'Basic dXNlckBleGFtcGxlLmNvbTpzZWNyZXRQYSQkd29yZA==' which is what you'd expect to pass in the Authorization header.
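
There's no magic in that string; per RFC 2617 it is just the base64 encoding of "userid:password", which you can check for yourself:

import base64

print(base64.b64encode(b"user@example.com:secretPa$$word"))
# dXNlckBleGFtcGxlLmNvbTpzZWNyZXRQYSQkd29yZA==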

To make this code play more nicely with the HTTP standard I added some core-support to the HTTP client itself, so you don't need to override the class anymore. The HTTP client now has a credential store and an add_credentials method. Once added, the following happens when a 401 response is received:

  1. The client iterates through any received challenges
  2. Each challenge is matched against the stored credentials
  3. If matching credentials are found then an Authorization header is added and the request resent
  4. If the request receives another 401 response the credentials are removed from the store and we go back to (1)

This process terminates when there are no more credentials that match any of the challenges or when a code other than 401 is received.
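
In use that looks something like this; a sketch assuming the credentials object created above (process_request blocks until the request, including any 401 round-trip, is complete):

from pyslet.http.client import Client, ClientRequest

client = Client()
client.add_credentials(credentials)
request = ClientRequest('https://www.example.com/mypage')
client.process_request(request)   # any 401 challenge is handled here
print(request.response.status)    # 200 if the credentials matched
client.close()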

If the matching credentials are BasicCredentials (and that's the only type Pyslet supports out of the box!), then some additional logic gets activated on success. RFC 2617 says that for basic authentication, a challenge implies that all paths "at or deeper than the depth of the last symbolic element in the path field" fall into the same protection space. Therefore, when credentials are used successfully, Pyslet adds the path to the credentials using BasicCredentials.add_success_path. Next time a request is sent to a URL on the same server with a path that meets this criterion the Authorization header will be added pre-emptively.

If you want to pre-empt the 401 handling completely then you just need to add a suitable path to the credentials before you add them to the client. So if you know your credentials are good for everything in /website/~user/ you could continue the above code like this:

credentials.add_success_path('/website/~user/')

That last slash is really important: if you leave it off it will add everything in '/website/' to your protection space, which is probably not what you want.

SSL

If you're going to pass basic auth credentials around you really should be using https. Python makes it a bit tricky to use HTTPS and be sure that you are using a trusted connection. Pyslet tries to make this a little bit easier. Here's what I do:

  1. With Firefox, go to the site in question and check that SSL is working properly
  2. Export the certificate from the site in PEM format and save to disk, e.g., www.example.com.crt
  3. Repeat for any other sites I want my python script to work with.
  4. Concatenate the files together and save them to, say, 'certificates.pem'
  5. Pass this file name to the HTTP (or OData) client constructor:

from pyslet.http.client import Client

my_client = Client(ca_certs='certificates.pem')
my_client.add_credentials(credentials)

In this code, I've assumed that the credentials were created as above. To be really sure you are secure here, try grabbing a file from a different site or, even better, generate a self-signed certificate and use that instead. (The master version of Pyslet currently has such a certificate ready made in unittests/data_rfc2616/server.crt). Now pass that file for ca_certs and check that you get SSL errors! If you don't, something is broken and you should proceed with caution, or you may just be on a Mac (see notes in Is Python's SSL module correctly validating certificates... for details). And don't pass None for ca_certs as that tells the ssl module not to check at all!

If you don't like messing around with the certificates, and you are using a machine and network that is pretty trustworthy and from which you would happily do your internet banking then the following can be used to proxy for the browser method:

import ssl

import pyslet.rfc2396 as uri

certs = []
for s in ('https://www.example.com', 'https://www.example2.com', ):
    # add other sites to the above tuple as you like
    url = uri.URI.from_octets(s)
    certs.append(ssl.get_server_certificate(url.get_addr(),
                 ssl_version=ssl.PROTOCOL_TLSv1))

# write the collected certificates out once, after the loop
with open('certificates.pem', 'wb') as f:
    f.write(''.join(certs))

Passing the ssl_version is optional above but the default setting in many Python installations will use the discredited SSLv3 or worse and your server may refuse to serve you (I know mine does!). Set it to a protocol you trust.

Remember that you'll have to do this every so often because server certificates expire. You can always grab the certificate authority's certificate instead (and thereby trust a whole slew of sites at once) but if you're going that far then there are better recipes for finding and re-using the built-in machine certificate store anyway. The beauty of this method is that you can self-sign a server certificate you trust and connect to it securely with a Python client without having to mess around with certificate authorities at all, provided you can safely courier the certificate from your server to your client that is! If you are one of the growing number of people who think the whole trust thing is broken anyway since Snowden then this may be an attractive option.

With thanks to @bolhovsky on Github for bringing the need for this article to my attention.

2014-11-04

VMWare Fusion, Windows 8 and British Keyboards

I've been running VMWare Fusion for years to run a dual monitor setup with one screen Mac and the other screen 'PC'.  I recently upgraded to VMWare Fusion 7 and my Windows 7 virtual machine kept working just fine.  However, I recently had to start from scratch with a new VM to install Windows 8.1.  I realise that there may be upgrade routes via Windows 8 but, after two hours on the phone with Microsoft technical support for an unrelated issue, I personally decided that a clean install would be the way to go.

The problem is, the keyboard just doesn't work properly in the default setup.  It is years since I had to play with these things and it took me a while to figure out how to put mappings in place that make my British keyboard do the right thing.  There may actually be a problem with Fusion 7 if this forum thread is anything to go by.

The symptoms are that the '@' symbol, which is typed with shift-2 on the Mac keyboard, was coming out as a double quote like it would on a PC keyboard and, try as I might, at first I could not figure out how to type a backslash at all.

The solution is to go to the VM's settings, duplicate the profile you are using (VMWare created a Windows 8 profile for me) and then add some custom mappings. I found this a bit counter-intuitive because you need to start by opening something like Notepad in the VM and experimenting until you find the characters you want, note down the key combination you pressed to get them and then switch to the profile screen and add a mapping from the keys you want to press to the key combinations you just discovered work. A picture will help...

  • Shift-': This is the double quote on my keyboard but without the custom profile it types '@', a simple switch with Shift-2 that almost all Mac users are probably already familiar with.
  • Shift-2: The reverse mapping of the above.
  • Option-3: Turns out that the backslash key types a hash/number/sharp sign on the PC, so we map Option-3 to that if, like me, you have become habituated to the Apple way.
  • \: Turns out that the backslash is now typed by the section sign (that's the curly thing that looks like a tiny spiral-arm galaxy).
  • Shift-`: Lastly, you may have to hunt for the '~', but it gets typed when you hit shift-backslash on the PC. On the Mac it appears over the back-quote or back-tick symbol.

Those are the most important ones. Note that shift-3 correctly types the pound-sterling sign '£' so I didn't have to touch that at all. I have no idea how one types a section sign or the combined plus-minus '±' sign on the PC but in my line of work there is very little call for them. I warmly invite readers to add their own suggested mappings as comments to this post!