Setting up SFTP access inside a chroot jail

This is potentially very useful for secure shared hosting using Apache or another web server. I assume that readers understand that SFTP is a secure version of FTP (although there also exist FTPS and FTP over SSH), and that plain FTP ought not to be used because it transmits all data, including passwords, in the clear.

For this, we will set up a group called sftponly for the users within the chroot (change root) jail:

groupadd sftponly

We will be editing /etc/ssh/sshd_config and then restarting the ssh daemon with sudo service ssh restart (in Ubuntu). Some distros may require sudo /etc/init.d/sshd restart instead.
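Since a syntax error in sshd_config can lock you out of a remote box, it's worth validating the file before restarting. A minimal sketch (the `service` name follows the Ubuntu convention used in this post):

```shell
# Validate sshd_config first; sshd -t exits non-zero and prints the
# offending line if the config is broken, so the restart never runs.
sudo sshd -t && sudo service ssh restart
# On distros without the service wrapper:
# sudo /etc/init.d/sshd restart
```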

There are some confusing instructions for this on the web. The first thing that you must NOT do is use this instruction without carefully considering the result:

AllowGroups sftponly

What will happen is that NO other users will be able to use either SFTP or SSH. If you are using a VPS like Amazon EC2, this is your only way to access the virtual box and you will be locked out! Instead, check that your admin user is in the groups admin or sudo or similar (depending on distro and version), and add those (at least, but see below) to the instruction too:

AllowGroups sftponly admin
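Before writing the AllowGroups line, you can confirm which group names actually apply on your distro. For example:

```shell
# List the groups the current login belongs to (names like admin, sudo
# or wheel vary by distro and version):
id -nG
# For another account, name it explicitly, e.g. (username is an example):
# id -nG admin
```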

You will need to alter these lines to switch from the external sftp-server binary to the internal SFTP subsystem, both of which are supplied by default with OpenSSH in most standard distros:

#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp

UsePAM yes

You will see in the lines that follow that I have needed to alter the way that I restrict users from accessing the box from the internet. The admin user (not actually called that) is in the admin (or sudo) group and is thus, on Ubuntu, equivalent to a user in the /etc/sudoers file, with root access. However, I don't like this user to log in directly except from the LAN: if it were compromised by brute force, an automated attacker could access files anywhere on the box by using sudo -s with the same password. Thus there is a user called myuser (or whatever) who does not have root access and has a different password. A brute-force attack would need to succeed twice, and the attacker would need to know the name of the only user who can log in from outside, which is not obvious, before being able to type su admin and enter the second password. This is a fairly effective form of security through obscurity, especially since I use port 22 only from the LAN and use port forwarding on my router to map a custom external port to it. Automated attackers typically target the standard port 22.

However, AllowUsers and AllowGroups interact awkwardly: when both are present, a user must be allowed by both directives, so every member of your allowed groups would need to be listed individually under AllowUsers or they will be blocked. In other words, either allow everyone in both directives or avoid AllowUsers entirely. I have escaped this restriction by moving AllowUsers into a Match Group block (see below), which must be at the end of the file, so that it only applies to users in that group.

As usual, I have left my comments in, purely for documentation. Sorry for the mess! 😉

##Note that the next line has been obsoleted by using Match Groups directives, which
##must appear at the bottom of this file. AllowGroups is also used here.
#AllowUsers admin@192.168.1.? admin@192.168.1.?? admin@192.168.1.??? myuser sftpuser
DenyUsers root

##WARNING: next line allows only this group!! It will disable the above users! So don't use it!
#AllowGroups sftponly
AllowGroups sftponly myuser admin

You will also notice that root login is prohibited and that the group for the user myuser is allowed, which will have the same name as the user itself and will include it in exactly the same way as if we had used AllowUsers to do it. This enables my approach to security, as described.

##Note that security requires that chrooted folders must be owned by root and must not be writeable by group
##or all in order for them to log in, so set these users' home directories to ~/public or similar.

##You can then mount these to /var/www/mysite also then set these users' home folders to /public
##(rel. to the chroot directory), which should be chmod 700 so that no other sftp-only users can see their files.
##Trying to do this with symlinks will not work because paths are relative to the chroot directory.

##Then use sudo mount --bind /var/www/mysite /home/sftpuser/public
##and to undo user sudo umount /home/sftpuser/public
##and add /var/www/mysite /home/sftpuser/public none bind to /etc/fstab to make this work on reboot

##Note that, while mounted, the permissions on /var/www/mysite will override/replace those on /home/sftpuser/public
##and that they will revert completely when umounted. So set the former for the user to upload files.

##An alternative is to point the Apache or Nginx document root at this location, avoiding /etc/fstab changes. This is
##less likely to get forgotten about later and seems like a better option than mounting.

First, /home and /home/sftpuser (or whatever) must be owned by root and not writeable by the group, as noted.

As noted in the comments, there are two ways to allow the user in the SFTP jail to read and write to the web folder. The mounting method works fine, but you may later forget the entries in /etc/fstab, whereas it is easier to use virtual hosts in Apache or Nginx (or whatever) to point directly at the /home/sftpuser/public folder. The choice is up to you.
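Putting the comments above together, the whole chroot home setup might look like the following sketch (paths and names follow the examples in the comments; adjust to your own user and site, and run as root):

```shell
# The chroot path itself must be owned by root and not group/world
# writable, or sshd will refuse to let the user log in.
sudo chown root:root /home /home/sftpuser
sudo chmod 755 /home /home/sftpuser
# The user's writable area inside the jail, hidden from other sftp users:
sudo mkdir -p /home/sftpuser/public
sudo chown sftpuser:sftpuser /home/sftpuser/public
sudo chmod 700 /home/sftpuser/public
# Either bind-mount the web root into the jail (and persist it in fstab)...
sudo mount --bind /var/www/mysite /home/sftpuser/public
echo '/var/www/mysite /home/sftpuser/public none bind' | sudo tee -a /etc/fstab
# ...or skip the mount and point the web server's document root at
# /home/sftpuser/public instead, as discussed above.
```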

##Remember to adduser --uid 9[01-99] --ingroup sftponly username (numbers less than 1000 cannot login using GUI)
##where users already created and their UIDs can be found in /etc/passwd
##(if you forget the --ingroup parameter, use adduser sftpuser sftponly in order to keep them in the chroot jail)
##and then use usermod -d /public (instead of creating a home directory with adduser --home, which would create it in the real root!)
##and you can lock and unlock users with usermod -L username and usermod -U username
##and change users and groups with usermod -l newuser username groupmod -n newgroup groupname
##remembering also to move /home/username to its new location

These comments speak for themselves. I chose the range 900-999 for the UIDs of these users. Don't use adduser --home, because that would create /public in the real root of the server, whereas usermod -d /public will not: the path is treated as relative to the chroot jail /home/%u once the user logs in via SFTP. Make sure the user is in the sftponly group!
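A worked example of the user-creation steps from the comments (the UID and usernames here are illustrative only):

```shell
# Create a jailed user directly in the sftponly group; UID 901 is just an
# example from the author's chosen 900-999 range.
sudo adduser --uid 901 --ingroup sftponly sftpuser
# If you forgot --ingroup, add the group membership afterwards instead:
# sudo adduser sftpuser sftponly
# Set the home directory *relative to the chroot* -- do not use
# adduser --home, which would create /public in the real root:
sudo usermod -d /public sftpuser
# Lock and unlock the account when needed:
sudo usermod -L sftpuser
sudo usermod -U sftpuser
```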

##Nothing except other Match Groups blocks may follow these blocks!
Match Group sftponly
  ChrootDirectory /home/%u
  AllowTCPForwarding no
  X11Forwarding no
  ForceCommand internal-sftp
Match Group admin
  AllowUsers admin@192.168.1.? admin@192.168.1.?? admin@192.168.1.???

As noted, these blocks must be the last lines in the file: any directive that follows a Match Group block is interpreted as belonging to that block rather than as an independent directive, and only another Match directive can start a new block.

The first block ties the users down to SFTP and not SSH (as setting up a chroot jail for SSH is rather more involved and I haven’t attempted it yet), with some obvious security restrictions. The second block replaces the original AllowUsers directive that I had used above, but now only applying to this particular group, i.e. the admin user(s). If you don’t use my security methods, omit this. The single, double and triple question marks are required to cover all possible internal IP addresses, as an asterisk will not work here.

Lastly, if you are still having problems getting your chroot-jailed user to log in via SFTP, it's most likely a permissions issue. Check the notes in the config above: the user must have read-write access to their ~/public folder, but only root may own and write to the chroot directory itself. Also check that the user owns their ~/public folder and that it is in their group of the same name.
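If you want to script that check, here is a small sketch. The helper name is my own invention, not part of OpenSSH; it flags directories that sshd would reject as a ChrootDirectory (remember the directory must also be root-owned):

```shell
# Hypothetical helper: sshd refuses a ChrootDirectory that is group- or
# world-writable, so flag those cases.
check_chroot_perms() {
  p=$(stat -c '%a' "$1")               # numeric mode, e.g. 755
  g=$((p / 10 % 10)); o=$((p % 10))    # group and other permission digits
  if [ $((g & 2)) -eq 0 ] && [ $((o & 2)) -eq 0 ]; then
    echo "$1: ok ($p)"
  else
    echo "$1: group/world writable ($p), sshd will refuse the chroot"
  fi
}
# Example on a throwaway directory:
d=$(mktemp -d); chmod 755 "$d"
check_chroot_perms "$d"                # prints "...: ok (755)"
rmdir "$d"
```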

I’m not sure that I will bother with a more complex SSH chroot jail, which requires copies of the binaries for the basic Linux commands (or at least the ones you want your restricted users to be able to use) to be placed somewhere accessible inside the jail. That seems like a real hassle when SFTP allows such basic operations for restricted users anyway. My unrestricted users are unaffected by the change. Effectively, I can use this arrangement for secure shared hosting.


CloudFlare CDN and PHP Dropbox proxy scripts

Some time ago, because my server isn’t set up for shared hosting and because it suited my friend’s needs, I decided to adapt a PHP proxy script for Dropbox so that her personal web site could be updated just by copying HTML files into a particular folder in Dropbox and they would automatically be updated on the Web. My server would merely handle the routing of HTTP requests to Dropbox. Since then, even that is now done elsewhere via RedHat OpenShift, a free VPS for developers where the PHP script now resides. The DNS is pointed there rather than, as it was previously, at my server. Since she doesn’t need HTTPS (for which you have to pay), this is sufficient for her needs.

However, there are certain drawbacks to using Dropbox as a web server. The first and simplest is that it isn’t designed as one. It doesn’t do basic things like automatically redirecting the web root to index.html, index.php or whatever you choose it to look for, so you have to code this into the PHP. This particular example is not a serious issue, but some other aspects of Dropbox are much less favourable to using it as a do-it-yourself web server:

Speed

Dropbox is very slow indeed. However, I moved the DNS servers to CloudFlare‘s content distribution network, effectively a web-scale version of a web cache packaged with lots of other optimisation techniques. The cached version of the web page is served lightning fast, even though the actual site is served very slowly from Dropbox to the proxy (the bottleneck is Dropbox, not the PHP proxy script). Effectively, this problem is solved.

PHP

The main difference is that PHP is not available: it’s available on OpenShift, where the proxy script resides, but not in Dropbox. Server-side scripting (PHP or anything else) in the hosted HTML files will simply fail; browser-side scripting such as JavaScript works fine. As noted below, what was required for such a simple site could be rewritten in JavaScript, but that would not work for a site where PHP or similar was genuinely required.

Character Sets

I had to alter the PHP proxy script to force UTF-8 encoding by modifying the HTTP header. Dropbox reverts to ASCII, an obsolete and deprecated character set that really shouldn’t be used any more, despite being the foundation of modern character sets. This problem could be overcome, but I must admit that it’s a hack.

Last-Modified HTTP Header

Because the original HTML page had used PHP for a “Last updated” notice at the bottom and I had to re-write this in JavaScript, I ran into a problem where the script would send the last modified date of the PHP proxy script rather than the HTML that was being served, clearly the wrong behaviour and quite useless for the purpose. I therefore had to alter the script to modify the HTTP response by overriding the Last-Modified header with the correct date. Again, this was a hack, but it worked in order to overcome the immediate problem.

Overall Result

It was possible to use a combination of web technologies to emulate a basic web server’s functionality so that a visitor would not see the difference. It gave a less technical user the convenience of simply copying modified HTML files into a special Dropbox folder, so I did not need to set up SFTP or FTPS, upload files on the user’s behalf, or modify permissions across my server to enable secure shared hosting. The scripting and performance issues were mitigated completely.

I should probably get round to learning more about using chroot in linux to set up a proper shared hosting environment, which would mean I didn’t need this sort of solution. [Ed.: see this later post on setting up a chroot jail – 2014-06-27 12:58]

This is a suitable solution for simple sites but would not fit all purposes. It was a really useful learning experience for me, especially in PHP coding, HTTP response headers and DNS.


User-friendly secure voice calls and chat for everyone

This isn’t really news, but it’s something that I wish more non-technical internet users knew. It is really easy and anybody who can use Skype can use this with no extra effort.

1. Instead of downloading Skype, download Jitsi.
2. Instead of registering with Skype, just use GMail (or Facebook or others).
3. Enter the account details for whatever service(s) you use instead of Skype.
4. Convince others to do the same so that you can avoid being snooped on.

Bad Skype/Microsoft!

Ok, so you know how you need to download software from the Skype site and sign up for an account there, then enter your details into the software you downloaded, in order that you can make free calls and instant messaging chat? Well, you don’t have to have Microsoft, who bought Skype, mining your text chat and recording your video. You can do so securely and privately for free.

Good Jitsi! Using GMail or Facebook!

You can do exactly the same by downloading a piece of free and open source software called Jitsi, whether you use Windows, Mac (OS X), Debian-based or RedHat/Fedora-based Linux (and there is a version being tested for Android phones too). You can then use your GMail account or any other Jabber/XMPP instant messaging service. You could use several accounts all in the same place, if you like.

You can use Facebook but you have to fiddle with a setting on their web site first. That is Facebook’s fault really. Still, you can use it and it’s not all that hard.

Secure, encrypted calls – nothing to set up

The default settings – unlike Skype, where Microsoft could easily monitor your calls – are to encrypt voice calls at your end using an encryption standard called ZRTP wherever possible. That means that even the server that you are talking over can’t see or hear your call.

That is, provided both people in the conversation are using Jitsi (or, in theory, something else that supports ZRTP). You’ll be warned if it’s secure or not.

Easy to chat securely by instant message

If you need or want to use encrypted IM (called Off The Record or OTR), you can simply right click on your contact and choose Secure chat –> Start Private Conversation.

If your contact can’t do OTR secure chat because of the client program they are using, you’ll be told immediately when you try it. So you will always know whether it’s secure or not.

Although you probably don’t need to, you can optionally then use Authenticate buddy with various methods, the easiest of which are shared questions to which only you two know the answer and shared secrets (apparently the same but without a question). These are sent encrypted so that nobody is able to fake it. Or, of course, you can video call them in order to check who they are, if you are still not really sure! 🙂

Comparison with other alternatives

Other IM clients (i.e. desktop programs) like Pidgin and some Android apps like Xabber will let you do OTR secure chat but, as far as I know, no other Jabber/XMPP clients support ZRTP encrypted voice calls by default, i.e. without you doing anything else complicated to set it up. That means you can’t make encrypted voice calls with your GMail and/or Facebook account unless you use the Jitsi desktop client as I am suggesting.

Conclusion: use Jitsi, not Skype!

The bottom line is that it’s exactly like Skype only minus Microsoft and potentially the NSA watching. It can do loads of things but it also just works out of the box.

You might ask: won’t GMail or Facebook see all this? No, not if you encrypt it. If you really care about privacy, you could use a free Jabber service elsewhere. They exist.

Plus you can talk to people on their phones if they have IM chat there! Some Android apps like Xabber will let you do OTR secure chat. The more people want to use it, the more apps will be coded so that it works. Hopefully, Jitsi will soon have a stable release on Android and perhaps other apps will follow. Then voice calls using those apps will be secure too.

Plus you can then integrate all your GMail and Facebook clients rather than having separate Skype ones. Or not, if you don’t want. In fact, you can manage it any way you like, really.

If you can use Skype, I repeat, you can use this!


cu nout marc ha catuur

Here is a translation of a famous verse by Tolkien from The Lord of the Rings, itself his version of a Saxon poem, rendered into a form of reconstructed Common Brythonic, in a contemporary and a modern orthography respectively. It resembles Welsh but is not entirely intelligible even to an educated speaker, perhaps something like the effect a speaker of Alemannic (mostly Swiss) German might get from reading Frisian or Dutch.

cu nout marc ha catuur? cut corn hai lebe?
cut pen hoiarn ha pois ha guolt gloiu hai guibe?
cut lom guar telin ha tan rud hai debe?
cut guantoin ha metel hac it hir hai tibe?
rit etont mal glau ar minid mal auel in prat
rit et diou in gorleuin tub hunt di briniou do scot
pui casclid muc coit maru debid
nou guelet huil blined di ar mor hai dibid?
"Cw noud march ha chadwr? Cwd corn ai lefei?
Cwd pen hoearn ha phois, ha gwolt gloyw ai chwifei?
Cwd lof war delyn, ha than rhudh ai dheifei?
Cwd gwantwyn ha medel hag yd hir ai dyfei?
Rhyd aethont fal glaw ar fynydh, fal awel yn prad:
Rhyd aeth diou yn gorlewin tu hwnt di bryniou dy scod.
Pwy gasclydh mwg coed marw deifydh
Nou gweled hwyl blynedh di ar for ai dhyfydh?"
"Where now are the horse and the rider? Where is the horn that was blowing?
Where is the helm and the hauberk, and the bright hair flowing?
Where is the hand on the harpstring, and the red fire glowing?
Where is the spring and the harvest and the tall corn growing?
They have passed like rain on the mountain, like a wind in the meadow;
The days have gone down in the West behind the hills into shadow.
Who shall gather the smoke of the deadwood burning,
Or behold the flowing years from the Sea returning?"
(J.R.R. Tolkien, The Lord of the Rings)

Linguists may note, amongst other features, that the demonstrative is not yet generalised as definite article (and omitted especially in poetry); that the present copula is represented by a particle, sometimes incorporated into another word; that two different forms of the relative are used in the present-future indicative, once as an infixed particle and the other as a suffixed ending; that the prepositions ar and (g)war are used differently.

The phonology reflects an immediate predecessor of Welsh, Cornish and Breton. In this variety, the conjunction (h)a “and” has acquired a generalised sandhi /h/ from the lost endings of preceding words with final etymological /-s/ etc (like Cornish and Guémené Vannetais Breton), whereas generalising the etymological form without initial /h/ is just as plausible. The particle plus relative pronoun (h)ai has orthographic <h> in the “inherited” version but this is not a phonological feature.

(Ed.: further translations into modern Breton and modern Welsh in comments below.)


IPv6 and firmware flashing on the TG582n router

I recently decided that I should support IPv6 transition, however minuscule my part may be in the scheme of things, by switching my server over to a dual-stack configuration that allows pages to be reached either by IPv4 or IPv6. Every little helps, as the saying goes. There are a number of stages:

Web server

It’s easy enough to make the web server start listening over IPv6, but this alone will not be enough without IPv6 connectivity and an appropriately configured firewall. Anyway, as the first in a number of stages, I did this with Nginx:

server {
    # The next line will only work for IPv4
    listen 80;
    # The next line is for port 80 (HTTP) over IPv6
    listen [::]:80;
    # The next line will only work for IPv4
    listen 443 ssl spdy;
    # The next line is for port 443 (HTTPS) over IPv6
    listen [::]:443 ssl spdy;
}

You’ll notice, by the way, that I also use SPDY with TLS/SSL. You can omit “spdy” in the above lines if you don’t want to use it for any reason. With Apache, you can use mod_spdy instead, which appears to be easy to set up (though I no longer use Apache in production, so I can’t speak from having tried it). Either way, you may want to look at the blog post that I wrote on SPDY for more details, including the Alternate-Protocol header in Nginx.

For IPv6 on Apache, there are instructions on the internet to listen via IPv6 as well as IPv4 but, in essence, you need to avoid restricting Apache to listening on a particular IPv4 (or IPv6) IP address if you want both to work. It may well be that you don’t need to do anything at all to your settings, whether or not you are using virtual hosts (which you almost certainly should be), but some people may want to change IPv4-only statements like this:

Listen 0.0.0.0:80 --> Listen *:80

Alternatively, you can have two listen statements (square brackets for IPv6), e.g.:

Listen 74.86.48.99:80
Listen [2607:f0d0:1002:11::4]:80

These should obviously be replaced with your own IPv4 and IPv6 addresses. Later, I will describe how I set up AAAA records for IPv6 domains and set up custom firewall rules on the TG582n router: without these, nobody else will see your site over IPv6.
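Once the AAAA records and firewall rules described later are in place, you can verify reachability from another IPv6-capable host. A quick sketch (example.com stands in for your own domain):

```shell
# Check that the AAAA record resolves to your IPv6 address:
dig AAAA example.com +short
# Force an HTTP request over IPv6 and show only the response headers:
curl -6 -I http://example.com/
# Basic IPv6 reachability check:
ping6 -c 3 example.com
```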

Native IPv6 versus 6in4 tunnels (IPv6 over IPv4)

The next part was much harder. My ISP (Plusnet) has trialled IPv6 but for some reason the UK government is neither obliging nor encouraging ISPs to enable IPv6 transition despite its critical importance to the future stability of the internet via the sustainable availability of sufficient IP addresses, direct addressability of devices without the need for Network Address Translation (NAT) and so on. The larger ISPs are not making enough effort on their own to put the UK in the forefront of IPv6 adoption, although you can get IPv6 connectivity from smaller UK ISPs. If your ISP does provide IPv6 then you can avoid all of the trouble that I went through, and presumably any router that they provide you or advise you to buy will also be natively IPv6 enabled.

In order to circumvent the problem of not having native IPv6 connectivity, I set up a 6in4 tunnel (IPv6 packets encapsulated in IPv4 packets via a tunnel broker) but I must warn you that this was a fairly in-depth technical procedure. There are some very similar protocols like 6to4, 6over4 etc but I have no experience of how these need to be set up. But first I needed to flash the firmware of my router to a more recent software version that enables IPv6 connectivity, as the original 8.4.4.J firmware only supports IPv4.

Flashing the Technicolor TG582n firmware

The instructions in this section are quite specific to the router that I am using, although the TG582n is a fairly inexpensive router provided free by a lot of ISPs, so many people might face the same issues unless they simply avoid the trouble by replacing it. Before you read on, please consider getting yourself a really good router that already has this functionality or, if your budget constrains you, a cheap router that supports the OpenWrt firmware, and flash that instead, making it a very powerful and flexible device that is quite easy to set up via its web interface. OpenWrt maintains a list of compatible devices and instructions for how to flash the firmware on each one.

Even if you choose OpenWrt on a different device (since OpenWrt does not apparently support ADSL or the wireless interface on the TG582n at the time of writing), it may be useful to read about the mistakes I made, particularly if you are using OS X (Apple Mac) on the device from which you are flashing the firmware.  I hope that I can save a few people reading this some time figuring it out the hard way.

While flashing firmware is a lot easier in principle than you might expect, at least once you know how to get TFTP working, in practice it can take a long time on any particular device and be very frustrating to debug: TFTP is quite fiddly, and all sorts of things can go wrong that stop the router from seeing the new firmware when you hard-reset it and it enters its BOOTP sequence.

First of all, you must get the right firmware for your specific device. It is important to realise that there are two very similar boards for the TG582n that have different firmware, the DANT-1 (rarer, e.g. BE) and the DANT-T (most UK ISPs). You can find out which board you have from the information in the web interface of the router under Technicolor Gateway –> Information. The only difference seems to be that the former has two banks of 8MB SPI flash memory (switchable in the factory firmware) whereas the latter has only one, but you should still make sure you’re using the correct one. Slightly confusingly, both versions have a variant with and without a USB port for content and printer sharing. If you use the wrong firmware, these may not be enabled and thus may stop working.

For the DANT-T board, Plusnet supplies its own firmware for free, although it is possible to alter the settings in its user.ini file in order to use other ISPs and, to a large extent, return it to something resembling its factory defaults. However, it only provides a means to flash the firmware on Windows platforms and does not give much specific advice on the “fallback” option of using TFTP, which is most likely how the “firmware update tool” ultimately works under the hood anyway. If you are using Linux/Unix, you’ll have to find your own specific instructions for TFTP from the web, though issues such as file permissions will be the same as for OS X and indeed for any operating system except Windows.

The generic firmware for the DANT-1 board was made available for Uno broadband. You can get earlier versions of the firmware, as in Jonathon Davies’ blog post, but these will not enable IPv6. I used his instructions as a general guide, but they did not help me get TFTP to work on OS X, which I had to figure out on my own.

You should probably now type ftp 192.168.1.254 in the terminal window and enter the user name and password of the router. Type ls to get a listing. You should see user.ini and some other files. Type binary and then get user.ini, which will copy it to whatever folder you were in in the terminal, probably Documents. I used its sub-folder Desktop, so that I could see the file appear on the desktop. Keep a safe copy of this somewhere else, first of all! If everything goes wrong, you may need to re-flash the original 8.4.4.J firmware and type binary and put user.ini in the same way in order to get it back! Also, you can copy some of the settings over manually later, as I did, if you are crafty and figure them out 🙂
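The FTP session described above looks roughly like this (the router address is the default used in this post; the commands after the prompt are typed interactively):

```shell
# Connect to the router's built-in FTP server and back up user.ini.
ftp 192.168.1.254
# At the ftp> prompt, after entering the router's username and password:
#   ls              # list files; user.ini should appear
#   binary          # switch to binary mode before transferring
#   get user.ini    # download a copy to the current local folder
#   quit
# To restore it after re-flashing the original firmware, reconnect and use
# "binary" followed by "put user.ini" instead.
```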

Previously, I had used TftpServer to flash firmware on an old Inventel Livebox, but this simply does not work now under either Snow Leopard or Mountain Lion. If you don’t want to find out for yourself how to use TFTP on the OS X command line, there is a useful script provided here by the BE User Group, but you will need to modify it as I did in order for it to work with this particular router. Otherwise, you will end up pulling your hair out, as I did.

You will need to turn any wireless connection on your router off, meaning you will not have internet access for a while. You will need to connect your computer to the router by cable via one of the four ethernet ports. If one of them (usually number 4) is marked red, it’s quite possible that you won’t be able to connect by telnet and ftp if this is set up as a WAN port, so you will probably want to use one of the others, e.g. ethernet port 1.

Unzip the file. Edit the files setup-for-flash.sh and end-flash-setup.sh in a plain text editor, preferably on the command line so that nothing adds stray bytes, not in a desktop text editor (you have been warned!). I use nano or gedit, but some people prefer the ancient, arcane and user-unfriendly vi or vim.

You can actually use the internal IP addresses (IPv4) that the scripts prefer, 10.0.0.9 (client) and 10.0.0.20 (router), or change them to 192.168.1.2 (or anything between 1 and 252 for the client) and 192.168.1.254 (router) on this particular Technicolor router. However, CANT-P is the firmware name that a completely different router (the TG585 v7) looks for, so you must change it to DANT-1 or DANT-T in both these scripts. The scripts expect you to put a copy of the firmware in the folder that TFTP is using and to rename it to firmware.bin, or else they will fail to work properly. (They will actually work just as well if you rename it directly to DANT-1 or DANT-T without a file extension, missing out the middle man, since the first script renames it to this anyway.) It will be deleted by the second script later.

Now examine the instructions provided in the zip file. If the folder that TFTP is using is not readable, writable and executable by all users, TFTP will not work. You may as well give the firmware file the same permissions to avoid any problems, but this isn’t strictly required and I did it just to be completely sure that it wasn’t a file permissions issue. This can be done using the (sudo) chmod 777 <folder or filename> command, for which there are many instructions available. Remember to change these back to something sane later, e.g. 755 or 744 for folders, 644 for files, when you are done, for security.

Normally with TFTP, you would need to set up a static internal IP address e.g. 10.0.0.9 (or 192.168.1.2 etc if you have altered the two scripts) with the router (10.0.0.20 or 192.168.1.254) as the default gateway and 255.255.255.0 as the netmask. However, the first script will do this for you, so you can just run it as instructed. Also remember to turn off your firewall – but *only* once you aren’t connected to the internet, to avoid your computer being hacked! – and perhaps any anti-virus software that may interfere. I found it almost impossible to completely shut down Sophos in the background but it worked anyway.

sh ./setup-for-flash.sh

Now use a pin in the hard-reset hole, near the power button of the router. Remember that all your settings will be lost! Your new username and password will almost certainly be either Administrator with no password, or possibly admin with no password. If not, search the web as I did: the Plusnet and Modem Help forums are the places to look. Hold the pin pressed in the hole and wait until the power light goes red. Mine did not go red and then orange as described elsewhere, only red. Now wait. If all goes well, you’ll have new firmware, and the software version in the router’s web interface will be updated, along with factory defaults (or default Plusnet settings, for the DANT-T firmware above). I repeated this stage perhaps a hundred times before I got it to work, precisely because I was not yet doing all the things described above.

Now run the second script to restore your normal settings, including removing the static IP address and making it automatically set by your router’s DHCP as usual:

sh ./end-flash-setup.sh

Now remember to turn your firewall back on *before* you connect to the internet! Change the permissions on the folder that TFTP is using back to 755 or whatever you think is sane and safe. Set a new, secure password for your Administrator user via the web interface.

Hopefully, you now have a router capable of IPv6. If you don’t, or if you have bricked your router, don’t blame me. I was careful enough to buy a second TG582n (which turned out to be one with the DANT-1 board) and so had my Plusnet DANT-T router as a fallback, so I would not be left unable to connect to the internet later. You may be able to re-flash the router to recover it, if you have simply flashed the wrong firmware. Don’t panic until you have tried to flash it at least 50 times! :-p

Setting up the 6in4 tunnel

What you may not have at this point is IPv6 connectivity. If you are lucky enough, or sensible enough, to have a small ISP that provides IPv6 connectivity, you might now find that it works immediately – but, in that case, I’d be surprised if you really needed to go through the foregoing steps, because such an ISP would probably have given you either a better router or the updated firmware on the TG582n in the first place – but what do I know? 😉

If you still need a 6in4 tunnel, I can simply point you at the excellent instructions provided by Matt Turner on the Plusnet forums. They have worked for a number of people who commented there, as well as for me – and I was using the DANT-1, unlike the others there. I used Hurricane Electric’s tunnel broker service, though if you are worried about the NSA snooping on you, you may prefer SixXS or Freenet6 (though I’ve not seen feedback on the latter). For those of you who may have access to Janet via a UK Higher Education institution, you may be able to use Janet’s tunnel broker service, provided by the University of Southampton.

At this point, you will find, if you search the web for sites that identify your IP address, that those with IPv6 connectivity now give your IPv6 address. Some sites will identify you by the IPv4 address because they don’t have IPv6 capability, i.e. it naturally needs to be working on both ends in order for pages to be served using IPv6. You may find that you can connect to your web sites via your IPv6 address in square brackets in the URL bar, but remember that these won’t be visible to anybody else until you open up some holes in the firewall, as noted below. You must also set up a static address for your web server. In Linux, you will need to edit /etc/network/interfaces and add a section like this at the end (with every “xxxx” replaced by the actual numbers in the IPv6 address of your default gateway, i.e. router):

### Start IPV6 static configuration
iface eth0 inet6 static
    pre-up modprobe ipv6
    address 2607:f0d0:1002:11::4
    netmask 64
    gateway fe80:0000:0000:0000:xxxx:xxxx:xxxx:xxxx
### END IPV6 configuration

You can find out what your IPv6 address is using ifconfig or ip -6 addr and will notice that, after you type sudo service networking restart (on Ubuntu) or the equivalent on other Linux varieties, your address will no longer be marked “temporary”, i.e. it will be static. Also try ip -6 route show to confirm the default gateway address etc. Unlike in IPv4, your server and other devices on your subnet may have multiple addresses. These are marked with the amount of time before they will expire and become “deprecated”, and the oldest temporary one will eventually drop off the list in turn: at this point it will no longer work. Obviously, static addresses remain on the list. You normally set these on each device, if required.
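To make the static/temporary distinction concrete, here is a hedged sketch that pulls the global-scope addresses out of ip -6 addr output; the sample output is canned, since the real thing varies per box:

```shell
# On a live box you would pipe in real output:  ip -6 addr show dev eth0
# The sample below is illustrative.
sample='2: eth0    inet6 2001:db8:1234::4/64 scope global
       valid_lft forever preferred_lft forever
2: eth0    inet6 fe80::21b:21ff:fe22:3344/64 scope link
       valid_lft forever preferred_lft forever'

# Keep only global-scope addresses and strip the /64 prefix length
global_addr=$(printf '%s\n' "$sample" | awk '/scope global/ {sub(/\/.*/, "", $4); print $4}')
echo "$global_addr"
# → 2001:db8:1234::4
```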

Important notice: for security, you should enable privacy extensions, which are switched on in OS X Lion and later. For Snow Leopard, check these instructions. If you don’t, your IPv6 address will encode your computer’s unique MAC address, allowing it to be uniquely trackable on the Internet. This is not a good thing. Please check it 🙂

Setting up the firewall

My query about Matt Turner’s instructions later in the same thread turned out to be unnecessary, so I can confirm that you should follow them as directed with no alterations, as you will not be using NAT for IPv6. It won’t work on the TG582n and should probably never be used with IPv6, though it may be technically possible. Since all devices can have unique addresses, IPv6 makes NAT redundant anyway. I have also left some instructions there for setting up the firewall to allow incoming connections ports 80 and 443 (and others, if you like). This is required if you are going to run any services on a web server. There doesn’t seem much point in me repeating the instructions in this blog post, so please see the above thread.

The proof that this works is that other sites, e.g. online HTTP response checking tools, can see your site, though some of these may be unhappy with just the IPv6 address in square brackets, so you may need to wait until you can point your domain names at your sites over IPv6 as well. If you can go somewhere like an internet café or workplace, on another subnet (or use, say, a mobile phone over 3G/4G without using your local wifi), you could try to connect to the site via the IPv6 address in square brackets from there.
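When testing with the bare address, note that curl treats the square brackets as glob characters unless you pass -g. A small hypothetical helper (the function name is mine) shows the URL shape:

```shell
# Hypothetical helper: build a URL from a bare IPv6 literal.
# With curl you need -g (--globoff) so the brackets are taken literally:
#   curl -g "$(ipv6_url 2001:db8:1234::4 443 https)"
ipv6_url() {
    addr=$1; port=${2:-80}; scheme=${3:-http}
    printf '%s://[%s]:%s/' "$scheme" "$addr" "$port"
}

ipv6_url 2001:db8:1234::4
# → http://[2001:db8:1234::4]:80/
```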

In Chrome, you can use the “if ipv6” extension to see whether you are connecting to a site by IPv6 or IPv4, but remember that this doesn’t show whether others can. It’s possible that you can connect only because you are on the same subnet, behind the same firewall as the server.

Pointing domain name records at your server

Normally, in order to run a web server, you set up A records to point to the static IPv4 address that you are given by your domain name registrar, which you typically log into using the web interface that they provide on their site. Some of these will provide the ability to set AAAA records, which need IPv6 addresses in the same way. One of these that I have used, for example, is freeparking, but I cannot recommend any particular one, as it will depend on price and on the services that they offer: what each person wants may be different, e.g. MX records, email hosting, email forwarding etc.

I use Freeola/GetDotted and, at first, I was sorry to see that they don’t yet let you set AAAA records yourself, and I was ready (wrongly, as it turned out) to change to another registrar. However, on opening a support ticket, I was happy to learn that they are not only testing an upgrade to their domain name record interface that will let you control your own AAAA records, but are already able to set them for you. All you have to do is ask: they were courteous and very quick about setting them correctly, especially since I had many domains and subdomains that needed records. Not ideal, I know, but they are on the right side and doing their best – what more could I ask?

You will usually just point your AAAA record at the IPv6 address that corresponds to your server. This will mirror the way that you use A records to point at your IPv4 address, normally your router. There are specific IPv6 web site reachability tools like this one that can help you check if this has worked.
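For concreteness, here is a hedged sketch of what the records might look like in a BIND-style zone file (names and addresses are illustrative); registrars’ web interfaces ask for the same information in form fields:

```
; Hypothetical zone fragment for example.com
www    IN  A     203.0.113.10        ; IPv4: usually your router
www    IN  AAAA  2001:db8:1234::4    ; IPv6: the server itself
```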

Obviously, without NAT, you cannot use port forwarding (a.k.a. “Game and Application Sharing” on the TG582n web interface), hence the firewall rules that were described. If you had multiple servers on the same ports, e.g. 80 and 443, you could point different domains at different machines for IPv6, but this would be impossible for IPv4 because NAT (technically NAPT, in fact) can only redirect all traffic on a port to a particular machine. In practice, while IPv4 continues to exist as a major protocol, i.e. for the immediate foreseeable future, this will be limited because you usually need to provide the same service over the two IP protocols, i.e. a dual stack configuration that allows identical web services over IPv4 and IPv6. One day, IPv4 will necessarily be deprecated but for now it seems very far off. All the same, adoption of IPv6 has now started to rise fast.

IPv6 allows all devices to be directly addressable, i.e. entirely avoiding the problem that NAT was created to solve, where there simply aren’t enough IPv4 addresses for all devices to have one. You may not see any practical difference in how your server is seen by the world, but every sensor and every device (e.g. in a home automation system) can now have an IPv6 address, enabling all sorts of new technologies in future.

Changing your DNS servers

You may find, as I did, that your ISP’s DNS servers are not the fastest. This may be made worse by enabling IPv6, but never fear! Check out Google namebench and Pete Sent Me’s instructions on how to change the DNS servers on the TG582n via telnet. (You don’t need to be on FTTC for these to work, despite the title of his blog post: I am still on ADSL2+ until June or so, hopefully no later.) The command “dns server route list” has been replaced by “dns server forward list” in the new firmware but the others are unchanged.

Note that namebench may confuse you by suggesting the IPv4 address of your router. This will be marked as a duplicate of one of your ISP’s DNS servers, so use the ISP server that it duplicates. But also use OpenDNS or another of its suggestions. You can have more than two if you like: it is perfectly fine to use a tertiary DNS server (and so on). Any beyond the first two will not appear in the router’s web interface, which only shows two, but it’s possible to set any number.


SPDY on Nginx

I have previously referred to SPDY in a post about DSpace, but I feel that I should elaborate a little here. Presuming that you have a version of Nginx that supports SPDY, here is a sample setup. Note that I have IPv6 enabled here as well.

server {
    # The next line will only work for IPv4
    listen 80;
    # The next line is for port 80 (HTTP) over IPv6
    listen [::]:80;
    # The next line will only work for IPv4
    listen 443 ssl spdy;
    # The next line is for port 443 (HTTPS) over IPv6
    listen [::]:443 ssl spdy;
}

Essentially, I have merely added the string “spdy” after “ssl” in the listen directives. Putting “ssl” on the listen line is the recommended approach in recent versions of Nginx, rather than the separate “ssl on” instruction used previously (before version 0.7.14).

My understanding, by the way, is that mod_spdy with Apache requires no particular configuration after installation, but I have not done this myself and no longer use Apache in production.

You may also want to add a version of one of the next two lines in order to advertise SPDY’s availability to browsers, depending on whether you are using the stable or the “Mainline” (formerly development) branch of Nginx: at the time of writing, both are considered suitable for production. You need version 1.5.x for the latter.

# Nginx 1.4.x (Stable): no support for SPDY 3 or 3.1
add_header Alternate-Protocol 443:npn-spdy/2;
# Nginx 1.5.x (Mainline)
add_header Alternate-Protocol 443:npn-spdy/2,443:npn-spdy/3,443:npn-spdy/3.1;

I have read some comments that SPDY 3 has problems, yet major sites support it, so I am not clear how serious these problems are; you may prefer to skip it and use just SPDY 3.1. You can omit SPDY 2, but some relatively recent browsers that are not the latest version may not support SPDY 3 or 3.1 yet – on the other hand, you can take the view that users who don’t upgrade their browsers have only themselves to blame and will only suffer a small page loading speed hit anyway. This is an area of fast development and SPDY 4 is in alpha, i.e. not yet stable. Perhaps all this will eventually be fixed in the HTTP 2.0 specification – who knows?
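One quick way to see what a server advertises is to look for the header in a response; a hedged sketch follows, with a canned response standing in for a live curl -sI call:

```shell
# On a live server:  curl -skI https://www.example.com/
# The canned response below is illustrative.
response='HTTP/1.1 200 OK
Server: nginx
Alternate-Protocol: 443:npn-spdy/2,443:npn-spdy/3,443:npn-spdy/3.1'

# Does the server advertise SPDY 3.1?
spdy=$(printf '%s\n' "$response" | grep -i '^Alternate-Protocol' | grep -o 'npn-spdy/3\.1')
echo "$spdy"
# → npn-spdy/3.1
```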

Since writing my blog post about DSpace noted above, I have upgraded Nginx to version 1.5.x in order to add SPDY 3 and 3.1 support, as described. You can find out which version you are running in Linux/Unix etc as follows:

nginx -v

Anyway, it should be this simple. You obviously need the other appropriate configuration for TLS/SSL in Nginx but there are many detailed guides on the web for setting that up. I won’t explain my own configuration in detail here but, for completeness, I now add the include directive for each virtual host in order to avoid duplicating the configuration for each one, as well as the specific certificate and key locations for each one:

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl spdy;
    listen [::]:443 ssl spdy;

    server_name www.example.com;
    root /var/www/www.example.com/;

    include /etc/nginx/tls.conf;

    ssl_certificate /etc/ssl/certs/example.com-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/example.com-selfsigned.key;
    
    location / {
        try_files $uri $uri/ =404;
        ...
    }
    ...
    
    # deny access to .htaccess files, in case Apache's document root
    # coincides with Nginx's
    #
    location ~ /\.ht {
        deny all;
    }
}

I have added the following to /etc/nginx/tls.conf:

    ssl_session_cache shared:SSL:20m;
    #ssl_session_timeout 5m;
    ssl_session_timeout 10m;

    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    #original
    #ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    #very secure, very compatible
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    #highly secure, less compatible
    #ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:!ADH:!AECDH:!MD5;
    ssl_prefer_server_ciphers on;

    add_header Alternate-Protocol 443:npn-spdy/2,443:npn-spdy/3,443:npn-spdy/3.1;
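To see which cipher suites the “very secure, very compatible” string above actually selects on your box (the exact list depends on your OpenSSL version), you can expand it with the openssl tool:

```shell
# Expand the cipher string from tls.conf; output varies by OpenSSL version.
ciphers='ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5'
openssl ciphers "$ciphers" | tr ':' '\n' | head -n 5
```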

DSpace 4.x via Nginx reverse proxy

Yesterday, after preparing over some weeks, I finally switched all my live web sites, development sites and demos over from Apache 2.4 to Nginx 1.4.x, to which I recently added SPDY 2 to speed up the TLS/SSL connection. Some of these required various types of reverse proxying using PHP-FPM, FastCGI, uWSGI etc for Python, Perl, Java on Tomcat7 and so on. I won’t go into all the details here, although you can get an idea from my list of technical activities that I keep on my web site, partly for my own reference and partly as an on-going documentation effort of my skills.

Today, I decided to add to these sites by following a friend’s advice to install a demo of DSpace 4.x. For the most part, I followed the instructions for installing that can be found on the DSpace web site. I installed from source and it was generally straightforward, so I will not go over details where they don’t vary from the instructions. In brief, I was already running Tomcat7 on localhost for CKAN, Greenstone 3, Magnolia and other purposes, all of which are behind a reverse proxy, formerly Apache 2.x and now Nginx as of yesterday. This meant that I did not want to install into the Tomcat web root, which I am already using for something else. It also meant that I used the unix user tomcat7 instead of the recommended dspace user. Neither of these differences made much impact and the install process was comparatively simple.

One small issue was that the JSPUI didn’t work (Ed.: see comment below: this is because Solr was not installed). I used the XMLUI anyway, which is easier to customise. For security, e.g. passwords, I redirected the whole site to HTTPS. I could have spent some more time redirecting just /xmlui/password-login/ and /xmlui/profile/ but it was not worth the time fixing annoying redirect loops just for a demo site. (I use CAcert.org certificates, which give browser security warnings because of a commercial cartel that leads to the non-adoption of free, community-supported certificate authorities.)

More tricky was the business of making the Nginx reverse proxy work right. This can also be difficult with Apache, as I have found before: if you get it wrong with a particular site, the result is often that you don’t see the CSS, JavaScript etc because the paths are wrong, or that it simply fails to show anything. It is much easier for sites that you want to install in the web root of Tomcat, which avoids these problems. For reference, here is a slightly cleaned-up version of my Nginx configuration. As usual, I have left some minor cruft in there as comments, which represents failed efforts: the reason is that it may help someone in future work out what didn’t work. I have used my discretion in this, so it’s not a total mess. I hope it helps somebody who tries to do something similar.

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl spdy;
    listen [::]:443 ssl spdy;

    server_name dspace.talatchaudhri.net;
    # Next line might matter for security if Tomcat7 ever fell over? Leave in for safety.
    root /var/lib/tomcat7/webapps/xmlui/;

    # Redirect to the new site - not really necessary but you could adapt this for something
    #if ($host = 'dspace.talatchaudhri.com') {
    #    rewrite  ^/(.*)$  $scheme://dspace.talatchaudhri.net:$server_port/$1  permanent;
    #}

    # Redirect to https because /xmlui/password-login and /profile need to be protected
    if ($scheme = 'http') {
        rewrite ^(.*)$ https://$host$request_uri permanent;
    }

    #remove the following line if one port is http (add "ssl" to the secure port instead)
    #ssl on;
    ssl_certificate /etc/ssl/certs/example.com-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/example.com-selfsigned.key;

    ssl_session_cache shared:SSL:20m;
    #ssl_session_timeout 5m;
    ssl_session_timeout 10m;

    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    #original
    #ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    #very secure, very compatible
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    #highly secure, less compatible
    #ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:!ADH:!AECDH:!MD5;
    ssl_prefer_server_ciphers on;

    add_header Alternate-Protocol 443:npn-spdy/2;

    proxy_intercept_errors on;

    rewrite ^/$ /xmlui/ permanent;

    location /xmlui {
        index index.jsp;
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # The next sections didn't work well for me: for information only
        #proxy_set_header X-Forwarded-Host $host;
        #proxy_set_header X-Forwarded-Server $host;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_pass http://127.0.0.1:8080;
        #proxy_redirect   http://127.0.0.1:8080/xmlui /xmlui;
        #rewrite ^/(.*)$ /xmlui/;
    }

    # The following sections aren't needed but may be informative
    #location ~ \.do$ {
    #    proxy_pass              http://localhost:8080;
    #    proxy_redirect          default;
    #    proxy_set_header        X-Real-IP $remote_addr;
    #    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    #    proxy_set_header        Host $http_host;
    #}

    #location ~ \.jsp$ {
    #    proxy_pass              http://localhost:8080;
    #    proxy_set_header        X-Real-IP $remote_addr;
    #    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    #    proxy_set_header        Host $http_host;
    #}

    #location ^~/servlets/* {
    #    proxy_pass              http://localhost:8080;
    #    proxy_set_header        X-Real-IP $remote_addr;
    #    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    #    proxy_set_header        Host $http_host;
    #}
}

This reverse proxy approach completely avoids the intractable issue of having to set up Tomcat on port 443: obviously I can’t bind the port both for my main web server, be it Apache or Nginx, and for Tomcat. It is also convenient for me to do it the same way as for all my other sites, using Nginx. There is much to be said for a consistent approach anyway, but here the only practical choice is a reverse proxy if I want to run sites using two different server platforms on the same box.

From this, you will also see how I have my server set up, particularly TLS/SSL and SPDY. Anecdotally, Nginx appears to be faster and consumes less memory, as billed, and its rewrite rules are rather less arcane than Apache’s (though you can’t have .htaccess files, which is actually good because these slow down the server). Of course, I haven’t benchmarked the two. For reference, I ran Nginx side-by-side on alternative ports (for which I happened to choose 9000 and 9001, quite arbitrarily) until I was sure that everything was secure and exactly mirrored the Apache sites. There is quite a well-known PHP vulnerability that must be mitigated for Nginx with PHP-FPM and/or FastCGI. I still have Apache running in the background on localhost for testing, though with all the sites disabled, so it won’t consume the memory that it used to.
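The PHP vulnerability alluded to is, I believe, the well-known PATH_INFO splitting issue, where a request such as /uploads/evil.jpg/foo.php can cause an uploaded file to be executed as PHP. A minimal sketch of the usual mitigation (socket path illustrative) is to refuse to pass non-existent files to PHP-FPM, alongside setting cgi.fix_pathinfo=0 in php.ini:

```
location ~ \.php$ {
    # Only pass files that actually exist to PHP-FPM
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;  # path illustrative
}
```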


uWSGI and Nginx with CKAN

I have recently been considering migrating from Apache 2.4 to Nginx. At present, I am simply testing this by serving the web services that I run on Nginx on alternative ports while retaining Apache. Partly, this is to educate myself in Nginx, as I am not dissatisfied with Apache and have, over the years, gained considerable experience in it.

Naturally, one of my first considerations was how to get PHP, Python and possibly Perl to run on Nginx, which is easy on Apache. My attention turned to my CKAN test installation, which has been running on Apache using the mod_wsgi adapter. There are instructions on the CKAN Deployment page that indicate that using Nginx is possible, either as a reverse proxy or with uWSGI. Unfortunately, there are no specific instructions for doing this. I have used and modified various sources, most of which (sorry!) I have now mislaid. But I was put on the right track by the following, even though it was intended to run CKAN via paster (i.e. in a virtual environment) with uWSGI rather than deploying it on the server:

http://ckan-docs-tw.readthedocs.org/en/latest/deployment.html

To prepare, I re-used my earlier apache.wsgi by copying it to nginx.wsgi, copied production.ini to production_uwsgi.ini, and updated the reference in nginx.wsgi to point at the new .ini file. Then I set about altering production_uwsgi.ini. For tidiness, this should really live in /etc/uwsgi/apps-available/production_uwsgi.ini (where either .ini or XML files are equally valid), linked by typing:

ln -s /etc/uwsgi/apps-available/production_uwsgi.ini /etc/uwsgi/apps-enabled/production_uwsgi.ini

However, I was working in /etc/ckan/default and linked it from there, which is terribly bad practice in terms of systems administration but obviously works just as well. I must tidy this up, I suppose, in case I forget what I have done.

I will not bore readers with the  long and frustrating process of learning about uWSGI settings by trial and error. Those of you who are more experienced with uWSGI, Python and Nginx than I am could probably have saved me some time. However, I feel that it’s useful to give you a somewhat sanitised version of the (possibly highly imperfect) set-up that worked for me:

proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=cache:30m max_size=250m;
proxy_temp_path /tmp/nginx_proxy 1 2;

server {
    ## you will usually use 80 here
    listen 9000;
    ## you will usually use 443 here
    listen 9001 ssl;
    server_name ckan.example.com;
    access_log /var/log/ckan_access.log;
    error_log /var/log/ckan_error.log;
    client_max_body_size 100M;

    #remove the following line if one port is http (add "ssl" to the secure port instead)
    #ssl on;
    ssl_certificate /etc/ssl/certs/ckan.example.com.crt;
    ssl_certificate_key /etc/ssl/private/ckan.example.com.key;

    ssl_session_timeout 5m;

    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;

    location / {
       include uwsgi_params;
       uwsgi_pass unix:///tmp/uwsgi.sock;
       uwsgi_param SCRIPT_NAME '';
       #uwsgi_pass      127.0.0.1:3031;
       #uwsgi_param     UWSGI_SCHEME $scheme;
       #uwsgi_param     SERVER_SOFTWARE    nginx/$nginx_version;
    }
}

You will notice that I have left in a few possible parameters that are either alternatives like using a TCP port instead of a UNIX socket, or else seem to have very little effect as far as I can tell. I’d be interested if anyone could comment on refinements to these settings. I don’t know whether putting UNIX sockets in /tmp is a good idea, since I have seen this habit criticised for various reasons, but it seems to work for me for the purpose at hand.

In production_uwsgi.ini (or whatever you call it), I changed some lines as follows, but I am not clear whether this is strictly necessary or made any difference:

[server:main]
use = egg:Paste#http
#host = 0.0.0.0
#host = 127.0.0.1:3031
host = ckan.example.com
#port = 5000
## you will usually use 80 here
port = 9000

Further down, I tried all manner of different settings in a new [uwsgi] section; the commented-out lines below are settings that caused internal server errors, unless otherwise noted:

[uwsgi]
plugins = python
socket = /tmp/uwsgi.sock
## surprisingly, pythonpath and chdir made no difference, despite all advice
#pythonpath = /usr/lib/ckan/default/bin/
#pythonpath = /etc/ckan/default/
#chdir = /etc/ckan/default/
#chdir = /usr/lib/ckan/default/bin
## the error log always failed to find the application, 
## whatever went here (you are supposed to leave off .py):
#module = activate_this
## the next line only worked with wsgi-file and not module:
wsgi-file = /etc/ckan/default/nginx.wsgi
## the next line was critical to getting it working!
mountpath = /usr/lib/ckan/default/bin/
## all advice used the following, which was wrong:
#mountpath = /
master = true
processes = 4
harakiri = 60
## the following lines are for tweaking performance:
#reload-mercy = 8
#cpu-affinity = 1
#stats = /tmp/stats.socket
#max-requests = 2000
#limit-as = 512
#reload-on-as = 256
#reload-on-rss = 192
#no-orphans = true
#vacuum = true
## despite all advice, the next line always broke the application
#callable = app
## the next lines are necessary to make sure Nginx has permissions
chmod-socket = 666
uid = www-data
gid = www-data
## apparently, these fail: you should set up SSL in Nginx instead 
#https = =0,/etc/ssl/certs/ckan.example.crt,/etc/ssl/private/ckan.example.key,HIGH
#http-to = /tmp/uwsgi.sock
## you would usually use 80 and 443 here
#shared-socket = 0.0.0.0:9000
#shared-socket = 0.0.0.0:9001

You can choose what you like from the above, but I have added some useful in-line notes on my experiences. I can’t speak authoritatively on what works and what doesn’t beyond what I have tried, but I hope that somebody else may see this and be spared some trouble.


Leiden Leechbook and other glosses – Breton, Cornish or SW British?

There was a debate yesterday on the Celtic Linguistics group on Facebook about whether the ninth-century Leiden Leechbook, long considered Old Breton, can be considered Breton at all, rather than South West Brythonic or perhaps even only dialectically so. It all comes down to whether one is a lumper or a splitter, as I noted, given that we know too little to know just how dissimilar these languages were in the tenth century and previously (e.g. their morphology and syntax), apart from a few potentially minor phonetic differences that we could as easily ascribe to dialect as language differences. There is famously no scientifically agreed distinction between the two, so all such terms, at their boundaries at least, are a matter of academic convenience.

I also made the following minor emendation to the text in this Facebook comment:

I note that hobæbl is probably an error for lobæbl, which would then be another word with lob, lub (characteristic for this text) and mean Sambucus Ebulus (Dwarf Elder), fitting nicely with Stokes’ guess. Perhaps Falileyev & Owen already commented on this, as I don’t have access to a copy right now, but the text seems to have at least three different etymons related to the elder, e.g. hobæbl-lobæbl, scau, trom, in which perhaps the glossator had an interest…? The only thing that immediately strikes my eye as having a potentially Breton flavour is <e> (mostly) for inherited short /i/, but that could be just an orthographical matter and not necessarily diagnostic on its own so early anyway…(?)

Since it appears to be out of print, I have not been able to get hold of a copy of Falileyev A., Owen M. E. The Leiden Leechbook: A Study of the Earliest Neo-Brittonic Medical Compilation. Innsbruck: Institut für Sprachen und Literaturen der Universität; 2005 (ISBN: 3851242157).

Since I specialize in Cornish as well as in Brythonic historical linguistics, I would be fascinated to see if others have found themselves more able to ascribe the text specifically to Breton, as Whitley Stokes obviously was in his edition. However, I don’t think that many academics today would be prepared to do so purely on paleographical and orthographical grounds, as was formerly more acceptable, since that risks creating an artificial distinction between language polities that may or may not have existed at the date in question. Just as colloquial Hindustani is separated into colloquial Urdu and colloquial Hindi by digraphia and religio-political identities more than by diglossia, we should not jump to the conclusion that Breton and Cornish were separate languages purely on the basis that one is written in a style influenced more by Frankish scriptorial traditions and the other by Anglo-Saxon.

Equally, who is to say that the 9th century phrase ud rocashaas[1] should not be considered South West Brythonic (and thus equally “Breton” as “Cornish” or even “Dumnonian”, “Somerset Brythonic”, “Dorset Brythonic” and so on)? We don’t know whether or not the glossator had even been to what was later called Cornwall. Given that much more of the South West of Britain was probably speaking Brythonic at this date than merely Cornwall, is it not anachronistic, even if we separate Breton from this language, to call it “Old Cornish”? I would go so far as to say that could even hold true for the 12th-century Vocabularium Cornicum,[2] especially since one of the main diagnostic features of Cornish at that date, assibilation, is a phonetically trivial feature that could have been merely dialectical. One might reasonably compare /kw/ > /p/ in certain varieties of Celtic, which is no longer seen as a diagnostic marker of linguistic relatedness as it was in former scholarship.

This all goes to show that language classification is extremely challenging at the margins of knowledge, and that ultimately such convenience boundaries may come down to perspective in the absence of any better morphological or syntactic data.

[1] Sims-Williams, Patrick ‘A New Brittonic Gloss on Boethius: ud rocashaas’, Cambrian Medieval Celtic Studies 50 (Geurey 2005), 77-86.
[2] Mills, Jon, The Vocabularium Cornicum: a Cornish vocabulary? http://ora.ox.ac.uk/objects/uuid%3A479f80db-d8f3-4a5e-ae64-06f8cf9b65d1
