Welcome to my blog.

Have a look at the most recent posts below, or browse the tag cloud on the right. An archive of all posts is also available.


I fairly regularly buy ebooks from Baen Books and Weightless Books, who both send me the books as attachments to an e-mail. I've automated the processing of these e-mails so that books sent this way are automatically incorporated into my Calibre library. I also buy bundles of ebooks from Storybundle. Unfortunately Storybundle will only send books to @kindle.com addresses.

While Storybundle don't send me the ebooks directly, they do send an e-mail from which the ebooks can be downloaded. While this normally involves clicking on buttons and such, it is possible to tweak the URL so that the books in question can be downloaded directly. The first thing I do is set up a script on my Bitfolk VM (where this blog is hosted) which takes the URL Storybundle provided and uses it to download the bundle and e-mail it to me:

#!/bin/bash
PATH=/usr/bin:/bin
export PATH
URL="$1"
MAILTO="$2"
# Name the zip after the last component of the download URL
ZIP="$(echo "${URL}" | sed -e 's:^.*/\([^/]*$\):\1.zip:')"
MYDIR=$(mktemp -d)
# Retry every minute until the bundle downloads successfully
until curl -s -L --data-urlencode "download=DOWNLOAD ALL" "${URL}/download_all" >"${MYDIR}/${ZIP}"
do
   sleep 60
done
# Mail the bundle as a zip attachment, then clean up
mpack -a -s "Your Storybundle ${ZIP}" -c "application/zip" "${MYDIR}/${ZIP}" "${MAILTO}"
rm "${MYDIR}/${ZIP}"
rmdir "${MYDIR}"
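
The script is installed on the VM as storybundle (the name used by the uux job further down) and is called with the download URL and a destination address. A hypothetical invocation, where both the URL shape and the address are placeholders rather than the real Storybundle link format:

storybundle 'https://storybundle.com/downloads/XXXXXXXX' me@example.org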

For this to be useful I need to extract the URL from the Storybundle e-mail. This is fairly easy to do as the message Storybundle sends has a standard format and I store my mail in MH folders, one message per file. I added the code for this to the script I use for processing ebooks I've received by mail:

#!/bin/bash
mkdir -p ~/books/messages
shopt -s nullglob
# E-books that arrive as attachments: unpack them and add them to Calibre
for MSG in $(find ~/Mail/entz/books/delivered -maxdepth 1 -links 1 -regex '.*/[1-9][0-9]*$')
do
   # Name each message after its md5sum; the hard link doubles as an
   # "already processed" marker (find skips messages with more than one link)
   MD5=$(md5sum ${MSG}|awk '{print $1}')
   if ln ${MSG} ~/books/messages/${MD5} >/dev/null 2>&1
   then
      mkdir -p ~/books/import/${MD5}
      cd ~/books/import/${MD5}
      munpack -q <~/books/messages/${MD5} >/dev/null 2>&1
      echo *.zip |xargs -n 1 7z e
      calibredb add --ignore ".*" --ignore "*.zip" . >/dev/null
   fi
done
# Storybundle e-mails: extract the download link and queue the download
# script for execution on the VM via UUCP (MAILTO is expected in the environment)
for MSG in $(find ~/Mail/entz/books/storybundle -maxdepth 1 -links 1 -regex '.*/[1-9][0-9]*$')
do
   MD5=$(md5sum ${MSG}|awk '{print $1}')
   if ln ${MSG} ~/books/messages/${MD5} >/dev/null 2>&1
   then
      uux 'vicar!storybundle' "$( <${MSG} awk "/Here's your unique download link:/{print \$6}"|sed -e 's:\.$::')" "${MAILTO}"
      sleep 60
   fi
done

I don't read my e-mail on the Bitfolk VM, which is why the script queues the download command for later execution on the VM via UUCP.
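
For the uux job to be accepted, the VM's UUCP configuration has to allow the calling machine to run the download script. A minimal sketch of what such a sys entry might look like (the system name mailhost and the install path are illustrative, not copied from the real configuration):

system mailhost
commands storybundle
command-path /usr/local/bin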

Posted Fri Apr 21 15:59:42 2017

Keeping my OpenPGP key safe

In order to keep my OpenPGP key reasonably secure I do not keep the secret parts on internet-connected computers. I keep my signing, encryption and authentication keys on my FSFE membership card, which doubles up as an OpenPGP card. The certifying key is kept, encrypted, on a small non-networked computer (and backed up elsewhere).

Signing OpenPGP keys

To simplify signing other people's keys I use caff from the Debian signing-party package. The way caff works is to sign each uid on a key individually and send each signed copy, encrypted to the key it has just signed, to the e-mail address embedded in that uid. If the recipient can decrypt the message they demonstrate control of the key, and thereby verify, to some degree, that the e-mail address and key are controlled by the same person.
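
As a concrete (and hypothetical) example, after verifying a fingerprint in person I would run something along these lines, with caff reading my own details from ~/.caffrc (both the key id and the fingerprint below are placeholders):

caff -u 0xMYCERTIFYINGKEY 0123456789ABCDEF0123456789ABCDEF01234567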

Combining the two

Modern MTAs are designed to work in an Internet environment and by default send messages via SMTP. On my disconnected key-signing computer there is no Internet and therefore no SMTP. On the face of it this presents a problem for caff.

The retro-computing solution

My first proper job was with a small, and now defunct, ISP that, in addition to the then-usual dial-up SLIP/PPP Internet connections, offered a BBS and UUCP: a system for copying files and executing commands remotely that works in batch mode. My operating system of choice still supports it. UUCP normally works over serial ports, phone lines, TCP or even ssh. However, as I want an air gap between my internet-connected computer and my key-signing machine, none of these are suitable. My solution, then, is to run UUCP over Sneakernet.

Implementation

My solution uses the usbmount, uucp and openssh packages and a thumb drive. On the thumb drive I create spool, log and pub directories to serve as the data directories for a virtual UUCP system, and chown them appropriately. Fortunately UUCP is an old enough part of Unix that it has a fixed uid and gid assigned to it, meaning they are the same across all my systems.
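
The preparation amounts to something like this (assuming the drive is mounted at /mnt, which is a placeholder):

# create the data directories for the virtual UUCP system and hand them to uucp
mkdir /mnt/spool /mnt/log /mnt/pub
chown uucp:uucp /mnt/spool /mnt/log /mnt/pub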

I created three directories on each node to support the sneakernet: /etc/opt/uusbcp, /var/opt/uusbcp and /opt/uusbcp/bin. In /etc/opt/uusbcp are most of the usual uucp files plus a file uuid that holds the UUID of the thumb drive's filesystem. The config file contains a few unusual entries that ensure the virtual UUCP system can be distinguished from the host machine and stores its data on the thumb drive when it is plugged in:

nodename        epistle
spool           /var/opt/uusbcp/spool/
pubdir          /var/opt/uusbcp/pub/
logfile         /var/opt/uusbcp/log/Log
statfile        /var/opt/uusbcp/log/Stats
debugfile       /var/opt/uusbcp/log/Debug

Rather than a sys file I created a sys.head that contains only defaults and a pointer to a special port for contacting the system into which the drive is plugged:

chat ""
port TCP
command-path /bin /usr/bin /usr/sbin
commands true
callback true
forward ANY
remote-send ~
remote-receive ~
local-send ~ 
local-receive ~


system dumain
port UUSBCP
time any

The port file defines that port:

port TCP
type tcp

port UUSBCP
type pipe
command /usr/bin/ssh -C -x -o batchmode=yes uucp@localhost

I use the uucp user's ~/.ssh configuration to force running uucico with the appropriate user id for the thumb drive. Password logins are disabled.
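
Roughly, the forced-command entry in the uucp user's ~/.ssh/authorized_keys looks something like this (illustrative only; in particular the exact uucico options may differ):

command="/usr/lib/uucp/uucico -l -u epistle",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... uusbcp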

I created a script /opt/uusbcp/bin/docall to be run when the thumb drive is plugged in to initiate the UUCP connection to the local machine:

#!/bin/bash
set -e
# $1 is the local UUCP node name, $2 the thumb drive's mount point.
# Bind the drive's UUCP configuration and data over the standard locations.
mount --make-private --bind /etc/opt/uusbcp /etc/uucp
mount --make-private --bind "$2" /var/opt/uusbcp
# Process any execution requests already queued on the drive, then wait for
# everything to let go of it
uuxqt
while fuser -m /var/opt/uusbcp; do sleep 1; done
# Call the local machine as the virtual system and exchange queued work
su - uucp -c "/usr/lib/uucp/uucico -z -x 2 -D -q -S $1"
# Process whatever arrived during the call before the drive is unmounted
uuxqt
while fuser -m /var/opt/uusbcp; do sleep 1; done

On my signing machine this is a little simpler:

#!/bin/bash
set -e
mount --make-private --bind /etc/opt/uusbcp /etc/uucp
mount --make-private --bind "$2" /var/opt/uusbcp
su - uucp -c "/usr/lib/uucp/uucico -z -x 2 -D -q -S $1"

Finally to ensure my script is called when needed I add a script under /etc/usbmount/mount.d/99_uusbcp:

#!/bin/bash
# Run by usbmount after it mounts a filesystem; usbmount passes the mount
# point in UM_MOUNTPOINT.
UUSBETC=/etc/opt/uusbcp/
UUSBSYS="${UUSBETC}/sys"
UUSBCP=/var/opt/uusbcp/
UUNAME="$(uuname -l)"
UUID=$(cat ${UUSBETC}/uuid)
SNEAKERNET=$(findmnt -n -o TARGET UUID="${UUID}")
# Only act if the newly mounted filesystem is the sneakernet thumb drive
if [ "${SNEAKERNET}" = "${UM_MOUNTPOINT}" -a -n "${SNEAKERNET}" ]
then
   set -e
   # Build the drive's sys file: sys.head followed by an entry for every
   # system with a directory in the drive's spool, other than this host
   ls -1 "${SNEAKERNET}/spool" |grep '^[a-z][a-z0-9-]*[a-z0-9]$'|grep -v -- "^${UUNAME}"'$'|sed -e 's/^/system /' -e 's/$/\nforward ANY/'|cat "${UUSBSYS}.head" - >"${UUSBSYS}.new"
   mv "${UUSBSYS}.new" "${UUSBSYS}"
   # Run the UUCP exchange in a private mount namespace so docall's bind
   # mounts do not leak into the rest of the system
   mount -o remount --make-rprivate /
   unshare -m /opt/uusbcp/bin/docall "${UUNAME}" "${SNEAKERNET}"
   umount "${SNEAKERNET}"
   e2fsck /dev/disk/by-uuid/${UUID}
fi

The long line beginning with ls -1 generates the sys file used for the thumbdrive based on the sys.head file and the contents of the spool directory on the thumbdrive.
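
For example, when the drive is plugged into dumain and its spool holds a directory for the key-signing machine (call it signer, a made-up name for the purposes of this example), the lines appended after dumain's sys.head are just:

system signer
forward ANY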

Those of you familiar with UUCP may be wondering why I used unshare and private mounts to mount /etc/opt/uusbcp over /etc/uucp rather than just using the -I option to uuxqt and uucico to specify an alternate config file. The reason is that uuxqt does not pass this option on to uucp when executing multi-hop copies.

On each host in the sneakernet there is an entry in the sys file for the thumb drive UUCP host (epistle) with appropriate permissions (essentially only copying into uucppublic for the key-signing machine).
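
As an illustration, on the key-signing machine the entry might look something like this (a sketch only, not reproduced from the real configuration):

system epistle
commands true
remote-send ~
remote-receive ~

On the desktop the entry additionally permits the rsmtp command, so that mail forwarded from the signing machine can be delivered.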

Email

My signing machine's MTA sends mail by piping it into a double hop uux command that will execute rsmtp on my desktop machine the next time I move the thumb drive from the signing machine to the desktop.

uux - epistle!dumain!rsmtp

Apart from the double hop this is fairly standard for sending mail via UUCP to a smarthost.
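
For the curious, on a Debian system with Exim as the MTA (an assumption, as are the router and transport names below) the plumbing would look roughly like this:

# router: send all mail to the desktop via UUCP
sneakernet:
  driver = manualroute
  transport = uux_rsmtp
  route_list = * dumain

# transport: batch the message as SMTP and pipe it into uux
uux_rsmtp:
  driver = pipe
  command = /usr/bin/uux - -r epistle!dumain!rsmtp
  use_bsmtp
  user = uucp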

Getting keys onto the signing host

The signing machine is a pure satellite system from an e-mail perspective so I can't use e-mail to get keys there. However I can use a double hop uucp command to copy a keyring from my desktop to the /var/spool/uucppublic directory on the signing machine, from where I can pick the keys up and sign them if appropriate.
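
Concretely, something along these lines from the desktop, where signer again stands in for the signing machine's node name and ~ is the UUCP shorthand for the public directory:

uucp keys-to-sign.pgp 'epistle!signer!~/keys-to-sign.pgp'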

Future Improvements

I may look into making one uucico call the other directly rather than using ssh, and possibly mounting the regular filesystems read-only in the thumb drive environment to improve the protection against hostile thumb drives. At the moment protection relies on checking the UUID and on restricted command execution via UUCP.

Posted Sat Feb 25 19:02:17 2017

I'm going to upgrade this site to use HTTPS, HSTS and forward secrecy this year in order to help Reset the Net. They might get a bit further if they didn't insist on the URL of a tweet before I can submit this blog post. I don't use Twitter as I prefer not to put everything in the hands of a giant American corporation.

Posted Thu Jun 5 18:32:36 2014

So I've been working on getting my PGP key better connected into the web of trust. I've been to a couple of key signing parties and got my key signed by CAcert and the PGP Global Directory, all of which has made my key fairly well connected.

However this only underscores the fundamental problem with OpenPGP: relatively few people use it and only a fraction of them are connected into the strong set. This is in part a bootstrapping problem. With the web of trust connecting so few people it is hard to find someone to sign your key and key signing parties are a fair amount of work to organize.

So my idea to help OpenPGP users connect: a mobile phone app that tells you when you are close to a fellow user with whom you have not exchanged signatures.

Features

  1. Authentication either with the key or (for those who don't want to keep their key on their phone) by a signed token.
  2. User determines required proximity before detection occurs.
  3. Variable levels of visibility: Invisible, Headcount only, Contact details, Location.
  4. Ability to ignore certain users.
  5. Encrypted IM if you have your key.
Posted Mon Apr 21 20:25:47 2014

For the past couple of days I haven't been able to access Goodreads. I get a response page that reads:

403 Forbidden

Request forbidden by administrative rules.

Using Google I could find no evidence that Goodreads was down, and Is it down right now claims it is up and has been so for the last week. After a little poking around I found that I couldn't access Goodreads directly over my normal internet connection or via Tor, but could access it just fine using my phone as a mobile hotspot.

As the IP I normally browse from also functions as a restricted Tor exit node I conclude that Goodreads has started blocking Tor exit nodes. This is rather tricky to Google due to frequent references to Tor Books and Goodreads together on the internet. Oddly enough, Goodreads' owner Amazon don't block me, so I guess they only object to Tor when there isn't any money in the offing.

Posted Fri Apr 18 18:45:13 2014

As everyone knows by now, Google Reader will be shutting down on July 1st. This has caused me to actually start working on my long-planned switch to a self-hosted solution. Looking at what I actually use Google Reader for, it looks like I really need multiple readers. I've already switched my audio podcast consumption to a dedicated podcatcher program on my mobile phone. Unfortunately getting enough content for my walks home will exceed the "fair use" limits on my "unlimited" plan so I'll have to download it in advance via wi-fi.

For webcomics, news and people I follow regularly, a River of News style aggregator like Planet looks to be what I need.

However there are still some feeds for which I would prefer the mailbox style of news provided by Reader. Unfortunately most of the options here seem to be either designed for massive hosting sites or written in PHP. While I'm sure it is possible to write secure PHP it doesn't seem to be the norm.

I'm also looking for something that can split link posts into multiple entries and ideally merge multiple links to the same article.

Posted Sun Apr 14 15:47:12 2013

The internet derives its strength and flexibility from its design as a decentralised system with the bulk of the intelligence at the edges rather than in the network "core". Unfortunately it is still too centralised in many respects. Much of this centralisation stems from early technological constraints that either no longer apply or will shortly cease to apply. The early internet required central management because it relied on a protocol with a relatively small (32 bit) address space and routers that operated under severe memory constraints.

We can reasonably assume that the number of independent networks will be of roughly the same order of magnitude as the number of people on the planet, i.e. a few billion. Since modern computers come with several gigabytes of RAM we can work on the assumption that storing the routing table is now trivial. Likewise network link speeds are increasing, so transmitting the table should not be prohibitive.

What might be expensive is the need to look routes up quickly. This might require very fast RAM on core routers, and storing the entire routing table in such RAM would be prohibitive. This could be avoided by making use of a source routing protocol like MPLS to move the workload to the network edge.

Given the above we no longer need routing tables to be compact: we should easily be able to afford one table entry per network. This means we no longer need central management to ensure compact allocations. I could be wrong, but if so I suspect I'm only wrong by a few years.

Although allocation compactness is no longer a concern, we still can't allocate at random: with IPv6 there might be accidental collisions. However we don't need an authority to prevent this, just an agreed standard. One mechanism would be to assign each router a network address based on its physical location on the surface of the earth. One could use any map projection that produces a roughly square map without distorting shape or area too badly and simply take the router's cartesian co-ordinates to make up the network address. With a resolution of a square meter this would take up about 50 bits (the Earth's surface is roughly 5×10^14 m², and log2 of that is just under 49), comfortably within the 64 bits reserved for the network. By interleaving the bits from the X and Y co-ordinates one might even be able to shrink the routing table back down again.
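
A minimal sketch of the bit-interleaving idea in shell, assuming the map projection has already reduced a router's position to 25-bit X and Y grid co-ordinates:

#!/bin/bash
# Interleave two 25-bit co-ordinates (a Morton or Z-order code) into a 50-bit
# value for use as the network part of an address.
X="$1"
Y="$2"
ADDR=0
for ((i = 0; i < 25; i++))
do
   ADDR=$(( ADDR | (((X >> i) & 1) << (2 * i)) | (((Y >> i) & 1) << (2 * i + 1)) ))
done
printf 'network bits: %013x\n' "${ADDR}"

Because nearby routers then share long address prefixes, blocks of adjacent locations could be aggregated into single routing table entries.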

Of course that doesn't prevent hijacking an IP address as there is no central registry of who legitimately controls which address. If one is prepared to throw out IPv6 compatibility and increase the address space then one could just use a hash of the router's public key to identify the network.

Unfortunately Zooko's Triangle causes some problems when trying to decentralise human-meaningful names, so I'll leave those to a later post.

Posted Mon Apr 1 16:16:17 2013

The ongoing copyright wars between the various media industry associations and file sharers have a tendency to create collateral damage in the form of laws that severely restrict the internet. In my arrogant opinion it looks likely that in a technical arms race the file sharers will win. This is problematic as it means that copyright supporters can only win by legally hobbling the internet. To avoid this it seems to me two things need to happen.

The first thing that needs to happen is acceleration of the technology arms race. If we have a maximally effective file sharing technology soon then the only viable counter will be laws so draconian that they will be clearly unacceptable. If the technology continues to get better in small steps then the laws will continue to get worse in small steps that may be individually tolerable but collectively end up returning us to the dark ages of centrally managed media.

The second thing that needs to happen is elimination of the perceived need for such draconian laws. The fundamental problem copyright addresses is the non-excludability of the fixed costs of cultural production. While there is no reason to support those whose stake in the copyright wars is the protection of an inefficient distribution system, we still want as many performers and demiurges as possible to be able to obtain their livelihood by following their muse. Two factors suggest that the threat of legal force is unnecessary to ensure this. Firstly, there is the fact that file sharers buy more, which suggests we don't need to worry overly about freeloaders. Secondly, the moral repugnance most people feel for plagiarists will likely ensure that anyone passing off someone else's work as their own will be quickly detected and boycotted. The problem therefore is not how to make people pay but how to let them.

Which brings me to my idea for how to let people pay: a web site where authors and performers, or those with their approval, can upload their works. The users of the site can freely download the works and award them stars, which are used as input to a collaborative filtering algorithm that helps find other works they might like. The trick is that they can only award a limited number of stars without paying, and once they have awarded sufficient stars the artists receive a percentage of the money paid for those stars. As the users award more stars they get better recommendations, and the stars change their nominal material, indicating the percentage going to the artists. The number of free stars would have to be chosen carefully to ensure that useful recommendations are received before they run out while still leaving room for improvement. Given that stars would have to be cheap enough that people would not feel inhibited about awarding them, it is unlikely that this could support a full-length movie or book, but it might be possible to make a living from producing short stories, films and music this way.

Posted Sun Mar 24 14:34:37 2013

So having stated my intention to post here once a week I should set out how I'm going to achieve that when my average posting rate in the past has been closer to biannual. I've experimented on and off with incorporating bits of Getting Things Done into my life. So far this has been mostly about maintaining my list of next actions in a Tracks instance. I find that as long as I stick to that it helps me "get things done". Therefore my initial plan for organising this is simply to add a project for my blog to Tracks and add the subjects I want to write about as actions.

I don't find Tracks to be a perfect solution though, which brings me to my first idea: a better time management tool. I've encountered some annoying niggles with Tracks, like support for exporting data but not importing it, and broken sync with Shuffle, but there are a lot of tools for time management out there so I had better explain why I think we need another one.

The main strength of Tracks and similar time management tools is that it is personal, but that is also a weakness. It organises my todo list but at the same time it lacks real support for sharing the work with others. I think the ideal time management tool, in addition to helping me break down my work into manageable bites, would support sharing work with others. In consequence it would be able to do some of the same things as Trac or Redmine but organised from the perspective of the individual and their goals rather than a free-floating 'project'. Our tools shape how we think, and using collaborative time management as a means of organising our projects helps us focus on what is important in all this: people and their goals. This would hopefully support the same sort of collaborative non-hierarchical organisations as Loomio.

From a technical perspective I think the main thing we need for a collaborative time management tool is an open standard protocol for exchanging and sharing tasks, and an associated data format, so people can pick their own tool. I hope it would be possible to use CalDAV and iCalendar for this but it may not be. My ideal tool would also talk nicely to existing calendaring and project management tools. From a user interface perspective I'd like the tool to prompt me to make time estimates and to break down and share large tasks.

Posted Sun Mar 17 17:10:34 2013

Over the years I've had a lot of ideas for things that could be done with computers, the web or the internet. I've done something with essentially none of them. Other people have had the same or similar ideas later and done something with them. Sometimes these people have made a lot of money. So the idea behind the blog was that I would document my ideas on the blog and I could then look back and see how much money I hadn't made. Of course a lot of people had similar ideas, acted on them and still made nothing. The internet is something of a winner takes all environment. The blog title is meant to imply that if I had a dollar for every good idea I'd had over the years I would have quite a lot of money.

Obviously I haven't used this blog much for its original purpose or indeed any other. I'm going to try to revive it by posting something to it at least once a week and I'll probably start that off by returning it to its original purpose of recording ideas I've had for improving the web and the world.

I should add that I live in Great Britain so perhaps I should have titled this blog "If I had a pound...". On the other hand I also favour the dissolution of all government-issued currency in favour of something like http://www.ripple.com/, possibly as an intermediate step towards anarcho-communism.

Posted Sun Mar 17 15:05:27 2013

This blog is powered by ikiwiki.