Welcome to my blog.
Have a look at the most recent posts below, or browse the tag cloud on the right. An archive of all posts is also available.
I'm going to upgrade this site to use HTTPS, HSTS and forward secrecy this year in order to help Reset the Net. They might get a bit further if they didn't insist on the URL of a tweet before I can submit this blog post. I don't use Twitter, as I prefer not to put everything in the hands of a giant American corporation.
So I've been working on getting my PGP key better connected into the web of trust. I've been to a couple of key signing parties and got my key signed by CAcert and the PGP Global Directory, all of which has made my key fairly well connected.
However, this only underscores the fundamental problem with OpenPGP: relatively few people use it, and only a fraction of them are connected into the strong set. This is in part a bootstrapping problem. With the web of trust connecting so few people it is hard to find someone to sign your key, and key signing parties are a fair amount of work to organise.
So my idea to help OpenPGP users connect: a mobile phone app that tells you when you are close to a fellow user with whom you have not exchanged signatures.
- Authentication either with the key or (for those who don't want to keep their key on their phone) by a signed token.
- User determines required proximity before detection occurs.
- Variable levels of visibility: Invisible, Headcount only, Contact details, Location.
- Ability to ignore certain users.
- Encrypted IM if you have your key.
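The signed-token option above could work something like the sketch below. For illustration only: an HMAC over a shared secret stands in for a real OpenPGP detached signature, and the function names and token format are hypothetical.

```python
import hmac
import hashlib
import time

def make_token(fingerprint, secret, now=None):
    """Issue a short-lived token binding a key fingerprint to a timestamp.

    In the real app this would be a detached OpenPGP signature made on a
    trusted machine; the HMAC here is just a stand-in for the sketch.
    """
    ts = str(int(now if now is not None else time.time()))
    payload = "{}:{}".format(fingerprint, ts).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return "{}:{}:{}".format(fingerprint, ts, sig)

def verify_token(token, secret, max_age=3600, now=None):
    """Check the signature and reject tokens older than max_age seconds."""
    fingerprint, ts, sig = token.rsplit(":", 2)
    payload = "{}:{}".format(fingerprint, ts).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    current = now if now is not None else time.time()
    return hmac.compare_digest(sig, expected) and current - int(ts) <= max_age
```

The point of the expiry check is that a stolen phone only leaks a token that soon becomes useless, not the key itself.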
For the past couple of days I haven't been able to access Goodreads. I get a response page that reads:
Request forbidden by administrative rules.
Using Google I could find no evidence that Goodreads was down, and Is It Down Right Now claims it is up and has been so for the last week. A little poking around and I found I couldn't access Goodreads directly over my normal internet connection or via Tor, but could access it just fine using my phone as a mobile hotspot.
As the IP I normally browse from also functions as a restricted Tor exit node, I conclude that Goodreads has started blocking Tor exit nodes. This is rather tricky to Google due to frequent references to Tor books and Goodreads together on the internet. Oddly enough, Goodreads' owner Amazon doesn't block me, so I guess they only object to Tor when there isn't any money in the offing.
As everyone knows by now, Google Reader will be shutting down on July 1st. This has caused me to actually start working on my long-planned switch to a self-hosted solution. Looking at what I actually use Google Reader for, it looks like I really need multiple readers. I've already switched my audio podcast consumption to a dedicated podcatcher program on my mobile phone. Unfortunately getting enough content for my walks home will exceed the "fair use" limits on my "unlimited" plan, so I'll have to download it in advance via wi-fi.
For webcomics, news and people I follow regularly, a River of News style aggregator like Planet looks to be what I need.
However, there are still some feeds for which I would prefer the mailbox style of reading provided by Reader. Unfortunately most of the options here seem to be either designed for massive hosting sites or written in PHP. While I'm sure it is possible to write secure PHP, it doesn't seem to be the norm.
I'm also looking for something that can split link posts into multiple entries and ideally merge multiple links to the same article.
The internet derives its strength and flexibility from its design as a decentralised system with the bulk of the intelligence at the edges rather than in the network "core". Unfortunately it is still too centralised in many respects. Much of this centralisation stems from early technological constraints that either no longer apply or will shortly cease to apply. The early internet required central management because it relied on a protocol with a relatively small (32-bit) address space and routers that operated under severe memory constraints.
We can reasonably assume that the number of independent networks will be of roughly the same order of magnitude as the number of people on the planet, i.e. a few billion. Since modern computers come with several gigabytes of RAM, we can work on the assumption that storing the routing table is now trivial. Likewise, network link speeds are increasing, so transmitting the table should not be prohibitive.
What might be expensive is the need to look routes up quickly. This might require very fast RAM on core routers and storing the entire routing table therein would be prohibitive. This could be avoided by making use of a source routing protocol like MPLS to enable the workload to move to the network edge.
Given the above, routing tables no longer need to be compact. We should easily be able to afford one table entry per network, which means we no longer need central management to ensure compact allocations. I could be wrong, but if so I suspect I'm only wrong by a few years.
Although allocation compactness is no longer a concern, we still can't allocate at random: even with IPv6 there might be accidental collisions. However, we don't need an authority to prevent this, just an agreed standard. One mechanism would be to assign each router a network address based on its physical location on the surface of the earth. One could use any map projection that produces a roughly square map without distorting shape or area too badly, and simply take the router's Cartesian co-ordinates to make up the network address. At a resolution of a square metre this would take up about 50 bits, comfortably within the 64 bits reserved for the network. By interleaving the bits from the X and Y co-ordinates one might even be able to shrink the routing table back down again.
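The bit-interleaving could be sketched like this (25 bits per axis is an illustrative choice; the exact width depends on the projection chosen). Because nearby routers share high-order coordinate bits, they also share a long address prefix, which is what would let the routing table aggregate:

```python
def interleave(x, y, bits=25):
    """Interleave the low `bits` bits of x and y (a Morton / Z-order code).

    x occupies the even bit positions and y the odd ones, so routers
    that are close on the map tend to share a long address prefix.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code
```

For example, a router at metre-grid position (3, 0) would get the network code 0b101.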
Of course that doesn't prevent hijacking an IP address as there is no central registry of who legitimately controls which address. If one is prepared to throw out IPv6 compatibility and increase the address space then one could just use a hash of the router's public key to identify the network.
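The hash-based alternative is simple to sketch (SHA-256 truncated to an illustrative 64 bits; the width would really depend on the enlarged address format). The identifier is self-certifying: anyone can check it by re-hashing the public key, so no registry is needed:

```python
import hashlib

def network_id(public_key, bits=64):
    """Derive a self-certifying network identifier from a router's
    public key by truncating a SHA-256 hash of it."""
    digest = hashlib.sha256(public_key).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)
```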
Unfortunately Zooko's Triangle causes some problems when trying to decentralise human meaningful names so I'll leave those to a later post.
The ongoing copyright wars between the various media industry associations and file sharers have a tendency to create collateral damage in the form of laws that severely restrict the internet. In my arrogant opinion it looks likely that in a technical arms race the file sharers will win. This is problematic as it means that copyright supporters can only win by legally hobbling the internet. To avoid this it seems to me two things need to happen.
The first thing that needs to happen is acceleration of the technology arms race. If we have a maximally effective file sharing technology soon then the only viable counter will be laws so draconian that they will be clearly unacceptable. If the technology continues to get better in small steps then the laws will continue to get worse in small steps that may be individually tolerable but collectively end up returning us to the dark ages of centrally managed media.
The second thing that needs to happen is elimination of the perceived need for such draconian laws. The fundamental problem copyright addresses is the non-excludability of the fixed costs of cultural production. While there is no reason to support those whose stake in the copyright wars is the protection of an inefficient distribution system, we still want as many performers and demiurges as possible to be able to obtain their livelihood by following their muse. Two factors suggest that the threat of legal force is unnecessary to ensure this. Firstly, file sharers buy more, which suggests we don't need to worry overly about freeloaders. Secondly, the moral repugnance most people feel for plagiarists will likely ensure that anyone passing off someone else's work as their own is quickly detected and boycotted. The problem therefore is not how to make people pay but how to let them.
Which brings me to my idea for how to let people pay: a web site where authors and performers, or those with their approval, can upload their works. Users of the site can freely download the works and award them stars, which are used as input to a collaborative filtering algorithm that helps find other works they might like. The trick is that users can only award a limited number of stars without paying, and once they have awarded sufficient stars the artists receive a percentage of the money paid for those stars. As users award more stars they get better recommendations, and the stars change their nominal material to indicate the percentage going to the artists. The number of free stars would have to be chosen carefully to ensure that useful recommendations arrive before they run out, while still leaving room for improvement. Given that stars would have to be cheap enough that people would not feel inhibited about awarding them, it is unlikely that this could support a full-length movie or book, but it might be possible to make a living producing short stories, films and music this way.
So, having stated my intention to post here once a week, I should set out how I'm going to achieve that when my average posting rate in the past has been closer to biannual. I've experimented on and off with incorporating bits of Getting Things Done into my life. So far this has mostly been about maintaining my list of next actions in a Tracks instance. I find that as long as I stick to that it helps me "get things done". Therefore my initial plan for organising this is simply to add a project for my blog to Tracks and add the subjects I want to write about as actions.
I don't find Tracks to be a perfect solution, though, which brings me to my first idea: a better time management tool. I've encountered some annoying niggles with Tracks, like support for exporting data but not importing it, and broken sync with Shuffle, but there are a lot of time management tools out there, so I had better explain why I think we need another one.
The main strength of Tracks and similar time management tools is that they are personal, but that is also a weakness. Tracks organises my todo list, but it lacks real support for sharing the work with others. I think the ideal time management tool, in addition to helping me break my work down into manageable bites, would support sharing work with others. In consequence it would be able to do some of the same things as Trac or Redmine, but organised from the perspective of the individual and their goals rather than a free-floating 'project'. Our tools shape how we think, and using collaborative time management as a means of organising our projects helps us focus on what is important in all this: people and their goals. This would hopefully support the same sort of collaborative non-hierarchical organisations as Loomio.
From a technical perspective, I think the main thing we need for a collaborative time management tool is an open standard protocol for exchanging and sharing tasks, and an associated data format, so people can pick their own tool. I hope it would be possible to use CalDAV and iCalendar for this, but it may not be. My ideal tool would also talk nicely to existing calendaring and project management tools. From a user interface perspective, I'd like the tool to prompt me to make time estimates and to break down and share large tasks.
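As a sketch of what a shared task might look like on the wire, here is a minimal iCalendar VTODO (RFC 5545) rendered from Python. The addresses and property choices are illustrative, and whether existing CalDAV tools would accept delegation framed this way is exactly the open question above. ORGANIZER and ATTENDEE are the standard properties that carry the sharing:

```python
def make_vtodo(uid, summary, organizer, attendee, due):
    """Render a minimal iCalendar VTODO in which ORGANIZER/ATTENDEE
    model delegating a task to a collaborator."""
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//task-sharing-sketch//EN",
        "BEGIN:VTODO",
        "UID:" + uid,
        "SUMMARY:" + summary,
        "ORGANIZER:mailto:" + organizer,
        "ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:" + attendee,
        "DUE:" + due,
        "END:VTODO",
        "END:VCALENDAR",
    ]
    # iCalendar mandates CRLF line endings.
    return "\r\n".join(lines) + "\r\n"
```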
Over the years I've had a lot of ideas for things that could be done with computers, the web or the internet. I've done something with essentially none of them. Other people have had the same or similar ideas later and done something with them. Sometimes these people have made a lot of money. So the idea behind the blog was that I would document my ideas on the blog and I could then look back and see how much money I hadn't made. Of course a lot of people had similar ideas, acted on them and still made nothing. The internet is something of a winner takes all environment. The blog title is meant to imply that if I had a dollar for every good idea I'd had over the years I would have quite a lot of money.
Obviously I haven't used this blog much for its original purpose or indeed any other. I'm going to try to revive it by posting something to it at least once a week and I'll probably start that off by returning it to its original purpose of recording ideas I've had for improving the web and the world.
I should add that I live in Great Britain, so perhaps I should have titled this blog "If I had a pound...". On the other hand, I also favour the dissolution of all government-issued currency in favour of something like http://www.ripple.com/, possibly as an intermediate step towards anarcho-communism.
I was recompiling the kernel on the GuruPlug I use as a router from Squeeze's linux-source package, in order to add support for br2684.ko, when I ran into an error very similar to this one. The build process complained "/bin/sh: arm-linux-gnueabi-ld: not found". It appears the build process thinks the host architecture is arm while the target is the entirely distinct armel, and that it therefore needs to cross-compile. I got past this by specifying --arch armel on the make-kpkg command line, since the native binutils and gcc produce armel code.
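For reference, the workaround looks something like this on the GuruPlug itself (the revision string and target are illustrative; the essential part is the explicit --arch):

```shell
# Native toolchain already emits armel code, so tell make-kpkg the
# architecture explicitly to avoid the bogus cross-compiler lookup.
make-kpkg --arch armel --revision 1.0.custom kernel_image
```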
This is another test of pushing changes from my laptop via git. Hopefully this will show up on my website.
This blog is powered by ikiwiki.