Tag: software

Goodbye Wordpress...

This blog has been running for just over a quarter of a century now, and for a significant part of that time, it has been powered by Wordpress (which I host on my own servers - I don't use wordpress.com).

I've long wanted to switch it to something else, and over the last two or three weeks I've been building a replacement based on Wagtail, which, for those who don't know, is a content management system written in Python and layered on top of the Django framework. It's a popular replacement for older PHP-based systems like Wordpress and Drupal.

This post is being published on the Wordpress site. All being well, it will be the last one before the switch, and the transition should be mostly invisible. But if you follow these posts using some other system and suddenly find, in a couple of weeks, that this is the last post you've received, please let me know!

Starting a website using Wagtail is not too hard, though it's very likely to involve some coding to get it the way you want it. But converting an existing site, which has a fair number of readers, 25 years of history, 3500 posts, various categories and tags, a custom theme, subscribers who follow it using RSS readers and others who get it in their email inboxes, and so on, is quite a different challenge. I also wanted to replicate the functionality of certain Wordpress plugins which don't exist for Wagtail, to transfer all the uploaded media, posts, comments and categories, and, most importantly, of course, to preserve the original URL structure so that all the historic posts will still be found in the same place!

All of this explains (a) why it's taken me so long to get around to this job, and (b) why I have become a convert to the use of AI for tasks like this. Because after I created the basic structure, almost everything else has been a joint effort between me and Claude Code.

I've been telling it about the components I need, it has been adding them to my framework, and I've been checking the code. There's nothing I couldn't have done myself, though it would almost certainly have taken a lot longer. But for me, the key benefit has been that small things which would be nice to have but tedious to code are now worth doing. Status-Q is now not just my own blog, it is my own software platform tailored to do the things I want it to do, without too much clutter.

I'll be writing more about this concept of 'personal software' soon. If you see any posts in the days and weeks after this one, you'll know it worked!

It's not too late to avoid paying for AI...

Back in January I wrote about how Microsoft had increased their Office subscription prices by a third, but you could still get it for the old price by saying that you wanted to cancel, and then selecting the 'Microsoft 365 Family Classic', which comes without all of the AI features that lead to the extra cost.

Well, our subscription just came up for renewal... and I found that they've now removed that option from the website. In fact, there's nothing on the website to suggest that writing a letter without the aid of AI is something you might want to do... or any acknowledgement that you might not want to pay for it.

Undeterred, though, I used the online chat system. It was AI, of course, but, to be fair, I was able to get through to a human pretty quickly. She had some standard auto-generated responses about all the wonderful things AI could do for me, and a set of questions she needed to ask me about why I didn't want AI to improve my productivity in my Office suite. I said, roughly:

  • (a) It costs money.
  • (b) I'm concerned about the environmental impact.
  • (c) I'm concerned about the privacy implications.
  • (d) I've used the tools, and know that the supposed productivity improvements are mostly a myth unless you're writing stuff that nobody would want to read... in which case, why bother?
  • (e) We went to school, so we already know how to write.

I could have added that:

  • (f) I almost never use Microsoft Office, so wouldn't look there for any of this stuff anyway, and
  • (g) Modern Microsoft apps are quite bloated enough without wanting to add anything more, and
  • (h) The only things I might want to use AI for I can get for free from chat.bing.com or chatgpt.com or aistudio.google.com or claude.ai, so I'd rather spend my 25 quid on fish and chips and beer at a nice waterside pub, thank you very much.

But even without those additions, in the end she admitted that she could actually renew my Microsoft 365 Family Classic subscription for the old price.

So it's still possible, if you can manage to talk to a human. But I wonder for how much longer...

The success of Django... and when the machines take over.

The Django web framework is now 20 years old. Within a few months of its launch, I discovered it, liked it, and we rather daringly decided to bet on it as the basis for the software of my new startup. (Here's my post from almost exactly 20 years ago, explaining the decision.)

For those not familiar with them, web frameworks like this give you a whole lot of functionality that you need when you want to use your favourite programming language to build web sites and web services. They help you receive HTTP requests, decide what to do based on the URLs, look things up in databases, produce web pages from templates, return the resulting pages in a timely fashion, and a whole lot more besides. You still have to write the code, but you get a lot of lego bricks of the right shape to make it very much easier, and there are a lot of documented conventions about how to go about it so you don't have to learn, the hard way, the lessons that lots of others had to learn in the past!
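As a toy illustration (this is not Django's actual code, just the general shape of the URL-routing part of the job), the idea looks something like this in plain Python:

```python
import re

# A toy URL dispatcher: map URL patterns to handler functions,
# extract named parameters from the URL, and return the rendered page.
ROUTES = [
    (re.compile(r"^/posts/(?P<year>\d{4})/$"),
     lambda year: f"<h1>Posts from {year}</h1>"),
    (re.compile(r"^/$"), lambda: "<h1>Home</h1>"),
]

def dispatch(path):
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler(**match.groupdict())
    return "<h1>404 Not Found</h1>"

print(dispatch("/posts/2005/"))   # <h1>Posts from 2005</h1>
```

A framework like Django gives you industrial-strength versions of each of these pieces, plus the database and templating layers on top.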

Anyway, I had made a similar lucky bet in 1991 when the first version of the Python programming language was released, and I loved it, and was using it just a few weeks later (and have been ever since).

Django is a web framework based on Python, and it has gone on to be a huge success partly because it uses Python; partly because of the great design and documentation built by its original creators; partly because of the early support it received from their employer, the Kansas newspaper Lawrence Journal-World, which had the foresight to release it as Open Source; and partly because of the non-profit Django Software Foundation which was later created to look after it.

Over the last two decades Django has gone on to power vast numbers of websites around the world, including some big names like Instagram. And I still enjoy using it after all that time, and have often earned my living from doing so, so my thanks go out to all who have contributed to making it the success story that it is!

Anyway, on a podcast this week of a 20th-birthday panel discussion with Django's creators, there was an amusing and poignant story from Adrian Holovaty, which explains the second part of the title of this post.

Adrian now runs a company called Soundslice (which also looks rather cool, BTW). And Soundslice recently had a problem: ChatGPT was asserting that their product had a feature which it didn't in fact have. (No surprises there!) They were getting lots of users signing up and then being disappointed. Adrian says:

"And it was happening, like, dozens of times per day. And so we had this inbound set of users who had a wrong expectation. So we ended up just writing the feature to appease the ChatGPT gods, which I think is the first time, at least to my knowledge, of product decisions being influenced by misinformation from LLMs."

Note this. Remember this day. It was quicker for them to implement the world as reported by ChatGPT than it was to fix the misinformation that ChatGPT was propagating.

Oh yes.

Are you unwittingly paying for AI?

I probably use a Microsoft Office product only about once or twice a year, since, for ages now, I've preferred Apple's Pages, Keynote and Numbers for normal day-to-day stuff. (I definitely recommend getting to grips with them if you're in the Apple world and aren't taking advantage of them yet.)

But Rose needs to use Word and so, like many others, we reluctantly pay for an annual Office 365 Family subscription, and a few months ago, the price of that went up by a little over 30%, from £80 to £105.

But here's the thing. You don't have to pay that new price.

You see, in a move that is particularly sneaky even by Microsoft standards, what they actually did was to add a new feature that not many people want: the Copilot AI system. They called this enhanced plan 'Microsoft 365 Family' and migrated everyone on the old 'Microsoft 365 Family' to it, charging them for the new facility.

If, like me, you positively dislike it when your software pops up and says, "Would you like me to write this letter for you?", then you should know that you can switch to 'Microsoft 365 Family Classic' and go back to the old price, and this also gets you a new feature: the absence of annoying AIs!

They don't make this option easy to find, though. In my case, I was set up for recurring payments, and on the web site I had to go to the 'Manage subscriptions', say I wanted to cancel the recurring payment for 'Microsoft 365 Family', and was then given the 'Classic' option at the old price of £79.95.

My sincere thanks to the AtomicShrimp YouTube channel for this video which alerted me to this dastardly practice!

Taking things literally

John Naughton linked to a splendid post by my friend and erstwhile colleague Alan Blackwell, entitled "Oops! We Automated Bullshit."

I won't try to summarise it here, or even discuss the topics he raises, because you should certainly go and read the article. But I did like the aside where he questions his own use of the word "literally":

Do I mean "literally"? My friends complain that I take everything literally, but I'm not a kleptomaniac.

A work of art is never finished

"A work of art", so the saying goes, "is never finished, merely abandoned."

This assertion rings true in many artistic spheres, to the extent that I've seen variations attributed to people as diverse as Leonardo da Vinci and W. H. Auden.

The site 'Quote Investigator' suggests that it actually originated in a 1933 essay by the poet Paul Valéry:

Aux yeux de ces amateurs d'inquiétude et de perfection, un ouvrage n'est jamais achevé, – mot qui pour eux n'a aucun sens, – mais abandonné ...

and they offer this approximate translation:

In the eyes of those who anxiously seek perfection, a work is never truly completed—a word that for them has no sense—but abandoned ...

My knowledge of French idiom falls short of telling me how significant Valéry's use of the word 'amateur' is, though. Is he saying that it's the professionals who really know when a work is complete?

~

Anyway, the same original core assertion is sometimes used when speaking of software: that it's never finished, only abandoned.

It's rare that any programmer deems his code to be complete and bug-free, which is why Donald Knuth got such attention and respect when he offered cheques to anyone finding bugs in his TeX typesetting system (released initially in the late 70s, and still widely-used today).  The value of the cheques was not large... they started at $2.56, which is 2^8 cents, but the value would double each year as long as errors were still found. That takes some confidence!  
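That schedule grows alarmingly fast, as a quick sketch shows (the real scheme was eventually capped, as I recall, but the principle holds):

```python
# Knuth's reward schedule: 2^8 cents in the first year, doubling annually.
def reward_cents(year):
    return 2 ** (8 + year - 1)

for year in range(1, 5):
    print(f"Year {year}: ${reward_cents(year) / 100:.2f}")
# Year 1: $2.56
# Year 2: $5.12
# Year 3: $10.24
# Year 4: $20.48
```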

He was building on the model he'd employed earlier for his books, most notably his epic work, The Art of Computer Programming. Any errors found would be corrected in the next edition. It's a very good way to get diligent proofreaders.

Being Donald Knuth does give you some advantages when employing such a scheme, though, which others might want to consider before trying it themselves: first, there are likely to be very few errors to begin with.  And second, actually receiving one of these cheques became a badge of honour, to the extent that many recipients framed them and put them on the wall, rather than actually cashing them!

For the rest of us, though, there's that old distinction between hardware and software:

Hardware eventually fails.  Software eventually works.

~

I was thinking of all this after coming across a short but pleasing article by Jose Gilgado: The Beauty of Finished Software.  He gives the example of WordStar 4, which, for younger readers, was released in the early 80s.  It came before WordPerfect, which came before Microsoft Word.  Older readers like me can still remember some of the keystrokes.  Anyway, the author George R.R. Martin, who apparently wrote the books on which Game of Thrones is based, still uses it.

Excerpt from the article:

Why would someone use such an old piece of software to write over 5,000 pages? I love how he puts it:

"It does everything I want a word processing program to do and it doesn't do anything else. I don't want any help. I hate some of these modern systems where you type up a lowercase letter and it becomes a capital. I don't want a capital, if I'd wanted a capital, I would have typed the capital."

-- George R.R. Martin

This program embodies the concept of finished software — a software you can use forever with no unneeded changes.

Finished software is software that's not expected to change, and that's a feature! You can rely on it to do some real work.

Once you get used to the software, once the software works for you, you don't need to learn anything new; the interface will exactly be the same, and all your files will stay relevant. No migrations, no new payments, no new changes.

 

I'm not sure that WordStar was ever 'finished', in the sense that version 4 was followed by several later versions, but these were the days when you bought software in a box that you put on a shelf after installing it from the included floppies. You didn't expect it to receive any further updates over-the-air. It had to be good enough to fulfil its purpose at the time of release, and do so for a considerable period.

Publishing an update was an expensive process back then, and we often think that the ease with which we can do so now is a sign of progress. I wonder...

Do read the rest of the post.

Clippy comes of age?

I'm old enough that I can remember going into London to see the early launch demos of Microsoft Word for Windows. I was the computer officer for my Cambridge college at the time, and, up to that point, everyone I was helping used Word for DOS, or the (arguably superior) WordPerfect.

These first GUI-enabled versions of Word were rather good, but the features quickly piled on: more and more buttons, toolbars, ribbons, bells and whistles to persuade you, on a regular basis, to splash out on the next version, unwrap its shrink-wrapped carton, and install it by feeding an ever-increasing number of floppy disks into your machine.

And so for some of us, the trick became learning how to turn off and hide as many of these features as possible, partly to avoid confusing and overwhelming users, and partly just to get on with the actual business of creating content, for which we were supposed to be using the machines in the first place. One feature which became the iconic symbol of unwanted bloatware was 'Clippy' (officially the Office Assistant), which was cute for about five minutes and then just annoying. For everybody. We soon found the 'off' switch for that one!

These days, I very seldom use any Microsoft software (other than their truly excellent free code editor, VSCode, with which I earn my living), so I certainly haven't sat through any demos of their Office software since... well, not since a previous millennium.

But today, since it no longer involves catching a train into London, I did spend ten minutes viewing their demo of 'Microsoft 365 Copilot' -- think Clippy endowed with the facilities of ChatGPT -- and I recommend you do too, while remembering that, as with Clippy, the reality will almost certainly not live up to the promise!

Still, it's an impressive demo (though somewhat disturbing in parts) and though, like me, you may dismiss this as something you'd never actually use, it's important to know that it's out there, and that it will be used by others.

ChatGPT is famous for producing impressively readable prose which often conceals fundamental factual errors. Now, that prose will be beautifully formatted, accompanied by graphs and photos, and therefore perhaps even more likely to catch people unawares if it contains mistakes.

The text produced by these systems is often, it must be said, much better than many of the things that arrive in my inbox, and that will have some advantages. One challenge I foresee, though, is the increasing difficulty in filtering out scams and spams, which often fail at the first hurdle due to grammatical and spelling errors that no reputable organisation would make. What happens when the scammers have the tools to make their devious schemes grammatically correct and beautiful too?

I would also be interested to know how much of one's text, business data, etc. is uploaded to the cloud as part of this process. I know that most people don't care too much about that -- witness the number of GMail users oblivious to the fact that Google can read absolutely everything and use it to advertise to them and their friends -- but in some professions (legal, medical, military?), and in some regimes, there may be a need for caution.

But it's easy to dwell on the negatives, and it's not hard to find lots of situations where LLMs could be genuinely beneficial for people learning new languages; struggling with dyslexia or other disabilities; or just having to type or dictate on a small device a message that needs to appear more professional at the other end.

In other words, it can -- to quote the announcement on Microsoft's blog page -- help everyone to 'Uplevel skills'.

Good grief. Perhaps there's something to be said for letting the machines write the text, after all.

Unblocked?

One of the great benefits of the internet, of course, is its ability to give you a smug sense of satisfaction when you find others who agree with your point of view. This can be further enhanced after a short period if you feel that historical events have actually proved you were right all along.

So powerful is this effect that I've just been to check whether the domain IToldYouSo.com was still available. But it wasn't. "Well", you're probably saying, "I could have told you that..."

I can't help wondering whether, if you added it up on a global scale, the tears shed in recent days over the collapse of the FTX crypto exchange have been balanced by all the small self-affirming boosts for those of us who always felt this cryptocurrency stuff was too good to be true, and are now experiencing emotions somewhere between Schadenfreude and "There but for the grace of God..."!

The key technology behind most cryptocurrencies is, of course, the blockchain: a distributed ledger consisting of entries that are like the laws of the Medes and Persians; once written, they cannot be changed. What's more, this system doesn't require you to trust Medes, Persians or anyone else to maintain it because this ledger is distributed over many tens of thousands of independent machines. It's often described as a zero-trust system.

It's particularly appealing to conspiracy theorists who distrust all big corporations and governments, and also to those who live in regimes that are genuinely untrustworthy, or where the rule of law is not well-established. Once your purchase, contract, will, marriage certificate, patent application or whatever is recorded on a blockchain, there's theoretically nothing anybody can do to get rid of that record. I'm reading Nineteen Eighty-Four again at the moment, and one of the keys to The Party's absolute power in that book is their ability to rewrite history at any time, and erase all evidence of having done so. Not so with blockchains!
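The underlying trick can be sketched in a few lines of Python -- a toy hash chain, with the distribution, consensus and proof-of-work all left out:

```python
import hashlib
import json

# A toy hash chain (not a real blockchain: no distribution, no consensus).
# Each block records the hash of its predecessor, so altering any earlier
# entry invalidates every hash that follows it. (In a real system, the
# latest hash is what all the participants agree on.)

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, "Alice pays Bob 5")
append(chain, "Bob pays Carol 2")
assert verify(chain)

chain[0]["data"] = "Alice pays Bob 500"   # try to rewrite history...
assert not verify(chain)                  # ...and the chain no longer checks out
```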

Sounds wonderful, doesn't it? Especially if you ignore for now the fact that most implementations turned out to be phenomenally power-hungry to run. It is a clever technology, and quite apart from the ridiculous amounts of cash that have been converted to and from cryptocurrencies and similar gambles like NFTs, huge amounts have also been invested in startups that are building things using blockchain technologies.

But there's a problem.

In its first 14 years, at least, despite vast amounts of interest and investment, it's been very hard to identify more than a small handful of real use cases of the blockchain. (The Cambridge Centre for Carbon Credits is run by very smart friends of mine, and may well prove to be an example of a great application.)

But in general, yes, there are lots of things you can build using Distributed Ledger Technologies (to give the more formal generic term), and there are many systems that would probably be better if they were built that way, but it almost always turns out to be much easier just to use a database and trust somebody! If you don't want to trust any individual organisation, then you can create an industry-wide standards body or something similar to run your database.

Sometimes you might use an irreversible ledger, but again, if you can just trust somebody to look after it, you can avoid all that nasty messing about with the complexity and environmental impact of the proof-of-work algorithm: the normal way of avoiding the need for trust.

All of the above is a very long introduction to Tim Bray's interesting article about how Amazon's AWS team, providers of the largest computing facilities in the world, basically came to the same conclusion about blockchains as I did, which made me feel smug.

History, of course, may tell a different story, but I'll have edited this blog post by then, because it's in a database.

Thanks to John Naughton and Charles Arthur, both of whom linked to Tim's article.

Healthchecks in a Docker Swarm

This is a very geeky post for those who might be Googling for particular details of Linux containerisation technologies. Others, please feel free to ignore! We were searching for this information online today and couldn't find it, so I thought I'd post it myself for the benefit of future travellers...

How happy are your containers?

In your Dockerfile, you can specify a HEALTHCHECK: a command that will be run periodically within the container to ascertain whether it seems to be basically happy.

A typical example for a container running a web server might try and retrieve the front page with curl, and exit with an error code if that fails. Something like this, perhaps:

HEALTHCHECK CMD /usr/bin/curl --fail http://localhost/ || exit 1

This will be called periodically by the Docker engine -- every 30 seconds, by default -- and if you look at your running containers, you can see whether the healthcheck is passing in the 'STATUS' field:

$ docker ps
CONTAINER ID   IMAGE           CREATED          STATUS                     NAMES
c9098f4d1933   website:latest  34 minutes ago   Up 33 minutes (healthy)    website_1

Now, you can configure this healthcheck in various ways and examine its state through the command line and other Docker utilities and APIs, but I had always thought that it wasn't actually used for anything by Docker. But I was wrong.

If you are using Docker Swarm (which, in my opinion, not enough people do), then the swarm ensures that an instance of your container keeps running in order to provide your 'service'. Or it may run several instances, if you've told the swarm to create more than one replica. If a container dies, it will be restarted, to ensure that the required number of replicas exist.

But a container doesn't have to die in order to undergo this reincarnation. If it has a healthcheck and the healthcheck fails repeatedly, a container will be killed off and restarted by the swarm. This is a good thing, and just how it ought to work. But it's remarkably hard to find any documentation which specifies this, and you can find disagreement on the web as to whether this actually happens, partly, I expect, because it doesn't happen if you're just running docker-compose.

But my colleague Nicholas and I saw some of our containers dying unexpectedly, wondered if this might be the issue, and decided to test it, as follows...

First, we needed a minimal container where we could easily change the healthcheck status. Here's our Dockerfile:

FROM bash
RUN echo hi > /tmp/t
HEALTHCHECK CMD test -f /tmp/t
CMD bash -c "sleep 5h"

and we built our container with

docker build -t swarmtest .

When you start up this exciting container, it just goes to sleep for five hours. But it contains a little file called /tmp/t, and as long as that file exists, the healthcheck will be happy. If you then use docker exec to go into the running container and delete that file, its state will eventually change to unhealthy.

If you're trying this, you need to be a little bit patient. By default, the check runs every 30 seconds, starting 30s after the container is launched. Then you go in and delete the file, and after the healthcheck has failed three times, it will be marked as unhealthy. If you don't want to wait that long, there are some extra options you can add to the HEALTHCHECK line to speed things up.
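For quicker feedback while experimenting, the timing options are real HEALTHCHECK flags, though the values here are just illustrative:

```dockerfile
# Check every 5s, allow 3s per check, and mark unhealthy after 2 failures
HEALTHCHECK --interval=5s --timeout=3s --retries=2 CMD test -f /tmp/t
```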

OK, so let's create a docker-compose.yml file to make use of this. It's about as small as you can get:

version: '3.8'

services:
  swarmtest:
    image: swarmtest

You can run this using docker-compose (or, now, without the hyphen):

docker compose up

or as a swarm stack using:

docker stack deploy -c docker-compose.yml swarmtest

(You don't need some big infrastructure to use Docker Swarm; that's one of its joys. It can manage large numbers of machines, but if you're using Docker Desktop, for example, you can just run docker swarm init to enable Swarm on your local laptop.)

In either case, you can then use docker ps to find the container's ID and start the healthcheck failing with

docker exec CONTAINER_ID rm /tmp/t

And so here's a key difference between running something under docker compose and running it with docker stack deploy. With the former, after a couple of minutes, you'll see the container change to '(unhealthy)', but it will continue to run. The healthcheck is mostly just an extra bit of decoration; possibly useful, but it can be ignored.

With Docker Swarm, however, you'll see the container marked as unhealthy, and shortly afterwards it will be killed off and restarted. So, yes, healthchecks are important if you're running Docker Swarm, and if your container has been built to include one which, for some reason, you don't want to use, you need to disable it explicitly in the YAML file, or your containers will be restarted every couple of minutes.
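Disabling a built-in healthcheck is straightforward in the Compose file; the disable flag is part of the Compose healthcheck spec:

```yaml
services:
  swarmtest:
    image: swarmtest
    healthcheck:
      disable: true
```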

Finally, if you have a service that takes a long time to start up (perhaps because it's doing a data migration), you may want to configure the 'start period' of the healthcheck, so that it stays in 'starting' mode for longer and doesn't drop into 'unhealthy', where it might be killed off before finishing.
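In the Compose file, that's the start_period setting; a sketch for a slow-starting service (the image name and timings here are made up):

```yaml
services:
  slowstart:
    image: myservice        # hypothetical image
    healthcheck:
      test: ["CMD", "test", "-f", "/tmp/t"]
      interval: 30s
      retries: 3
      start_period: 5m      # failures in the first five minutes don't mark it unhealthy
```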