Returning (again) to WordPress

Every few years I try to move my blog away from WordPress. I tried again earlier this year, but here I am back in WordPress before even a month has gone by! Basically, nothing is as conducive to writing for the web.

I love MediaWiki (which is what I shifted to this time; last time around it was Dokuwiki, and for a brief period last year it was a wrapper for Pandoc that I’m calling markdownsite; there have been other systems too) but wikis really are general-purpose co-writing platforms, best for multiple users working on text that needs to be revised forever. Not random mutterings that no one will ever read, let alone particularly need to edit on an ongoing basis.

So WordPress it is, and it’s leading me to consider the various ‘streams’ of words that I use daily: email, photography, journal, calendar, and blog (I’ll not get into the horrendous topic of chat platforms). In the context of those streams, WordPress excels. So I’ll try it again, I think.

New feature for ia-upload

I have been working on an addition to the IA Upload tool these last few days, and it’s ready for testing. Hopefully we’ll merge it tomorrow or the next day.

This is the first time I’ve done much work with the internal structure of DjVu files, and really it’s all been pretty straightforward. There were a couple of odd bits to do with matching up element and page names, but once that was sorted it all seemed to work as it should.

It’s a shame that the Internet Archive has discontinued their production of DjVu files, but I guess they’ve got their reasons, and it’s not like anyone’s ever heard of DjVu anyway. I don’t suppose anyone other than Wikisource was using those files. Thankfully they’re still producing the DjVu XML that we need to make our own DjVus, and it sounds like they’re going to continue doing so (because they use the XML to produce the text versions of items).

Wikisource Hangout

I wonder how long it takes after someone first starts editing a Wikimedia project that they figure out that they can read lots of Wikimedia news on https://en.planet.wikimedia.org/ — and when, after that, they realise they can also post to the news there? (At which point they probably give up if they haven’t already got a blog.)

Anyway, I forgot that I can post news, but then I remembered. So:

There’s going to be a Wikisource meeting next weekend (28 January, on Google Hangouts), if you’re interested in joining:
https://meta.wikimedia.org/wiki/Wikisource_Community_User_Group/January_2017_Hangout

Internet Archive 20th anniversary celebration

I can’t believe I’m going to miss this by two days! I’m going to be in San Francisco for the first time since 1997 for the week before. What are the odds.

[Banner: “How to Build an Archive”]

“For 20 years, the Internet Archive has been capturing the Web – that amazing universe of images, audio, text and software that forms our shared digital culture. Now it’s time to celebrate and we’re throwing a party! Please join us for our 20th Anniversary celebration on Wednesday, October 26th, 2016, from 5–9:30 pm.”

Installing AtoM

Access to Memory (AtoM) is a brilliant archival description management system, written in PHP and available under the AGPL 3.0 license. The installation documentation is thorough… but of course I just wanted to get the thing running and so didn’t bother actually reading it all! I mean, where’s the tl;dr?!

So here are the essential bits (for a more-or-less bog standard Ubuntu install with Apache, PHP, and Node.js), running as a normal user and installing to a subdirectory.

To start, clone the repository from https://github.com/artefactual/atom.git and check out the latest stable branch (e.g. stable/2.2.x).

Then make the cache, log, and data directories writable by the web-server user: sudo chgrp -R www-data {cache,log,data} && sudo chmod -R g+w {cache,log,data} (substituting whatever group your webserver runs as, of course).

Now change into the plugins/arDominionPlugin directory and run make; this will build the CSS files.

Navigating to the installation now will redirect you to the installer, which will probably throw up a bunch of errors, most likely to do with missing dependencies or permissions; sort these out (e.g. sudo php5enmod xsl) and you should be good to go.
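For instance, clearing a missing-XSL error on a PHP 5 Ubuntu box might look like the following (on PHP 7 and later the tool is phpenmod and the packages are named php-xsl and so on; adjust accordingly):

```shell
# Enable the PHP XSL extension that the AtoM installer checks for,
# then restart Apache so the newly enabled module is picked up.
# Other missing extensions are handled the same way: install the
# php5-<name> package if needed, then php5enmod <name>.
sudo php5enmod xsl
sudo service apache2 restart
```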

(Good to go to the next step, that is.)

Now install Elasticsearch. It’s easier than the AtoM docs admit: just do it the normal way with sudo apt-get install elasticsearch. (I’d submit a change to the AtoM docs to remove the “it’s not in the Ubuntu repositories” line, but I’m not quite sure how yet.)

Start Elasticsearch in the background (the -d switch) with sudo /usr/share/elasticsearch/bin/elasticsearch -d and carry on with the installation procedure. The rest seems to be fairly straightforward.
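A quick sanity check that Elasticsearch actually came up (assuming its default HTTP port of 9200) is to ask it for its status document:

```shell
# Start Elasticsearch as a background daemon, give it a moment to bind
# its port, then query it; a small JSON status blob means it is running.
sudo /usr/share/elasticsearch/bin/elasticsearch -d
sleep 10
curl -s http://localhost:9200/
```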