Returning (again) to WordPress

Every few years I try to move my blog away from WordPress. I tried again earlier this year, but here I am back in WordPress before even a month has gone by! Basically, nothing else is as conducive to writing for the web.

I love MediaWiki (which is what I shifted to this time; last time around it was DokuWiki, and for a brief period last year it was a wrapper for Pandoc that I’m calling markdownsite; there have been other systems too), but wikis really are general-purpose co-writing platforms, best suited to multiple users working on text that needs to be revised forever. Not the random mutterings that no one will ever read, let alone need to edit on an ongoing basis.

So WordPress it is, and the move has me considering the various ‘streams’ of words that I use daily: email, photography, journal, calendar, and blog (I’ll not get into the horrendous topic of chat platforms). In the context of those streams, WordPress excels, so I’ll give it another go.

Manually upgrading Piwigo

There’s a new version of Piwigo out, and so I must upgrade. However, I’ve got things installed so that the web server doesn’t have write-access to the application files (as a security measure), and so I can’t use the built-in automatic upgrader.

I decided to switch to using Git to update the files, which makes future upgrades much easier and means nothing ever has to be made writable by the server (not even for a short time).

First lock the site, via Tools > Maintenance > Lock gallery, then get the new code:

$ git clone https://github.com/Piwigo/Piwigo.git photos.samwilson.id.au
$ cd photos.samwilson.id.au
$ git checkout 2.8.3
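
Future upgrades should then just be a matter of fetching and checking out the new release tag (assuming no local modifications to tracked files; 2.8.4 here stands in for whatever the next release turns out to be):

$ git fetch
$ git checkout 2.8.4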

Copy the following files across from the old installation:

/upload (this is a symlink on my system)
/local/config/database.inc.php
/local/config/config.inc.php
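
For example, assuming the old installation has been moved aside to photos.samwilson.id.au.old (a name I’m making up), something like this does it, with cp -a keeping the upload symlink as a symlink:

$ mkdir -p local/config
$ cp -a ../photos.samwilson.id.au.old/upload .
$ cp -a ../photos.samwilson.id.au.old/local/config/database.inc.php local/config/
$ cp -a ../photos.samwilson.id.au.old/local/config/config.inc.php local/config/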

The following directories must be writable by the web server: /_data and /upload (including /upload/buffer; without it I was getting an “error during buffer directory creation” message).
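
On my server that amounts to something like the following (www-data being the web server user here; yours may differ). The trailing slash on upload/ makes chown descend into the symlink’s target rather than stopping at the link itself:

$ mkdir -p _data upload/buffer
$ sudo chown -R www-data:www-data _data upload/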

Then browse to /upgrade.php to run any required database changes.

I’ve installed these plugins using Git as well: Piwigo-BatchDownloader, Flickr2Piwigo, and piwigo-openstreetmap (there’s a sketch of the cloning below). The OSM plugin also requires /osmmap.php to be created with the following (the plugin would have created this file itself if it had write access):

<?php
// Hand over to the OSM plugin's map page. The plugin normally creates
// this wrapper itself when it has write access to the gallery root.
define( 'PHPWG_ROOT_PATH', './' );
include_once( PHPWG_ROOT_PATH . 'plugins/piwigo-openstreetmap/osmmap.php' );
?>
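
For the record, cloning the plugins looks much the same as cloning the main application; something like the following, assuming the repositories live under the Piwigo organisation on GitHub (check the real URLs before copying this):

$ cd plugins
$ git clone https://github.com/Piwigo/Piwigo-BatchDownloader.git
$ git clone https://github.com/Piwigo/Flickr2Piwigo.git
$ git clone https://github.com/Piwigo/piwigo-openstreetmap.git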

That’s about it. Maybe these notes will help me remember next time.

3 sites gone from Planet Freo

Three feeds are gone from Planet Freo. Only two of the sites are kaput though; the SFFC still has a site, but they’ve ditched the feed (not on purpose, I’d say, because they’ve still got an icon for it on their homepage).

  1. South Fremantle Football Club http://southfremantlefc.com.au/
  2. FreOasis http://freoasis.org
  3. Fremantle Carnevale http://www.fremantlecarnevale.com.au/

Planet Freo back online

Sorry to everyone who noticed; Planet Freo’s been offline for 48 hours. I thought I needed access to my home machine to fix it; turns out I didn’t, but I waited till I was home (and powered by a dram of Ardmore) anyway. It is now fixed.

I’ve updated the FreoWiki page that lists the feeds (if anyone’s keeping track of who’s been censured).

Anyone know of other blogs that should be on the list? Let me know!

What goes Where on the Web

Every now and then I recap where and what I store online. Today I’m doing so again, while rather feeling that there should be a discrete, specific tool for each of these things.

Firstly there are the self-hosted items:

  1. WordPress for blogging (where photo and file attachments should be customized to the exact use in question, not linked from external sites). It is also my OpenID provider.
  2. Piwigo as the primary location for all photographs.
  3. MoonMoon for feed reading (and, hopefully one day, archiving).
  4. MediaWiki for family history sites that are closed-access.
  5. My personal DokuWiki for things that need to be collaboratively edited.

Then the third-party hosts:

  1. OpenStreetMap for map data (GPX traces) and blogging about map-making.
  2. Wikimedia Commons for media of general interest.
  3. The NLA’s Trove for correcting newspaper texts.
  4. Wikisource as a library.
  5. Twitter (although I’m not really sure why I list this here at all).

Finally, I’m still trying to figure out the best system for:

  1. Public family history research. There’s some discussion about this on Meta.

Archiving a password-protected site with wget

The combination of wget and the Export Cookies add-on for Firefox is useful for creating offline, complete, static archives of websites that are only accessible with a password:

  1. First log in to the site and export cookies.txt,
  2. Then run
    wget \
    	--recursive \
    	--no-clobber \
    	--page-requisites \
    	--html-extension \
    	--convert-links \
    	--restrict-file-names=windows \
    	--domains example.com \
    	--no-parent \
    	--load-cookies cookies.txt \
    	--reject logout,admin* \
    	example.com/sub/dir
    

The rejection of logout URLs is especially useful, because otherwise wget would follow the logout link and promptly log itself out part-way through the crawl.
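
For reference, the exported cookies.txt is in the old Netscape format that wget understands: one cookie per line, with tab-separated fields for domain, a subdomains flag, path, a secure flag, expiry timestamp, name, and value. An illustrative (entirely made-up) line:

example.com	FALSE	/	FALSE	1999999999	session_id	abc123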

Preventing duplicate rows in a tabular HTML form

I am working on a bespoke issue-tracking system at the moment (not for code issue-tracking, in case anyone thinks we’re cloning Redmine; although there certainly are overlaps…) in which each issue has a list of personnel, each of whom has a role on the issue.

The task at hand is to prevent people selecting the same combination of role and person more than once. Of course, the application and database will reject such an occurrence; this is about fixing the UI so that users can’t easily submit duplicates in the first place. For the purposes of this explanation, we’re working only in HTML and JavaScript (jQuery).

The UI looks something like the screenshot to the right (there is also a means of adding new rows to the table—that doesn’t change how this validation works, but it is why we’re using .live() below).

The Problem

In an HTML table full of form elements,
where new rows can be added (dynamically),
we want to prevent duplicate rows being selected.

The Plan

After changing a value in any row,
get a list of the values in that row
and then go through all of the rows
and see if those values are there.
If we find more than one instance of them,
tell the user
and return the changed value to what it was before.

The Solution

The final code was built using jQuery 1.6.1 and jQuery UI 1.8.5, and a demonstration is available elsewhere; in outline, it looks something like the sketch below.
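
A minimal sketch of that plan, assuming each row of a (hypothetical) #personnel table holds two selects marked with (equally hypothetical) .role and .person classes:

// Remember each select's value when it gains focus, so it can be restored.
$('#personnel select').live('focus', function () {
	$(this).data('previous', $(this).val());
});

// After any change, count the rows that have this row's role + person combination.
$('#personnel select').live('change', function () {
	var $row = $(this).closest('tr');
	var role = $row.find('.role').val();
	var person = $row.find('.person').val();
	var matches = $('#personnel tbody tr').filter(function () {
		return $(this).find('.role').val() === role
			&& $(this).find('.person').val() === person;
	});
	if (matches.length > 1) {
		alert('That person already has that role on this issue.');
		// Revert to the value remembered on focus.
		$(this).val($(this).data('previous'));
	} else {
		$(this).data('previous', $(this).val());
	}
});

Storing the previous value on focus is what makes the revert possible: by the time change fires, the select has already moved to its new value.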