Sam's notebook

Publishing on the indieweb July 1st, 2017, 8AM

Open content

I’ve been reading about POSSE and PESOS, and getting re-inspired about the value of a plurality of web tools. I sometimes try to focus on just one software package (MediaWiki, at the moment, because it’s what I code for at work). But I used to love working on WordPress, and I’ve got a couple of stalled projects for Piwigo lying around. Basically, all these things will be of higher quality if they have to work with each other and with all the data silos (Facebook, Twitter, etc.).

The foundational principles of the IndieWeb are:

  1. Own your data.
  2. Use visible data for humans first, machines second. See also DRY.
  3. Build tools for yourself, not for all of your friends. It’s extremely hard to fight Metcalfe’s law: you won’t be able to convince all your friends to join the independent web. But if you build something that satisfies your own needs, but is backwards compatible for people who haven’t joined in (say, by practicing POSSE), the time and effort you’ve spent building your own tools isn’t wasted just because others haven’t joined in yet.
  4. Eat your own dogfood. Whatever you build should be for yourself. If you aren’t depending on it, why should anybody else? We call that selfdogfooding. More importantly, build the indieweb around your needs. If you design tools for some hypothetical user, they may not actually exist; if you build tools for yourself, you actually do exist. selfdogfooding is also a form of “proof of work” to help focus on productive interactions.
  5. Document your stuff. You’ve built a place to speak your mind, use it to document your processes, ideas, designs and code. At least document it for your future self.
  6. Open source your stuff! You don’t have to, of course, but if you like the existence of the indie web, making your code open source means other people can get on the indie web quicker and easier.
  7. UX and design is more important than protocols, formats, data models, schema etc. We focus on UX first, and then as we figure that out we build/develop/subset the absolutely simplest, easiest, and most minimal protocols & formats sufficient to support that UX, and nothing more. AKA UX before plumbing.
  8. Build platform agnostic platforms. The more your code is modular and composed of pieces you can swap out, the less dependent you are on a particular device, UI, templating language, API, backend language, storage model, database, platform. The more your code is modular, the greater the chance that at least some of it can and will be re-used and improved, and those improvements can then be reincorporated.
  9. Longevity. Build for the long web. If human society is able to preserve ancient papyrus, Victorian photographs and dinosaur bones, we should be able to build web technology that doesn’t require us to destroy everything we’ve done every few years in the name of progress.
  10. Plurality. With IndieWebCamp we’ve specifically chosen to encourage and embrace a diversity of approaches & implementations. This diversity makes the IndieWeb stronger and more resilient than any one (often monoculture) approach.
  11. Have fun. Remember that GeoCities page you built back in the mid-90s? The one with the Java applets, garish green background and seventeen animated GIFs? It may have been ugly, badly coded and sucky, but it was fun, damnit. Keep the web weird and interesting.


WikiCite 2017 May 23rd, 2017, 1PM

Open content Programming

(Firefox asked me to rate it this morning, with a little picture of a broken heart and five stars to select from. I gave it five (’cause it’s brilliant) and then it sent me to a survey on mozilla.com titled “Heavy User V2”, which sounds like the name of a confused interplanetary supply ship.)

Today WikiCite17 begins. Three days of talking and hacking about the galaxy that comprises Wikipedia, Wikidata, Wikisource, citations, and all bibliographic data. There are lots of different ways into this topic, and I’m focusing not on Wikipedia citations (which is the main drive of the conference, I think), but on getting (English) Wikisource metadata a tiny bit further along (e.g. figure out how to display work details on a Wikisource edition page); and on a little side project of adding a Wikidata-backed citation system to WordPress.

The former is currently stalled on me not understanding the details of P629 ‘edition or translation of’ — specifically whether it should be allowed to have multiple values.

The latter is rolling on quite well, and I’ve got it searching and displaying and the beginnings of updating ‘book’ records on Wikidata. Soon it shall be able to make lists of items, and insert the lists (or individual citations of items on them) into blog posts and pages. I’m not sure what the state of the art is for citation-formatting packages in PHP, but I’m hoping there’s something good out there.
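(For anyone who wants to poke at the Wikidata side of this, the search end of things is easy to experiment with from the command line: a request along these lines, with whatever search term you like, returns matching items as JSON.)

curl 'https://www.wikidata.org/w/api.php?action=wbsearchentities&search=Moby+Dick&language=en&type=item&format=json'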

And here is a scary chicken I saw yesterday at the Naturhistorisches Museum:

Scary chicken (Deinonychus antirrhopus)


MediaWiki Documentation Day 2017 May 12th, 2017, 4PM

Programming

It’s MediaWiki Documentation Day 2017!

So I’ve been documenting a couple of things, and I’ve added a bit to the Xtools manual.

The latter is actually really useful, not so much for end users (I dare say they’ll never read it), but because I always like writing documentation before coding. It makes the goal so much clearer in my mind, and then the coding is much easier. With agreed-upon documentation, writing tests is easier; with tests written, writing the code is easier.

Time for a beer — and I’ll drink to DFD (document first development)! Oh, and semantic linebreaks are great.


Ficra and Gibson Park Freospaces gone May 9th, 2017, 9AM

Miscellaneous

The Freospace blogs for Ficra and Gibson Park have gone offline. The former I guess because they’ve merged with the Fremantle Society, and maybe the latter is just not active at all? Would have been nice to at least put a notice up on their sites explaining what’s going on.

Anyway, I’ve removed their feeds from Planet Freo.


Lenovo ThinkPad Carbon X1 (gen. 5) April 23rd, 2017, 11AM

Open content Programming

Five years, two months, and 22 days after the last time, I’m retiring my laptop and moving to a new one. This time it’s a Lenovo ThinkPad Carbon X1, fifth generation (manufactured in March this year, if the packaging is to be believed). This time, I’m not switching operating systems (although I am switching desktops, to KDE, because I hear Ubuntu is switching to stock GNOME sometime soon).

So I kicked off the download of kubuntu-16.04.2-desktop-amd64.iso and, while it was going, started up the new machine. I jumped straight into the BIOS to set the boot order (putting ‘Windows boot manager’ right at the bottom, because it sounds like something predictably annoying), and hit ‘save’. Then I forgot what I was doing and wandered back to my other machine, leaving the new laptop to reboot and send itself into the Windows installation process. Oops.

There’s no way out! You select the language you want to use, and then are presented with the EULA, with an ‘accept’ button but no way to decline the bloody thing, and no way to restart the computer! Even worse, a long-press on the power button just suspended the machine, rather than forcing it off. In the end some combination of pressing the power button while waking from suspend tricked it into dying. Then it was a simple matter of booting from a thumb drive and getting Kubuntu installed.
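(If you need to make such a thumb drive, one way is simply to write the ISO straight to it with dd, along the lines of the following; /dev/sdX stands in for the drive’s actual device name, which is worth double-checking with lsblk first because this will overwrite whatever is on it.)

sudo dd if=kubuntu-16.04.2-desktop-amd64.iso of=/dev/sdX bs=4M && sync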

I got slightly confused at two points: at having to turn off UEFI (which I think is the ‘Windows boot manager’ from above?) in order to install 3rd party proprietary drivers (usually Lenovo are good at providing Linux drivers, but more on that later); and at having to use LVM in order to have full-disk encryption (because I had thought it was usually possible to encrypt without LVM, but really I don’t mind either way; there doesn’t seem to be any disadvantage to using LVM; I then of course elected not to encrypt my home directory).

So now I’m slowly getting KDE set up how I like it, and am running into various problems with the trackpoint, touchpad, and Kmail crashing. I’ll try to document the more interesting bits here, or add to the KDE UserBase wiki.


Importing to Piwigo April 18th, 2017, 9AM

Open content

Piwigo is pretty good!

I mean, I mostly use Flickr at the moment, because it is quick, easy to recommend to people, and allows photos to be added to Trove. But I’d rather host things myself. Far easier for backups, and so nice to know that if the software doesn’t do a thing then there’s a possibility of modifying it.

To bulk import into Piwigo one must first rsync all photos into the galleries/ directory (there’s a rough sketch of that step below). Then, rename them all so they don’t contain any unwanted characters (such as spaces or accented characters). To do this, first have a look at the files that will fail:

find -regex '.*[^a-zA-Z0-9\-_\.].*'

(The regex is determined by $conf['sync_chars_regex'] in include/config_default.inc.php which defaults to ^[a-zA-Z0-9-_.]+$.)
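(As for that initial copy into galleries/, it’s nothing fancier than a plain rsync; something like the following, assuming Piwigo lives at /var/www/piwigo and the photos are already organised into one directory per album. Adjust both paths to suit.)

rsync -av ~/Photos/ /var/www/piwigo/galleries/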

Then you can rename the offending files (replace unwanted characters with underscores) by extending the above command with an exec option:

find -regex '.*[^a-zA-Z0-9\-\._].*' -exec rename -v -n "s/[^a-zA-Z0-9\-\._\/]/_/g" {} \;

(I previously used a more complicated for-loop for this, that didn’t handle directories.)

Once this command is showing what you expect, remove the -n (“no action”) switch and run it for real. Note also that the second regex includes the forward slash, to not replace directory separators. And don’t worry about it overwriting files whose normalized names match; rename will complain if that happens (unless you pass the --force option).
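So the real run ends up being exactly the same command, just without the -n:

find -regex '.*[^a-zA-Z0-9\-\._].*' -exec rename -v "s/[^a-zA-Z0-9\-\._\/]/_/g" {} \;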

Once all the names are normalized, use the built-in synchronization feature to update Piwigo’s database.

At this point, all photos should be visible in your albums, but there is one last step to take before all is done, for maximum Piwigo-grooviness. This is to use the Virtualize plugin to turn all of these ‘physical’ photos into ‘virtual’ ones (so they can be added to multiple albums etc.). This plugin comes with a warning to ensure that your database is backed up etc. but personally I’ve used it dozens of times on quite large sets of files and never had any trouble. It seems that even if it runs out of memory and crashes halfway, it doesn’t leave anything in an unstable state (of course, you shouldn’t take my word for it…).
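(If you do want a quick backup before running it, a plain mysqldump is enough; the user and database names here are whatever your Piwigo installation was set up with.)

mysqldump -u piwigo -p piwigo > piwigo-backup-$(date +%F).sql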
