meta

Posts concerning this very website.

Caoimhe

After a year I have finally added a functioning tagging system. I had been tagging posts already but they were pretty much non-functional, just sitting at the bottom looking pretty. Now every post has some category icons under it. Mouse over each for titles if it’s unclear what they’re meant to be. As well as posts, the tag pages include reviews and gallery exhibits.


Caoimhe

I first started working on this website three years ago, in September of 2022. Then it was just a gallery of projects that I’d made, built on top of the default Jekyll minima template. I’ve been continuing to add to it, turning it into my own custom nightmare with a bespoke build process for generating podcast feeds and customised Pico-8 cartridges. A website is just something that you can endlessly tinker with if you have the inclination to. Then one year ago today it sprouted a bog1.

Like many other blogs that popped up at the time, this was a reaction to the announcement that Cohost was going to shut down, something that I am still deeply upset about. For a lot of people Cohost was a reminder that websites can actually be fun. That they can be a place for creative output and joy and not just doomscrolling or reposting every blorbo that you see. That HTML is a canvas that you can use to paint, even if all you are painting is a silly bit of bespoke CSS or Javascript. I still have a todo list of things I want to add to the site the length of my arm2. Making my first posts a year ago I really had no idea if this was something I would stick to, and I am glad and somewhat surprised that I have.

I have also tried to re-evaluate how I use the internet, trying to be more deliberate with how I spend time on it and what I’m reading and giving my attention to. I was using an RSS reader long before the Cohost shutdown, and I am probably preaching to the choir here, but I will continue to evangelise them as the best way to follow anything, and most things do still expose an RSS feed even if it can be a little hidden. YouTube, Mastodon, Bluesky, Substack and Tumblr accounts all have RSS feeds that you can use to subscribe and read rather than having to log into each site and be bombarded with ads and algorithmically boosted posts designed to piss you off. I use Feedbin but there are loads of others.

I do still find myself falling into old, bad habits, especially on days when I am down or lack energy, but I really do recommend being deliberate in what you read online. Unfollow, mute or block people who post stuff that just pisses you off. It might feel like you are keeping on top of important news but you are probably just drowning yourself in misery (and hey, news sites also have RSS feeds if you want to actually stay informed). Read what matters to you, write what matters to you and share with people the things you find that are worth sharing. Or just post nonsense about Sonic the Hedgehog and Seto Kaiba. That is also important.

  1. The name seemed funny at the time and now I am stuck with it forever. 

  2. Now that there is an entire year of posts on here I should probably get around to setting up pagination rather than the entire bog being one huge page. 



Caoimhe

After some deliberation I have decided to split off all of my reviews into their own separate page. Introducing: FWD:RE:views.

For RSS readers: if you follow the existing beag feed you should not see any changes. If you follow the bog and want to still see reviews you can add the new reviews feed to your newsreader.

To sum up the new situation quickly:

  • New—FWD:RE:views: reviews RSS
  • Modified—Bog: original posts + Fediverse posts RSS
  • Unchanged—Beag: only original posts RSS

I enjoy logging what I’m watching but I haven’t been happy with how all the stuff I was copying over from other sites was drowning out my other posts. I was originally thinking about it in terms of my original posts versus “syndicated” stuff and was considering making the “beag” page the default with a separate one for everything else, but after ruminating on it I decided it made more sense to me to split just the reviews out as their own thing. I took my time before making this decision as I didn’t want to end up changing my RSS feeds repeatedly and making a mess of people’s newsreaders.

The title FWD:RE:views is meant to reflect how most of the reviews are “forwarded” from my Letterboxd, Serializd, Backloggd and Goodreads accounts, but there is currently one review that isn’t, and there may be more if I want to log other things that don’t have entries in the databases of those websites.



Caoimhe

I have made some tweaks to the fonts on the site. In particular I optimised the fonts I use for headings on the homepage using the pyftsubset command in Font Tools so that they consist only of the characters needed to display the text used on the site and no more. This should reduce the download footprint of the site a bit.
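
For anyone curious, an invocation looks roughly like this (the font filename and text here are placeholders rather than my actual files):

pyftsubset heading-font.ttf --text="Oak Reef Zone" --flavor=woff2 --output-file=heading-font.subset.woff2

The --text option keeps only the glyphs needed for that exact string and --flavor=woff2 saves the result as a compressed WOFF2 file.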

I don’t remember who I originally saw link to pyftsubset so I can’t give credit for where I saw it, unfortunately.


Caoimhe

I need to write posts that aren’t just technical updates about the site itself and the stuff copied over from my other accounts.

I have plans for stuff to write; I have just been tired and sick and busy.


Caoimhe

The story so far: I was using Cusdis to provide a comments section for the bog but it proved to be broken and unmaintained so I replaced it with a self-hosted instance of Comentario.

I am going to walk through what I did to set up Comentario and import old comments from Cusdis. This is not a guide: the scripts posted below have serious problems that should be fixed before being used (and I am not going to be the one to do that), and you would obviously need to change any references to oakreef.ie to your own site.

Subdomain

First of all I needed an address to host the Comentario instance at. I chose a new subdomain, comments.oakreef.ie, and had to update my Let’s Encrypt certificates to cover it. I did not save the commands I used to do that, but it was pretty straightforward to do from the command line.
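
If you’re using certbot it would have been something along these lines, re-issuing the certificate with the extra name included (the list of domains needs to match whatever the certificate already covers, so treat this as approximate):

sudo certbot --nginx --expand -d oakreef.ie -d comments.oakreef.ie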

Docker

Then I installed Docker on my server and, following Damien’s example with a few tweaks, created my docker-compose.yml and secrets.yaml files.

docker-compose.yml

version: '3'

services:
  db:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: comentario
      POSTGRES_USER: {INSERT POSTGRES USERNAME HERE}
      POSTGRES_PASSWORD: {INSERT POSTGRES PASSWORD HERE}
    ports:
      - "127.0.0.1:5432:5432"

  app:
    restart: unless-stopped
    image: registry.gitlab.com/comentario/comentario
    environment:
      BASE_URL: https://comments.oakreef.ie/
      SECRETS_FILE: "/secrets.yaml"
    ports:
      - "5050:80"
    volumes:
      - ./secrets.yaml:/secrets.yaml:ro

secrets.yaml

postgres:
  host:     db
  port:     5432
  database: comentario
  username: {INSERT POSTGRES USERNAME HERE}
  password: {INSERT POSTGRES PASSWORD HERE}

Changing the ports configuration to 127.0.0.1:5432:5432 means that the Postgres database is only accessible locally on the server and not publicly available. I also don’t currently have email set up for the Comentario instance.

Launching the instance is then just a matter of:

sudo docker compose -f docker-compose.yml up -d

Nginx

Then I needed to modify my Nginx config to direct comments.oakreef.ie to the Comentario instance running on port 5050.

server {
	server_name comments.oakreef.ie;

	listen 443 ssl;

	ssl_certificate     /etc/letsencrypt/live/oakreef.ie/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/oakreef.ie/privkey.pem;
	include /etc/letsencrypt/options-ssl-nginx.conf;
	ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

	location / {
		proxy_pass http://127.0.0.1:5050;
		proxy_redirect off;
		proxy_http_version 1.1;
		proxy_cache_bypass $http_upgrade;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection keep-alive;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Host $server_name;
		proxy_buffer_size 128k;
		proxy_buffers 4 256k;
		proxy_busy_buffers_size 256k;
		add_header Cache-Control "private";
	}
}

Importing comments

Once there were a few comments on the new system I used the export feature in Comentario to get a JSON file and looked at how Comentario defined comment data in it. I also manually went through all the comments on the old system and made a basic CSV file of all of them with the author name, date posted, the URL of the post the comment was on and the text of each comment. I then wrote this Python script to take the exported Comentario comments (named basedata.json) and the CSV with the old Cusdis comments (comments.csv) and export a new file with the combined data in the Comentario format.

There are some problems with this!

  1. When importing data Comentario does not check for duplicates. I ended up creating duplicates of all the new Comentario comments that already existed on the site doing this and had to manually delete them. If you are doing this do not include existing comments as part of the file you are creating to import.
  2. I did not include replies at all. I decided to try importing replies I had made to people as a second, separate step (see the second Python script below). This made things more awkward down the line. Do everything in one batch.
import csv
import json
from datetime import datetime
from dateutil.parser import parse
from uuid import uuid4

now = datetime.now()
pages = {}
site_url = 'https://oakreef.ie'
date_format = '%Y-%m-%dT%H:%M:%SZ'

# UUID of my admin user in the new Comentario instance
my_id = 'ADMIN USER UUID'

# Read the hand-made CSV of old Cusdis comments and group them by page URL
with open('comments.csv', newline='') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',', quotechar='"')
    for row in csv_reader:
        author, date, url, text = row
        date = parse(date)

        if url not in pages:
            pages[url] = {'comments': []}

        pages[url]['comments'].append({
            'author': author,
            'date': date,
            'text': text,
        })

# Start from the data exported from Comentario and append to it
with open('basedata.json') as json_file:
    data = json.load(json_file)

domain_id = data['pages'][0]['domainId']

for url, page in pages.items():
    # Each page needs its own entry with a freshly generated UUID
    page_id = str(uuid4())
    data['pages'].append({
        'createdTime': now.strftime(date_format),
        'domainId': domain_id,
        'id': page_id,
        'isReadonly': False,
        'path': url,
    })

    # Add each old comment in the same shape Comentario uses in its exports
    for comment in page['comments']:
        comment_id = str(uuid4())
        data['comments'].append({
            'authorCountry': 'IE',
            'authorName': comment['author'],
            'createdTime': comment['date'].strftime(date_format),
            'deletedTime': '0001-01-01T00:00:00.000Z',
            'editedTime': '0001-01-01T00:00:00.000Z',
            'html': f"<p>{comment['text']}</p>\n",
            'id': comment_id,
            'isApproved': True,
            'isDeleted': False,
            'isPending': False,
            'isSticky': False,
            'markdown': comment['text'],
            'moderatedTime': comment['date'].strftime(date_format),
            'pageId': page_id,
            'score': 0,
            'url': f'{site_url}{url}#comentario-{comment_id}',
            'userCreated': '00000000-0000-0000-0000-000000000000',
            'userModerated': my_id,
        })

with open('import.json', 'w') as import_file:
    json.dump(data, import_file)

When that was done I put it away for a while as I wasn’t feeling well and eventually came back to do replies. I, again, manually went through all the replies I had made to comments on the old system and made a CSV file with the reply date, the URL of the page, the UUID of the parent comment as it exists in the new Comentario system, the UUID of the page the parent comment is on in the new Comentario system and the text of the reply.

A few things are important to note about this:

  1. It was a pain in the hole. If I had done replies at the same time as the rest of the comments I could have used the UUIDs that I was generating in the script rather than going to find them manually and making them into a CSV.
  2. The initial upload failed as apparently Comentario couldn’t match the page and user IDs to what was in the database and it needed those to be in the import file. I got around this by doing another export and copying the entries for pages and commenters from that into the new one and uploading. This was not a good way to do this! It could have gone badly or had unexpected side effects. Again, if you’re doing this do not import comments and replies as two separate steps!
  3. It still didn’t fully work anyway. My replies did import and do show up on the right pages but they are not nested properly as replies. It’s like looking at a comment section on a very old Youtube video where reply chains are broken and everything just displays as individual comments. I don’t think that I am going to bother trying to fix this as I don’t have that many comments on this site and I think everything reads understandably as it is but if you want to try this approach you will want to figure out a way of not fucking up importing the replies.
import csv
import json
from dateutil.parser import parse
from uuid import uuid4

site_url = 'https://oakreef.ie'
date_format = '%Y-%m-%dT%H:%M:%SZ'

# UUID of my admin user in the new Comentario instance
my_id = 'ADMIN USER UUID'

# Build a fresh import file containing only the replies
data = {
    'version': 3,
    'comments': [],
}

with open('replies.csv', newline='') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',', quotechar='"')
    for row in csv_reader:
        date, url, parent_id, page_id, text = row
        date = parse(date)

        comment_id = str(uuid4())

        data['comments'].append({
            'authorCountry': 'IE',
            'createdTime': date.strftime(date_format),
            'deletedTime': '0001-01-01T00:00:00.000Z',
            'editedTime': '0001-01-01T00:00:00.000Z',
            'html': f'<p>{text}</p>\n',
            'id': comment_id,
            'isApproved': True,
            'isDeleted': False,
            'isPending': False,
            'isSticky': False,
            'markdown': text,
            'moderatedTime': date.strftime(date_format),
            'pageId': page_id,
            'parentId': parent_id,
            'score': 0,
            'url': f'{site_url}{url}#comentario-{comment_id}',
            'userCreated': my_id,
            'userModerated': my_id,
        })

with open('reply-import.json', 'w') as import_file:
    json.dump(data, import_file)

My avatar

One last thing: Comentario doesn’t allow GIF avatars, but I like my sparkly Jupiter. After looking at the Postgres database I could see that user avatars are simply stored as binary data in the table cm_user_avatars, with three sizes, avatar_l, avatar_m and avatar_s, corresponding to 128×128, 32×32 and 16×16 pixels respectively. So I made some GIFs in the appropriate sizes, converted them to binary strings, and manually overrode the avatar_l and avatar_m entries in the cm_user_avatars table (I left avatar_s as a JPEG).

UPDATE cm_user_avatars SET avatar_m = '\xBINARY_DATA_HERE'  WHERE user_id = 'UUID_HERE';
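
If you need to produce that hex string yourself, something as small as this Python snippet will do it (the filename is just an example); the output goes after the \x in the UPDATE above:

# Print a GIF's bytes as the hex digits Postgres expects in a bytea literal
with open('avatar_m.gif', 'rb') as gif_file:
    print(gif_file.read().hex())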

This seems to work without any problems and my avatar in my own comments section is sparkly now.

Conclusions

That’s it. I hope I don’t have to worry too much about this setup again for some time.


Caoimhe

I have finally set up a replacement for Cusdis. Following Damien’s example I have set up a self-hosted instance of Comentario.

You can comment with or without setting up an account. If you create an account please don’t forget your password, because I do not have an email account set up for it to send out password resets. But also please don’t reuse a password, because you should not trust me with that.

I will try to transfer old comments from Cusdis over but that won’t happen immediately.

I have also set up a page that has all my bog posts without the stuff syndicated from other sites, mirroring the existing Atom feed that does the same.



Caoimhe

Cusdis appears to not be refreshing my monthly comment allowance so I am not able to approve any new comments. I reported this issue a week ago but I think the developer is not currently working on the project. I may look into getting a self-hosted version set up and migrating all existing comments to it, but I am not sure when I’ll be able to get that done.


Caoimhe

I made some updates to the site yesterday, including a smaller feed at /beag.atom1 that only contains posts I write for this site itself, not the ones copied from my Letterboxd, Serializd, Backloggd or Fediverse accounts or things that I repost. Update: It will still include reposts.

I also added some art by Kate Barrett to the Transy page that I had forgotten to include.

Otherwise it was mostly some layout and styling tweaks and fixes. Text in dark mode should be a bit brighter now, there are custom text selection colours, and on the homepage there’s one new 88×31 pixel button and an infobox telling you to install an adblocker if you don’t have one.

  1. Beag is Irish for “small”. 



Caoimhe

I’ve been testing how I want to handle rebogs on this site.

The previous post was rebogged entirely manually. I wrote a post in the normal format for Jekyll and defined metadata for the rebog information to link back to Freja’s website and display her avatar. Now I have to figure out how I want to streamline that process.

For posts syndicated from Letterboxd and such I have a setup where, when I build the site, some code checks those feeds and writes the entries into their own special folders, which then get processed and added into the list of posts.
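
As a rough sketch of the idea (not the actual build code; the feed URL and folder name here are made up), it boils down to something like this:

# Rough sketch: fetch a feed and write each entry out as a file for Jekyll to pick up
import os
import feedparser

feed = feedparser.parse('https://letterboxd.com/EXAMPLE_USER/rss/')
os.makedirs('_syndicated/letterboxd', exist_ok=True)

for entry in feed.entries:
    slug = entry.link.rstrip('/').split('/')[-1]
    with open(f'_syndicated/letterboxd/{slug}.html', 'w') as post_file:
        post_file.write('---\n')
        post_file.write(f'title: "{entry.title}"\n')
        post_file.write(f'original_url: {entry.link}\n')
        post_file.write('---\n\n')
        post_file.write(entry.summary)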

For rebogs though I think I’m going to do it differently and write a script, probably in Python, that I can run from my command line and give a link to a post. It will attempt to parse the content of that post and then write it to a file directly in the same folder as my normal posts, but with the extra metadata I’ve defined for rebogs.


Caoimhe

Continuing to crib from Natalie I have finally gotten around to trying out webmentions for this site.

I had bookmarked her posts on it and made notes and was going to get around to implementing it myself when I thought “hey, I’m using Jekyll, has a webmentions plugin already been made for Jekyll?” and the answer was of course it had. Adding it was very straightforward and hopefully it works out of the box.


Caoimhe

The previous version of this site was originally just the gallery, and it included an Atom feed honestly mostly just because I wanted to understand better how RSS worked and it was an interesting and fun thing to make. When I made the bog, instead of retiring the old feeds I added a new one and then made a combined one that had everything, and as I’ve reorganised things this has become a pain.

And the gallery is a record of past things I’ve worked on. The dates listed are retroactive. Even when I add new things they’re usually pretty heavily back-dated. It’s not really an appropriate use of an RSS feed. You shouldn’t really be adding things that have dates months or even years in the past. And when I add something there now I’m probably going to have a post about it anyway. So I’ve decided to simplify things. I am going to remove the other feeds and redirect them to the one for the bog. If I add something to the gallery there’ll be a post about it and obviously the two podcasts have their own feeds. Because they’re podcasts and that’s how podcasts work.


Caoimhe

I bookmarked a couple of posts from Natalie ages ago about h-entry and have finally gotten around to marking up my posts with them.

Hopefully I didn’t mess anything up and everything is parsable now. I should have done this sooner as it was fairly simple, but better late than never.
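
For anyone unfamiliar, h-entry is just a handful of class names layered onto the HTML you already have; a marked-up post ends up looking roughly like this (simplified, not my exact markup):

<article class="h-entry">
  <h2 class="p-name">Post title</h2>
  <a class="u-url" href="https://oakreef.ie/bog/post-title">
    <time class="dt-published" datetime="2025-01-01T12:00:00Z">1 January 2025</time>
  </a>
  <span class="p-author h-card">Caoimhe</span>
  <div class="e-content">
    <p>The post itself goes here.</p>
  </div>
</article>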

Now I have that set up as well as syndicating posts from my Backloggd and Letterboxd feeds. Next steps in trying to get set up to be part of the sociable web: Webmentions and figuring out how I want to handle rebogging individual posts.




Caoimhe

I’ve decided to start posting reviews to my Backloggd account. First one is of Shadow Generations (it’s good). You won’t really need to follow me there, though, as any reviews I post there should also show up on this bog.

The current implementation is really simple. When I build the site it reads the Backloggd RSS feed and copies the post to here, but it’s on my todo list to start caching rebogs locally to have a copy preserved and hopefully to stop this getting too slow if the number of posts I’m syndicating gets too large.

In the RSS feed the link points to the original review on Backloggd rather than to the copy on this site. I’m not sure which it should link to. If you have an opinion on that feel free to share.

I am also planning on doing this for Letterboxd as well and maybe something similar for books. Is there any decent alternative to Goodreads?


Caoimhe

This post demonstrates custom CSS that won’t display in RSS readers.

One of the things that Cohost taught me is that CSS is actually fun. Styling a website is a really lovely form of self-expression and I have been really enjoying styling this website1. And I thought I’d highlight some of the things I’ve done.

Colours

The site has two different colour schemes for dark mode and light mode. I much prefer the dark mode one, but then I generally use dark mode for everything I can. The dark mode has a cool, blue palette while the light mode uses a warmer colour scheme with oranges and peach colours. Most of the colours I used are picked from the Pico-8 palette.
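
The usual mechanics for this sort of thing are a prefers-color-scheme media query swapping the colours out; a rough sketch with illustrative Pico-8 colours rather than my actual CSS:

:root {
  --background: #fff1e8; /* peachy white */
  --text: #5f574f;
  --link: #ab5236;
}

@media (prefers-color-scheme: dark) {
  :root {
    --background: #1d2b53; /* dark blue */
    --text: #c2c3c7;
    --link: #00e436;
  }
}

body {
  background-color: var(--background);
  color: var(--text);
}

a {
  color: var(--link);
}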

There is a gradient as you scroll down the page in both colour schemes, each ending in a different footer image2. In dark mode stars also come out as you scroll down.

External links and internal links have different colours3 and some links have special decorations. If I link to the Atom feed for the bog or my page about Snolf they get little Nintendo dialogue icons appended to them, and if I link to Transy it uses the typeface that she talks in: Hobo.

This applies whenever those specific things are linked to and I don’t need to do anything special with this post to apply them.
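
That sort of styling can be done with attribute selectors matching the href; a simplified sketch (the selectors, colour and icon file here are made up):

/* external links get their own colour */
a[href^="http"]:not([href^="https://oakreef.ie"]) {
  color: #29adff;
}

/* any link to the bog's feed gets a little icon appended */
a[href$="/bog.atom"]::after {
  content: url("/images/feed-icon.png");
}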

Cursors

The site also has custom cursors based off of old Windows cursors. If you mouse over the above links you might have noticed that there are also different cursors depending on what type of link they are.

Fonts

For the Irish language portions of my site I use Mínċló GC from Gaelċlo instead of Crimson Text, which is used for English text. I also use it for the title of The Bog because using silly fancy text for headings is fun. Other examples: the Gallery is Tate Regular, The “the Ring” Podcast uses Some Rings, and a bunch of other fonts I use for titles on my homepage are references to Sonic the Hedgehog, because of course they are.

Buttons!

The most important part of any site is 88×31 pixel buttons, obviously, to which I have a crippling addiction. I’ve copied some CSS from Hyphinett to embiggen them when you mouse over them and also set the rendering mode to pixelated to keep them nice and crispy.

oakreef.ie

If you have your browser set to prefer reduced motion the mouseover effect is disabled and all the animated buttons are replaced with static ones.
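
The button styling amounts to something like this (a simplified sketch of the idea rather than the exact CSS I copied; the class name is a stand-in):

/* keep 88×31 buttons crisp and embiggen them on mouseover */
.buttons img {
  image-rendering: pixelated;
  transition: transform 0.1s;
}

.buttons img:hover {
  transform: scale(2);
}

/* no embiggening for people who prefer reduced motion */
@media (prefers-reduced-motion: reduce) {
  .buttons img:hover {
    transform: none;
  }
}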

For sites that don’t have buttons I use an 88×31 image of a little piece of paper that I tore up, with the names rendered on top slightly askew in Cinema Calligraphy.

Layout

The homepage divides into multiple columns depending on the screen width. Other pages generally have a single-column layout with navigation elements on either side that collapse to the top of the page if the screen is narrow enough, like on mobile. The avatar for the bog also snaps to the top on narrow screens and otherwise sits beside posts and scrolls with the page.

Gallery pages have sets of links next to/under the title that all change to the site’s link hover colour when moused over. This is applied to images using a combination of -webkit-filter sepia and hue-rotate, and it also changes with light and dark mode. Projects with git repos have an icon here that expands into an info box with the git repo address.
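
The recolouring looks something like this, with a different hue-rotate angle for each colour scheme (the class name and values here are stand-ins, not my actual rules):

/* wash the icon out, then rotate the hue towards the link hover colour */
.gallery-links a:hover img {
  -webkit-filter: sepia(100%) hue-rotate(90deg);
  filter: sepia(100%) hue-rotate(90deg);
}

@media (prefers-color-scheme: dark) {
  .gallery-links a:hover img {
    -webkit-filter: sepia(100%) hue-rotate(180deg);
    filter: sepia(100%) hue-rotate(180deg);
  }
}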

And sometimes I just do little bespoke things for pages, such as the vertical Ogham text on the Cló Piocó-8 page. Trivia: Ogham is one of the few scripts that is written bottom-to-top.

Printing

I also have some custom CSS for printing. I don’t really plan on printing pages from this site, nor do I expect anyone else to, but it was fun to play with. Colour is drained out of the styling to save on coloured ink, links are underlined and the addresses they point to are appended after them in brackets. Videos and audio players are hidden, the link icons on gallery pages are turned into a bullet point list under the header and the comment box is hidden.
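
All of that lives in an @media print block (or a stylesheet loaded with media="print"); a trimmed-down sketch of the sort of rules involved, with stand-in class names:

@media print {
  body {
    background: none;
    color: black;
  }

  video,
  audio,
  .comments {
    display: none;
  }

  a {
    color: black;
    text-decoration: underline;
  }

  /* print the address a link points to after it, in brackets */
  a[href^="http"]::after {
    content: " (" attr(href) ")";
  }
}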

When there’s no CSS

Printing is just one alternative way I like to think about how my site could be displayed. While I don’t test the site with Netscape Navigator4, I do read back over posts in my RSS reader and sometimes check the site in the terminal-based web browser Lynx.

Again, I don’t really expect people to be navigating this site in the terminal, but it does make me mindful of how the site functions in terms of pure HTML content elements without the fancy styling, and I think it’s important to keep it understandable and navigable in that mode too. That is how the site is going to be parsed by accessibility tools. I also try to have as little Javascript involved as possible and not use it to render page content5.

At the top of this post there is a little infobox warning. There is CSS to make this eye-catching but it’s also defined as an <b> element so that even in the absence of CSS it will display bold and be a little attention-grabbing.

On gallery pages, and especially on podcast episode pages, there is a credits/links section at the bottom of the page in smaller text. There is a heading for this section saying “Credits” but it’s hidden by CSS as I thought the page flowed better without it. It’s still there for when the page is being read without CSS and the styling can’t be used to differentiate it as clearly from the main page text.

I used to put some invisible horizontal rules across the page, set not to display using CSS, that would divide the header and footer of the site from the main content to try and make it read cleaner in situations where there was no CSS. That was before I simplified the site layout somewhat and took out the more divided header and footer areas with links in them that the site used to have.

Conclusion

That’s all that I can think of off the top of my head. Bye.

  1. When it is not making me tear my hair out at least. 

  2. Both footers are modified Sonic the Hedgehog backgrounds. 

  3. Green and blue in dark mode, brown and orange in light mode. 

  4. Sorry, Luna. 

  5. Other than comments and webring stuff on the homepage. 



Caoimhe

I am not posting as much on here as I was on Cohost. One reason is that I have just been really busy and tired recently.

Another is simply that I spent a lot of time on Cohost. I have praised it a lot and how it felt more deliberate and less of a trap than other social media sites, but it was still a place I could open to kill time and scroll, driven by the small joys of getting notifications. I felt I got more out of it than other places where I did that, and it fostered that addiction a bit less, but it still did it.

But also there’s a psychological barrier. I still have this feeling that this site is something serious and I have to write in clear semi-formal prose and have something to say. Just posting “I won’t tell anyone if I win the lottery but there will be signs” feels wrong. Which is silly. This is my website. I can write anything I want on here. And I enjoy shitposting. I should be doing it. Maybe if I can get into the flow of treating this place more casually I can feel a bit more open again. Perhaps if I can get over the embarrassment of it I might even post some kink stuff.

But there is another reason too: Posting on here is much more deliberate. I use Jekyll as a static-site generator so making a post involves creating a new text file on my PC and running a small command line script to build and push the changes to my server. It’s not a huge effort, but it’s certainly more than using a website. I like building this site.

Sylvia (quoting someone else) said that “a personal website is like a model train set, in that it’s never really done and you work on it constantly in the hopes that someone will see it” and I think that’s a great comparison! I have a todo list for ideas for this site as long as my arm and as a programmer by trade I both enjoy and know how to make extra work for myself doing it. It was originally just a site built with the default Jekyll minima theme but I have bolted on a lot of extra features and generators. There’s a pipeline to build Pico-8 games from source and an entire podcast processing system and these all run every time I regenerate the site. The overhead with these wasn’t too bad at first but over time it has just taken longer and longer to build and push the site for small changes.

But this is just another problem to solve! I enjoy doing this. I already have a janky system in place for testing the site while skipping some of the more intensive steps, but I can’t use it to push changes to the live site because it would fuck up certain pages, which would get replaced with versions that are missing things.

I have some idea already of how I want to go about this and it involves dividing up the site a bit more cleanly into discrete parts. This is going to result in moving some stuff around and in particular I think I will be moving everything in the gallery to a new URL scheme so any current links to my exhibits are going to break. Sustaining a few 404s is fine and if I do find anyone linking to specific pages I might set up some manual redirects but I don’t want to have to set up a million redirect rules. I have too many already and I think I’m going to be removing most of them other than for the Atom feeds to reduce clutter as well.

And then, maybe shitposting?


Caoimhe

I am trialling a comments section using Cusdis. There should be a comment section below this and every other bog post as well as every entry in the gallery.

This means most pages on here now use Javascript which makes me a little sad but maybe it is worth it.

Or maybe I will just decide to remove this again! We’ll see.

In any case feel free to say hello in a comment below.


Caoimhe

I have decided to move the location of my RSS1 feeds. I will set up some redirects and hopefully everything will go smoothly but I decided to write this to let anyone following them know just in case it breaks something.

I’ll publish this post first then move everything a little while later to give it a chance to be picked up in RSS readers before anything has the chance to go wrong.

I am also going to change the URL scheme for posts from /year/month/day/title to /bog/title.

/blog.xml -> /bog.atom

/feed.xml -> /everything.atom

/gallery.xml -> /gallery.atom

/foṫa.xml -> /dánlann.atom

  1. Technically Atom as the new links make obvious, but everyone just calls it RSS anyway. 


Caoimhe

A different GIF will be displayed below depending on your browser’s prefers-reduced-motion and prefers-color-scheme settings. There are four different possibilities:

A white cat

I hadn’t used prefers-reduced-motion before, but I saw a chost from Kore linking to a blog post about accessibility and GIFs and decided I wanted to follow its advice. I also didn’t want to have to manually write the HTML code for it each time, though. Thankfully programming is the art of being tactically lazy: I can put some effort in up front, solve an interesting problem once and then let my site generator handle it automatically from then on.

Also thankfully I had done something like this before, after taking inspiration from how Luna’s blog handles images. I don’t have high D.P.I. images but I do have different dark and light mode versions of images for The “the Ring” Podcast series tracker chart and the Dracula International diagram I made.

The way I had initially done that was, characteristically, a mess. I wrote a custom Liquid tag to handle it, which meant that instead of actually using the existing, basic Markdown syntax I had to put images into my posts with something like this:

{% image /bog/images/easóg.gif %}

So revisiting this to include prefers-reduced-motion options I decided to do it differently this time. A way that would allow me to just type the normal Markdown syntax and let my code handle everything else.

![A white cat](/bog/images/easóg.gif "Easóg")

The next step was to look into how to extend and customise Jekyll’s Markdown parsing and output but that sounds hard and I didn’t want to do that so I just used a regular expression1:

/((!!?)\[([^\[\]]*)\]\((.*?) *("([^"]*)")?\))/

This runs against the raw Markdown before it’s parsed into HTML and pulls out the link, alt text and title. That last part is also a big improvement over the custom tag I previously made as that didn’t support alt text or titles at all.

The code then takes the link and checks if there are any alternative versions listed in the site’s static file list, like easóg.dark.gif, easóg.static.gif or easóg.dark.static.gif. When writing a new post now I don’t have to do anything extra other than have those other versions with the right naming scheme in the same folder as the original image.

From there it compiles it into HTML and replaces the original Markdown in the document:

<picture>
  <source srcset="/bog/images/easóg.dark.gif" media="(prefers-color-scheme: dark) and (prefers-reduced-motion: no-preference)" />
  <source srcset="/bog/images/easóg.gif" media="(prefers-reduced-motion: no-preference)" />
  <source srcset="/bog/images/easóg.dark.static.gif" media="(prefers-color-scheme: dark)" />
  <img src="/bog/images/easóg.static.gif" alt="A white cat" title="Easóg" loading="lazy" />
</picture>

Well, actually it does something else too. You might have noticed in the regular expression up above I am actually checking for an optional, second exclamation mark at the start of the image tag. That’s my own extension of the syntax. If I’m doing my own parsing I might as well go wild with it. If there are two exclamation marks at the start of the tag it also wraps the image in a link to itself and adds an extra class:

<a href="/bog/images/easóg.static.gif" class="dynamic-image-link">
  <picture>
    <source srcset="/bog/images/easóg.dark.gif" media="(prefers-color-scheme: dark) and (prefers-reduced-motion: no-preference)" />
    <source srcset="/bog/images/easóg.gif" media="(prefers-reduced-motion: no-preference)" />
    <source srcset="/bog/images/easóg.dark.static.gif" media="(prefers-color-scheme: dark)" />
    <img src="/bog/images/easóg.static.gif" alt="A white cat" title="Easóg" loading="lazy" />
  </picture>
</a>

The classes are there to enable a little bit of Javascript2 to swap out the destinations of the links on the fly if the user’s media preferences change. Whichever one you currently see in the browser is the one you’ll go to if you click on it.
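
The script itself is tiny; the gist of it is something like this (a sketch of the idea rather than my exact code):

// point each wrapped link at whichever image source the browser is currently showing
function updateDynamicImageLinks() {
  document.querySelectorAll('a.dynamic-image-link').forEach(link => {
    const img = link.querySelector('img');
    if (img && img.currentSrc) {
      link.href = img.currentSrc;
    }
  });
}

const queries = [
  window.matchMedia('(prefers-color-scheme: dark)'),
  window.matchMedia('(prefers-reduced-motion: reduce)'),
];

queries.forEach(query => query.addEventListener('change', updateDynamicImageLinks));
updateDynamicImageLinks();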

I might review the double bang syntax if I can figure out something that could be added to the tag that would get stripped out and ignored by a normal Markdown parser for better compatibility. If only Markdown had comments.

Is this a robust solution? Absolutely not! Will I eventually run into annoying weird cases that make me bang my head against the wall as a result of this? I already have! While writing this very bog post! Because the regular expression cannot tell that the Markdown code example I have above is not meant to be parsed, it turned it into HTML, making it impossible to show the before part of the before and after. Did this make me go back and implement this in a better way? No!

I added some metadata to this post telling it to disable my custom image parsing, made the parser skip doing anything if it finds that metadata on a page and then hardcoded the example at the top of this page. That’s right: this post isn’t actually using the one thing it’s meant to be demonstrating!

  1. I could have also tried parsing the resulting HTML instead of the Markdown like Luna did but that also seemed like it would take slightly more effort. 

  2. One of only three four a sadly increasing number of things Javascript is used for on the site. 


Caoimhe

I have created a new type of communication where I write articles and then “post” them to my log on the web. A “web log” if you will, or “bog” for short.

This site was originally just a gallery of things I made presented in a kind of formal, terse, way. I’m pivoting it to a personal site, though the gallery is still here.

This left me to figure out how I was going to marry the two functions for a redesign and also how to handle the transition with the existing RSS feeds. What I have settled on is having a feed for this bog, a separate feed for the gallery and having the existing feed combine both.

There is also the feed for the Irish-language version of the gallery, which will remain in place. I might try bogging in Irish too to practise it more, in which case that will probably also become a combined feed for the two. But first the codebase and CSS for this site need a major cleanup.


Oak Reef Zone

You’re on it right now.

Credits


An Caoṁlann

Tá tú uirṫi anois.

Creidiúintí