After a year I have finally added a functioning tagging system. I had been tagging posts already but the tags were pretty much non-functional, just sitting at the bottom looking pretty. Now every post has some category icons under it. Mouseover each for titles if it’s unclear what they’re meant to be. As well as posts, the tag pages include reviews and gallery exhibits.
I have done some tweaks to fonts on the site. In particular I optimised the fonts I use for headings on the homepage using the pyftsubset command from Font Tools, subsetting them to contain only the characters needed to display the text used on the site and no more. This should reduce the download footprint of the site a bit.
I don’t remember who I saw link to pyftsubset originally so can’t give credit for where I saw it, unfortunately.
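For reference, a subsetting command looks something like this (the file names and text here are made up; check the pyftsubset documentation for the full set of options):

pyftsubset heading-font.ttf \
    --text="The Bog" \
    --flavor=woff2 \
    --output-file=heading-font.subset.woff2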
I am going to walk through what I did to set up Comentario and import old comments from Cusdis. This is not a guide: the scripts posted below have serious problems that should be fixed before being used, I am not going to be the one to fix them, and you would obviously need to change any references to oakreef.ie to your own site.
Subdomain
First of all I needed to have an address to host the Comentario instance at. I chose a new subdomain at comments.oakreef.ie and had to update my Let’s Encrypt certificates to cover that new subdomain. I did not save the commands I used to do that but it was pretty straightforward to do from the command line.
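I didn’t keep a record, but with certbot it would have been something along the lines of (assuming certbot’s Nginx integration; your certificate setup may differ):

sudo certbot --nginx -d comments.oakreef.ie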
Docker
Then I installed Docker on my server and following Damien’s example with a few tweaks I created my docker-compose.yml and secrets.yaml files.
Changing the ports configuration to 127.0.0.1:5432:5432 means that the Postgres database is only accessible from the server locally and not publicly available. I also don’t have an email setup for the Comentario instance currently.
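For a rough idea of the shape of it (reconstructed from memory rather than copied from my server; the image name, internal port and secrets mount are things you should check against Comentario’s own documentation):

services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: comentario
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: CHANGE_ME
    ports:
      - "127.0.0.1:5432:5432"  # only reachable from the server itself
    volumes:
      - db_data:/var/lib/postgresql/data

  app:
    image: registry.gitlab.com/comentario/comentario
    ports:
      - "127.0.0.1:5050:80"  # Nginx proxies comments.oakreef.ie to this
    volumes:
      - ./secrets.yaml:/comentario/secrets.yaml:ro
    depends_on:
      - db

volumes:
  db_data: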
Launching the instance is then just a matter of:
sudo docker compose -f docker-compose.yml up -d
Nginx
Then I needed to modify my Nginx config to direct comments.oakreef.ie to the Comentario instance running on port 5050.
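The relevant server block is more or less the standard reverse proxy boilerplate (the certificate paths here are placeholders, not my actual config):

server {
    listen 443 ssl;
    server_name comments.oakreef.ie;

    ssl_certificate     /etc/letsencrypt/live/oakreef.ie/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/oakreef.ie/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5050;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}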
Once there were a few comments on the new system I used the export feature in Comentario to get a JSON file and looked at how Comentario defined comment data in it. I also manually went through all the comments on the old system and made a basic CSV file of them with the author name, date posted, the URL of the post the comment was on and the text of each comment. I then wrote the Python script below to take the exported Comentario comments (basedata.json) and the CSV of old Cusdis comments (comments.csv) and write out a new file with the combined data in the Comentario format.
There are some problems with this!
When importing data Comentario does not check for duplicates. Doing this I ended up creating duplicates of all the new Comentario comments that already existed on the site and had to delete them manually. If you are doing this do not include existing comments as part of the file you are creating to import.
I did not include replies at all. I decided to try importing replies I had made to people as a second, separate, step (see the second Python script below). This made things more awkward down the line. Do everything in one batch.
import csv
import json
from datetime import datetime, timezone
from dateutil.parser import parse
from pprint import pprint
from uuid import uuid4

now = datetime.now()
pages = {}
site_url = 'https://oakreef.ie'
date_format = "%Y-%m-%dT%H:%M:%SZ"
my_id = "ADMIN USER UUID"  # placeholder: the UUID of your Comentario admin user

# Read the hand-made CSV of old Cusdis comments and group them by page URL
with open('comments.csv', newline='') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',', quotechar='"')
    for row in csv_reader:
        author, date, url, text = row
        date = parse(date)
        if url not in pages:
            pages[url] = {'comments': []}
        pages[url]['comments'].append({'author': author, 'date': date, 'text': text})

# Use a Comentario export as the base to append the imported data to
with open('basedata.json') as json_file:
    data = json.load(json_file)

domainId = data['pages'][0]['domainId']

for url, page in pages.items():
    # Each page gets an entry of its own with a fresh UUID
    page_id = str(uuid4())
    data['pages'].append({
        'createdTime': now.strftime(date_format),
        'domainId': domainId,
        'id': page_id,
        'isReadonly': False,
        'path': url,
    })
    for comment in page['comments']:
        comment_id = str(uuid4())
        data['comments'].append({
            'authorCountry': 'IE',
            'authorName': comment['author'],
            'createdTime': comment['date'].strftime(date_format),
            'deletedTime': '0001-01-01T00:00:00.000Z',
            'editedTime': '0001-01-01T00:00:00.000Z',
            'html': f"<p>{comment['text']}</p>\n",
            'id': comment_id,
            'isApproved': True,
            'isDeleted': False,
            'isPending': False,
            'isSticky': False,
            'markdown': comment['text'],
            'moderatedTime': comment['date'].strftime(date_format),
            'pageId': page_id,
            'score': 0,
            'url': f'{site_url}{url}#comentario-{comment_id}',
            'userCreated': '00000000-0000-0000-0000-000000000000',  # zero UUID: not a registered user
            'userModerated': my_id,
        })

with open('import.json', 'w') as import_file:
    json.dump(data, import_file)
When that was done I put it away for a while as I wasn’t feeling well, and eventually came back to do replies. I, again, manually went through all the replies I had made to comments on the old system and made a CSV file with the reply date, the URL of the page, the UUID of the parent comment as it existed in the new Comentario system, the UUID of the page the parent comment is on in the new Comentario system, and the text of the reply.
Two things are important to note about this:
It was a pain in the hole. If I had done replies at the same time as the rest of the comments I could have used the UUIDs I was generating in the script rather than going to find them manually and copying them into a CSV.
The initial upload failed as apparently Comentario couldn’t match the page and user IDs to what was in the database and it needed those to be in the import file. I got around this by doing another export and copying the entries for pages and commenters from that into the new one and uploading. This was not a good way to do this! It could have gone badly or had unexpected side effects. Again, if you’re doing this do not import comments and replies as two separate steps!
It still didn’t fully work anyway. My replies did import and do show up on the right pages but they are not nested properly as replies. It’s like looking at the comment section on a very old YouTube video, where reply chains are broken and everything just displays as individual comments. I don’t think I am going to bother trying to fix this, as I don’t have that many comments on this site and I think everything reads understandably as it is, but if you want to try this approach you will want to figure out a way of not fucking up importing the replies.
import csv
import json
from datetime import datetime, timezone
from dateutil.parser import parse
from pprint import pprint
from uuid import uuid4

now = datetime.now()
site_url = 'https://oakreef.ie'
date_format = "%Y-%m-%dT%H:%M:%SZ"
my_id = "ADMIN USER UUID"  # placeholder: the UUID of your Comentario admin user

data = {
    "version": 3,
    "comments": [],
}

# Read the hand-made CSV of replies; the parent comment and page UUIDs are
# the ones that already exist in the new Comentario database
with open('replies.csv', newline='') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',', quotechar='"')
    for row in csv_reader:
        date, url, parent_id, page_id, text = row
        date = parse(date)
        comment_id = str(uuid4())
        data['comments'].append({
            "authorCountry": "IE",
            "createdTime": date.strftime(date_format),
            "deletedTime": "0001-01-01T00:00:00.000Z",
            "editedTime": "0001-01-01T00:00:00.000Z",
            "html": f"<p>{text}</p>\n",
            "id": comment_id,
            "isApproved": True,
            "isDeleted": False,
            "isPending": False,
            "isSticky": False,
            "markdown": text,
            "moderatedTime": date.strftime(date_format),
            "pageId": page_id,
            "parentId": parent_id,  # this is what should nest it under the original comment
            "score": 0,
            "url": f'{site_url}{url}#comentario-{comment_id}',
            "userCreated": my_id,
            "userModerated": my_id,
        })

with open('reply-import.json', 'w') as import_file:
    json.dump(data, import_file)
My avatar
One last thing: Comentario doesn’t allow GIF avatars, but I like my sparkly Jupiter. Looking at the Postgres database I could see that user avatars are simply stored as binary data in the cm_user_avatars table, in three sizes (avatar_l, avatar_m and avatar_s, corresponding to 128×128, 32×32 and 16×16 pixels respectively), so I made some GIFs in the appropriate sizes, converted them to binary strings, and overrode the avatar_l and avatar_m entries in the table manually (I left avatar_s as a JPEG).
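Something along these lines would do it, though this is a reconstruction rather than the exact code I ran, and the user_id column name is a guess; check the actual schema first:

import psycopg2  # pip install psycopg2-binary

USER_ID = 'ADMIN USER UUID'  # the Comentario user whose avatar to replace

conn = psycopg2.connect(host='127.0.0.1', port=5432,
                        dbname='comentario', user='postgres', password='...')
with conn, conn.cursor() as cur:
    for column, path in [('avatar_l', 'jupiter-128.gif'), ('avatar_m', 'jupiter-32.gif')]:
        with open(path, 'rb') as f:
            # Interpolating the column name is only OK here because it comes
            # from a hardcoded list, never from user input
            cur.execute(f'UPDATE cm_user_avatars SET {column} = %s WHERE user_id = %s',
                        (psycopg2.Binary(f.read()), USER_ID))
conn.close()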
You can comment with or without setting up an account. If you create an account please don’t forget your password, because I do not have an email account set up for it to send out password resets. But also please don’t reuse a password, because you should not trust me with that.
I will try to transfer old comments from Cusdis over but that won’t happen immediately.
Small little bit of Javascript added to the site for navigation with Vim-style keyboard shortcuts. Ctrl+→ and Ctrl+← will navigate to the previous and next post, respectively. This works both on pages for individual posts and on the main bog page, and also on pages for podcast episodes and gallery exhibits.
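The gist of it is something like this (a sketch rather than the actual script; it assumes the previous and next URLs are exposed as rel links in the page head, which is just one way to do it):

document.addEventListener('keydown', (event) => {
  if (!event.ctrlKey) return;
  // rel="prev"/rel="next" links are an assumption about how the page exposes its neighbours
  const prev = document.querySelector('link[rel="prev"]');
  const next = document.querySelector('link[rel="next"]');
  if (event.key === 'ArrowRight' && prev) window.location.href = prev.href;
  if (event.key === 'ArrowLeft' && next) window.location.href = next.href;
});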
Cusdis appears to not be refreshing my monthly comment allowance so I am not able to approve any new comments. I reported the issue a week ago but I think the developer is not currently working on the project. I may look into getting a self-hosted version set up and migrating all existing comments to it, but I am not sure when I’ll be able to get that done.
I made some updates to the site yesterday, including a smaller feed at /beag.atom1 that only includes posts I write for this site itself, not the ones copied from my Letterboxd, Serializd, Backloggd or Fediverse accounts or things that I repost. Update: It will still include reposts.
I also added some art by Kate Barrett to the Transy page that I had forgotten to include.
Otherwise it was mostly some layout and styling tweaks and fixes. Text in dark mode should be a bit brighter now, there are custom text selection colours, and on the homepage there’s one new 88×31 pixel button and an infobox telling you to install an adblocker if you don’t have one.
I’ve been testing how I want to handle rebogs on this site.
The previous post was rebogged entirely manually. I wrote a post in the normal format for Jekyll and defined metadata for the rebog information to link back to Freja’s website and display her avatar. Now I have to figure out how I want to streamline that process.
For posts syndicated from Letterboxd and such I have a setup where, when I build the site, some code checks those feeds and processes them into their own special folders, which then get processed and added into the list of posts.
For rebogs, though, I think I’m going to do it differently and write a script, probably in Python, that I can run from my command line and give a link to a post. It will attempt to parse the content of the post, then write it to a file directly in the same folder as my normal posts, but with the extra metadata I’ve defined for rebogs.
Continuing to crib from Natalie I have finally gotten around to trying out webmentions for this site.
I had bookmarked her posts on it and made notes and was going to get around to implementing it myself, when I thought “hey, I’m using Jekyll; has a webmentions plugin already been made for Jekyll?” and the answer was, of course, it had. Adding it was very straightforward and hopefully it works out of the box.
The previous version of this site was originally just the gallery and it included an Atom feed honestly mostly just because I wanted to understand better how RSS worked and it was an interesting and fun thing to make. When I made the bog instead of retiring the old feeds I added a new one and then made a combined one that had everything and as I’ve reorganised things this has become a pain.
And the gallery is a record of past things I’ve worked on. The dates listed are retroactive. Even when I add new things they’re usually pretty heavily back-dated. It’s not really an appropriate use of an RSS feed. You shouldn’t really be adding things that have dates months or even years in the past. And when I add something there now I’m probably going to have a post about it anyway. So I’ve decided to simplify things. I am going to remove the other feeds and redirect them to the one for the bog. If I add something to the gallery there’ll be a post about it and obviously the two podcasts have their own feeds. Because they’re podcasts and that’s how podcasts work.
I bookmarked a couple of posts from Natalie ages ago about h-entry and have finally gotten around to marking up my posts with them.
Hopefully I didn’t mess anything up and everything is parsable now. I should have done this sooner, as it was fairly simple, but better late than never.
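For anyone unfamiliar, h-entry is just a set of standard class names sprinkled over HTML you already have; a simplified illustration (not my actual templates) looks like:

<article class="h-entry">
  <h1 class="p-name">Post title</h1>
  <a class="p-author h-card" href="https://oakreef.ie">Oakreef</a>
  <time class="dt-published" datetime="2024-01-01">1 January 2024</time>
  <div class="e-content">
    <p>The post itself.</p>
  </div>
  <a class="u-url" href="https://oakreef.ie/bog/post">Permalink</a>
</article>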
Now I have that set up as well as syndicating posts from my Backloggd and Letterboxd feeds. Next steps in trying to get set up to be part of the sociable web: Webmentions and figuring out how I want to handle rebogging individual posts.
This post demonstrates custom CSS that won’t display in RSS readers.
One of the things that Cohost taught me is that CSS is actually fun. Styling a website is a really lovely form of self-expression and I have been really enjoying styling this website1. And I thought I’d highlight some of the things I’ve done.
Colours
The site has two different colour schemes for dark mode and light mode. I much prefer the dark mode one, but then I generally use dark mode for everything I can. The dark mode has a cool, blue palette while the light mode uses a warmer colour scheme with oranges and peach colours. Most of the colours I use are picked from the Pico-8 palette.
There is a gradient as you scroll down the page in both colour schemes, ending in different footer images2. In dark mode stars also come out as you scroll down.
Links
External links and internal links have different colours3, and some links have special decorations: if I link to the atom feed for the bog or to my page about Snolf they get little Nintendo dialogue icons appended to them, and if I link to Transy it uses the typeface that she talks in, Hobo.
This applies whenever those specific things are linked to and I don’t need to do anything special with this post to apply them.
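That works with plain attribute selectors; something roughly like this (the selectors and file paths here are illustrative, not copied from my stylesheet):

/* decorate any link pointing at the bog's atom feed, wherever it appears */
a[href$="/bog.atom"]::after {
  content: url('/images/feed-icon.png');
}
/* links to Transy's page get set in the typeface she talks in */
a[href*="/transy"] {
  font-family: 'Hobo', cursive;
}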
Cursors
The site also has custom cursors based on old Windows cursors. If you moused over the links above you might have noticed that there are also different cursors depending on what type of link they are.
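Custom cursors are only a couple of lines of CSS (the cursor file paths here are placeholders):

body {
  cursor: url('/cursors/arrow.cur'), auto;
}
a[href^="http"] {
  /* external links get their own cursor; the keyword fallback is required */
  cursor: url('/cursors/external.cur'), pointer;
}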
Fonts
For the Irish language portions of my site I use Mínċló GC from Gaelċlo instead of Crimson Text, which is used for English text. I also use it for the title of The Bog, because using silly fancy text for headings is fun. Other examples: the Gallery title is Tate Regular, The “the Ring” Podcast uses Some Rings, and a bunch of other fonts I use for titles on my homepage are references to Sonic the Hedgehog, because of course they are.
Buttons!
The most important part of any site is 88×31 pixel buttons, obviously, to which I have a crippling addiction. I’ve copied some CSS from Hyphinett to embiggen them when you mouse over them, and also set image-rendering to pixelated to keep them nice and crispy.
If you have your browser set to prefer reduced motion the mouseover effect is disabled and all the animated buttons are replaced with static ones.
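In outline the effect is something like this (a sketch; Hyphinett’s actual CSS may well differ, and the class name is made up — swapping the animated GIFs for static ones happens separately):

.button-88x31 {
  image-rendering: pixelated; /* keep the pixels crisp when scaled */
  transition: transform 0.1s;
}
.button-88x31:hover {
  transform: scale(2);
}
@media (prefers-reduced-motion: reduce) {
  .button-88x31:hover {
    transform: none; /* no embiggening for reduced-motion users */
  }
}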
For sites that don’t have buttons of their own I use a little 88×31 image of a torn-up piece of paper, with the site’s name rendered on top slightly askew in Cinema Calligraphy.
Layout
The homepage divides into multiple columns depending on the screen width. Other pages generally have a single-column layout with navigation elements on either side that collapse to the top of the page if the screen is narrow enough, like on mobile. The avatar for the bog also snaps to the top on narrow screens and otherwise sits beside posts and scrolls with the page.
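The collapsing is standard media query stuff; very roughly (hypothetical class names):

.page {
  display: grid;
  grid-template-columns: 12rem minmax(0, 40rem) 12rem; /* nav, content, nav */
}
@media (max-width: 50rem) {
  .page {
    display: block; /* everything stacks, navigation ends up on top */
  }
}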
Gallery exhibits
Gallery pages have sets of links next to or under the title that all change to the site’s link hover colour when moused over. This is applied to the images using a combination of -webkit-filter with sepia and hue-rotate, and it also changes with light and dark mode. Projects with git repos have an icon here that expands into an info box with the git repo address.
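The filter trick works by forcing the image to a known hue with sepia and then rotating it from there; something like this, where the exact values are made up and would be tweaked until they match the site’s hover colour:

.gallery-title-links img:hover {
  /* sepia() pushes the image to a known brownish hue, hue-rotate() shifts it from there */
  -webkit-filter: sepia(1) saturate(3) hue-rotate(120deg);
  filter: sepia(1) saturate(3) hue-rotate(120deg);
}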
And sometimes I just do little bespoke things for pages, such as the vertical Ogham text on the Cló Piocó-8 page. Trivia: Ogham is one of the few scripts that is written bottom-to-top.
Printing
I also have some custom CSS for printing. I don’t really plan on printing pages from this site, nor do I expect anyone else to, but it was fun to play with. Colour is drained out of the styling to save on coloured ink, links are underlined instead and the addresses they point to are appended after them in brackets. Videos and audio players are hidden, the link icons on gallery pages are turned into a bullet point list under the header, and the comment box is hidden.
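The appended addresses in particular are a classic print stylesheet trick; the whole thing lives in a print media query along these lines (a simplified sketch):

@media print {
  a {
    color: inherit;
    text-decoration: underline;
  }
  /* print the destination after each link */
  a[href^="http"]::after {
    content: " (" attr(href) ")";
  }
  video, audio {
    display: none;
  }
}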
When there’s no CSS
Printing is just one alternative way I like to think about how my site could be displayed. While I don’t test the site with Netscape Navigator4, I do read back over posts in my RSS reader and sometimes check the site in the terminal-based web browser Lynx.
Again, I don’t really expect people to be navigating this site in the terminal, but it does make me mindful of how the site functions in terms of pure HTML content elements without the fancy styling, and I think it’s important to keep it understandable and navigable in that mode too. That is how the site is going to be parsed by accessibility tools. I also try to have as little Javascript involved as possible and not use it to render page content5.
At the top of this post there is a little infobox warning. There is CSS to make it eye-catching, but it’s also defined as a <b> element so that even in the absence of CSS it will display bold and be a little attention-grabbing.
On gallery pages, and especially on podcast episode pages, there is a credits/links section at the bottom of the page in smaller text. There is a heading for this section saying “Credits”, but it’s hidden by CSS, as I thought the page flowed better without it. It’s still there for when the page is being read without CSS and styling can’t be used to differentiate the section from the main page text as clearly.
I also used to have some invisible horizontal rules across the page, set not to display using CSS, that would divide the header and footer of the site from the main content, to try and make it read cleaner in situations where there was no CSS. That was before I simplified the site layout somewhat and took out the more divided header and footer areas with links in them that the site used to have.
Conclusion
That’s all that I can think of off the top of my head. Bye.
When it is not making me tear my hair out at least. ↩
Both footers are modified Sonic the Hedgehog backgrounds. ↩
Green and blue in dark mode, brown and orange in light mode. ↩
I am trialling a comments section using Cusdis. There should be a comment section below this and every other bog post as well as every entry in the gallery.
This means most pages on here now use Javascript which makes me a little sad but maybe it is worth it.
Or maybe I will just decide to remove this again! We’ll see.
In any case feel free to say hello in a comment below.
A different GIF will be displayed below depending on your browser’s prefers-reduced-motion and prefers-color-scheme settings. There are four different possibilities.
I hadn’t used prefers-reduced-motion before, but I saw a chost from Kore linking to a blog post about accessibility and GIFs and decided I wanted to follow it. I also didn’t want to have to manually write the HTML code for it each time. Thankfully programming is the art of being tactically lazy: I can put some effort in up front, solve an interesting problem once, and then let my site generator handle it automatically from then on.
Also thankfully I had done something like this before, after taking inspiration from how Luna’s blog handles images. I don’t have high D.P.I. images, but I do have different dark and light mode versions of images for The “the Ring” Podcast series tracker chart and the Dracula International diagram I made.
The way I had initially done that was, characteristically, a mess. I wrote a custom Liquid tag to handle it, which meant that instead of actually using the existing, basic Markdown syntax I had to put images into my posts with something like this:
{% image /bog/images/easóg.gif %}
So, revisiting this to include prefers-reduced-motion options, I decided to do it differently this time: a way that would allow me to just type the normal Markdown syntax and let my code handle everything else.

The next step was to look into how to extend and customise Jekyll’s Markdown parsing and output, but that sounded hard and I didn’t want to do it, so I just used a regular expression1:
/((!!?)\[([^\[\]]*)\]\((.*?) *("([^"]*)")?\))/
This runs against the raw Markdown before it’s parsed into HTML and pulls out the link, alt text and title. That last part is also a big improvement over the custom tag I previously made, as that didn’t support alt text or titles at all.
The code then takes the link and checks if there are any alternative versions listed in the site’s static file list, like easóg.dark.gif, easóg.static.gif or easóg.dark.static.gif. When writing a new post now I don’t have to do anything extra other than have those other versions, with the right naming scheme, in the same folder as the original image.
From there it compiles it into HTML and replaces the original Markdown in the document:
<picture>
  <source srcset="/bog/images/easóg.dark.gif" media="(prefers-color-scheme: dark) and (prefers-reduced-motion: no-preference)"/>
  <source srcset="/bog/images/easóg.gif" media="(prefers-reduced-motion: no-preference)"/>
  <source srcset="/bog/images/easóg.dark.static.gif" media="(prefers-color-scheme: dark)"/>
  <img src="/bog/images/easóg.static.gif" alt="A white cat" title="Easóg" loading="lazy"/>
</picture>
Well, actually it does something else too. You might have noticed in the regular expression up above I am actually checking for an optional, second exclamation mark at the start of the image tag. That’s my own extension of the syntax. If I’m doing my own parsing I might as well go wild with it. If there are two exclamation marks at the start of the tag it also wraps the image in a link to itself and adds an extra class:
<a href="/bog/images/easóg.static.gif" class="dynamic-image-link">
  <picture>
    <source srcset="/bog/images/easóg.dark.gif" media="(prefers-color-scheme: dark) and (prefers-reduced-motion: no-preference)"/>
    <source srcset="/bog/images/easóg.gif" media="(prefers-reduced-motion: no-preference)"/>
    <source srcset="/bog/images/easóg.dark.static.gif" media="(prefers-color-scheme: dark)"/>
    <img src="/bog/images/easóg.static.gif" alt="A white cat" title="Easóg" loading="lazy"/>
  </picture>
</a>
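Put together, the logic is roughly the following. This is a Python sketch of the idea rather than my actual build code; the site root path and helper names are made up:

import re
from pathlib import Path

# The same regular expression as above
IMAGE = re.compile(r'((!!?)\[([^\[\]]*)\]\((.*?) *("([^"]*)")?\))')

SITE_ROOT = Path('_site')  # made up; wherever the generated static files live

def variant(src, suffix):
    """Return e.g. /bog/images/easóg.dark.gif if that file exists, else None."""
    stem, ext = src.rsplit('.', 1)
    candidate = f'{stem}.{suffix}.{ext}'
    return candidate if (SITE_ROOT / candidate.lstrip('/')).exists() else None

def replace_image(match):
    _, bangs, alt, src, _, title = match.groups()
    sources = []
    if dark := variant(src, 'dark'):
        sources.append(f'<source srcset="{dark}" media="(prefers-color-scheme: dark) and (prefers-reduced-motion: no-preference)"/>')
    sources.append(f'<source srcset="{src}" media="(prefers-reduced-motion: no-preference)"/>')
    if dark_static := variant(src, 'dark.static'):
        sources.append(f'<source srcset="{dark_static}" media="(prefers-color-scheme: dark)"/>')
    fallback = variant(src, 'static') or src
    title_attr = f' title="{title}"' if title else ''
    html = f'<picture>{"".join(sources)}<img src="{fallback}" alt="{alt}"{title_attr} loading="lazy"/></picture>'
    if bangs == '!!':
        # Double bang: wrap the image in a link to itself
        html = f'<a href="{fallback}" class="dynamic-image-link">{html}</a>'
    return html

def process(markdown):
    return IMAGE.sub(replace_image, markdown)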
The classes are there to enable a little bit of Javascript2 to swap out the destinations of the links on the fly if the user’s media preferences change. Whichever version you currently see in the browser is the one you’ll go to if you click on it.
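That swap can be done by listening for media query changes and reading back whichever image the browser actually picked; a sketch:

// A sketch of the idea, not the actual script
for (const query of ['(prefers-color-scheme: dark)', '(prefers-reduced-motion: reduce)']) {
  window.matchMedia(query).addEventListener('change', updateImageLinks);
}

function updateImageLinks() {
  for (const link of document.querySelectorAll('a.dynamic-image-link')) {
    const img = link.querySelector('img');
    // currentSrc reflects whichever <source> the browser chose
    if (img && img.currentSrc) link.href = img.currentSrc;
  }
}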
I might review the double bang syntax if I can figure out something that could be added to the tag that would get stripped out and ignored by a normal Markdown parser, for better compatibility. If only Markdown had comments.
Is this a robust solution? Absolutely not! Will I eventually run into annoying weird cases that make me bang my head against the wall as a result of this? I already have! While writing this very bog post! Because the regular expression cannot tell that the Markdown code example I have above is not meant to be parsed, it turned it into HTML, making it impossible to show the before part of the before and after. Did this make me go back and implement this in a better way? No! I added some metadata to this post telling it to disable my custom image parsing, made the parser skip doing anything if it finds that metadata on a page, and then hardcoded the example at the top of this page. That’s right: this post isn’t actually using the one thing it’s meant to be demonstrating!
I could have also tried parsing the resulting HTML instead of the Markdown like Luna did but that also seemed like it would take slightly more effort. ↩
One of only three, then four, now a sadly increasing number of things Javascript is used for on the site. ↩