-
With the flurry of interest in Mastodon and ActivityPub, I was hoping to run an instance on my own domain. However, to make things more challenging, I wanted to see if there was a way to do it without paying for any more infrastructure.
I set up my Raspberry Pi 400 as per the installing Mastodon from source instructions (though I needed to follow Tom’s advice and `export NODE_OPTIONS=--openssl-legacy-provider` in order for the JavaScript dependencies to install on Node.js 18) and set about exposing it to the internet.

This led me to discover that my home network had a double NAT: my ISP’s router was running one network and my eeros were running another. After repeatedly breaking my entire network to try and fix it (which involved SSHing into my router as `engineer` to get my PPPoE credentials) before discovering my eeros don’t support connecting directly to the internet, I ended up forwarding both HTTP and HTTPS to my instance.

As I didn’t want to pay for a static IP, I wrote a bash script to update a `CNAME` on mudge.name every five minutes with my public IP as reported by ipify, using Gandi’s LiveDNS API.
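The script boils down to two steps: ask ipify for the current public IP, then PUT it to Gandi’s LiveDNS API. Here’s a rough Ruby sketch of that idea rather than the actual script (the `social` record name, the use of an `A` record to hold the IP and the `GANDI_API_KEY` environment variable are all illustrative assumptions):

```ruby
require "json"
require "net/http"
require "uri"

# Illustrative sketch (not the real script): fetch the current public IP from
# ipify and upsert a record for it via Gandi's LiveDNS API. Intended to be run
# from cron every five minutes.
public_ip = Net::HTTP.get(URI("https://api.ipify.org"))

uri = URI("https://api.gandi.net/v5/livedns/domains/mudge.name/records/social/A")
request = Net::HTTP::Put.new(uri)
request["Authorization"] = "Apikey #{ENV.fetch("GANDI_API_KEY")}"
request["Content-Type"] = "application/json"
request.body = JSON.generate("rrset_values" => [public_ip], "rrset_ttl" => 300)

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts "#{Time.now}: #{public_ip} (#{response.code})"
```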
I then followed the instructions to use `@mudge@mudge.name` rather than `@mudge@social.mudge.name` by serving a response for `https://mudge.name/.well-known/host-meta` that pointed to my instance. Tom pointed out this works because Mastodon will fall back to checking `host-meta` if its WebFinger request fails with a 404.
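Concretely, that means mudge.name has to answer `https://mudge.name/.well-known/host-meta` with an XRD document whose `lrdd` link template points at the instance’s WebFinger endpoint. As a minimal sketch, assuming a Rack `config.ru` rather than however the site actually serves it:

```ruby
# config.ru — a minimal sketch assuming mudge.name is served by a Rack app
# (a static file or web server rewrite would work just as well). The XRD
# sends WebFinger lookups on to the Mastodon instance at social.mudge.name.
HOST_META = <<~XML
  <?xml version="1.0" encoding="UTF-8"?>
  <XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
    <Link rel="lrdd" template="https://social.mudge.name/.well-known/webfinger?resource={uri}"/>
  </XRD>
XML

run lambda { |env|
  if env["PATH_INFO"] == "/.well-known/host-meta"
    [200, { "Content-Type" => "application/xrd+xml" }, [HOST_META]]
  else
    [404, { "Content-Type" => "text/plain" }, ["Not Found"]]
  end
}
```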
Once everything was working, I started to grow a little unsure of the wisdom of exposing two ports into my home network from the internet despite my attempts to lock things down with `iptables` and Fail2ban.

Several days in, we had a power cut after which I could no longer SSH into my server. At that point, I decided I’d leave the hosting to James and stick with Ruby.social.
-
I’ve been working on a streaming API using Rack Hijacking. As this involves working directly with sockets, we needed a way to test this alongside our more typical Rails controllers.
We settled on using `UNIXSocket.pair` to simulate the two ends of a streaming connection: one socket for the client end and one socket for the server end. We can pass the server end into our request `env` as the result of calling `rack.hijack` and then read off anything that has been written by the application, e.g.

```ruby
it "streams the headers to the client" do
  client, server = UNIXSocket.pair

  get "/stream", env: { "rack.hijack" => -> { server } }

  headers = client.readline("\r\n\r\n")

  expect(headers).to eq("HTTP/1.1 200\r\nContent-Type: text/event-stream\r\n\r\n")
end
```
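For context, the application side that this exercises looks roughly like the following sketch (not the real controller: the `StreamsController` name and the single `data: hello` event are made up for illustration). The action performs a full hijack, takes ownership of the socket and writes a raw HTTP response to it directly.

```ruby
# A simplified sketch of the kind of controller the spec above drives.
class StreamsController < ApplicationController
  def show
    # Take over the socket from the web server; from here on we're
    # responsible for writing a complete HTTP response to it ourselves.
    io = request.env["rack.hijack"].call

    io.write("HTTP/1.1 200\r\nContent-Type: text/event-stream\r\n\r\n")
    io.write("data: hello\n\n")
  ensure
    io&.close
  end
end
```

In the spec, `rack.hijack` is just a lambda returning the server socket, so everything the action writes can be read straight back off the client socket.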
-
In that same project, I had occasion to re-examine MIME negotiation in Rails. I previously wrote about how Rails will always prefer its notion of `format` (e.g. from a URL or file extension) over the HTTP `Accept` header, especially if it contains a browser-like wildcard. For our API, we don’t expect our clients to be browsers and would rather Rails only decide on the appropriate content type based on the `Accept` header, even if it has a wildcard.

I wrote up an example of how to do this, including a controller spec showing how it works with various examples. The crux of it is the following:
```ruby
class ApiController < ApplicationController
  before_action :only_respect_accept_header

  private

  def only_respect_accept_header
    request.set_header(
      "action_dispatch.request.formats",
      requested_mime_types.select { |type| type.symbol || type.ref == "*/*" }
    )
  end

  def requested_mime_types
    Mime::Type.parse(request.get_header("HTTP_ACCEPT").to_s).presence || [Mime::ALL]
  end
end
```
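With that `before_action` in place, the negotiated format follows the `Accept` header alone. A hypothetical request spec (the `/api/things` route and JSON response are made up for illustration) looks something like this:

```ruby
# Hypothetical spec: with the before_action above, a wildcard in the Accept
# header no longer causes Rails to fall back to its default format.
RSpec.describe "API content negotiation", type: :request do
  it "serves JSON when the Accept header asks for it, even with a wildcard" do
    get "/api/things", headers: { "Accept" => "application/json, */*" }

    expect(response.media_type).to eq("application/json")
  end
end
```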
-
I’ve been tooting with abandon on Ruby.social. It’s far easier when people aren’t mistaking you for another mudge.
Weeknotes #100
By Paul Mucur