Asko Nõmm

Framework 16 as a Server

I started hosting my own Git server in February using Forgejo and I’ve been pretty happy with it, but it and all my sites still ran on three separate VPSes rented from UpCloud, so while I was self-hosting these things, it wasn’t really me hosting them in the end.

It also cost me roughly 60-70€ per month, the bulk of which went to Forgejo alone, as I relied on it for a number of things, from a test runner to a full CI/CD pipeline. That meant it needed quite a bit of juice, and juice adds up quickly in price.

Then at some point I had a thought - can’t I just use my Framework 16 as a server instead? It mostly sits on my desk as a Linux testing machine as I’m mostly daily driving a MacBook, and with a whopping 3.5TB of storage (expandable up to 26TB with the Dual M.2 adapter!), 64GB of RAM and a Ryzen AI 9 HX 370 CPU ... the thing is a beast.

It being a Framework also means that I could easily swap parts out if they fail, and I already have plenty of spare parts, like the previous Ryzen 7840HS mainboard just sitting in a closet.

And so I purchased a static IP from my ISP, which costs only 6€ per month, set up port forwarding on my router, set up my Framework as a server, and I’ve been running everything on my own machine, from my own home, for about a week now.

The thing flies! An absolute behemoth of a home server. It being a laptop also means that if the power goes out, it’ll be fine, though I live in a place where that happens maybe once every few years, and only for a few minutes. I’ve also made a little uptime tracker (note that the 3 minutes of downtime there is just me testing the tracker).

I’ve got daily root filesystem backups running, so worst case scenario it should be possible to get up and running again in half an hour or so, but I am contemplating getting a second Framework machine just for replication, in case one fails. We’ll see. For now though, all my hosting needs are met with just a 6€ per month static IP fee and some electricity cost. A thing of beauty, really. Here’s a photo.


The Product Engineer

It recently dawned on me that I might just be part of one of the last generations of software engineers who still understand code. Code - an artifact created to serve as a middleman between human intent and logic gates on a processor - is becoming less and less valuable in this post-LLM world.

Code is cheap now. You can feed a few sentences into an AI client and have it generate endless amounts of it in any language you want. You don’t have to “know” the code anymore; indeed, many no longer do. I suspect this trend will only accelerate.

I don’t see much incentive for knowing code anymore, other than helping you make educated decisions on performance or architectural topics, or perhaps catching the subtle mistakes AI makes. But even that, I suspect, is only because of the transition period we’re in. Frontier models coupled with good AI tooling make even those skills feel increasingly niche.

If knowing how to write code is no longer the primary value-add, then how, as a software engineer, can I provide value? Well, maybe our job title needs to change.

I think we need to zoom out a bit. What we really do - and what we’re here for - isn’t just writing code. It’s solving business problems by creating digital products. Code is just the medium we’ve used thus far. As the level of abstraction has increased over the decades, so has the range of our responsibilities.

Perhaps a fitting title for us in this brave new world is Product Engineer.

Think about it: we define and architect detailed specs based on customer needs and business constraints. We know exactly the level of fidelity required for an AI to produce useful software. We are active participants in an iterative process, ensuring quality along the way and culminating in a finished solution.

Throughout this process, we communicate with various teams and divisions to bring disparate knowledge together into one cohesive whole: the product. We’re becoming product engineers, touching every aspect of the build.


Invobi 1.0

I’m happy to say that after many iterations, I’m finally done and out with my invoicing service, Invobi.

Initially it was a web-based invoicing service. Then it became a web-based invoicing service that interfaced with AIs through APIs, so you could just describe your invoice. That worked reasonably well, and I was just about to announce it as the version I was happy and done with, but the amount of bot sign-ups and other horribleness that the modern web has become made me reconsider.

At work I’ve been building MCPs, and setting aside the fuck-ups of Microsoft and others, I do see massive utility in today’s frontier models using tools via MCPs. Done well, I can totally see how it will shape the future of how we interact with technology, so I figured: why not just make Invobi into an MCP as well?

There are no web sign-ups, no database I have to maintain, no nothing. The only service I now run is the license/download service, but it has no database, and the downloads are proxied through my own Forgejo instance. In essence, I just distribute a binary that is the MCP server and sell (offline) license keys to it. It’s just a one-time payment of $5 and the license key is valid for a year. Enjoy!


Ruuter 2.1

We’re rolling now! Just as I got Ruuter 2.0 released with huge performance improvements, I went ahead and added support for Jank as well. Ruuter is now a four-runtime library: Clojure, ClojureScript, Babashka and Jank.

I’m very excited for Jank; even though it’s currently the slowest of the four when it comes to performance, it should eventually become the fastest, considering it runs on LLVM.


Ruuter 2.0

Ruuter, my zero-dependency, runtime-agnostic router for Clojure, ClojureScript and Babashka, has a new release out. A pretty hefty one at that:

  • Best-match routing: Routes are now matched by specificity instead of first-match-wins. Literal segments beat parameters, parameters beat optionals, optionals beat wildcards. Route order in the vector no longer matters.
  • Segment trie: Routes are compiled into a trie (prefix tree) data structure for O(path-depth) matching instead of O(N) linear scan. This yields 4–380x performance improvements depending on route count and match type.
  • compile-routes function: New public function for explicit route compilation. Routes are also compiled implicitly and cached via memoization when using route directly.
  • Single wildcard constraint: Wildcard parameters (:name*) must now be the last segment in a path. Multiple wildcards per path are no longer supported.
  • No regex: Route matching no longer uses regular expressions. Matching is done via direct string comparison of path segments against a trie.
  • deps.edn only: Leiningen (project.clj) has been retired. All build, test, and benchmark tasks use deps.edn and bb.edn.
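To make the best-match and trie bullets concrete, here’s a minimal, hypothetical Python model of specificity-ordered segment-trie matching. The names (compile_routes, match) and the structure are illustrative only, not Ruuter’s actual implementation, and optional segments are omitted for brevity:

```python
# Hypothetical sketch of specificity-ordered segment-trie routing.
# Illustrates the idea described above; NOT Ruuter's actual code.

LITERAL, PARAM, WILDCARD = 0, 1, 2   # lower number = more specific
HANDLER = object()                   # sentinel key meaning "a route ends here"

def compile_routes(routes):
    """Compile (path, handler) pairs into a segment trie."""
    trie = {}
    for path, handler in routes:
        node = trie
        for seg in path.strip("/").split("/"):
            if seg.startswith(":") and seg.endswith("*"):
                key = (WILDCARD, seg[1:-1])   # :name* captures the rest of the path
            elif seg.startswith(":"):
                key = (PARAM, seg[1:])        # :id captures exactly one segment
            else:
                key = (LITERAL, seg)          # plain segment, exact string match
            node = node.setdefault(key, {})
        node[HANDLER] = handler
    return trie

def match(trie, path):
    """Walk the trie one segment at a time: O(path depth), no regex."""
    segs = [s for s in path.strip("/").split("/") if s]

    def walk(node, i, params):
        if i == len(segs):
            return (node[HANDLER], params) if HANDLER in node else None
        # Try children most-specific first: literal beats param beats wildcard.
        for kind, name in sorted(k for k in node if k is not HANDLER):
            child = node[(kind, name)]
            if kind == LITERAL and name == segs[i]:
                found = walk(child, i + 1, params)
            elif kind == PARAM:
                found = walk(child, i + 1, {**params, name: segs[i]})
            elif kind == WILDCARD:  # must be last; consumes the remaining path
                rest = "/".join(segs[i:])
                found = (child[HANDLER], {**params, name: rest}) if HANDLER in child else None
            else:
                found = None
            if found:
                return found
        return None

    return walk(trie, 0, {})
```

Note how registration order doesn’t matter here: "/hi/there" beats "/hi/:id" regardless of which was added first, because at every trie level the literal child is tried before the param child, and the param child before the wildcard.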

This release brings massive performance improvements, which is something Ruuter has been sorely lacking so far:

  • JVM (Clojure)
    • Small route sets: 1.6–4.1x faster
    • Medium route sets: 39–139x faster
    • Large route sets: 162–345x faster
    • Peak throughput: ~9.8M ops/sec (literal match)
  • ClojureScript (Node.js)
    • Small route sets: 0.9–6.5x faster (literal-first is within noise; params, wildcards, and misses see large gains)
    • Medium route sets: 14–40x faster
    • Large route sets: 38–167x faster
    • Peak throughput: ~1.3M ops/sec (literal match)
  • Babashka
    • Small route sets: 2.0–6.4x faster
    • Medium route sets: 11–32x faster
    • Large route sets: 32–182x faster
    • Peak throughput: ~1.1M ops/sec (miss/404 — fast trie rejection)