Fablehenge Tech Stack

At Fablehenge, we value craftsmanship. We try to choose technologies that we love working with and that allow us to move quickly and effectively.

This article, like all of our technical articles, will be a deep dive, likely of more interest to the software developers out there than to authors. Not to worry! We have many articles queued up on the what and why of writing.

Fablehenge is written in TypeScript using the Svelte framework. We don’t mind TypeScript. We love Svelte. We originally wrote the product in React, but rewrote it in Svelte after struggling with the available React drag-and-drop libraries. The rewrite took less time than we had spent fighting with React for one feature. I wrote about this process in more detail on my personal blog, if you are interested.

Svelte is such a delight to use. It gets out of our way and allows us to ship things quickly. Neither of us ever wants to work with React again if we can avoid it. If you aren’t on the Svelte bandwagon, join us. It makes frontend web development fun… for the first time in the history of frontend web development!

We currently use Flowbite-Svelte as our component library, but we don’t love it. Flowbite itself is pretty awesome, but the fact that it is styled with Tailwind is a huge drawback. For legitimate reasons, Tailwind is very popular in the Svelte community; Svelte doesn’t have a great way of passing styles down to child components, so people have resorted to passing classes instead, and Tailwind supports that very well. But it’s the wrong paradigm. If we had it to do all over again, we would not use a component library at all. We’d use melt-ui for managing the state of complex components, and style everything with open-props. In fact, this is how we chose to style this blog, and we are hoping to migrate the main application to it someday. But probably not until after Svelte 5 is released.
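To make that concrete, here is a minimal sketch (not our actual code) of the class-passing pattern the Svelte community has settled on: the child exposes a "class" prop, and the parent pushes Tailwind-style utility classes into it.

```svelte
<!-- Button.svelte: a hypothetical child component that accepts a class from its parent -->
<script lang="ts">
  // "class" is a reserved word, so the conventional workaround is to alias it.
  let className = '';
  export { className as class };
</script>

<!-- The parent's classes are appended to the component's own -->
<button class="btn {className}" on:click>
  <slot />
</button>
```

A parent can then write something like <Button class="bg-blue-500 text-white">Save</Button>, which works nicely with Tailwind but leaves you styling components with strings of classes rather than real CSS, which is the paradigm mismatch we mentioned above.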

We operate in a “nearly backendless” environment, thanks to the excellent Dexie Cloud service. Dexie Cloud takes care of almost everything for us, including storage, syncing, and backups. We could rely on it entirely, but there are a few things we’d rather have more control over, so we make extensive use of the REST API and customizations that Dexie Cloud offers. If you are thinking of building a web application, Dexie Cloud has our highest recommendation.
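For a sense of how little code the “nearly backendless” setup takes, here is a minimal sketch of wiring Dexie Cloud into a Dexie database. The database name, table schema, and URL are placeholders, not our real schema.

```typescript
import Dexie from 'dexie';
import dexieCloud from 'dexie-cloud-addon';

// Placeholder database and schema, just to show the shape of the setup.
const db = new Dexie('fablehenge-demo', { addons: [dexieCloud] });

// The '@' prefix asks Dexie Cloud to generate globally unique ids for synced rows.
db.version(1).stores({
  books: '@id, title',
  scenes: '@id, bookId, title',
});

db.cloud.configure({
  databaseUrl: 'https://your-db.dexie.cloud', // placeholder URL
  requireAuth: true, // authors must be logged in before the database syncs
});

export default db;
```

Storage, sync, and offline support largely fall out of that configuration; the REST API and customizations come in only where we want finer control.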

Fablehenge is deployed as a statically generated site. We experimented with SvelteKit’s built-in SSR modes, but felt that the overhead of maintaining a server, plus losing the ability to deploy on the edge, was not worth it. To be honest, I think SSR is almost always a mistake when it comes to web applications. In our case, the one benefit of SSR over a static site (being able to load data on the server) is almost completely negated by Dexie Cloud. We have very few fetch requests in our app, and none of them happen on initial page load.
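In SvelteKit terms, a setup like ours boils down to a few route options paired with adapter-static; the sketch below shows one way to express it (our actual configuration may differ in detail).

```typescript
// src/routes/+layout.ts (applies to every route beneath it)
export const ssr = false;      // never render on a server; the app boots in the browser
export const prerender = true; // emit static HTML shells at build time
export const csr = true;       // client-side rendering takes over after the first load
```

Because Dexie Cloud keeps the data local and synced, there is nothing for a server to fetch on first load anyway.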

We deploy our services on Render.com. Deploys are quick and seamless. It’s kind of weird, but the best thing I can say about Render.com is that we have no complaints. I know that doesn’t sound like much of an endorsement, but every other infrastructure provider I’ve tried over my career has caused me and my teams endless complaints. “No complaints” is exactly what I want from a cloud infrastructure provider, and no more!

We use Passage (by 1password) as our authentication and identity provider. It is a young service that has had some teething problems, but considering the awful experiences we have had with various competitors, we knew we wanted to try something new. We love that Passage has doubled down on the new Passskeys technology. In the long run this will be safer and more enjoyable for our authors, but we know that Passkeys don’t work for everyone yet. Luckily, Passage has reasonable fallbacks in place.
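On the frontend, Passage ships a web component that handles the passkey ceremony (and its fallbacks) for you. A sketch of the drop-in, with a placeholder app id, looks roughly like this; our real integration has more around it.

```svelte
<script lang="ts">
  import { onMount } from 'svelte';

  onMount(async () => {
    // Registering the custom element client-side only keeps it out of the static build.
    await import('@passageidentity/passage-elements/passage-auth');
  });
</script>

<!-- Renders Passage's hosted login/registration flow: passkeys first, fallbacks behind it -->
<passage-auth app-id="YOUR_PASSAGE_APP_ID"></passage-auth>
```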

We do maintain a small backend for a few features that Dexie Cloud either doesn’t implement or doesn’t implement the way we want it to. The main endpoint is for linking Dexie Cloud to Passage, but we also have several endpoints for things like Stripe webhooks and authenticated queries to our AI providers.

Our backend is implemented in TypeScript and built on the high-speed Hono framework, deployed with Bun for performance and ease of use. We don’t have strong feelings about any of these technologies. We had originally implemented the backend in Go (mostly because I wanted to study something new), but we found Go not to be very compelling. On the one hand, it isn’t terribly safe and doesn’t support fearless concurrency; on the other hand, it doesn’t give us the usability of TypeScript or Python, nor the power and performance of Rust or C++. Go feels like all the wrong compromises.
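To give a flavour of how small that backend is, here is a minimal Hono-on-Bun sketch with a placeholder route standing in for the kinds of endpoints mentioned above; the paths and handlers are illustrative, not our production code.

```typescript
import { Hono } from 'hono';

const app = new Hono();

app.get('/health', (c) => c.json({ ok: true }));

// Placeholder webhook endpoint; real signature verification is omitted here.
app.post('/webhooks/stripe', async (c) => {
  const payload = await c.req.text();
  // ...verify the signature and dispatch on the event type...
  return c.json({ received: true });
});

// Bun serves the exported app directly: `bun run src/index.ts`
export default app;
```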

We use LogRocket and Honeycomb for observability and love them both. Not a lot to say about them except that they work well. As with deployment, this is something you would think is reasonable to expect from an observability framework, but we’ve had so many bad experiences that “work well” counts as high praise. So we highly recommend both services.
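As an example of how little there is to it, the client-side LogRocket hook-up is a couple of lines; the app and user identifiers below are placeholders.

```typescript
import LogRocket from 'logrocket';

// Identifiers are placeholders; swap in your own LogRocket app id.
LogRocket.init('your-org/your-app');

// Optionally tie session replays to an authenticated user.
LogRocket.identify('user-123', { plan: 'author' });
```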

For our AI services, we explicitly do not use OpenAI. We do not trust OpenAI to place its users’ well-being at the forefront. Instead, our AI services are provided by Cohere (for text generation) and Stability (for images). Stability is far superior to DALL-E for image quality, and Cohere is faster than ChatGPT and about as accurate, at least for our workloads.
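The text-generation calls go through our backend (see the Hono sketch above). A hedged sketch of that kind of request against Cohere’s generate endpoint might look like the following; the model name, prompt, and parameters are placeholders rather than our production settings.

```typescript
// A hypothetical proxy call from our backend to Cohere's text generation API.
const response = await fetch('https://api.cohere.ai/v1/generate', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.COHERE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'command', // placeholder model name
    prompt: 'Summarize this scene in two sentences: ...',
    max_tokens: 120,
  }),
});

const data = await response.json();
console.log(data.generations?.[0]?.text);
```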

Finish your novel faster with Fablehenge.

A novel writing platform that keeps all your notes at your fingertips.