This page is part of a static HTML representation of the TiddlyWiki at https://mws.tiddlywiki.com/

2025-02-18 - User Management

24th June 2025 at 12:42pm

Described here is a full-blown role-based identity and access management system. However, we are unlikely to need this level of sophistication immediately.

Logical groupings and definitions

  • A user is a logical grouping of logins.
  • An organization ("content space") is a logical grouping of users and the content they interact with.
  • The organization owner is a user who sets the privileges other users have in the organization.
    • If enabled, every user is the owner of their own personal organization.
    • If enabled, they may own extra organizations.
    • The organization owner has all privileges in the organization except those they explicitly deny themself.
    • They cannot deny themself the ability to change privileges in the organization.
    • They may delegate to managers.
  • Admins are users with site-wide privileges.
    • The admin role is separate from the owner role. They are not site-wide owners.
    • Admins can take various actions on extra organizations, depending on site settings.
    • Admins can manage users' access to the site and take related actions.
    • Depending on settings, admins cannot view content that does not belong to them unless it is shared (the exact privacy details here aren't important; it's the technical features that I'm including this for).
  • Site admins can define a public content space which everyone can view, and an additional content space which authenticated users can view. These act as implicit organizations with admins given permissions as managers.

Even though it sounds like I'm expecting this to be some kind of public document-sharing platform or online collaboration service, the setup actually has multiple use cases within a single organization that are just as complicated.

To recap, these are the various roles in an organization:

  • admin - Granted site-wide permissions.
  • owner - Granted organization level permissions.
  • manager - Delegated permission from owners.
  • user - Explicitly invited and added to the organization user list by owners or managers.
  • auth - Visitor signed in and not in the user list.
  • anon - Visitor not signed into the site.

There are two permission levels within the wiki

  • writer - can edit tiddlers, optionally filtered
  • reader - can view tiddlers, optionally filtered

Owners can create permission profiles which define reader and writer filters, then assign them to users or groups (or to their access to a specific wiki), and when they update the permission profile the changes apply everywhere.

I mean look, at some point I'm just implementing an entire Identity Access Management service.

JSON settings file

A short list of options in a JSON file alongside the database (or with the database settings) which determines some permanent settings that depend on the use case.

  • Whether admins can change user email address or oauth identity and set user passwords (account takeover).
  • Whether admins can view site content unrestricted (account privacy).
  • Sets the max visibility owners and admins can set (since that shouldn't need to change).
  • The default default visibility (the site-wide default, before organizations change it).

settings for content spaces (organizations)

  • Whether new users get their own personal content space (personal organization).
  • Whether non-admin users can be given additional content spaces (extra organizations).
  • Whether non-admin users can create their own additional content spaces (extra organizations).
  • Whether additional content spaces created by non-admin users can be removed from them by admins (this will depend on the use case).
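A minimal sketch of what such a settings file might contain, assuming the options above map onto one flat object. Every key name here is hypothetical, not a confirmed MWS setting:

```typescript
// Hypothetical shape for the JSON settings file described above.
// None of these key names are actual MWS settings.
interface SiteSettings {
  adminAccountTakeover: boolean;      // admins may change emails/passwords
  adminContentAccess: boolean;        // admins may view all site content
  maxVisibility: "private" | "authenticated" | "public";
  defaultDefaultVisibility: "private" | "authenticated" | "public";
  personalOrganizations: boolean;     // new users get a personal content space
  extraOrganizations: boolean;        // non-admins may be given extra spaces
  userCreatedOrganizations: boolean;  // non-admins may create extra spaces
  adminsRemoveUserOrganizations: boolean;
}

const settings: SiteSettings = {
  adminAccountTakeover: false,
  adminContentAccess: false,
  maxVisibility: "public",
  defaultDefaultVisibility: "private",
  personalOrganizations: true,
  extraOrganizations: false,
  userCreatedOrganizations: false,
  adminsRemoveUserOrganizations: false,
};
```

Since these settings are meant to be permanent, keeping them in a file next to the database (rather than in the database itself) makes them hard to change accidentally from the UI.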

2025-04-14 - client plugins ✔️

24th June 2025 at 1:06pm

Currently we're adding recipe tiddlers into the wiki page dynamically, which is expected, but there are like six tiddlers that are being rendered statically, which doesn't really make sense. It also doesn't make sense that we're dumping some plugins into the database but rendering core from tiddlywiki, since this results in a version mismatch.

At the same time, we really don't want to be rendering plugins every time. They do need to be cached somewhere so they can be loaded quickly. I wonder if it would work to cache them in the wiki folder, per tiddlywiki version, so if you upgrade it would just create a new folder. The boot tiddlers would be cached in the folder as well. If we do it right, we wouldn't even have to parse the file, just read it onto the wire. We'd probably need an index file to keep everything straight. We could add plugins/themes/languages support to the wiki folder as well, which would also get cached in the same way. We could make a way for caching to be disabled, perhaps by adding a field to plugin.info.

It would be useful if the plugin syntax could specify NPM modules. We already have the + and ++ syntax. I'm not sure exactly how it'd work, but the NPM package would need to determine its own path, which is fairly simple, and then export that so it can be imported via the standard import mechanism. Obviously the package would need to be installed, and it should probably be imported into the run file and then added as an absolute path to the list of imports. Actually, I guess that's already possible, so I just need to add the list of imports part of it.


Add a tiddler cache folder and render the tiddlers on startup

  • Store them either in-memory or in the file-system, depending on user preference.
  • The core tiddler and boot tiddlers could be stored in memory regardless because they are always served.
  • Would it save some memory to store them as a buffer?
  • The stored content would be read directly onto the wire, optionally checking the hash.

Add a plugin selector to the recipes form

  • The loading order needs to be changeable.
  • It would just be an array of titles, and possibly other describing fields.

Add a plugin field to the recipes table

  • Not sure if I need to create a plugins table but I probably just need a string array of plugin names. Titles tend to be authoritative in TiddlyWiki so the plugin title should be enough. Plugin titles which cannot be found would be somehow marked as not found, probably with a custom disabled plugin tiddler in the client and logging a warning in the server.

What about SQL Filters?

  • I don't know, but we would be caching plugins either way. Everything about it is handled differently, so no matter what, there is almost guaranteed to be a clear separation between the two kinds of bags anyway. We'll worry about that when we get there.

Retrospective - 2025-06-24

I ended up caching it according to the file path. The tiddlywiki file path works fine for those plugins. Third-party plugins will need to declare their own folder name somehow. The files are compressed and stored on disk. The cache is built on startup. The cache stores a json, js, and js.gz version of each plugin.
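The cache renditions described here could be sketched roughly like this, with a Map standing in for the on-disk cache folder. The helper and layout are illustrative, not the actual MWS code:

```typescript
import { gzipSync } from "node:zlib";

// Each plugin is cached under its declared folder path as a json
// rendition, a js rendition, and a precompressed js.gz rendition.
// (The Map stands in for the on-disk cache folder.)
type CacheEntry = { json: string; js: string; jsGz: Buffer };
const pluginCache = new Map<string, CacheEntry>();

function cachePlugin(path: string, json: string, js: string): void {
  // gzip once at startup so requests can serve the compressed bytes
  // directly, without re-compressing per request
  pluginCache.set(path, { json, js, jsGz: gzipSync(js) });
}
```

Precompressing at cache-build time means a request for the gzip rendition is a plain read, with no per-request CPU cost.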

2025-05-04 - event hooks ✔️

24th June 2025 at 1:00pm

We need a way to add hooks at most significant points of the process. Maybe all code should just be hooks. Plugins need to be able to hook in and change what they need. I'm still trying to figure out how to put all the pieces together.

I want routes to call the register function, rather than being called by setup code. But then I have to figure out where to put the register functions.

Plugins need to hook into a whole bunch of points in the setup process, but they aren't bootstrapped until after the database is connected.

I think the biggest problem is that I was originally planning to architect it with everything in the config file (similar to how a build system or server works), but it seems everyone wants to use the UI and store config in the database as much as possible, so I'll probably end up with something a lot more like WordPress. Once I rearrange everything more logically using imports, it should be fairly obvious how to use plugins.

I probably need a startup state object, which basically has the site config and rootRoute. I also need to set up the CLI listen command as outlined here.

I guess this is actually a significant rewrite, so once I get some of the initial pieces done, it should be more obvious.

-- Arlen22

Retrospective - 2025-06-24

It was a significant rewrite. I separated MWS into three packages, one package for the events, one package for generic web server stuff, and one package for all the MWS logic. Getting config from the database is pretty simple. We do it on startup and then if a setting can be changed live, we just update the config property. For those settings, we store the value in the request state to make sure each request uses the same setting for the duration of the request.
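The per-request snapshot described above might look something like this (the setting name is made up for illustration):

```typescript
// Live-changeable settings live on a shared config object; at the
// start of each request they are copied into the request state, so the
// whole request sees one consistent value even if an admin changes the
// setting mid-request. (The setting name here is hypothetical.)
const liveConfig = { attachmentSizeLimit: 100 * 1024 * 1024 };

interface RequestState { attachmentSizeLimit: number }

function beginRequest(): RequestState {
  // snapshot taken once per request
  return { attachmentSizeLimit: liveConfig.attachmentSizeLimit };
}
```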

The only downside is that we have to make sure we import all the files that use the server events, but that's a small price to pay for the much higher flexibility. I'm sure there's a lint rule we could create somehow, but I haven't gotten to that yet.

2025-05-06 - syncer design

24th June 2025 at 1:03pm

There have been several different posts complaining about the long wait times when importing huge volumes of tiddlers. I took a look at the syncer and syncadaptor this evening, and realized that most of this is caused by the design of the syncer itself, not the syncadaptor. The syncer only saves tiddlers one at a time. There's no batching. I'm not even sure if there is an option for batching.

It's probably going to be a good idea to redesign the syncer from the ground up. The syncer hooks directly into the change events on the wiki. From there it calculates which tiddlers need to be synced. I think it would be good to get as close to the wiki as possible, so hooking directly into the change events is probably a good idea.

For the purposes of MWS, the syncer is actually pretty basic, and most of the features are probably not used, however, the syncer presents a standard API surface between the UI internals and the sync adapter. If I can modify it to handle more bulk updates, that would help.

I think the syncer also needs to be updated to be more aware of the recipe. Being able to understand the recipe is kind of important in understanding how to deal with deletions efficiently. It needs to be aware of read-only tiddlers.

Speaking of recipes, I keep bouncing this idea around in my mind of recipes which are multi-layered. Each level of the recipe could be opened by admins in order to easily make changes to the bag without creating a separate recipe. You can't just edit an individual bag because you need the bags below it in order to make sense out of it.

There also isn't a good way to move tiddlers between levels. Admins will need to be given some extra tools to do this with.

I'm not good at wikitext, but I can provide the endpoints for all of these operations as well as the client-side adapters and actions to use them.

-- Arlen22

Access Control

28th April 2025 at 5:46pm

Access Control is implemented separately for both recipes and bags, but bags can inherit the ACL of recipes they are added to.

Permission Inheritance

  • Users receive combined permissions from all assigned roles
  • When roles grant different permission levels for the same resource, the higher access level is granted. For example, if one role grants "read" and another grants "write" access to a recipe, the user receives "write" access since it includes all lower-level permissions. If a role grants "admin", it inherits both "read" and "write".
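The higher-level-wins rule above can be sketched as a small helper (illustrative only, not the MWS implementation):

```typescript
// Permission levels ordered from lowest to highest; a higher level
// implies all lower ones, per the inheritance rule described above.
type Permission = "read" | "write" | "admin";
const rank: Record<Permission, number> = { read: 1, write: 2, admin: 3 };

// Combine the permissions granted by all of a user's roles for one
// resource: the highest level granted by any role wins.
function effectivePermission(granted: Permission[]): Permission | undefined {
  return granted.reduce<Permission | undefined>(
    (best, p) => (best && rank[best] >= rank[p] ? best : p),
    undefined
  );
}
```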

"Readonly" permission

If the rule above were reversed, and users with explicit "read" access were forbidden from writing, it would significantly complicate roles.

Imagine a group of engineers working on several projects. Each project has people who are responsible for editing the documentation for everyone else. So everyone needs to be granted the read permission on all projects, but only a few people are granted the write permission on each project.

The easiest approach is to maintain one role which grants "read" access to all users, and then maintain additional roles to grant "write" access to the users responsible for each project.

If granting read access explicitly prevented a user from writing, we would need to create two roles for each project, one which grants write access for the project, and another which grants read access to all other projects.

Every time a new project is added, its "all projects but one" role would need to be granted "read" access on every single wiki of every other project in the entire organization.

Allowed Methods

13th May 2025 at 10:27pm

OPTIONS, GET, HEAD, POST, PUT, DELETE

HTTP allows arbitrary methods, so I have no idea why I should only allow these. We could really go all out.

Application Access & Security

13th June 2025 at 12:19am

Permissions

  • READ - Read tiddlers from an entity.
  • WRITE - Write tiddlers to an entity.
  • ADMIN - Update ACL for an entity.

Entities

  • Bag - A collection of tiddlers with unique titles
  • Recipe - A stack of bags in a specific order. Bags may inherit the ACL of a recipe they are included in.

Roles (aka Groups)

  • Roles have names and descriptions
  • Multiple users can be assigned to a role
  • Roles are given permissions on entities

Access Levels

Conceptually, there are 6 access levels.

  • Site owner - has file system access to the server, and can run CLI commands. Presumably they also have a site admin account and they can always create one via the CLI.
  • Site admin - Users with the admin role. They can assign owners and change permissions, and have full read and write access.
  • Entity owner - Owner of an entity (bag or recipe). They can change permissions for that entity, and have full read and write access.
  • Entity admin - Granted admin permission, they can manage permissions for the entity.
  • Entity write - Granted write permission, they can list, read, create, update, and delete tiddlers, but cannot change permissions.
  • Entity read - Granted read permission on an entity. They can list and read tiddlers, but cannot do anything else.

Architecture

12th June 2025 at 5:04pm

Bags and Recipes

9th March 2024 at 2:21pm

The bags and recipes model is a reference architecture for how tiddlers can be shared between multiple wikis. It was first introduced by TiddlyWeb in 2008.

The principles of bags and recipes can be simply stated:

  1. Tiddlers are stored in named "bags"
  2. Bags have access controls that determine which users can read or write to them
  3. Recipes are named lists of bags, ordered from lowest priority to highest
  4. The tiddlers within a recipe are accumulated in turn from each bag in the recipe in order of increasing priority. Thus, if there are multiple tiddlers with the same title in different bags then the one from the highest priority bag will be used as the recipe tiddler
  5. Wikis are composed by splicing the tiddlers from the corresponding recipe into the standard TW5 HTML template
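Points 3 and 4 above can be sketched as follows (a minimal model of the accumulation rule, not the MWS implementation):

```typescript
// A bag maps tiddler titles to tiddlers; a recipe is an ordered list
// of bags from lowest to highest priority.
type Tiddler = { title: string; text: string };
type Bag = Map<string, Tiddler>;

// Accumulate recipe tiddlers: later (higher-priority) bags overwrite
// earlier ones when titles collide.
function recipeTiddlers(recipe: Bag[]): Map<string, Tiddler> {
  const result = new Map<string, Tiddler>();
  for (const bag of recipe) {
    for (const [title, tiddler] of bag) result.set(title, tiddler);
  }
  return result;
}
```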

A very simple example of the recipe/bag model might be for a single user who maintains the following bags:

  • recipes - tiddlers related to cooking recipes
  • work - tiddlers related to work
  • app - common tiddlers for customising TiddlyWiki

Those bags would be used with the following recipes:

  • recipes –> recipes, app - wiki for working with recipes, with common custom components
  • work –> work, app - wiki for working with work, with common custom components
  • app –> app - wiki for maintaining custom components

All of this will work dynamically, so changes to the app bag will instantly ripple into the affected hosted wikis.

A more complex example might be for a teacher working with a group of students:

  • student-{name} bag for each student's work
  • teacher-course bag for the coursework, editable by the teacher
  • teacher-tools bag for custom tools used by the teacher

Those bags would be exposed through the following hosted wikis:

  • student-{name} hosted wiki for each student's work, including the coursework material
  • teacher-course hosted wiki for the coursework, editable by the teacher
  • teacher hosted wiki for the teacher, bringing together all the bags, giving them an overview of all the students' work

Body Format

14th May 2025 at 1:43am

ignore
state.data will be undefined. state.reader will be draining or already closed.
stream
state.data will be undefined. You must call state.reader to get the request body.
buffer
state.data will be a Buffer.
string
state.data will be a string.
json
state.data will be read as a string (same as string) and then parsed as JSON, with some sanity checks. If the string is zero-length, undefined is returned without attempting to parse; this is the only case where undefined can be returned. If the JSON fails to parse, a 400 INVALID response is sent. __proto__ and constructor.prototype are also checked.
www-form-urlencoded and www-form-urlencoded-urlsearchparams
Parse the body using URLSearchParams, optionally converting the result to a plain object with Object.fromEntries.
const data = state.data = new URLSearchParams((await state.readBody()).toString("utf8"));
if (state.bodyFormat === "www-form-urlencoded") state.data = Object.fromEntries(data);
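The json rules above could be sketched like this. This is a simplified illustration: the real MWS checks may differ, and the guard here rejects any "__proto__" or "constructor" key rather than specifically constructor.prototype:

```typescript
// Simplified sketch of the "json" body format behaviour described
// above: zero-length input yields undefined, anything unparseable is
// a 400, and prototype-pollution keys are rejected.
function parseJsonBody(body: string): unknown {
  if (body.length === 0) return undefined; // the only case returning undefined
  try {
    return JSON.parse(body, (key, value) => {
      // reject keys that could be used for prototype pollution
      if (key === "__proto__" || key === "constructor") {
        throw new Error("forbidden key");
      }
      return value;
    });
  } catch {
    throw new Error("400 INVALID"); // sent as the HTTP response in MWS
  }
}
```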

Checklists and Features

13th June 2025 at 12:39am

ACL

  • Verify that anonymous users only have the access defined by allowAnon.
    • No access to owned bags.
    • No access to bags with ACL defined.
  • Verify that logged in users only have the access expected
    • No access to bags owned by other users.
    • Unless they are in the ACL for the bag.
    • No more access than what is granted by the ACL.
  • Verify that all admin permissions are based on the admin role, not the first user.
  • Verify that admins cannot remove the admin role from themselves.

Client Plugins

13th June 2025 at 12:45am

Client Plugins are normal TiddlyWiki plugins. They are cached on the server and served directly to the client as needed. Server Plugins, perhaps acting as third-party plugin libraries, may register additional Client Plugins and either add them to the cache immediately (if it's some small built-in plugin) or only when a recipe requires them (like a plugin library would).

On startup, all server plugins are called to generate their client plugins. The client plugins are saved in cache/${path}/plugin.*, using a relative path of the plugin's choosing. The same thing happens when a recipe's list of plugins changes.

The wiki index file itself is also rendered with an empty store and saved at cache/tiddlywiki5.html.

When a wiki page is opened, the index file is loaded and parsed. Plugins are either read directly into the store area, or script tags are inserted which point to the location of the cache folder.

MWS saves a hash of the plugin in memory so it can be served as an external JavaScript file with the script integrity attribute. The advantage of external JavaScript files is that the browser can cache them on subsequent page loads, reducing bytes transferred. Plugins can also be served directly in the index file, just like a normal single-file TiddlyWiki would. Regardless of which option is used, MWS optimizes memory usage by piping the file directly into the response (and onto the network) rather than reading everything into memory first.
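The integrity value mentioned above would be computed roughly like this (a sketch; sha256 is an assumption, as SRI also allows sha384 and sha512, and MWS's actual hash choice is not specified here):

```typescript
import { createHash } from "node:crypto";

// Compute a Subresource Integrity value for a cached plugin file so
// the <script> tag referencing it can carry an integrity attribute.
function sriHash(fileContents: Buffer): string {
  const digest = createHash("sha256").update(fileContents).digest("base64");
  return `sha256-${digest}`;
}
```

Because the hash only changes when the plugin changes, the browser can keep the cached script across page loads and the integrity attribute guarantees it still matches.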

HelloThere

12th June 2025 at 10:23pm

TiddlyWiki is Growing Up

MultiWikiServer is a new development that drastically improves TiddlyWiki's capabilities when running as a server under Node.js. It brings TiddlyWiki up to par with common web-based tools like WordPress or MediaWiki by supporting multiple wikis and multiple users at the same time.

Planned features include:

  • Hosting multiple wikis at once, using the Bags and Recipes mechanism for sharing data between them
  • Full support for SQLite, as well as MariaDB/MySQL, Postgres, and Microsoft SQL Server
  • Robust built-in synchronisation handlers for syncing data to the filesystem
  • Flexible authentication and authorisation options
  • Improved handling of file uploads and attachments, allowing gigabyte video files to be uploaded and streamed
  • Instantaneous synchronisation of changes between the server and all connected clients
  • Workflow processing on the server, for example to automatically compress images, or to archive webpages

MWS is currently under development at GitHub but it is already functional and usable, except for user security.

HTTP API

13th June 2025 at 12:36am

Installation

12th June 2025 at 11:57pm

These instructions assume minimal knowledge of the terminal, and require Node.js to be installed.

  1. Open a terminal window and set the current directory to the folder you want to create the project folder in.
  2. The init command creates the project folder and installs the required dependencies and config files. You can change the name to whatever you like.
    npm init @tiddlywiki/mws@latest "new_folder_name" 
  3. Set the current directory to the project folder that was just created:
    cd "new_folder_name" 
  4. Start MWS:
    npm start
  5. Visit http://localhost:8080 in a browser on the same computer.
  6. When you have finished using MWS, stop the server with ctrl-C

See Troubleshooting if you encounter any errors.

Updating MWS

Updating your copy of MWS with newer changes requires re-downloading the code, taking care not to lose any changes you might have made.

  1. Make a backup: Copy or zip your project folder to a safe backup folder.
    tar -cf archive.tar my_folder
  2. Get the latest version. Notice that the second word is install instead of init. This pulls the latest version from NPM and installs it.
    npm install @tiddlywiki/mws@latest --save-exact
  3. Start the server. On startup, MWS checks the database schema and updates it automatically if there are changes. Normally this works just fine, but it can fail, which is why it's important to save a backup first.
    npm start

Git repo

It is recommended to save a history of your project configuration using git:

  • On Windows you can use GitHub Desktop.
  • On Linux, git is usually preinstalled or available via the default package manager for your distro.

Multi-user Trust

28th May 2025 at 8:21pm

If a user can write to a bag in a recipe, they can insert wiki-text which exports all the tiddlers in all recipes which contain that bag.

There is no way to mitigate this completely, but there are some partial solutions which may help.

  • Only allow loading modules from some bags.
  • Partitioned bags which only allow each user to write tiddlers prefixed with their username.
  • Use Content-Security-Policy headers to make the webpage read-only and block network requests.

I think what it really comes down to is that if you can't trust your users, don't give them write access.

Don't add untrusted bags to your recipes.

MWS and SQLite

17th April 2025 at 7:41pm

Introduction

SQLite is a very popular open source, embedded SQL database with some unusual characteristics. It has proved itself to be robust, fast and scalable, and has been widely adopted in a range of applications including web browsers, mobile devices, and embedded systems.

The "embedded" part means that developers access SQLite as a library of C functions that run as part of a larger application. This contrasts with more familiar database applications like Microsoft's SQL Server or Oracle that are accessed as network services.

MWS uses SQLite for the tiddler store and associated data. It brings many advantages.

Misconceptions

TiddlyWiki 5 has always incorporated a database. Until MWS, that database has always been a custom tiddler database written in JavaScript. Over the years it has been enhanced and optimised with indexes and other database features that have given us reasonably decent performance for a range of common operations.

One particular misconception to avoid is the idea that SQLite replaces the folders of .tid files that characterise the Node.js configuration of TiddlyWiki. Those files are generated by a separate sync operation. They are not the actual database itself. In the context of MWS, SQLite is a fast and efficient way to store tiddlers between requests. Regardless of how tiddlers are stored internally, MWS can still save .tid files to the file system, just as TW5 does today.

Database Engines

SQLite is perfect for MWS because it doesn't require any extra setup. But MWS is not restricted to SQLite. It uses Prisma for the database access layer, which supports several other database engines, including MariaDB (the MySQL fork) and Postgres.

Better-SQLite3

Currently WAL mode is not enabled. It has plenty of advantages and a few minor disadvantages, but mostly it just takes extra thought to use correctly, and it matters most for high-traffic servers that need serious concurrency. Better-SQLite3 defaults to synchronous=NORMAL in WAL mode. Eventually we will probably add a setting to enable it.

Better-SQLite3 supports multi-threading via Node workers. Either way we have to implement proper support for transactions, which mostly just means reserving a worker for the duration of the transaction.

Better-SQLite3 has foreign keys enabled by default.

Better-SQLite3 uses native addons. If your platform isn't supported, or you need a wasm-only solution, feel free to open an issue on GitHub sharing your use-case.

MWS uses Prisma to communicate with SQLite, and in theory, MWS should work with anything Prisma supports.


Notes

24th June 2025 at 12:57pm

Oracle Attacks

25th May 2025 at 5:37pm

Oracles are clues that give information about a response without revealing the actual response.

Obviously none of this matters if your traffic isn't encrypted with HTTPS, since it's all open for everyone to read. But if it is encrypted, there are still ways of determining content.

Compression Oracles

A compression oracle is when an attacker takes advantage of the deduplication that compression performs by somehow getting their own plaintext inserted into the compression stream and then checking whether the compressed output increases in size. If it doesn't, then that particular plaintext was already present in the response for that request.

Length Oracle

A similar attack involves inspecting the response length to determine whether a request returned any results. This is especially true of search features. A very big response implies returned results and a very tiny response implies no results. There are also more precise uses of the length, down to determining the exact number of bytes returned.

Mitigations for normal oracles

These attacks are usually based on the size or timing of the response, so even if the response itself is opaque, those characteristics can still be observed.

  • Disable compression (ultimate mitigation but there are better solutions)
  • Disable third-party cookies (SameSite=strict) so cookies are only sent for requests that originate from a loaded webpage. However, this prevents the cookie from being sent with the initial page load when navigating from another site.
  • We could set two cookies and only enable compression if the cookie with SameSite=strict is present. Any code which can make a SameSite=strict request is normally considered privileged code anyway, so we have bigger problems if an attacker can do that.

MWS-specific oracles

A cross-bag compression oracle would allow an attacker with write access to one bag to infer the contents of another bag in the recipe which they cannot access. By attacking someone who does have access to that recipe and watching how the responses compress, the attacker can write to the bag they have access to and infer the contents of the recipe overall, without ever directly reading the other bag.

If batching is enabled, and if updates have side effects which trigger a save which is then loaded by other browsers, and if the attacker manages to add a second save with the correct timing to load in the same batch as the update of the side effect, an oracle attack can occur.

MWS mitigations

  • The simplest mitigation is to never compress the contents of two bags together.

Reference

12th June 2025 at 5:03pm

Request Handling

13th June 2025 at 1:22am
  1. Entry Point: Router.handle()

    • Takes HTTP/HTTP2 request and response objects
    • Wraps handleRequest() in error handling
  2. Main Request Processing: Router.handleRequest()

    • Emit middleware event
    • Apply Helmet middleware
    • Create Streamer instance
    • Call handleStreamer()
  3. Stream Processing: Router.handleStreamer()

    • Emits streamer event
    • Finds matching route using findRoute()
    • Performs security checks (CSRF protection)
    • Processes request body based on bodyFormat
    • Creates ServerRequest instance
    • Calls handleRoute()
  4. Route Matching: Router.findRoute() and findRouteRecursive()

    • Recursively matches URL path against defined routes
    • Handles nested routes
    • Matches HTTP methods
    • Returns array of matched route segments
  5. Route Handling: Router.handleRoute()

    • Executes handlers for matched routes in sequence
    • Emits handle event
    • Falls back to 404 if no handler sends response

Body Format Processing

The router supports multiple body formats:

  • stream: Raw streaming data
  • string: UTF-8 string
  • json: Parsed JSON data
  • buffer: Raw buffer
  • www-form-urlencoded: Parsed form data as object
  • www-form-urlencoded-urlsearchparams: Form data as URLSearchParams
  • ignore: Ignores request body (default for GET/HEAD)

Security Features

  • Built-in Helmet middleware for security headers
  • CSRF protection via x-requested-with header checks
  • JSON security parsing (protects against prototype pollution)
  • Method matching validation
  • Path validation

The routes are hierarchical, allowing for nested routes with inherited properties and progressive URL path matching.

It is important that the entire request path be awaited and eventually resolve or reject. A promise should never be left hanging. The Router class takes care of making sure every request has finished with some response, but if the promise never resolves or rejects, the request will eventually time out.
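A sketch of what this guarantee implies for the outermost wrapper (the names are illustrative, not the actual Router code):

```typescript
// The outermost handler must make sure every request settles with some
// response: await the whole chain and convert a rejection into a 500.
// If the inner promise never settles, only a timeout can save us.
async function handleRequest(
  runRoute: () => Promise<void>,
  sendError: (status: number) => void
): Promise<void> {
  try {
    await runRoute();
  } catch {
    sendError(500); // the request still ends with a response
  }
}
```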

Roadmap

24th June 2025 at 12:40pm

Route Definers

13th May 2025 at 11:13pm

Basic Route Definer

The original route definer is the one I used to type the TiddlyWiki server routes, but the types ended up being extremely complicated for what I actually needed. It is still used internally to actually define the routes, though the JavaScript side of it is very simple.

method
A subset of the Allowed Methods.
path
A regex starting with ^/ which matches the request. The first route handler which matches is used. Routes may be nested, and the full match is removed from the URL before matching children. If a parent route matches, it will be called, even if it has no child matches.
denyFinal
If denyFinal is set and this route matches but none of its children do, the server will return 404 NOT FOUND. Without denyFinal, the route's handler is expected to handle the request even if none of its children match.
pathParams
An array of key names for regex match groups for the pathParams object.
bodyFormat
The Body Format which the route wishes to receive. If the route's methods are only GET and HEAD, this is ignored, as no request body is expected. Internally, the request is probably drained early, just in case a body was sent.
handler - a separate callback argument
If the route matches, the handler is called. The handler is called at each level in order, so parents may add additional (out of type) properties to the state object or handle some requests and allow others to go through to the matched child.
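Putting the options above together, a route definition's options might look like this (the object shape follows the list above; the registration call itself is not shown, and the option values are illustrative):

```typescript
// Illustrative route options using the fields documented above.
// The regex matches e.g. "/bags/mybag/tiddlers.json", capturing the
// bag name into the pathParams object under "bag_name".
const routeOptions = {
  method: ["GET", "HEAD"],
  path: /^\/bags\/([^\/]+)\/tiddlers\.json/,
  pathParams: ["bag_name"],
  bodyFormat: "ignore",
  denyFinal: false,
};
```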

Match result

The StateObject has a routePath parameter containing the "path" through the "tree" of route definitions. In other words, it has the first matched route, and then the first matched child of that route, and then the first matched child of that route, and so on.

It is an array of objects with the following properties.

route
an object containing the options for the route listed above
params
an array of the match groups (match.slice(1))
remainingPath
The remaining URL to match (if this is zero-length, it will be a /)
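As a sketch, the shape described above could be modeled like this (the concrete routes and values are invented for illustration):

```typescript
// Illustrative shape of the routePath array; the entries are invented.
interface RouteMatch {
  route: { path: RegExp };   // the route's options (abridged here)
  params: string[];          // the match groups, i.e. match.slice(1)
  remainingPath: string;     // "/" once the URL is fully consumed
}

const routePath: RouteMatch[] = [
  { route: { path: /^\/wiki/ }, params: [], remainingPath: "/demo" },
  { route: { path: /^\/([^/]+)$/ }, params: ["demo"], remainingPath: "/" },
];

// The deepest (last) entry is the finally matched route.
const leaf = routePath[routePath.length - 1];
```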

Zod Route Definers

The rest of the route definers are used by creating a class with the route definitions as properties, and then calling that class on server startup to register the routes.

class RoutesClass { test = zodManage(z => z.any(), async e => null) }
const RoutesKeyMap: RouterKeyMap<RoutesClass, true> = { test: true }
registerZodRoutes(root, new RoutesClass(), Object.keys(RoutesKeyMap));

RoutesKeyMap would have the keys of all the routes in the class, and the type makes sure no routes have been missed, while also allowing the class to have extra properties that are not routes.

zodRoute

The zodRoute function creates type-safe route definitions with Zod validation. It takes a single configuration object with the following properties:

method: string[]
An array of HTTP methods (e.g., ["GET", "POST"]). Must be a subset of allowed methods.
path: string
A slash-separated string path with optional parameters prefixed with : (e.g., "/recipes/:recipe_name/tiddlers/:title").
bodyFormat: BodyFormat
The expected body format: "ignore", "string", "json", "buffer", "www-form-urlencoded", "www-form-urlencoded-urlsearchparams", or "stream". For GET and HEAD requests, this is always treated as "ignore".
zodPathParams: (z: Z2<"STRING">) => Record<string, ZodType>
A function that returns an object defining Zod validations for path parameters. The keys must match the parameter names in the path. If validation fails, returns 404.
zodQueryParams?: (z: Z2<"STRING">) => Record<string, ZodType>
Optional function defining Zod validations for query parameters. Query params are arrays of strings by default.
zodRequestBody?: (z: Z2<BodyFormat>) => ZodType
Optional function defining Zod validation for the request body. Only valid for "string", "json", and "www-form-urlencoded" body formats. If validation fails, returns 400.
securityChecks?: { requestedWithHeader?: boolean }
Optional security checks. If requestedWithHeader is true, requires the x-requested-with: fetch header for non-GET/HEAD/OPTIONS requests.
corsRequest?: (state: ZodState<"OPTIONS", "ignore", P, Q, ZodUndefined>) => Promise<symbol>
Optional CORS preflight handler for OPTIONS requests. Cannot authenticate but can provide endpoint information.
inner: (state: ZodState<Method, BodyFormat, PathParams, QueryParams, RequestBody>) => Promise<JsonValue>
The main route handler that receives a fully validated and typed state object.

Example

const getUser = zodRoute({
  method: ["GET"],
  path: "/users/:user_id",
  bodyFormat: "ignore",
  zodPathParams: z => ({
    user_id: z.string().uuid()
  }),
  zodQueryParams: z => ({
    include_roles: z.enum(["yes", "no"]).array().optional()
  }),
  inner: async (state) => {
    const { user_id } = state.pathParams; // typed as { user_id: string }
    const { include_roles } = state.queryParams; // typed as { include_roles?: ("yes"|"no")[] }
    
    return await getUserById(user_id, include_roles?.[0] === "yes");
  }
});

admin Helper Function

The admin function is a convenience wrapper around zodRoute specifically for admin API endpoints. It automatically sets up:

  • Method: ["POST"]
  • Path: "/admin/$key" (where $key is replaced with the property name)
  • Body format: "json"
  • Security: Requires x-requested-with: fetch header
  • Database transactions: Automatically wraps the handler in a Prisma transaction
  • Authentication: Provides access to authenticated user state

Signature

function admin<T extends ZodTypeAny, R extends JsonValue>(
  zodRequest: (z: Z2<"JSON">) => T,
  inner: (state: ZodState<"POST", "json", {}, {}, T>, prisma: PrismaTxnClient) => Promise<R>
): ZodRoute<"POST", "json", {}, {}, T, R>

Parameters

zodRequest: (z: Z2<"JSON">) => ZodType
Function defining the expected shape of the JSON request body.
inner: (state, prisma) => Promise<JsonValue>
Handler function that receives the validated state and a Prisma transaction client.

Example

const user_create = admin(z => z.object({
  username: z.string().min(3),
  email: z.string().email(),
  role_id: z.string().uuid()
}), async (state, prisma) => {
  // state.data is typed based on the zodRequest schema
  const { username, email, role_id } = state.data;
  
  // Create user within the automatic transaction
  const user = await prisma.users.create({
    data: { username, email, role_id }
  });
  
  return { user_id: user.user_id, username, email };
});

registerZodRoutes Function

This function registers multiple Zod routes from a class instance to a parent route. It's the bridge between route definitions and the actual router.

Signature

function registerZodRoutes(
  parent: ServerRoute,
  router: object,
  keys: string[]
): void

Parameters

parent: ServerRoute
The parent route to register child routes under.
router: object
An instance of a class containing route definitions as properties.
keys: string[]
Array of property names to register as routes. Usually Object.keys(RouterKeyMap).

Usage Pattern

export class UserManager {
  static defineRoutes(root: ServerRoute) {
    registerZodRoutes(root, new UserManager(), Object.keys(UserKeyMap));
  }

  user_create = admin(z => z.object({
    username: z.string(),
    email: z.string().email()
  }), async (state, prisma) => {
    // Implementation
  });

  user_list = admin(z => z.undefined(), async (state, prisma) => {
    // Implementation  
  });
}

export const UserKeyMap: RouterKeyMap<UserManager, true> = {
  user_create: true,
  user_list: true,
};

// Register during server startup
serverEvents.on("mws.routes", (root) => {
  UserManager.defineRoutes(root);
});

Security

12th June 2025 at 10:43pm

Use Cases

The primary use cases for MWS are

  • Internal corporate information tools
  • Public TiddlyWiki hosting
  • Classroom scenarios
  • Unrestricted trusted collaboration

Still in Development

While these are the goals of MWS, it is still in early development, so none of these security constraints have actually been put in place yet. This is a work in progress and the direction we're headed.

Potential weaknesses

  • Not using HTTPS
  • Oracles that leak information through observable side-effects, such as network traffic, without directly revealing the contents.
  • Multi-user: Users with write access can modify bags to gain access to other tiddlers in a different recipe that are supposed to be private.

Don't add untrusted bags to your recipe.

Generally speaking, wiki text is quite powerful, almost as powerful as JavaScript. Button clicks can run widget actions, which can read and change any tiddler in the current recipe. Any user could easily cause TiddlyWiki to write tiddlers to other bags via actions.

A possible protection is to use the Referer header to restrict edits coming from a wiki so they only follow the rules of the recipe that is open. In other words, if you have a recipe open, code in that page cannot use your credentials to write to unrelated bags.
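A minimal sketch of that check, assuming recipe pages live under a /wiki/&lt;recipe&gt; path (the helper name and URL shape are assumptions, not the real MWS layout):

```typescript
// Hypothetical sketch of the Referer check described above. The /wiki/<recipe>
// URL shape and the helper name are assumptions, not the real MWS layout.
function refererAllowsRecipe(referer: string | undefined, recipe: string): boolean {
  if (!referer) return false;
  try {
    const url = new URL(referer);
    // Only allow the request when the open page belongs to the same recipe.
    return url.pathname === `/wiki/${recipe}` ||
           url.pathname.startsWith(`/wiki/${recipe}/`);
  } catch {
    return false; // malformed Referer: deny
  }
}
```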

Protection strategies

I just needed some place to list all the different things we could do to secure a site. Some of these would be optional depending on the level of mitigation required.

  1. Obvious, "tie your shoes" security precautions.
  2. Reasonable prevention of normal attack vectors.
  3. Optional hardening to prevent targeted attacks.
  4. Pedantic defenses against advanced attackers with full read-only access to the system or a backup.

HTTPS

  • Enable HTTPS site-wide using Let's Encrypt or another free certificate service.

Cookies

MDN - SetCookie

  • Set the Secure, HttpOnly, and SameSite=Strict attributes.
  • Setting separate session cookies for the admin and wiki paths doesn't work because it's based on the request path, not the page path.
  • Having a separate login subdomain and using oauth is a more complicated option.
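As a sketch, a session cookie with those attributes would be set like this (the cookie name and value are placeholders):

```
Set-Cookie: session=<token>; Secure; HttpOnly; SameSite=Strict; Path=/
```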

no-cors

  • JavaScript no-cors mode allows GET, HEAD, and POST and sends relevant cookies.
  • no-cors prevents most headers, including custom headers, from being set.
  • Our standard x-requested-with header cannot be set.

CORS headers

  • Set CORS header to only allow expected origins. This doesn't prevent external CLI tools from accessing the site, only browser-based tools. This could also be set for certain endpoints or recipes to only allow specific bags to receive external requests.
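A sketch of such an origin allow-list check (the origin list and function name are invented for illustration):

```typescript
// Hypothetical origin allow-list for CORS responses; the names are invented.
const allowedOrigins = new Set(["https://wiki.example.com"]);

function corsHeadersFor(origin: string | undefined): Record<string, string> {
  // Unknown or missing origins get no CORS headers, so the browser blocks them.
  if (!origin || !allowedOrigins.has(origin)) return {};
  return {
    "Access-Control-Allow-Origin": origin,
    // Vary so caches don't serve one origin's headers to another.
    "Vary": "Origin",
  };
}
```

Note that, as the bullet above says, this only restrains browsers; CLI tools ignore CORS entirely.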

Referer header

  • Only allow a wiki to access its own recipe endpoints for extra tight security. It is difficult to predict when a specific request might be malicious if tools are allowed to make requests from one recipe to another on a user's behalf. Specific referer headers could be white-listed as approved tool wikis.
  • Don't allow wiki pages to access admin APIs.

Content-Security-Policy header

  • Able to block the page from making network requests, putting it in a mostly read-only mode. This doesn't stop JavaScript from setting location.href, nor prevent the user from clicking links. It is more of a quick-and-dirty read-only mode, since it prevents ALL requests, including requests that might not actually change anything on the server.
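For example, a policy along these lines would block the page's fetch/XHR traffic while still letting it render (a sketch, not a tested production policy):

```
Content-Security-Policy: connect-src 'none'
```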

Other site configuration

  • Don't allow admins to edit a wiki unless they are added to the ACL. The CSP header could be used to disable all network requests if the admin role does not have read permission, thus giving them true read-only access to the wiki.
  • If compression is used, do not compress two bags in the same compression stream. This is a fairly extreme precaution against an unlikely compression-oracle attack.

Other ideas

When you visit a page, you visit with page permissions only, and the page has to ask permission to use your full account permissions. The request could also be granular, where the page has to request the specific bag or wiki it wants access to.

Server Plugins

13th May 2025 at 10:06pm

Startup Cache

13th May 2025 at 9:44pm

TableOfContents

TiddlerFields

13th May 2025 at 11:33pm

A tiddler fields object is the serializable tiddler exchange format. If in doubt, stringify. Tiddler field values are supposed to be strings but they aren't always. Tiddler field keys could contain the colon, which would break .tid files.

The definitive serializable form of a tiddler in TW5 is obtained by retrieving the tiddler from the wiki, and then calling tiddler.getFieldString on each field value. This is the form in which TiddlyWiki saves and loads single file wikis.

.tid files are not equivalent because they do not escape the colon character in field names. The official "workaround" is simply to save the tiddler as a JSON file.

Troubleshooting gyp/prebuild Installation Errors

17th April 2025 at 7:26pm

Installation may fail with errors related to gyp or prebuild. These errors are caused by missing dependencies or incorrect versions of dependencies.

Note that in most cases, these errors occur because of the npm module better-sqlite3. This module is largely written in C and thus requires compilation for the target platform. MWS supports switchable database engines, and also supports the node-sqlite3-wasm module, which ships SQLite compiled to WebAssembly and so does not require native compilation, which may avoid these errors. See Database Engines for more details on how to switch between engines.

The following steps may help resolve errors involving gyp or prebuild:

  • Ensure that you have the latest version of Node.js installed. You can download the latest version from the Node.js website.
  • Update npm to the latest version by running the following command in your terminal:
    npm install -g npm@latest
  • Clear the npm cache by running the following command in your terminal:
    npm cache clean --force
  • Delete the node_modules folder in your project by running the following command in your terminal:
    rm -rf node_modules
  • Reinstall the dependencies by running the following command in your terminal:
    npm install
  • If you continue to encounter errors, try running the following command in your terminal:
    npm rebuild
  • If you are still experiencing issues, you may need to manually install the gyp and prebuild dependencies. You can do this by running the following commands in your terminal:
    npm install -g node-gyp
    npm install -g prebuild
  • Once you have installed the dependencies, try reinstalling the TiddlyWiki dependencies by running the following command in your terminal:
    npm install

Troubleshooting

Usage

12th June 2025 at 10:31pm

Once MWS is successfully installed, you can access it by visiting http://localhost:8080 in a browser on the same computer.

On first start, a default user is created with username admin and password 1234. However, also by default, the server is only accessible to browsers on the same machine.

If you intend to make an MWS installation available on the Internet, the server should first be secured with the following steps:

  • Change the administrator password
  • Install HTTPS server certificates from an online certificate service like Let's Encrypt.

x-mws-tiddler

13th June 2025 at 12:42am

The .tid file format does not support field names with colons. Rather than trying to catch all the unsupported variations that may appear, we'll just use JSON to send it across the wire, since that is the official fallback format in TW5. However, parsing a huge string value inside a JSON object is very slow, so we split off the text field and just send it after the other fields.

{ "title": "New Tiddler" }

test

Note that because the text field is optional, if the entire string is a single line, the text field will not be set, but if it ends with two line breaks, the text field will be set to an empty string.

Putting a tiddler into this format:

const fields = tiddler.getFieldStrings();
const text = fields.text;
delete fields.text;
const data = JSON.stringify(fields) 
  + (typeof text === "string" ? ("\n\n" + text) : "");

And parsing it out again:

const splitter = body.indexOf("\n\n");
if(splitter === -1) {
  return JSON.parse(body);
} else { 
  return {
    ...JSON.parse(body.slice(0, splitter)),
    text: body.slice(splitter + 2)
  }
}

There may be edge cases I'm not aware of, but since it's literally just sending strings to NodeJS, there shouldn't be any problems. Text fields are expected to be strings and binary types are base64-encoded.