Holster Haza Website

Holster has been working well this week: my test application has so far stored over 4MB on disk, and file splitting looks good now.

I was asked in gun chat about a public relay for Holster, so I decided to set one up at holster.haza.website. I have also been playing with github.com/1j01/simple-console and wanted to use it to let people try out the API, so the website provides both!

I don't want to provide permanent storage for client applications, as I don't believe it's a good model to rely on free services. So at this stage data is removed from the server each day. That said, it will probably sync back while browsers are open, since that's how it's meant to work 🙂. Hopefully in the future we will add better support for distributed storage, and relays coming and going won't be a problem.
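The post doesn't say how the daily cleanup is done, but a relay operator could do it with something as simple as a cron entry (the path here is hypothetical):

```shell
# Hypothetical example: wipe the relay's on-disk store once a day at 04:00.
# /srv/holster/store is an assumed location, not Holster's actual default.
0 4 * * * rm -rf /srv/holster/store/*
```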

Radisk

My last update finished with "time to throw lots of data at it again and see how it holds up!" Sadly, it didn't hold up for more than a day. 😣

It was pretty easy to see that data was still not being written to disk properly, so the first step was to write some unit tests to replicate the issue. That wasn't too hard either: it just required having a slightly complicated data structure already on disk when the file size limit is hit. That way the radix tree has enough nodes and properties to make it hard for the radisk code to split across files. Holster now has a test file, split.test.js, just for this problem.
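To see why a "slightly complicated" structure matters, here is a minimal radix-tree insert (a sketch, not Holster's actual code): keys that share a prefix collapse into a common node, so a tree with enough shared prefixes forces any splitter to cut across nested nodes rather than between flat records.

```javascript
// Minimal radix-tree insert. A node is a plain object whose keys are edge
// labels; the empty-string key '' holds a value at that node.
function put(node, key, value) {
  if (key === '') { node[''] = value; return; }
  for (const edge of Object.keys(node)) {
    if (edge === '') continue;
    // length of the common prefix between this edge and the new key
    let i = 0;
    while (i < edge.length && i < key.length && edge[i] === key[i]) i++;
    if (i === 0) continue;
    if (i === edge.length) { put(node[edge], key.slice(i), value); return; }
    // partial match: split the existing edge at the shared prefix
    const child = { [edge.slice(i)]: node[edge] };
    delete node[edge];
    node[edge.slice(0, i)] = child;
    put(child, key.slice(i), value);
    return;
  }
  node[key] = { '': value };
}

const tree = {};
put(tree, 'user/alice', 1);
put(tree, 'user/alan', 2);
put(tree, 'user/bob', 3);
// tree is now { 'user/': { al: { ice: {'':1}, an: {'':2} }, bob: {'':3} } }
```

A few inserts like these already produce nested nodes at several depths, which is exactly the shape that made the original splitting code stumble.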

So back to the radisk code to work out what was going on. The issue I first noticed was just an edge case, but I then realised I could keep adding edge cases to test and break and fix... so it pretty quickly felt like the existing file splitting code was too brittle.

As I was testing, I also noticed that radisk was doing all the work to encode the data for writing to disk, but if it hit the file size limit it would halve the size and start again, throwing away the existing data. I decided to try keeping the existing encoded data, writing it to disk first, and dealing with the extra data after that. This means the first file stays at the file size limit and smaller files are created after it, but the final code change was so much simpler that I decided to stick with it. Plus it could handle every test case I could think of 🎉. Yes, I am going to jinx myself for writing this.
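The idea above can be sketched like this (hypothetical code, not Holster's actual radisk implementation): rather than discarding the encoded data and re-encoding at half the size, keep the encoded records, fill the first file up to the size limit, and spill what remains into follow-on files.

```javascript
// encoded: array of already-encoded records (one string each);
// limit: maximum size of a file in characters.
// Records are kept whole, so the first file sits at (or just under) the
// limit and smaller files follow. A single record larger than the limit
// still gets its own oversized file — a case real code would handle.
function splitEncoded(encoded, limit) {
  const files = [];
  let current = '';
  for (const record of encoded) {
    if (current.length > 0 && current.length + record.length > limit) {
      files.push(current);   // current file is full, start the next one
      current = '';
    }
    current += record;
  }
  if (current.length > 0) files.push(current);
  return files;
}

const files = splitEncoded(['aaaa\n', 'bbbb\n', 'cccc\n', 'dddd\n'], 12);
// → ['aaaa\nbbbb\n', 'cccc\ndddd\n']
```

Nothing already encoded is thrown away, which is what makes the change simpler than the halve-and-retry approach.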

I didn't want to write anything until I saw this working in a real application, and am glad to see that it is. In the process of testing it I found some unrelated timing issues, so the release is now up to 1.0.6.