Building a Secure, Distributed File System

Stage 1: Read Only, One Archive
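
The outline leaves the details open; as a concrete starting point, here’s a minimal sketch of the core read path, assuming libzip underneath and something like FUSE above it (both library choices are assumptions, not part of the outline):

    #include <zip.h>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Return the full contents of one entry in a zip archive.
    std::vector<char> read_entry(const std::string& archive_path,
                                 const std::string& entry_name) {
        int err = 0;
        zip_t* za = zip_open(archive_path.c_str(), ZIP_RDONLY, &err);
        if (!za) throw std::runtime_error("cannot open archive");

        zip_stat_t st;
        if (zip_stat(za, entry_name.c_str(), 0, &st) != 0) {
            zip_close(za);
            throw std::runtime_error("no such entry");
        }
        std::vector<char> buf(st.size);
        zip_file_t* zf = zip_fopen(za, entry_name.c_str(), 0);
        if (!zf || zip_fread(zf, buf.data(), buf.size()) < 0) {
            zip_close(za);
            throw std::runtime_error("read failed");
        }
        zip_fclose(zf);
        zip_close(za);
        return buf;
    }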

Stage 2: Read Only, Many Archives
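
With many archives, a read has to find the newest version of each path. A sketch of one way to build that index, under the assumption that archive filenames sort in creation order (e.g. timestamped names; the naming scheme is my assumption):

    #include <algorithm>
    #include <filesystem>
    #include <map>
    #include <string>
    #include <vector>

    // Map each path to the archive holding its newest version.
    std::map<std::string, std::filesystem::path> build_index(
        const std::filesystem::path& store,
        std::vector<std::string> (*list_entries)(const std::filesystem::path&)) {
        std::vector<std::filesystem::path> archives;
        for (const auto& e : std::filesystem::directory_iterator(store))
            archives.push_back(e.path());
        std::sort(archives.begin(), archives.end());   // oldest first

        std::map<std::string, std::filesystem::path> index;
        for (const auto& a : archives)        // later archives overwrite,
            for (const auto& name : list_entries(a))
                index[name] = a;              // so the newest version wins
        return index;
    }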

Stage 3: Writes

Stage 4: Multiple Readers / Writers

We have no shared, writable data of our own, but we’re simulating shared writable data in the form of the files on the filesystem themselves.
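
One way to keep multiple local readers and writers from racing on the shared index is POSIX advisory locking; the mechanism and the lock-file path here are assumptions, not something the outline specifies:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    // Serialize index updates between local processes with an advisory
    // lock on a well-known file (the path is illustrative).
    void with_index_lock(void (*update)()) {
        int fd = open("/var/run/myfs.lock", O_CREAT | O_RDWR, 0600);
        flock(fd, LOCK_EX);    // blocks until we hold the exclusive lock
        update();
        flock(fd, LOCK_UN);
        close(fd);
    }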

Stage 5: Attributes and Deletes

There are two clear problems at this point:

Strategy to fix:
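
The outline leaves the strategy unstated; one common fix, sketched here purely as an assumption, is to keep old archives immutable and record deletes and attribute changes as fresh index records:

    #include <cstdint>
    #include <string>

    // One record per change; old archives stay immutable. A delete is a
    // "tombstone" record that hides every older version of the path, and
    // an attribute change just writes a new record. Layout is illustrative.
    struct IndexRecord {
        std::string path;
        std::string archive;   // archive holding the data; empty for tombstones
        uint64_t    mtime;
        uint32_t    mode;      // permissions and other attributes
        bool        deleted;   // tombstone flag
    };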

Stage 6: Network Support

This requires a server, and a new program to handle inter-machine syncing.

New program:

We now have a distributed filesystem. Running this on multiple machines should work, although syncs could be delayed or need to be triggered manually.
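
A sketch of the sync program’s core, under the simplifying assumption that the server’s archive directory is reachable as a local path (say, over an NFS or sshfs mount; the transport is my assumption). Because archives are immutable once written, copy-if-missing is safe to re-run at any time:

    #include <filesystem>
    namespace fs = std::filesystem;

    // Pull any archives the server has that we don't.
    void pull(const fs::path& remote_store, const fs::path& local_store) {
        for (const auto& e : fs::directory_iterator(remote_store)) {
            fs::path dest = local_store / e.path().filename();
            if (!fs::exists(dest))
                fs::copy_file(e.path(), dest);
        }
    }
    // Push is the same loop with the arguments swapped; run both on a
    // timer, or trigger them manually as noted above.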

Stage 7: Garbage Collection

The current design makes a new copy of a file every time it’s changed and never deletes anything. When a zip file contains only old versions of files, it can be removed.

But we want to be careful that zip and index files aren’t deleted before the newer versions that supersede them have propagated over the network.

In the filesystem program:

In the sync program:
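
Putting both rules together, a sketch of the collector; the two helper callbacks are assumed, not part of the outline:

    #include <filesystem>
    #include <set>
    #include <string>
    namespace fs = std::filesystem;

    // An archive is garbage once the index no longer references it for
    // any path's current version. 'live' is that set of referenced
    // archives; 'synced_everywhere' asks whether the newer archives that
    // supersede this one have reached all peers.
    void collect(const fs::path& store, const std::set<std::string>& live,
                 bool (*synced_everywhere)(const std::string&)) {
        for (const auto& e : fs::directory_iterator(store)) {
            std::string name = e.path().filename().string();
            if (live.count(name)) continue;           // still referenced
            if (!synced_everywhere(name)) continue;   // don't race the network
            fs::remove(e.path());                     // only old versions left
        }
    }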

Stage 8: Crypto

Problem: How to distribute the key to all machines sharing the file system?
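
A sketch of the encryption step itself, using libsodium’s secretbox as an assumed library choice. It deliberately dodges the hard question above by assuming every machine already holds the shared key, obtained out of band:

    #include <sodium.h>
    #include <stdexcept>
    #include <vector>

    // Encrypt an archive before it leaves the machine
    // (XSalsa20-Poly1305 via libsodium's secretbox).
    std::vector<unsigned char> seal(
        const std::vector<unsigned char>& plain,
        const unsigned char key[crypto_secretbox_KEYBYTES]) {
        if (sodium_init() < 0) throw std::runtime_error("sodium_init failed");
        std::vector<unsigned char> out(crypto_secretbox_NONCEBYTES +
                                       crypto_secretbox_MACBYTES + plain.size());
        randombytes_buf(out.data(), crypto_secretbox_NONCEBYTES);  // nonce prefix
        crypto_secretbox_easy(out.data() + crypto_secretbox_NONCEBYTES,
                              plain.data(), plain.size(),
                              out.data(), key);   // nonce, then key
        return out;
    }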

Advanced Version

Stage 1: Block Cache

In the current filesystem design, reads are slow, since each one unpacks a zip file.

Once we’ve read some data, we shouldn’t have to re-unzip the archive to read it again. But some files are big and we’ll reuse only small pieces of them, so we should have a block cache rather than a file cache.

Now’s a good time to move to C++ if you haven’t already.
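
A C++ sketch of the cache itself: an LRU map from (file, block) pairs to block contents. The key type, eviction policy, and sizes are illustrative choices:

    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Fixed-capacity LRU cache of file blocks keyed by (file id, block number).
    class BlockCache {
        using Key = std::pair<uint64_t, uint64_t>;  // (file id, block index)
        struct KeyHash {
            size_t operator()(const Key& k) const {
                return std::hash<uint64_t>()(k.first * 1315423911u ^ k.second);
            }
        };
        struct Entry { Key key; std::vector<char> data; };

        size_t capacity_;
        std::list<Entry> lru_;  // front = most recently used
        std::unordered_map<Key, std::list<Entry>::iterator, KeyHash> map_;

    public:
        explicit BlockCache(size_t capacity) : capacity_(capacity) {}

        // Return the cached block (bumping it to most-recent) or nullptr.
        const std::vector<char>* get(uint64_t file, uint64_t block) {
            auto it = map_.find({file, block});
            if (it == map_.end()) return nullptr;
            lru_.splice(lru_.begin(), lru_, it->second);  // move to front
            return &it->second->data;
        }

        void put(uint64_t file, uint64_t block, std::vector<char> data) {
            Key k{file, block};
            auto it = map_.find(k);
            if (it != map_.end()) {     // replace an existing block
                lru_.erase(it->second);
                map_.erase(it);
            }
            lru_.push_front({k, std::move(data)});
            map_[k] = lru_.begin();
            if (map_.size() > capacity_) {   // evict least recently used
                map_.erase(lru_.back().key);
                lru_.pop_back();
            }
        }
    };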

Problem: The cache can get out of date when concurrent writes happen.

Solution:

Problem: What to do with dirty blocks?
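
One way to handle both problems, sketched under the assumption that the index bumps a per-file generation number whenever a new version of the file lands in any archive (the scheme is my assumption):

    #include <cstdint>

    struct CachedFile {
        uint64_t generation;   // generation the cached blocks came from
    };

    // Read path: drop and refetch cached blocks whose generation is stale.
    bool is_stale(const CachedFile& c, uint64_t index_generation) {
        return c.generation != index_generation;
    }

    // Write path: the simplest safe policy for dirty blocks is
    // write-through -- flush the block into a new archive and bump the
    // generation before acknowledging the write, so dirty data never
    // exists only in the cache.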

Stage 2: Efficiency

Here’s where things get tricky.

In the design so far, small writes to large files are awful: every write copies the whole file, both locally and across the network.

Real file systems deal with this problem by manipulating blocks instead of files, but that makes the indexing problem way messier.
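
To make the messier indexing concrete, here’s a sketch of what a block-level index has to track; all names and layout are illustrative:

    #include <cstdint>
    #include <map>
    #include <string>

    // With block-level writes, the index must track where the newest
    // bytes of *each block* of each file live; a small write then
    // appends one block and updates one entry instead of rewriting the
    // whole file.
    struct BlockLocation {
        std::string archive;   // archive holding this block's newest bytes
        uint64_t    offset;    // byte offset of the block inside it
    };

    struct FileExtents {
        uint64_t size;                             // logical file size
        std::map<uint64_t, BlockLocation> blocks;  // block number -> location
    };

    using BlockIndex = std::map<std::string, FileExtents>;  // path -> extents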

Here’s an approach that almost works:

We’ve made the problem from the block cache worse, though: two processes on different machines can make concurrent, incompatible partial updates to a file. File corruption is pretty likely at this point.

We should probably move away from zip files at this point, and store both the blocks and the metadata in a file-backed persistent B-tree.

Start here for that:

https://www.youtube.com/watch?v=T0yzrZL1py0
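
For orientation, a sketch of the kind of on-disk node layout such a B-tree uses; sizes and fields are illustrative:

    #include <cstdint>

    // The backing file is an array of fixed-size pages, and child
    // "pointers" are page numbers rather than memory addresses, so the
    // tree persists across processes.
    constexpr uint32_t kPageSize = 4096;
    constexpr uint32_t kMaxKeys  = 64;   // small, for illustration

    struct Node {
        uint32_t is_leaf;
        uint32_t num_keys;
        uint64_t keys[kMaxKeys];          // e.g. path hashes or block ids
        uint64_t values[kMaxKeys];        // leaves: block locations
        uint64_t children[kMaxKeys + 1];  // page numbers of child nodes
    };
    // Read and write nodes with pread/pwrite at page * kPageSize; making
    // updates copy-on-write (write a new page, then flip the root
    // pointer) keeps the tree crash-safe.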