Instead of timestamping individual claims as they come into the node, we want to group them and timestamp a batch of n claims per transaction, rather than one transaction per claim.
IPFS allows multiple files to be wrapped in a directory, which will have a hash of its own.
Directories are immutable, so each time we push a new file we actually create a new directory, one that contains the previous claims plus the new claim and has a new directory hash of its own.
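The immutability property above can be illustrated with a small sketch. This is a hypothetical in-memory model, not the real IPFS hashing scheme (which uses content-addressed DAG nodes): a "directory" here is just the list of file hashes it contains, and its hash is derived from that list, so adding a claim necessarily yields a new directory hash while the old directory remains addressable by its old hash.

```typescript
import { createHash } from 'crypto'

// Short content hash for the sketch (illustrative, not IPFS's multihash).
const sha = (s: string): string =>
  createHash('sha256').update(s).digest('hex').slice(0, 16)

// A directory's hash is derived from the sorted hashes of its files,
// so any change in membership produces a different directory hash.
const directoryHash = (fileHashes: string[]): string =>
  sha([...fileHashes].sort().join('|'))

const dir1 = directoryHash(['claimA-hash'])
const dir2 = directoryHash(['claimA-hash', 'claimB-hash'])
// dir1 !== dir2: the original directory still exists under dir1, unchanged.
```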
As claims come in, we push them to IPFS and get a hash back. Then we add each claim hash to the database.
Every n minutes, we grab all of the waiting claim hashes from the database, push them to an IPFS directory, and timestamp the directory hash onto the blockchain.
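The batching flow above can be sketched as follows. All names here (`Db`, `addClaim`, `flush`, `wrap`, `timestamp`) are illustrative stand-ins for the node's actual database and IPFS/blockchain calls, not the real Po.et API; the point is that many claim hashes are collapsed into one directory hash and only that hash is timestamped.

```typescript
// Hypothetical minimal state: claim hashes waiting for the next batch.
interface Db {
  pending: string[]
}

// As claims come in, record each claim hash (after pushing the claim to IPFS).
const addClaim = (db: Db, claimHash: string): void => {
  db.pending.push(claimHash)
}

// Runs every n minutes: wrap all pending claim hashes in one IPFS directory
// and timestamp only the resulting directory hash onto the blockchain.
const flush = (
  db: Db,
  wrap: (hashes: string[]) => string,      // stand-in for the IPFS directory push
  timestamp: (dirHash: string) => void     // stand-in for the blockchain transaction
): void => {
  if (db.pending.length === 0) return
  const dirHash = wrap(db.pending)
  timestamp(dirHash)
  db.pending = []
}
```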
In order to sync nodes, we will need to adjust the process slightly for directories.
In the current code, the node scans each block for Poet transactions. Each transaction carries a hash that references an individual claim. The only difference now is that the hash may point to a directory instead of an individual claim.
We can either handle this by a version bump (new version is always directory, old version is always single file) or simply ignore transactions that don’t point to directories (thus dropping all previous data).
If the hash is for an individual claim, we keep the current flow unchanged.
If the hash is a directory, we grab all of the claim hashes it contains and add them to the DB entry collection via the Storage/ClaimControllers/download method, so they can be downloaded individually.
We then mark the directory’s entry as complete so we don’t keep retrying.
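The adjusted sync step described above might look like the sketch below. This is an assumption-laden illustration, not the actual node code: `ls` stands in for asking IPFS whether a hash is a directory (returning its entries, or null for a plain file), and `download`/`markComplete` stand in for the existing download queue and the completion flag that prevents retries.

```typescript
// Stand-in for an IPFS directory listing: entries for a directory, null otherwise.
type Ls = (hash: string) => string[] | null

const processTransactionHash = (
  hash: string,
  ls: Ls,
  download: (claimHash: string) => void,   // queue a claim hash for download
  markComplete: (dirHash: string) => void  // stop retrying this directory entry
): void => {
  const entries = ls(hash)
  if (entries === null) {
    // Individual claim: current flow, unchanged.
    download(hash)
  } else {
    // Directory: queue each contained claim hash individually,
    // then mark the directory itself complete so we don't keep retrying it.
    entries.forEach(download)
    markComplete(hash)
  }
}
```

Under the "ignore non-directories" option discussed above, the null branch would simply return instead of calling download.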
QUESTION: How can we use automated testing to verify that this works and continues to work?
Note by Lautaro: testing that syncing of nodes works well would cover this.