The idea was simple, the way all ideas are simple before you try to implement them. Take the manuscript, hash it, put the hash on a blockchain, and now you have a timestamp that nobody controls. Author, publisher, court, or AI company — none of them can alter the block timestamp. The hash is either there or it isn't, and if it's there, the text that produces that hash existed at that moment.
The first implementation took about forty minutes. A Hardhat project, a minimal Solidity contract, one function called anchorHash that emits an event. Deploy to Polygon Mumbai testnet. Done. The book is "anchored."
That was L0. It took another four months to get from L0 to L5.
The Problem With Just Hashing Everything
The simple approach has a serious limitation: it's all-or-nothing. If you hash the entire manuscript and put that hash on-chain, you can prove the whole book existed at a given time. But you can't prove that chapter twelve specifically existed in that form without revealing the entire manuscript. If the manuscript is unpublished and you're anchoring it before sending to publishers, you've just defeated the purpose.
The solution is a Merkle tree over the chapters. Each leaf is a hash of one chapter. The root is committed on-chain. To prove that chapter twelve had specific content, you reveal that chapter's hash and the Merkle path up to the root. The verifier learns nothing about the other chapters.
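A minimal sketch of such a tree, using SHA-256 leaves and duplicate-last padding on odd levels (both illustrative assumptions, not necessarily LPS-1's exact construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Pairwise-hash up the tree, duplicating the last node on odd levels."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root, each tagged with its side."""
    proof, level, i = [], list(leaves), index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1                          # sibling is the paired node
        proof.append((level[sib], sib < i))  # True = sibling is on the left
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path; only the leaf and its siblings are revealed."""
    node = leaf
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root
```

The proof for one chapter contains only sibling hashes, which is why the other chapters stay hidden.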
This is an existing data structure — it's what Bitcoin uses to prove transaction inclusion in a block. Applying it to book chapters is a straightforward extension. We thought.
The Edge Case That Became the Thesis
Three days into writing the Merkle implementation, a test broke in a way I didn't understand. The test was: given a proof generated by the off-chain tree, verify it on-chain. It was failing for chapter thirteen in a twelve-chapter book.
There is no chapter thirteen in a twelve-chapter book. That was the point. The test was checking what happened when an author added a new chapter after the initial anchor — and whether the new chapter's inclusion was provable without invalidating the original anchor.
The answer, with a static Merkle tree, is no. The root changes when you add a leaf. The original on-chain commitment no longer matches. You have to issue a new anchor, and now you have two anchors and the relationship between them is unclear.
Edition control is not an afterthought. The ability to prove that version 1.0 existed, and that version 1.1 contains specific additions, is the entire point of a provenance system for works that evolve over time.
— From the LPS-1 whitepaper, Section 4.3: Append-Only Edition Management
The design choice we made was to commit to an immutable baseline and then support versioned anchors that reference the original. The contract stores a baselineRoot that can never be changed after the initial edition freeze, and a currentRoot that can be updated by reissuing from the same deployer address. Both are timestamped. The relationship between them is stored on-chain. A court can see: the book was baselined on date X, version 1.1 was issued on date Y, and here are the proofs that edition 1.1's chapter twelve is identical to edition 1.0's chapter twelve, while chapter thirteen is new.
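The state machine described above can be sketched off-chain. The names here (`baseline_root`, `current_root`, `reissue`, `freeze`) are inferred from the description, not read from the deployed contract:

```python
class EditionAnchor:
    """Toy model of the contract's edition state, not the Solidity itself."""

    def __init__(self, deployer: str, baseline_root: bytes):
        self.deployer = deployer
        self.baseline_root = baseline_root   # never changed after the freeze
        self.current_root = baseline_root
        self.frozen = False
        self.history = [baseline_root]       # on-chain link between versions

    def reissue(self, caller: str, new_root: bytes):
        """Issue a new versioned root that references the baseline."""
        if self.frozen:
            raise PermissionError("edition frozen: no further updates")
        if caller != self.deployer:
            raise PermissionError("only the deployer may reissue")
        self.current_root = new_root
        self.history.append(new_root)

    def freeze(self, caller: str):
        """Irreversible by design: even the deployer cannot undo it."""
        if caller != self.deployer:
            raise PermissionError("only the deployer may freeze")
        self.frozen = True
```

The `history` list plays the role of the on-chain relationship a court would read: baseline on date X, version 1.1 on date Y.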
The IPFS Decision
At L3, LPS-1 pins the manuscript to IPFS and stores the IPFS CID in the smart contract. This raised an obvious question: IPFS is content-addressed, not permanent. Pinning requires someone to maintain a node. If the pinner disappears, the content is no longer accessible even though the CID is on-chain.
This is a real limitation, and the spec addresses it honestly. The on-chain CID is a commitment, not storage: a verifiable pointer to content that someone still has to host. If the content becomes unpinned, the commitment still proves it existed, but you can no longer retrieve it to verify the full text. For literary provenance purposes, this is acceptable: proof of existence doesn't require the content to remain accessible forever. But for an open-access publishing model like XXXIII, we wanted the content accessible indefinitely.
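The commitment-versus-storage distinction reduces to a simple check whenever the bytes can still be found anywhere. A sketch, using a plain SHA-256 digest as a stand-in for the real CID's multihash (an actual verifier would decode the CID and account for IPFS chunking):

```python
import hashlib

def content_matches_commitment(content: bytes, onchain_digest_hex: str) -> bool:
    """If the bytes can still be retrieved from anywhere (an IPFS gateway,
    an R2 copy, a private backup), they can be checked against the on-chain
    commitment. If they can't, the commitment alone still proves that some
    content with this digest existed at anchor time."""
    return hashlib.sha256(content).hexdigest() == onchain_digest_hex
```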
The solution for our books is redundant pinning across three providers, plus a Cloudflare R2 copy served via the worker. The IPFS CID for The 2,500 Donkeys is bafybeicd6hxb7p5x6r5nvsbftf4vfmvuwrjmbkjgp4bzz3ahjezgbfmkpq. It's currently pinned by Pinata, web3.storage, and our own node. Even if all three fail, the on-chain hash proves the content existed.
What 293 Tests Actually Test
The test suite covers things that sound obvious in retrospect but weren't obvious in advance:
- Does a Merkle proof generated for chapter N still verify after chapter N+1 is added?
- If two authors independently hash the same text, do they get the same leaf hash? (They must.)
- Can a bad actor submit a false Merkle path that happens to hash to the correct root? (They cannot, assuming the hash function is collision-resistant.)
- Does the edition freeze function actually prevent further updates?
- If the deployer private key is compromised, can an attacker update a frozen edition? (They cannot — the freeze is irreversible.)
- Does the Bitcoin OpenTimestamps cross-reference at L5 actually validate against mainchain headers?
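The second item on that list only holds if hashing is preceded by canonicalization. A sketch of one plausible rule set (NFC normalization, LF newlines, trimmed trailing whitespace); these rules are illustrative assumptions, not necessarily what LPS-1 specifies:

```python
import hashlib
import unicodedata

def leaf_hash(chapter_text: str) -> bytes:
    """Two authors hashing 'the same text' must get the same leaf,
    so the text is canonicalized before hashing."""
    canonical = unicodedata.normalize("NFC", chapter_text)
    canonical = canonical.replace("\r\n", "\n").replace("\r", "\n").rstrip()
    return hashlib.sha256(canonical.encode("utf-8")).digest()
```

Without a rule like this, a Windows line ending or a decomposed accent produces a different leaf for byte-identical-looking text.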
That last one took the longest. OpenTimestamps is a well-designed protocol, but integrating it as an L5 verification layer required writing a Solidity verifier for Bitcoin block headers, which is a somewhat unusual thing to do. The implementation is in the repo. It's been tested on six different blocks. It works.
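The core of that header check is small enough to sketch in Python, even though the repo's verifier is Solidity: double SHA-256 the 80-byte header, then compare the result against the target encoded in the compact `bits` field.

```python
import hashlib

def btc_header_hash(header: bytes) -> str:
    """Double SHA-256 of the 80-byte header, displayed big-endian
    as block explorers show it."""
    assert len(header) == 80
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()[::-1].hex()

def meets_target(header: bytes) -> bool:
    """Proof-of-work check: the header hash, read as an integer, must be
    below the target encoded in the compact 'bits' field (bytes 72-76)."""
    bits = int.from_bytes(header[72:76], "little")
    exponent, mantissa = bits >> 24, bits & 0xFFFFFF
    target = mantissa * 2 ** (8 * (exponent - 3))
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "little") < target
```

Running this on Bitcoin's genesis header reproduces the familiar hash beginning `000000000019d668`, which is the property the on-chain verifier has to establish for each cross-referenced block.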
What We Would Do Differently
The Merkle tree implementation is probably too clever. For most authors, L0 or L2 is the right choice: hash the manuscript, or hash individual chapters, anchor on Polygon, done. The full L5 implementation with Bitcoin cross-reference exists to demonstrate that it's possible, not because most authors need it.
We also would have written the documentation before the contracts. The spec evolved as we implemented, which means there are design decisions buried in commits that aren't explained anywhere except in the commit message. If you're building on LPS-1, read the whitepaper before the contracts, and if you find a discrepancy, file an issue.
Why It Matters Now
The LPS-1 protocol is not a response to a current crisis. It's a response to a crisis that is perhaps five years away and exactly as predictable as AI's impact on authorship became once AI could write. The question of who wrote what, when, and whether the text has been modified is going to become adversarial in publishing in a way it has never been before.
We built the answer. It's MIT licensed. It's on GitHub. Any author, any publisher, any legal team can use it. The contracts are deployed on Polygon Mainnet at the addresses in the repo. XXXIII's three books are anchored at L4. Use what's there, or build something better on top of it.
That's what actually happened when we anchored a book on-chain. We expected it to take a weekend. It took eight months, and it produced something better than we planned.