The Shadow Drive Architecture At A High Level
One thing that has been very different, I think, for the Solana ecosystem is how Steven, Levi and I have approached building the Shadow Drive and $SHDW.
We’re an unusual blend of proven track record and long term vision. Our origin story and delivery process isn’t conventional. On the one hand, we’re a huge thriving project…
…and yet on the other we’re expanding into new areas like decentralized storage. The thing is, GenesysGo and the Shadow Drive are not an academic endeavor; they are an incremental engineering effort.
How does Permanent Storage for the Shadow Drive Work?
One of the most nebulous things, I think, has been the permanent general storage aspect of the Shadow Drive and how it works. Where’s the whitepaper? Where’s the academic discussion, full of the equations and symbols that most of us vaguely remember from college calculus?
This is where it’s important to remember… we’re coming from a very different position than nearly any other project you’re used to. Our engineering roadmap looks wildly different as a result. However, as different as this approach may be from the “norm”… it gives us an incredible advantage.
Let’s strip away all of the whitepaper fluff for a moment… what does it take to create a permanent decentralized storage solution?
First Principles to Implement Permanent Decentralized Storage
- Decentralized Hardware — multiple independent machines in multiple locations
- Decentralized Ownership of the Hardware — independent operators in multiple locations
- Decentralized and Redundant Copies of the stored data — multiple independent copies of the data in multiple locations for redundancy
- The inability to “turn off” the system by a centralized party
Once you strip away all the equations and heady discussion, you’re left with four simple things.
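The redundancy principle above can be sketched in a few lines. This is a toy illustration only, not Shadow Drive code: the `node1`–`node3` directories are hypothetical stand-ins for independent machines, and the point is simply that redundant storage means multiple copies whose integrity can be verified independently by hash.

```shell
set -e
# Stand-ins for three independent machines in three locations.
mkdir -p node1 node2 node3
echo "hello shadow drive" > original.dat

# Replicate the data to every "node" for redundancy.
for n in node1 node2 node3; do
    cp original.dat "$n/copy.dat"
done

# Each copy can be verified independently against the content hash,
# so no single party has to be trusted (or can silently corrupt data).
ref=$(sha256sum original.dat | cut -d' ' -f1)
for n in node1 node2 node3; do
    h=$(sha256sum "$n/copy.dat" | cut -d' ' -f1)
    [ "$h" = "$ref" ] && echo "$n OK"
done
```

Losing any one “node” here loses nothing, which is the whole of the redundancy requirement in miniature.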
Now… can we boil those four things down into some very basic requirements? Yes, yes we can…
Foundational Resources Needed to Execute on First Principles
- Compute Power to read/write the data
- Physical Storage to hold the data
- A community of independent operators networked together by a trustless consensus mechanism
That’s it… this is all you need. Now… let’s look at what GenesysGo has currently deployed and live…
- Unlimited, decentralized compute scaling via our server network and our instant access to new hardware
- Unlimited storage scaling via the same decentralized network
- An ever-increasing number of Shadow Operators bringing independent machines online, all networked together by Solana’s consensus mechanism, and a core engineering team already building the Shadow Drive.
This wording is key to understanding our engineering roadmap: “…currently deployed and live…”. This is where what I said earlier — that the Shadow Drive is an incremental engineering effort and not an academic one — becomes critical to understanding why the construction of the Shadow Drive feels different from what the community is used to.
Solana RPC Servers, arguably the most versatile and underrated piece of the Solana Network Stack
The only difference between a Solana Validator (the machine that builds blocks) and a Solana RPC Server (the machine that stores data and answers network requests) is a few simple settings.
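To make “a few simple settings” concrete, here is a hedged sketch of launching the same `solana-validator` binary in either role. This is illustrative only — not GenesysGo’s actual configuration; flag names come from public Solana operator documentation and can vary by release, and the keypair paths, ledger path, and entrypoint are placeholders:

```shell
# Consensus validator: votes and helps build blocks.
solana-validator \
    --identity validator-keypair.json \
    --vote-account vote-account-keypair.json \
    --ledger /mnt/ledger \
    --entrypoint entrypoint.mainnet-beta.solana.com:8001

# RPC server: the SAME binary, with voting disabled and the
# RPC surface enabled instead.
solana-validator \
    --identity rpc-keypair.json \
    --no-voting \
    --full-rpc-api \
    --rpc-port 8899 \
    --enable-rpc-transaction-history \
    --ledger /mnt/ledger \
    --entrypoint entrypoint.mainnet-beta.solana.com:8001
```

Both processes join the same cluster through the same entrypoint and gossip with the same peers; the role is just a matter of which flags are set.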
This means that RPC servers speak the same language as Validators, over the same UDP connections that all the Validators communicate through. All data in these UDP tunnels is already hashed, signed, and securely immutable.
This means that our RPC network is already deeply interconnected with the immutable network framework of Solana. Having an existing RPC network gives us a powerful advantage: we can expand and build upon a mature network rather than starting from scratch.
Creating the tunnels needed for machines to talk directly with one another on a blockchain is hard, but thankfully Solana has already done this. This is why I keep saying the Shadow Drive is simply an exercise in inserting a hard drive into the Solana machine.
Solving Solana storage is a natural progression and maturation of the already existing GenesysGo framework.
Basically what I’m saying is our stack was meant for this.
To those who think this all sounds overly simplistic…
You’re right… and that’s the point! For us this is simple because we already have the fundamental resources in place that most projects have to work to assemble.
We don’t need to solve the problem of compute; we have a massive excess of compute power. We don’t need to solve the problem of storage; we have enough storage capacity to store all of Solana’s data many times over. We don’t need to build a networking layer… that’s already done and has been powering over half of Solana’s traffic for months.
Essentially, we’re not reinventing the wheel, and we’re not building from scratch. The parts all already exist and are in place; we’re simply adapting them for an expanded use case.
Many are looking for how we plan to connect a series of widely spaced dots, and the hope is that this article helped show that the gaps between those dots are not as wide as they appear. We’re an unusual blend of proven ability to execute and long term vision.
It’s important to evaluate us by where we’ve been, by our brand and our outreach into the ecosystem — not by a thesis we force-derived out of necessity because we hadn’t built anything yet. When you have little but ideas, you are forced to produce papers in order to produce trust. We will instead continue to produce iterative solutions that build on top of one another, as we have for months.
The real opportunity for those choosing to participate in the creation of the Shadow Drive lies in the fact that you can evaluate us on solid, tangible evidence instead of papers full of ideas.
As I said at the start… we’re not an academic endeavor; this is an incremental engineering effort.