Greenhouse Retrospective and Future
By Forest Johnson
Previous post in this series: 🥳 Greenhouse Enters Alpha Test Phase!! 🎉
I did the thing! I created greenhouse almost exactly as I imagined it six years ago, in 2016.
I received a lot of feedback on the project when I shared it with the world. While most people agreed it was pretty cool, I received a lot of criticism as well:
👹 Self-hosting is dangerous and you are inviting people to get hacked! The lawyers will come after you! The world is scary and everyone should just keep hiding in the little hole that the big-tech panopticons provide!
🤮 Your website sucks! It looks bad!
Some criticism came in the form of valid questions, questions to which my answers had to be disappointing. Because greenhouse is so different & doesn't really fit into a well-known category, its architecture, feature set, and user experience confused and surprised many potential users:
🧐 How do I self-host the tunnel gateway server part of greenhouse?
You can, but it's really hard. You aren't supposed to, and you don't need to; that's the whole point. You just run the tunnel client daemon on your server computer and configure it with the CLI or GUI app. It's like this because I wanted it to be as easy as possible to get started with.
🤨 Does it support caching? Like on the "edge" server where caching is most effective?
No, it can't support caching like that because the greenhouse tunnel server can't read your traffic; the HTTP traffic stays encrypted via HTTPS. IMO this is a good thing.
🥱 Why does my page load so slowly? How can I make it faster?
It's because you live in Europe and right now there is only one greenhouse tunnel server, and it's in North America. In the future I'll have multiple servers in different regions, but for now it's just the one.
Also, the tunnel currently introduces multiple extra network round trips. In the future I can cut those extra trips roughly in half by replacing yamux with the QUIC protocol, but the traffic will always have to go through the tunnel during the first page load no matter what, so it will always have higher latency than a direct connection.
The harshest criticism, however, came in the form of usage statistics. Most people agreed it was a neat idea, and some people did try it out, but over time, it got used less and less. It did not grow organically.
- new accounts: 5.25/mo all-time avg
- traffic: 1684.93 MB/mo all-time avg
So where does this leave me? Did I fail? What does it all mean?
It's no secret that greenhouse has always been an ideologically motivated project. It was practically fundamentalist in terms of how it viewed self-hosting: The greenhouse service was supposed to make it easier for someone to run their own server and leave it on 24/7, without them having to relinquish any privacy, agency, ownership, or control. If you didn't want to run a server, greenhouse wouldn't be very useful to you.
But at the same time, in the interest of ease of use, it was designed from the ground up as a public-cloud-style Infrastructure as a Service (IaaS) provider. You pay the service some minuscule amount of money per unit of bandwidth that you consume; at the end of the month it would bill your credit card automatically. At least, that was the plan.
Even though it was designed to follow the principle of least authority and defer all security-relevant decisions and processes to the self-hoster and their server, at the end of the day greenhouse is a 3rd party service provider.
Through the combination of these two factors, I think it was always doomed to fail. On one hand, it'll probably only appeal to self-hosting fundamentalists. But on the other hand, it's a 3rd party service, and self-hosting fundamentalists tend to hate 3rd party services.
In terms of the technology and how it worked, greenhouse was fine. Great, even. But in terms of how it made people feel, it was pretty lame. And to make matters worse, because it required a server computer to be running 24/7, its usefulness to the average person was severely limited.
I spent a lot of time and effort creating a GUI for greenhouse and porting it to the Mac and Windows platforms because I wanted to include those platforms and include everyone who uses them. But I think in the end what I produced was nothing more than a curiosity. No one wants to leave their Mac or PC laptop on all the time so the server will stay up.
How to fix it? How my thinking has changed
I mentioned that the idea for greenhouse is over 6 years old at this point. A lot has changed in 6 years, both in terms of the cutting edge of self-hosted services as well as my own personal experiences with them and feelings about them.
First of all, I saw the rise of "the fediverse" (ActivityPub-compatible social-media and microblogging servers starting with mastodon) and similar networks like matrix.
I also joined / helped build and maintain infrastructure for Cyberia Computer Club, including the loosely associated VPS provider capsul.org. At first I was reluctant to spend my time on capsul because I saw it as only marginally better than the large-scale commercial offerings like DigitalOcean. Same shit, different day kind of thing. My ideology was telling me that instead of custodial services, I should be focusing on projects that allow people to have ownership (physical custody) of their data and processes.
But over time, my outlook started to change. I had a lot of fun with Cyberia as it continued to grow and develop, including renting a commercial unit & founding a new hackerspace dubbed Layer Zero.
I also read some very passionate takes on the Fediverse architecture and how it can change the way we view software and networks, like Darius Kazemi's runyourown.social and Roel Roscam Abbing (rra)'s Seven Theses On The Fediverse And The Becoming Of Floss. These ideas made a big impact on me. I think the short, pithy description on homebrewserver.club might summarize it best:
Take the ‘home’ in homebrewserver.club literally and the ‘self’ in self-hosting figuratively
That means we try to host from our homes rather than from data centres - a.k.a. ‘the cloud’ - and we try to host for and with our communities rather than just for ourselves.
That "for and with our communities" bit was the part I was missing. It's not "same shit, different day" like I had originally feared; with fediverse-style networks and similar community projects, yes, the person with the server can still surveil, censor, and falsify everything of "yours" that they host for you... But that power dynamic is different when it exists outside of commerce and capital, when it's a part of a local gift economy or federated network of "indie" & self-determined servers.
So I think I ultimately want to redesign and re-brand greenhouse/server.garden as software that folks mostly run on their own computers, and as something that can be connected to form ad-hoc federated networks.
Not just tech, but also trust
Right now greenhouse is purely a "trustless" networking service, but redesigning it this way could open up new possibilities because it would open up a new dimension of trust: The trust the fedi-users have for their server admins, and the trust that fedi-admins have for the fellow admins they federate with.
"Trustless" can be cool, but having a server to trust is incredibly practical. I can imagine:
- Github Pages / Backblaze / neocities style static content hosting and object storage
- Amazon RDS style "managed" replicated relational databases
- Kubernetes style distributed Linux container platform
- Specifically, an easy-ish way to do redundancy and failover for arbitrary server apps.
I have a lot of ideas for this; as usual I've been thinking about it a lot... In my imagination, two or more friends can set up their own home servers and install this software on them, then trust each other's servers so they'll federate with each other. Then our two-or-more admins can create accounts for anyone who wants to use the servers. The admins' friends can use them for static content publishing, as a greenhouse-style network gateway for their own server, and potentially even as a "cloud-ish" compute provider. The best part: if one of those servers goes down, the other one can pick up the slack.
It's a lot, but I think this is the direction I wanna go in. I'll probably start with the static content hosting / object storage feature, eventually integrate the threshold / greenhouse daemon tunnel gateway feature, and see how things progress from there.
I have all kinds of ideas for this; with both a tunnel gateway and static content capability, the platform could offer a kind of hybrid hosting where you run a server app on your computer or phone which is "live" while your computer is turned on, greenhouse style, but as soon as you turn it off, it falls back to a cached version that is hosted by the federated platform.
For example, folks could run their own owncast video livestreams this way without having to set up a server for it. When they turn off their computer the stream goes down, but the website stays up. I think this could be useful for all kinds of apps; it would add a new dimension of flexibility where folks can casually self-host applications on the internet, even interactive applications, from computers which aren't servers: laptops, phones, and gaming rigs which aren't on all the time.
By the way, I'm still trying to figure out what to call it. I need to come up with a name and a way to "market" it so that it makes people feel cool or "sexy" when they use it. (Thanks to my friend j3s for these insights 😛)
If you have any suggestions, leave a comment!