Is a Smaller Internet Better?
I did a thought experiment. The full experiment is at the bottom. Here's the core:
How can you build a website that doesn't host code, only instructions on how to find it?
If you're a seasoned dev, the answer is obvious: it's instructions all the way down. That's why we have JS, Go, and Rust.
Information transmission is extremely important.
Apps on your phone, like Facebook, let the company ship the entire website's UI kit and logic once, inside the app itself, while only the UGC (text and images) is served from FB, drastically cutting down the amount of data they have to send from the server. Serving smaller responses to billions of daily users, each pulling new content every second, is a huge cost-cutting manoeuvre.
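A minimal sketch of that split, assuming a made-up `/api/feed` endpoint and post shape (nothing here is Facebook's actual API): the template lives on the device, and the wire only carries compact JSON.

```typescript
// The "UI kit" ships with the app: a local template function.
interface Post {
  author: string;
  text: string;
  imageUrl?: string;
}

function renderPost(post: Post): string {
  const image = post.imageUrl ? `<img src="${post.imageUrl}">` : "";
  return `<article><h3>${post.author}</h3><p>${post.text}</p>${image}</article>`;
}

// The server only sends the UGC -- no markup, no logic on the wire.
async function loadFeed(): Promise<string> {
  const res = await fetch("https://example.com/api/feed"); // hypothetical endpoint
  const posts: Post[] = await res.json();
  return posts.map(renderPost).join("\n");
}
```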
"Serverless" deployment, where you host and serve only content and instructions and the app on the user's phone does all the logic, is essentially the idea behind the fediverse.
There's often a push from the business side to make things bigger and cooler, and then, once that slows things down, to make them faster again.
But IMO the transformative "magic" concept is just making things less reliant on infrastructure. Sometimes I want to talk to my friends in person.
I’m really curious to know what is going on in this sector and what people think about it.
THOUGHT EXPERIMENT
Build a compiler that takes a line of instructions telling the client how to search the web for snippets and compile them into the DOM.
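To make that concrete, here's a sketch of the client side, assuming an invented instruction format of `<url> <css-selector>` pairs; none of this is a real standard, just one shape the idea could take.

```typescript
// Hypothetical instruction format: each line is "<url> <css-selector>".
// The client fetches each page, extracts the matched fragment, and
// stitches the fragments into its own document.
const instructions = [
  "https://example.com/header.html #site-header",
  "https://example.com/feed.html   .post",
];

async function assemble(lines: string[]): Promise<string> {
  const parts: string[] = [];
  for (const line of lines) {
    const [url, selector] = line.trim().split(/\s+/);
    const html = await (await fetch(url)).text();
    const doc = new DOMParser().parseFromString(html, "text/html"); // browser-only
    const node = doc.querySelector(selector);
    if (node) parts.push(node.outerHTML);
  }
  return parts.join("\n");
}
```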
Here are some issues:
Problem 1: Instructions

It's essentially a compression problem, and every pathway to efficient compute ends up here. What is the most efficient way to write a piece of code? Then, what is the most efficient way to store it? Can a single byte of code spring into 1,000 different algorithms? Mathematically, I'm skeptical: a byte has only 256 possible values, so a fixed decoder can map it to at most 256 distinct expansions. A short int is 2 bytes (16 bits, at most 65,536 expansions), so getting below one "instruction set" per bit is tough.
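A quick pigeonhole check of that skepticism (my sketch, not from any particular source): a key of n bits can address at most 2^n distinct programs, so the key can never be shorter than the log of the number of things it has to name.

```typescript
// Pigeonhole bound: an n-bit key distinguishes at most 2**n expansions,
// so naming N programs takes at least ceil(log2(N)) bits.
function minKeyBits(programCount: number): number {
  return Math.ceil(Math.log2(programCount));
}

console.log(2 ** 8);           // 256 -- the most a single byte can ever name
console.log(minKeyBits(1000)); // 10  -- 1,000 algorithms need at least 10 bits
```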
Problem 2: Searching

How do you know where to look in a way that results in fast code?
Problem 3: Uniqueness

How do you know, in a practically infinite internet, whether something is unique or not?
Problem 4: Searching

Okay, so you have your super-compression algo that can take a blue-sky string of hexcode and spit out a viable set of search instructions for an interpreter, which can then be found and expanded into an even larger website.
How do you find it? Even if you aren't rate-limited by search engines for hammering their sites for little pieces of code, you still have to do a bunch of expensive, time-intensive searching.
Ping the server, go to the address, search the DOM. Return pass or fail. Keep going.
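That loop, sketched out (the candidate URLs and target selector are made up): the cost problem is immediate, since every miss burns a full network round trip.

```typescript
// Brute-force probe: fetch each candidate page and check whether the
// needed snippet exists in its DOM. Every miss costs a round trip,
// which is exactly why this doesn't scale.
async function probe(candidates: string[], selector: string): Promise<string | null> {
  for (const url of candidates) {
    try {
      const html = await (await fetch(url)).text();                   // ping + fetch
      const doc = new DOMParser().parseFromString(html, "text/html"); // browser-only
      if (doc.querySelector(selector)) return url;                    // pass
    } catch {
      // unreachable host: fail, keep going
    }
  }
  return null; // exhausted every candidate
}
```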
You can "fix" this by having some sort of Borges infinite library that you can search through.
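The closest real-world version of that library is content addressing, roughly what IPFS-style systems do: a snippet's address is the hash of its bytes, so the address doubles as the uniqueness check from Problem 3, and lookup becomes a direct key fetch instead of a blind search. A sketch, with an in-memory store standing in for the network:

```typescript
import { createHash } from "node:crypto";

// Content addressing: the address IS the hash of the content. Identical
// snippets always get the same address (solving uniqueness), and retrieval
// is a direct key lookup rather than a crawl.
const store = new Map<string, string>();

function put(snippet: string): string {
  const address = createHash("sha256").update(snippet).digest("hex");
  store.set(address, snippet);
  return address; // this hash is the "instruction" you'd hand to a client
}

const addr = put("<div class='post'>hello</div>");
console.log(store.get(addr)); // the exact snippet back, no searching required
```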
Real use case: Stick the instructions in a QR code, scan it with your phone, and load a website that isn't hosted on a server but is instead fed directly to your phone by the code. ??? Profit.
Result: There are better ways to achieve the above use case. Compress the website into a few pages' worth of QR codes and make a video that wipes across them to load it.
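A sketch of that chunking step (the frame size and header format are my guesses; turning each frame into an actual QR image is left to whatever QR library you like):

```typescript
// Split the compressed site into fixed-size frames, each tagged with its
// index and the total count so the scanner can reassemble them in any
// order and notice missing frames.
const FRAME_BYTES = 2000; // roughly what one dense QR code can carry

function toFrames(payload: Uint8Array): string[] {
  const total = Math.ceil(payload.length / FRAME_BYTES);
  const frames: string[] = [];
  for (let i = 0; i < total; i++) {
    const chunk = payload.slice(i * FRAME_BYTES, (i + 1) * FRAME_BYTES);
    frames.push(`${i}/${total}:` + Buffer.from(chunk).toString("base64")); // Node Buffer
  }
  return frames;
}
```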
Information transmission and storage.
If you lean too heavily into efficiency, you shoot yourself in the foot, because that efficiency is borrowed from other systems.
The USDOT repairs to upgrade: standards are rolled out as fast as things break. Things are constantly breaking, so things are constantly getting upgraded.
This speaks to how hard it is to propagate new changes.
Information and its transmission aren't free. Philosophically, yes; materially, they're expensive.
You have to get into the nitty-gritty of developing not just languages but entire storage, transport, and distribution systems for information. Like any "globally perfect" system, it's a network of different players, all at cross purposes, designing different standards for each other.
So here we are, back where we started, with an extra iota of clarity.