HTTP(S) is the protocol used to retrieve content from the Internet: files are stored on a server, and all clients download them from that single location. It works fine, but it also comes with shortcomings such as traffic costs for the content provider, a lack of resiliency if the server goes down, and a lack of persistence, as illustrated by all files hosted on the GeoCities web hosting service now being gone. Having all files hosted on a single server also makes it easy for governments or companies to censor content.
But while looking at the FOSDEM 2019 schedule yesterday, I found out that an initiative aiming to solve those shortcomings has been in development for several years. IPFS (InterPlanetary File System) is described as a peer-to-peer hypermedia protocol designed to make the web faster, safer, and more open, with the ultimate goal of replacing HTTP.
The four main advantages listed for the protocol over HTTP are as follows:
- HTTP is inefficient and costly – HTTP downloads a file from a single computer at a time, instead of getting pieces from multiple computers simultaneously. IPFS makes it possible to distribute high volumes of data with high efficiency, potentially saving up to 60% in bandwidth costs for video content.
- Web pages are deleted daily – As hosting companies fold, the content hosted on their servers eventually goes away, with the average lifespan of a webpage being a little over 1,000 days. IPFS keeps every version of your files and makes it simple to set up resilient networks for mirroring data.
- The web’s centralization limits opportunity – P2P wins over centralized servers when it comes to fighting against censorship.
- IPFS can work without Internet – Developing world. Offline. Natural disasters. Intermittent connections. IPFS enables persistent availability with or without Internet backbone connectivity.
Here’s how IPFS roughly works:
- Each file and all of the blocks within it are given a unique fingerprint (cryptographic hash).
- IPFS removes duplication across the network.
- Each network node stores only content it is interested in, and some indexing information that helps figure out who is storing what.
- When looking up files, you’re asking the network to find nodes storing the content behind a unique hash.
- Every file can be found by human-readable names using a decentralized naming system called IPNS.
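To make this more concrete, here is a rough sketch of that workflow using the go-ipfs command line tool; <CID> stands in for the content hash that the add step actually prints, and the file name is just an example:

```
# Create a local repository and peer identity, then join the network
ipfs init
ipfs daemon &

# Add a file: its cryptographic hash (CID) becomes its address
ipfs add hello.txt

# Retrieve the same content by hash from whichever nodes hold it
ipfs cat <CID>

# Publish a mutable, human-readable pointer to that hash via IPNS
ipfs name publish /ipfs/<CID>
```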
IPFS works with Linux, Mac OS X, and Windows. Paula, who will talk about IPFS at FOSDEM 2019, also recently wrote a blog post explaining how to quickly get started with the protocol on Linux. You’ll find more details in the IPFS documentation.
The video embedded below, dated 2015, demonstrates the IPFS protocol when it was still at the alpha stage.
Wow. IPFS is a way to go …
IPFS needs an efficient mutable content address, which is very difficult in P2P!
If they call this the Internet, static content addressed by hash, they might as well call it BitTorrent…
It already has.
IPFS is not censorship resistant: the network does not maintain n copies of a file on its own. If I am the only one on earth with a copy of a file, I can pull the plug and the file will be gone forever. IPFS just solves the addressing and caching problem, though.
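That is indeed how persistence works: a node only keeps content it has explicitly pinned (or recently cached), so mirroring depends on other peers choosing to pin your hash. A rough sketch with the go-ipfs CLI, where <CID> is a placeholder for the file's content hash:

```
# Nothing is replicated automatically; another node must pin the hash
# to guarantee it keeps a copy
ipfs pin add <CID>

# Unpinned content is merely cached and can be garbage-collected later
ipfs repo gc
```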
Still a long way to go; Go, for example, is a NO-GO for embedded systems, as a simple hello world binary is 1.8MB!
>Still a long way to go; Go, for example, is a NO-GO for embedded systems, as a simple hello world binary is 1.8MB!
This blog is about “embedded systems” and most of the stuff on display has hundreds of megabytes of RAM.
so much pure bullshit coming from these “interplanetary” dudes, it ain’t even funny… just plain dumb.
I always have a great laugh whenever I see clueless people declare that something that has ruled the world for about 3 decades doesn’t work because they don’t understand it, and that they will “fix” it. They completely miss the most important aspect, which is economies of scale: there are many places for actors to make money in the current ecosystem, and this is what pays for the infrastructure. Remove this and you’ll have to pay for your own access, hosting, file transfers, etc., because someone will have to. Who do you think pays for the boats unrolling submarine cables across the Atlantic Ocean? And the fiber in your street? Sure, you can have a direct connection with your neighbor, but that limits your internet’s scope… There definitely is a reason why people are paying for CDNs and various optimization products.
Also, immutability *is* a problem for the Internet. A lot of content providers will not accept the idea that any accidental leak is permanent. Reminds me of the “oops, I committed the database password to SVN, too late” situation.
And saying that HTTP is inefficient is fun when you know that it’s what drives all transport layer optimizations nowadays, resulting in mechanisms like TCP TFO and TLS 0-RTT which allow you to send a request and retrieve the response in a single RTT…
Bah, let’s just observe them and have fun.
You’re probably missing the whole point of IPFS. You can build your own CDN with IPFS in a matter of hours, and if IPFS is installed on the requesting client’s side, that client becomes part of the CDN too.
This is like an improved BitTorrent for the web, and it does not even require installing an IPFS client.
For example, I’m hosting websites and serving terabytes of files over IPFS using Cloudflare’s IPFS gateway cloudflare-ipfs.com on a domain.
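For reference, a public gateway simply maps content hashes onto ordinary HTTP(S) URLs, which is why visitors need no client-side install; a minimal sketch, with <CID> as a placeholder for a real content hash:

```
# Fetch IPFS content through Cloudflare's public gateway over plain HTTPS
curl https://cloudflare-ipfs.com/ipfs/<CID>
```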
I hope Paula’s code is better than her Japanese[1].
Otherwise this seems like yet another wishy-washy project that promises a lot but will never go anywhere.
[1] See the link to her Japanese-only blog on her FOSDEM profile: https://fosdem.org/2019/schedule/speaker/paula/.
ZeroNet is a better approximation of an uncensored Internet: https://zeronet.io/
These are different projects: ZeroNet is generally for (dynamic) websites and is very user-friendly in that respect, while IPFS is more like a generic tool to upload and access data over the network.