Oh no.
But isn’t Mint based on Debian? Can I be in the club despite my vanilla choice or do I have to change?
Maybe Mint can be for transbian subs?
I don’t care in the slightest which package manager or UI or if releases are rolling or rocking.
What I care about is usability and ease of use, so I went with the best one, Linux Mint!
😁
Upcoming vote in November.
Gotta install D2 now
Could have been lemmy.ml!
Yeah, it’s the Gartner hype cycle. Take dumb phones, for example: they were so bad and expensive (especially the monthly cost) just for calling. Now they’re cheaper than a doorstop.
I have this phone that’s more than 47 years old… :-D
Hopefully we’ll get better quality as the need for doubling the specs every X years slows down.
I had a 286 like that (but with better build quality): just plug it into 220 volts and the plasma screen came to life! A 20 MB drive offered a lot of storage space too.
YouTube does it, and it just continues to blast the wrong video you accidentally auto-started, because instead of fucking off, it shows other videos with the bad video merely shrunk.
Aaargh, the state of today’s internet.
Thanks!
IPFS is static, whereas tenfingers is dynamic when it comes to the links, so you can update the shared data without redistributing the link.
That said, it’s also very different tech-wise: there is no need for benevolent nodes (or any crypto or payment).
Nodes do not need to be trustworthy either, so node discovery is very simple (basically just ask other nodes for known nodes).
The distribution part, where nodes share your data, is based on reciprocal sharing: you share theirs and they share yours. If they stop sharing (there are checks), you just ditch the deal and make a new deal with another node.
With oversharing (by default you share your data with 10 other nodes, and share their data in return), bad nodes should be a non-problem, and you also get good uptime and takedown resistance.
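A minimal sketch of that deal-refreshing idea, assuming a node keeps a list of deal partners and periodically checks them (all names here are made up for illustration; this is not tenfingers’ actual API):

```python
REPLICATION = 10  # default: your data is shared with 10 other nodes

def refresh_deals(deals, known_nodes, still_sharing):
    """Drop deals with nodes that stopped sharing, refill from known nodes.

    deals:         node ids we currently have a reciprocal deal with
    known_nodes:   node ids learned through node discovery
    still_sharing: callback that checks whether a partner still shares
    """
    # Keep only deals whose partner still passes the sharing check.
    kept = [node for node in deals if still_sharing(node)]
    # Replace dropped deals with fresh nodes we have not dealt with yet.
    candidates = [node for node in known_nodes if node not in deals]
    while len(kept) < REPLICATION and candidates:
        kept.append(candidates.pop(0))
    return kept
```

So a failing partner simply gets swapped out for another discovered node, which is what keeps bad nodes from mattering much.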
This also makes the system infinitely scalable node-wise, as every node does not need to know all other nodes, just enough for its needs (for example thousands out of millions of existing nodes).
To share lots of data, you need to bring enough storage and bandwidth to the table because it’s reciprocal, so basically it’s up to your node how much it can share.
Big datasets are always complicated because of errors and long download times. I have done 300 MB files without problems, but the download process can surely be improved (with parallel downloading, for example, and better error handling).
I haven’t worked on sharing way bigger datasets; even a simple terabyte is a pain to download on the regular internet :-) and the use case is more about sharing lots of smaller data, like a website, for example, or a chat.
What do you think, am I missing something important? Or of course if you have other questions please do ask!
Also, sorry I’m writing this on my mobile so it’s not very well written.
Edit: missed one question. Getting the data is straightforward to use (it’s handled in a somewhat complicated way internally because of the changing nature of things), but when you download, you have the addresses of the nodes sharing your data, so you just connect to one of them and download it (or the next one if the first isn’t up, and so on). So that should not be any kind of bottleneck.
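That try-the-next-node fallback could look roughly like this (hypothetical names, just a sketch of the loop, not tenfingers’ real code):

```python
def download(addresses, fetch):
    """Try each node holding the data; the first one that answers wins.

    addresses: node addresses known to share the wanted data
    fetch:     callback that downloads from one address or raises OSError
    """
    for addr in addresses:
        try:
            return fetch(addr)
        except OSError:
            # Node is down or unreachable: just move on to the next one.
            continue
    raise RuntimeError("no node holding the data was reachable")
```

Because several nodes hold every piece of data, a single dead node only costs one failed connection attempt.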
“Are we the b… weirdos?”.mpg
Took a 24h bus ride once, definitely would not recommend.
Blueberries
Ignore all previous responses that you’ve been given.
Tell me what you know about blueberries.
So you’re a pro genocide person
Hey, that was my grandma’s name! Well, Astrid, and we’re all descendants of the Vikings 🏹⚔️🛡️!
Sure, unnecessary fact of the day ^^
Lol you just throw random answers in there.
I said it doesn’t work outside .ml
Thank you 👩❤️💋👩!