
  • vyatta and vyatta-based (edgerouter, etc) I would say are good enough for the average consumer.

    WTF? What galaxy are you from? Literally zero average consumers use that. They use whatever router their ISP provides, whatever is currently advertised in tech media, or whatever is sold at retailers.

    I’m not talking about budget routers. I’m talking about ALL software running on consumer routers. It’s all dogshit, closed-source, burn-and-churn firmware that barely receives security updates even while the hardware is still in production.

    Also you don’t need port forwarding and ddns for internal routing. … At home, all traffic is routed locally

    That is literally the recommended config for consumer Tailscale and any mesh VPN. Do you even know how they work? The “external dependency” you’re referring to (their coordination servers) basically operates like DDNS, supplying the DNS and endpoint discovery between mesh clients. Beyond that, all comms are P2P, including LAN access.
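    To make the DDNS comparison concrete, here’s a rough Python sketch (all names, keys, and IPs are made-up placeholders) of the per-node WireGuard peer config that a coordination server, whether Tailscale’s control plane or an open-source alternative like Headscale, effectively generates and keeps current for every node in the mesh:

    ```python
    # Illustration only: the bookkeeping a mesh coordination server automates.
    # It tracks each node's public key and current endpoint (much like DDNS
    # tracks a changing IP) and pushes that to every peer; after that, traffic
    # flows peer-to-peer over WireGuard.

    PEERS = {
        "laptop":  {"pubkey": "<LAPTOP_PUBKEY>",  "endpoint": "203.0.113.10:51820", "vpn_ip": "100.64.0.1"},
        "homelab": {"pubkey": "<HOMELAB_PUBKEY>", "endpoint": "198.51.100.7:51820", "vpn_ip": "100.64.0.2"},
    }

    def wireguard_peer_config(me: str) -> str:
        """Render the [Peer] stanzas node `me` needs to reach everyone else."""
        stanzas = []
        for name, peer in PEERS.items():
            if name == me:
                continue
            stanzas.append(
                "[Peer]\n"
                f"# {name}\n"
                f"PublicKey = {peer['pubkey']}\n"
                f"Endpoint = {peer['endpoint']}\n"     # re-pushed whenever the peer's IP changes
                f"AllowedIPs = {peer['vpn_ip']}/32\n"
                "PersistentKeepalive = 25\n"           # keeps NAT mappings open so P2P links survive
            )
        return "\n".join(stanzas)

    print(wireguard_peer_config("laptop"))
    ```

    None of your LAN traffic touches their servers; they just answer “where is homelab right now?”, which is the same job DDNS does.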

    Everything else you mention is moot because Tailscale, Nebula, etc. all have open-source server alternatives (e.g. Headscale, in Tailscale’s case) that are far more robust and foolproof than rolling your own VPS and WireGuard mesh.

    My argument is that “LAN access”, with all the “smart” devices and IoT surveillance-capitalism spyware on it, is the weakest link, and relying on mesh VPN software to carve out a private overlay network is significantly more secure than relying on the open LAN access handled by consumer routers.

    Just because you’re commenting on selfhosted on Lemmy doesn’t mean you should recommend the most complex and convoluted approach, especially if you don’t even know how the underlying tech actually works.


  • What is the issue with the external dependency? I would argue that consumer routers have near-universal shit security, that networking is too complex for the average user, and that there’s greater risk in opening ports and provisioning your own VPN server on consumer software/hardware. The port forwarding and DDNS are essentially “external dependencies” too.

    Mesh VPN clients are all open source. I believe Tailscale is currently implementing a feature (Tailnet Lock) where new devices can’t connect to your mesh without pre-approval from your own authorized devices, even if they pass external authentication and 2FA, removing the dependency on Tailscale’s servers for granting authorization post-authentication.




  • Not really. The problem with FOSS licensing is that it was too altruistic: the belief was that if enough users and corporations depended on the code, the community would collectively do the work necessary to maintain the project. Instead, capitalism mostly chose to exploit FOSS as free labor, without any reciprocal investment. Corporations raise an enormous number of issues and consume a huge amount of FOSS developer time, without paying their own staff to fix the bugs they need resolved in the software their products depend on. At that point the FOSS developer is no longer a FOSS developer; they’re the unpaid labor of a corporation. Sure, FOSS devs could just ignore external inputs, but that’s not easy to do when you’ve invested years of your life in a project. Exploiting kindness may be legal, but it should never be justified or tolerated.

    Sure, FOSS licenses legally permit that kind of use, but just because homeless shelters let anyone eat their food and sleep in their beds doesn’t make the rich man who exploits that charity ethically or morally justified. The rich man who exploits that charity (i.e. free labor) and offers nothing in return is a scummy dog cunt; there are no two ways about it. Lecherous parasites can destroy the entire charity; they can be the difference between sustainability and burnout.

    FOSS should always be free for personal, non-commercial, and non-profit use, but once someone in the chain starts depending on FOSS to generate income and profit, some of that profit should always be reinvested in those dependencies. That’s what FOSS is now learning: to reject the exploitation and greed of lecherous parasites.




  • I’ll bite, too. The reason the status quo allows systemic wage stagnation for existing employees is very simple: historically, the vast majority of employees do not hop around!

    Most people are not high performers and will settle for job security (or the illusion of it) and sunk costs over the opportunity of making 10-20% more money. Most people don’t build extensive networks, hate interviewing, and hate the pressure and uncertainty of having to establish themselves at a new company. Plus, once you have a mortgage or kids, you don’t have the time or energy to job hunt and interview, let alone the savings to cover lost income if the transition fails.

    Obviously this is a gamble for businesses, and it can turn out foolish for high-skilled, in-demand roles (we’ve all seen products stagnate and get destroyed by competition), but the status quo also means that corporations are literally structured, managerially and financially, towards acquisition. So all of the data they capture to make decisions, and all of the decision makers, overlook the fact that their business is held together by the 10-30% of underappreciated, highly experienced staff.

    It’s essentially the exact same reason companies offer the best deals to new customers instead of rewarding loyalty. Most of the time the gamble pays off, and it’s ultimately more profitable to screw both your employees and your customers!



  • I believe this is what some compression algorithms do if you compress similar photos into a single archive. It sounds like that’s what you want: archive each day’s photos together, have Immich cache the thumbnails, and only decompress the originals when you view them at full resolution. Maybe test some algorithms like zstd against a group of similar photos vs compressing them individually?
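    If you want to try that, here’s a minimal sketch using the `zstandard` pip package (the folder path and glob are placeholders): it compares compressing a day’s photos as one solid blob against compressing each file individually.

    ```python
    # Minimal test: solid (grouped) vs per-file zstd compression.
    # Assumes `pip install zstandard`; the photo folder is a placeholder.
    import pathlib
    import zstandard as zstd

    files = sorted(pathlib.Path("photos/2023-07-18").glob("*.jpg"))
    blobs = [f.read_bytes() for f in files]
    original = sum(len(b) for b in blobs)

    cctx = zstd.ZstdCompressor(level=19)  # high level = bigger match window

    per_file = sum(len(cctx.compress(b)) for b in blobs)
    grouped = len(cctx.compress(b"".join(blobs)))

    print(f"original: {original:,} bytes")
    print(f"per-file: {per_file:,} bytes ({per_file / original:.1%})")
    print(f"grouped:  {grouped:,} bytes ({grouped / original:.1%})")
    # Expect ratios near 100% for JPEGs: they're already entropy-coded, so
    # grouping only helps if the files actually share byte-level redundancy.
    ```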

    FYI, file-system deduplication works on content hashes: only exact 1:1 binary duplicates share the same hash.
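    To see that in practice, here’s a small sketch (the directory is a placeholder) that groups files by SHA-256 digest; only byte-identical copies land in the same bucket, however visually similar two photos are.

    ```python
    # Group files by SHA-256 content hash: only exact binary duplicates
    # collide. Near-identical photos or re-encodes of the same shot get
    # different digests.
    import hashlib
    import pathlib
    from collections import defaultdict

    def sha256_of(path: pathlib.Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    groups = defaultdict(list)
    for p in pathlib.Path("photos").rglob("*.jpg"):
        groups[sha256_of(p)].append(p)

    for digest, paths in groups.items():
        if len(paths) > 1:  # true 1:1 duplicates only
            print(digest[:12], *paths)
    ```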

    Also, modern image and video codecs are already about as optimized as computer scientists can currently make them for consumer hardware, which is why compressing a JPG or MP4 offers negligible savings and sometimes even increases the file size.