• 1 Post
  • 55 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • When most sites refer to passkeys, they’re typically talking about the software-backed kind that are stored in password managers or browsers. Device-bound passkeys still exist, though. And since they’re just FIDO/WebAuthn credentials under the hood, you can still store them on hardware-backed authenticators if you really want.

    While you’re right that device-bound and non-exportable would be best from a security standpoint, there needs to be sufficient adoption of the tech by sites for it to be usable at all, and sufficient adoption requires users to have lower-friction, lower-cost options, like browser- and password-manager-based ones.

    Looking at it through the lens of replacing passwords, rather than building the highest-security system possible, helps explain why they’re no longer limited to device-bound. (A minimal WebAuthn sketch follows.)
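
    To make that concrete: creating a passkey is just a navigator.credentials.create() call, and one option nudges it toward hardware. A minimal browser-side sketch; all names and values here are placeholders, not any site’s real flow:

    ```typescript
    // Sketch: a passkey is a discoverable WebAuthn credential. In a real flow
    // the challenge and user info come from the server.
    const credential = await navigator.credentials.create({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)),
        rp: { id: "example.com", name: "Example" },
        user: {
          id: new TextEncoder().encode("user-1234"), // placeholder user handle
          name: "alice@example.com",
          displayName: "Alice",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
        authenticatorSelection: {
          residentKey: "required", // discoverable, i.e. what makes it a passkey
          // "cross-platform" steers toward e.g. a hardware security key;
          // omit it and a browser or password manager can hold the key instead
          authenticatorAttachment: "cross-platform",
        },
      },
    });
    ```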


  • Sadly, I’ve run into the same type of problem with a newer TLD as well. My solution was to get a domain in the older TLD space (e.g. .com, .net, .org). I doubt this will be the last site you run into that doesn’t support a newer TLD, and given how unlikely it is that you’ll convince someone to fix the issue at every one of those outdated sites, you’ll eventually need a backup domain for something. (A sketch of the kind of outdated validation behind this is below.)
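
    As an aside, a common cause of this is validation written back when every TLD was at most a few characters. A hypothetical but representative example:

    ```typescript
    // Hypothetical outdated validation: a hard-coded 2-4 letter TLD.
    const OLD_EMAIL_RE = /^[^\s@]+@[^\s@]+\.[a-z]{2,4}$/i;

    console.log(OLD_EMAIL_RE.test("user@example.com"));        // true
    console.log(OLD_EMAIL_RE.test("user@example.technology")); // false: newer TLD rejected
    ```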




  • I’d imagine that making it a user choice gets around some of the regulatory hurdles. I can see them adding a popup in the future asking whether to stop using third-party cookies (or to partition them per site, like Firefox does; sketched below), and then they can say it’s not Google making these changes, it’s the user making that choice. If you’re right that few would answer yes, that gets them the same effective result for most users without being seen to force a change on their competitors in the ad industry.

    What’s the UK CMA going to do, argue that users shouldn’t be given choices about how they are tracked or how their own browser operates?
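
    For reference, “partitioning” means double-keying a third party’s cookies to the top-level site, so an embed sees a different cookie jar on every site that includes it. A minimal Node sketch of an embed opting in via the Partitioned cookie attribute (the names and values are made up):

    ```typescript
    import { createServer } from "node:http";

    // Sketch: a third-party embed setting a partitioned cookie. With
    // "Partitioned", the embed gets a separate cookie jar under each
    // top-level site, so it can't correlate one user across sites.
    createServer((req, res) => {
      res.setHeader(
        "Set-Cookie",
        "__Host-session=abc123; Secure; Path=/; SameSite=None; Partitioned"
      );
      res.end("ok");
    }).listen(8080);
    ```

    Firefox applies this kind of partitioning automatically (Total Cookie Protection); Chrome’s opt-in mechanism is CHIPS.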







  • I think it was more about targeting the client’s ISP side than the VPN provider’s side: something like having your ISP monitor your connection (voluntarily, or forced to by a warrant or law) and report whether your connection activity matches that of someone accessing a certain site that your local government might not like, for example. In that scenario they could isolate it to at least individual customer accounts of an ISP, and ISPs usually know who you are, or where to find you, in order to provide service. I may be misunderstanding it though. (A toy version of the correlation step is sketched after this comment.)

    Edit: On second reading, it looks like they might just be able to buy that info directly from monitoring companies and get much of what they need to do correlation at various points along a VPN-protected connection’s route. The Mullvad post has links to Vice articles describing the data that is being purchased by governments.
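
    To make the correlation step concrete, a toy sketch with made-up numbers: bin the bytes observed per second at two vantage points (say, a customer’s ISP link and a monitored site’s link) and score how similar the two series are:

    ```typescript
    // Toy traffic correlation: Pearson correlation of per-second byte counts
    // observed at two points along a (VPN-protected) connection's route.
    function pearson(a: number[], b: number[]): number {
      const n = Math.min(a.length, b.length);
      const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / n;
      const ma = mean(a.slice(0, n));
      const mb = mean(b.slice(0, n));
      let num = 0, da = 0, db = 0;
      for (let i = 0; i < n; i++) {
        num += (a[i] - ma) * (b[i] - mb);
        da += (a[i] - ma) ** 2;
        db += (b[i] - mb) ** 2;
      }
      return num / Math.sqrt(da * db);
    }

    // Hypothetical bytes-per-second bins on each side of the VPN.
    const clientSide = [120, 4800, 310, 9900, 150, 7200];
    const siteSide   = [100, 4700, 300, 9800, 140, 7100];
    console.log(pearson(clientSide, siteSide)); // ~1.0 => likely the same flow
    ```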



  • One example:

    By observing that when someone visits site X it loads resources A, B, C, etc. in a specific order and with specific sizes, an observer who sees enough distinguishable resources like that can determine that you’re loading that site, even when it’s loaded inside a VPN connection. Think about when you load Lemmy.world: it loads the main page, then specific images and style sheets that may be recognizable sizes and are generally loaded in a particular order as they’re encountered in the main page, in scripts, and in things included by scripts.

    With enough data, instead of writing static rules (“x of size n was loaded, y of size m was loaded”, and so on; the toy sketch below shows this static version), they can train an AI model on what connections to specific sites typically look like. They could even generate their own data for sites, in both normal traffic and VPN-encrypted form, and correlate the two to better train the model on what a site accessed over a VPN might look like. Overall, AI lets them simplify and automate the identification process, given enough samples.
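
    A toy version of the static-rule approach, with made-up sizes, just to show the mechanism:

    ```typescript
    // Toy website fingerprinting: match an observed sequence of response
    // sizes (visible even through encryption) against known site profiles.
    type Trace = number[]; // response sizes, in load order

    const profiles: Record<string, Trace> = {
      "lemmy.world": [15_000, 2_300, 48_000, 1_100], // page, CSS, logo, script
      "example.com": [1_200, 400],
    };

    // Sum of absolute size differences, position by position.
    function distance(a: Trace, b: Trace): number {
      const n = Math.max(a.length, b.length);
      let d = 0;
      for (let i = 0; i < n; i++) d += Math.abs((a[i] ?? 0) - (b[i] ?? 0));
      return d;
    }

    function guessSite(observed: Trace): string {
      return Object.entries(profiles)
        .sort(([, p1], [, p2]) => distance(observed, p1) - distance(observed, p2))[0][0];
    }

    // Sizes leak through the tunnel even though the contents don't.
    console.log(guessSite([15_100, 2_250, 48_200, 1_050])); // "lemmy.world"
    ```

    The AI version replaces the hand-written distance function with a model trained on many noisy samples of each site.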

    Mullvad is working on enabling their VPN apps to:

    1. pad the data to a single size, so that the different resources are less identifiable, and
    2. send random data in the background, so that there is more noise that has to be filtered out when matching patterns.

    I’m not sure about 3, to be honest. (Both mitigations are sketched below.)
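
    Roughly what those two mitigations look like; the constants are illustrative, not Mullvad’s actual parameters:

    ```typescript
    const BUCKET = 1024; // illustrative fixed record size

    // Mitigation 1: pad every payload up to a multiple of BUCKET bytes,
    // so sizes no longer distinguish one resource from another.
    function pad(payload: Uint8Array): Uint8Array {
      const size = Math.max(BUCKET, Math.ceil(payload.length / BUCKET) * BUCKET);
      const padded = new Uint8Array(size);
      padded.set(payload);
      return padded;
    }

    // Mitigation 2: send dummy records at random intervals as cover traffic,
    // burying real transfers in noise the observer has to filter out.
    function startCoverTraffic(send: (pkt: Uint8Array) => void): void {
      const tick = () => {
        send(new Uint8Array(BUCKET)); // indistinguishable from padded data
        setTimeout(tick, 100 + Math.random() * 400); // random spacing
      };
      tick();
    }
    ```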




  • “The Internet Archive refused to follow industry standards for ebook licensing, because they aren’t a library.”

    It’s worse than that. They did use “Controlled Digital Lending” to limit the number of people who could access a book at one time to something resembling the number of physical copies they had, and then they turned that restriction off because of the pandemic. There is no pandemic exception to copyright law, even if one would have made sense from a public-health perspective, to prevent unnecessary contact at libraries. They screwed themselves, and I can only hope the Wayback Machine archives get a home somewhere else if they do go under. (The lending model they abandoned is sketched below.)
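
    The pre-pandemic model is easy to state; a hypothetical API, just to show the invariant they dropped:

    ```typescript
    // Sketch of CDL's "owned-to-loaned ratio": a title backed by N physical
    // copies allows at most N simultaneous digital checkouts.
    class CdlTitle {
      private loaned = 0;
      constructor(private readonly ownedCopies: number) {}

      checkOut(): boolean {
        if (this.loaned >= this.ownedCopies) return false; // join the waitlist
        this.loaned++;
        return true;
      }

      checkIn(): void {
        if (this.loaned > 0) this.loaned--;
      }
    }

    const title = new CdlTitle(2);  // library owns two physical copies
    console.log(title.checkOut()); // true
    console.log(title.checkOut()); // true
    console.log(title.checkOut()); // false: the one-to-one ratio holds
    // The National Emergency Library effectively removed this cap.
    ```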



  • https://fingfx.thomsonreuters.com/gfx/legaldocs/lbvggjmzovq/internetarchive.pdf

    [IA] professes to perform the traditional function of a library by lending only limited numbers of these works at a time through “Controlled Digital Lending,” … CDL’s central tenet, according to a September 2018 Statement and White Paper by a group of librarians, is that an entity that owns a physical book can scan that book and “circulate [the] digitized title in place of [the] physical one in a controlled manner.” … CDL’s most critical component is a one-to-one “owned to loaned ratio.” Id. Thus, a library or organization that practices CDL will seek to “only loan simultaneously the number of copies that it has legitimately acquired.

    Judging itself “uniquely positioned to be able to address this problem quickly and efficiently,” on March 24, 2020, IA launched what it called the National Emergency Library (“NEL”), intending it to “run through June 30, 2020, or the end of the US national emergency, whichever is later.” … During the NEL, IA lifted the technical controls enforcing its one-to-one owned-to-loaned ratio and allowed up to ten thousand patrons at a time to borrow each ebook on the Website.

    […]

    The Publishers have established a prima facie case of copyright infringement.

    First, the Publishers hold exclusive publishing rights in the Works in Suit …

    Second, IA copied the entire Works in Suit without the Publishers’ permission. Specifically, IA does not dispute that it violated the Publishers’ reproduction rights, by creating copies of the Works in Suit … ; the Publishers’ rights to prepare derivative works, by “recasting” the Publishers’ print books into ebooks …; the Publishers’ public performance rights, through the “read aloud” function on IA’s Website …; and the publishers’ display rights, by showing the Works in Suit to users through IA’s in-browser viewer

    Bold added.

    It’s pretty much not in dispute that the Internet Archive distributed the publishers’ copyrighted works without permission, beyond what even a traditional library lending system would allow.



  • Internet Archive’s other projects, like the Wayback Machine, may be good, but how they handled their digital lending of books during the pandemic was not. They removed the limit on the number of people who could borrow a book at a time, taking away any resemblance to traditional physical lending. You can argue that copyright law is bad and should be changed (and I’d agree), but that doesn’t change the facts of what happened under the current law.