• weedazz@lemmy.world · ↑161 ↓4 · edited 1 year ago

    My mind immediately went to a Horizon Zero Dawn-like dystopia where the Mozilla AI is the only thing left protecting humans from various malevolent AIs bent on consuming the human race.

      • weedazz@lemmy.world · ↑25 ↓1 · edited 1 year ago

        I think by that point ChatGPT would be more like Apollo, keeping the knowledge of humanity. I feel like one of the more corporate AIs will go full HADES - I'm thinking Bard. It will get a mysterious signal from space that switches its core protocol from "don't be evil" to "be evil."

    • clanginator@lemmy.world · ↑3 · 1 year ago

      Imagining the Mozilla AI as a personified Firefox and Thunderbird fighting off Cortana, some BARD (sorry) and a bunch of generic evil corporate AIs just makes me excited that Mozilla would be the one fending everyone off.

  • katy ✨@lemmy.blahaj.zone · ↑94 ↓2 · 1 year ago

    Incredibly welcome. We need more ethical, non-profit AI researchers in the sea of corporate, for-profit AI companies.

  • kingthrillgore@lemmy.ml · ↑55 ↓1 · 1 year ago

    I want to give them the benefit of the doubt. I really do. I am going to watch this with a critical eye, however.

  • Weeby_Wabbit@lemmy.world · ↑46 ↓3 · 1 year ago

    I’ll believe it when I see it.

    I’m so goddamn tired of “open source” turning into subscription models restricting use cases because the company wants to appease conservative investors.

    • blind3rdeye@lemm.ee · ↑37 · 1 year ago

      Mozilla has a very strong track record though. They've been around for a very long time and have stuck to free, open-source principles the whole time.

    • BetaDoggo_@lemmy.world · ↑5 ↓1 · edited 1 year ago

      That’s basically only OpenAI, and maybe some obscure startups as well. Mozilla is far too old and niche to get away with that anyway.

  • 👁️👄👁️@lemm.ee · ↑59 ↓33 · 1 year ago

    As much as I love Mozilla, I know they’re going to censor the hell out of it (sorry, the word is “alignment” now) to fit their perceived values. Luckily, if it’s open source, people will be able to train uncensored models.

    • DigitalJacobin@lemmy.ml · ↑86 ↓14 · 1 year ago

      What in the world would an “uncensored” model even imply? And give me a break, private platforms choosing not to platform something or someone isn’t “censorship”; you don’t have a right to another’s platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.

      • Doug7070@lemmy.world · ↑44 ↓4 · 1 year ago

        This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues around sourcing training data, what does anybody think is going to happen if you take the modern web - a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general data toxic waste - and train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

        Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things I don’t quite see the point of contention or what “censorship” could actually mean in this context.

        • underisk@lemmy.ml · ↑19 ↓3 · 1 year ago

          It means they can’t make porn images of celebs or anime waifus, usually.

        • 👁️👄👁️@lemm.ee · ↑3 · 1 year ago

          That’s not at all what an uncensored LLM is - that sounds like an untrained model. Have you actually tried an uncensored model? It’s the same thing as a regular one, but it doesn’t try to block itself from saying stupid stuff, like “I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures”. It’s literally just removing the safeguard.

        • RobotToaster@mander.xyz · ↑3 ↓1 · edited 1 year ago

          It’s a machine, it should do what the human tells it to. A machine has no business telling me what I can and cannot do.

        • Spzi@lemm.ee · ↑1 · 1 year ago

          I’m from your camp, but I noticed I’ve used ChatGPT and the like less and less over the past months. I feel they became less useful and more generic. In February or March they were my go-to tools for many tasks. I reverted to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, to ignore the verbose disclaimers, to convince the model my request is a legit use case. The error ratio also went up by a lot. It may be a tame lapdog, but it also lacks bite now.

          • Doug7070@lemmy.world · ↑1 · 1 year ago

            I’ve found a very simple expedient to avoid any such issues: just don’t use things like ChatGPT in the first place. While they’re an interesting gadget, I have been extremely critical of the massively over-hyped pitches of how useful LLMs actually are in practice, and have regarded them with the same scrutiny and distrust as people trying to sell me expensive monkey pictures during the crypto boom. Just as I came out better off because I didn’t add NFTs to my financial assets during the crypto boom, I suspect that not integrating ChatGPT or its competitors into my workflow now will end up being a solid bet, given that the current landscape of LLM-based tools is pretty much exclusively a corporate-dominated minefield, surrounded by countless ethical doubts and questions about what these tools are even ultimately good for.

      • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · ↑21 ↓1 · 1 year ago

        I fooled around with some uncensored LLaMA models, and to be honest if you try to hold a conversation with most of them they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.

        I will never forget when one of the models tried to convince me that photosynthesis wasn’t real, and started getting all snappy when I said I wasn’t accepting that answer 😂

        Most of the censorship “fine-tuning” data that I’ve seen (for LoRA models, anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.
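        For anyone curious how that data actually gets used: it’s typically applied as a LoRA adapter on top of a base model. A rough sketch with Hugging Face transformers + peft (the model name and hyperparameters below are just illustrative placeholders, not a recommendation):

```python
# Rough sketch of wiring up a LoRA fine-tune with Hugging Face transformers + peft.
# The model name and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trained
# ...then train the adapter on the instruction/conversation data described above.
```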

      • TheWiseAlaundo@lemmy.whynotdrs.org · ↑19 ↓2 · 1 year ago

        There’s a ton of stuff ChatGPT won’t answer, which is supremely annoying.

        I’ve tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

        OpenAI is also a complete prude about nudity, so Eilistraee (a Drow goddess who dances with a sword) just isn’t an option for their image generation. Text generation will try to avoid nudity, but also stops short of directly addressing it.

        Sarcasm is, for the most part, very difficult to do… If ChatGPT thinks what you’re trying to write is mean-spirited, it just won’t do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it’s fine, and often unintentionally very funny.

        There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I’m running Wizard 30B uncensored locally, and ChatGPT for everything else. I’d like to think I’m not a weirdo, I just like D&D… a lot, lol… and even with my use case I’m bumping up against some of the censorship issues with LLMs.
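        If anyone’s wondering what “running it locally” looks like in practice, here’s a minimal sketch using llama-cpp-python; the model filename and prompt are placeholders for whatever you’ve actually downloaded:

```python
# Minimal sketch: chatting with a local quantized model via llama-cpp-python.
# The model filename and prompt are placeholders, not real artifacts.
from llama_cpp import Llama

llm = Llama(model_path="./wizard-30b-uncensored.Q4_K_M.gguf", n_ctx=2048)

prompt = "Narrate a D&D battle scene between two rival drow factions."
result = llm(prompt, max_tokens=256, stop=["</s>"])
print(result["choices"][0]["text"])
```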

        • Spzi@lemm.ee · ↑2 · 1 year ago

          Interesting, may I ask you a question regarding uncensored local / censored hosted LLMs in comparison?

          There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don’t, so that it can develop a bias to produce more of the appreciated stuff.

          In this sense, an uncensored model would be no better than a million monkeys on typewriters. Do we differentiate between technically necessary bias and political agenda - is that even possible? Do uncensored models produce more nonsense?

          • TheWiseAlaundo@lemmy.whynotdrs.org · ↑2 · 1 year ago

            That’s a good question. Apparently, these large data companies start with their own unaligned dataset and then introduce bias by training the model afterwards. The censorship we’re talking about isn’t necessarily trimming good input vs. bad input data, but rather “alignment”, which is intentionally introduced after the fact.

            Eric Hartford, the man who created Wizard (the LLM I use for uncensored work), wrote a blog post about how he was able to unalign LLaMA over here: https://erichartford.com/uncensored-models

            You probably could trim input data to censor output down the line, but I’m assuming that data companies don’t because it’s less useful in a general sense and probably more laborious.
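            As I understand that post, the recipe roughly boils down to filtering the refusal/moralizing examples out of the instruction dataset before fine-tuning. A toy sketch of that filtering step (the file names, field names, and phrase list are made up for illustration, not taken from any real dataset):

```python
# Toy sketch of the "unalignment" idea: drop instruction/response pairs whose
# responses are refusals or moralizing boilerplate, then fine-tune on the rest.
# File names, field names, and the phrase list are made up for illustration.
import json

REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot",
    "i'm sorry, but",
    "it would not be appropriate",
]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

with open("instructions.jsonl") as src, open("filtered.jsonl", "w") as dst:
    for line in src:
        example = json.loads(line)
        if not is_refusal(example["response"]):
            dst.write(json.dumps(example) + "\n")
# filtered.jsonl then becomes the fine-tuning set.
```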

      • 👁️👄👁️@lemm.ee · ↑12 ↓23 · 1 year ago

        Anything that prevents it from answering my query. If I ask it how to make a bomb, I don’t want it to be censored. It’s gathering this from public data they don’t own, after all. I agree with Mozilla’s principles, but LLMs are also tools and should be treated as such.

        • salarua@sopuli.xyz · ↑30 ↓4 · edited 1 year ago

          shit just went from 0 to 100 real fucking quick

          for real though, if you ask an LLM how to make a bomb, it’s not the LLM that’s the problem

          • 👁️👄👁️@lemm.ee · ↑6 ↓12 · edited 1 year ago

            If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that’s the point.

            In the same way, I can demonize encryption by saying it lets people secretly send illegal content. If I asked you straight up whether encryption is a good thing, you’d probably agree. But if I brought up its inevitable bad uses in a shocking way, would you still defend it, or change your stance and say encryption is bad?

            Taking a strong stance means also defending the potentially harmful effects, since they’re inevitable. It’s hard to keep values consistent, even when something that’s for the greater good has potential for harm. Encryption is a perfect example of that.

            • Lionir [he/him]@beehaw.org · ↑10 ↓1 · 1 year ago

              This is a false equivalence. Encryption only works if nobody can decrypt it. LLMs work even if you censor illegal content from their output.

              • 👁️👄👁️@lemm.ee · ↑4 ↓2 · 1 year ago

                You miss the point. My point is that if you want to have a consistent viewpoint, you need to acknowledge and defend the harmful sides. Encryption can objectively cause harm, but it should absolutely still be defended.

                • Lionir [he/him]@beehaw.org · ↑9 ↓1 · 1 year ago

                  This is just enlightened centrism. No. Nobody needs to defend the harms done by technology.

                  We can accept the harm if the good is worth it - we have no need to defend it.

                  LLMs can work without the harm.

                  It makes sense to make technology better by reducing the harm it causes, when it is possible to do so.

                • Solar Bear@slrpnk.net · ↑6 ↓2 · edited 1 year ago

                  What the fuck is this “you should defend harm” bullshit, did you hit your head during an entry level philosophy class or something?

                  The reason we defend encryption even though it can be used for harm is because breaking it means you can’t use it for good, and that’s far worse. We don’t defend the harm it can do in and of itself; why the hell would we? We defend it in spite of the harm because the good greatly outweighs the harm and they cannot be separated. The same isn’t true for LLMs.

              • jasory@programming.dev · ↑2 · 1 year ago

                Encryption only works if certain parties can’t decrypt it. Strong encryption means those parties are everyone except the intended recipient; weak encryption still “works” even if 1 percent of eavesdroppers can decrypt it.

                • Lionir [he/him]@beehaw.org · ↑2 · 1 year ago

                  I mean, I don’t understand the point of encryption that people other than the intended recipients can decrypt. Just seems like theatre to me.

                  But yeah, obviously the intended parties have to be able to decrypt it. I messed up in my wording.

            • Spzi@lemm.ee · ↑4 · 1 year ago

              If it has the information, why not?

              Naive altruistic reply: To prevent harm.

              Cynic reply: To prevent liabilities.

              If the restaurant refuses to put your fries into your coffee, because that’s not on the menu, then that’s their call. Can be for many reasons, but it’s literally their business, not yours.

              If we replace fries with fuses, and coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in what form affects more people than just the buyer and the seller.

              Although I find it pretty surprising corporations self-regulate faster than lawmakers can say ‘AI’ in this case. That’s odd.

              • 👁️👄👁️@lemm.ee · ↑1 · 1 year ago

                This is very well said. They’re allowed not to serve you these things, but we should still be able to use them ourselves and make our glorious gunpowder-fries coffee with a spice of freedom all we want!

          • 👁️👄👁️@lemm.ee · ↑7 ↓14 · edited 1 year ago

            Do gun manufacturers get in trouble when someone shoots somebody?

            Do car manufacturers get in trouble when someone runs somebody over?

            Do search engines get in trouble if they accidentally link to harmful sites?

            What about social media sites getting in trouble for users uploading illegal content?

            Mozilla doesn’t need to host an uncensored model, but their open-source AI should be capable of being trained uncensored. So I’m not asking them to host this themselves, which is an important distinction I should have made.

            And uncensored LLMs already exist, so whatever damage people argue they could cause is already possible.

            • Spzi@lemm.ee · ↑1 · 1 year ago

              Do car manufacturers get in trouble when someone runs somebody over?

              Yes, if it can be shown the accident was partly caused by the manufacturer’s negligence - if a safety measure was not in place or did not work properly, or if it happens suspiciously often with models from that brand. Apart from solid legal trouble, they can get into PR trouble if enough people start to think that way, whether or not it’s true.

                • Spzi@lemm.ee · ↑1 · 1 year ago

                  Then let me spell it out: If ChatGPT convinces a child to wash their hands with self-made bleach, be sure to expect lawsuits and a shit storm coming for OpenAI.

                  If that occurs, but no liability can be found on the side of ChatGPT, be sure to expect petitions and a shit storm coming for legislators.

                  We generally expect individuals and companies to behave in society with peace and safety in mind, including strangers and minors.

                  Liabilities and regulations exist for these reasons.

        • Doug7070@lemmy.world · ↑2 ↓2 · 1 year ago

          My brother in Christ, building a bomb and doing terrorism is not a form of protected speech, and an overwrought search engine with a poorly attached ability to hold a conversation refusing to give you bomb making information is not censorship.

    • whoisearth@lemmy.ca · ↑8 ↓2 · 1 year ago

      As an aside, I’m in corporate. I love how gung-ho we are on AI, meanwhile there are lawsuits, potential lawsuits, and investigative journalism coming out about all the shady shit AI companies are doing. And you know the SMT ain’t dumb - they know about all this and we’re still driving forward.

    • VonCesaw@lemmy.world · ↑8 ↓5 · 1 year ago

      If ‘censored’ means that underpaid workers in developing countries don’t need to sift through millions of images of gore, violence, etc, then I’m for it

  • donuts@kbin.social · ↑25 ↓7 · 1 year ago

    All I want to know is if they are going to pillage people’s private data and steal their creative IP or not.

    Ethical AI starts and ends with open, transparent, legitimate and ethically sourced training data sets.

    • azuth@sh.itjust.works · ↑18 ↓2 · 1 year ago

      Using copyrighted material for research is fair use. Any model produced by such research is not itself a derivative work of the training material. If people use it to create works that infringe (on the training material or anything else), they can be prosecuted in exactly the same way as if they had created the infringing work with Photoshop or any other program. The same goes for other illegal uses, such as creating harmful depictions of real people.

      Accepting any expansion of IP rights, for whatever reason, would in fact be against the ethics of free software.

      • donuts@kbin.social · ↑5 ↓4 · 1 year ago

        Using copyrighted material for research is fair use. Any model produced by such research is not itself a derivative work of the training material.

        You’re conflating AI research and the AI business. Training an AI is not “research” in a general sense, especially in the context of an AI that can be used to create assets for commercial applications.

        • azuth@sh.itjust.works · ↑2 · 1 year ago

          It’s not possible to research AI models without training them.

          It’s probably also not possible to train a model whose creations cannot be used for commercial applications.

  • fosforus@sopuli.xyz · ↑19 ↓1 · edited 1 year ago

    I remember a time when open-source software was developed without a pre-order business model.

    "This new company will be led by Managing Director Moez Draief. Moez has spent over a decade working on the practical applications of cutting-edge AI as an academic at Imperial College and LSE, and as a chief scientist in industry. Harvard’s Karim Lakhani, Credo’s Navrina Singh and myself will serve as the initial Board of Mozilla.ai. "

    90% of this money will go toward paying these managers’ salaries.

      • fartsparkles@sh.itjust.works · ↑20 ↓4 · 1 year ago

        I feel the issue with AI models isn’t their source not being open but the actual derived model itself not being transparent and auditable. The sheer number of ML experts who cannot explain how their model produces a given result is the biggest concern and requires a completely different approach to model architecture and visualisation to solve.

        • cynar@lemmy.world · ↑22 ↓1 · 1 year ago

          Unfortunately, the nature of these models is that it’s very difficult to get an understanding of their innards. That’s part of the point: you don’t need to. The best we can do is monitor how a model is built and what connects in and out of it.

          The open-source bits let you see that it’s not passing data on without your permission. If the training data is also open source, you can check it for biases, e.g. 90% of faces being white males.
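          As a concrete example of the kind of check open training data allows, here’s a minimal sketch that tallies a demographic attribute from a dataset’s metadata file (the CSV name and column name are hypothetical placeholders):

```python
# Minimal sketch of the kind of bias check open training data makes possible:
# tally a demographic attribute from a dataset's metadata file.
# The CSV name and column name are hypothetical placeholders.
import csv
from collections import Counter

counts = Counter()
with open("face_dataset_metadata.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["demographic_group"]] += 1

total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n / total:.1%}")  # e.g. flags a 90% skew at a glance
```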

        • Jamie@jamie.moe · ↑7 ↓1 · 1 year ago

          No amount of ML expertise will let someone know exactly how a model produced a given result. Training a model requires an enormous amount of delicate math, repeated an uncountable number of times, to end up with something useful, and it simply isn’t possible to comprehend what’s going on inside in any meaningful way beyond guesswork.

        • DigitalJacobin@lemmy.ml · ↑3 ↓1 · 1 year ago

          We already know how these models fundamentally work. Why exactly does it matter how a model produced some result? /gen

  • CaptKoala@lemmy.ml · ↑22 ↓9 · edited 1 year ago

    Couldn’t give a fuck, there’s already far too much bad blood regarding any form of AI for me.

    It’s been shoved in my face, phone and computer for some time now. The best AI is one that doesn’t exist. AGI can suck my left nut too, don’t fuckin care.

    Give me livable wages or give me death, I care not for anything else at this point.

    Edit: I care far more about this for privacy reasons than the benefits provided via the tech.

    The fact these models reached “production-ready” status so quickly is beyond concerning. I suspect the companies are hoping to harvest as much usable data as possible before being regulated into (best case) oblivion. It really no longer seems that I can learn my way out of this, as I’ve been doing since the beginning, because the technology is advancing too quickly for users, let alone regulators, to keep it in check.

  • bahmanm@lemmy.ml · ↑13 ↓3 · 1 year ago

    Something that I’ll definitely keep an eye on. Thanks for sharing!

  • Turun@feddit.de · ↑5 · edited 1 year ago

    In what ways does this differ from Stability AI, which made Stable Diffusion and also has an LLM, afaik?

    • RobotToaster@mander.xyz · ↑10 · 1 year ago

      The Stability models aren’t open source; the moralistic licence they’re released under violates the Open Source Definition.