Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.

  • 0 Posts
  • 335 Comments
Joined 9 months ago
Cake day: March 3rd, 2024



  • FaceDeer@fedia.io to Comic Strips@lemmy.world · The algorithm
    11 upvotes · 7 downvotes · 5 days ago

    As recent advances in AI have shown, humans are really quite predictable when you throw enough data and compute at the problem. At some point the algorithm will be sophisticated enough that it’ll be able to get to know you better than you know yourself, and will be able to provide you with things you had no idea were what you really wanted.

    Interesting times.



  • IMO the best feature of democracy is not that it results in better selection of who gets to lead, because it doesn’t really - the vast majority of the electorate lacks the education needed to make truly good decisions about that. The best feature is that every few years we “throw the bums out” and put a new batch of people in charge.

    I used to be kind of ambivalent about term limits; I figured it was suboptimal to be forced to get rid of a leader who’s doing well. But given the population size of most democracies, there’s really no shortage of perfectly adequate candidates to draw on. I’m starting to think that “one and done” might be an even better approach, at least for the highest offices: make it so there’s no motivation whatsoever to cling to power. Do the same with congressmen and senators, perhaps. Let them prove their capabilities with a political career in local politics, where it matters less if someone ends up with some kind of corrupt fiefdom, because the higher levels of government can keep them in check.






  • My point is that if we turn up our gibberish dial now, then at least our LLMs will be learning the wrong thing and we have some control.

    We’d be covering ourselves in poop to prevent people from sitting next to us on the train. Sure, people will avoid sitting next to us, but in the meantime we’ll be covered in poop.

    And then other people will learn the trick, cover themselves in poop too, and now everyone’s poopy and the trick stops working.

    There is still a lot of understanding that we do automatically that an LLM will never do.

    Are you willing to bet the convenience of comprehensible online discourse on that? “Automatically understanding stuff” is basically the one job of LLMs.

    LLMs model language, and coming up with some kind of “gibberish” filter is simply inventing a new language. If there’s semantic meaning in it the LLMs will figure it out just like any other language, and if there isn’t semantic meaning then we’ve lost the ability to communicate entirely. I see no upside.


  • I’m not talking about a summarizer, I’m talking about a classifier. It just needs to identify which parts of the page are advertising and which are not.

    The point of such a tool is that it would read the web page in exactly the same way that a human would, so trickery like pre-rendered images of text or funky Unicode wouldn’t really change anything. If a human can read it, then so can the AI.
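    To make the classifier idea concrete, here is a minimal sketch of an “ad vs. content” text classifier — a tiny multinomial Naive Bayes built from Python’s standard library. The labeled examples and function names are my own invention for illustration; a real tool would need a much larger corpus and would likely sit on top of a proper language model rather than bag-of-words counts.

    ```python
    # Toy "ad vs. content" classifier: multinomial Naive Bayes from scratch.
    # The tiny labeled set below is purely illustrative.
    import math
    from collections import Counter, defaultdict

    def tokenize(text):
        return [w.strip(".,!?:;\"'()").lower() for w in text.split()]

    def train(examples):
        """examples: list of (text, label) pairs. Returns (doc counts, word counts)."""
        docs = defaultdict(int)
        words = defaultdict(Counter)
        for text, label in examples:
            docs[label] += 1
            words[label].update(tokenize(text))
        return docs, words

    def classify(model, text):
        docs, words = model
        total_docs = sum(docs.values())
        vocab = {w for counts in words.values() for w in counts}
        best, best_score = None, -math.inf
        for label in docs:
            # log prior + sum of log likelihoods, with add-one smoothing
            score = math.log(docs[label] / total_docs)
            denom = sum(words[label].values()) + len(vocab)
            for w in tokenize(text):
                score += math.log((words[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

    examples = [
        ("Buy now! 50% off all premium subscriptions, limited time only!", "ad"),
        ("Click here to claim your free trial today", "ad"),
        ("Sponsored: one weird trick to save on insurance", "ad"),
        ("The committee voted to approve the new zoning regulations", "content"),
        ("Researchers found the enzyme binds to the receptor site", "content"),
        ("The bridge closure is expected to last through the weekend", "content"),
    ]
    model = train(examples)
    print(classify(model, "Limited time offer: click to claim your free discount!"))  # → ad
    ```

    The same shape scales up: swap the word counts for embeddings from a model that has actually “read” rendered pages, and the trickery-resistance described above follows.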


  • Well, the “at least for now” part is my point - if people start using “gibberish” to communicate or to hide their communication, that provides training material for LLMs to let them figure out how to use it too.

    LLMs learn how to communicate based on existing examples of communication. As long as humans are communicating with each other somehow then LLMs will be able to train how to do that too. They have the same communication capabilities that we do at this point, so there’s not really any way we can make a secret clubhouse that they can’t figure out how to infiltrate.

    Personally, I think there are two main routes we can go to deal with this. Either we simply accept that there’s no way to be 100% sure we’re talking to a human any more and evaluate the value of our conversation based on the content of the words spoken rather than the composition of the entity generating them, or we come up with some kind of “proof of personhood” system to allow people to label the text they write as coming from them.

    The latter is extremely hard to do, of course, both from a technical and cultural perspective. And such a system would likely still allow someone’s “person token” to be sneakily used by AI, either by voluntarily delegating it (I could very well be retyping all of this out of a ChatGPT window) or through hackery.
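    As a rough illustration of the technical half of that “person token” idea, here is a sketch that attaches a verifiable token to a post. It uses HMAC-SHA256 from Python’s standard library as a stand-in; a real system would use public-key signatures so verifiers never hold the secret, and the genuinely hard problem — binding a key to a unique human in the first place — isn’t addressed here at all. All names and the key are invented for illustration.

    ```python
    # Sketch: attach a "person token" to a post and verify it later.
    # HMAC with a shared secret stands in for a real signature scheme.
    import hashlib
    import hmac

    SECRET = b"issued-to-one-verified-human"  # hypothetical per-person key

    def sign_post(text: str) -> str:
        """Produce a token tying this exact text to the key holder."""
        return hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()

    def verify_post(text: str, token: str) -> bool:
        """Check the token; any edit to the text invalidates it."""
        return hmac.compare_digest(sign_post(text), token)

    post = "I promise a human typed this."
    token = sign_post(post)

    print(verify_post(post, token))                # True
    print(verify_post(post + " (edited)", token))  # False
    ```

    Note that nothing here stops the key holder from signing whatever an AI hands them — which is exactly the delegation loophole described above.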

    So I’m inclined toward the former. If I’m chatting with someone and I’m having a good time doing it, and then later I find out it was a bot, why should that change how much fun I had?


  • I don’t see how that would be practical. People who aren’t “in on the joke”, as it were, will call out the gibberish and downvote it. If enough people are “in on the joke” then the whole forum becomes useless and some other forum will be created to fill the role of the original. The AI will train off of that one.

    Basically, if you don’t want an AI training on your content, then don’t post your content in public where an AI will see it. The Fediverse is the last place you should be posting since its very nature is about openly broadcasting your content to whoever wants to see it.