Yes, but recent advances have really rubbed it in our faces in ways that are a lot harder to deny. Humans haven’t become fundamentally more or less predictable over time, but recent advances have shown how predictable we are.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
As recent advances in AI have shown, humans are really quite predictable when you throw enough data and compute at the problem. At some point the algorithm will be sophisticated enough that it’ll be able to get to know you better than you know yourself, and will be able to provide you with things you had no idea were what you really wanted.
Interesting times.
Eh, not necessarily. Hollywood hates piracy and Trump hates Hollywood, it might actually be as simple as that.
IMO the best feature of democracy is not that it results in better selection of who gets to lead, because it doesn’t really - the vast majority of the electorate is not educated in the sorts of things they’d need to be educated in to make truly good decisions about this. The best feature is that every few years we “throw the bums out” and put a new batch of people in charge.
I used to be kind of ambivalent about term limits; it seemed suboptimal to be forced, at some point, to get rid of a leader who’s doing well. But with the size of the population of most democracies there’s really no constraint on the pool of perfectly adequate candidates to draw on. I’m starting to think that “one and done” might be an even better approach, at least for the highest levels. Make it so that there’s no motivation whatsoever to cling to power. Do the same with congressmen and senators, perhaps. Let them prove their capabilities with a political career in local politics, where it’s less important if someone ends up with some kind of corrupt fiefdom because the higher levels of government can keep them in check.
Those jobs are also being replaced by AI. Modern AIs are trained on synthetic data, which is data that was generated from source material specifically for training purposes by other AIs. AIs reformat, rewrite, and vet the source material more reliably and efficiently than humans.
AI models don’t actually contain the text they were trained on, except in very rare circumstances when they’ve been overfit on a particular text. (This is considered a training error, and much work has been put into preventing it; it usually happens when many identical copies of the same data appear in the training set.) An AI model is far too small for that; there’s no way the data could be compressed that much.
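For a sense of scale, here’s a rough back-of-envelope check. The figures are illustrative assumptions (roughly in line with publicly discussed model sizes), not numbers from any particular model:

```python
# Illustrative assumption: a model with ~8 billion parameters stored at
# 2 bytes each, trained on ~15 trillion tokens at ~4 bytes of text per token.
model_bytes = 8e9 * 2        # ~16 GB of weights
training_bytes = 15e12 * 4   # ~60 TB of training text

# Bytes of training text per byte of model weights.
ratio = training_bytes / model_bytes
print(ratio)  # 3750.0
```

Even the best lossless text compressors manage maybe 5–10x, nowhere near the thousands-to-one ratio the weights would need to literally store the corpus.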
Sadly, those will also be replaced by machines over time, and there we really will lose something.
Not all of the things we lose will be sad, though. An AI researcher has the potential to be more thorough and less biased when it comes to digging up and interpreting resources.
If it’s not communicating anything, what’s the point?
My point is that if we turn up our gibberish dial now, then at least our LLMs will be learning the wrong thing and we have some control.
We’d be covering ourselves in poop to prevent people from sitting next to us on the train. Sure, people will avoid sitting next to us, but in the meantime we’ll be covered in poop.
And then other people will learn the trick, cover themselves in poop too, and now everyone’s poopy and the trick stops working.
There is still a lot of understanding that we do automatically that an LLM will never do.
Are you willing to bet the convenience of comprehensible online discourse on that? “Automatically understanding stuff” is basically the one job of LLMs.
LLMs model language, and coming up with some kind of “gibberish” filter is simply inventing a new language. If there’s semantic meaning in it the LLMs will figure it out just like any other language, and if there isn’t semantic meaning then we’ve lost the ability to communicate entirely. I see no upside.
I’m not talking about a summarizer, I’m talking about a classifier. It just needs to identify which parts of the page are advertising and which are not.
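As a toy sketch of what I mean (a real tool would use a trained model, not a keyword list; the hints below are made-up examples):

```python
# Toy ad/content classifier: label each block of a page as "ad" or "content".
# AD_HINTS is a hypothetical stand-in for what a trained model would learn.
AD_HINTS = {"sponsored", "buy now", "limited offer", "advertisement"}

def classify_block(text: str) -> str:
    lowered = text.lower()
    return "ad" if any(hint in lowered for hint in AD_HINTS) else "content"

blocks = [
    "Sponsored: Buy now and save 50%!",
    "The committee voted 5-2 to approve the measure.",
]
labels = [classify_block(b) for b in blocks]
print(labels)  # ['ad', 'content']
```

The point is the shape of the task: no summarizing, no rewriting, just a label per block so the ads can be stripped out.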
The point of such a tool is that it would read the web page in exactly the same way that a human would, so using trickery like pre-rendered images of text or funky unicode wouldn’t really change anything. If a human can read it then so can the AI.
Well, the “at least for now” part is my point - if people start using “gibberish” to communicate or to hide their communication, that provides training material for LLMs to let them figure out how to use it too.
LLMs learn how to communicate based on existing examples of communication. As long as humans are communicating with each other somehow then LLMs will be able to train how to do that too. They have the same communication capabilities that we do at this point, so there’s not really any way we can make a secret clubhouse that they can’t figure out how to infiltrate.
Personally, I think there are two main routes we can go to deal with this. Either we can simply accept that there’s no way to be 100% sure we’re talking to a human any more and evaluate the value of our conversation based on the content of the words spoken rather than the composition of the entity generating them, or we can come up with some kind of “proof of personhood” system to allow people to label the text they write as coming from them.
The latter is extremely hard to do, of course, both from a technical and cultural perspective. And such a system would likely still allow someone’s “person token” to be sneakily used by AI, either by voluntarily delegating it (I could very well be retyping all of this out of a ChatGPT window) or through hackery.
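To illustrate why delegation defeats the scheme, here’s a minimal toy version of a “person token” using a shared secret (purely illustrative; a real system would need public-key signatures and identity infrastructure, and the secret name below is made up):

```python
# Toy "proof of personhood" tag: sign text with a per-person secret.
import hashlib
import hmac

SECRET = b"my-person-token"  # hypothetical per-person secret

def tag(text: str) -> str:
    # Produce an authentication tag over the text.
    return hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()

def verify(text: str, signature: str) -> bool:
    # Constant-time comparison against a freshly computed tag.
    return hmac.compare_digest(tag(text), signature)

msg = "I wrote this myself."
sig = tag(msg)
print(verify(msg, sig))            # True
print(verify("Edited text", sig))  # False
```

Note what it actually proves: only that *whoever holds the secret* signed the text, not that a human composed it. Hand the token to a bot (or retype a chatbot’s output yourself) and the signature is just as valid.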
So I’m inclined toward the former. If I’m chatting with someone and I’m having a good time doing it, and then later I find out it was a bot, why should that change how much fun I had?
I don’t see how that would be practical. People who aren’t “in on the joke”, as it were, will call out the gibberish and downvote it. If enough people are “in on the joke” then the whole forum becomes useless and some other forum will be created to fill the role of the original. The AI will train off of that one.
Basically, if you don’t want an AI training on your content, then don’t post your content in public where an AI will see it. The Fediverse is the last place you should be posting since its very nature is about openly broadcasting your content to whoever wants to see it.
Adblockers aren’t made by “corporate overlords.” This wouldn’t be either.
Someday soon my “adblocker” might be a personal AI that reads the spam-ridden website on a virtual display in memory, identifies the actual content while pretending to look at whatever ads the site demands, and then passes the information I’m actually looking for along to me. Good luck captchaing that.
You realize that this is only going to train LLMs to recognize “gibberish”?
It’s more impressive when you use inpainting to preserve the beak, eye, and feet from the original source image.
It is better to be loved than feared.
Everybody wants money, that’s why they call it “money.”
You won’t be taken away for “study”, you’ll be taken away for pension fraud. Probably much earlier than 150.
Why would participating in studies be bad, though? Major pharmaceutical companies would pay you an absolute fortune in exchange for participation and you could advance medical science tremendously. You’d be a hero and get incredibly rich in the process.
Yeah, I like a light-hearted approach to life but that one particular “joke” should be shot on sight. I’m convinced it plays an actual role in why we haven’t seen much serious discussion of sending a probe there.