deleted by creator
deleted by creator
deleted by creator
deleted by creator
We need to pour funds into the education system and shift more focus onto critical thinking, logical fallacies, and self-esteem / self-image, as these play a large role in calling out injustices and building an accurate worldview.
Nope. Education is our own responsibility. Leave it to the state or to organisations and get brain-raped. Teach yourself, teach your kids, teach your friends, etc. For the most part we need to purify our own knowledge and learn to learn on our own.
You are right about critical thinking and such.
If you argue that any attempt to resolve an economic dispute ("that apple is mine!") goes through government, then yes, governments will exist as long as we do.
I’ll stop fighting when Meta no longer exists.
“This password would take 1 century to crack!”
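Estimates like that come from simple arithmetic: keyspace size divided by guess rate. A minimal sketch, assuming a round figure of 10 billion guesses per second for an offline attack (that rate is an assumption, not a measured benchmark):

```python
# Worst-case brute-force time: keyspace / guess rate.
# The default guess rate is an assumed figure for an offline
# cracking rig, not a benchmark of any real hardware.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def crack_time_years(charset_size: int, length: int,
                     guesses_per_second: float = 1e10) -> float:
    """Years to exhaust every password of this charset and length."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / SECONDS_PER_YEAR

# 12 characters drawn from ~95 printable ASCII characters
print(f"{crack_time_years(95, 12):,.0f} years")
# 8 lowercase letters, same rig
print(f"{crack_time_years(26, 8):.6f} years")
```

The punchline is the contrast: the 12-character password holds out for over a million years under these assumptions, while the 8-letter lowercase one falls in seconds.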
You listed a few interesting things but seemed to miss an important one.
I might be a bit wrong in what I’m about to say, but the basics are right. Meta released the source code of a ChatGPT-like LLM, and its weights were leaked. The model is named LLaMA.
People have been messing around with LLaMA-inspired LLMs on their personal computers, thanks to Meta, for months now.
Bad-actor LLM bots are now a hobbyist-level task. The fediverse is showing signs of not significantly caring or trying. IMO, Lemmy instances aren’t ready for this.
https://ai.meta.com/blog/large-language-model-llama-meta-ai/
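Part of why hobbyist use took off is that quantisation shrinks the weights enough to fit in ordinary RAM. A back-of-the-envelope sketch (the parameter count is rounded and the bytes-per-weight figures are the common quantisation levels, so treat these as ballparks, not exact file sizes):

```python
# Approximate RAM needed to hold just the weights of a LLaMA-class
# model at different quantisation levels. Real model files add
# overhead, so these are lower-bound ballparks.

BYTES_PER_WEIGHT = {
    "fp16": 2.0,   # full half-precision weights
    "int8": 1.0,   # 8-bit quantisation
    "4-bit": 0.5,  # 4-bit quantisation, the hobbyist favourite
}

def weight_gb(params_billions: float, quant: str) -> float:
    """Decimal gigabytes needed for the weights alone."""
    return params_billions * 1e9 * BYTES_PER_WEIGHT[quant] / 1e9

for quant in BYTES_PER_WEIGHT:
    print(f"LLaMA-7B @ {quant}: ~{weight_gb(7, quant):.1f} GB")
```

At 4-bit, a 7B model needs only around 3.5 GB for its weights, which is why it runs on an ordinary laptop while the fp16 original wants a workstation.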
So, where does that leave us? There’s always been unreliable knowledge from people.
I think we need to recognise that our knowledge is sketchy, stolen, or faked, and from there start to rebuild in our own way. It’s OK to accept that we don’t really know if we landed on the moon, or if E = mc². I feel both of those are true btw, just sayin’.
I’ve found myself feeling better when I let go and switch to an “I feel that…” or “seems that…” style. My certainty is now mostly reserved for things I’ve intellectually bled for, like my epistemological understanding that truth comes from the best attempt at resolving conflicting knowledge.
I’ve investigated conspiracies and scams, magic tricks, con artists, “truths” and statesmanship. In the end I realised that I had been faking that I knew, until I put in the hard work to investigate; after confirming some scary things, I stopped lying to myself and assuming I knew most of it. I might talk big sometimes, but I’ve got some heavy doubts that I’d mention if they were easier to type out.
Here’s a starting point to consider: have you actually checked the words you type in a dictionary or an etymological dictionary? I found out I was massively assuming and guessing, and that I was often totally wrong. It got to the point where I was checking definitions every day, feeling stupid and enlightened at each step.
Now compare that to an LLM that feigns confidence and sounds coherent. Once I could see through people’s deceptions, that they just pretended to know, I realised that an LLM is functionally the same.
Maybe we need to relearn tricks from the old irl days, even if that loses us some of what we could gain from globalised knowledge and friendship. Perhaps we can find new ways to apply these to our internet communities. I don’t think I’m saying anything new here, but I guess fostering a culture of thinking about truth and trust is good: maybe I’m helping that.
Yep! The geniuses who design my processor know the answer, not me. I can let go and hold a vague, probably wrong idea of how it really works, but I no longer need to pretend to know what I don’t. Our silly chains of trust are damning humanity. We assume that the products we use don’t have massive negative repercussions, and now look at it all. We are destroying forests, poisoning rivers, polluting our bodies with PFOAs and microplastics; we are depressed and fat, lonely and anxious. We assume this is good because it’s progress. We FEEL that this isn’t great, but trust the experts.
Almost as an aside (so I don’t ramble twice as long like my crashed-Firefox answer!): the best philosophical one-liner I’ve found for first-principling trust is: does this person show love? (Kindness, compassion, selflessness.) To me, and/or to others. That imparts some assumed value to their worldview and life understanding. It doesn’t make them an expert on any topic, but it makes a foundation.
:D I once wrote the first voluntary essay of my adult life entitled “us vs them” and it boiled down to “If we don’t know which side is right, the easiest way to tell who is wrong, is to look at who is using dirty tricks and dishonesty”. What you wrote was the other half :) I would consider this the good/bad faith explanation. You described good faith, I described bad faith. Our instinct is to rely on good people and avoid bad people. It’s not perfect but our instinct helps us a lot with it.
Yes. If you mean, is their comment more notable than most others in a public debate, then no. But if you’re pointing towards whether their experience, understanding and internal processes are valuable, then yes, and that’s important to me. (Though I’m not great enough to hear, consider or interact with everyone!)
I was talking about the average person’s sapience, in an absolute sense or relative to other people. I didn’t think you would read it as being about AI. Most people aren’t using their sapience like you seem to be, to question and wonder, to learn and process information. Most people are very fixed and will do as you said before: they will do whatever makes them sound right or wins favour. They are faking their skill. It takes a ton of effort to really do new things, and most people just don’t bother very much.
As for the GPT thing: it should be pretty bad and easy to spot right now. I’m still just worried about the secretly optimised ones and totally secret techs that are similar. In a real conversation you can detect a person’s sapience, but not when people are talking in bad faith.
ChatGPT is optimised on high-grade texts, but a different one could be trained in its final stages to mimic incoherent bad-faith arguments. For now, the higher-level arguments should give ChatGPT away, as it’d be heavily scrutinised.
Thanks for the reply btw, I know how annoying it is to lose a huge response lol. Don’t upvote while proofreading!!!
The real question to ask: do you really believe that the average person’s sapience is really that noteworthy?
Part of the thing with ChatGPT is that it’s particularly good at sounding like it knows what it’s saying, while spewing linguistically coherent nonsense.
That’s why this is so scary! The average person on the internet is faking it the same way ChatGPT-based bots would! haha… :(
Your whole comment is great; you understand its passable, seemingly coherent nature. It’s only a hair less coherent than the average person arguing in bad faith, and if optimised with that specific data it would be… scary.
Here is something I mentioned before on a different topic to show you the flaws of people, more so than the capabilities of bots. https://lemmy.ml/comment/1318058
The thing that bothers me most is this thought exercise: if government agencies and militaries are years ahead, and propaganda is so useful, shouldn’t there be an ultra-high chance that secret AI chatbots are already practically perfected and mass-usable by now?
We have seen such a shift towards a dead internet that these are our final chances. I think we should spend more effort on finding tricks to ID bots and do something about it, else take to the streets.
I desperately want to see bans for excessive bad faith and unnecessarily abusive accounts. I’m getting stressed out with the amount of abuse and garbage floating around by a hyper-vocal minority or bots/bad actors.
Feels like each day Lemmy becomes more like r/the_donald.
At some point it all stops mattering. You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.
I made an embarrassing attempt to identify a bot and learned a fair bit.
There is significant overlap between the smartest bots, and the dumbest humans.
A human can:
It’s too unethical to test, so I feel the best course of action is to rely on good/bad-faith tests and the logic of the argument.
Turing tests are very obsolete. The real question to ask: do you really believe that the average person’s sapience is really that noteworthy?
A well-made LLM can exceed a dumb person pretty easily. It can also be more enjoyable to talk with, or more loving and supportive.
Of course, there are things that current LLMs can’t do well that we could design tests around. Long conversations also have a higher chance of exposing a failure of the AI. Secret AIs and future AIs might be harder, of course.
I believe in the dead internet theory’s spirit. Strap in, meat-peoples, the ride’s gonna get bumpy.
… :( I upvoted you and lost a huge reply… I uh… sad.
I’ll summarise.
We have a lot more power than we give ourselves credit for, or realise. People need to sit down, find their power and then apply it. They rigged the system, but it’s just a minor thing once you can see through it. It’s mostly about increasing the percentage of people trying, and increasing their effectiveness while trying. This system is on the verge of collapse, and even tiny inputs at this stage are heavily in our favour.
We are lied to about how to solve the problem. They create problems and frame them as the root cause; we then waste our lives chimping out trying to deal with symptoms.
I think it’s really up to ourselves to figure out our own code of ethics and how to apply it. It takes time and effort to slowly refine it. We will be forced to violate our standards more than we’d like, but all that matters is that we give it an honest go. I don’t expect people to hurt themselves trying too hard, just as long as we try a bit.
As for worker co-ops or good companies, the real issue is that not enough people are aware or ready to band together and try to start one. I have no real idea how to do it properly and am not in a position to really try. If you could genie-wish for interested people like you to be obviously visible, you might be surprised how many near you would be willing to help.
Sometimes just finding locals to vent to can start something big or at least help you feel better about it.
“Shooting up lemmings” now has two meanings.
Distractions to make the govt look like they care and to shift blame onto the average person.
The parasite class
doesn’t really do much though. We hold the whips that enslave ourselves.
We would do so well if just 25% of the population were to:
I misread your comment and can’t delete my reply… haha… oops
bahahahahaha