So we’re going to set the plagiarized information synthesis software loose on a set of theories that’s built to be entirely self-consistent and also conveniently unfalsifiable.
Garbage in, garbage out.
Chill, let the mad scientists have their fun.
I mean, I’m a scientist. String theory being considered science at all is what a lot of us are grumpy about.
They’re not using LLMs. AI is a very broad field.
This is why using the word “AI” for these models is dumb. People conflate them all with one another, and none of them are true AI.
I highly doubt they are turning an LLM or image generator loose on scientific theory. It would make absolutely no sense.
The article doesn’t say anything about LLMs or image generation; where did you get that from? They used MLP neural networks to calculate the Calabi-Yau metrics.
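For anyone curious what that kind of thing looks like in practice, here’s a minimal sketch, assuming PyTorch. The class name, layer sizes, and training objective described in the comments are all illustrative assumptions on my part, not the paper’s actual setup. The point is just that an MLP here is a plain feed-forward network mapping points on the manifold to metric components, nothing like an LLM:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: an MLP that maps local coordinates on a
# 6-real-dimensional manifold to the 21 independent components of a
# symmetric 6x6 metric (6*7/2 = 21). All names and shapes are
# illustrative, not the actual architecture from the paper.
class MetricMLP(nn.Module):
    def __init__(self, coord_dim=6, hidden=128, n_components=21):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, n_components),
        )

    def forward(self, x):
        return self.net(x)

model = MetricMLP()
points = torch.rand(512, 6)     # random sample points on the manifold
components = model(points)      # predicted metric components, shape (512, 21)
# Training would then minimize how badly the predicted metric violates
# the Ricci-flatness condition at the sampled points.
```

It’s a function approximator being fit to a geometry problem. There’s no training corpus to plagiarize from.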
It was more a complaint about the many different uses of “AI” and the confusion caused by conflating them.
There’s a user in this thread, for instance, who is talking about plagiarism as a problem in this use case, because they assumed the AI here was either an LLM or an image generator.
Except that there are many different kinds of intelligence, and all of these are artificial, so yeah, “AI” does cover a lot of different things. From your earlier use of the term “true AI”, I assume you mean the type popularized by science fiction, which IRL is referred to as artificial general intelligence. It has its own name because it is one of many different types of AI. The confusion for most people and the media comes from, again, the tendency in science fiction to use the shorter term “AI” to refer to AGI, likely for brevity. Much in the same way that Sherlock Holmes stories confused people by referring to inductive reasoning as deductive reasoning.
AI is an extraordinarily broad field. It even includes painfully mundane things like pathfinding and decision trees.
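Plain breadth-first search is a textbook example: it’s been filed under “AI” since the field’s earliest days and ships in countless games as pathfinding. A toy sketch (the grid and names are made up for illustration):

```python
from collections import deque

# Toy grid pathfinding via breadth-first search: the kind of mundane,
# decades-old "AI" technique used in countless games.
def bfs_path(grid, start, goal):
    """Return the shortest path from start to goal, or None. '#' blocks movement."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = ["....",
        ".##.",
        "...."]
print(bfs_path(grid, (0, 0), (2, 3)))  # walks around the wall
```

Nobody would call that intelligent today, but it’s still “AI” in the textbook sense, which is exactly why the label alone tells you almost nothing.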