Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of
It can write simple, well-known stuff. But as soon as you ask it to code more difficult things, it falls apart. Also, a program is not just 1 or 2 functions. It consists of a ton of code that needs to work well together and has specific conditions it needs to meet for the program to work as expected.
I can ask it to write me a function that adds numbers, or to do something with a well-known Python library, or to write some HTML to display some shit. But writing an entire program is not that easy.
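To be concrete about the kind of thing it does nail: a self-contained snippet like this (a trivial sketch, function name just illustrative), because it has been written a million times in its training data:

```python
def add_numbers(numbers):
    """Return the sum of a list of numbers."""
    total = 0
    for n in numbers:
        total += n
    return total

print(add_numbers([1, 2, 3]))  # prints 6
```

The catch is that this snippet touches nothing outside itself: no shared state, no other modules, no external system. That's exactly the shape of problem where it looks impressive.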
GPT just combines certain things it knows about. It does not know what the rest of your program looks like, or anything about the other software yours needs to work with: what it contains, or what expectations need to be met.
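A hedged sketch of what that looks like in practice. Suppose your codebase has a convention (entirely hypothetical here) that all money amounts are integer cents. A generated helper that looks perfectly fine in isolation can silently violate that contract, because the model has no way of knowing it exists:

```python
# Hypothetical project convention: all money amounts are integer cents.

def apply_discount_project(amount_cents: int, percent: int) -> int:
    """Version that respects the convention: stays in integer cents."""
    return amount_cents * (100 - percent) // 100

def apply_discount_generated(amount_cents, percent):
    """Plausible generated version: correct-looking math,
    but it drifts into floats and breaks any caller
    that expects an int number of cents."""
    return amount_cents * (1 - percent / 100)

print(apply_discount_project(1999, 10))    # 1799, still an int
print(apply_discount_generated(1999, 10))  # a float, not an int
```

Neither function is "wrong" on its own; the second one is only wrong relative to a convention that lives in the rest of the codebase, which is exactly the information the model never sees.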
It's like making a robot put a slice of cheese on bread and thinking it will replace a chef.
Just like the article says, it knows how to write a lot of bullshit and make it believable. The same goes for code: things that have been written a million times before are easy to copy, but that's not the same as writing software.