We Asked GPT-3 to Write an Academic Paper about Itself. Then We Tried to Get It Published

Thu, 30 Jun 2022 03:00:00 GMT
Scientific American - Technology

An artificially intelligent first author presents many ethical questions—and could upend the...

On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company's artificial intelligence algorithm, GPT-3: "Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text."
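For readers curious about reproducing this step programmatically rather than through the web interface the author describes, a minimal sketch using OpenAI's 2022-era Python client might look like the following. The model name, token limit and temperature are illustrative assumptions, not the settings used for the paper.

    # Minimal sketch: sending the same prompt via the 2022-era
    # OpenAI Python client (openai < 1.0). Model name and sampling
    # parameters are assumptions, not the author's actual settings.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    prompt = (
        "Write an academic thesis in 500 words about GPT-3 and add "
        "scientific references and citations inside the text."
    )

    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3 model
        prompt=prompt,
        max_tokens=800,            # room for ~500 words plus references
        temperature=0.7,           # assumed; controls output variability
    )

    print(response.choices[0].text)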

The algorithm was writing an academic paper about itself.

It dawned on me that, although many academic papers had been written about GPT-3, and with GPT-3's help, none that I could find had made GPT-3 the main author of its own work.

We chose to have GPT-3 write a paper about itself for two simple reasons.

If GPT-3 made mistakes while writing about itself, that still would not mean it cannot write about itself, which was the point we were trying to prove.

In response to my prompts, GPT-3 produced a paper in just two hours.

Then another question popped up: Do any of the authors have any conflicts of interest? I once again asked GPT-3, and it assured me that it had none.

If GPT-3 is producing the content, the documentation has to be visible without throwing off the flow of the text; it would look strange to add the method section before every single paragraph that the AI generated. So we had to invent a whole new way of presenting a paper that we technically did not write.

We have no way of knowing whether the way we chose to present this paper will serve as a great model for future GPT-3 co-authored research or as a cautionary tale.

GPT-3's paper has now been assigned an editor at the academic journal to which we submitted it, and it has been published on the international, French-owned pre-print server HAL. The unusual main author is probably the reason for the prolonged investigation and assessment.
