How Language-Generation AIs Could Transform Science

Wed, 04 May 2022 08:00:00 GMT
Scientific American - Technology

An expert in emerging technologies warns that software designed to summarize, translate and write...

Anyone could use LLMs to make complex research comprehensible, but they would risk getting a simplified, idealized view of science that is at odds with the messy reality, and that could threaten professionalism and authority.

Isn't it a huge problem that LLMs might draw on outdated or unreliable research?

Still, these kinds of tools could narrow users' field of vision, and it might be hard to recognize when an LLM gets something wrong.

LLMs could be useful in the digital humanities, for instance to summarize what a historical text says about a particular topic.

My guess is that large scientific publishers are going to be in the best position to develop science-specific LLMs, able to crawl over the proprietary full text of their papers.

There is also some potential for LLMs built on open-access papers and the abstracts of paywalled papers.

Could LLMs be used to make realistic but fake papers?

Yes, some people will use LLMs to generate fake or near-fake papers if it is easy and they think it will help their careers.

Still, most scientists do want to be part of scientific communities, and they should be able to agree on regulations and norms for using LLMs.

How should the use of LLMs be regulated?

For the use of LLMs in science specifically, transparency is crucial.
