The first Scopus-indexed publication co-authored by ChatGPT-3

Following up on a previous post from December 22nd, titled “We envision a world where everyone, no matter their profession, can have immediate access to dozens of experts”, and mainly on a more recent post, from January 23rd, about the “omnipresent” (and almost omniscient) artificial intelligence model ChatGPT-3 and its interesting comments on The Value of Life for an Elderly Person in a Coma, I am disclosing the result of a search carried out today on the Scopus database. It reveals that, among the more than 80 million scientific publications indexed on that platform, there is already one publication, by a researcher at the University of Manchester, entitled “Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?”, in which ChatGPT-3 appears as the second author.

Regarding this unprecedented, substantial, and problematic event, I recall the three questions I posed in a previous post on August 14, 2020, when I commented on an article in The Economist which, for the first time, reported the capacity of the aforementioned artificial intelligence model to produce texts that appear to have been written by humans:

Let's imagine that a human improved a text generated by GPT-3: can we really say that the improved text has two co-authors? Or should we say that GPT-3 is the only real author, and that the human contribution was a minor one that need only be mentioned in the acknowledgments section? But taking into account intrinsic human narcissism (especially in the academic field, as per the findings of Brunell et al. (2011)), is it possible to believe that humans are capable of admitting that they made a minimal contribution to an article and that the merit belongs exclusively to GPT-3?

PS – On page 56 of The Economist, issue 4–10 February 2023, in an article about the AI race and the challengers to ChatGPT – “the fastest-growing consumer application in history” – there is a chart (no. 3) showing that Amazon and Meta “respectively produce two-thirds and four-fifths as much AI research as Stanford University… Alphabet and Microsoft churn out considerably more…”. This raises the question: in a world of rampant research misconduct, how can a corporation's research results be trusted (remember the Nikola, Theranos, and Volkswagen corporate frauds) if negative results can translate into a sharp decline in the value of its stock, or even its bankruptcy?