Following up on the previous post about the opportunities, challenges, and possibilities of ChatGPT in education, I share below an excerpt from the abstract of a recent article by Microsoft researchers on the strong capabilities of GPT-4 in solving medical challenges:
“GPT-4 is significantly better calibrated than GPT-3.5, demonstrating a much-improved ability to predict the likelihood that its answers are correct. We also explore the behavior of the model qualitatively by presenting a case study that shows the ability of GPT-4 to explain medical reasoning, personalize explanations to students, and interactively craft new counterfactual scenarios around a medical case. Implications of the findings are discussed for potential uses of GPT-4 in medical education, assessment, and clinical practice…” https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf
Far more interesting, however, is another recent article, by 17 researchers from China and Korea, which draws on more than 500 bibliographical references and highlights the potential of GPT-4 and its prospective successor GPT-5: https://arxiv.org/pdf/2303.11717.pdf

That article nevertheless has an important gap: among the thousands of words that fill its nearly 56 pages, the word “disinformation” does not appear once. This is strange, because disinformation could soon become infinite, as the head of an observatory at Stanford University recently stated on page 16 of the latest issue of The Economist: https://www.economist.com/essay/2023/04/20/how-ai-could-change-computing-culture-and-the-course-of-history