Takis Konstantopoulos, University of Liverpool, is joined by George Kesidis, Pennsylvania State University, for the first column in a series on the impact of AI on academia. They write:

In a previous Takis Tackles article ([4] in the December 2021 issue), one of us [TK] claimed that “we live in extraordinary times,” considering the issues arising from the pandemic. The pandemic is “over,” but we arguably live in even more extraordinary times, considering global political instabilities, including wars, suppression of free speech, the post-factual world of online social media, and environmental pollution causing rapid climate change and the spread of microplastics. But there is another factor that makes current times extraordinary: how readily people have surrendered something quintessentially human, the ability to think and reason, to a poorly understood emerging technology.

Of course, we’re talking about generative Artificial Intelligence (AI), a globally embraced technology which, like the Internet, originated in universities, was perfected by industry, and is now impacting universities in turn. Generative AI is being hailed as revolutionary. But what is the nature of this revolution? We will discuss some of the consequences of AI in universities, after first describing its main characteristics.

We must first note that AI is not a new idea. Recent dramatic advances have been made possible largely by the remarkable progress in the availability of large-scale distributed computing. Presently, AI is synonymous with highly parameterized Deep Neural Networks (DNNs), and DNNs with 10¹² parameters are emerging. Generative AI can roughly be described as an information processing, retrieval and synthesis system. Given a vast training dataset, where each data sample is a sequence of tokens (words, symbols, etc.), the parameters of the Large Language Model (LLM) underlying a chatbot are trained, by gradient-based optimization, to predict the next token; thus, LLMs are “autoregressively” generative [7]. Thereafter, the LLM is refined on training samples that have been manually crafted, in a process called Reinforcement Learning from Human Feedback (RLHF). The result is a genuinely remarkable ability of a chatbot like GPT-4 to respond with coherent paragraph structure and good grammar. The chatbot can be prompted to produce code in a particular programming language that performs a specified function, to explain a scientific phenomenon, or to produce a “logical” argument for a specified claim. Sometimes these responses are technically correct even when they do not appear verbatim in the training dataset. But sometimes the responses are technically faulty or outright wrong, even for surprisingly simple questions [2].
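
To make the idea concrete, here is a minimal sketch, in Python, of next-token training followed by autoregressive sampling. It is only a caricature: the toy corpus, the bigram table standing in for a trillion-parameter DNN, and all numerical choices are our own illustrative assumptions, not anyone’s actual implementation.

import numpy as np

# Toy corpus and tokenizer: each character is a "token".
text = "numbers dance on charts numbers dance"
vocab = sorted(set(text))
idx = {c: i for i, c in enumerate(vocab)}
tokens = np.array([idx[c] for c in text])
V = len(vocab)

# The entire "model": a V-by-V table of logits, where row i scores
# which token should follow token i. (A real LLM uses a DNN instead.)
logits = np.zeros((V, V))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Training: gradient descent on the next-token cross-entropy loss.
for step in range(500):
    grad = np.zeros_like(logits)
    for t in range(len(tokens) - 1):
        p = softmax(logits[tokens[t]])
        p[tokens[t + 1]] -= 1.0   # gradient of cross-entropy w.r.t. logits
        grad[tokens[t]] += p
    logits -= 0.5 * grad / (len(tokens) - 1)

# Autoregressive generation: sample a next token, append it, repeat.
rng = np.random.default_rng(0)
out = [idx["n"]]
for _ in range(30):
    out.append(rng.choice(V, p=softmax(logits[out[-1]])))
print("".join(vocab[i] for i in out))

A real chatbot replaces this table with a transformer DNN and the toy corpus with a vast fraction of the written record, and then applies RLHF on top; but the objective being optimized is the same in spirit.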

Chatbots are often said to “think” [8] or “understand,” or to be “intelligent,” even though these anthropomorphic terms are not clearly defined. Furthermore, how an AI is trained is different from how a human being learns [3], and its structure draws far more inspiration from ideas in signal processing than from the structure of a biological brain.

The usefulness of (generative) AI is not in question. Indeed, there is a lot one can do, provided one does not take AI’s “intelligence” for granted and judiciously checks, and if necessary corrects, its answers. Ask GPT-4 to compose a haiku for statistics and it may reply:

Numbers dance on charts,
Probability whispers,
Truth in data’s trance.

or…

Data’s silent tale,
In numbers, patterns unveil,
Statistical grace

That’s pretty good, and we could use it as an epigraph in a stats book chapter. Ask a generative AI to prove the Hahn–Banach theorem and it may produce a correct proof, because the proof was included in its training dataset and is embodied in its parameters, i.e., as if the proof were retrieved from storage. Ask an AI whether convergence in distribution of two sequences of random variables implies convergence in distribution of the sequence of pairs, and it will give you the correct answer (it does not, in general); moreover, it will volunteer an example that is horribly wrong. Here is an actual dialog with Prometheus:

TK: What you told me is wrong. Can’t you see that, for any ε > 0, the sequence min{nε, 1} converges to 1 because it is eventually equal to 1?

Prometheus: No, I can’t see [that], because it doesn’t. It converges to 0.

TK: You are wrong. Take ε = 1. Then min{n, 1} = 1 for all positive integers n.

Prometheus: I’m sorry, but you are wrong. When n is 100, the minimum between 100 and 1 is 100, not 1.
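
(For the record, TK’s claim is a one-line verification: for any fixed ε > 0, once n ≥ 1/ε we have nε ≥ 1, so min{nε, 1} = 1; the sequence is eventually constant and therefore converges to 1, and certainly not to 0.)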

Thus, the AI can use the word “minimum” in a grammatically correct way, but it does not “understand” its meaning.
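
(As for the original question, one standard counterexample, which the AI failed to produce, is the following: take X standard normal, Xₙ = X and Yₙ = (−1)ⁿX. Each of the two sequences converges in distribution, since −X is also standard normal, but the pairs (Xₙ, Yₙ) alternate between the joint laws of (X, X) and (X, −X), and so do not converge in distribution.)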

To try to predict the possible impact of AI on university education, we first make some remarks about the current “university industrial complex.” The past three decades have seen a dramatic expansion and industrialization of university education. In some countries, senior administrators of “non-profit” universities are paid like for-profit corporate executives. They take credit for attracting customers (students) and providing them with products (degrees) at increasingly high prices (tuition). That professors (the front-line employees) constitute the university is now an obsolete notion [6]. Modern administrators perhaps now dream of a (to them) Utopian university based on courses not taught by professors, where students get “help” with their homework from chatbots and plug numbers into formulas on exams. One can thus increase product throughput while reducing cost! Hence, universities can avoid financial crises [5] and focus on what really matters (to the trustees): pursuing expensive construction projects and revenue from student athletics.

A student, especially one trapped in a degree mill, often has no idea why an abstract concept may be important, particularly when they lack the foundation to understand it owing to an earlier failure of instruction. This quickly leads to frustration, and the conscientious professor faces a trade-off: remediate the missing background and then rush through the course syllabus, or don’t remediate, cover the syllabus, and face poor student reviews. (We have found that attempts to remediate through additional lectures fail, because many students simply don’t show up or senior administrators do not want to admit that remediation is necessary.) Either way, the result is poor educational outcomes for the average student [1], which may be contributing to the student-loan crisis. No student enters a university wishing to obtain a degree after having learned nothing. But students are vulnerable because they (and their families) typically do not really understand the nature of a degree program when they enroll. They trust the professors to guide them through the curriculum, particularly its difficult elements (e.g., the mathematical abstractions) whose value, at the time of instruction, is not evident to the student.

We think that generative AI actually gives a glimmer of hope.

Firstly, generative AI may set a desperately needed standard for university education: why hire someone who provides no more value than what can easily be obtained from a chatbot? In particular, why teach students only to plug numbers into formulas when an AI can conveniently do this? Though senior administrators may hope that generative AI will “increase throughput” (and no doubt there are plans somewhere for a degree program in “prompt engineering”), it may instead counter the drift toward Trump University and enable professors to better help students learn how to think about their chosen discipline more deeply.

Secondly, although generative AI is already very powerful and may become increasingly so, it is not infallible, and it is best thought of as a useful tool for the domain expert (whether a mathematician, programmer, or artist), just like other existing technologies. It is not clear whether generative AI will ever be able to manage the conceptual abstractions and connections needed to obtain creative ideas for complex new problems. Baruch Spinoza (1632–77), arguably one of the most important figures of the Age of Reason, generally linked happiness to the ability of a human to think freely. Generative AI could help the human domain expert to think more freely and to be more creative.


References

[1] K. Carey. “Americans Think We Have the World’s Best Colleges. We Don’t.” https://www.newamerica.org/education-policy/edcentral/americans-think-worlds-best-colleges-dont

[2] T. Chiang. “ChatGPT is a blurry JPEG of the web.” https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

[3] J. Fodor. “We’re told AI neural networks ‘learn’ the way humans do. A neuroscientist explains why that’s not the case.” https://theconversation.com/were-told-ai-neural-networks-learn-the-way-humans-do-a-neuroscientist-explains-why-thats-not-the-case-183993

[4] T. Konstantopoulos. “Fin-de-pandemic Math Education Miscellanea.” https://imstat.org/2021/11/16/fin-de-pandemic-math-education-miscellanea/

[5] M. Korn, A. Fuller, and J.S. Forsyth. “Colleges Spend Like There’s No Tomorrow. ‘These Places Are Just Devouring Money.’” https://www.wsj.com/articles/state-university-tuition-increase-spending-41a58100

[6] P. Lamal. “Review of B. Ginsberg’s The Fall of the Faculty.” https://www.jstor.org/stable/23744905?seq=3

[7] C. Newport. “What kind of mind does ChatGPT have?” https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have

[8] J. Rothman. “Why the godfather of A.I. fears what he’s built.” https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai