From the Roadmap for Action report

Strategic Choices: Algorithms vs. Humans

It is only a matter of time before artificial intelligence further pervades campus decision-making in ways that impact equity, privacy, and allocation of resources.

The debate over using artificial intelligence as a substitute for human analysis is already playing out in many parts of society, including the corporate sector. As an example, Forbes recently developed a list of 15 business applications of artificial intelligence. Out of these 15 applications, at least five apply to academic institutions as well. Should institutions deploy artificial intelligence tools in the admission process? In recruiting staff? In reviewing and grading non-quantitative exam materials? In identifying potential malicious or unsafe student behavior before it occurs? Should students be able to use accelerated reading software? Should software provide a first line of student support, substituting for teaching assistants? The answer to each of these questions is complex and will vary across and within types of institutions.

One of the early use cases for algorithms on campus was plagiarism checking software such as Turnitin, which has recently sparked debates over accuracy, accountability, and bias. Books such as Weapons of Math Destruction and Algorithms of Oppression have highlighted how algorithms can perpetuate inequities through built-in biases and negative feedback loops. As such, there are significant ethical and legal implications of using algorithms to drive decision-making.

However, there also may be implications of not using them. Algorithms can process information more rapidly than humans and provide tailored services to students (such as adaptive learning) that would be cost prohibitive to deliver through faculty or staff. Also, while algorithms will inevitably contain biases that are built in from the start, machines can analyze datasets more consistently and efficiently than individual humans ever could.

Either way, it is only a matter of time before artificial intelligence further pervades campus decision-making in ways that impact equity, privacy, and allocation of resources. Academic senates, institutional governance boards, and other decision-making bodies should begin a dialogue over the pros and cons as soon as possible. Engaging in this debate in advance will help prepare institutions to be deliberate and strategic about deploying artificial intelligence in ways that are consistent with the institution’s culture, values, and risk tolerance.

About the authors

Claudio Aspesi

A respected market analyst with over a decade of experience covering the academic publishing market and leadership roles at Sanford C. Bernstein and McKinsey.

Scholarly Publishing and Academic Resources Coalition

SPARC is a non-profit advocacy organization that supports systems for research and education that are open by default and equitable by design.