Six ways universities could embrace AI – and six ways to get it wrong

Afia Tasneem and Abhilash Panthagani distil their insights from discussing AI with university leaders

Afia Tasneem is Senior Director, Strategic Research at EAB


Abhilash Panthagani is Associate Director at EAB

Over the last six months, generative AI tools like ChatGPT have swiftly moved from innovative novelties to staples of many of our daily lives.

However, ChatGPT’s popularity—with 1.5 billion monthly visits to the tool’s site—is only the tip of the iceberg. AI technology, with its capacity to interrogate huge datasets at lightning speed to predict likely future patterns and make suggestions, promises to revolutionise our lives and even entire industries.

In recent months, we have been speaking with dozens of senior higher education leaders about the emergence of AI tools in higher education. While some express apprehension, most are eager to explore the opportunities and applications of AI.

We’ve seen some great examples of how universities are using AI to democratise and personalise learning and teaching, as well as to drastically increase operational efficiency. We’ve also learned a good deal about what not to do when responding to the new availability of AI tools.

Six potential ways to leverage AI

Infuse AI into every discipline’s curriculum

To prepare its students for the workforce of the future, the University of Florida provides every student, regardless of programme, with an opportunity to learn about AI. In 2020, the university began offering an introductory course teaching basic AI literacy and concepts to all students. Since then, colleges and departments have tailored AI courses to their specific needs and disciplines. For example, the business school now requires its students to take an AI and business analytics course as part of their degree requirements.

Holistically incorporating AI into teaching and learning will likely give students a head start in the modern workforce. The University of Florida currently has over 7,000 students enrolled in AI modules on campus (see the requirements for UF’s AI Fundamentals and Applications Certificate).

Provide students with personalised AI tutors

Studnt, an AI-powered tutor that offers personalised one-on-one tutoring and real-time guidance to students, is now available at a growing number of Canadian universities, including McGill University, Concordia University, Queen’s University, and the University of Ottawa. Other companies like Khan Academy are also testing AI-powered tutors like Khanmigo, which will assist students in endeavours ranging from writing papers to coding applications. The Virtual Operative Assistant at the Neurosurgical Simulation and Artificial Intelligence Learning Centre at the Montreal Neurological Institute-Hospital even teaches surgical technique and provides personalised feedback to students while they conduct procedures.

AI tutors could become an essential pillar of every student’s academic journey, from coaching them through difficult topics to providing real-time feedback as they complete assignments. The more a student works with an AI tutor, the better it can tailor its guidance to that student’s needs, explaining complex concepts in relatable terms and offering encouragement when the student loses interest in a topic.

Support educators with AI teaching aids

Learning platform Coursera recently announced plans to release an AI-assisted course-building feature to help instructors structure lesson plans and generate content based on readings and assignments they upload. Another AI feature soon to launch, Quick Grader, promises to streamline the assessment process with reusable comments. These AI teaching aids can serve as brainstorming partners and partially automate repetitive tasks, helping to reduce the assessment burden on teaching staff. Academics can then spend more time innovating in the classroom and finding productive ways to engage students.

Empower professional services teams with AI-enabled insights from enterprise data

In an experiment to help its financial advisors access organisational information more efficiently, Morgan Stanley trained OpenAI’s GPT-4 on 100,000 of its banking and research documents, enabling advisors to quickly analyse large volumes of data and freeing up their time to focus on personalised customer service. Trained AI chatbots can similarly transform knowledge management in higher education, improving efficiency and decision-making across many functions.

For example, a trained AI chatbot could help legal services sift through large volumes of documents or support accounts payable by quickly filtering through past invoices. However, to ensure data privacy and security, institutions should partner with vendors or other institutions to develop their own instances of AI chatbots with built-in guardrails. Morgan Stanley has followed this approach, creating proprietary AI chatbots for internal use.
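
As a rough illustration of what “built-in guardrails” might look like in practice, here is a minimal sketch of an internal chatbot that answers only from a small set of approved institutional documents and otherwise declines. The document names and contents are hypothetical, and the sketch assumes the OpenAI Python SDK (v1 interface) and a generic chat model; a production system would use proper embedding-based retrieval and access controls.

```python
# Minimal sketch: an internal Q&A chatbot with a simple retrieval "guardrail".
# The model is instructed to answer only from an approved document and to
# say it does not know otherwise. All documents below are hypothetical.
from openai import OpenAI

APPROVED_DOCS = {
    "invoice_policy": "Accounts payable processes approved invoices within 30 days of receipt.",
    "records_retention": "Legal services retains signed contracts for seven years after expiry.",
}

def retrieve(query: str) -> str:
    """Pick the approved document with the most words in common with the query.
    (A real deployment would use embeddings; the guardrail principle is the same.)"""
    words = set(query.lower().split())
    return max(APPROVED_DOCS.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

def answer(query: str) -> str:
    context = retrieve(query)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is licensed
        messages=[
            {"role": "system",
             "content": "Answer ONLY using the document provided. If the answer "
                        "is not in the document, say you do not know."},
            {"role": "user", "content": f"Document:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How quickly are invoices paid?"))
```

The guardrail here is architectural rather than magical: the model never sees anything outside the approved corpus, so the worst it can do is misread a vetted document rather than invent policy from the open web.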

Boost recruitment and fundraising with AI-generated personalised content

In an effort to hyper-personalise content, ASIS International, an association of security management professionals, implemented rasa.io’s AI-driven smart newsletter system in April. The personalised newsletters boosted engagement and increased ad revenue by 63 per cent.

As institutions compete for students and funds, recruitment and fundraising teams can similarly leverage AI tools to multiply their reach and output. AI tools can help them craft engaging, personalised content at scale, including social media posts, email campaigns, and fundraising appeals. For example, an AI trained on an institution’s existing personalised emails could draft messages to prospective students and donors tailored to their backgrounds and interests.
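
By way of illustration only, the sketch below drafts a personalised recruitment email from a hypothetical prospect profile, using one past email as a style example. The profile fields, example text, and model name are all assumptions rather than anything the institutions above actually use, and any real deployment would keep staff review in the loop before a message is sent.

```python
# Illustrative sketch: drafting a personalised recruitment email from a
# prospect profile plus a past email used as a style example.
# All names, fields, and the model are hypothetical assumptions.
from openai import OpenAI

EXAMPLE_EMAIL = (
    "Dear Priya, thank you for visiting our open day. Given your interest in "
    "robotics, you might like to know about our new mechatronics lab..."
)

prospect = {
    "name": "Jordan",
    "interests": "marine biology and scuba diving",
    "stage": "attended a campus tour last month",
}

client = OpenAI()  # reads OPENAI_API_KEY from the environment
draft = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You draft short, warm university recruitment emails in the "
                    "style of the example provided. Never invent facts about courses."},
        {"role": "user",
         "content": f"Style example:\n{EXAMPLE_EMAIL}\n\n"
                    f"Prospect: {prospect['name']}, interested in "
                    f"{prospect['interests']}, who {prospect['stage']}. "
                    "Draft a follow-up email inviting them to apply."},
    ],
)
print(draft.choices[0].message.content)  # a staff member reviews before sending
```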

Expand the use of helpdesk chatbots to more complex tasks

At an emergency repair and improvement call centre in Chattanooga, Tennessee, a conversational AI chatbot named Charlie has set a high bar by independently resolving 15 per cent of all claims and helping agents answer customer inquiries in real time. During one storm, Charlie independently helped 10,000 customers schedule repairs and book claims while providing real-time prompts to agents as they handled their own claims and repairs.

Today’s chatbots can handle far more than the basic inquiries and canned answers of previous generations. The next generation of AI chatbots will be able to provide improved service in even more complex environments, both independently and in support of live agents, thanks to their advanced deep-learning capabilities, their capacity to generate text themselves, and their ability to sound like a real person.

Six missteps in responding to the rise of AI

Despite AI’s seemingly unlimited promise, as with any revolutionary tool, missteps abound. Most of the missteps below, we suspect, stem from anxiety about the new technology, along with some wishful thinking. University leaders are absolutely right to be critical and thoughtful about how the technology could reshape long-established practices, but those who delay or dial down their response risk being left behind.

Dismissing AI as just the latest hype

It’s tempting to view AI as a passing trend, akin to Massive Open Online Courses (MOOCs) in the 2010s. However, AI is unlikely to follow a similar trajectory, especially given its extensive reach and diverse applications. Much like the internet and the personal computing revolution, AI’s influence on higher education is poised to be disruptive and enduring.

Several factors position AI as a particularly potent technological innovation: unprecedented user growth (a recent Educause quick poll found that 67 per cent of HE staff are already using ChatGPT for their work); broad application across nearly every part of campus operations and services; and integration into popular applications such as Microsoft Office, YouTube, and Spotify. The ubiquity of AI is making it impossible to ignore and is redefining expectations for customer service, business workflows, and personal convenience.

Thinking we can distinguish between AI-generated and human-generated work

Concerned about academic integrity, many institutions have turned to plagiarism detection services for student assignments, if not banned AI outright. Despite these measures, savvy students often find ways to get around the detection software. As AI continues to advance, differentiating between AI-generated and human-generated content will only become harder.

Rather than trying to stop students from using AI, institutions are thinking about how to redesign assessments and create new assignments that foster higher-order skills in concert with AI. There are lots of experiments underway, including the Sentient Syllabus Project, an academic collaborative devising assignments that incorporate AI but push beyond its limits so that students are forced to generate insights that AI cannot. AIs might create sample texts that students fact-check, critique, and improve. An AI bot might even take part in a group project and offer viewpoints that students haven’t considered. In short, using AI may prove to be the best way to teach students to surpass AI—a critical skill for all in an AI-driven world.

Not preparing students for an AI-fluent workforce

Angst over preventing “AI cheating” is also leading universities to ignore the broader impact of AI on the workforce. AI tools are already a staple in many workplaces, and AI’s presence will only grow over time. HE is no exception: one university’s creative team used ChatGPT to compose a 30-second commercial script and determine the shot selection for its production. According to the chief marketing officer, the script required minimal adjustments, and eight of the ten suggested shots aligned perfectly with what her team had in mind. ChatGPT completed both tasks within a couple of minutes.

Since students must learn how to use AI to prepare for the workforce, educators must adapt their teaching and evaluation methods to accommodate AI. Institutions should teach students how to critically evaluate the accuracy, relevance, and potential biases of AI-generated content while leveraging the benefits of AI tools such as efficiency, personalised learning experiences, and enhanced research capabilities.

Adopting a piecemeal approach to AI

Many universities are forming taskforces and working groups to address specific aspects of AI, such as its impact on writing assignments or research grant proposals. While these smaller taskforces provide an excellent starting point, it is essential to establish university-wide taskforces that can systematically uncover AI’s impacts across campus. Universities must ensure that these taskforces include diverse representatives who can contribute a range of perspectives on AI. A comprehensive plan can also prevent siloed thinking and duplicate, unnecessary product purchases across the campus.

Assuming a formalised strategy is necessary before discussing AI with the campus community

There’s no doubt that many people on campus are already using AI to do their work. But many senior leaders have been silent on the subject. By not initiating the conversation on AI, they could inadvertently drive utilisation underground. Instead, progressive leaders are fostering AI conversations through workshops, focus groups, and “town hall” meetings. Open dialogues can identify campus “power users” and AI experimenters who can help inform the university’s AI strategy. This could be a starting point for a more formal strategy, or it could help leaders spotlight campus innovations.

Failing to raise awareness about AI risks, especially of public platforms like ChatGPT

AI models rely on vast amounts of data to learn and generate content. This raises concerns about privacy and data security, as sensitive information can be inadvertently shared or exposed. AI models can also generate responses that may be inappropriate, offensive, or biased, leading to ethical and reputational issues. These risks will not prevent campus communities from using a product that simplifies their daily lives. But training and awareness campaigns can mitigate these risks substantially. When possible, institutions should also create avenues for leveraging AI capabilities safely, like developing AI chatbots with built-in guardrails.

These are the opportunities and challenges we have identified, but we have no doubt there are more – do share your reflections in the comments.

This article is published in association with EAB.

2 responses to “Six ways universities could embrace AI – and six ways to get it wrong”

  1. A few comments on this:
    The linked article claims a rather more plausible 100 million users for ChatGPT – 1.16 billion is the number of claimed page views, which is a very different number (this is the sort of subtle error of fact that Large Language Models are brilliant at making, incidentally). They’re popular toys, but at the 1-2 per cent of internet users level, not the 20 per cent level!

    Dismissing new super-hyped technology as overrated is a brilliant tech strategy. Sure, 1% of the time you miss smartphones and end up a few years behind on adoption when it turns out they’re important after all, but you can catch up easily enough and you mainly just get the chance to learn from the early mistakes; the rest of the time you don’t overinvest in MOOCs or Blockchain or Inaccuracies-As-A-Service chatbots or whatever the latest brochureware is, instead of focusing on things like email, websites, and wifi, which people really need.

    “Inappropriate, offensive and biased” really isn’t the problem with chatbots in a university context. Sure, it’s a reputational risk, but they’re probably not saying anything worse than some of the staff and students already are. *Inaccurate* is the much bigger risk. If a university’s chatbot says to an applicant that the A123 programme is accredited when it isn’t, or has a year-in-industry option when it doesn’t, that’s considerably more of a problem; worse, the opaque nature of the LLM programmes currently being marketed as “AI” means that it’d be virtually impossible for a university to prove that its chatbot didn’t/couldn’t claim that. And since LLMs only have a concept of “words likely to succeed other words” rather than “accuracy”, this is likely to be unfixable.
    “Guardrails” won’t prevent inaccuracies unless you spend so long on the guardrails that you’re just creating a list of approved answers and using the LLM to pick the most likely document for the question. (Which isn’t a bad use of the technology – improved website search is really good to have – but it’s a long way from the hype.)

  2. Thank you to the last commenter for your scepticism about AI, which seems to me to be a confidence trick by unethical corporations. I would just add, in response to ‘1% of the time you miss smartphones’, that I miss smartphones 100% of the time! I have never owned one and never want to own one. Likewise, it is possible to refuse AI right from the start, if we have the courage to say that the emperor has no clothes and that we want no part of it in our daily lives, ever.

    Also, I think readers of this article should be told what EAB is, as there is a large potential conflict of interest. Is it ‘the European Association for Biometrics (EAB)… the leading voice for digital ID & biometrics in Europe’? Which would be sinister enough. Or is it the EAB that is ‘a consulting firm specializing in education institutions’? In which case it is a big profit-making business, headquartered in Washington DC, which should clearly therefore have no role in determining the future of public universities in the UK, or anywhere else.
