Artificial intelligence at universities: a pressing issue

The overwhelming rise of text generators raises the need for reflection and guidelines to ensure their ethical use in an academic setting.

Just a year ago, the debate around artificial intelligence (AI) was largely theoretical. According to Caroline Quesnel, president of the Fédération nationale des enseignantes et enseignants du Québec (FNEEQ-CSN) and literature instructor at Collège Jean-de-Brébeuf, the start of the 2023 winter semester marked a turning point as ChatGPT became a focal point in classrooms. Other forms of generative AI are also available to the public, such as QuillBot (text writing and correction), DeepL (translation) and UPDF (PDF summarization).

Martine Peters, a professor of educational science at the Université du Québec en Outaouais, surveyed 900 students and found that 22 per cent were already using ChatGPT (sometimes, often or always) to do their assignments. "And that was in February!" she noted. It is an alarming statistic, particularly as neither faculty nor universities were prepared to deal with the new technology. Trying to ban AI now would be futile, so what can universities do to ensure its ethical use?

Dr. Peters is convinced that AI can be used for educational purposes. It can help a person understand an article by summarizing, translating or serving as a starting point for further research. In her opinion, outside of language courses (which specifically assess language skills), it could also be used to correct a text or improve syntax, much like grammar software or writing services that some students have relied upon for years.

However, plagiarism remains a major concern for academics. And for the moment, there is no effective tool for detecting the use of AI. In fact, OpenAI, the company behind ChatGPT, abandoned its detection software this summer for lack of reliable results. "This is a rat race we're bound to lose," argued Dr. Quesnel. Should professors return to pen-and-paper tests and classroom discussions? Satisfactory solutions have yet to be found, but as Dr. Quesnel added, it's clear that AI creates tension, especially considering the massive pressures in academia. "Right now, we're spending a lot of energy looking at the benefits of AI instead of its pitfalls."

Beyond plagiarism, AI tools raise all kinds of issues (bias, no guarantee of accuracy, etc.) that the academic community needs to better understand. ChatGPT confidently spouts nonsense and makes up references; it's not very good at solving problems in philosophy or advanced physics. "You can't use it with your eyes closed," warned Bruno Poellhuber, a professor in the department of psychopedagogy and andragogy at the Université de Montréal.

More training is needed to help professors and students understand both the potential and drawbacks of these technologies. "You have to know and understand the beast," Dr. Poellhuber added.

Dr. Peters agreed. "For years, we didn't teach how to do proper web searches. If we want our students to use AI ethically, we have to show them how, and right now nobody seems to be taking that on," she said.

Universities are responsible for training their instructors, who can then pass this knowledge on to students. "Students need to know when it's appropriate to use AI," explained Mélanie Rembert, ethics advisor at the Commission de l'éthique en science et en technologie (CEST).

The Université de Montréal and the Pôle montréalais d'enseignement supérieur en intelligence artificielle (PIA) organized a day of reflection and information for the academic community (professors, university management, etc.) in May. "The aim was to demystify the possibilities of generative AI and discuss its risks and challenges," Dr. Poellhuber explained.

This event followed an initial activity organized by the Quebec Ministère de l'Enseignement supérieur and IVADO, which gave rise to a joint Conseil supérieur de l'éducation (CSE) and CEST committee. The committee is currently conducting discussions, consultations and analysis among a wide range of experts on the use of generative AI in higher education. "Our two organizations saw the need for more documentation, reflection and analysis around this issue," said Ms. Rembert, who coordinates the expert committee's work. Briefs were solicited from higher education institutions and from student and faculty organizations. The report, scheduled to be released in late fall, will be available online.

Given the scale of the disruption, faculty members could also benefit from the experience of others and the support of a community of practice. That's the idea behind LiteratIA, a sharing group co-founded by Sandrine Prom Tep, associate professor in the management sciences school at the Université du Québec à Montréal. "It's all very well to talk about theory and risks, but teachers want tangible tools. They want to know what to do," she explained. Instead of letting themselves be outpaced by students, who are going to use AI anyway, teachers should adopt a strategy of transparency and sharing. "If we don't get on board, students will be calling the shots."

Universities and government alike will have to take a close look at the situation and set concrete, practical and enforceable guidelines. "We can't dawdle: AI is already in classrooms," said Dr. Quesnel, adding that faculty are currently shouldering a burden that should be shared by teaching institutions and the Ministère de l'Enseignement supérieur. "We need tools that teachers can rely on."

So far, very few universities have issued guidelines, and those that exist are often vague and difficult to apply. "There isn't much in terms of procedures, rules or policies, or tools and resources for applying them. Basically, teachers have to decide whether or not to allow AI, and make their own rules," Dr. Prom Tep added. Institutions will need to define clear policies for permissible and impermissible use, including but not limited to plagiarism (for example, how to use it for correcting assignments, how to cite ChatGPT, etc.).

Rolling out policies and legislation can take time. "It's like when the web became prominent: legislation had to play catch-up," noted Dr. Prom Tep. The Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA), funded by the Fonds de recherche du Québec, is expected to make recommendations to the government, as is the joint expert committee. "But is that enough? Do we need broader consultations?" questioned Dr. Prom Tep, who would like to see permanent working groups set up. In her opinion, to avoid each institution reinventing the wheel, these reflections will have to be collective and shared, and neutral places for discussion will have to be created.
