
ChatGPT in schools: Students must demonstrate skill, not just reproduce facts

AI - A Threat, Opportunity or Both?


The assessment of students at university level must go beyond regurgitating facts and test their practical skills as well, professor of artificial intelligence Alexei Dingli said on Wednesday.

He was speaking during the conference “AI: Threat, opportunity or both?”, a business breakfast organised by the Malta Digital Innovation Authority and supported by Times of Malta.

During the first panel, which also featured MDIA senior legal officer Annalise Seguna, senior lecturer in AI and virtual and augmented reality Vanessa Camilleri, and computer science professor Joshua Ellul, Dingli was asked to react to concerns expressed by other University of Malta lecturers about the possibility of students using ChatGPT in assignments and exams.

Dingli said that, as an educator, he was not concerned about his students making use of the technology, and suggested that it was perhaps time for educators to rethink the way assessment is carried out.

“It’s time to change and we cannot keep looking towards the past. Like every disruptive technology, I believe that this was a wake-up call.”
– Alexei Dingli

“In my case, when it comes to assessment, my students cannot just regurgitate what I told them in class. We need them to go deeper and demonstrate not only that they have understood what they’ve been taught but to go beyond that.”

Dingli said that in a field like AI, students face the reality that, by the time they graduate, some of what they learned may already be out of date.

“It’s no longer possible to get your degree and assume that you are set for life, so I hope that in that context my colleagues at the university will consider changing the way they have been doing things for the past 10 or 20 years.”

Discussing the impact the EU’s AI Act is set to have on technology like ChatGPT, Ellul said that Malta’s “sandbox” framework had given it a head start, putting it in a position where systems to protect users, similar to those envisaged by the AI Act, are already in place.

Regulatory sandboxes allow cutting-edge technology, which may not be compliant with the existing legal framework, to operate within a restricted sector to better understand its risks and opportunities before developing regulations around it.

Camilleri said that part of the ethical and regulatory problem posed by AI tools is that they are developed by big tech companies that are not transparent about how and where the data used to train AI models comes from, or whether it was obtained consensually.

“These companies have the money and resources to produce giant leaps, but as consumers we are using these products without knowing where the data is coming from, and maybe unconsciously we are interacting with AI in a way that is shaping our thoughts,” she said.

“We need to be more aware of the information that we are consuming.”

So can legislation keep up with the fast-paced nature of big tech? Ellul said that it was likely to be “an arms race”.

“The way that tools like ChatGPT can proliferate code so quickly is worrying and it is impossible to regulate every variation of it, but we need tighter regulatory tools to help in this regard, otherwise it’s going to be an arms race.”

Dingli also pointed out that by the time common EU legislation is finalised, it may already be out of date, noting that ChatGPT was launched after the consultation period for the AI Act had closed.

“There is a lot of technology still being researched that is going to be out and available in the coming years, so by the time this legislation becomes effective, something different will be around.”

In a second panel, AI researcher Patrick Camilleri, EBO.ai CEO Gege Gatt, MDIA chief strategy officer Gavril Flores, AI lecturer Claudia Borg and AI graduate Anne Camilleri discussed policy implementation and user trust in AI tools.

Gatt said that the matter was less about trust in the technology and more about the oversight regulatory bodies maintain over it as AI models improve.

“I think the issue is how are we aligning ourselves with this technology and its values and objectives to those that we have as a society, we need to understand this alignment as humans, but also as governments, NGOs and corporations,” he said.

“We are at a turning point on how to regulate the technology and distribute its wealth equitably as well as define what work means for us. We have to understand the foundation of the public policy that we are building the AI world on and we have to make sure that the cause and effect of this technology is understood by users.”

Flores said that, as a regulator, the MDIA treats operational constraints and the effects on the human person as its main oversight considerations.

“We look at all the risk factors in AI systems, including things like safeguarding human rights and bias before it is put out to the market,” he said.

Gatt also spoke about how AI tools can give jobs new meaning and may present a “jump into rehumanising work”. This, he said, requires critically examining the role work plays in one’s life.

Most jobs, he added, are “rule-based” and have functionally not changed in procedure for many years. AI tools, however, now present the opportunity to automate repetitive tasks.

“When those repetitive and dehumanising tasks are removed from our portfolio of tasks and given to AI, it is an opportunity to really find meaning in what we do.”
– Gege Gatt

One sector that could benefit from this is the public service, Gatt added, saying that AI tools could be used to better connect citizens to services that they may not have previously known about.

Photo: Jonathan Borg
