Computer says no: deploying AI as a bulwark against harmful online content

With the sheer volume of information on the internet, the job of tackling disinformation, hate speech and deepfakes is increasingly complicated. Researchers at Qassim University are training AI to flag harmful content and clean up digital environments

Internet technologies such as social media have redrawn the information landscape, making harmful content easier to produce and share. The speed at which information is circulated is bewildering, and in a democratised internet environment, in which anyone can be a content creator, there is no slowing it down.

Stopping the spread of malicious content, such as disinformation, hate speech and deepfakes, is a complex problem, and it is one that Suliman Aladhadh, associate professor in the Information Technology Department at Qassim University, is trying to solve. He believes that any effective response to the proliferation of deepfakes and misinformation must have artificial intelligence (AI) at the heart of it, and his research explores the development of AI-driven tools that identify and flag harmful online content.

“We can use AI to prevent harmful content and also to stop deepfake videos and hate speech,” he says. “AI can help with the catching and prevention of this kind of content.”

As with any AI-driven development, the tools for combating harmful content require data – and plenty of it. Aladhadh applies deep learning models to spot deepfakes, which are growing more convincing by the day as face-swapping technology evolves. This, however, is not simply an IT problem; it is an issue that requires cross-disciplinary solutions.
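At its simplest, a deepfake detector of this kind is an image classifier run over face crops. The sketch below is illustrative only, assuming a generic pretrained network rather than Aladhadh's actual model; the two-class head is untrained here and would need fine-tuning on labelled real and fake faces before its scores meant anything.

```python
# Illustrative sketch only, not Aladhadh's model: a pretrained CNN is given
# a two-class head (real vs deepfake). In practice the head would be
# fine-tuned on labelled face crops before use.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = real, 1 = deepfake
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deepfake_score(face_crop: Image.Image) -> float:
    """Return the model's probability that a face crop is a deepfake."""
    x = preprocess(face_crop).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(x)
    return torch.softmax(logits, dim=1)[0, 1].item()
```

Systems like this typically run frame by frame over a video and aggregate the per-frame scores before flagging a clip for human review.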

Healthcare is one domain where misinformation can prosper online. The pandemic offered a case study in how quickly it can spread. “People in information technology and computer science need people from different research domains to be included in such work,” Aladhadh says. “When you are working in health, you need some people from the medical side to help us understand the results.”

Some of Aladhadh’s collaborations are closer to home in the Information Technology Department. Colleagues working on large language models can be invaluable in providing specialist knowledge about how huge amounts of text can be analysed and interpreted, allowing for multilingual functionality to be embedded in AI-driven tools.

“If you are analysing where the data set itself came from, sometimes we use the large language models to be able to understand what people are writing,” he says. “There are a number of tools or models we can apply to that as well. This is a way of helping the model to understand the content and be able to catch harmful things.”
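A minimal sketch of how such a text-analysis step can be wired together is shown below. The checkpoint named and the flagging threshold are assumptions chosen for illustration, not details of the Qassim team's system; any multilingual toxicity classifier could be substituted.

```python
# Illustrative sketch only: scoring posts with an off-the-shelf multilingual
# transformer. The checkpoint below is an assumed example, not the team's
# actual model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="unitary/multilingual-toxic-xlm-roberta",  # assumed checkpoint
)

posts = [
    "Thanks for sharing, this was really helpful!",
    "An example of an abusive post that a moderator should review.",
]

for post in posts:
    # Each result is a dict such as {'label': 'toxic', 'score': 0.97};
    # label names depend on the checkpoint chosen.
    result = classifier(post)[0]
    # Both the label check and the 0.9 threshold are tunable assumptions.
    flagged = "toxic" in result["label"].lower() and result["score"] > 0.9
    print(f"{'FLAG' if flagged else 'ok  '} "
          f"{result['label']:>12} {result['score']:.2f}  {post!r}")
```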

Much progress has been made, but the development of AI tools to counter harmful content is still in its infancy, especially when it comes to applying them consistently in different contexts. “More research is needed,” Aladhadh says.

Brought to you by Qassim University