
Universities told to be AI’s ethical watchdogs

Industry and government alone cannot oversee new technology, experts say  

November 28, 2019
Kate Devlin at 色盒直播 Live

Kate Devlin was blunt in her assessment that the UK was probably not going to be the world leader in the development of artificial intelligence.

“That will happen in Silicon Valley, where the money is,” the senior lecturer in social and cultural artificial intelligence at King’s College London said during Times Higher Education’s 色盒直播 Live. “There is already a brain drain from higher education to industry. Universities can’t compete with the salaries, the free beer and the beanbag chairs.”

However, there was one space where both the UK and academia could lead: ethics.

“The ethics community in Europe is strong,” Dr Devlin told a panel debate, adding that the ethical use of AI could become a policy consideration for universities in the future, in the way that sustainable development goals and climate change are now. Recent documents such as the General Data Protection Regulation and the House of Lords’ 2018 report on AI’s impact pointed to greater awareness about the moral issues of using powerful technology, she said.

“Ethics is not owned by one party,” said Nathan Lea, senior research fellow at the UCL Institute of Health Informatics, who urged greater engagement with the public, industry and government. “We have to humanise this somehow.”

There are a host of ethical problems with how technology is being used today – from misused personal data to “deepfake” digitally manipulated videos. The panel speakers said that there was an ethical challenge in ensuring that datasets and algorithms were “unbiased”. This was particularly important given that the outcomes of AI work were not always predictable.

Dr Devlin used the example of an app developed by Stanford University, which claimed it could identify if someone was gay without their knowledge or consent. She asked what could happen if that technology fell into the hands of a nation that punished citizens for homosexuality.

Even within university administration itself, there were not always adequate policies in place on using AI responsibly. The speakers expressed concern about using AI to monitor student progress, which they felt should still be done mostly in a personal, one-on-one capacity, especially if well-being issues were involved.

“What if there is a mental health issue, and what if that data is wrong? It horrifies me that we would use that uncritically,” Dr Devlin said, adding that the practice raised a potential “red flag”.

“Having technology involved in that way makes me nervous,” Dr Lea agreed. “You lose that human contact. Would I trust student profiling based on an AI algorithm?”

Dr Devlin and Dr Lea both began their academic careers in the arts and humanities, only to switch to studying computer science at postgraduate level. They are now working in the emerging field of digital humanities, which attempts to bridge the gap between the scientific and human sides of technology.

“The idea that ‘tech is neutral’ is a STEM approach, not a humanities approach,” Dr Devlin concluded.

joyce.lau@timeshighereducation.com

