How AI is being used to fight fake news
Which sources do you trust to tell you what you should distrust?
At an individual level, the advent of ‘fake news’ can cause annoyance, bewilderment or even mirth. On a global level, it endangers lives, fair elections and the course of history.
University of Queensland researchers Professor Shazia Sadiq and Associate Professor Gianluca Demartini are on the frontline of the fight against this modern scourge.
While cinematic fiction and scam alerts are full of instances where artificial intelligence (AI) has been manipulated for misdeeds, the UQ Faculty of Engineering, Architecture and Information Technology scientists occupy the opposing corner.
They are bastions of good and truth – and have the financial support of tech giants Google and Facebook as they lead the ARC Centre for Information Resilience in a fight for fairness.
“The threats from incorrect usage of technology range from the mundane to sophisticated criminality that brings concerns for national security,” Professor Sadiq says.
“Working to identify and quarantine ‘fake news’ is one aspect of what we do, but on a broader scale we aim to achieve integrity across the usage of multiple data assets.
“We advocate for the transparency of algorithms, address biases that rear their heads in technology, apply ethical approaches to data governance, and remove barriers to entrusted, advantageous data sharing.
“Successful implementation of artificial intelligence occurs when robust mechanisms and capacity for information resilience exist.”
Right now, during the coronavirus pandemic, awareness of ‘fake news’ and the need to identify it is heightened.
The debate concerning the respective values of censorship and free speech is an evolving drama, continuing to play out before our eyes.
Yet it’s far from the first time that social media and the internet have been used to spread misinformation about medical threats or put human lives at risk.
More than five years ago, American media outlet PBS highlighted how aspects of the Ebola outbreak were found to be grossly sensationalised, and how wild conspiracy theories led to a shooting in Washington DC.
Yet, at the same time, the objectivity of PBS itself was called into question.
Which brings us to the big dilemma – which sources do you trust to tell you what you should distrust?
“Fortunately, artificial intelligence and data-driven methods can be used to detect and control misinformation in the same way they are used to spread it,” Dr Demartini says.
“We have partnered with platforms to develop automatic fake news detection systems in the UQ AI Collaboratory, and are investigating ways we can give people the skills to recognise misinformation independently.
“Our technology not only identifies fake news, but also explains and substantiates why that is the case.
“My advice for people is to consider five questions when viewing information online. Firstly, what is the source of the information and is it reliable?
“Are there any conflicts of interest and do you understand who the author is working for? Is the purpose of the information to sell you a product or service, or does it have another motive you can identify?
“Is the information merely somebody’s comment or opinion, or is there credible evidence to support the statements?
“Finally, it’s important to ask yourself if you have any particular beliefs or preferences that could be affecting your own judgement.”
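The data-driven detection Dr Demartini describes can be illustrated with a toy example. The sketch below trains a naive Bayes text classifier to separate sensationalist from sober headlines — all headlines and labels are invented for illustration, and this is not the UQ AI Collaboratory’s actual system, which would use far larger datasets and richer models:

```python
import math
from collections import Counter

# Hypothetical training headlines -- invented for illustration only.
TRAIN = [
    ("miracle cure doctors hate this secret trick", "fake"),
    ("shocking conspiracy the government hides the truth", "fake"),
    ("celebrity reveals one weird trick to get rich", "fake"),
    ("researchers publish peer reviewed study on vaccine safety", "real"),
    ("election commission releases official vote count", "real"),
    ("university study finds link between diet and health", "real"),
]

def train(examples):
    """Count word frequencies per class for a naive Bayes model."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def score(text, counts, totals, vocab):
    """Log-probability score per class, with add-one smoothing
    so unseen words do not zero out a class entirely."""
    scores = {}
    for label in counts:
        logp = math.log(0.5)  # uniform class prior on this toy set
        for w in text.split():
            p = (counts[label][w] + 1) / (totals[label] + len(vocab))
            logp += math.log(p)
        scores[label] = logp
    return scores

def classify(text, model):
    """Pick the class with the highest log-probability score."""
    scores = score(text, *model)
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify("secret miracle trick shocks doctors", model))         # fake
print(classify("peer reviewed study released by university", model))  # real
```

Real detection systems pair this kind of statistical signal with the human-facing checks above — source reliability, author motive and supporting evidence — which is also what allows them to explain *why* an item was flagged, not merely that it was.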
Although the term ‘fake news’ gained particular prominence in relation to Donald Trump (both through his use of the phrase and the advantages he was perceived to gain from misinformation), the impact of political misinformation can be witnessed much closer to home.
In Australia’s 2019 federal election, the rapid online spread of a scare campaign about the possible introduction of death taxes was seen as pivotal to the election outcome.
Meanwhile, a study published in the Asian Journal for Public Opinion Research indicated more than 50 per cent of voters in Taiwan’s 2018 local elections acted upon incorrect information.
The research used supporting evidence to state Taiwan was “the most severely attacked country in the world in terms of being fed misinformation by foreign governments” and that China was “attempting to sway the island’s politics with a new ‘Russian-style influence campaign’.”
In assessing the western world’s own bias, it’s also important to recognise that nations such as North Korea have received unflattering reports intended to ridicule and discredit their leadership.
Claims that North Korea told its citizens it had defeated Brazil at the Football World Cup, and that Kim Jong-il recorded an unbelievable string of holes-in-one in his first round of golf, have largely been driven by external media, not state-controlled media.
“The ability to algorithmically spread false information through online social networks, together with the data-driven ability to profile and micro-target individuals, has made for customised false content,” Dr Demartini warns.
“By developing artificial intelligence systems and collecting a range of online data over several years, UQ is helping to lead the way on both assessing and countering the role biases and belief systems play in technology.”
This content was paid for and created by The University of Queensland. The editorial staff of The Chronicle had no role in its preparation.