With Great AI Comes Great Responsibility

Artificial Intelligence continues to evolve at an alarming speed. AI’s capability to boost our productivity, generate imagery and even catch us up on the Teams meeting we joined ten minutes late sounds incredibly promising … but just how reliable is this new technology, and are we prepared to have it so integrated into our lives?

With AU$6 million in funding provided by the South Australian government to the University of Adelaide’s own Australian Institute for Machine Learning (AIML) over the next four years (and matched by the University), research into the technology and its limitations will help ensure that South Australia becomes both a leader in AI and a global asset.

As with all innovative technologies, the boundaries of appropriate use are difficult to define, and they become particularly complicated when applied to politics and health. That is where MAGPIE and CANAIRI come swooping in.

MAGPIE

Monitoring and Guarding Public Information Environment, or ‘MAGPIE’, is a University of Adelaide project focused on protecting the public information environment so that reliable information is favoured over the unreliable. This is achieved through a situational awareness tool developed by Keith Ransom, Postdoctoral Researcher in the School of Psychology, with Dr Rachel Stephens, Professor Carolyn Semmler and Professor Lewis Mitchell.

Misinformation in AI is rampant, and dangerously so if left unchecked – its speed, confidence and persuasiveness can be a recipe for disaster if it is overly relied upon. AI has an enormous library of information to draw on, including the internet and, in prompted cases in software such as Microsoft’s Copilot (its newly launched AI assistant tool), our own Word documents, emails, spreadsheets and other cloud-based files. But as we all know, the internet is rife with vitriol, conspiracy theories and scams shared by relatives on Facebook. Even our own work is never completely free of error (as much as we’d like to think it is).

AI does, however, thrive when fed accurate data in which it can identify clear patterns. It also produces better results when given clear, succinct prompts. For example, a command such as, “Please write a 250-word summary of the April – June quarterly report, listing the key achievements of the finance division in dot points” will likely produce an accurate synopsis.
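To make this concrete, here is a minimal sketch of how such a prompt might be sent to a commercial AI model from code. It assumes the OpenAI Python client; the model name and report file are illustrative placeholders, not details of the projects described here.

```python
# A minimal sketch of clear, structured prompting, assuming the OpenAI
# Python client. The model name and report file are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

report_text = open("q2_report.txt").read()  # hypothetical source document

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": ("Please write a 250-word summary of the April-June "
                     "quarterly report below, listing the key achievements "
                     "of the finance division in dot points.\n\n"
                     + report_text)},
    ],
)
print(response.choices[0].message.content)
```

The same principle applies whatever tool is used: the more specific the request, the less room the model has to guess.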

But what happens when AI is used for propaganda, for influencing elections, or for deliberately spreading harmful and false information?

MAGPIE’s situational awareness tool aims to map, detect and defend against such harmful information using AI itself. It accomplishes this by drawing on AI’s ability to generate arguments and reasoning for agendas based on the prompts and information fed to it. This yields a better understanding of the malicious pathways by which harmful information can spread, helping to both identify and combat it. The tool is ever evolving with advancements in the technology and, much like AI itself, will continue to grow stronger the more data it is fed.

CANAIRI

The Collaboration for trANslational AI tRIals, or ‘CANAIRI’, is a group focused on developing translational trials for accountable AI integration, ensuring that AI systems are not only effective but also transparent, ethical and accountable. The project is a collaboration involving Dr Melissa McCradden from the University of Adelaide’s Australian Institute for Machine Learning and Dr Xiao Liu from the University of Birmingham.

With AI pervading everyday life, it is important to establish that its integration into vital sectors such as our healthcare system has been well thought out, carefully calculated and, most importantly, able to cater to individuals’ needs. How exactly can we be sure that a doctor’s AI tool for determining the outcome of a test or diagnostic process is accurate when our circumstances differ from those of the patient before us? It is safe to say that we would not trust a drug without rigorous testing and research behind it before subjecting our bodies to it … so why should it be any different with AI?

The CANAIRI project endeavours to approach these AI tools through a series of translational trials that would measure the following (a toy illustration of the first two measures appears after the list):

- The accuracy of the model

- The model’s bias when used across a large cohort

- How humans interact with the model and the implications for its output

- The ethical considerations around private information and its use

- The implications for cyber security

- The environmental impacts of powering such tools

- Public engagement.
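To illustrate the first two measures in spirit, here is a toy sketch (not the CANAIRI protocol, and using entirely made-up data) of how overall accuracy can mask bias that only appears when results are broken down by patient cohort.

```python
# Toy illustration: overall accuracy vs. per-cohort accuracy.
# All data is fabricated for demonstration; this is not the CANAIRI protocol.
from collections import defaultdict

# (cohort, true_label, predicted_label) for a hypothetical diagnostic model
results = [
    ("under_40", 1, 1), ("under_40", 0, 0), ("under_40", 1, 1),
    ("over_40",  1, 0), ("over_40",  0, 0), ("over_40",  1, 0),
]

correct = sum(1 for _, truth, pred in results if truth == pred)
print(f"Overall accuracy: {correct / len(results):.2f}")  # 0.67

# Breaking results down by cohort exposes bias the overall figure hides
by_cohort = defaultdict(list)
for cohort, truth, pred in results:
    by_cohort[cohort].append(truth == pred)

for cohort, hits in by_cohort.items():
    print(f"{cohort} accuracy: {sum(hits) / len(hits):.2f}")  # 1.00 vs 0.33
```

A model that looks acceptable in aggregate can still fail badly for one group, which is exactly why trials across large, diverse cohorts matter.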

Through the evidence gathered during trials of these models, the risks and pitfalls of using AI in such vital societal systems can be identified – ensuring that whatever is adopted by those we trust most to take care of us is tried, tested and, most importantly, trusted.

The responsibility that comes with such revolutionary technology, one that will ultimately become a staple of our planet’s future, is not to be taken lightly. It is only with time, research and regulation that we will understand AI and how best to use it.

This content was paid for and created by The University of Adelaide. The editorial staff at The Chronicle had no role in its preparation.