The world missed the boat with social media. It fuelled misinformation, fake news, and polarisation. We saw the harms too late, once they had already started to have a substantive impact on society.
With artificial intelligence, and especially generative AI, we're earlier to the party. Not a day goes by without an open letter, product release or interview raising the public's concern.
Responding to this, the Australian government has just released two documents. One is a report commissioned by the National Science and Technology Council (NSTC) on the opportunities and risks posed by generative AI, and the other is a consultation paper asking for input on possible regulatory and policy responses to those risks.
I was one of the external reviewers of the NSTC report. I've read both documents carefully so you don't have to. Here's what you need to know.
Trillions of life-changing opportunities
With AI, we see a multi-trillion dollar industry coming into existence before our eyes, and Australia could be well-placed to profit.
In the last few months, two local unicorns (billion-dollar companies) pivoted to AI. Online graphic design company Canva introduced its "magic" AI tools to generate and edit content, and software development company Atlassian introduced "Atlassian Intelligence", a new virtual teammate to help with tasks such as summarising meetings and answering questions.
These are just two examples. We see many other opportunities across industry, government, education and health.
AI tools to predict early signs of Parkinson's disease? Check. AI tools to predict when solar storms will hit? Check. Checkout-free, grab-and-go shopping, courtesy of AI? Check.
The list of ways AI can improve our lives seems endless.
What about the risks?
The NSTC report outlines the most obvious risks: job displacement, misinformation and polarisation, wealth concentration and regulatory misalignment.
For example, are entry-level lawyers going to be replaced by robots? Are we going to drown in a sea of deepfakes and computer-generated tweets? Will big tech companies capture even more wealth? And how can little old Australia have a say on global changes?
The Australian government鈥檚 consultation paper looks at how different nations are responding to these challenges. This includes the US, which is adopting a light touch approach with voluntary codes and standards; the UK, which looks to empower existing sector-specific regulators; and Europe鈥檚 forthcoming AI Act, which is one of the first AI-specific regulations.
Europe's approach is worth watching if its previous data protection law, the General Data Protection Regulation (GDPR), is anything to go by. The GDPR has gone somewhat viral: 17 countries outside Europe now have similar privacy laws.
We can expect the EU's AI Act to set a similar precedent on how to regulate AI.
The European Union鈥檚 GDPR regulations came into effect on May 25 2018, and have become a model for other nations around the world.
Indeed, the Australian government's consultation paper specifically asks if we should adopt a similar risk- and audit-based approach to the AI Act. The Act outlaws high-risk AI applications, such as AI-driven social scoring systems and real-time remote biometric identification systems used by law enforcement in public spaces. It allows other risky applications only after suitable safety audits.
China stands apart as far as regulating AI goes. It proposes to implement very strict rules, which would require AI-generated content to reflect the "core value of socialism", "respect social morality and public order", and not "subvert state power", "undermine national unity" or encourage "violence, extremism, terrorism or discrimination".
In addition, AI tools will need to go through a "security review" before release, and to verify users' identities and track usage.
It seems unlikely Australia will have the appetite for such strict state control over AI. Nonetheless, China's approach reinforces how powerful AI is going to be, and how important it is to get right.
Existing rules
As the government鈥檚 consultation paper notes, AI is already subject to existing rules. These include general regulations (such as privacy and consumer protection laws that apply across industries) and sector-specific regulations (such as those that apply to financial services or therapeutic goods).
One of the major goals of the consultation is to decide whether to strengthen these rules or, as the EU has done, to introduce specific AI risk-based regulation 鈥 or perhaps some mixture of these two approaches.
Government itself is a (potential) major user of AI and therefore has a big role to play in setting regulation standards. For example, procurement rules used by government can become de facto rules across other industries.
Missing the boat
The biggest risk, in my view, is that Australia misses this opportunity.
A few weeks ago, when the UK government set up a taskforce to deal with the risks of AI, it also announced an additional £1 billion of investment in AI, alongside the several billion pounds already committed.
We've not seen any such ambition from the Australian government.
The technologies behind the iPhone, such as the internet, GPS and wifi, came about because of government investment in fundamental research and training for scientists and engineers. They didn't come into existence because of venture funding in Silicon Valley.
We're still waiting to see the government invest millions (or even billions) of dollars in fundamental research, and in the scientists and engineers who will allow Australia to compete in the AI race. There is still everything to play for.
AI is going to touch everyone's lives, so I strongly encourage you to have your say in the consultation. You only have eight weeks to do so.
Professor of AI, Research Group Leader
This article is republished under a Creative Commons license.