The increasing use of Artificial Intelligence (AI) and new surveillance technologies has raised privacy concerns. In Australia, the number of data breaches, particularly in the finance and healthcare sectors, has risen over the last few years. Information mishandling; opaque, excessive and pervasive surveillance; and the increasing use of facial recognition and other biometrics are just some of the latest developments causing concern.
As a result, Australians are worried about protecting the privacy of their data and have become wary of the activities of businesses that collect, handle and share revealing information about their activities, interests and preferences.
Over the last few years, concerned consumers and regulators have forced businesses to undertake expensive rearchitecting of their data systems, data handling processes, and data project governance and assurance.
But many business models and data architectures were not designed to be private and secure by design and default, said Peter Leonard, Professor of Practice for the School of Information Systems & Technology Management and the School of Management & Governance at UNSW Business School.
Another reason businesses have struggled with privacy issues is that it has taken the Australian government over two decades into the 21st century to start a serious discussion about making the Australian Privacy Act fit for purpose, said Prof. Leonard.
It has taken the Australian government over two decades to start a serious discussion about data, AI and making the Australian Privacy Act fit for purpose, says UNSW Business School's Peter Leonard. Photo: Supplied.
How does AI affect your privacy?
The most difficult area to address in AI and data privacy is the issue of data profiling. For example, insurers could use AI profiling to avoid taking on high-risk clients, undermining the pooling of risk that enables premiums to be affordable across a broad base of insured persons, said Prof. Leonard. Data privacy, consumer protection, and insurance sector-specific laws do not address profiling-enabled targeting, nor do they ensure consumers are treated equitably.
"Many of the concerns around AI, for example, are about the use of profiling to differentiate in treatment of individuals, either singly (fully 'individuated') or as members of a segment inferred to have common attributes. There may be illegal discrimination (intentional or accidental) against individuals who have protected attributes (such as race, religion, gender orientation).
"More often, differentiation between individuals is not illegal but may be regarded as unfair, unreasonable or simply unexpected. AI enables targeted differentiation to be automated, cost-effective and increasingly granular and valuable for businesses," said Prof. Leonard.
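The profiling-enabled differentiation described above can be made concrete with a toy sketch. Everything here is hypothetical and not from the article: the features, weights, postcodes and premiums are invented for illustration. The point is that a seemingly neutral feature (such as postcode) can act as a proxy for a protected attribute, so two applicants with identical claims histories receive different automated treatment:

```python
# Hypothetical sketch of automated profiling-based differentiation.
# All names, features and weights are invented for illustration;
# they do not represent any real insurer's model.

def risk_score(applicant: dict) -> float:
    """Toy risk model: the weights are made up for illustration."""
    score = 0.0
    score += 0.5 if applicant["claims_last_5y"] > 2 else 0.0
    # A seemingly neutral feature such as postcode can act as a proxy
    # for protected attributes — a source of indirect discrimination.
    score += 0.3 if applicant["postcode"] in {"2999", "3999"} else 0.0
    return score

def quote_premium(applicant: dict, base: float = 1000.0) -> float:
    """Granular, automated differentiation: premium scales with score."""
    return round(base * (1 + risk_score(applicant)), 2)

low_risk = {"claims_last_5y": 0, "postcode": "2000"}
flagged = {"claims_last_5y": 0, "postcode": "2999"}

print(quote_premium(low_risk))  # 1000.0
print(quote_premium(flagged))   # 1300.0 — same claims history, higher price
```

Nothing in this sketch is illegal on its face, which is exactly the concern raised: the differentiation is automated, cheap and granular, yet may be unfair or unexpected.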
"Then there's a raft of other issues around nurturing, or at least not eroding, the digital trust of persons with whom a business deals, and about how you use data about individuals without them feeling that you're being 'spooky', or otherwise unreasonable, in the data that you're collecting."
Without adequate AI regulation and data privacy laws and guidance, businesses are having to fill in the gaps. Photo: Shutterstock.
How can businesses protect users鈥 data?
It is no longer good enough to simply comply with current data privacy law, said Prof. Leonard. Trust and privacy concerns go hand in hand, but without adequate laws and guidance, businesses must fill in the gaps.
"You have to start to guess where the law may go and fill gaps in the law in thinking about what is a responsible way to act or an ethical way to act. This is difficult for businesses to do and will require them to consult a broader range of stakeholders, including experts who can think about corporate social responsibility and ethics in the digital age."
While it is clear that Australia's privacy laws need reform to address modern problems in an increasingly digital world, reform is complex and contested, not least because data privacy is inherently multifaceted, explained Prof. Leonard.
Prof. Leonard recently published a design manifesto for an Australian Privacy Act "fit for purpose in the 21st century". The paper was one of the submissions lodged with the Australian Attorney-General's Department in response to the AGD Discussion Paper on review of the Privacy Act.
Prof. Leonard has put forward several recommendations surrounding reform of data privacy law, focusing on proposals for reform of the Australian federal Privacy Act 1988 (Cth) (Australian Privacy Act) and comparable State and Territory data privacy and health information statutes.
"Many of the issues are actually issues around data governance: how management makes decisions about how data is used, and how you architect your data holdings so that you can make the right decisions and have the right controls and safeguards in place to do all of this, without creating business models that are going to blow up in your face," said Prof. Leonard.
"The laws are important, but the law, in my experience, is usually less than a third of the issue that I'm addressing when I'm advising businesses around advanced data analytics and AI."
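One way to picture the data governance controls Prof. Leonard describes is a purpose-limitation safeguard baked into the data architecture itself. This is a minimal hypothetical sketch, not a reference to any real system: the dataset names and permitted purposes are invented, and a real implementation would sit in a data catalogue or access-control layer rather than an in-memory dictionary:

```python
# Hypothetical sketch of a purpose-limitation safeguard: each dataset
# records the purposes it was collected for, and access for any other
# purpose is refused. All names and purposes are invented examples.

ALLOWED_PURPOSES = {
    "claims_history": {"underwriting", "fraud_detection"},
    "browsing_logs": {"site_analytics"},
}

def fetch(dataset: str, purpose: str) -> str:
    """Gate data access on the purpose the data was collected for."""
    if purpose not in ALLOWED_PURPOSES.get(dataset, set()):
        raise PermissionError(f"{dataset!r} may not be used for {purpose!r}")
    return f"records from {dataset}"

print(fetch("claims_history", "underwriting"))  # records from claims_history
# fetch("browsing_logs", "marketing") would raise PermissionError
```

The design choice is that the control lives next to the data, not in a policy document, so a data project that repurposes information fails loudly instead of silently.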
"Proper understanding of the limitations of the AI requires explainability, so that humans understand the limitations of the AI or the algorithm."
Going forward, he urged businesses to consider whether crucial decisions might undermine users' trust and whether their business models are sustainable for the long term, given that laws are responding, at an ever-quickening pace, to the concerns of consumer advocates and citizens about these new uses of data.
"More often than not, the issue isn't the AI, or the algorithm itself, but rather the over-reliance on it, the overuse of it, in a range of circumstances… where it was inappropriate to use it," he said, citing a recent case as a perfect illustration of this.
"The issues to be addressed are not black and white issues of can I, can't I? Instead, they're much more complex issues around what should a responsible organisation do, or not do?" he said.
AI and data regulation needs a coherent joined-up policy approach to address the growing privacy debate, says UNSW Business School's Rob Nicholls. Photo: Supplied.
Human decisions are crucial in an AI-driven world
Agreeing with the recommendations put forward by Prof. Leonard, Rob Nicholls, Associate Professor in Regulation and Governance at UNSW Business School, said one of the fundamental ways to avoid potential issues with automation is to ensure that AI is used as a tool to support decision-making, not as a tool that makes decisions.
"One of the critical things, particularly in the government's or regulator's use of AI, is that it needs to be a decision support tool, but the decision is a human decision," he said.
While Australia鈥檚 current privacy laws are inadequate for the problems faced by businesses and consumers in the modern world, more laws aren鈥檛 necessarily the answer. Instead, it鈥檚 fundamentally more important to consider how data uses can adversely affect humans or be socially beneficial, said A/Prof. Nicholls.
"Businesses must think about: What are you using this for, is it in support of a decision? Why is that important? Because it's still a CEO's head on the line… the decision is made by a person. It's a decision support tool, not pure automation. And I think it's imperative to distinguish between the two. And where the biggest risk in business comes is when you haven't drawn that distinction," he said.
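The distinction A/Prof. Nicholls draws between decision support and pure automation can be sketched in code. This is a hypothetical pattern, not any organisation's actual system: the model's output is wrapped as a recommendation, and the only path to an effective decision runs through a named human reviewer whose sign-off is recorded:

```python
# Hypothetical sketch of "decision support, not pure automation":
# the AI produces a recommendation; only an identified human can turn
# it into a decision, and that sign-off is recorded for accountability.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    subject_id: str
    suggested_action: str    # what the model recommends
    model_confidence: float  # the model's own confidence, for the reviewer

@dataclass
class Decision:
    recommendation: Recommendation
    decided_by: str          # a human is always on record
    approved: bool
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide(rec: Recommendation, reviewer: str, approved: bool) -> Decision:
    """The only path from recommendation to decision goes via a human."""
    if not reviewer:
        raise ValueError("A human reviewer must be identified")
    return Decision(recommendation=rec, decided_by=reviewer, approved=approved)

rec = Recommendation("client-42", "decline claim", 0.91)
decision = decide(rec, reviewer="j.smith", approved=False)
print(decision.decided_by, decision.approved)  # j.smith False
```

Note that the reviewer can overrule the model (here, declining to act on a 0.91-confidence recommendation), which is precisely the "human decision" the quote insists on.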
"AI regulation needs joined-up policy," said A/Prof. Nicholls. "We need to be able to address data protection and privacy protection concurrently. That is a coherent policy approach across all these issues. Being able to walk and chew gum at the same time is critical and, sadly, very absent."
The original version of this story was published as '…' on ….