
Media contact

Victoria Ticha
+61 2 9065 1744
v.ticha@unsw.edu.au

The modern business world is littered with examples of organisations hastily rolling out artificial intelligence (AI) and machine learning (ML) solutions without due consideration of ethical issues, which has led to very costly and painful lessons. Internationally, for example, IBM is facing a lawsuit, while Goldman Sachs is under investigation for using an allegedly discriminatory AI algorithm. A closer homegrown example was the Robodebt scheme, in which the federal government deployed ill-thought-through algorithmic automation to send out letters to recipients demanding repayment of social security payments dating back to 2010. The government settled a class action against it after the automated mailout system targeted many legitimate social security recipients.

“That targeting of legitimate recipients was clearly illegal,” says Peter Leonard, Professor of Practice in the School of Information Systems & Technology Management and the School of Management and Governance at UNSW Business School. “Government decision-makers are required by law to take into account all relevant considerations and only relevant considerations, and authorising automated demands to be made of legitimate recipients was not a proper application of discretions by an administrative decision-maker.”

Prof. Leonard says Robodebt is an important example of what can go wrong with algorithms when due care and consideration are not factored in. “When automation goes wrong, it usually does so quickly and at scale. And when things go wrong at scale, you don’t need each payout to be much for it to be a very large amount when added together across a cohort.”

Robodebt is an important example of what can go wrong with systems that have both humans and machines in a decision-making chain. Photo: Shutterstock

Why translational work is required

Technological developments very often run ahead of government laws and regulations, as well as organisational policies around ethics and governance. AI and ML are classic examples of this, and Prof. Leonard explains that there is major “translational” work to be done to bolster companies’ ethical frameworks.

“There’s still a very large gap between government policymakers, regulators, business and academia. I don’t think there are many people today bridging that gap,” he observes. “It requires translational work, with translation between those different spheres of activity and ways of thinking. Academics, for example, need to think outside their particular discipline, department or school. And they have to think about how businesses and other organisations actually make decisions, in order to adapt their view of what needs to be done to suit the dynamic and unpredictable nature of business activity nowadays. So it isn’t easy, but it never was.”

Prof. Leonard says organisations are “feeling their way to better behaviour in this space”. He thinks that many organisations now care about adverse societal impacts of their business practices, but don’t yet know how to build governance and assurance to mitigate risks associated with data and technology-driven innovation. “They don’t know how to translate what are often pretty high-level statements about corporate social responsibility, good behaviour or ethics – call it what you will – into consistently reliable action, to give practical effect to those principles in how they make their business decisions every day. That gap creates real vulnerabilities for many corporations,” he says.

Data privacy serves as an example of what should be done in this space. Organisations have become quite good at working out how to evaluate whether a particular form of corporate behaviour is appropriately protective of the data privacy rights of individuals. This is achieved through “privacy impact assessments”, which are overseen by privacy officers, lawyers and other professionals who are trained to understand whether or not a particular practice in the collection and handling of personal information about individuals may cause harm to those individuals.

“There’s an example of how what can be a pretty amorphous concept – a breach of privacy – is reduced to something concrete and given effect through a process that leads to an outcome with recommendations about what the business should do,” Prof. Leonard says.

When things go wrong with data, algorithms and inferences, they usually go wrong at scale. Photo: Shutterstock

Bridging functional gaps in organisations

Disconnects also exist between the key functional stakeholders required to make sound, holistic judgements around ethics in AI and ML. “There is a gap between the bit that is the data analytics and AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes,” says Prof. Leonard. “So, you have to look not only at what the technology and the AI is doing, but how that is integrated into the making of the decision by an organisation.”

This problem exists in many fields. One field in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that are dependent upon the use of advertising technology, which is in turn managed by a technology team. Separate to this is data privacy, which is managed by a different team again, and Prof. Leonard says these teams often don’t speak the same language as each other, making it difficult to arrive at a strategically cohesive decision.

Some organisations are addressing this issue by creating new roles, such as a chief data officer or customer experience officer, who is responsible for bridging functional disconnects in applied ethics. Such individuals will often have a background in or experience with technology, data science and marketing, in addition to a broader understanding of the business than is often the case with the CIO.

“We’re at a transitional point in time where the traditional view of IT and information systems management doesn’t work anymore, because many of the issues arise out of analysis and uses of data,” says Prof. Leonard. “And those uses involve the making of decisions by people outside the technology team, many of whom don’t understand the limitations of the technology and the data.”

Why regulators need teeth

Prof. Leonard was recently appointed to a new NSW Government advisory committee on artificial intelligence – the first of its kind for any federal, state or territory government in Australia – to advise the NSW Minister for Digital, Victor Dominello, on how to deliver on key commitments in the state’s AI strategy. One focus for the committee is how to reliably embed ethics in how, when and why NSW government departments and agencies use AI and other automation in their decision-making.

Prof. Leonard said governments and other organisations that publish aspirational statements and guidance on ethical principles of AI – but fail to go further – need to do better. “For example, the Federal Government’s AI ethics principles for adoption by public and private sector entities were published over 18 months ago, but there is little evidence of adoption across the Australian economy, or that these principles are being embedded into consistently reliable and verifiable business practices,” he said.

“What good is this? It is like the Ten Commandments. They are a great thing. But are people actually going to follow them? And what are we going to do if they don’t?” Prof. Leonard said it is not worth publishing statements of principles unless they are supplemented with processes and methodologies for assurance and governance of all automation-assisted decision-making. “It is not enough to ensure that the AI component is fair, accountable and transparent: the end-to-end decision-making process must be reviewed.”

Technological developments and analytics capabilities usually outpace laws, regulatory policy, audit processes and oversight frameworks. Photo: Shutterstock

Why organisations need tools

While some regulation will also be needed to build the right incentives, Prof. Leonard said organisations first need to know how to assure good outcomes before they are legally sanctioned and penalised for bad outcomes. “The problem for the public sector is more immediate than for the business and not-for-profit sectors, because poor algorithmic inferences leading to incorrect administrative decisions can directly contravene state and federal administrative law,” he said.

In the business and not-for-profit sectors, the legal constraints are more limited in scope (principally anti-discrimination and consumer protection law). Because the legal constraints are limited, Prof. Leonard observed, reporting of the Robodebt debacle has not led to the same urgency in the business sector as in the federal government sector.

Organisations need to be empowered to think methodically across and through possible harms, while there also needs to be adequate transparency in the system – and government policy and regulators should not lag too far behind. “A combination of these elements will help reduce the reliance on ethics within organisations internally, as they are provided with a strong framework for sound decision-making. And then you come behind with a big stick if they’re not using the tools, or they’re not using the tools properly. Carrots alone and sticks alone never work; you need the combination of the two,” said Prof. Leonard.

The Australian Human Rights Commission’s final report on human rights and technology was recently tabled in Federal Parliament. Human Rights Commissioner Ed Santow stated that the combination of learnings from Robodebt and the report’s findings provides “a once-in-a-generation challenge and opportunity to develop the proper regulations around emerging technologies to mitigate the risks around them and ensure they benefit all members of the community”. Prof. Leonard observed that “the challenge is as much to how we govern automation-aided decision-making within organisations – the human element – as it is to how we assure that technology and data analytics are fair, accountable and transparent”.

Many organisations don鈥檛 have the capabilities to anticipate when outcomes will be unfair or inappropriate with automation-assisted decision making. Photo: Shutterstock

Risk management, checks and balances

A good example of the need for this can be seen in the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry. It noted that key individuals who assess and make recommendations in relation to prudential risk within banks are relatively powerless compared to those who control profit centres. “So, almost by definition, if you regard ethics and the policing of ethics as a cost within an organisation, and not an integral part of the making of profits by an organisation, you will end up with bad results, because you don’t value highly enough the management of prudential, ethical or corporate social responsibility risks,” says Prof. Leonard. “You name me a sector, and I’ll give you an example of it.”

While he notes that larger organisations “will often fumble their way through to a reasonably good decision”, another key risk exists among smaller organisations. “They don’t have processes around checks and balances and haven’t thought about corporate social responsibility yet, because they’re not required to,” says Prof. Leonard. Small organisations often work on the mantra of “moving fast and breaking things”, and this approach can have a “very big impact within a very short period of time”, thanks to the potentially rapid growth rate of businesses in a digital economy.

“They’re the really dangerous ones, generally. This means the tools that you have to deliver have to be sufficiently simple and straightforward that they are readily applied, in such a way that an agile ‘move fast and break things’ type of business will actually apply them and give effect to them before they break things that really can cause harm,” he says.