The Business of AI (Episode 3)
Privacy and Responsible AI
The Business of AI Episode 3: How can consumers and citizens safeguard their right to privacy when using AI, and how can businesses stay ahead of the AI privacy compliance curve?
Narration:
The ten biggest data breaches in Australia over the past 16 months compromised the personal details of more than 147 million people and the reputations of the organisations tasked with keeping that data secure. With these very public and very costly breaches, data privacy is emerging as a very big issue for individuals and organisations alike.
Data privacy concerns have been heightened with the widespread use of new and fast-evolving technologies such as artificial intelligence. Today, AI is doing things most of us would never have thought possible 20 years ago. But while its development is rocketing ahead, privacy safeguards are lagging behind.
Many organisations are rushing to adopt AI-based technology, but big companies have already been stung by - and in some cases even banned the use of - AI platforms because of well-publicised leaks of confidential company information.
How can consumers and citizens safeguard their right to privacy and prevent it from being compromised? And how can businesses make sure they are ahead of the curve when it comes to complying with AI privacy standards?
In this episode of the Business of Leadership podcast, host Lamont Tang, Director of Industry Projects and Entrepreneur-in-Residence at AGSM @ UNSW Business School, is joined by Peter Leonard, UNSW Business School Professor of Practice and Principal and Director at Data Synergies, and Professor Mary-Anne Williams, Michael J Crouch Chair for Innovation, Founder and Director of the UNSW Business AI Lab, and Deputy Director of the UNSW AI Institute.
Lamont Tang: My name is Lamont Tang and I'm the Director of Industry Projects and teach data analytics strategy at the Australian Graduate School of Management. In this third episode of our series on the business of AI, we're honoured and thrilled to have with us two esteemed guests from UNSW Business School.
First, we have Professor Mary-Anne Williams, the Michael J Crouch Chair for Innovation at UNSW. Mary-Anne is a distinguished scholar, innovator and world-class expert in business AI. Professor Williams leads the newly established Business AI Lab at UNSW, where she collaborates to grow entrepreneurship and accelerate innovative thinking in Australia. The Business AI Lab is dedicated to researching and developing cutting-edge AI solutions for businesses that improve productivity, create new business models and solve complex problems, with a focus on three areas: AI and people; AI and strategy and innovation; and equitable and responsible AI.
For our second guest, we have Professor Peter Leonard, a data and technology business consultant and lawyer. Professor Leonard is a part-time Professor of Practice across the schools of management and governance and information systems and technology management within UNSW Business School. He was a founding partner of Gilbert + Tobin Lawyers and led its technology and data practice. Interestingly, he also serves on the New South Wales Government Artificial Intelligence Review Committee, which is tasked to review proposed applications of AI, machine learning and automated decision-making across New South Wales government agencies. He's also one of only two business sector members of the New South Wales Statutory Information and Privacy Advisory Committee.
Lamont Tang: Mary-Anne, can you tell us a bit more about yourself and about your work in the Business AI Lab, and perhaps set the stage for today's conversation on AI and data privacy?
Mary-Anne Williams: Hi Lamont. I've been working in AI for more than 30 years. Last year we launched a very exciting new lab in the business school around business AI, responding to the need for business, government and civil society to begin to really understand the value that AI can create, and that they can capture and deliver, by focusing not on how many false positives and false negatives the AI technology is producing, but on business performance, ROI, business outcomes and impact on society.
AI systems are collecting data 24/7. Whenever they're turned on, they're getting sensor data. So there is a lot of risk around what happens with that data and who can use that data, and I guess that's what we're going to be talking about today.
Lamont Tang: Thank you, Mary-Anne. And Peter, can you tell us a little bit about your various roles, and perhaps shed a little light on your involvement in the New South Wales government committees and how you work to address AI and privacy concerns?
Peter Leonard: Yes. Thanks, Lamont. I've worked around technology and data, and the law related to their use, for about 40 years. In the last 10 years most of my work has been in advanced data analytics and its applications and, from that work, in the AI space. What's been interesting about the transition from traditional computation to algorithmic inference to, more recently, large language models and other foundation models is that each of those iterations raises new and different issues and challenges around understanding how to do AI responsibly. And that is a challenge for, well, everyone in the economy, but most particularly for government, because government has access to so much sensitive information relating to individuals.
The New South Wales government set up the AI review committee to basically review all applications of automated decision-making and artificial intelligence in New South Wales government, and those applications come before us to look at and suggest how those applications might be done in a way that is transparent, fair, equitable, understood by citizens, trusted by citizens, and done in a way that the reliance that the government places on the uses of AI and automated decision making is appropriate for the reliability of the technology and the data that's used.
Lamont Tang: Obviously, you guys are both deeply invested in the intersection of data privacy and responsible AI. Mary-Anne, can you tell us a bit more about the Business AI Lab approach to responsible AI development and automated decision-making that Peter just talked about?
Mary-Anne Williams: Yeah, sure. This is one of our three pillars, because there's a lot of AI out there and there's been a huge investment in the AI tech, but the amount of AI that's actually been deployed is very little, and one of the big problems is of course privacy. There are other problems around bias and discrimination, fake information, hallucinating AI and all of that, but privacy is really first base. You can't do anything unless you can build the trust you need to gather the data you need to make better decisions, whether you're a government or a business. Unless you put that first, you don't... It's very hard to get to third base by any means other than first base, and so there's been real movement for more than a decade around privacy by design, where you put privacy first rather than tack it on at the end after you've built your system.
The data is really the critical piece and it's also the piece that really advantages different businesses. Data is the new oil, I'm sure everyone has heard that and it's really true. And it's also kind of a limitation on what we can do in universities as well. We are very good at building the models, but we don't necessarily have access to the data we need to make better models or to even investigate how good these models are or stress test them to see if they really do satisfy the elements that Peter was talking about in terms of responsible AI.
And one of the really big ones for me is contestability. If I get an adverse outcome from an AI, like my loan application has been rejected, I would like to know why. I need to know that the data that was used is actually correct. Maybe that data is outdated, maybe that data's incorrect, and I need a way to do that, and so we have to adopt and deploy AI where that is possible, and that turns out to be a huge technical challenge, not impossible, but it means that business and government have to actually invest in that, otherwise it's not going to happen organically, and for that reason, there are all kinds of barriers to deploying real AI in the real world.
We've seen how ChatGPT was released by OpenAI before they could really determine if it was safe. It was an experiment. They were testing that technology in the wild to see what would happen, and there are examples of other companies doing that where they've had to actually pull that AI off the shelves, so to speak, because it turned out to be dangerous.
Lamont Tang: Peter, do you want to build on that?
Peter Leonard: I just wanted to give a sort of practical example of how ChatGPT is already being used in a work environment and the kinds of privacy and confidentiality issues that can come up. And this one is a real-world example. So health professionals in Australian hospitals today have to, amongst other things, write up their patient notes in the course of the day and then use those notes to prepare, amongst other things, discharge summaries for when a patient is leaving the hospital, and one of the things that ChatGPT does very well is summarising unstructured information, like the notes in a medical record, and from that can produce a pretty damn good first draft of a discharge summary. That's perhaps not an issue in terms of the reliance of the health professional because one would hope that the health professional would read both the summary in the notes and the discharge summary very carefully to see whether ChatGPT had hallucinated or otherwise produced an unreliable summary, but just think for a second as to what's going on here.
To actually upload that information to get ChatGPT to do the summary, effectively you've disclosed a patient record into the ChatGPT database, and until very recently, we didn't know much about how ChatGPT might retain and use that information in other ways. So there's a classic example of the kinds of patient confidentiality and privacy issues that can arise through the use of large language models such as ChatGPT, just through the kinds of information that are fed into the system in order to prompt the system to do something to help a human in their everyday work environment.
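A minimal sketch of the kind of safeguard this example points to: stripping obvious identifiers from a clinical note before it is ever sent to an external model. The regex patterns and the sample note below are illustrative assumptions only, not a clinically validated de-identification pipeline.

```python
import re

# Illustrative patterns only -- real de-identification must also handle names,
# addresses, free-text identifiers and much more.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(note: str) -> str:
    """Replace obvious identifiers with placeholders before the note
    leaves the hospital's own systems (e.g. before prompting an LLM)."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label} REDACTED]", note)
    return note

note = "MRN: 10482933. Admitted 03/04/2023. Contact jane.doe@example.com, 0412 345 678."
print(redact(note))
# [MRN REDACTED]. Admitted [DATE REDACTED]. Contact [EMAIL REDACTED], [PHONE REDACTED].
```

Only the redacted text would then be used as the prompt; the original record never leaves the organisation's own systems.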
Lamont Tang: And if we might just rewind for our audience: you mentioned a few things like hallucinations, and I think we've talked a little bit about large language models. Mary-Anne, do you mind rewinding and just helping us outline what these large language models are?
Mary-Anne Williams: Just riffing off what Peter was saying, because many of those issues are really related to having data in the cloud. But I think what's important to understand here is that because AI actually ingests all this data and uses it to generate new data, it makes a material difference to privacy, and I just wanted to emphasise that because it is actually at the heart of this whole discussion. Foundation models, as they're called, are a type of AI or machine learning model that has been trained on vast amounts of data, with billions of examples, like the entire internet that's accessible out there.
Previously, we would train an AI with, say, images and we would tell the AI when there was a cat in that image or not, and that's called supervised learning, and what is really different about the AI behind generative AI is that it is self-supervised, and that is where all the power is coming from. That is why it can just ingest millions and billions of data examples.
So these models are able to use self-supervision to learn, and there is almost no limit to how much data they can ingest. And they're able to create new data that no one's ever seen before. That's where the generative idea comes from. So they ingest a bunch of examples and then they produce new examples that no one has seen.
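A toy illustration of the self-supervision described here: in a text corpus, every word's successor is a free training label, so no human annotation is needed. This counts simple word pairs rather than training a neural network; it only shows where the "labels" come from.

```python
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Self-supervision: every (word, next word) pair in the raw text is a training
# example, labelled by the text itself rather than by a human.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen twice after 'the'
print(predict_next("on"))   # 'the'
```

Large language models do the same thing at vastly greater scale, with neural networks instead of a lookup table, which is why they can ingest billions of examples without anyone labelling them.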
These models do not need a specific context, so they can work with any kind of data, really. It can be textual, but it can be about banking, it can be about government services, it can be about university students, for example. There's also no limitation on the modality of the data, whether it's text or images, so it's what we call multimodal, and this new generation of AI can actually take text and images together to produce new data, new ideas, new suggestions. There's obviously a lot of concern around the potential for misuse and for biases in the training data that can lead to privacy issues, but also discriminatory types of recommendations. And we've seen this play out in other areas and in previous generations. We've seen hand dryers in bathrooms only work with white hands. There is a real risk of data being biased, or being used to design new systems that don't take everyone into consideration, and there are real issues around inclusiveness and diversity in addition to the privacy problems.
Lamont Tang: Peter, would you like to riff on that?
Peter Leonard: Yeah, I'd like to move the discussion to images for a minute because I think they're a particularly interesting area. There are two major suppliers of stock photographs in the world, Shutterstock and Getty Images, and Getty Images recently launched a legal action in the US against Stable Diffusion, which is kind of like the ChatGPT of the image space. Getty is basically claiming in the US court that Stable Diffusion has breached its copyright in 12 million images taken from the Getty website.
Now, the clever thing that the AI in this particular diffusion foundation model can do is look at an image, look at the labels on an image, and from that generate a new image. So it can look, for example, at an image that is labelled a NASA astronaut, and it might see an image of a Russian cosmonaut that is not labelled, but then it can work out that it looks like an astronaut, so we'll put a label against it, or it might see an image of whatever the Chinese call their astronauts, and know to attach that label as well. And you can then prompt Stable Diffusion along the lines of, "Give me a photograph of an astronaut riding a horse," and it will produce a pretty good photographic image of an astronaut riding a horse. It mightn't get the positioning of the hands just right, it mightn't look like the astronaut's leaning forward the right amount, but then the human can instruct Stable Diffusion to, as it were, fine up the image to give a good final image.
In that case, is that a breach of the copyright of the photographer, or of Getty Images, who owned the copyright in the original photograph? The argument that Stable Diffusion is mounting is that it's an entirely new image, and yes, it was derived from a photographic image, but that photographic image was one of literally millions or billions that were analysed to then cause Stable Diffusion to create an entirely new image. And if you think about it, it's a little like when you sit down to write a novel: you can't unthink or forget everything that you've ever read before. You will be influenced by the style of Shakespeare or Raymond Chandler or Agatha Christie or whoever is your favourite author, because they are influences on how you think and how you write. One of the key questions now coming up with these kinds of models is: should they be regarded as different, or are they just absorbing the zeitgeist of all of the information on the internet, including those 12 million Getty images that they analysed?
Lamont Tang: Thank you, Peter. And so where are you on that fine line, Mary-Anne?
Mary-Anne Williams: Technically there's a concept of fair use, and that is usually determined by the volume that you might be reusing and the purpose, but also the similarity, which is something that's actually surprisingly easy to measure. "Good artists copy, great artists steal" was Picasso, and it's my favourite quote, and Steve Jobs picked up on that. He borrowed from Sony, from Stanford, from everywhere, and he packaged it, and he didn't just copy, he built on it and made it better. That is at the heart of innovation and ingenuity, and societies and businesses need innovation. Without innovation, we're very static and we can't solve problems, and challenges would overwhelm us because we wouldn't have a way to solve them. I mean, just consider vaccines for COVID. Science and technology drive solutions to the challenges facing humanity, and that is one of the reasons why generative AI is so important to understand and consider, because it is undermining some very human abilities and expertise.
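Similarity being "surprisingly easy to measure" usually comes down to comparing embeddings. A sketch, assuming each work has already been turned into an embedding vector by some image or text encoder, is cosine similarity between the two vectors; the numbers below are made up purely for illustration.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors:
    close to 1.0 means very similar, close to 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of an original photograph and a generated image.
original = [0.12, 0.88, 0.45, 0.10]
generated = [0.10, 0.80, 0.50, 0.05]
print(round(cosine_similarity(original, generated), 3))  # ~0.99, i.e. very similar
```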
We can use technology to solve a lot of the problems, and then we need to understand, "Well, what part of the problem can't technology help us with?" There are deeper problems that really confront us as humans in business and just in society generally. And I don't know if anybody's noticed, but a lot of these images being generated by Stable Diffusion and DALL-E kind of look the same. They have a very similar look and feel.
How long is it going to take before they sort of plateau? Because if we've put all the human artists out of business, they're no longer contributing to the database, they're no longer making it more interesting, no longer taking us in new directions. And maybe the next generation of AI can actually be more creative instead of sort of integrating or extrapolating what's already out there. I mean, will there be a Picasso AI? Will there be a Rembrandt AI?
We need to be asking the right questions firstly, and I'm not sure we're there yet. We need to entertain the possibilities and really use our own imagination, and I think this is something that society hasn't really invested in. I mean, when was the last time you really heard an original idea?
Lamont Tang: Thank you, Mary-Anne. Peter, can you tell us a little bit more about your experience serving on the New South Wales Government Artificial Intelligence Review Committee? And to talk to the point about whether we should pause, or to what extent we should slow down, maybe you can speak to some of the issues that are coming up now, such as the proposed pause on AI.
Peter Leonard: Let me start with the pause, because I think there's zero chance of a pause, but we can move forward carefully and deliberately, and that's really what the New South Wales Government's AI Review Committee has as its mantra: we don't want to stop the uses of AI, but we want the uses of AI to be responsible, and that requires a reflective view and input from a range of people around a table, including lawyers, ethicists, data scientists and AI experts, who can each bring a perspective on what being responsible means.
You need a range of skills to evaluate and give input as to what responsible is, and it's all around how you do it, what kind of technical, legal, operational safeguards and controls you put in place about how a process is undertaken, whether the inputs have been adequately evaluated for quality and reliability, and whether the outputs from the analysis that you're doing are properly curated and presented with appropriate warnings as to their safety and reliability that whoever it is that will be using those outputs are likely to understand.
And where you get into the really big challenge with these new large language models and other users of foundational models is that there's much less transparency around what is going on.
Lamont Tang: Thank you, Peter. And what about you, Mary-Anne?
Mary-Anne Williams: Do I think a pause is realistic? Probably not. But we do need some time to really think about and digest, and as I said earlier, identify the questions we ought to be asking, because if you think about what's happened over the last six months, the world woke up on the day ChatGPT was released to discover that AI had gone ahead in leaps and bounds in ways they had no idea about.
They made it free and they made it easy to use, a little search bar, and it has a phenomenal capability of having a conversation. You can't have a conversation with Siri. You can ask it questions, but it can't process the context within which you are asking those questions. So if you say, "Siri, turn the bathroom light off," and then say, "Oh, and the one in the kitchen," it won't know what the one in the kitchen is, even though you've been talking about lights. Now, ChatGPT is able to have a pretty impressive conversation about just about anything on any topic, and that is a truly groundbreaking thing.
The fact that all of society really discovered this at the end of last year is kind of telling. They were very much in catch-up mode. And even people like me who have been working in the area for some time, and the people who built the tech, did not know what would happen next. They didn't know if it would go racist, they didn't know if people would ignore it, they didn't know how people would use it, they didn't know how humans would hack it, what sort of problems they would use it to solve. Nobody knew that. And we're still learning things every day about its usage and the things that it can do, and we're still processing that.
It doesn't understand the words in the sentences, it's just found patterns that allow it to predict missing words or the next word, and that is what's surprising: that it can reason, it can solve actual reasoning problems, just by looking at the relationships between words to that degree. And we've also seen big jumps between the various versions, from version 3 to 3.5, which is used in ChatGPT, and more recently GPT-4. There is a big jump between the different versions. So you can expect ChatGPT 5 to be really much, much better than 4, and I don't think we are ready for that. I really don't. I think we need to digest where we're at right now. Everyone in society needs to think about, "What does this mean?" and have the conversations that many engineers and others have been interested in for the last couple of decades. I think it was recognised early, even in the 1950s when the term artificial intelligence was coined, that this technology, the speed with which computers can process data and information, is going to fundamentally change everything, and we are here at that moment.
I think it's a good idea to have a pause and not rush into solutions. That's when danger really happens. We need to pause. We need to think about what is the implication and the impact of this technology on every single thing. I guess the other side of this is if GPT and other generative models of this kind could solve cancer tomorrow, well, of course we wouldn't want to pause it. We would want cancer solved tomorrow because that would save lives, but there's no inkling that these technologies are going to have some sort of breakthrough that is going to save a lot of lives, or add tremendous value that we can't wait for a few more months for.
Lamont Tang: Thank you, Mary-Anne. Peter, do you want to add to that?
Peter Leonard: Let me give you one example. Goldman Sachs research put out a report that said that natural language processing generative AI applications alone, so not looking at images or music or other foundation models, but ChatGPT-type things, could drive a 7 per cent increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period. Now, if there's that much value that is to be generated... And Goldman Sachs can be a little bit hype-oriented, but generally they're not a hype purveyor.
If there's that much value to be generated, it will happen. And I think the alternative approach is what the UK government announced, which is that they're going to put a hundred million pounds into the establishment of an AI experts group with a particular focus on generative AI applications, and work towards the UK being a global leader in generative AI, but done responsibly, and I think that's where the world will go. Sensible governments will think, "How do we sponsor generative AI to be done responsibly? How do we take a lead in this?" And hopefully, by example, lead or drag private entities into improving their own practices.
Lamont Tang: Thank you, Peter. And on that point of what we can do as governments, universities and businesses, maybe I'll toss this to you, Mary-Anne. What can we do in Australia? Instead of just being implementers, how can we become leaders, or where do you see us being leaders?
Mary-Anne Williams: Australia can access these models; they are all sort of published. There's nothing really particularly special about them. We need to build our capability and not just think about talent in terms of engineers and more engineers or computer scientists. ChatGPT has really taught us that it's not about the tech, it's about how the tech can be used to effect change and drive transformation. That's where we want to be. We can lead in some way, and I personally think that we have a lot going for us.
Australia's a fairly harmonious society, and I think if you look at all of the elements you need to be a leader in AI, we have them, and in fact, I believe we have the most coherent set of them. We've got an awesome university system, with 40 of our universities in the top 500. Every single university in Australia is awesome, and we have fantastic industry, we have robust law, and we trust each other as a society. We have a very harmonious society relative to others, and so I feel like we can really integrate all of these elements to put ourselves in a position where we can not just compete, but lead.
Lamont Tang: Thank you, Mary-Anne. So what jobs do I tell my kids or our cohorts to gravitate towards or avoid?
Peter Leonard: So the flavour of the week is prompt engineers. We will increasingly be using AI in the development and coding process, and that then means we need to think very carefully about the right skill sets for the humans who instruct the AI and then evaluate what the AI produces for reliability and safety.
Lamont Tang: Thank you, Peter. And what about you, Mary-Anne?
Mary-Anne Williams: So there's a little adage going around that I love, and that is, you won't be replaced by AI, you will be replaced by someone who can use AI.
If I was deciding what to do if I was going to uni, it would be ethics and understanding human values. We need to go deep on that. What does it even mean, and what is the relationship between ethics and the law in practical terms when it comes to technology?
And then what about just training the AI? How do we set it up? I mean, we do tend to forget the human contribution to these AIs. And security: we talked a lot about privacy, but unless the data's secure, you don't even have any privacy.
And then there's the generative AI data manager and product manager. And then of course, there's just the policy advisory. That's going to provide a lot of new work for a lot of people, and this is an area where we need diversity. It's critical to be inclusive and have diverse teams looking at these issues. And then of course, there's education. We're going to be doing a lot of training and helping to upskill every part of society and business.
Lamont Tang: Thank you, Mary-Anne. And to close, how do you keep up with this fast-moving space?
Mary-Anne Williams: LinkedIn is a tremendous source. Look at the people you are following, make sure there are a lot of women in there, make sure there are a lot of people from diverse places in business and society. The first one would be the CTO of OpenAI; she's the one who really kicked it off by creating a database that the first generation of highly successful machine learning algorithms used to get better, which allowed us to really measure progress. That's a massive contribution, and she is a very strong voice around human-centred AI.
Lamont Tang: Great, and what about you, Peter?
Peter Leonard: I read a number of newsletters. One of my favourites is Azeem Azhar's Exponential View, which I think is fabulous. I always look at The Economist and the Financial Times, and The Information, which comes out of Silicon Valley and is a really interesting publication on what's happening in the Valley. I think you need a variety of sources and a variety of perspectives.
Lamont Tang: Excellent. Thank you, Mary-Anne and Peter, for sharing your insight and perspectives on AI and privacy. We've discussed many ways that artificial intelligence is transforming the world and impacting us all on a professional and personal level. We look forward to building a better and safer future together. You can connect with Mary-Anne, Peter and myself on LinkedIn and subscribe to businessthink.unsw.edu.au for the latest news on the UNSW AI Institute, the UNSW Business AI Lab, and more.
Narration: Thank you for joining us for AGSM's The Business of AI: AI and Privacy.
Want to learn more about our research and work in the area? Check out the show notes on our website to get more information on the research discussed on today's podcast. Just search The Business of AI podcast online.
Or, drop us a line if you have feedback at brand@agsm.edu.au
New to the podcast? There's a whole catalogue for you to explore. From mental health and AI to the use of AI in finance and banking, you can check them all out today.
Please share, rate and review and subscribe to AGSM's leadership podcast on your favourite podcast platform and look out for future episodes.
In the meantime, follow AGSM at UNSW Business School on LinkedIn and Facebook for more industry insights for an accelerating world or find us at