
About the episode

Do you ever ask your virtual assistant about the weather? Or maybe play your favourite playlist for your trip to work? As humans, we interact with AI more often than we think, and that’s only going to grow. Some analysts say that within the next two years, at least 35% of all customer interactions will be handled by a virtual assistant or AI. 

For business, this means greater efficiency and better decision-making. But at the same time, there are some frightening possibilities around AI: philosopher Toby Ord argues it poses an even bigger threat to our existence than future pandemics, climate change and even nuclear war.

In this episode, we’re unpacking The Business of AI, and finding out how business and government can balance the benefits with the potential harms of artificial intelligence. 

Professor Nick Wailes, Senior Deputy Dean and Director AGSM, is joined in conversation by Dr Catriona Wallace, technology entrepreneur and Founder & CEO of Ethical AI Advisory. Dr Wallace explains how to create governance frameworks around AI that work toward a fair and equitable future.

Finally, we hear from Dr Sam Kirshner, Senior Lecturer in the School of Information Systems and Technology Management at the University of New South Wales Business School. Dr Kirshner looks at what businesses that are behind the curve on AI can do to catch up.

Speakers:

  • Professor Nick Wailes, Senior Deputy Dean and Director AGSM
  • Dr Catriona Wallace, Founder & CEO of Ethical AI Advisory
  • Dr Sam Kirshner, Senior Lecturer in the School of Information Systems and Technology Management, UNSW Business School.

    Narration:

    How many times a day do you interact with artificial intelligence? A lot? A little? No idea?

    Analysts are saying that within the next two years, at least 35% of all customer interactions will be handled by a virtual assistant or AI. In business, it’s creating efficiencies and facilitating better decision making.

    But at the same time, there are some frightening possibilities around AI. Philosopher Toby Ord says it has the potential to be one of our most significant existential threats: an even bigger threat to our existence than future pandemics, climate change and even nuclear war.

    Welcome to The Business of AI, an episode in ‘The Business of…’ podcast brought to you by the Australian Graduate School of Management at the UNSW Business School. 

    In this episode, we’ll explore how business and government can balance the benefits with the potential harms of AI. How can we better understand it, and how can we create governance frameworks that work toward a fair and equitable future?

    Professor Nick Wailes, Senior Deputy Dean and Director AGSM, is joined in conversation by Dr Catriona Wallace, technology entrepreneur and Founder & CEO of Ethical AI Advisory.

    We’ll also hear from Dr Sam Kirshner, Senior Lecturer in the School of Information Systems and Technology Management at the University of New South Wales Business School.

    First up: Nick Wailes in conversation with Dr Catriona Wallace.

    Nick Wailes:

    Well, Catriona, lovely to touch base with you again, and welcome to the podcast. Maybe we could start just by you talking a little bit about your background. So, I know you've got a fantastic PhD from AGSM, and you're adjunct faculty there, an adjunct professor with AGSM, but tell us about your other roles, because you've got a number of different roles.

    Catriona Wallace:

    I do, Nick. Thanks. Yeah, great to be speaking with you. So, I have an entrepreneurial path in addition to my academic path, and the entrepreneurial path has been around building companies. The first company I built was a market research firm called ACA Research, which I co-founded, then a human-centred design firm called Fifth Quadrant; both of those businesses have now been going 15-plus years. And then the one I'm probably best known for is Flamingo Ai.

    I'm also now sitting on a bunch of boards, some very interesting boards. I'm the executive chair of Boab AI, a venture capital fund backed by Australia's largest venture capital fund for startups, and we give money to startups and scale-ups in AI. I sit on the board of the Garvan Institute, and my role there will be to help transform Garvan's data science, machine learning, and AI capability. And I sit on the board of Reset Australia, which is an interesting venture funded by the founder of eBay; we work with politicians to educate them on the harms of AI. And then probably the last relevant thing to mention, Nick, is that I am working with two of your UNSW professors, Sam Kirshner and Richard Vidgen, and we are writing a book called SurvivaI.

    Nick Wailes:

    Well, Catriona, it sounds like you're the perfect person to give me an introduction to AI because I think, like a lot of people, I need to check that I actually understand what AI is and that I know what I'm talking about. So, have you got a simple answer to that question: what is AI?

    Catriona Wallace:

    Yeah, super simple. So, AI is any software that can mimic or replicate human intelligence. That's kind of the basis of it, which means it gets a very broad definition. But a simpler way is to look at the components of anything that is artificially intelligent: it has data, it has algorithms, it has an analytical capability, a decision-making capability, and then an automation capability. And the core, I guess fundamental, component we talk about now with most artificial intelligence technologies is machine learning, and machine learning is simply a software programme that is able to learn of its own accord without needing to be explicitly programmed or reprogrammed by humans. So, it'll learn. With every task that it does, it'll get smarter and smarter and smarter.
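
    To make that distinction concrete, here is a minimal, purely illustrative sketch of "fitted to data rather than explicitly programmed", using scikit-learn and synthetic data; nothing in it comes from the episode.

```python
# Minimal sketch: the model learns a decision rule from examples instead of
# being explicitly programmed with one. Synthetic data, for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# As the model sees more examples, its predictions tend to improve,
# with no human rewriting the program in between.
for n in (50, 500, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {acc:.2f}")
```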

    Nick Wailes:

    How pervasive is AI? Because we've heard talk about it, but how much is it impacting our daily lives as individual consumers, but also in organisations?

    Catriona Wallace:

    Yeah, in a profound way that most people don't realise, Nick. So, AI is the fastest-growing tech sector in the world currently. This year, it was valued at US$327 billion. It will grow in the next three to four years to around $500 billion worth of value. So, it is getting a huge amount of investment. Analysts like Gartner predict that within the next two to three years, 80% of all foundational components of most technologies will have an AI underpinning. And in this book that I'm writing with Sam and Richard, we have analysed all the AI touchpoints that humans have, and there are 18 categories in which you or I, Nick, may bump into AI in a given day. The average middle-aged person, such as myself, will interact with AI around 28 times a day. The average teenager will interact with AI around a hundred-plus times a day.

    Nick Wailes:

    Catriona, I'd like to drill into the enterprise uses of AI because I think that's really interesting; a number of core enterprise processes are being impacted by AI, and in this discussion I'd like to explore what's good about AI before we turn to some of the challenges and the things that are potentially bad about it. So, from an enterprise point of view, what are some of the applications and some of the benefits that enterprises are getting out of AI?

    Catriona Wallace:

    Yeah. Well, if we start sort of at the high level, the main benefit of AI is efficiency, and that's the automation of tasks that were previously manual or tasks that were done by inferior technology. So, efficiency is a big one, and then there are analytics and better decision-making. Those would be the three core benefits of AI at this stage. And so, you'll notice there, Nick, I'm not saying, "Wow, it's fabulously improving customer experience or the human experience or employee experience."

    It's not that just yet. It is definitely around efficiency and better decision-making. And then we look at what the main applications are and where AI is mostly used. So, it is in analytics: using big data, using algorithms, using AI to do analytics within the enterprise. And then marketing and customer experience is actually the biggest area we're seeing AI being deployed in. So, this is predominantly the personalisation of everything, as we call it, which is using machine learning and algorithms to help enterprises better understand their customers' intentions, and then to deliver to them a marketing or a sales opportunity for them to buy. So, this is predominantly where we're seeing the use cases for AI.

    Nick Wailes:

    Okay. So, all of those things, efficiency, analytics, better decision-making, for someone running a business, they're the types of things I'm interested in. I'm really fascinated that you sort of implied AI is not quite there yet for fundamentally shifting the human experience or the customer experience, but could you look forward a little bit and tell us what you think that might look like and how AI might play that role?

    Catriona Wallace:

    Yeah. Well, it's definitely going to impact the customer experience and the employee experience, even though, if we listed the top 10 use cases of AI, employee experience is probably around the 10th. That makes me a bit sad because I think there are great things to be done there, and in the AI business we built at Flamingo, we built a non-biological brain, or subject-matter-expert robot, that did assist employees to access knowledge. So, I think there are definitely good use cases for that. With regard to customer experience, again, if we look at what analysts like McKinsey and Gartner are saying, within the next two years at least 35% of all customer interactions will be handled by a virtual assistant or a robot or AI of some sort.

    And now the advantage to that is it's likely to be fast, 24 by 7, the robots won't go on annual leave or sick leave. They'll always be there. They won't complain. They won't have any other human-like challenges. And so more and more, we will see the customer experience becoming automated using virtual assistants and robots to do that customer experience. And it's not yet where it's an excellent experience, but I do believe we will go a long way towards these machines really being able to understand us better than we understand ourselves, to anticipate our needs before we even know we have a need and to be able to curate or deliver really great offers or products or services to us.
    We'll also see it in healthcare. So, this is where I think the big exciting field is, and where we're actually seeing the majority of investment in artificial intelligence going. So, we will see vast leaps forward in disease diagnosis, even in the manufacturing of vaccines, these sorts of things, or in establishing protocols to figure out where diseases are likely to spread, analytics across the world so we know and can track disease. So, we will see some fabulous, great steps ahead in healthcare, which is very exciting.

    Nick Wailes:

    And that's really a function of two things: being able to handle huge amounts of data, and also being able to learn from it. It's bringing those two together that allows those sorts of diagnostic breakthroughs, and those types of advances will really be driven by those two things, won't they?

    Catriona Wallace:

    That's right. It is that. It's the ability of the machines to be sometimes up to a trillion times faster than a human in analysing and interpreting data.

    Nick Wailes:

    So, that's a sort of exciting picture of the future: businesses will be more efficient, they'll be able to provide analytics, they'll make the customer and employee experience better, and they'll even be able to cure all these diseases. So, you're making AI sound like a fantastic thing, but I know from your work and some of the things you've said, there's also a potential dark side to AI, and that's what I'd like to explore now. What are the things that we should be worried about, or what are the challenges with this type of general-purpose technology, which is becoming so pervasive?

    So, maybe you could just start us off with that. What are the challenges or what are the things that we should be concerned about to start off with?

    Catriona Wallace:

    Yeah, it's a really important topic for us to talk about, Nick. So, I believe, and I think most of the big AI thinkers in the world share the same view, that there will be a very light side to AI, and we've just talked now about some of those great benefits, but it will have an equal dark side. And the dark side is largely because this type of technology is very difficult to understand, to explain, and also to control, and by the nature of what we've just talked about, the fact that it can learn of its own accord means that often the humans who've programmed it will not be able to understand the machines over time as they learn and make their own decisions, and eventually they won't have any need for their human masters. That's one thing, just because the machines are so smart.

    The second thing is there's very little regulation, legislation, rules, or guidelines that provide a framework for those who are building AI or those who are deploying AI to work within. And so, pretty much, it's an unregulated space, which is in itself frightening. There's a lot of great work done by our recent former Human Rights Commissioner, Ed Santow, on putting together the Australian Human Rights Commission's guidelines around algorithmic decision-making, and I worked very closely with Minister Karen Andrews' office in putting together Australia's ethical AI principles, which were launched in November 2019.


    So, there are definitely frameworks and recommendations, but there are no hard rules or laws yet that really govern this technology, and the technology is very little understood. And in Australia, Nick, I'm afraid to say that we are not particularly advanced in this field. We have about a tenth of the per-capita funding going into AI compared to the US: Australia's level is about $2 per capita, and the US is around $20 per capita. So, we are quite a way behind, and we are also not particularly mature as a country with regard to what we call responsible AI, of which ethical AI is a component.

    So, the dark side of it comes from this incredibly powerful software being used by bad actors, and that could be in warfare, in bio-engineering disease, in the manipulation of populations and elections as we've seen over the last few years, or it could come from the machines themselves not being aligned with human values and starting to behave on their own in order to perpetuate their own goals.

    Nick Wailes:

    There's a lot in there to unpack, and I thought I'd start with something many people would have seen in the media: the concerns about bias in AI, and particularly issues about bias around race and gender, and some of those types of things. So, you talked about us not understanding what's in the machine, but isn't a lot of that a consequence of how those machines have been trained and the datasets that they've been given?

    Catriona Wallace:

    Yes, absolutely, the data is the first component, and this is really where it all starts. The datasets that are used to train the algorithms have to be without bias and without discrimination. They need to be fair. And the challenge we've got with that is that a lot of the datasets being used to train the algorithms that run the AI are datasets of historical data. So, it could be financial data or health data, and within the history of these datasets are already society's existing biases. So, it might be the absence of women or the absence of minority groups and a skew towards a certain part of the population.

    Now, a great example of this, Nick, which was very well publicised, was when Apple released its Apple Card in partnership with Goldman Sachs about 12 months ago, and their algorithm would have been trained on historical data. They went out through social media to offer the Apple Card to people who'd like to apply for it, and in many cases a husband and a wife applied with exactly the same financial details to get the Apple Card. And interestingly enough, Steve Wozniak, the co-founder of Apple, did that, and the AI came back with Steve getting 10 times the credit limit his wife was given, because the machines had been trained on historical data that treated women as a higher credit risk than men.

    So, this kind of blew up and went nuts on social media, and it started to show the world what we have the potential to do with AI: if we use historical datasets that have bias built into them, we will just hard-code, at scale, all of society's existing ills. And with no regulation around that, this is already happening.
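
    As a purely illustrative sketch of that mechanism (not Apple's or Goldman Sachs' actual system), the snippet below fits a model to invented, historically skewed credit data and then scores two applicants with identical finances; every column and number is made up.

```python
# Illustration only: a model fitted to historically skewed data reproduces the skew.
# All data, features and numbers are invented for this sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(90_000, 20_000, n)
is_female = rng.integers(0, 2, n)
# Historical limits: for the same income, women were systematically granted less.
past_limit = 0.3 * income - 8_000 * is_female + rng.normal(0, 2_000, n)

model = LinearRegression().fit(np.column_stack([income, is_female]), past_limit)

# Two applicants with identical finances, differing only in the gender flag.
husband, wife = [120_000, 0], [120_000, 1]
print(model.predict(np.array([husband, wife])))  # the wife's predicted limit is lower
```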

    Nick Wailes:

    How do we address that? You've been working on these principles of ethical AI. What are some of the solutions to address that type of problem?

    Catriona Wallace:

    Yes. So, there are good guidelines available now for enterprises or technology developers to start to look at the core principles of doing AI ethically or responsibly, and there are excellent resources on the World Economic Forum website, and certainly this is the work that my company, Ethical AI Advisory, does. But I can step through the eight core principles, Nick, which might be interesting, and when you hear them, they're definitely related to AI, but you could also go, "Look, this could be related to technology generally, or it could be related to business generally."

    Nick Wailes:

    I think it would be really useful to go through those because, for all of us, we're going to confront a situation where we'll have AI deployed in our organisations, or we'll be thinking about it, and we're going to need some sort of framework to help us make decisions and know where we should be paying attention. So, if you can take us through them, that would be great.

    Catriona Wallace:

    Yeah, right. And the thing to keep in mind is that the purpose of this framework is to help organisations avoid unintended harms. So, that's very important: to help organisations avoid unintended harms. So, the first principle is that AI must be built with the benefit of humans, society, and the environment in mind. It must not come at a cost to those three groups. The second principle is that AI must be built with human-centred values in mind. Third, the AI must be fair; it must not discriminate. Fourth, the AI must be reliable and safe. Fifth, it must adhere to privacy and security requirements.

    Six, and this is interesting, and this is where I'm going to introduce Mrs. Wozniak to the conversation, Nick, so we can really make this come alive. Before we get to the sixth principle, let's go back to Mrs. Wozniak. She's got a tenth of the credit her husband has, and she's pretty annoyed. So, the sixth principle is contestability. If AI has made a decision against a person or a group, then that person or group must be able to contest the decision. So, Mrs. Wozniak goes, "Hey, I'm really unhappy about that. I'm going to contest this because I think I've been unfairly or unjustly treated." So, contestability. Enterprises must then have a contestability path for, say, consumers who in this case have been unfairly treated.

    Now, if you think about this, Nick, that's one Mrs. Wozniak, but that application went global. Let's look at it at scale: all of the other women who in this case felt unfairly treated turning up to Goldman Sachs and Apple saying, "Right, we need to contest this. We're unfairly treated. What is the process?" The organisation is going to have to handle that. So, that's contestability, number six.

    Then it gets trickier. Number seven is that the AI must be transparent and explainable. And so, those of your listeners, Nick, who know a bit about technology know that it's really difficult to open these boxes up and have people look in and understand the algorithms. It's hard even for the programmers to do that. Traditional AI is what we call black-box AI, which is essentially unexplainable AI, and what we're looking for in the future is organisations building white-box AI, so you can take the lid off, look in, and actually see how the algorithms are working. So, transparency.

    And then explainability. So, not only do they have to show it, the company would have to be able to explain to Mrs. Wozniak what happened: this is how it made its decisions. Now, again, anyone who knows anything about machine learning knows that that's enormously difficult, because as these machines learn and adapt with each task, they kind of take a path of their own, and sometimes it's enormously difficult for an organisation's data scientists to explain what their algorithm has done.

    And then the last one is accountability. So, if that organisation has caused some damage or unfairness to Mrs. Wozniak, then they need to be accountable for that, and the vendor who provided the technology that did the harm also needs to be accountable, and likely there needs to be some reparation. And these last three, Nick, are when the chief operating officers of a lot of the companies I work with start getting pretty nervous.

    Nick Wailes:

    Yeah, I bet, because typically you'd be going to a vendor and saying, "We're looking for a deployable solution around this," and they would have sold it to us on increased efficiency, greater customer satisfaction, all these types of things. And then all of a sudden you have these problems, and as an organisation, knowing what to do or how to resolve some of these things will be very challenging.

    What's the advice that you would be giving to leaders and managers in businesses that are looking to unlock the benefits of AI, but mindful of these challenges in the future?

    Catriona Wallace:

    Yeah, I think it needs to be both a top-down and a bottom-up approach. And so, if we start with top-down, this is around the board in particular being educated on artificial intelligence and on responsible and ethical AI. So, it's very, very important that the board understands and knows that there are frameworks and guidelines for doing this. My experience is that the higher we go up in an organisation, the less they know about this, but it's definitely something that should be at the board and executive team level. And I think this is such a great need, particularly in Australia, that we've just designed a couple of programmes for board members and for executive team members to take them through what ethical AI is, what responsible AI is, and how to, again, reduce the likelihood of any unintended harm.

    And then we'd go down to the engineers. In speaking to some of the engineers, they tell us that often the responsibility for doing ethical AI is pushed way down to them, that they are required to know how to code ethically or to make sure that the datasets have no bias, and that they don't believe senior management really has any idea about this.

    Now, that to me is a very dangerous situation: delegating your ethics and responsibility to your engineers, who are very well intentioned, I'm sure, but who are also under huge pressure to be finishing code, shipping product, doing things efficiently, working under their agile planning frameworks, et cetera, which may leave them little time to think about how they would do it ethically. And so, we partner with the Gradient Institute, who have training programmes for data scientists. So, we put the data scientists in a room for two days and they're trained on how to handle data and algorithms, how to do tests, et cetera, to make sure that they've got as much of this ethical component built into how they code as possible.

    Nick Wailes:

    But it does imply a role for two things. One is a role for government or some sort of regulatory framework, and I know you've been very involved in conversations around that. So, maybe I could get you to talk about that, and then I'd like to come back to the role of management and governance in this. So, on the role of government and the role of regulation, how do you see that playing out?

    Catriona Wallace:

    Yeah. So, government has to take a much stronger role than it has done. We've seen in this last federal budget that there is $124 million allocated to AI, which is a good thing. It will be the first time that there's been a real dedicated budget, and I am privileged enough to sit on the committee that will help allocate that funding. And in that, there is a big education component to what the government is intending to do. So, I'm impressed with that.

    But still, I'm not sure, without getting too political, that the government currently is particularly technology-oriented, and it is my strong urging to the federal government and politicians that we need to get right up to speed very quickly. Australia will be left behind again if the government doesn't have stronger teeth around what it's doing with AI and doesn't step up to making sure it's responsible. And I do think Australia has a great opportunity, because it is early days for us: if we start putting these responsible AI regulations and frameworks in place now, it could actually be a very nice competitive advantage for us in the future.

    Nick Wailes:

    Okay. So, that's interesting. I don't think it's political to say that the current government isn't particularly technology savvy. I think that's just a statement of fact, and we'd all like to see some advancement there. But this idea that we could actually differentiate ourselves by building ethical and responsible AI in Australia that could be deployed on a global scale, and that this could be an advantage for us, I think that's really interesting to explore.

    Catriona Wallace:

    Yeah. And before we get into the enterprise level and what could be done there, I want to raise another thing, which I'm very keen for your listeners to know about too, and that is: why do we do this, why would we bother? We would bother because we don't want injustice and bias and harm. That's a given, but there's also a much bigger risk playing out at the moment. There's been a very, very good book by Toby Ord, an Australian at Oxford University, called The Precipice, and it talks about existential risk. In this book, Toby Ord identifies around six core existential risks, an existential risk being: will something destroy humanity, kill everyone by the end of the century, or severely reduce humanity's potential? And if we look at those existential risks, they are nuclear war, climate change, an asteroid colliding with the earth, pandemic, bio-engineered disease, and artificial intelligence.

    Now, these first five have a risk factor, according to the academics in this field, of about a one in a thousand to a one in a hundred thousand chance that any of them, including climate change, will destroy humanity by the end of the century. Artificial intelligence, however, is not a one in a thousand chance. It is a one in six chance that AI will cause, or go near to causing, the destruction of humanity by the end of the century. So, for me, there's a bigger point here. We absolutely need to start regulating and monitoring this technology, because it's not just that our businesses are at risk, or that not getting a credit card is at risk; there are far greater stakes. And AI is now regarded as one of the most serious threats to humanity unless it is controlled. And then, where's the leadership? It's not coming from the tech giants.

    So, I say, Nick, it comes from the business schools, organisations such as mine. It comes from your students. It comes from business leaders who need to step into this ethical leadership, start to learn about this, understand both the benefits and the risks that this technology is bringing.

    Nick Wailes:

    Okay. So, completely terrifying to hear you say that, but I think a great call to action. So, you've set us that challenge: we, the business schools, your organisation, but also our alumni who are managers running businesses and thinking about the future, have to take responsibility for this. So, if you think about an alumnus sitting in a general management role, what are we going to ask them to do, or what are the few steps they should be taking now to help build a successful future that leverages AI but also avoids its challenges?

    Catriona Wallace:

    Yeah, great. And I think it's actually quite simple. So, one: understand, is there an AI strategy within the organisation? Two: understand, has there been any work done in building the responsible AI or ethical AI frameworks in which that strategy sits? And three: to what degree have the people who are running any part of the AI strategy or programme in the business been trained, and do they understand ethical AI, from the governance to the policy, to the practice, to the coding level? And just start asking the questions. If there isn't much progress yet in any of those fields, then there are certainly resources available, quite easily available, for them to start to learn. And so, I'm really calling, Nick, for these leaders we're talking about to start to understand how to do responsible infrastructure, responsible technology, and ethical AI. But they don't have to be data scientists. They literally don't have to know anything about the software itself. Those principles I talked you through are quite straightforward to understand. They're actually just good business.

    Nick Wailes:

    Well, that's the homework for everybody who's listening: to go and ask those three questions. Is there an AI strategy? What work's been done on building an ethical AI framework? To what extent do the people involved understand ethical AI? So, those are the questions for everyone. Catriona, it's been a fascinating conversation. You've sort of delighted and terrified me at the same time, but it's I think for-

    Catriona Wallace:

    Well, we have a word for that, Nick. It's terror-sighted.

    Nick Wailes:

    Okay. Well, I'm definitely terror-sighted, but it's an exciting area to watch. I know it's going to impact all of our lives and the businesses that we're in, and it's great to get your, I think, really crystal-clear way of helping us understand it. So, great to have you back at AGSM, and thank you so much for your input.

    Catriona Wallace:

    Such a pleasure, Nick. Thank you.

    Narration:

    It’s clear from listening to Dr Wallace that the rate at which we encounter AI in our daily lives will rise exponentially in the years to come. 

    But while we’re comfortable with asking Siri for a weather forecast, or letting an algorithm choose our playlist for the trip to work, you might also be wondering where and if we’d draw a line.  
    We asked Dr Sam Kirshner about some of those boundaries and here’s what he told us.  

    Sam Kirshner:

    Hi, I'm Sam Kirshner, I'm an academic here at the UNSW Business School. I'm from Toronto, Canada, which is pretty much just a colder version of Sydney.

    My work on AI thus far has primarily focused on understanding when individuals, whether they're consumers or employees within an organisation, use and listen to AI. On the consumer side, people tend to exhibit tremendous heterogeneity in terms of whether they'll actually use AI predictions and recommendations, and some of the key factors are really the characteristics of the task.

    If you think about forecasting the weather, that's very inconsequential and people are obviously happy to use AI devices for that. But for things like medical diagnoses and recommending disease treatments, because those are such hugely consequential decisions, people really prefer to listen to human doctors rather than AI, even though the AI can often be superior.

    You can think of examples like getting dating recommendations on different types of platforms. With this type of recommendation, people often prefer human judgement, just because they feel their own needs for dating are very unique. Whereas for something more objective, like predicting financial portfolios, people are much happier to actually listen to FinTechs or other types of machine learning algorithms, which can make recommendations and predict stocks.

    Although we have a natural predisposition towards whether we would trust AI or not in these scenarios, organisations can carefully design the user experience of their AI and recommender agents to create greater trust among consumers. If you think about even things like chat bots: most firms now, whether they're service-oriented firms or just firms that have large customer support operations, will typically be using chat bots.

    But then there's so much choice in terms of how do we actually present this chat bot to the users? Do we give them avatars? Are they a robot? Are they a person? Do we give the chat bot a name? Do we give the chat bot some sort of personality? Do we make it humorous? These are really all just very basic questions, really just around the user experience that firms have to consider in order to make sure that people are actually engaging with the tech that they're putting out.

    Obviously getting consumers to use AI is still a very substantial and a large challenge for organisations, but organisations also face many of the same challenges with their own employees.

    Narration:

    For many businesses, and some entire industries, knowing where to start with digital transformation can often be the hardest part. 
    We asked Dr Kirshner about the businesses who are ahead of the curve when it comes to AI, and what those who are lagging behind can do to catch up.  

    Sam Kirshner:

    I guess in general, the way I think of AI is that it's really just a superior method for prediction. Ultimately, that's what AI is doing: it's taking input data and then making its best guess about whatever the task may be. And so if you think about businesses in general, almost everything we do can be turned into a prediction problem.
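
    As a purely illustrative sketch of turning a business question into a prediction problem, the snippet below frames customer churn as one; the features, data, and numbers are invented, and it is not tied to any system discussed in the episode.

```python
# Illustration only: framing "which customers will leave?" as a prediction task.
# Features, data and numbers are invented for this sketch.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(1, 60, n),   # months as a customer
    rng.poisson(3, n),        # support tickets in the last quarter
    rng.normal(80, 25, n),    # monthly spend
])
# Past outcomes the model learns from: True means the customer left.
churned = ((X[:, 1] > 4) & (X[:, 0] < 12)) | (rng.random(n) < 0.05)

model = GradientBoostingClassifier().fit(X, churned)
new_customer = [[6, 7, 95.0]]  # 6 months tenure, 7 tickets, $95/month spend
print(model.predict_proba(new_customer)[0, 1])  # predicted probability of churn
```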

    And what's kind of interesting about AI in general is that the people who are pioneering it and have the most sophisticated capabilities are typically very large tech firms or huge institutions like banks and consulting firms. At the other end, a lot of startups are also heavily based around AI, and what's interesting with the startup world and entrepreneurs is that it can often be data-driven.

    And this is not unique to the startup world, but what's more interesting is that startups are really great at doing AI, and the big tech companies and large institutions are really great at using AI, but a lot of companies in between are just not really there.

    And I guess a lot of this has to do with the current skills gap, where many workers are still looking to uplift digitally, let alone uplift their capabilities around data literacy.

    And without these types of capabilities spread widely within organisations, it's really hard to actually implement AI and get people to buy into using AI to better the company and the value proposition offered by the firm. This, I guess, is noted by Tim Fountaine, who's a senior partner at McKinsey here in Sydney.

    He notes that you can't just plug AI into an existing process and just hope that it will work or that you'll get great insights. But really it comes down to re-imagining business and business models and taking a very structured approach or else AI just simply won't scale within the organisation.

    Really what you're looking for is kind of Goldilocks conditions for these organisations. You need to find an application that's not something that's so critical to the business that it involves hundreds of people, but you really need to find something that's meaningful for a select group of business leaders or champions, where the project will move the needle, but it doesn't have dozens of people arguing about the accountability, or the direction that these projects go.

    And then once you demonstrate the value of AI, kind of in this niche application, then more people in the organisation will be willing to actually adopt it.

    Narration:

    Well there you have it, our deep dive into The Business of AI. We hope you’re better equipped to recognise the pitfalls, but more importantly the potential of what’s to come. 

    To find out more about today’s episode, search for AGSM’s ‘The Business of…’ podcast online.
    Please share, rate, review and subscribe to AGSM’s business podcast on your favourite platform, and look out for future episodes.

    In the meantime, you can follow AGSM at UNSW Business School on LinkedIn and Facebook for more industry insights for an accelerating world or find us at agsm.edu.au.

    Until next time, thank you for listening.
