In this episode of Tech Talks Daily, I sit down with Dr. Mfon Akpan, an expert in accounting and finance from Methodist University, to explore the rapidly evolving role of artificial intelligence in the accounting profession.
As AI technologies continue to advance, they are increasingly outperforming humans in various domains, including those within the accounting sector. Dr. Akpan shares insights on how AI is reshaping workflows, boosting productivity, and introducing a new paradigm where AI-generated drafts are refined by human professionals—a process that is not only enhancing efficiency but also challenging traditional approaches.
A significant focus of our discussion centers on the ethical considerations that accompany AI's integration into accounting. Dr. Akpan raises critical concerns about privacy, the need for transparency in disclosing AI usage to clients, and the potential for a digital divide created by unequal access to AI tools.
As major accounting firms invest heavily in AI, the question of how to implement these technologies responsibly becomes ever more pressing. Dr. Akpan also delves into the impact of AI on education, emphasizing the importance of exposing students to these tools to ensure they are workforce-ready while also teaching them to navigate the ethical complexities that arise.
Dr. Akpan's work, including his recent publication on aligning AI with ethical accountancy, introduces frameworks designed to ensure that AI adoption in the accounting profession remains aligned with the core values of trust and integrity.
As we explore these themes, Dr. Akpan also reflects on the challenges facing educators, policymakers, and practitioners as they collaborate to keep pace with the rapid advancements in AI and ensure that ethical standards are upheld.
Join us as we discuss the transformative potential of AI in accounting, the ethical challenges it presents, and the crucial role of lifelong learning in preparing the next generation of professionals for a future where AI plays an integral role in their work.
[00:00:01] [SPEAKER_01]: Is technology merely a tool? Or can it profoundly reshape industries and ethical boundaries?
[00:00:09] [SPEAKER_01]: Well, today here on Tech Talks Daily I'm going to be joined by Dr. Mfon Akpan, and he's a
[00:00:14] [SPEAKER_01]: renowned figure from Methodist University and he has an infectious enthusiasm for all
[00:00:20] [SPEAKER_01]: things technology. Especially how artificial intelligence is reshaping the field of accounting.
[00:00:27] [SPEAKER_01]: But with his extensive experience in academia and service on the audit committee in North
[00:00:34] [SPEAKER_01]: Carolina, for me he brings a unique perspective on the intersection of AI, ethics, accounting,
[00:00:42] [SPEAKER_01]: education and so much more. Honestly, he's the kind of guy I could listen to for hours
[00:00:47] [SPEAKER_01]: and so much so I had to stop myself going down a rabbit hole on an additional side
[00:00:52] [SPEAKER_01]: mission as we were talking today, so I will be getting him back on later in the
[00:00:55] [SPEAKER_01]: year. But no more spoilers from me.
[00:00:58] [SPEAKER_01]: Delivering daily content to 140,000 of you wonderful monthly listeners across the globe is no small
[00:01:04] [SPEAKER_01]: feat. I don't want to take all the credit here because it wouldn't be possible without the backing
[00:01:08] [SPEAKER_01]: of our dedicated sponsors and partners. And I also want to shine a light on the fact that legacy
[00:01:13] [SPEAKER_01]: managed file transfer tools are looking dated. They often lack the security that today's remote
[00:01:18] [SPEAKER_01]: workforce demands, and companies that continue relying on that outdated tech are putting themselves at risk.
[00:01:25] [SPEAKER_01]: IT professionals, are you tired of juggling multiple servers for secure file sharing,
[00:01:31] [SPEAKER_01]: integrated shared folders and email, plus a comprehensive REST API?
[00:01:36] [SPEAKER_01]: Kiteworks simplifies your workflow. For administrators, you can experience unmatched
[00:01:40] [SPEAKER_01]: functionality and integration that traditional MFT servers just can't touch.
[00:01:45] [SPEAKER_01]: Step into the future of secure managed file transfer with Kiteworks by going to
[00:01:50] [SPEAKER_01]: kiteworks.com to get started. That's kiteworks.com and remember Kiteworks is also FedRAMP
[00:01:56] [SPEAKER_01]: moderate authorised. Thank you for your patience today, this is the moment you've been waiting
[00:02:00] [SPEAKER_01]: for. It's time to welcome my guest onto the show. Buckle up and hold on tight because no matter
[00:02:06] [SPEAKER_01]: where you are in the world right now, it's time for me to beam your ears all the way to
[00:02:10] [SPEAKER_01]: North Carolina where my guest is waiting to talk with us. So a massive warm welcome to the
[00:02:18] [SPEAKER_01]: show. Can you tell everybody listening a little about who you are and what you do?
[00:02:23] [SPEAKER_00]: Well, first Neil, I'm excited to be here. My name is Dr. Mfon Akpan. I'm an assistant professor
[00:02:29] [SPEAKER_00]: of accounting at Methodist University in Fayetteville, North Carolina. I teach
[00:02:36] [SPEAKER_00]: graduate and undergraduate level courses in accounting. I am an AI and metaverse enthusiast.
[00:02:46] [SPEAKER_00]: So I love technology. I think a better way of saying that is I really love technology.
[00:02:51] [SPEAKER_01]: And I love that. That's one of the reasons I invited you on here. And what I try and do
[00:02:55] [SPEAKER_01]: every single day on this podcast is talk about something that people are talking about in
[00:03:00] [SPEAKER_01]: businesses and maybe demystify some of the complex technologies and maybe ease a few concerns.
[00:03:06] [SPEAKER_01]: And right now everyone's talking about AI, but the reason I invited you on today was to talk
[00:03:11] [SPEAKER_01]: about AI ethics, which is another huge talking point right now. So to begin with and set the
[00:03:17] [SPEAKER_01]: scene for our conversation today, can you just walk me through the key principles behind the
[00:03:22] [SPEAKER_01]: accounting framework for AI ethics? And also how that aligns with the AICPA Code of Professional
[00:03:30] [SPEAKER_01]: conduct as well. There's a lot going on there, but just for anyone that's hearing about this
[00:03:35] [SPEAKER_01]: subject for the first time, could you just set the scene for me? I think that is a huge
[00:03:41] [SPEAKER_00]: area that is still being worked on as far as AI ethics. And tying that into the professional
[00:03:51] [SPEAKER_00]: code of conduct, so the AICPA Code of Professional Conduct, there's still discussion on that.
[00:03:56] [SPEAKER_00]: There's still challenges with that. And I think as the technology continues to develop,
[00:04:03] [SPEAKER_00]: there'll be even more and more challenges that we'll see as far as how AI is used, privacy issues,
[00:04:13] [SPEAKER_00]: particularly with data. I think that's a big issue. One area that I'm interested in is
[00:04:21] [SPEAKER_00]: devices. So we've got so many of these AI powered devices. I've had conversations myself
[00:04:30] [SPEAKER_00]: with a colleague, which may be interesting. So I've got, if you're familiar with the Ray-Ban,
[00:04:39] [SPEAKER_00]: the Meta Ray-Ban glasses, AI glasses, I've got a pair of those. So I had a conversation with
[00:04:48] [SPEAKER_00]: my colleague. So I'm like, well, how would you, if I was working at a company and I had
[00:04:55] [SPEAKER_00]: a pair of these glasses, number one, I could come into the office. I could tap, tap on my glasses
[00:05:04] [SPEAKER_00]: and take pictures of stuff on my screen. I could go to a client, take pictures of the client. I can
[00:05:10] [SPEAKER_00]: record parts of the conversation. I can take pictures of documents. So there's a lot of
[00:05:16] [SPEAKER_00]: things that I could do with these glasses. And no one would know. So the question was, well,
[00:05:23] [SPEAKER_00]: what policy would this fall under? And my colleague was like, I believe it will fall under a smartphone
[00:05:30] [SPEAKER_00]: policy, which I thought was interesting. So the way we have a smartphone policy about how you use your data,
[00:05:36] [SPEAKER_00]: how you're not supposed to, but again, how would you know? If I have my phone,
[00:05:42] [SPEAKER_00]: you know that that's a phone. But if I have these glasses on, unless I tell you,
[00:05:48] [SPEAKER_00]: you really wouldn't know. So I think you could say that's a challenge. Or is it not a challenge?
[00:05:56] [SPEAKER_00]: You can say, well, okay, we can use the smartphone policy. But does that really cover something
[00:06:01] [SPEAKER_00]: like that with these devices? And then when I thought about the Rabbit R1 and the
[00:06:06] [SPEAKER_00]: Humane pin, you know, when you see these different pins and these other devices, Plaud is another
[00:06:11] [SPEAKER_00]: one where you can put it on your phone and it records everything. So, you know, I start to
[00:06:17] [SPEAKER_00]: think about how, how are we going to mitigate those challenges? And especially from what is right,
[00:06:23] [SPEAKER_00]: what is wrong? Is it okay for me to record these things with my Plaud device or take pictures
[00:06:29] [SPEAKER_00]: of documents with my Meta AI glasses? And is that considered a smartphone? Or if I'm in
[00:06:37] [SPEAKER_00]: trouble because I'm using it, can I say, well, it's not a phone. These are my glasses. So
[00:06:43] [SPEAKER_00]: so I, you know, I think there's a lot of challenges. And I think these challenges with AI and this is
[00:06:51] [SPEAKER_00]: all stemming from AI are coming at us pretty quickly. And the fact that we're having these
[00:06:56] [SPEAKER_00]: conversations, I think there is something there. You know, it's not cut and dry.
[00:07:03] [SPEAKER_01]: And I'm glad you raised that. There'll be a few people who probably don't know about
[00:07:06] [SPEAKER_01]: the Ray-Ban glasses. Obviously, we know about the Vision Pro, that big bulky headset. There's
[00:07:13] [SPEAKER_01]: Mixed Reality headsets that are big and bulky. But these Ray-Bans look just like a pair of glasses.
[00:07:19] [SPEAKER_01]: Do they, like the Google Glass, have a little red light that comes on when you're recording something? Or
[00:07:23] [SPEAKER_01]: if I was sat next to you in a bar or a restaurant, would I know that you were recording with these
[00:07:29] [SPEAKER_00]: glasses? These are my glasses. I don't know if you'll be able to see, but they have a little light
[00:07:36] [SPEAKER_00]: on the side of them. They have a little camera on the side, but pretty much you
[00:07:41] [SPEAKER_01]: wouldn't be able to know. What's been your experience with those just out of interest?
[00:07:49] [SPEAKER_00]: They've been great. I mean, they work better than expected.
[00:07:54] [SPEAKER_00]: You're able to talk to the AI, so you can ask it questions. What is that? If you see something,
[00:08:00] [SPEAKER_00]: what is this? So you can use its vision so it can read things. I mean, it's a very
[00:08:07] [SPEAKER_00]: interesting technology. So if you take a picture, it goes to your phone.
[00:08:13] [SPEAKER_00]: So then you have it on your phone and then you can post it to social media or save it in your phone
[00:08:19] [SPEAKER_00]: and you have those documents or you have those images.
[00:08:23] [SPEAKER_01]: Wow, that's incredible. I think we could almost dedicate an entire episode to that because I
[00:08:28] [SPEAKER_01]: know you are passionate about that area. It sounds like we're going to need to get
[00:08:31] [SPEAKER_01]: you back on there. But if we go back to the world of accounting for a moment, what would you say are
[00:08:37] [SPEAKER_01]: the most significant ethical considerations when integrating AI technologies into the accounting
[00:08:43] [SPEAKER_01]: sector? Because again, there's a lot of businesses now, they're rushing in, they're
[00:08:47] [SPEAKER_01]: getting the new AI stuff. But what kind of things should they be thinking about, especially from an ethical
[00:08:54] [SPEAKER_00]: standpoint? One thing we hear is privacy. So I think that's one big thing, also outside use.
[00:09:04] [SPEAKER_00]: So there's a statistic where you have, I think it was 75% of knowledge workers are using AI,
[00:09:15] [SPEAKER_00]: but they're using it, not at work. So I think that could be a concern
[00:09:22] [SPEAKER_00]: where either you have the tools at the office or you don't have them at the office,
[00:09:30] [SPEAKER_00]: but then you have employees who are using it because they want to be more productive.
[00:09:34] [SPEAKER_00]: They want to get more done. So they're using this on their own computers, which may not have
[00:09:40] [SPEAKER_00]: the same privacy protocols if you have an enterprise license for some of these models.
[00:09:47] [SPEAKER_00]: So it ties into privacy, people using it, I guess, off the books. I don't know if that's the way to say
[00:09:53] [SPEAKER_00]: it. I think that could be a big concern as well because you don't know how they're using it and
[00:09:59] [SPEAKER_00]: it's not managed. They're using it outside of the workplace. So you can't really manage how
[00:10:06] [SPEAKER_00]: they're using it or understand how they're integrating it into their work processes.
[00:10:13] [SPEAKER_01]: So if we've got an accountant listening to our conversation today, anywhere in the world, maybe
[00:10:18] [SPEAKER_01]: they're sat on the fence. Maybe one side of them is thinking, oh, it's all overblown. It's overhyped
[00:10:23] [SPEAKER_01]: or it's set for the future, or I'm too scared, I want nothing to do with it for ethical reasons.
[00:10:28] [SPEAKER_01]: Can you just set the scene on how AI is transforming the accounting profession in terms
[00:10:34] [SPEAKER_01]: of things like productivity and accuracy with some real-world use cases and also on
[00:10:39] [SPEAKER_01]: the flip side, just some of those potential risks that need to be managed to ensure ethical AI
[00:10:45] [SPEAKER_01]: adoption. I understand there's a lot going on in there as well, but can you share your thoughts around that?
[00:10:51] [SPEAKER_00]: That was a big question and it was a good question. The first part when you said it's
[00:10:58] [SPEAKER_00]: overblown, I would argue it's underblown. It's the opposite.
[00:11:06] [SPEAKER_00]: Why would I say that? From my perspective and I get it, we tend to focus on the negative side.
[00:11:17] [SPEAKER_00]: So we focus on hallucinations, meaning the AI lies. In other words, it hallucinates.
[00:11:26] [SPEAKER_00]: It's not very reliable. You ask it nine questions, it'll get those nine questions right,
[00:11:32] [SPEAKER_00]: but that tenth one, it'll get wrong. It's not like a calculator. You sit with your calculator,
[00:11:39] [SPEAKER_00]: you can type in 2 plus 2 equals 4 10 times and you'll get the same answer.
[00:11:47] [SPEAKER_00]: So we don't have that reliability. Yes, issues with privacy. That's there.
[00:11:53] [SPEAKER_00]: But here's the flip side. If you think about the capabilities,
[00:12:00] [SPEAKER_00]: so whenever I look at these assessments, so if you go right now to Anthropic's website
[00:12:08] [SPEAKER_00]: and you look at the performance, there's a Claude 3.5 Sonnet, and you look at that,
[00:12:15] [SPEAKER_00]: it'll say for the undergraduate assessment, the benchmark score is 86%.
[00:12:22] [SPEAKER_00]: So it gets 86% of these questions right. I think ChatGPT is about 86% as well. Then the master's
[00:12:31] [SPEAKER_00]: level assessment is around 50% for Claude and I think 30% for ChatGPT. Why is that interesting?
[00:12:43] [SPEAKER_00]: And why is that underblown? Well, if you look at education attainment data in the US,
[00:12:54] [SPEAKER_00]: about 40% of Americans have a bachelor's degree. So that's below, that's not even 50%.
[00:13:01] [SPEAKER_00]: You look at graduate degrees, somewhere 13 to 15%.
[00:13:07] [SPEAKER_00]: Right. And then you think about a doctorate is somewhere around 3%, 5% there.
[00:13:15] [SPEAKER_00]: So when you hear them talk about these models, we want to get it to be better
[00:13:20] [SPEAKER_00]: to work at a PhD level. I'm saying to myself, well, it's already working
[00:13:27] [SPEAKER_00]: at an undergraduate level, which is at a level higher than most Americans.
[00:13:33] [SPEAKER_00]: Yeah. So when they talk, you see what I'm saying, so when you talk about AGI, you talk about how powerful it is.
[00:13:41] [SPEAKER_00]: The other thing is, you think about the literacy rate. The literacy rate in the US, about 50-something,
[00:13:49] [SPEAKER_00]: 54, I think it's 54% of Americans read at a sixth-grade level. You've got a tool
[00:13:56] [SPEAKER_00]: that scores 86% at an undergraduate level. So I think, when they talk about AGI being
[00:14:03] [SPEAKER_00]: smarter than the average human, AI is already smarter than the average person, in my opinion.
[00:14:08] [SPEAKER_00]: That may be controversial, but if you kind of compare where normal people are, because when
[00:14:15] [SPEAKER_00]: you see them talking, I hear Mira Murati, PhD, PhD, PhD. At least in the US, 3% are PhDs.
[00:14:26] [SPEAKER_00]: And then you've got to peel back the onion on that too. My doctorate's in accounting.
[00:14:33] [SPEAKER_00]: If I went to a large language model and asked a PhD-level question in physics,
[00:14:40] [SPEAKER_00]: I wouldn't know if it's right or wrong or what it is. I'd have no idea. So I think when you talk about
[00:14:48] [SPEAKER_00]: it being overhyped, I don't think it's overhyped. I think it's actually the opposite.
[00:14:52] [SPEAKER_00]: They're not really talking about how powerful and capable the models are that we have right
[00:14:59] [SPEAKER_00]: now. Now to the flip side about the issues when we talk about use and ethics in workflow,
[00:15:10] [SPEAKER_00]: that's a challenge, because there's all these questions in accounting.
[00:15:16] [SPEAKER_00]: If I'm working on something, do I tell my client I'm using AI to do this, or do I not tell them
[00:15:23] [SPEAKER_00]: that I'm using it? That's an ethical issue. They're paying me to do it, but hey, I'm using AI to do it.
[00:15:31] [SPEAKER_00]: Do I tell them that, hey, about 80% of this was done by AI and 20% was done by me? So those questions
[00:15:40] [SPEAKER_00]: can come up and I think that can be a concern as well as the privacy issue as well.
[00:15:48] [SPEAKER_01]: 100% with you and I also agree with you when you said we do focus on the negative too much sometimes.
[00:15:55] [SPEAKER_01]: Another reason I invited you on today was a paper that you co-authored, which also discusses
[00:16:01] [SPEAKER_01]: the transformative potential and challenges of AI in accounting education but can you tell me a
[00:16:07] [SPEAKER_01]: bit more about how AI can enhance learning experiences and prepare students for the
[00:16:12] [SPEAKER_01]: future workforce? Because this is something we should be celebrating, right?
[00:16:17] [SPEAKER_00]: Yeah, we should, because they're using it. Everybody, I mean, when you think about the large,
[00:16:22] [SPEAKER_00]: so you think about the Big Four firms, they're heavily investing billions of dollars in AI,
[00:16:29] [SPEAKER_00]: partnering, implementing, adding it to workflows, because they want to increase productivity.
[00:16:35] [SPEAKER_00]: They want to get things done faster and over and over again I hear from my colleagues at
[00:16:42] [SPEAKER_00]: these firms generate and review, generate and review. So what that means is that
[00:16:51] [SPEAKER_00]: you're looking for ways to add it to the workflow. Okay, if there's some sort of process
[00:16:58] [SPEAKER_00]: that would take me six hours to do, now, if I use AI to generate it,
[00:17:07] [SPEAKER_00]: it may take 10 seconds to generate whatever it is I need, and then I spend two hours reviewing,
[00:17:16] [SPEAKER_00]: I've saved about four hours, right? So that is the main one. There's other use
[00:17:26] [SPEAKER_00]: cases, but that's the main one. Generate and review, increasing that productivity
[00:17:32] [SPEAKER_00]: and getting things done a lot faster than you would before. So then you can move on to other work,
[00:17:39] [SPEAKER_00]: you've got more capacity, and as the models get better, and when I say better, I mean as far as reliability.
[00:17:48] [SPEAKER_00]: So when you think about it from that generate-and-review paradigm,
[00:17:54] [SPEAKER_00]: eventually the models are going to get better and better, the review time is going to get
[00:17:59] [SPEAKER_00]: shorter and shorter and shorter because they're going to become more precise. So it'll only enhance
[00:18:05] [SPEAKER_00]: your capability. Then by adding it to workflows, you start to understand, because these models for the
[00:18:13] [SPEAKER_00]: most part are general. So you can add it here, you can try to figure out how I can use it here,
[00:18:19] [SPEAKER_00]: maybe I can save 20 minutes here in this process. So you look for different ways to
[00:18:24] [SPEAKER_00]: use it and implement it, and that productivity adds up. In the classroom, having students be exposed to
[00:18:32] [SPEAKER_00]: it and using it helps. Number one, they'll understand how it works, the good side, the bad
[00:18:39] [SPEAKER_00]: side to it, but also how it can be implemented and put into workflows, because this is
[00:18:47] [SPEAKER_00]: new technology. But it develops their skills with the technology so that
[00:18:53] [SPEAKER_00]: they're not just blind, or starting at zero with it.
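The generate-and-review time savings Dr. Akpan describes can be written out as a quick calculation; this is a minimal sketch using only the figures quoted in the conversation (six hours manual, roughly ten seconds to generate a draft, two hours of human review), with variable names of my own choosing.

```python
# Back-of-the-envelope arithmetic for the generate-and-review workflow,
# using the numbers quoted in the episode.
manual_hours = 6.0           # doing the task entirely by hand
generate_hours = 10 / 3600   # the ~10-second AI draft, expressed in hours
review_hours = 2.0           # human review of the AI draft

saved_hours = manual_hours - (generate_hours + review_hours)
print(f"time saved: about {saved_hours:.1f} hours")  # about 4 hours, as stated
```

As the models become more reliable, only `review_hours` shrinks, which is why the speaker expects the savings to keep growing.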
[00:18:59] [SPEAKER_01]: And something else we need to highlight that this stuff is here right now. It's not something for the
[00:19:04] [SPEAKER_01]: future. It's here right now as you mentioned a few moments ago and how smart it is right now. So
[00:19:10] [SPEAKER_01]: how important is it for educators, policymakers and industry leaders to collaborate right
[00:19:16] [SPEAKER_01]: now as well in preparing students for the evolving technology landscape? And
[00:19:21] [SPEAKER_01]: what steps can they take now to foster this collaboration? Anything that you're seeing here?
[00:19:27] [SPEAKER_00]: Yeah, to foster it, implementing the technology, making it very accessible, because it's very important.
[00:19:34] [SPEAKER_00]: That's a good point. It is here right now. People are using it, and to put yourself in the
[00:19:43] [SPEAKER_00]: sh-. And so the other thing is, I'm a professor, I'm going to talk about students. I've got to
[00:19:48] [SPEAKER_00]: prepare them. If I got a freshman, I've got to prepare them for the world four years from now.
[00:19:54] [SPEAKER_00]: And we're using it here now, but it's going to be here even more and we're going to be using
[00:19:59] [SPEAKER_00]: it at a different pace. So I've got to try to expose them to as much as possible, give them
[00:20:06] [SPEAKER_00]: as much exposure and skills as possible now so that they can have a leg up when they get out
[00:20:12] [SPEAKER_00]: to the workforce. I always use this analogy: say you're an employer, you got four candidates,
[00:20:21] [SPEAKER_00]: right? You got four candidates. Let's say they're all mathematicians, same education,
[00:20:27] [SPEAKER_00]: everything is the same except for their tools that they use to do their work,
[00:20:35] [SPEAKER_00]: their mathematics, their calculations. You got one mathematician. He only uses a pencil and paper
[00:20:43] [SPEAKER_00]: to do his calculations. So he'll be there, he'll get the job done, but it might take him two days
[00:20:52] [SPEAKER_00]: to get it done. Now you've got the other mathematician, he comes in, he's got the calculator.
[00:21:00] [SPEAKER_00]: He does all these calculations on this calculator. He can get the work done in one day.
[00:21:06] [SPEAKER_00]: Now you've got another guy comes in, he brings in his laptop computer.
[00:21:10] [SPEAKER_00]: He's got the internet and spreadsheets. He's got everything there. He can get everything done
[00:21:15] [SPEAKER_00]: in about six hours. Now you've got your other one. He's got a computer and he's got AI
[00:21:23] [SPEAKER_00]: and he can get all of this work done in one hour. Now if you're an employer, a businessman,
[00:21:29] [SPEAKER_00]: and you want that mathematician, which one would you want to hire?
[00:21:35] [SPEAKER_00]: Would you want the guy with the pencil and paper? No, you want the guy with the AI.
[00:21:42] [SPEAKER_00]: And that's where we are right now. So when we're talking about education, we got to think about
[00:21:48] [SPEAKER_00]: it this way. Employers, hey, they want you to execute. They want the output. They're going
[00:21:56] [SPEAKER_00]: to say, well, that's great. He can do all of this on pencil and paper, but it takes him 10 hours
[00:22:03] [SPEAKER_00]: to do it. I'll take the guy who gets it done in 15 minutes. So that's where we are right now.
[00:22:10] [SPEAKER_00]: And that's only going to increase. So when I use that analogy, you kind of think about, well,
[00:22:17] [SPEAKER_00]: okay, that makes sense. Okay, I'll understand that. Why this is important and why it's
[00:22:23] [SPEAKER_00]: important for students to learn about it. And you hear me say, get exposed to it.
[00:22:31] [SPEAKER_00]: I say get exposed because it's changing so fast. So by the time you, you know,
[00:22:38] [SPEAKER_00]: I had an activity in my class last fall using Claude 2. Now we've got Claude 3.5.
[00:22:45] [SPEAKER_00]: By the time we start classes again, I don't know what we'll have. Right? So you got to try
[00:22:51] [SPEAKER_00]: to expose them to as much as possible. So they're familiar with it, the general basis,
[00:22:59] [SPEAKER_00]: not necessarily diving too deep because it's going to keep changing. The goalpost is going to keep
[00:23:03] [SPEAKER_01]: moving. You raise such a great point now because when I scroll down my news feed, I do see a lot of
[00:23:09] [SPEAKER_01]: education institutions that are saying, hey, we need to get everyone's smartphone, put them in
[00:23:14] [SPEAKER_01]: a pouch, lock them away. It's class time and AI, if we catch you using AI, we're going to
[00:23:20] [SPEAKER_01]: punish you for that as well. There's almost kind of two worlds going on. I understand the reasons
[00:23:25] [SPEAKER_01]: for it and the distractions, et cetera, and learning for yourself. But when these students
[00:23:30] [SPEAKER_01]: leave the education, they're entering a world where everyone's got all of these tools and
[00:23:36] [SPEAKER_01]: relying on them. So is it better to give them access and teach them how to use it responsibly?
[00:23:42] [SPEAKER_01]: It's such a big topic. Where do you stand on that?
[00:23:45] [SPEAKER_00]: It's better to give them access and have them use it responsibly because they're going to be in the
[00:23:51] [SPEAKER_00]: workplace. They're not going to be sitting in the classroom and we're preparing them for work.
[00:23:57] [SPEAKER_00]: So I think definitely getting them exposed to as many tools as possible only helps them
[00:24:05] [SPEAKER_00]: when they leave, because employers, again, they're going to be looking for the technology
[00:24:13] [SPEAKER_00]: skills. They're going to be looking for these specific skills when they get out there and
[00:24:19] [SPEAKER_00]: they want them to be able to use them, understand them, and have some level of proficiency. It
[00:24:24] [SPEAKER_00]: doesn't have to be perfect but to have some sort of proficiency with the tools. So definitely,
[00:23:30] [SPEAKER_00]: particularly in accounting. It's here. And when you think about the Big Four firms,
[00:24:37] [SPEAKER_00]: okay, they're heavily invested. All of this trickles down. So eventually you've got other firms,
[00:24:45] [SPEAKER_00]: so you've got your regional, your smaller firms, they're going to be invested in it and using it
[00:24:51] [SPEAKER_01]: as well. Yeah. And even when you leave school, it's not a case of then getting a job and then
[00:24:57] [SPEAKER_01]: sticking in that job for 40 years and getting a wristwatch at the end of it. We're all on this
[00:25:02] [SPEAKER_01]: path of continuous learning now. So as AI continues to evolve, what role do you see lifelong
[00:25:09] [SPEAKER_01]: learning playing in helping accounting professionals, and I think indeed anyone,
[00:25:14] [SPEAKER_01]: staying current with these emerging technologies and ethical standards and any tips on how they can
[00:25:20] [SPEAKER_01]: do that? Because there's that much information coming out, and the speed of change is going at
[00:25:25] [SPEAKER_01]: breakneck speed, but at the same time, there's almost a realization that it's never going to
[00:25:29] [SPEAKER_00]: move this slow again. It won't. So there's a theory, they call it the 10-10 rule, and it's
[00:25:37] [SPEAKER_00]: in one of my favorite books. It's called Where Good Ideas Come From by Steven Johnson.
[00:25:42] [SPEAKER_00]: And they talk about this 10-10 rule. So, and this was before, it would take 10 years to develop
[00:25:51] [SPEAKER_00]: a technology and then 10 years for it to be widely adopted. So let's say the color TV.
[00:25:57] [SPEAKER_00]: Took 10 years to develop it and then 10 years for it to be widely adopted. You think about DVDs
[00:26:05] [SPEAKER_00]: and CDs, et cetera. So you think about it, if you put that 10-10 together, that's 20 years, a generation.
[00:26:14] [SPEAKER_00]: So if you say 20 years is a generation, that doesn't exist anymore.
[00:26:18] [SPEAKER_00]: And it gets to the point where you have to really be, to some point, technology
[00:26:26] [SPEAKER_00]: agnostic or platform agnostic, meaning not just marrying yourself to one. It's important to learn
[00:26:37] [SPEAKER_00]: one platform but learn what you need on that platform that you can use on another platform.
[00:26:43] [SPEAKER_00]: What do I mean by that? Well, if you think about Gemini, you think about Claude,
[00:26:49] [SPEAKER_00]: you think about ChatGPT. You think about all of these platforms. What do they have in common?
[00:26:55] [SPEAKER_00]: You input with prompting. Okay, I need to learn prompting. So you get good at prompting
[00:27:02] [SPEAKER_00]: at the different types of prompting. There's zero shot, few shot prompting, chain of thought,
[00:27:10] [SPEAKER_00]: tree of thought, and so on. So you start learning prompting because now you can use those prompts
[00:27:16] [SPEAKER_00]: on any of those platforms. So you have to start thinking about, well, okay, I could stick with
[00:27:24] [SPEAKER_00]: one. I like one but what skills can I learn that I could carry across the platforms?
[00:27:30] [SPEAKER_00]: What skills can I get good at even if these things keep changing which they do? Just like
[00:27:35] [SPEAKER_00]: I gave the example. Last fall we were using Claude 2.0, now there's 3.5. But the through line with that
[00:27:45] [SPEAKER_00]: is prompting. So you still have that prompting piece of getting good at the prompting and
[00:27:50] [SPEAKER_00]: understanding how to build skills across these various platforms. I think that's very important.
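For listeners unfamiliar with the prompting styles named above, here is a minimal sketch of how the same patterns carry across platforms. The helper functions and the accounting classification example are hypothetical, and no vendor API is called; each builder just returns a plain string you could paste into Gemini, Claude, or ChatGPT.

```python
# Three common prompt patterns, written as plain string builders so the
# skill transfers across platforms. Function names and the example task
# are illustrative only.

def zero_shot(task: str) -> str:
    """Ask directly, with no worked examples."""
    return f"Task: {task}\nAnswer:"

def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
    """Show a few input/output pairs before the real task."""
    shots = "\n".join(f"Input: {q}\nOutput: {a}" for q, a in examples)
    return f"{shots}\nInput: {task}\nOutput:"

def chain_of_thought(task: str) -> str:
    """Ask the model to reason step by step before answering."""
    return f"Task: {task}\nThink through this step by step, then give the answer."

# A hypothetical accounting use: classify ledger items by showing two examples.
prompt = few_shot(
    "Classify: 'Accrued revenue'",
    examples=[("Classify: 'Prepaid rent'", "Asset"),
              ("Classify: 'Unearned revenue'", "Liability")],
)
print(prompt)
```

The point is the one made in the conversation: the model behind the prompt keeps changing, but the patterns stay the same, so that is where the durable skill lives.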
[00:27:57] [SPEAKER_01]: Completely agree with you. And we've talked about the private sector and consumers. They're
[00:28:03] [SPEAKER_01]: racing ahead with this stuff and getting to grips with it. But we also mentioned a little
[00:28:07] [SPEAKER_01]: bit about the public sector and how they could be slow to catch up or see the value in it.
[00:28:11] [SPEAKER_01]: We talked about education but in the world of public sector accounting, how do you see AI
[00:28:18] [SPEAKER_01]: impacting that area? And are there any ethical considerations or anything that's relevant
[00:28:24] [SPEAKER_01]: to them that might be listening right now? It might feel just overwhelming but as we've said
[00:28:28] [SPEAKER_01]: multiple times, it's here right now. We need to deal with it.
[00:28:32] [SPEAKER_00]: I think one issue is that when you look at the data as far as who's using it, it can be very
[00:28:40] [SPEAKER_00]: confusing. And to tell you, I don't think we really know how many people are using it,
[00:28:47] [SPEAKER_00]: who's using it and exactly how they're using it. Why is that? Because to that point,
[00:28:55] [SPEAKER_00]: you have many people using it at home and they may not use it at work and they may not let their
[00:29:02] [SPEAKER_00]: colleagues or their employer know that, hey, I'm using AI to get a lot more work done.
[00:29:09] [SPEAKER_00]: And because many of these tools can be used at a certain capacity for free,
[00:29:16] [SPEAKER_00]: there's no paid subscription to flag it. So I think as far as people using the tools,
[00:29:22] [SPEAKER_00]: in my opinion, and this is just my opinion, I think there are more people using them
[00:29:28] [SPEAKER_00]: than we know. However, they're not using them directly at the workplace.
[00:29:35] [SPEAKER_00]: And then at the workplace, if your company has not embraced or invested in AI,
[00:29:45] [SPEAKER_00]: it may be challenging for them to figure out how to implement it, how to manage it,
[00:29:50] [SPEAKER_00]: how to tackle these ethical issues around it. So they may push away from it.
[00:29:58] [SPEAKER_00]: So I think that we're in this gray area right now where you've got people using it,
[00:30:05] [SPEAKER_00]: they're not using it, there's benefit and there's not benefit. Some workplaces are using it,
[00:30:10] [SPEAKER_00]: some are not. Imagine if you have your own shop, your own practice:
[00:30:17] [SPEAKER_00]: do I tell my clients I'm using it? What will happen if my clients know?
[00:30:22] [SPEAKER_00]: Will I lose clients? Do I charge them less money? Do I let them know these things?
[00:30:30] [SPEAKER_00]: Even with offshoring, so you think about a tax practice, many people offshore their
[00:30:37] [SPEAKER_00]: preparation, right, they hire people overseas to do a lot of their work and then they reveal
[00:30:43] [SPEAKER_00]: it. Is that the same thing? Or do people see it as the same thing? So there's all these different
[00:30:50] [SPEAKER_00]: gray areas that we have with this that have to be sorted out.
[00:30:55] [SPEAKER_01]: Another area I'd love to ask you about is around accessibility, because I think in recent years
[00:31:00] [SPEAKER_01]: we've seen a lot of authoritative content that's slowly but surely been locked away behind
[00:31:06] [SPEAKER_01]: paywalls and that could be anything from science papers to, I don't know,
[00:31:10] [SPEAKER_01]: time magazine or a newspaper or something. It's all locked behind paywalls, only
[00:31:13] [SPEAKER_01]: certain people can access information, which means everyone goes then to
[00:31:19] [SPEAKER_01]: information that's not verified maybe. Maybe that's got something to do with the misinformation
[00:31:22] [SPEAKER_01]: we see so much of, I'm not sure. But if we look to the future, if we've got
[00:31:28] [SPEAKER_01]: members of society that don't have access to $20 a month for ChatGPT or Claude, etc.,
[00:31:35] [SPEAKER_01]: certain members of society get left behind, because this information almost helps them be
[00:31:41] [SPEAKER_01]: limitless and improves their lives. They're locked out from that world. Is there a digital divide? Do you see anything around that?
[00:31:47] [SPEAKER_00]: Yeah, we have it now because you just mentioned it. There's
[00:31:52] [SPEAKER_00]: a paywall and it's throttled. So I pay for the enterprise, I have the enterprise
[00:32:00] [SPEAKER_00]: ChatGPT-4o. So I can use it as much as I want; I could sit on there most of the day and use it.
[00:32:06] [SPEAKER_00]: Someone that doesn't, they're not able to do that. If I go to Claude 3.5,
[00:32:13] [SPEAKER_00]: I get in there and I'm doing work in there. Eventually it's going to tell me you've used
[00:32:18] [SPEAKER_00]: up all your time. You've got to come back in four hours or five hours. So it's there. And I think
[00:32:25] [SPEAKER_00]: as this technology continues to scale and improve, I do believe we're going to see more and more
[00:32:35] [SPEAKER_00]: of a divide. There is a divide. We're going to see a greater divide because of the network
[00:32:40] [SPEAKER_00]: effect. Eventually you'll have more and more people using these tools.
[00:32:46] [SPEAKER_00]: It's not as if from 2023 to 2024 people stopped signing up; more and more
[00:32:58] [SPEAKER_00]: people are. And as that increases, that divide is going to increase, because you'll have people
[00:33:03] [SPEAKER_00]: using it for free, and then you'll have those people who are behind that paywall
[00:33:09] [SPEAKER_00]: and have more access, which gives them access to more productivity, more intelligence.
[00:33:16] [SPEAKER_00]: So now there's a digital divide around intelligence. And I think that goes back to what I was
[00:33:22] [SPEAKER_00]: mentioning before. When you look at these education attainment levels, you look at the
[00:33:28] [SPEAKER_00]: performance levels. Now you're going to have individuals who have access
[00:33:32] [SPEAKER_00]: to a lot of capability, information, and intelligence, and others who don't.