3241: Transparency, Trust, and AI: Atlassian's Legal Framework in Action
Tech Talks Daily · April 14, 2025
Episode 3241
23:27 · 17.95 MB

At Team '25 in Anaheim, I had the unique opportunity to sit down with Stan Shepard, General Counsel at Atlassian, for a conversation that pulled back the curtain on how legal and technology are intersecting in the age of AI.

Stan's journey from journalism to law to shaping legal operations at one of the world's most forward-thinking companies is as fascinating as it is relevant. What emerged from our discussion is a clear signal that legal teams are no longer trailing behind innovation—they're often at the front of it.

Stan shared how Atlassian's legal function achieved 85 percent daily usage of AI tools, including the company's in-house assistant, Rovo. This is remarkable when compared to the industry norm, where legal teams typically lag in AI adoption. Instead of resisting change, Stan's team leaned into it, focusing on automation for repetitive tasks while reserving high-value thinking for their legal experts.

We explore Atlassian's responsible tech framework, their principles around transparency and accountability, and how these inform product development from day one. Stan also walked me through how Atlassian is navigating the emerging global regulatory landscape, from the EU AI Act to evolving compliance in the US.

His insights on embedding legal counsel directly into product teams, rather than operating on the sidelines, reveal a model of collaboration that turns risk management into a growth enabler.

For legal professionals, compliance leaders, and tech decision-makers wrestling with how to integrate AI responsibly, this episode offers a grounded, real-world blueprint. It's not just about mitigating risk—it's about building trust, preserving human judgment, and future-proofing your operations. If you're wondering what responsible AI adoption looks like at scale, you'll want to hear this one.

So how are you preparing your legal and compliance strategy for the AI-powered workplace? Let's keep the conversation going.

[00:00:04] What happens when legal meets AI at the cutting edge of innovation? Well, at Team 25 in Anaheim, I recently had the opportunity to sit down with Stan Shepard, General Counsel at Atlassian. And together we had a refreshingly honest look at exactly how one of the world's most innovative tech companies is reshaping what it means to be a modern legal team.

[00:00:30] And from journalism to law, to helping scale Atlassian's legal strategy over a decade of change, Stan brings with him today a unique perspective grounded in curiosity and pragmatism. And in that conversation we're going to explore how his team flipped the script on AI adoption, achieving 85% daily usage in a function that has typically been seen as risk averse.

[00:00:56] And we'll also unpack the fine balance between innovation and compliance, the emergence of AI governance as a strategic imperative, and why trust is becoming a big differentiator.

[00:01:07] So whether you're in legal compliance or just trying to keep your AI strategy aligned with evolving regulation, our conversation today hopefully will give you grounded takeaways, not to mention a behind the scenes look at exactly how Atlassian is staying one step ahead with its legal team leading the charge. But enough from me. Time to get my guest on now. So thank you for joining me here on the Tech Talks Daily podcast at Team 25.

[00:01:37] But for everyone listening, can you just tell everyone a little about who you are and what you do? Yeah, thanks for having me on the show, Neil. Great to be here. So my name is Stan Shepard. I'm the general counsel here at Atlassian. I've been with the company for over a decade. And I feel like I've had probably four different jobs at the company and I've worked at five different iterations of Atlassian. So it's been quite a journey and quite a privilege to be here for as long as I have. I come from a family of educators. I definitely approach the world, I think, through the lens of curiosity.

[00:02:06] I like to ask questions, to understand why. And I think that's really what drew me to the field of law. Prior to being a lawyer, I actually was a journalist. And I sort of really leaned into that sort of curiosity and that innate wonder about the world.

[00:02:22] But what brought me to Atlassian about 10 years ago was the fact that we are such an innovative legal team that I looked under the hood and I saw a team of non-technical users like lawyers using Jira in their daily work. And I said, that is a really amazing team. That's a team I want to be part of. And that's how I landed here at Atlassian. And the thing is, I think the legal industry has often had a reputation for being slow to adapt to change.

[00:02:49] But I think we can retire that now. That is completely changing, certainly from what I'm seeing. And here at Team 25, there are a lot of big announcements, and people are getting very excited about what businesses can do, what opportunities they can unlock. But I'm curious, how are you using your products here? Anything you could share around that? Yeah, it's a great question, Neil. And I appreciate the interest in the legal field and how we are

[00:03:12] using AI and how we should be using AI. So just to give you a little bit of table setting: I recently saw an industry group publish a paper, and their research showed that the top team internally at the companies they surveyed using AI was the customer service team. So probably no surprises there, lots of high volume tickets and maybe some repetitive work that they want to have their AI assistant take the first crack at.

[00:03:37] At the bottom of that list, Neil, were the legal teams that they surveyed. And it was close to single-digit user adoption. And what was really interesting is that we actually have run a survey on our legal team here at Atlassian recently. We run it quarterly. And the most recent data suggested that 85% of the Atlassian legal team is using AI on a daily basis.

[00:03:59] So I was super proud to sort of have flipped that stat that I saw in the industry, and to really feel proud that the legal team we've built here at Atlassian are not only early adopters, but we're also using our own AI tools like Rovo to really be more efficient, to allow us to focus on the high value work, and to allow our digital, sort of virtual, assistants to do the more repetitive work and to get not only better and faster results, but more accurate results using AI.

[00:04:29] And many businesses, and indeed, I would imagine, legal teams have been a little bit nervous around AI. Did you come across any of those nerves or apprehension before? And how have you overcome that? Because I suspect there's a lot of businesses that were almost sat on the fence thinking, how are we going to manage this? What are we going to do? How did you overcome this? Yeah, yeah. I mean, I think you're right that within the different industries or groups at a company, maybe lawyers are the most risk averse.

[00:04:54] That's part of our reason for being in the world: to spot issues and propose solutions so that the worst case scenario doesn't happen. So I think naturally there might be some skepticism around using AI and some of the horror stories that you might be reading about in the news.

[00:05:11] But I think overall, our approach and my personal approach is that I think as long as we're using AI responsibly and we're putting the right guardrails in place, and we're really being, I think, intentional about the use cases for which we're using AI, then I think it's absolutely a win-win. And it's an accelerant, it's a value add, and I think it's something that we as a progressive, innovative legal team need to be doing.

[00:05:38] I think if you fast forward even five years from now, legal teams that are not using AI are probably the teams that are going to be dinosaurs, right? There's going to be a skills gap. And when you go to a job interview and you're not able to talk about prompting and deep research and all the things that are coming along and unlocking that value that AI offers, I think there's going to be folks that are at a disadvantage. So interesting to see how that one plays out. It really is.

[00:06:05] And we're speaking today directly after the keynotes, where Atlassian is now offering Rovo as part of its standard offering, and a collection of AI tools too. And in your industry, I've got to ask, how do you see that balance between accessibility and innovation, especially with the need for guardrails and responsible tech that you spoke about a minute ago? Where is that line between what's accessible and what's permissible? I understand it's a podcast episode almost entirely on its own there, but where do you stand on that?

[00:06:35] Yeah, it's a great question, Neil, and something that the legal team and I think a lot about from a trust perspective. And we really believe that trust across all of the Atlassian products, not only Rovo, but particularly Rovo, is a competitive advantage. And that consumers, customers are really comparing the trust narrative and the compliance postures of products side by side when they're evaluating the marketplace because we know that customers have choices.

[00:07:00] So for us, that transparency, that traceability, the explainability of AI, how the results were generated, on what data the LLM was trained, all of that is number one and top of mind for us. So as I mentioned, we are very transparent. We have documentation on our website, and anyone can pull that up: how are our models trained? What were the data sets? Do we or do we not cross-train across customers?

[00:07:28] And how do we produce auditability logs that can be reviewed to really show a human being how a result was generated? In addition to that, I would say that we are very aware that there are compliance regimes that we should be latching onto, and that those really provide customers an objective benchmark by which to prove the difference between a good AI product and a great, outstanding AI product.

[00:07:53] So things like SOC 2 and ISO compliance; here in the US, we have the NIST standard, which is a government standard. In the EU, there is the new AI Act, which was passed in 2024 and starts taking effect in 2025. So all of these, I think, are just really good benchmarks for us as a legal team to be giving that blueprint to our product teams and saying, hey, this is the customer's expectation. Here is where the law is going. That should be our baseline.

[00:08:21] Now, what you alluded to in your intro was that at Atlassian, we not only take the baseline, what we must do, but we build on top of that the things that we feel we should do that are responsible and are going to create the outcomes that are best for our customer and best for society in general. And I'm glad you mentioned that because before you joined me today, I was doing a little research on you. And one of the things that stood out to me is Atlassian often talks about doing what it should, not just what it must do.

[00:08:47] So can you share maybe a behind the scenes look at how your responsible tech principles maybe influence actual product decisions, like Rovo Chat and agents and all the stuff we're excited about here? Yeah, yeah, it's a great question. So about a year ago, we developed our responsible tech principles, which are very much aligned to the Atlassian values, things that we as employees live day to day, especially when interacting with customers and building products for our customers.

[00:09:14] The responsible tech principles really focus on things that I just mentioned around that sort of indicia of customer trust. So things like transparency, describing what we do, accountability, human centricity, really understanding, as you saw on the Team 25 floor today at the keynote, that it's really a partnership between the AI and the humans. It's not about just autopilot, right, where the AI does things alone.

[00:09:42] And then the final thing in our responsible tech principles is really about teamwork, a thing that Atlassian takes quite seriously. So all of that is to say, these are the principles around which we have built Rovo. And the great news is that we've shipped these principles out to anyone who wants to use them, to other companies, and really feel like we are trying to teach the industry best practices and really hoping that others can learn from our experience.

[00:10:07] So what we did is we took those principles and we moved them into a template so that any product team, any legal team can take that and really use it as a checklist in how they can build smart, ethical, and responsible AI. So that easy to use guide will just allow cross-functional teams that are working on building AI to get a shared understanding of how their AI projects align with the principles and where they don't.

[00:10:33] So really, that sort of checklist, that audit log, we have found to be really helpful in developing our own products. And a word that we've both used a few times today is trust. And I think one challenge that people listening, especially executives, face is trusting their AI teammates. There seem to be a few trust issues there, especially in regulated industries.

[00:10:53] So what advice would you offer the business leaders listening that may be a little bit hesitant to bring AI deeper into their workflows due to those legal and reputational concerns? We've kind of been here before when social first arrived, but how do you see this evolving? Yeah, I guess I would have two bits of information or recommendations for folks. I would say first, experiment. Start small. Measure results. Measure return on that investment.

[00:11:19] And then I think that iterative process will allow individuals or companies to get more comfort and understand sort of where their comfort level is. So that would be the first bit of advice. The second one, I think, is to do your research, right? And to understand that probably not all AI products are created equal, and to really understand your use cases, right? And what is going to be fit for purpose. Some use cases are going to be very high risk, where you're making decisions about individuals and humans' lives.

[00:11:47] So things like applying for credit, or in the judicial sense, who's eligible for parole, education decisions, health care decisions. Those are all things that I would consider to be very high risk. There's going to be other use cases, like drafting a weekly team update using AI, or summarizing or translating. Those are going to be things that I would consider to be low risk use cases. And so, again, I think that should inform a risk tolerance. So the things that are high risk, tread a little bit more carefully.

[00:12:17] And the things that are lower risk, I think you're off to the races. And you should feel that the AI tools are really your friend and your coworker. And with so many new AI tools like deep research and Rovo agents that we're seeing here, especially when they're all pulling rich organizational data, I've got to ask from a legal standpoint, how do you ensure that privacy, user permissions, and IP protections remain ironclad without limiting innovation?

[00:12:44] I would imagine it can be quite a tricky balance. Yeah. I mean, I think part of it is really understanding as a legal team who your stakeholders are in the creation of the product. So one of the things that I'm really proud of here at Atlassian is that about a year ago, we stood up what's called product council. So it's really a dedicated team of lawyers that sits within our R&D team and really are their team members.

[00:13:10] They're more thought of as part of the product team than they are even the legal team at this point. And not only are they able to sort of advise the business in how to produce, like you say, the best responsible tech AI, but also really able to act as generalists and be embedded in these product teams. And then they can then go out to the subject matter experts within the legal team, your IP specialist, your privacy specialist dealing with data privacy, data controls,

[00:13:39] your commercial team that might be writing the terms and conditions and the product documentation. So that hub and spoke model that we've stood up in the form of product council, where the generalists sit within the business, but then they have their incredible close connections with the specialists, I think allows us to not only right size the outputs that you're talking about for customers, but also allows us to ship products much faster because there's a lot less stakeholder management of a product manager

[00:14:08] having to talk to, say, eight different lawyers given their specialty. They just talk to one person and that person gives them a full soup to nuts product review. So that's part of, I think, the innovation engine that you're seeing with Atlassian and why we're able to innovate and sort of have this hyper cycle of shipping great products. Yeah, 100% with you. And we mentioned regulation a few moments ago, and I do suspect one of the reasons many businesses are a little bit cautious about going in

[00:14:34] is they don't want to go all in now only to chase compliance in 12 to 18 months. So what role do you see Atlassian's legal and regulatory team playing in helping to shape AI features proactively, before those regulations we talked about land? And how are you embedding legal foresight into product development to be more proactive rather than reacting afterwards? Yeah. So in general, Neil, I would say that we start from a principle that Atlassian supports smart regulation in the industry.

[00:15:04] We want to have a proactive, productive dialogue with governments around the world. We do that directly with our own team. We have a policy team, a government affairs team here at Atlassian. But then we also partner with our industry groups. So we are a member of the Business Software Alliance, BSA. That's a global organization that represents enterprise SaaS companies globally. And then also the Tech Council of Australia, given the large footprint that we have in the Australian market.

[00:15:33] So together, I would say, with that sort of partnership between industry trade groups, the legislators that are writing the laws, and the industry itself, we really feel like we have the right team put together to come out with the best outcomes when it comes to smart regulation. Now, no surprise, I think, to you or any of the listeners that the rules around AI regulation are still being written today. And they differ quite drastically, I would say, around the world. You have the European market, which is leading the pack.

[00:16:01] I would say it's the world's most comprehensive regime. We just talked about the EU AI Act. In the US, we don't really have any unified national framework. We, in fact, are internally tracking 700 proposed pieces of state legislation around AI. So just to give you a sense of how decentralized I think the landscape looks here in the United States. But overall, I would go back to that risk analysis that I gave you earlier, where my hope is that legislators are looking at the highest risk use cases of AI,

[00:16:31] starting there and then sort of working their way down through really taking a smart approach and working with companies like Atlassian in the industry to truly understand the technology, given the fact that most folks that are writing the laws are not necessarily developers or technologists. We really want to be able to shine a light and help folks understand the subject matter that they're hoping to regulate

[00:16:57] and do it in a way that's going to be innovative and allow industry to continue to innovate, but also protect the stakeholders and, again, make it a society that we want to live in. And slightly off topic, I'm curious. We are seeing a rise in global conflict. Many countries and even entire regions are beginning to discuss things like data residency, data sovereignty, what it would mean to their business. Potentially, the U.S. could ensure that that data only resides in the U.S. The EU might move away from U.S. tech companies.

[00:17:27] Are you getting any questions or concerns around this, or is it too early to discuss at the moment? Yeah, I appreciate you sort of seeing around corners. And I think from my perspective, Neil, it's probably too soon to know exactly where things are going to land. It seems like probably headlines are changing by the minute here in Anaheim. But I would say that overall, it's something where we are in direct conversations with our customers.

[00:17:54] We are in direct conversations with legislators around the world. And there really should sort of be, I would say, a motto of no surprises, that we want to understand sort of where customers expect their vendors, their strategic vendors like Atlassian, to come out when it comes to trust and compliance. And then also on the other side of the coin, working with legislators around the world to really come up with legislation that's going to work for both industry as well as protecting the consumer

[00:18:22] and really trying to land in a spot that's going to be win-win both for industry and government. But I guess my takeaway here, Neil, is that there's no homogenous sort of use case particularly around AI and that it really comes down to seeing shades of grey and understanding, hey, this law has to be fit for purpose given the diversity of the ways that technology is deployed in the modern world.

[00:18:48] And looking ahead with a more optimistic lens, how do you see the global regulatory patchwork continuing to evolve? And is there anything businesses and tech leaders should be doing today to future-proof their AI strategies while also remaining ethically grounded? Any takeaways on that? Yeah. So as I mentioned, I think at the national level, the rules are still being written, specifically the rules that govern AI.

[00:19:16] Don't forget, though, that there are existing laws that also apply to AI today. So you have things like the IP laws that you mentioned, privacy laws, consumer protection laws. All of those, I think, are still applicable today, and some of them, frankly, are still being litigated, right? Around, sort of, New York Times, Getty Images. Those are all open, pending litigation when it comes to U.S. copyright law. On top of that, I guess a third bucket I'd mention, Neil, is sort of soft law or compliance.

[00:19:46] So as I mentioned, you have SOC 2, you have your ISO, and even in the EU, there was something called the AI Pact, which was a voluntary pre-compliance regime that Atlassian complied with. And those are, I think, really helpful, though optional. But I think they're just objective frameworks for companies that are building AI to just map onto something that is not only auditable,

[00:20:10] but just has a badge of confidence when you go to a customer and you say, hey, this has SOC 2 compliance, this is NIST compliant. It's just a very, I think, efficient way to show your trust brand in a way that's going to be universally recognized. And being here at Team 25, somewhat of a sensory overload, you're doing so many interviews, sessions, keynotes, conversations with customers. If you put all that into one big melting pot, what are you going to be thinking about on that journey? What are you going to be taking away?

[00:20:39] And also, what would you like everyone listening to take away from the event too? Yeah, it's a great question. And there's just, there's so much, I think, enthusiasm. There are so many great product announcements, just customer energy on the floor here in Anaheim. It's just, it's awesome to be here. And I'm glad you're feeling that as well.

[00:20:57] My future excitement, I think, is around just the ability for all of us to unleash the ability to do work in a smarter way, in, I think, a more satisfying way. And all of that is, I think, unlocked with AI. The other thing that I want to address is that I think AI really presents this really important ability for us to create new loops of learning. I mentioned that I come from a family of educators.

[00:21:25] That's why I got into journalism before law school. And I think there's just this opportunity for the legal operations field. That's something that is, I think, only going to become bigger and more strategic within companies of how do you get AI into the hands of your legal team. Other areas like ethics and risk management, I think, are only going to be getting bigger and hotter. So, all to say that I think the future is really exciting and I'm excited to be on that rocket ship. Well, exciting times ahead.

[00:21:55] There's so much more to see and do here at the event, which I'll be reporting back on. I know you've got to dash in a moment, but just thank you for stopping by today. Thanks, Neil. It's been great. So, a big thank you to Stan for joining me here at Team 25 and sharing not just what Atlassian is doing with AI, but how and why they're doing it. And I think it's clear that responsible innovation isn't an afterthought.

[00:22:19] It should be baked into the company's DNA, whether it be embedding product council inside engineering teams or championing transparency, traceability and trust. At Atlassian, they seem to be setting a thoughtful example for exactly what it means to innovate responsibly in an AI-powered era. And as Stan reminded us there in the conversation, it's not just about doing the minimum to stay compliant.

[00:22:46] It's about doing what you should do to build durable, human-first technology. And in a world where regulation is still catching up, I think a mindset like this could be the difference between success and failure. So, if this conversation today gave you something to think about or even rethink around AI governance or trust in tech, I'd love to hear your thoughts. Please, email me.

[00:23:18] Keep those thoughts coming in and I'll return again tomorrow with another guest. How's that sound for a deal? Good answer. I will speak with you all again bright and early tomorrow. Bye for now. Bye for now.