The Lucid Software Playbook For Aligning People, Process, And AI
Tech Talks Daily · April 08, 2026 · 3475 · 31:07 · 22.48 MB


How do you bring people together to do better work when everything around them feels increasingly complex, distributed, and uncertain?

In today's episode, I sat down with Jessica Guistolise from Lucid Software, and what struck me straight away was her belief that work has always been a group project, even if many organizations still behave as though it is not.

Jessica shared how much of the friction we experience at work comes from misalignment, unclear expectations, and a lack of shared understanding. When teams are spread across time zones, systems, and now AI-powered workflows, those gaps only widen. Her perspective is simple but powerful. When people can actually see the work, rather than interpret it through documents, meetings, or assumptions, something shifts. Conversations become clearer, decisions become faster, and collaboration starts to feel human again.

We also explored how visual collaboration platforms like those from Lucid Software are helping teams move away from scattered tools and disconnected workflows toward a more unified way of working. Jessica described it as having everything on one workbench, where teams can brainstorm, plan, and execute without constantly switching context.

What really stayed with me was her focus on inclusivity in collaboration. Not everyone contributes in the same way, and visual environments can create space for different thinking styles, whether someone is outspoken, reflective, or somewhere in between. That idea of creating a shared language across teams, roles, and even personalities feels increasingly relevant in a world where communication often breaks down.

Of course, no conversation right now would be complete without talking about AI. Jessica offered a refreshingly honest view. There is uncertainty, and there should be. But rather than avoiding it, she believes leaders need to make AI visible, map how it is used, define where human judgment matters, and encourage teams to experiment openly.

One of the most interesting ideas she shared was reframing mistakes as early learnings. When teams feel safe to test, fail, and share what they discover, progress accelerates. When fear or blame enters the picture, everything slows down.

We also touched on AI literacy and what it really means in practice. For Jessica, it comes down to clarity. Clear workflows, clear guardrails, and clear expectations about accountability. AI might assist, but humans remain responsible for outcomes. That mindset, combined with leadership that actively participates in experimentation, creates an environment where people feel confident stepping forward rather than holding back.

This conversation left me thinking about how many organizations are still trying to layer AI onto unclear processes and expecting better results. Jessica's message is that clarity comes first, then technology can amplify it.

So if work really is a group project, are we giving our teams the visibility and confidence they need to succeed, or are we still asking them to figure it out in the dark?


[00:00:04] How do you help teams work better together at a time when AI is changing how ideas are formed, shared, and acted on? Well, in today's conversation, my guest is going to be joining me from Lucid Software. And she brings a thoughtful and very human perspective to a topic that often gets framed in extremes. But thankfully, we will avoid extremes and any form of binary thinking today.

[00:00:31] Because instead of treating AI as a threat, my guest will make the case for seeing it as a collaborator. Something that can support people, sharpen thinking, and help teams move from confusion to clarity. And we'll also talk about why visual collaboration matters more than ever. And how leaders can build trust through transparency and AI literacy.

[00:00:55] And why safe experimentation may be one of the most valuable cultural shifts that any business can make right now. So, how do you create an environment where people feel empowered to work with AI? To experiment rather than feel anxious about what it might replace or what they might be blamed for if it goes wrong? Well, the answer to this and many more questions will be coming from my guest right now. So, let me introduce you to her.

[00:01:26] So, thank you for joining me on the podcast today, Jessica. Can you tell everyone listening a little about who you are and what you do? Sure. Hi, I'm Jessica. Nice to meet you all. Thank you for having me, Neil. It's an odd title, but I'm a senior evangelist at Lucid Software. My background is in agile consulting and professional coaching. But really, what I'm passionate about is helping people, teams, and organizations find more humane and creative ways of working together.

[00:01:56] I'm just wildly curious about how we can use ourselves and technology to bring ourselves closer together and collaborate, rather than drift further apart. Because I think technology has been pulling us further and further apart, and really, I think there's a possibility of us coming closer and closer together. I love that. And you've got an incredibly cool job title there. And behind that, there's a great origin story as well.

[00:02:20] So, Jessica, for anyone meeting you or hearing you for the first time today, can you share a little about your journey into Lucid Software, and the problem you're passionate about solving around visual collaboration and work acceleration? Because I know this is a topic very close to your heart, isn't it? Sure, it is. It's so odd to me to be working for a tool company, because I've been tool agnostic my whole career as a consultant. Whatever's there, I can use. It's no big deal.

[00:02:49] But my friend Brian had actually started working for Lucid before I did. And he started telling me about some amazing things happening at Lucid. Not just about what Lucid was doing, but about the people at Lucid, the culture at Lucid, and the values at Lucid. I found all of that really incredible. And then he showed me some of the really cool things that were actually possible with the software. And I was like, what? Where has that been my whole life?

[00:03:18] And that was really, really cool. That was actually during the pandemic, and the software was doing some things that I desperately needed in my consulting practice. And then he was telling me all of these amazing things that were possible, as an insider in the organization. And the thing that really struck me is something I'd noticed early on in my career, and something I wish somebody had told me very early on: our work as human beings is actually a group project.

[00:03:44] Like, I wish somebody had told me that in school, because I probably would have been paying more attention to the fact that we do work as a group project. And oftentimes the way that we work creates confusion or misunderstanding. That results in a lot of misalignment and frustration, really, and kind of pulls us apart. And it only gets worse when we collaborate across continents, or when, as in COVID, we went boom: remote, hybrid, all of the things.

[00:04:11] And what I'm passionate about in terms of visual collaboration is when we actually see the work, when we stop guessing and we start aligning. And we can do that no matter where we are. And that really brings a sense of relief and clarity. And it really makes the work more enjoyable. And we can actually start to understand one another. And we can understand one another when we have different ways of thinking, when we have different languages, when we have different time zones.

[00:04:41] And on top of that, there's also what happens when we're able to automate or remove some of the administrivia that has bogged us down in the past. I just get ridiculously excited about the possibilities of what we can do in the future. And you're not on your own with the excitement around this either. I mean, Lucid's platform is used by more than 100 million people around the world, including teams at Google, GE and NBC Universal.

[00:05:10] So from your perspective, though, and the conversations that you're having, why are visual tools like Lucidchart and Lucidspark becoming such an important part of how modern organizations think, plan, align and even collaborate? Right. You know, we are living in a wildly complex world. Yeah. And the problems that we're solving are not getting simpler, right? They are just getting more and more complex.

[00:05:37] And the work that we do spans multiple systems, multiple time zones, multiple humans, and now, at this point, multiple AI workflows, right? And making decisions and getting alignment is really difficult. What I think Lucid helps with is providing that clarity and becoming that system of action. It's really bringing ideas into a shared visual canvas where teams can move work forward collectively.

[00:06:07] It provides not just a space for collaboration, but for collaboration, decision-making, shared understanding, historical information, and then execution. It's all in one space together. I like to think of it as having everything on one workbench. What we used to have to do is carry our work from one space to the next space to the next. But instead, we have everything on one workbench, and we just pull out our different tool sets, and everything's there all at once.

[00:06:37] So let's brainstorm. Okay, we've got one tool set. Okay, we're done brainstorming. Now we need a different tool set. We can just pull out a different drawer and we've got our next set of things that we need instead of having to pick up everything and move it to our next space. It's still in our same space. We just have a different tool set.

[00:06:56] And on top of that, I think that visual collaboration and the work that Lucid does really provides a space where every voice is able to contribute in a way that we haven't made possible before, in that it really supports a variety of thinkers. There are some studies that we did a couple of years ago about different kinds of collaborators.

[00:07:26] There are relational collaborators, people who really need to spend time getting into a relationship. There are people who can just jump in and get going right away, who have all kinds of ideas and can throw them out there immediately. And then there are introspective collaborators, people who really need time on their own, quiet time, maybe some quiet brainstorming time.

[00:07:47] You know, really what it does is bring all kinds of neurodiverse thinking into one space and allow all of that to thrive together. And so I think that universal language is really possible in a space like Lucid. It transcends thinking styles, it transcends time zones, it transcends departments, all of that in one space. Again, I get really excited. So you get me going, Neil.

[00:08:17] Yeah, no, it's easy to see how passionate you are about this and for all the right reasons. It is a universal problem that we're talking about here and the solution that you've created is phenomenal. I think we're talking at a time where there are a lot of conversations around AI replacing human work. But again, Lucid frames AI as a collaborator that augments people rather than replaces them. So if we try and bring that to life for people listening and help them understand what that actually means inside an enterprise,

[00:08:47] can you just talk a little on how you help teams see AI as a partner or something that enhances creativity, planning and decision making? Because that's where the magic happens, right? It does. It really does. I think that there's a lot of concern, I think, around AI. And I think that that's fair. Yeah. I think anybody who's not a little bit worried about it isn't seeing the whole picture.

[00:09:12] Because the thing is, everything is moving so quickly that the real answer around AI right now is that we don't know. And so part of that is that there is a whole lot of ambiguity. And in ambiguity, there's fear. We're in a moment of liminality. We really are.

[00:09:32] And the way that we navigate that, the way we help ourselves and each other in that liminality, in that uncertainty, is by working to create clarity. And the way that we can create clarity is, I might be biased, but by mapping it, by making it visual, by making it clear. By making it clear, we can do things like mapping out those workflows and being explicit about who does what, when, where, and why.

[00:10:02] So, like, here's where the human judgment comes in. Here's where we're using AI and here's why we're using it, right? Like, here's where AI can come in and draft it or organize it or summarize it or expand on your brainstorming. But then here's where the human element comes in. But let's be really clear about that. Let's make sure that we're designing those handoff points specifically and talking about it.

[00:10:29] And when we're experimenting with it, because this is all just a really grand experiment right now. When we're experimenting with it and things don't go exactly as planned, let's document that and let's learn from it and let's share those learnings with one another. Like, let's make this a space where we're not placing blame, but we're actually instead getting really excited about the fact that we just learned something. Like, inspecting it and saying, wow, we just learned something really cool. Let's make sure that we're sharing that with everybody else.

[00:10:57] So let's actually create this as a shared learning space. Again, you can make that visible to the entire organization or to the rest of the team. Instead of hiding it and thinking, I just made this really big mess, say: wow, I just made this really big mess, let's everybody learn from that. Then nobody makes the same really big mess down the road. All of this is truly because everything is moving so quickly. This really is a global mass experiment.

[00:11:25] And so we really do, in order to be successful with it, we have to make it safe enough to play with. And I think that where I've seen it be successful is when everyone is experimenting. Leaders are openly experimenting. Everyone is sharing what they're learning. Exploration is encouraged and celebrated. And everyone is part of the learning process.

[00:11:54] When it's hidden and people are afraid, that's when things get broken and damage gets done. Yeah, 100% with you. And as you said there, there are some concerns. And that is where responsible AI often steps in. When you look at responsible AI, it usually comes down to transparency and trust. So tell me a little bit more about how Lucid approaches this through training, guidelines,

[00:12:23] and the idea of an ethical use charter, almost, that clarifies purpose, data use, and boundaries. Because, again, something you're incredibly passionate about, right? Yeah, Lucid has really done an incredible job of making it safe and being safe about the use of AI. And again, I think it comes down to clarity and truly leaders going first.

[00:12:53] So it's kind of a combination of both, right? So at Lucid, we are really, truly encouraged to experiment with AI. We're given specific tools, not just, hey, free for all, you know, go at it. But really being explicit about learning and playing with, here are the things that you have access to. But then we have knowledge sharing workshops, early pilots. So specifically, here's a group that's allowed to play with a certain set of things.

[00:13:22] Leaders are involved in all of those early pilots. There's weekly consumable training. So it's not like, okay, we're going to send you off. Here's everything, right? But just weekly consumable training. And our weekly consumable training sometimes isn't even about work. Here's an AI thing that you can use for your weekend planning. So it takes the fear out of it. Like, it takes almost the barbs out of it. Like, AI can be this really big, scary thing.

[00:13:52] Or, hey, here's a fun thing that you can do with it. So it takes the barbs out of it, and there's less fear around it. And there are clear boundaries, clear guidelines. Like you said, ethical use: here's where you can use it, and here's where it could go wrong.

[00:14:13] And the other part that I think is really important, that our leaders are really, really careful about being explicit about when AI is rolled out, and again, I think this comes with an immense amount of transparency, is the purpose of the tool: how the tool is being used, the impact, and how the impact is being measured.

[00:14:40] It is about how it's being engaged and truly making sure that everyone understands that AI is being treated as a partner in our brilliance, not a monitor of our performance. It is wildly explicit that AI is not here to take our jobs. That, yes, we do want it to increase our effectiveness and our innovation and all of these things.

[00:15:11] And it's not a but. It's an and. Here's what it's not here to do. And it's very, very clear. And there's still fear around it, right? Because that's very natural with any kind of change. That's very human. We are humans, after all. And it's communicated over and over. And there's clarity and there's transparency. And it's all very visible: here's what it's here to do, and here's what we're doing. And it's repeated over and over and over that that's what we're doing.

[00:15:38] And it's actually been wildly fun, versus I think in some places it's not as fun and it can be kind of scary. And another idea that I find incredibly fascinating is the reframing of mistakes with AI from user error to early learnings. How does that shift in mindset change how teams experiment with AI and learn together? It almost removes that fear from it, I think, and also the blame culture that we see in some organizations.

[00:16:08] I'm now at Lucid, and truly, I'm really grateful that when I joined Lucid, the culture actually is what they said it is. Because AI doesn't create a psychological safety problem. It simply exposes the one that you already had. If your environment isn't a trusting environment, then any new tool is going to be perceived as a threat, whether it's AI or not.

[00:16:35] And I do believe that this is an opportunity for leaders to explicitly frame AI as a partner. And it's an opportunity for leaders to shift from those command-and-control ways of working into creating that safe space for experimentation.

[00:17:00] The Lucid value of innovation in everything we do is a core requirement of being part of Lucid. And in order for innovation to thrive, you have to break things. You have to. And user error has to be treated accordingly: you break things and then learn from it, break things and learn from it, break things and learn from it. Because innovation comes from learning. You can't stay safe and innovate.

[00:17:28] And so when you are learning, and not just learning but sharing your learnings, and focusing on what the team learns together, not who made the mistake, you're emphasizing the collective intelligence. You're focusing on continuous improvement. I come from an agile background, right? The leading indicator of a high-performing team is continuous improvement.

[00:17:56] That's where the opportunity comes from. That's where the juiciness comes from. Actually, I'll tell you: just last week we had a really great all-hands where there was an invitation, for one of our weekly AI learnings, to not just come with what worked, but come with what isn't working, and come with your ideas that you don't have any idea whether they're going to work or not.

[00:18:23] Because if they don't work today, they might work tomorrow, or next week, or the week after that. Everything's moving so quickly that we want to hang those things on the wall, just in case they might work a month from now, because we're going to take them off the wall and see if they work at some other point in the future. Go dream big and see what we can come up with: that's kind of the charter right now. Dream big. Love that.

[00:18:47] And we're recording this in March 2026, where AI literacy is becoming just as important as digital literacy too. So what is Lucid's approach to helping leaders educate teams, businesses, and people listening around the world about bias, data limitations, and ethical implications? What do all those look like, and why is this foundation so important for organizations that are adopting AI right now? Tell me more about that. Yeah. Yeah.

[00:19:17] I love the word literacy. I believe that responsibility comes from clarity. Yeah. And I'm a true believer that human beings want to do the right thing. I don't think anybody's waking up and going, I want to do something inappropriate. And the best thing that everybody can do is provide really good guidance and make expectations and processes visible. Visually mapping your AI workflows. Again, I'm biased, but a Lucid canvas is really good for that. Mapping out your policies, defining your steps.

[00:19:47] Here's where the person does this thing, and here's where AI does this thing. AI generates this draft, and then a human being comes in and does the XYZ thing. Here's where the human looks for bias. Here's where the AI comes in. Here's what the workflow looks like, so that everybody has a clear understanding of where their accountability comes in.

[00:20:11] And if things don't work, then we can actually look at where in the workflow it broke, and where the human needs to step in a little bit further. Establishing those clear guardrails really will help protect data, privacy, and sensitive information, and make sure that every output is fact-checked.

[00:20:34] Because ultimately, AI is not accountable for any kind of final work, right? We are. And so any kind of strong foundation has to be mapped out and understood clearly. All of that comes from transparency and shared learning, and from being clear about boundaries and ethical expectations. And what that does is allow for safe experimentation.

[00:21:02] It's like, I think for a long time we've had these really strong holds on safety, which is important. Really, really important. And humans are smarter than we think they are. And I think it's time that we start treating our workspaces as if the human beings are really smart.

[00:21:25] And so instead of the grip necessarily that we've had on them, we can start treating our experimentation labs as labs where we have a professor making sure that everybody is experimenting, not blowing up the building. But that everybody's playing with their beakers in a safe way.

[00:21:52] But that everybody has access to play and experiment, rather than: here's your one specific thing that you're going to put into this one specific bottle, but you can't touch anything else. I think that's the difference in what we're working with now. And I think any organization that isn't starting to allow that to happen is not going to last very long, because companies are spinning up over weekends right now. They don't have to wait.

[00:22:21] They don't have to go through six, nine, or 12 months of waiting to see if somebody can play with something. Yeah, that's such a great point there. There are companies spinning up every weekend, and they don't have to wait for all that corporate red tape, the meetings, and the blame culture that we mentioned earlier.

[00:22:45] And listening to you today, your passion for creating a culture of safe experimentation, where employees can test AI responsibly without that fear of reprimand, really comes through. And there'll be a lot of businesses listening that maybe haven't got this kind of corporate strategy in place yet, and they know what they need to do. But getting there is often quite a different story.

[00:23:06] So from your experience, what are the practical steps that listeners and organizations can take to create that kind of environment, and the mindset shift required to succeed, ensure they don't get left behind, and trust their people to experiment more? What steps should they be taking? Well, I think step one is clear expectations. Teams do their best work when they understand the rules.

[00:23:31] We talked about data privacy and ethical use, and at the same time, teams need to know that they do actually have the freedom to explore, again, within those defined guardrails. On top of the clear expectations, what I've seen work is dedicated spaces for knowledge sharing. Things like Slack channels, or just team rituals, whether that's weekly learnings or things like that.

[00:24:00] Places, spaces to practice, to share, to ask questions, to talk about things that didn't work. I really do love the "here's what I failed at this week, here's what didn't work for me this week." This really normalizes trial and error. Like, really: we tested this, it didn't work, you try it, see if it works for you. That kind of a thing.

[00:24:23] And again, I really think that leadership sets the tone when senior managers participate, when they share their own experiments. We've got leaders that talk about what they're experimenting with AI on the weekends and in their spare time and how they're playing with it and how curious they are and how excited they are. And that really, it permeates the rest of the organization in a way that really gets everybody super excited.

[00:24:51] And creating that space really makes a huge, huge, huge difference. The psychological safety to do this truly, truly, truly is the foundation. And everybody's willing to try when somebody else goes first, especially when leaders are the ones who go first and say, it didn't work, and I'm doing it again. Love it. And finally, Lucid's Accelerators and the addition of airfocus are helping teams connect strategy, planning and execution and so much more.

[00:25:20] But for people listening who are hearing about Lucid's Accelerators for the first time, can you share a little bit more about them and what excites you about these two? Here's what I'm really excited about. Misalignment is wildly expensive for all organizations. And it happens quickly, and it's happening faster and faster, especially with the addition of AI. We see product leaders everywhere spending hours just navigating conflicting interests with stakeholder alignment. I mean, hours and hours and hours.

[00:25:49] And when we can integrate things like AI-powered prioritization, we can bring structure really quickly to massive amounts of chaos. We can help teams connect that big-picture strategy directly into their daily execution. Here's how we know that the work we are doing directly impacts where our organization is headed, our vision, right?

[00:26:17] We are constantly focused on the work that actually truly matters. And we are constantly making sure that we are pivoting based on the feedback being provided by our customers. We are able to get an entire organization into a shared visual environment where alignment happens really quickly. Decisions are able to be made with confidence based on actual data. That gets me really excited.

[00:26:45] I'm super passionate about not making tummy decisions. I think for a long time we were all wandering around saying, my gut says. That's why I call them tummy decisions. But instead, we can actually make decisions based on real information. And that's what Lucid helps us to do. It really turns complexity into clarity and accelerates the work being done, based on real information.

[00:27:10] I think we also help teams establish a really strong foundation for AI to operate effectively. Again, we talked about this: really establishing those clear workflows, the systems, the ownership. Actually documenting who's doing what, when, where, and why is really important. I think we did a study that found something like 16% of organizations say that their workflows are well documented. That's terrifying.

[00:27:40] Our workflows and systems actually have to be well documented for AI to work effectively, for reliable AI use. If our systems aren't documented really well, then we are going to let AI loose into our organizations, and we're going to go the wrong direction real fast. Faster than ever. So we need to do that really well, and Lucid is going to be able to help us do that quickly.

[00:28:04] And so, by combining visual collaboration with AI, Lucid can document how work gets done, align around that shared context, apply AI directly into workflows, and actually get things done effectively. I don't know, I get all excited about all of these things. Like I said, you pull a string and I can just talk about this stuff forever. Brilliant. I love how Lucid's agility accelerators are helping all teams in any industry, no matter what framework they use.

[00:28:33] I urge people to go check that out, and check out some videos online of it in action as well. But for everyone listening, where would you like me to direct them if they want to find out more information on anything we talked about today, or connect with you or your team online? Where should I point them? Sure, you can connect with me on LinkedIn. I have a very unique last name, so you can find me for sure. Or lucid.co, Lucid Software on all the socials. Awesome.

[00:28:59] Well, I think one of the things that stands out today is your passion for the industry and all the work that you're doing there. It's so important to continue to frame AI as that collaborator, emphasizing augmentation rather than replacement. Transparency, with training and ethical use charters, will maintain that trust, covering purpose, boundaries, and everything in between. And most importantly of all, let people experiment and remove blame culture from everything.

[00:29:28] I think there were so many great points in your conversation today. But more than anything, thank you for your time today and for sharing your insights and your passion for this topic. Really appreciate it. Well, thanks for having me, Neil. It's been great fun. One of the things I enjoyed about this conversation with Jessica today was the reminder that successful AI adoption is as much about culture as it is about technology.

[00:29:53] So from ethical use charters and AI literacy to visual workflows and safe experimentation, this was a discussion about helping people feel informed, included and confident, especially as new tools continuously arrive in the workplace. And she also offered a strong case for why leaders need to go first, why they need to set clear boundaries and create the kind of environment where early learnings are shared rather than hidden.

[00:30:22] So if businesses can get that right, AI becomes far more useful as a partner in better thinking, better planning and better collaboration. It's a win-win, right? So if you want to learn more about Lucid Software, head over to the website, and I'll include Jessica's LinkedIn as well. And as always, I'd love to hear your thoughts. Pop over to techtalksnetwork.com and let me know. Are enough businesses treating AI as a collaborator?

[00:30:51] Or are too many still leading with fear? Let me know and I'll return again very soon with another guest. Speak to you then. Bye for now.