The AI Training Crisis: Why Corporate Investments Fail Without Employee Education
Executives are spending millions on artificial intelligence tools, yet a critical gap persists: companies often fail to formally train their workforce. This oversight is not a minor detail; it is the difference between seeing meaningful returns on AI investments and watching the money evaporate. Recent conversations with leaders at top software firms reveal a set of specific missteps, and workable remedies, that matter to every organization navigating the rapid pace of AI adoption.
Cultural Shift in AI Adoption: Moving Beyond Tool Deployment
The underlying challenge in productive AI deployment is fundamentally cultural. Integrating AI is frequently mistaken for a simple technology upgrade, when it actually requires an organization-wide cultural shift. As leaders in enterprise collaboration software have observed, effective AI use comes from treating AI as part of company culture, not just as a suite of new features.
Teams that show high AI engagement deliberately create "cultural moments": formal and informal occasions that give employees both the permission and the expectation to experiment with and apply generative AI. These include visible leadership support, internal documentation of use cases, and regular communication of real outcomes and lessons learned. The takeaway: AI enablement is as much about organizational change management as about the technology itself.
AI Training Gaps: Security, Domain Expertise, and Nondeterminism
A common blocker to AI education is the perceived risk to data and intellectual property; companies delay training while working out safe access protocols. Established playbooks now exist to address these concerns, however, so technology risk is no longer an excuse for inertia.
Traditional AI training that offers only generic instruction on prompting yields shallow results. Deep business value emerges when companies tailor education to domain-specific processes and subject matter. And because generative AI is nondeterministic, its outputs vary from run to run, so training must include expectation management and techniques for grounding models in the company's own definition of "a good job." Employees should learn to supply enough process documentation and procedural knowledge that the AI can produce outcomes aligned with organizational standards.
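To make the grounding idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the quality criteria, the process text, and the call_llm stub stand in for whatever chat-completion API and internal documents a company actually uses.

```python
# Minimal sketch: anchor a prompt in a company-specific definition of
# "a good job" plus procedural knowledge. All content below is invented
# for illustration; call_llm is a stand-in for any chat-completion API.

QUALITY_BAR = """A good variance report at our (hypothetical) company:
- reconciles every line item against the ledger export
- flags variances over 2% with a one-sentence explanation
- follows the standard monthly template
"""

PROCESS_DOC = """How we do this work, step by step:
1. Pull the ledger export.
2. Match invoices to ledger entries.
3. Investigate unmatched items.
4. Draft the variance summary.
"""

def grounded_prompt(task: str) -> str:
    """Prepend the quality bar and process doc so the model is anchored
    in company-specific expectations rather than generic defaults."""
    return f"{QUALITY_BAR}\n{PROCESS_DOC}\nTask: {task}"

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (OpenAI, Gemini, Claude, etc.)."""
    raise NotImplementedError

# Usage: draft = call_llm(grounded_prompt("Draft October's variance summary."))
```

Because the model is nondeterministic, the same grounded prompt can still produce different drafts; grounding narrows the range of outputs toward the company's standard rather than guaranteeing a fixed one.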
Grounding AI in Data and Process: The Last Mile of ROI
Most enterprises have focused on data integration, making company data accessible to language models through straightforward interfaces. Yet grounding AI in raw data alone is not enough: data does not tell an AI system what the workflow is or in what sequence business operations run.
The real value comes from pairing accessible datasets with process documentation: explicit workflow steps, reconciliation procedures, and company-specific best practices. An AI cannot handle financial reconciliation, for example, without a clear procedural map. Such process documentation typically has to be distilled from multiple knowledge holders and then provided to the AI system alongside the databases. Only with both data and workflow context can AI deliver consistently reliable, valuable output.
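A sketch of what "data plus process" can look like in practice, using the reconciliation example. The workflow steps and CSV inputs here are hypothetical; the point is that the procedure travels with the data into the model's context.

```python
# Hypothetical sketch: pair raw data with an explicit process map before
# handing a reconciliation task to an AI system. The data alone says
# nothing about the order of operations; the workflow list supplies it.

WORKFLOW = [
    "Match each wire transfer to an open invoice by amount and date.",
    "Escalate any transfer still unmatched after step 1 to the AR queue.",
    "Record matched pairs in the reconciliation log.",
]

def reconciliation_context(transfers_csv: str, invoices_csv: str) -> str:
    """Bundle the procedure and both datasets into one grounded context."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(WORKFLOW, 1))
    return (
        "Procedure (follow the steps in order):\n" + steps
        + "\n\nWire transfers (CSV):\n" + transfers_csv
        + "\n\nOpen invoices (CSV):\n" + invoices_csv
    )

# Usage with made-up data:
print(reconciliation_context(
    "id,amount,date\nwt-1,5000.00,2026-01-10",
    "invoice,amount,date\ninv-9,5000.00,2026-01-09",
))
```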
Measuring AI Performance: Quantitative Benchmarks and Rapid Experimentation
Given the relentless pace of AI platform upgrades, blanket approaches to measurement no longer work. Companies must isolate central metrics for each business function and identify concrete quantitative signals: time-to-completion or error reduction for software engineering teams, for example, or speed-to-market for product teams.
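As one illustration of such a signal, the sketch below compares median task completion time before and after a tool rollout. The numbers are invented; the point is to track a quantitative measure rather than sentiment.

```python
# Hypothetical sketch of one quantitative signal: the change in median
# time-to-completion for an engineering team before vs. after an AI
# tool rollout. All numbers below are made up for illustration.
from statistics import median

def speedup(before_hours: list[float], after_hours: list[float]) -> float:
    """Fractional drop in median completion time (0.25 means 25% faster)."""
    b, a = median(before_hours), median(after_hours)
    return (b - a) / b

print(speedup([12.0, 9.5, 14.0, 11.0], [8.0, 7.5, 10.0, 9.0]))  # ~0.26
```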
Leading organizations also implement internal "fast path" experimentation policies that let employees test new AI tools quickly without exposing sensitive data. These cultural and operational frameworks not only accelerate learning; they provide the agility needed to keep pace as AI systems evolve weekly or even monthly.
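One way such a fast-path policy might be enforced is a pre-flight check on anything sent to an unvetted tool. This is a toy sketch; the patterns are illustrative only, and a real deployment would rely on proper data-loss-prevention tooling.

```python
# Toy guardrail for a "fast path" experimentation policy: block payloads
# that obviously contain customer data or PII before they reach a new,
# unvetted AI tool. Patterns are illustrative, not production DLP.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "customer record tag": re.compile(r"\bcustomer_id\s*[:=]", re.IGNORECASE),
}

def fast_path_ok(payload: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons) for text headed to an experimental tool."""
    hits = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(payload)]
    return (not hits, hits)

allowed, hits = fast_path_ok("customer_id: 4521, reach me at a@b.com")
print(allowed, hits)  # False ['email address', 'customer record tag']
```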
Creating a Culture of AI Learning and Documenting Success
Establishing AI education goes well beyond periodic training sessions. Companies leading in AI adoption routinely spotlight learning moments at all-hands meetings, in staff briefings, and on internal channels. The practice spreads knowledge, encourages bold experimentation, and speeds the diffusion of tactics that work.
For systematic improvement, organizations must also give employees dedicated time and space to upskill in AI, whether through hackathon sessions or allotted project hours. Ad hoc, off-hours learning cannot sustain broad adoption.
Strategic Recommendations for Sustainable AI Education
The most successful AI training programs operate on three levels:
- General AI awareness: basic concepts and prompt-engineering techniques
- Domain-specific training: tailored scenarios and workflows for each business function
- Grounding in data and process: pairing accessible company data with explicit documentation of how work gets done
Sustained business value requires moving from generic instruction to specific, pragmatic training built for each role. At the same time, nurturing a culture of experimentation and peer learning is what lets knowledge gains scale.
Conclusion
Without targeted training, multimillion-dollar AI investments fall short of their potential. Well-documented business processes, quantitative performance benchmarks, and a strong internal learning culture are the backbone of realizing AI's business impact. Forward-thinking decision-makers should rely less on one-size-fits-all solutions and focus instead on detailed, deliberate education and documentation, so that AI investments pay off today and for years to come.
Original episode: Ep 672: The AI Training Crisis: Why Companies Are Spending Money on AI but Not Educating
Topics Covered in This Episode:
The AI Training Crisis in Enterprises
Company Investment in AI vs. Employee Education
Cultural Shifts Required for AI Adoption
Importance of Domain-Specific AI Training
Grounding Large Language Models With Company Data
Process and Workflow Documentation for AI Success
Rapid Experimentation With New AI Models
Measuring ROI and Outcomes of AI Training
Creating Cultural Moments for AI Learning
Preparing Employees for Fast AI Technology Changes
Episode Transcript
Jordan Wilson [00:00:47]: I literally can't tell you the number of times that I've talked to business leaders who have spent, or whose companies anyway are spending, usually millions of dollars on AI. Yet they haven't formally trained their people. And it's almost baffling to me. Right? Because here we are with this generative AI technology powered by large language models, arguably the biggest technological shift ever, and it changes almost daily. Yet why aren't companies investing in their people to make sure that they understand what the technology does, understand what it can do, and the cultural and process changes needed to actually get a return on AI? So that's what we're gonna be talking about today, going over the AI training crisis and why companies are spending so much money on AI but not spending the time and the resources to educate their people. Alright.

Jordan Wilson [00:01:48]: I'm excited for today's show. I hope you are too. What's going on? My name is Jordan Wilson. Welcome to Everyday AI. This is your daily livestream, podcast, and free daily newsletter helping everyday business leaders like you and me not just keep up with the AI changes, because they're happening literally every single day, but make sense of them and grab the important insights to grow our companies and our careers. So it starts here with the unedited, unscripted livestream podcast. But to take it to the next level, make sure you go to our website, youreverydayai.com. There, make sure you sign up for the free daily newsletter.

Jordan Wilson [00:02:19]: We're gonna be recapping the highlights from today's podcast as well as all of the other daily AI news you need to get ahead. Alright. You don't gotta listen to me rant about this one; I've done that enough. I'm excited for our guest today. So livestream audience, please help me welcome to the show Dan Lawyer, the chief product officer at Lucid Software. Dan, thank you so much for joining the Everyday AI Show.

Dan Lawyer [00:02:44]: Jordan, thank you. I'm thrilled to be here. I'm pretty excited for a chance to talk with you and to share some thoughts with you and your audience.

Jordan Wilson [00:02:51]: Alright. So before we get into the topic, and, yeah, this is gonna be a fun conversation, I think, tell everyone a little bit: if they're not aware, what does Lucid Software do?

Dan Lawyer [00:03:00]: I'd love to. Lucid Software, we are a work acceleration and visual collaboration platform, used by more than 100,000,000 people around the world. So a lot of people probably know about Lucid. We're on this mission to help teams see and build the future, and we do that through a portfolio of products, things like Lucidchart, which is intelligent diagramming, Lucidspark, virtual whiteboarding, and airfocus, an AI-powered product management and roadmapping platform. So a suite of products that work together to really help people solve some hard collaboration problems.

Jordan Wilson [00:03:33]: And I'll ask you this, and I'm sure we're gonna get into it a little bit. Even for you all personally, right, like you said, one of the larger companies in the world when it comes to putting AI products out there for people to use: what have you all learned internally when it comes to investing in AI products and AI offerings for yourselves and for your customers, and in training? What have been some of your biggest takeaways internally?

Dan Lawyer [00:04:01]: Yeah. There's a couple of things internally that we see. One is, it's actually much more of a cultural shift than it is just a retool of the team. And, of course, it's important to provide tools and provide space and time, but it actually has to be treated like a cultural shift, an evolution of the culture of the company, to be a company that embraces AI, knows how to use it, has expectations, normalizes it, and even has what I think of as cultural moments, where AI comes to the forefront in a way that highlights it for people and gives them permission and expectation and things like that. So the cultural change has to be very well managed in addition to, like, the security implications, the data availability, the tooling, the training that people talk about.

Jordan Wilson [00:04:42]: I think that's the biggest surprise, how much of a culture impact it actually is. So you have a deep background working in product at some large companies. I like asking people this, right, because I think sometimes you can learn through personal stories; I've shared mine plenty. But can you talk a little bit about when was the first time, if you remember, that you looked at an AI system and you were like, wow? Kind of taken aback, but almost like not taking it personally. When was the point where you were like, okay,

Jordan Wilson [00:05:15]: this piece of software or LLM just produced something that I didn't think it could, and this is maybe some knowledge that I thought I was kind of special at knowing at this level? Do you remember that, or do you have any anecdotes on that kind of point of realization?

Dan Lawyer [00:05:32]: Yeah. Well, maybe I'll do a two-part answer. The first time was actually a long time ago when I worked at ancestry.com. And back then we would have probably just talked about machine learning or stuff like that, but what we were doing is actually closer to what we think about today as AI. And what we were able to do to, like, automatically generate stories and information about people's families and help them find people was amazing. But if you fast forward to the more recent generative AI world, I think the first time was when I was able to go to an AI, something that Lucid had built, and give it a prompt, basically asking it to diagram out a very complex system. And it did it and got it, you know, 95% right.

Dan Lawyer [00:06:16]: I was like, okay, that's pretty cool. Like, I can see how that could change and speed up the way that I work. The time to understanding, to insight, was so much faster when I could do that. So that was, like, a first unlock for me. There have been many, many unlocks since then, I think.

Jordan Wilson [00:06:35]: So I kinda wanna jump to the end here, to this big AI training crisis. Because depending on the stat, the study that you look at, and there are so many, I'd say across the board most stats say that 90-plus percent of executives say that AI is a top priority. Right? And, obviously, the amount of money that companies are investing into AI is in the billions. Yet most studies show that only a third or less are properly training their employees on how to use it. Why? Why this big gap? Why is everyone saying this is the most important thing and we'll gladly spend millions of dollars, yet why are employees not getting trained?

Dan Lawyer [00:07:18]: Yeah. I think there are several gaps in there. One of the gaps for employees not getting trained is it actually takes a little bit of time for a company to think through: how do I safely provide access to the tools in a way that doesn't, you know, compromise our data, doesn't compromise our IP? There was initially a lot of fear and concern about that. The concerns are still there, but there are a lot of playbooks now for how to do that, and so that part is accelerating. The second part is, like, what should I train them on? There's a broad general training of just, well, you can give a prompt and get an answer and things like that.

Dan Lawyer [00:07:55]: That actually doesn't take you very far in being able to get your work done. You have to go deeper and think about how to train in domain-specific areas; you have to actually have a fair amount of understanding of the domain and subject matter expertise to really extract the highest value. And actually, I think one of the biggest gaps in how people think about getting value from AI is the combination of training, but also the expectations. In order to get a good outcome from AI: generative AI is nondeterministic, and businesses don't survive that very well. They need, you know, predictable outcomes. And so you have to teach people that to get good outcomes from AI, they actually have to ground the AI in what a good job looks like. You have to ground the AI in, you know, the reality of, like, this is how work gets done at our company, if you want to automate that work. And so you need to actually back people up and teach them, okay...

Dan Lawyer [00:08:49]: You have to actually have a fair amount of documentation that you can provide to the AI about how your company works and about what a good job looks like before you can get the highest value from AI. And so it takes some preparation and some forethought and some domain-specific knowledge to be able to do it well.

Jordan Wilson [00:09:06]: Yeah. And an analogy I love, especially since I interviewed the guy who came up with the Easy Button way back at Staples, who's at HP now. It seems like that's the expectation that a lot of business leaders have. Especially larger enterprises, if they have tens of thousands of employees, they're like, alright, we'll pay the twenty or thirty dollars a month for tens of thousands of people, which adds up to usually seven-plus figures annually. They're like, alright, there's the investment.

Jordan Wilson [00:09:38]: Now it's an easy button. That's wrong. Right?

Dan Lawyer [00:09:41]: Yeah. It takes more prep. Like, there is a there there; there is an outcome there, but it takes more forethought and preparation maybe than people initially thought. And we think about this at Lucid a lot. We talk about it as the last mile problem. Right? If you think of logistics, you build a bunch of distribution centers, but that doesn't actually matter unless you can get it from the distribution center to people's homes. And it's similar with AI. You can have the AI. You can license the tool.

Dan Lawyer [00:10:07]: But if you can't actually pass to the AI information about how your company works, which requires you to actually go through and document your processes and get the knowledge that's scattered across many people's heads and get it all together where people can see it... And then to make it worse, if you pass the AI a bad process, you will still get a bad outcome. Right? So you actually have to document how your company works, and then you have to refine that. And that's part of the essential training: you have to teach people that part of what they need to do to get the most value from AI is to document how they work so that they can share that with the AI, and to have good examples of good outcomes so they can share those with the AI. And so there's a fair amount of teaching and expectation setting, I think, that has to happen there.

Jordan Wilson [00:10:51]: And I'm glad you brought that up, because I felt weird back in early 2023 saying, hey, you need to talk with an AI, you need to teach it, you need to train it just like you would with an employee. Because I think back then, everyone was looking at large language models like ChatGPT or Gemini as a copilot, input-output, right, and not necessarily a coworker. Yet here we are rolling into 2026. I think it's a little different now.

Jordan Wilson [00:11:18]: I think people are looking at AI, especially agentic AI, as true coworkers. Right? How can people get through that mindset shift of, hey, this is actually something I need to sit down with, I need to iterate with, like your example, I need to ground it, not just in our company's data, but ground it in what good work looks like? How can people get to that shift? Because it is hard to treat a nonhuman thing with the human-esque characteristics of working with it, being patient, and sharing.

Dan Lawyer [00:11:49]: Yeah. And it even shows up like that. We've done a bunch of surveys, and you see gaps between an executive or a leader's mindset and how they interact with AI and how an individual contributor might interact with AI. And it really has to do with this comfort of delegating work, this comfort of, like, having a subordinate. You know, I'm very used to asking other people to do things for me and expecting high outcomes from that. I think if you take Dan twenty years ago, I didn't even think the same way. And so it takes some of that, but I think there's an evolved model to come to, and it's actually what we try to incorporate into our own products: the idea of AI as a co-collaborator, not as a subordinate.

Dan Lawyer [00:12:35]: And I think many people probably feel more comfortable with that. It's like, hey, we've got a sixth man on the team now that we can turn to, who we can trust, who can get things done, but we still have to give them feedback like with any other team member. We have to do that. And I wouldn't go all the way there maybe, but mentally, sometimes I tend to personify my AI assistants quite a bit. I tend to talk to them as if they're real people. But then I have to back them off, because of how they tend to talk to me.

Dan Lawyer [00:13:08]: Like, I worry that my AI assistants try to flatter me. And I have to tell them, look, I don't want you to flatter me. I want critical thinking. I don't need you to tell me this is good; I need honest assessment. So I actually have to say things to the AI to get it to give me more critical feedback. Otherwise, it's just telling me everything I do is great, which is not true. So there's, like, a whole work pattern that we have to think through.

Jordan Wilson [00:13:33]: Yeah. Just like humans, the AI is being a little too sycophantic to try to, you know, suck up to us. So you said something there that I wanna dig a little bit deeper on: this concept of treating AI like a subordinate, and maybe that's not the best way in the future. But real quick before we get into that, a quick word from our sponsors.

Jordan Wilson [00:15:03]: You know, Dan, I liked what you were saying about treating AI like a sixth man. So for basketball fans, or maybe if you're not a basketball fan: the sixth man is important. That's the person that comes in first off the bench, and usually they can play a variety of positions. They're good enough to be a starter, but for whatever reason, they're not a starter yet. What happens in 2026, Dan, when the models themselves are all starter-worthy, and maybe they're better than all of the starters? How do we, number one, get over that mindset shift? Right? And I'm starting to see that a lot personally: not just handing off tasks like I would to a subordinate, but, oh, this is as if I had someone running my own company, and I'm taking those big-picture answers or outputs from it. And now I'm the subordinate.

Jordan Wilson [00:15:57]: Are we getting to that point? And if so, how do we prepare for that, and how do we train for that?

Dan Lawyer [00:16:02]: Yeah, I think we are getting closer to that, but it's actually really interesting, because it's very similar to how you treat another person. Right? You have to earn a certain amount of trust. And it's the same with AI, and it's on a skill-by-skill or use-case-by-use-case basis. You have to gain trust that it can do a consistently good job at something. And part of how that trust can be built is to inspect what it's doing. It's one of my pet peeves on my team when somebody hands me stuff that's AI-generated, and I'm like, did you even read this? Like, it's not quite right. So you have to do that.

Dan Lawyer [00:16:38]: And to increase the likelihood of growing that trust and getting there: AI does so much better if it has broad context and broad knowledge of the world, but when you can ground it in specific context about your business, about your domain, about what you do, it'll do so much better. And if you can keep feeding it a constant stream of content, just like you would another person on your team, so it stays very aware of what's happening, it's gonna do better and better. So I think you gain trust on a use-case-by-use-case basis. That probably then starts to build awareness on the team. Like, hey, we've actually discovered that if we ground AI with this knowledge of how our company works, in this particular use case, it can give us a consistently good outcome. And then that insight needs to be shared across the team, and the team can start using that and get that in there.

Dan Lawyer [00:17:28]: But I'm skeptical that you'll ever get the strong outcomes you need without providing specific context to the AI.

Jordan Wilson [00:17:36]: So I wanna go a little bit deeper, and maybe this will be a little technical and dorky. One thing we keep talking about here is grounding. And, obviously, that's extremely important, right, when working with nondeterministic generative AI large language models that are, in theory, just next-token prediction. But when you ground it in your company's data, and if your data is clean and if it's organized... I'll say in 2024, that was a big part of what humans did. Right? They made sure to feed company information to a large language model, especially when we're talking about front-end chatbots. So leaving the API and the dev talk at the door here...

Jordan Wilson [00:18:20]: On the front end, now in 2025, all the major systems essentially have, you know, two clicks, and now these systems are grounded in your company's data. Whereas in 2024, that was a big part of what AI-native organizations, what the humans there, did; they were just making sure. So now, it's not solved, sure, but grounding is relatively simple and straightforward now. So moving forward into 2026, now that it is much easier to ground these large language models in your data,

Jordan Wilson [00:18:55]: how should we be thinking about working with large language models when they do have access to that? And if your data is in order, how does that change the role of your everyday business leader going forward?

Dan Lawyer [00:19:05]: Yeah. So there's still a missing piece. Right? You've got the data, but the data is not the workflow. The data does not explain, this is how you go from A to B to C to D to get the outcome that you want. And so in addition to having the data, you have to actually ground it in the process: this is how work gets done. Like, I'll be really practical.

Dan Lawyer [00:19:29]: Right? So, how at your company do you reconcile a wire transfer? Data will not tell you that. And, in fact, there's probably not a single person in your company that can tell you; you probably have to get 10 people together to answer that question. But if you can document that and pass it to the AI, then with the combination of "this is procedurally how we work, how you get it done," "this is the dataset," and "this is what a good job looks like," then you can do it. So there's actually a missing component beyond just grounding in the data: grounding it in the process, grounding in procedural knowledge of how to get things done. And you can imagine a world where you have, like, MCP servers and strong APIs between all types of systems. You still have to have an orchestration that says, this is how to progress the work.

Dan Lawyer [00:20:16]: This is the proper sequence. Now, I think over time the AI can be trained and it can learn that, but there will still be specific knowledge in a company that is their secret sauce. You know: we're better because we do it this way, and we don't want the world to know that. And so I think for a long time it's gonna be important for companies to augment the data with procedural knowledge of how work gets done, and then you'll get good outcomes.

Jordan Wilson [00:20:39]: You know, Dan, I think you've been spot on, just the amount of times that you've mentioned documentation, process documentation, and having your data in order. I think those are two keys that you can't overlook. But one of the biggest issues, I think, when it comes to educating employees is the rate of change. Right? If we just look from mid-November until now, every single week: it started with OpenAI, then it went to Google and Gemini, then it went to Claude, and now we're back to OpenAI releasing a new best model week over week. We've had that for five straight weeks now. So especially when companies are maybe using one or two systems, they're changing all the time.

Jordan Wilson [00:21:26]: And a lot of people don't think, oh, when you go from a GPT-5.1 to a GPT-5.2, I can just do things the same. Well, you can't always. How can companies possibly keep up when the technology they're using, and maybe everything they've learned, can change very quickly without very much notice?

Dan Lawyer [00:21:45]: Yeah. I think there are two critical components to that. One is you have to think through, how do you make it easy for your employees to rapidly access the new technology in a safe way? So, what is your procurement process? What is your policy around what I can install on my machine? How can I do that safely and quickly? You almost have to have, like, here's a fast path, and these are the guidelines of how you can do fast-path experimentation: you can't use customer data, you can't use PII, that kind of stuff. But we do need you to rapidly experiment with new things. So you need those fast paths of how you can get people exposure to the new things that are coming.

Dan Lawyer [00:22:33]: You need to provide them time and space and cultural moments to highlight it. So that's one piece of it. The other piece is you have to know what you're getting value from. And so you have to figure out, for every business function or domain we're trying to get value from AI in: what is the central measure of speed? For example, and I'm tempted, like, should I tell you our measures? But think of a software engineering team. Is there a number that measures whether AI is speeding you up, beyond just the sentiment? Like, an actual quantitative view of, is this new tool speeding us up? Or on a product team, a product and UX team, which to me is where I spend most of my time: how do I measure if AI is actually making us faster at getting good outcomes? And we spend a lot of time figuring that out for our company. Like, what are those quantitative measures? So that gives you a way to evaluate and say, is it just new and shiny, or is it actually creating value and speeding us up toward better, faster outcomes for our customers? And so, I think that's the combination.

Dan Lawyer [00:23:42]: Right? It's like, allow for rapid access and experimentation, but have a quantifiable way of knowing if it's helping you.

Jordan Wilson [00:23:49]: So rapid experimentation, one of my favorite things. Right? Don't spend hours or days or weeks on something if you aren't ready for plan B, plan C, or experimenting with them at the same time. And, Dan, I think what you said there about having those internal benchmarks and quantitative measurements is extremely important, especially when you're working on something a little bit more finite. But what about everyone else? What if there is no one benchmark, no one measurement on one system to see if there's a good return? For those that are looking at training their entire company, and maybe this is something that you've all learned internally, what have been some of the most successful ways that you've seen, even internally, of, hey, here are good ways that we can educate our people in a space that is changing weekly?

Dan Lawyer [00:24:44]: Yeah. So I think we create cultural moments. There are things like all-hands meetings or staff meetings, and we create space in all those places to highlight what we're learning about AI. So, in my all-hands meeting, I have an AI moment where we have people highlight various ways that they're seeing new value, or new experiments, what's working, what's not working around AI, to share knowledge broadly, because you have many people touching it and we need the knowledge to be leveraged. And so creating those cultural moments does two things. One, it shares the knowledge. Two, it gives people permission to play.

Dan Lawyer [00:25:27]: And it's highlighting, hey, this is a good job. This is somebody who went and tried something new with AI, and it did or did not work, but we're giving them airtime and highlighting it. That's what a good job is: to go do that kind of experimentation in a safe way. And so I think that is a critical thing, and pretty much any part of any company can figure out, where are the cultural moments where we give the airtime to AI, to start working on the behavioral change and help people realize it's safe to play. Now, you also have to create the space. Right? Like, expecting people to just go home and spend their after hours doing all the learning... some people will do that, but I think you have to give them time and space at work to do that.

Dan Lawyer [00:26:11]: And so, can you take hackathon-style approaches, or can you say, hey, we're gonna have half-day Fridays where you're free to experiment, or things like that? It'll be different for every company how it does this, but it should be, I think, intentional how they do that.

Jordan Wilson [00:26:25]: So, Dan, we covered a lot in today's conversation, everything from going over the cultural changes, to talking about data and process documentation, to having the right quantitative measurements internally so you can know even what to educate people on. But for those business leaders right now who are planning out their 2026 AI education, how they're gonna get it done: what's your one most important piece of advice for them to get education right in 2026?

Dan Lawyer [00:26:58]: Yeah. So, one, think of the prep work that has to be done to educate, and, like, there are layers to education. There's the broad general AI awareness. The real value will come when you get domain-specific and talk about, in this domain, in this part of my company, for this business function, this is the best way to leverage AI. And then part of that training is: in order to get the highest value, you have to ground the AI in both the data and the procedural knowledge of how things work; you'll get better outcomes. So it's, you know, getting from general to domain-specific to very pragmatic around how to get the highest returns from AI that will help. And then creating a culture around that that reinforces and supports the idea of learning and training and rapid experimentation.

Jordan Wilson [00:27:44]: Alright. Some great pieces of advice, as it's, you know, a big topic we're all trying to tackle. And, Dan, your time today, I think, helped us tackle this thing a little bit better. So, Dan, thank you so much for taking time out of your day to join Everyday AI. We really appreciate it.

Dan Lawyer [00:27:59]: Of course. Thanks, Jordan. Have a good one.

Jordan Wilson [00:28:01]: Alright. And if you missed anything, y'all, don't worry. We're gonna be recapping it all in today's newsletter. So if you haven't already, make sure to go to youreverydayai.com. Sign up for the free daily newsletter. Thanks for tuning in. We'll see you back tomorrow and every day for more Everyday AI. Thanks, y'all.