From Efficiency Tool to Thinking Partner: Redefining the Strategic Role of Generative AI
Many business leaders treat generative AI as little more than a tool for getting quick answers. This "ask a question, grab the output" pattern tends to skip deep thinking and misses the real strategic potential of large language models (LLMs). Drawing on key insights from the Everyday AI podcast, this article offers business decision-makers an actionable methodology for turning generative AI from a simple efficiency accelerator into a genuinely creative collaborator.
The key is a shift in mindset: stop chasing speed alone, and instead use structured frameworks to expand the problem space, raising decision quality and innovation through human-AI collaboration.
Beyond Prompt Engineering: Sparking Creative Exploration with Open-Ended Questions
Traditional prompt engineering emphasizes asking precise questions to get an ideal output, but that approach is inherently narrow and output-driven. Truly effective teams are shifting to a more advanced mode of interaction: starting the conversation with open-ended, abstract prompts that first widen the problem's boundaries and only then converge on a solution.
For example, faced with the hassle of logging in to streaming services at a hotel, rather than asking directly "How can we improve the TV remote experience?", abstract the problem to "optimizing the authentication flow." That reframing opens entirely new lines of association: phone-based biometrics, automatic authentication over the room Wi-Fi, cross-device single sign-on, and other previously unrelated options begin to surface.
This approach is called the Generic Parts Technique (GPT). Its core idea is to decompose a problem into its most basic functional units rather than into concrete features. By stripping away surface form, the AI can break through entrenched human assumptions and uncover hidden opportunities for improvement.
A cautionary case: one user gave an AI a logic puzzle with 15 clues; chasing speed, the model ignored 3 of the key facts and reached the wrong conclusion. The lesson: speed does not equal correctness, and thoroughness has to be actively enforced. Asking the AI to show its chain of thought and explain its reasoning is a necessary step for ensuring depth and accuracy.
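As a minimal, hedged illustration of that kind of guardrail, the sketch below builds a prompt that forces the model to restate every clue before answering; the `call_llm` helper and the clue texts are hypothetical placeholders, not part of the original article.

```python
# Minimal sketch: require the model to account for every clue before answering.
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; swap in a real model client (OpenAI, Anthropic, etc.)."""
    return "<model output placeholder>"

clues = [
    "Clue 1: ...",  # placeholder clues; the puzzle in the article had 15
    "Clue 2: ...",
    "Clue 3: ...",
]

prompt = (
    "Solve the puzzle below step by step.\n"
    "Before giving a final answer, restate each numbered clue and show how your "
    "solution satisfies it. If any clue is unused, say so and revise your answer.\n\n"
    + "\n".join(clues)
)

print(call_llm(prompt))
```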
Structured Innovation Frameworks: SCAMPER and GPT in Practice
To generate ideas systematically, combining established problem-solving frameworks with generative AI noticeably improves both the quality and the breadth of strategic output.
1. Generic Parts Technique (GPT)
- Core idea: decompose a complex system into its most basic functional modules.
- Typical uses: product design, process optimization, user experience redesign.
- How to apply it (a minimal code sketch follows this list):
  - Ask: "What does each component of this system actually accomplish, functionally?"
  - Have the AI describe each component's role in the plainest possible terms (e.g., "enter a password" → "confirm identity").
  - Recombine or substitute the abstracted functions to find alternative ways of delivering them.
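A minimal sketch of how such a decomposition prompt might be assembled, assuming a hypothetical `call_llm` helper in place of a real model client:

```python
# Minimal sketch of a Generic Parts Technique (GPT) decomposition prompt.
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; swap in a real model client (OpenAI, Anthropic, etc.)."""
    return "<model output placeholder>"

system_description = "Logging in to a streaming account on a hotel TV using the remote."

prompt = (
    "Break the following system down into its most generic functional parts.\n"
    "For each part, ignore the concrete feature and describe only the underlying "
    "function in a few words (e.g., 'enter a password' -> 'confirm identity').\n"
    "Then, for each function, suggest two alternative ways it could be fulfilled "
    "elsewhere in the user's journey.\n\n"
    f"System: {system_description}"
)

print(call_llm(prompt))
```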
2. The SCAMPER Method
A classic creativity tool that originated in the advertising industry, SCAMPER guides structured divergent thinking along seven dimensions:
| Letter | Move | Key question |
|---|---|---|
| S | Substitute | Which steps could be replaced by another technology, role, or process? |
| C | Combine | Which steps could be merged to improve efficiency? |
| A | Adapt | How do other industries handle similar problems? |
| M | Modify | Could the sequence or form of the process be changed? |
| P | Put to another use | Could existing resources be applied to other scenarios? |
| E | Eliminate | Are there steps that could be removed entirely? |
| R | Reverse | What would happen if the sequence were flipped? |
A practical example: a SaaS company struggling with high customer churn. Applied through SCAMPER's "Eliminate" lens, the AI suggested dropping the mandatory onboarding pop-ups, while the "Reverse" lens prompted the team to let users choose when to take training. These counterintuitive design changes ended up improving retention noticeably.
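A minimal sketch of driving all seven SCAMPER moves through a model, again assuming a hypothetical `call_llm` helper; the problem statement is an illustrative stand-in for the churn scenario above:

```python
# Minimal sketch: run one problem statement through all seven SCAMPER moves.
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; swap in a real model client (OpenAI, Anthropic, etc.)."""
    return "<model output placeholder>"

SCAMPER_MOVES = {
    "Substitute": "Which steps could be replaced by another technology, role, or process?",
    "Combine": "Which steps could be merged to improve efficiency?",
    "Adapt": "How do other industries handle a similar problem?",
    "Modify": "Could the sequence or form of the process be changed?",
    "Put to another use": "Could existing resources be applied to other scenarios?",
    "Eliminate": "Are there steps that could be removed entirely?",
    "Reverse": "What would happen if the sequence were flipped?",
}

problem = "A SaaS product is losing customers during the first 30 days after signup."

ideas = {}
for move, question in SCAMPER_MOVES.items():
    prompt = (
        f"Problem: {problem}\n"
        f"Apply the SCAMPER move '{move}'. {question}\n"
        "Propose three concrete, non-obvious ideas and state the assumption behind each."
    )
    ideas[move] = call_llm(prompt)

for move, idea in ideas.items():
    print(f"--- {move} ---\n{idea}\n")
```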
Keeping Humans in Charge: Retaining Creative Sovereignty in Human-AI Collaboration
As AI capabilities keep growing, organizations face a deeper risk: outsourcing strategic judgment to the algorithm. Successful teams learn to pair the AI's convergent thinking (rapidly synthesizing information) with human divergent thinking (emotional connection, value judgment) so that the two complement each other.
How do you maintain "creative sovereignty"?
- Cross-validate across models
  Run the same task through different LLMs (e.g., ChatGPT, Claude, Gemini) and compare the outputs. Diversity exposes blind spots and keeps any single model's bias from dominating a decision.
- Create psychological distance
  - Use memory-free sessions so the AI does not pre-judge based on your history and stated preferences.
  - Feed one model's output into another model and open a critical discussion: "What weaknesses might the following argument contain?" (A code sketch of this cross-model critique follows this list.)
- Interrogate meaning and motivation
  Don't settle for "what"; keep probing "why":
  - "Why are you recommending this option?"
  - "What are its core assumptions, and what if they don't hold?"
- Build a dedicated AI coach
  Create a custom GPT dedicated to coaching a specific framework (e.g., a "SCAMPER assistant") so that it becomes an internal knowledge-transfer tool for the team.
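A minimal sketch of the cross-model critique loop described above, assuming the official `openai` and `anthropic` Python SDKs are installed and API keys are set in the environment; the model names are placeholders to be swapped for whatever models are currently available:

```python
# Minimal sketch: ask one model for a proposal, then have a second model critique it.
# Assumes the `openai` and `anthropic` Python SDKs with API keys in the environment.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

task = "Propose three ways to reduce first-month churn for a SaaS product."

# First model drafts a proposal.
draft = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": task}],
).choices[0].message.content

# Second model is asked to attack the draft rather than agree with it.
critique = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Critically review the following plan. List hidden assumptions, "
            "likely failure modes, and anything important it ignores:\n\n" + str(draft)
        ),
    }],
).content[0].text

print(critique)
```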
Tangible Business Value: Four Gains Beyond Efficiency
When an organization deeply integrates structured creative frameworks with generative AI, it can achieve the following breakthroughs:
- Stronger innovation: break out of the "default option" trap and uncover strategic paths competitors overlook.
- Better risk management: proactively spot gaps in the AI's reasoning and anticipate potential failure points.
- Efficiency without sacrificing depth: concentrate human effort on higher-order analysis and value judgments while the AI handles broad exploration and initial screening.
- Better customer experience: rebuild the problem from its roots rather than patching surface symptoms, delivering solutions that genuinely fit user needs.
Action Checklist: Five Practices for Business Leaders
- Change your starting point: shift from "seeking answers" to "exploring the problem space," opening AI conversations with abstract, open-ended questions.
- Adopt frameworks: apply methods such as GPT and SCAMPER systematically, running them across multiple AI tools to widen your thinking.
- Review the reasoning: always ask the AI to show its chain of thought and examine its logic and underlying assumptions.
- Hold on to sovereignty: don't give up strategic input for the sake of convenience; make sure the final output fits your organization's values and real context.
- Create distance: use the AI's "neutral perspective" to break organizational inertia and challenge entrenched cognitive biases.
Generative AI is no longer just a quick-answer machine. Only by deliberately combining it with structured creative frameworks can its true strategic potential be unlocked: smarter decisions, more competitive business models, and measurable long-term value. Now is the moment to rethink how humans and machines work together.
— Original English Article —
Original title: Ep 675: Creative Frameworks for Problem-Solving with Generative AI
Original content:
Unlocking Creative Problem Solving with Generative AI: Practical Frameworks for Business Leaders
Business leaders using generative AI often rely on quick answers and outputs, but overlooking the deeper strategic potential of large language models (LLMs) can limit impact. This article draws from insights featured on the Everyday AI podcast and translates specific, actionable strategies for transforming generative AI from a simple efficiency tool into a collaborative partner for creative and strategic problem solving. Forget the surface-level uses—here’s how to fundamentally improve decision-making and outcomes.
Rethinking Generative AI: From Passive Output to Active Thought Partnership
When team members turn to LLMs like ChatGPT, Gemini, Copilot, or Claude, the reflex is to seek the fast answer. This shortcut often means critical thinking is bypassed in favor of expedience. The most successful organizations are reframing generative AI as a thought partner—an approach that doesn’t just speed up processes, but actively augments human intelligence and the quality of output. Using AI for deeper problem exploration leads to more creative solutions, improved team collaboration, and measurable gains in business problem solving.
Prompt Engineering and Problem Space Expansion: Beyond Simple Requests
Prompt engineering has traditionally focused on crafting direct questions for AI, often resulting in a narrow, output-driven exchange. The episode highlights a shift: using open-ended prompts and creative scaffolding to expand the problem space before narrowing in on a solution. Rather than specifying the desired answer up front, initiating interactions with abstract, generic prompts triggers AI to explore unconventional avenues—inviting options not previously considered by the human counterpart.
For example, breaking down an authentication challenge in hospitality into generic functional components (rather than focusing on features) led to new associations and workaround ideas. This process, called the Generic Parts Technique (GPT), enables more expansive ideation and exposes hidden inefficiencies or alternative solutions. When employed with more sophisticated models, business leaders can uncover opportunities for process or product innovation not visible through traditional methods.
Creative Confidence and Agency: Avoiding the Trap of Outsourcing Value
Generative AI’s increasing sophistication tempts organizations to outsource not just tasks, but core creative and strategic agency. The episode demonstrates how rapid AI-generated outputs risk bypassing human expertise and inadvertently introducing blind spots. Leaders must actively maintain agency over the output—questioning, iterating, and challenging the results rather than accepting the first (often incomplete) suggestions.
A specific case discussed in the podcast revealed that when an AI model was given an insight puzzle, it skipped critical facts and arrived at the wrong answer simply due to speed. The episode’s key lesson: speed is not a substitute for thoroughness. Structuring interactions to require AI’s “chain-of-thought” explanation, requesting reasoning steps, and iterating with multiple tools (including those with and without user memory) surfaces deeper insights and prevents critical detail loss.
Frameworks for Structured Creative Problem Solving: SCAMPER, Generic Parts Technique, and More
Instead of relying on random brainstorming, integrating established frameworks with generative AI yields measurable improvements in strategy and innovation:
Generic Parts Technique (GPT): Decomposes problems into their most abstract, functional components. Running this exercise through multiple AI tools exposes distinct avenues for improvement, enabling leaders to address core pain points instead of superficial symptoms.
SCAMPER: Originally developed for advertising and product innovation, this acronym (Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, Reverse) organizes creative thinking into structured “moves.” For example, applying SCAMPER to customer churn in SaaS might reveal overlooked touchpoints for adaptation or elimination, leading to superior retention strategies.
Importantly, using frameworks with AI does not mean surrendering creativity. Instead, pairing these frameworks with iterative prompting and cross-tool validation actively broadens the solution set and highlights biases rooted in personal expertise or organizational tradition.
Human-AI Collaboration: Managing Bias and Uncovering Meaning
Successful teams leverage AI’s convergence—the ability to synthesize information rapidly—but rely on human divergence to connect emotional context, values, and organizational priorities. AI provides options, but humans must interrogate reasoning, meaning, and relevance. Techniques outlined in the podcast, such as asking why AI arrived at a conclusion or comparing responses across different LLMs, ensure leaders remain in control of value creation and context.
Maintaining a human-in-the-loop approach means AI is a source of psychological distance—testing logic, exposing cognitive biases, and challenging assumptions about customer needs or process design. This collaboration surfaces risks, uncovers viable alternatives, and delivers solutions tailored to specific audiences or operational realities.
Concrete Business Value: Improving Decision Quality and Strategic Outcomes
The episode makes a clear case for integrating structured creative frameworks with generative AI:
Enhanced innovation: Moves organizations beyond “default” outputs, uncovering strategies competitors may miss.
Better risk management: Prevents overlooked details and identifies vulnerabilities by examining how and why AI arrives at an answer.
Increased efficiency (without sacrificing depth): Deploys time on high-value analysis versus low-impact busywork, leveraging AI for expansive exploration rather than rote tasks.
Elevated customer experience: Tailors product and workflow solutions based on broader problem deconstruction, not just surface-level features.
Action Steps for Business Leaders
Shift from “answer-seeking” to “problem space exploration”—use open-ended, abstract prompts to start AI conversations.
Employ frameworks like GPT and SCAMPER, running exercises through multiple AI engines and cross-referencing for blind spots.
Interrogate outputs by reviewing reasoning, chain-of-thought, and alternative perspectives.
Maintain agency over creativity—don’t sacrifice strategic input for speed or convenience.
Encourage psychological distance through “neutral” AI perspectives, especially when dealing with entrenched organizational expertise.
Generative AI is no longer just a fast output machine. Structured creative frameworks, when used intentionally with AI, create opportunities for smarter strategies and defensible decisions—delivering real, measurable value for those willing to rethink their approach.
Topics Covered in This Episode:
AI Agent Orchestrators as Job Title
AI Agents in Company Hiring Trends
Enterprise Reasoning Data Collection Growth
AI Driving Professional Services Pricing Crisis
Universal Basic Income and AI Job Loss
Open Source AI Models Surpassing Proprietary
Chinese AI Model Global Market Impact
Perplexity Answers Engine Business Pivot
Frontier AI API Price Drops
VC Funding Surge in Embodied AI
Advancements in AI Video Generation Tools
AI’s Disruption of Traditional Internet Models
Social Media Deepfake Misinformation Surge
Episode Transcript
Jordan Wilson [00:00:15]: Be honest with yourself. When you go into your favorite large language model of choice, whether that’s ChatChiPT, Gemini, Copilot, Claude, whatever it may be, you’re probably just looking for an output. Right? Maybe a quick answer to your question or something. Maybe you can copy paste and modify and use at school or at work. That’s probably not the best way to use it. If you’ve been listening to this show at all over the past three years, you’ve probably heard me rant over and over that you need to be using large language models as thought partners, as helping to augment your own intelligence, not just kick off and grab answers and try to move on as quickly as possible. I think those that are finding the best results both individually, on teams, and, as companies are those that are using large language models to solve problems in a creative way and to actually make their own human outputs better. So that’s the topic that we’re gonna be tackling today on everyday AI. Jordan Wilson [00:01:18]: Welcome. If you’re new here, what’s going on? My name is Jordan Wilson, and welcome to everyday AI. We We do this every single day. It’s your unedited, unscripted daily livestream podcast newsletter , helping everyday business leaders like you and me make sense of all the AI craziness that’s happening nonstop. Hopefully, help us learn it a little better so we can leverage it to grow our companies and our careers. If that’s what you’re trying to do, awesome. It starts here. But if you really wanna take it to the next level, you’re gonna have to go to youreverydayai.com. Jordan Wilson [00:01:48]: Sign up for the free daily newsletter. We’re gonna be recapping all of the important highlights from today’s show as well as bringing you all of the AI news like we do each and every day. So if you want that, make sure to go check out the daily newsletter. Alright. I’m excited for today’s conversation. Have in, very experienced fantastic guest within great background, and we’re gonna be talking, like I said, today about using AI in a little bit of a different way. But, enough of me chit chatting about it. I’m excited to bring on my guest. Jordan Wilson [00:02:18]: So live stream audience, please help me welcome Leslie Grandy, lead executive in residence in the executive education program at the University of Washington. Leslie, thank you so much for joining the Everyday AI Show. Leslie Grandy [00:02:29]: I’m such a I’m such a fan of Jordan Wilson [00:02:31]: the show. I’m excited to be on it. Fantastic. As am I. Like, excited to be on it every single day. Right? But, you know, real quick before we get into it, tell everyone a little bit about your background. Leslie Grandy [00:02:43]: Sure. I have a been in an unconventional journey. I worked for thirteen years in the film industry, and I’m a member of the Director’s Guild of America. And after I left the film industry, I got into, technology product management after getting my MBA at the University of Washington, and I worked for twenty five years at companies like Apple, Amazon, and T Mobile. And, at the 2022, I started thinking about writing a book because my experience in innovation really showed me how important it is for my teammates to have the creative confidence to really drive innovation and not just implement innovation. And I think before AI, people struggled with it, and they still struggle with it. 
But now they have a tool, and so one of the things I wanted to do is help people build that creative agency and creative confidence and use AI as a thought partner to do that. Jordan Wilson [00:03:35]: I love that. And, you know, maybe could you walk us through, you know, because we’re gonna get into some some detailed strategies and, you know, really, helpful frameworks. But, maybe could you walk us through even some of your own personal, you know, findings, I guess, through, originally working with large language models? I’ve talked about mine for many hours over the last last few years. But, you know, what was it like for you, and what were some of those, initial moments, in the earlier days of of using large language models? Leslie Grandy [00:04:06]: That it’s a great question because I really felt in compelled to integrate it into my process of writing the book, Creative Velocity. And, one of the ways I was compelled to use it was at the end of every chapter, I, after discussing a creative thinking framework, I provide exercises for people to practice with and without AI. And I thought any reader who does this is probably gonna likely put these exercises into AI to see what AI comes up with. So I had an insight problem where I had 15 facts, and the question of the, was really who lived in the Red House. Right? And so you had to use all these these various facts and insights to to back into who lived in each colored house. So I gave the problem to AI, and, it came back with an answer, and it was wrong. And I knew it was wrong because I wrote the the the exercise. So I wondered, how did it get it wrong? And it actually skipped three of the 15 facts. Leslie Grandy [00:05:01]: It took the first answer it arrived at, considered it correct, and the other one’s, other factors were easily ignored. And so in the interest of speed, it came up with the first answer, and the fastest answer wasn’t the right answer. And I think that was the biggest moment for me was recognizing that speed triumphs over, smart, and and comprehensive. And sometimes if it lasts, a little longer to get a a better answer, AI won’t necessarily take that extra time. And so that was one of the really big moments was not to trust the first output and to really question where did that output come from. And that process really helped me inform how I teach this, topic in, my Maven course and at, the UW because I think we are all primed for speed. And so we’ll go grab that first answer and run with it. Jordan Wilson [00:05:57]: Yeah. It’s it it it’s such a good point because, I I I think when people are trying to show an ROI on Gen AI, the default is to go to efficiency and and and productivity and to, you know, just do a task faster and not necessarily better. And I think what that means a lot of times is people giving up their agency. Right? But really, maybe, outsourcing to AI, one of their most valuable skill sets. So can you walk us through, you you you know, when it comes to still using and still leveraging your agency, how can you do that? What are the best practices to do that as these large language models become more and more sophisticated, more and more robust with all the scaffolding and, agentic capabilities? How can humans still lean on their own agency? Leslie Grandy [00:06:50]: Well, I that that’s an important question because I think we’re so, trained to do prompt engineering to go question answer question answer, and I think that sort of, creates a situation where we’re willing to outsource our thinking to AI. 
And so what what I’m trying to do is train people to use more open ended questions that are actually, organized as creative thinking scaffolding, as a way to navigate a problem space without jumping to the first conclusion. Because sometimes, the the first thing isn’t the best thing, or sometimes the most innovative thing is a place that lives further away from the space that you’re in, and you need to take the time to explore that. And I think time is the resource that gets, eliminated when we go into the prompt engineering mindset. Jordan Wilson [00:07:44]: Yeah. And and maybe let’s let’s talk a little bit on on prompt engineering, and I know it’s an ever changing definition. And now the, you you know, the trend is to call it context engineering, and I’m sure next year we’ll be calling it something else. But, you you know, maybe for, our audience that is maybe not as technical, why, you know, why is the engine like, the prompt engineering process important? Why is it important to to iterate and continually improve upon, what a model might spit out at first? Leslie Grandy [00:08:13]: Well, I think part of it is starting with a prompt that’s open ended, right, where the answer isn’t isn’t, pointed, by the context you’ve been given. So look in this problem space for this answer really is a narrow way of thinking. And what I’m trying to do is broaden the space of the initial conversation before narrowing it. So zooming out and asking it in more generic terms, while it still it still will happen quickly, it does allow you to invite something that you hadn’t thought of before. And so, one of the first ways to do that is a technique called the generic parts technique, where you break a problem down to its most generic functional components, not the features, but what every piece of that solution provides functionally. Because when you do that now, the space is defined in a more abstract way, and you’re able then to explore a part of that problem uniquely. So a great example of this is in my Maven course, I had a problem where, I, as a consumer, can’t stand when I go to a hotel or an Airbnb, and I can’t log in to my streaming accounts easily, and I have to use some lousy TV remote to log in. And I just think that should be simpler because there’s a million ways I could imagine it to be simpler. Leslie Grandy [00:09:26]: So when you ask AI to break that system down into its most generic parts, you start to realize authentication is the heart of that problem. And where else do I authenticate in that workflow that I could actually leverage so I don’t have to have the problem at the pain point I experience it? By talking about it in those generic terms, it helps me associate another possible authentication solution into this problem space that wasn’t there before. And so starting in a more generic way with the functionality breaks that fixedness, that bias towards a specific spot for an answer to exist and allows you to look at the problem space more expansively. I think that’s kinda getting at the point there, which is start by being more abstract and then get more finite. Jordan Wilson [00:10:12]: Yeah. And and, you you know, you bring up some great points there. And even as I, you know, myself kind of think about the shift from, you know, prompt engineering to to context engineering. Right? And what’s the big step there? Right? It’s bringing in more relevant context either to what you’re working on, your team’s working on, certain data, right, that personalizes something for you or your industry. 
You know, what role, you know, in the prompt engineering process, How does this, you know, the human in the loop role continually change, as the models change? And, what should people that are maybe not just looking for the quickest answer and are looking to turn a large language model into a creative problem solver? How does their role continue to change as the human in the loop, and what should they be looking at in terms of improving on the original output that may or may not be right? Leslie Grandy [00:11:07]: That that’s that’s the the heart of the question in the human AI partnership. AI has a very distinctive way of converging on something, and humans have divergent thinking. We’re messy. We have emotion. Things trigger memories that are associated with something that other people don’t associate with that thing. And so what we value and what meaning we ascribe to things aren’t innately in the answer you get from AI. And so being able to challenge the value of the answer as it relates to meaning and purpose is is really a critical component of it, but also understanding the thought process that led there. So what’s really important is to learn from how the answer was derived to what did you what what areas did you look at, and did you also look at this other area? And so asking the questions back so you get more, understanding of the journey AI took to bring you those options also expands your thinking. Leslie Grandy [00:12:06]: So why did you think that? Where did you come up with that? Again, is agency over the output to make sure you find the meaning in it that it that AI maybe unintentionally ascribed to it. Jordan Wilson [00:12:19]: You know, I I love that, that little phrase there, agency over the output. Right. For me, one thing I personally do when using large language models as a creative problem solver is, well, I’ll turn off, you know, the memory and chat history and go into a kind of a private chat, and I’ll I’ll do two different ones, always using a thinking model. And I’ll read the kind of chain of thought to better understand, you know, how the model is is reasoning and and how the model is, tackling a problem. I’m wondering for you, like, what’s been your kind of personal approach to this to, really just turn a large language model from more than, yeah, I’m just gonna try to improve on an output and iterate to make it a little better. Right? How are you actually, collaborating, and what are the best practices that you’re seeing to continue to collaborate with these models? Leslie Grandy [00:13:11]: Well, I I absolutely use more than one tool almost all the time. I like to see how the answers differ, and then exercises like the generic parts technique, you can run that same problem through two different tools and get two different answer sets, neither of them being right or wrong. Right? And so the idea that I’m not looking for the right answer, that I’m looking for the most expansive way to think about the problem allows me to get more opportunities because the language models don’t approach the problem the same way. And so using more than one tool is is is kind of my standard go to method. I, on the other hand, do exactly, the opposite. I have one that knows me really well because I wanna see the bias, and then I use the other ones that don’t know me that well where I have no shared memory. Because I do wanna see to your point when it’s shortcutting, what it thinks I want because it knows me versus what happens when it doesn’t. 
And it doesn’t necessarily mean the one that knows me is better or worse. Leslie Grandy [00:14:10]: It just re it makes me realize it may have jumped over or left over some areas that it assumes I’m not interested in because of history. And so those areas now become more available to me if I use another tool where it doesn’t have that history and memory. So using multiple tools, I think, is is super critical. I even like to take the answer from one tool and put it in the other tool, and I like to see how the other tool responds to what it saw as the answer I provided it. I’ll say comment on this perspective, and I’ll give it the answer, and then it gives me some reason why it may not be the best perspective. So it helps open the door for worth thinking. Jordan Wilson [00:14:46]: Yeah. And and I love that the first kind of framework we tackled, hey, just happens to be GPT. Right? Or an an an easy acronym to remember in the generic parts technique, but maybe what are some other, kind of creative frameworks that maybe our listeners have used or maybe that they haven’t? What are some other ones that that you, kind of lean on? Leslie Grandy [00:15:07]: Absolutely. Let me just say before I go into that that I did actually create a GPT for GPT. So you can look at chat GPT and explore the GPTs and find a generic parts technique GPT that will actually walk you through the process and teach you the process. So, that’s one of the other great things about it is you can build a a GPT to teach somebody one of these frameworks. One of the most popular frameworks that I like to use is actually, SCAMPER, and it was developed decades ago by, the o in BBDO, the advertising agency, that was big and popular during the madman era. And it it it gives you seven specific moves to make in order to consider how you might adjust your thinking around the problem space. So SCAMPER is an acronym, and it stands for substitute, combine, adapt, modify, put to another use, eliminate, or reverse. And so when you think of that, a really great example would be, hey. Leslie Grandy [00:16:06]: I wanted I I wanna think of another way to make shopping convenient for people. And, of course, Instacart realized that you could shop online, somebody else could do it, you could pay for it, and then it could get delivered. But curbside pickup became a really big revolution where somebody else did the shopping, and then I went to the store, but I just picked everything up already paid for. Right? So these modifications, or re reversing the steps really can open the door for a whole new way of thinking about a problem space. Jordan Wilson [00:16:37]: So, yeah, I the scamper one is is really interesting, and I love, you know, using whether it’s, you know, copywriting techniques or, problem solving, you know, acronyms in in in this case from a pre generative AI , that works great with today’s latest technology. You you know, what does using something like scamper? Right? So let’s say someone, is trying to solve one of their business problems. I don’t know. Maybe they’re, in software and their churn is is too high. Right? What is using something like SCAMPER? What might that help, you know, a a decision maker stumble upon that maybe if they weren’t using a framework like that, but we’re still using a large language model. Right? What is that maybe going to lead to that it might not lead to if you weren’t using a more structured, kind of, creative problem solving process? 
Leslie Grandy [00:17:32]: It’s such a great question because I think one of the things this framework really does, it makes you rethink the problem space. Because I think we get so functionally fixated on how, workflows and how people need to get from a to b in a process or how people use your product as intended. And yet there’s friction and people don’t always behave the way you want them to. And so what really helps, with these technologies is to ask the questions, not in any linear sequence, but just to randomly pick these letters. Like, using the s in SCAMPER, what could be substituted to remove friction around this step? A great example, of another problem space is, if you’re looking at trying to make pins on debit cards more secure because everybody could watch you enter your PIN number, scamper is a great way to say what could be eliminated to to make that problem less in less secure more secure, less insecure? What could I do that would be a modification of the flow that could remove the pin altogether? What could I eliminate? What could I right. So now I’m looking at very specific part of the problem, but I’m asking to use a technique for things I could explore that really smooth out the friction or add some behavior in that is more normalized for how consumers wanna work. And so it pops up these ideas for you to imagine that you might not have thought about because you’re so functionally fixated on how the process works or the product is intended to be used. Jordan Wilson [00:19:04]: Mhmm. You know, one thing that I’ve always personally held this strong belief on is is when you should use a large language model and for what reason. Right? I feel very early on, right, maybe in 2023 and early twenty twenty four, it seemed like large language models were just kind of a a a content creation machine. Right? Just, you know, making short blog posts longer or, you know, helping you rewrite an email or something like that. Very, very small. Right? Very small outputs. Right? And and as we shift, to more agentic models, that can research and go back and iterate on their own while you sit there and wait. Right. Jordan Wilson [00:19:45]: So what maybe for the average business leader who is trying to also, maybe reincorporate or better incorporate, large language models to help with problem solving strategy, etcetera. What are some of those kind of, maybe agency unlocks or agency rewiring that we all need to do because it’s not easy. Right? Because sometimes it can be time consuming to go through these type of frameworks. Leslie Grandy [00:20:12]: Yeah. I think the the things that, that I see people do that narrow their focus and that l l l m’s can unlock, one is, you have cognitive biases. The way you think about solving problems is just your natural tendency. It’s not a it’s not a bias against a a solution per se. It’s a it’s a bias that may come from your expertise, right, or your personal experience. And I think the psychological distance of LLMs is what’s so critical in creative thinking because you get really attached to the problem space the way you think about it, or you get really attached to the first solution that you came up with. But you at the absence of any other solutions, you move forward with something that may work but may not have considered all of the possible ways that solution could go wrong. And so testing your own logic and not being, tied to your own solution is one of the best ways to use AI is to really say, I thought about doing this problem this way. 
Leslie Grandy [00:21:09]: What are other ways I could think about it? Right? Which helps you break the bias for how you approach problem solving. Another another, cognitive bias, right, that we we often have is that we think we know who the customer is because we are the customer. And so sometimes you wanna look at another domain where you’re not the customer to see how they solved a similar problem because there may be a better solution that works in another place. And a great example of this is, right, the military often, learns from how nature works. Right? How the insects swarm. Right? And if you wanna look at drone warfare, they learned a lot from how insects swarm. Right? And so looking outside of your domain for the answer or outside of the competition for an answer takes a little bit of, comfort with ambiguity because there may or may not be something there. But unless you ask, you don’t know. Jordan Wilson [00:22:03]: You know, you shared a little bit about your background, you you know, working on the product side for companies like, you know, Amazon, Apple, T Mobile. I’m wondering, right, if you had today’s technology, how might, had how might of your, decision making or problem solving changed earlier in your career? Leslie Grandy [00:22:24]: Oh, it it’s it it it’s so rich in in so many ways, that question, because, I think about, how experts guided my thinking in places that I went for the first time. So when I first worked in mobile, I thought everyone else knew wireless better than me, and so they were more likely to have a better answer than I would. Right? And so I fell prey to that kind of expertise because I was new there when in fact the reason I was hired was because I wasn’t that person. And so having to get the confidence that being different in my in my approach to problems, was something I had to bring to the table. It might have been easier if this neutral third party with psychological distance known as chat GPT or Claude would be the person in the room saying it instead of me. Yeah. Because I think people might have heard it differently and might not have thought I was being irrational when I suggested something different than what had the way it had worked all along. And I think working in environments where it’s worked a certain way for a very long time, retail, wireless, whatever, as a outsider, you seem a little less, sensitive to why things work that way, and you’re asking questions that people feel defensive about. Leslie Grandy [00:23:45]: But the thing that’s so great again about, AI now is that confront those conversations for me. I don’t have to look like it’s a personal comment. It’s what a large language model with access to lots of data was able to come up with solutions that were ones that maybe you haven’t thought about. Should we just dismiss them all? And by the way, if you think they’re all gonna fail, let’s ask AI what could cause these ideas to fail, and let’s solve those problems too. Jordan Wilson [00:24:12]: And Leslie Grandy [00:24:12]: I think that would have been a huge difference. Jordan Wilson [00:24:14]: I love that. I love that. A little, front end metacognition there. So, Leslie, we’ve covered a lot on today’s show from, you know, how humans can still flex their creative agency in problem solving, prompt engineering, context engineering, and went over some of your, trusted problem solving frameworks. 
But as we wrap up, what is the one most important takeaway that you have for our audience on how to best use creative frameworks for problem solving in the age of AI? Leslie Grandy [00:24:45]: Well, I think it’s critical to use them to begin with because I think we’re tempted to outsource creativity in the interest of efficiency and speed. And I think it’s exactly the opposite that you need to do. You need to develop that creative confidence that then gives you the agency over the output to make it to make it more meaningful and valuable and relevant to whoever it is that is going to experience that output. Right? And so whether it’s for you and making a recipe out of what’s in your refrigerator that you actually might enjoy eating and making that relevant to you, or whether it’s solving a really big problem that your customers have in the workflow that you have in the back end that shows it sort of it’s dirty laundry to customers in their front end. That problem needs to be rethought. And so taking agency over what those suggested options are and making them relevant and meaningfully for your audience is critical. So building the confidence using these frameworks and then applying them in a way where you’re actually challenging the output and making it better before running forward with Jordan Wilson [00:25:49]: it. Alright. Well, some some great advice on how we can hopefully all start to, rethink and use large language models a little bit better for problem solving. So, Leslie, thank you so much for taking time out of your day to join the Everyday AI Show. We really appreciate it. Leslie Grandy [00:26:04]: Thanks for having me, Jordan. Jordan Wilson [00:26:05]: Alright. And as a reminder, if you miss anything, any of those frameworks that she shared, it’s all gonna be recapped in today’s newsletter. So if you haven’t already, please make sure to go to youreverydayai.com. Sign up for that free daily newsletter. We’ll see you back tomorrow and everyday for more everyday AI. Thanks, y’all.