Who Is Being Written Out of the AI Future? Technical Bias, Content Imbalance, and Business Risk


Who is shaping AI's narrative, and who is quietly being erased?

Artificial intelligence is permeating work, decision-making, and daily life at unprecedented speed. Although large language models show remarkable capabilities, a critical business risk is becoming ever more apparent: the outputs of these systems often reproduce and amplify the structural biases and representation gaps of the real world.

For companies that want to use AI responsibly and strategically, understanding who is being written into, and out of, the AI future is not a fringe topic but a core strategic question. Ignoring it can lead to market misalignment, brand damage, and entrenched unfairness in critical processes such as hiring and content moderation.


The Roots of Bias: Unequal Training Data and Unequal Voice

Today's AI models are far from neutral tools. How they perform depends on the data they draw on, the feedback mechanisms that shape them, and the design perspectives behind them. In practice, many marginalized groups, including people of color, women, queer and trans people, older and younger people, and the working class, have long been pushed to the edges of mainstream technology conversations.

When AI systems fail to reflect the diversity of their customers, employees, and stakeholders, the consequences for the business are direct:
– Market disconnection: products and services struggle to reach diverse users.
– Institutionalized discrimination: automated hiring or content moderation systems can misjudge or suppress particular groups.
– Reputational risk: public sensitivity to "tech injustice" keeps rising, and a single misstep can trigger broad criticism.

Responsibility is layered. Not only developers and those who select training data, but also the businesses that deploy models, along with influencers, trainers, and everyday users, jointly shape AI's outcomes. Real change begins with deliberately choosing whose voices we listen to and amplify.


The Flood of "AI Slop": The Trust Crisis Behind Content at Scale

In pursuit of efficiency, more and more companies are loosening editorial review and pushing unverified AI-generated content straight into marketing, customer service, and even media publishing. The result is what is now called "AI slop": a flood of low-quality, bias-laden content across the web.

This is more than a question of brand tone; it is a trust crisis:
– Users are increasingly skeptical of AI content, and the bar for earning their attention keeps rising.
– Content that skips human review readily spreads stereotypes and misses important social context.
– Over time, companies lose the chance to build genuine connection, sinking into a vicious cycle in which more output yields less effect.


Cultural Blind Spots in Content Moderation: Why Can't the Technology "See" Certain Identities?

AI-driven content moderation tools broadly lack cultural sensitivity. When a system cannot recognize how marginalized groups express their identities, the harm is concrete:
– Black women wearing locs or Bantu knots are flagged as "inappropriate content";
– terms used by queer communities are misclassified as violations;
– minority languages and dialects are rated "low quality".

These failures are not random glitches; they are symptoms of systemic exclusion. For example, an early version of Canva's AI image tools refused to generate images of "a Black woman with a natural hairstyle," citing community guidelines. The unspoken message: your existence is not recognized.
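
To make the failure mode concrete, here is a minimal sketch of a moderation pipeline that escalates uncertain calls to people instead of auto-blocking. Everything in it is an assumption for illustration: the thresholds, the function names, and the upstream violation_score (standing in for whatever classifier a platform actually runs) are ours, not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical thresholds: tune against audited, culturally diverse test sets.
AUTO_BLOCK = 0.98    # only near-certain violations are blocked automatically
HUMAN_REVIEW = 0.60  # everything in the gray zone goes to a person

@dataclass
class ModerationResult:
    action: str  # "allow" | "review" | "block"
    reason: str

def moderate(content: str, violation_score: float) -> ModerationResult:
    """Route an upstream 'inappropriate' score instead of trusting it blindly."""
    if violation_score >= AUTO_BLOCK:
        return ModerationResult("block", "near-certain policy violation")
    if violation_score >= HUMAN_REVIEW:
        # Borderline cases (hairstyles, community terms, dialects often land
        # here) go to a trained reviewer; decisions are logged for later audits.
        return ModerationResult("review", "borderline: escalate to a human")
    return ModerationResult("allow", "no credible signal")
```

The design point is narrow: an automated block is cheap to issue and expensive to get wrong, so the gray zone where culturally competent humans decide should be wide by default.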


Human Oversight: A Business Necessity in the Multi-Agent Era

As AI personalization deepens (memory, chat history, stored preferences), systems increasingly reflect users' existing views back at them, producing a "mirror effect." This not only dulls creativity but also amplifies echo chambers inside organizations: teams repeatedly confirm their own biases and lose both the drive to innovate and genuine insight.

The key to breaking this cycle is structured human involvement:
– Put mandatory human review gates on outputs, especially in customer-facing and content moderation scenarios.
– Audit AI outputs regularly for representation gaps and hidden bias.
– Deliberately bring in challenging perspectives, and make sure both training data and evaluation criteria are diverse (see the sketch after this list).
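
As one small, hypothetical illustration of the last point, a "challenge directive" can be prepended to an assistant's custom instructions so the model is asked to push back rather than mirror. The wording and the build_system_prompt helper below are ours and would need tuning for any real assistant.

```python
# Illustrative directive to counter the "mirror effect"; adapt the wording to
# whatever system-prompt or settings mechanism your assistant exposes.
CHALLENGE_DIRECTIVE = (
    "Do not simply mirror my tone or confirm my framing. Before agreeing, "
    "name one credible counterargument, one perspective from a group not "
    "represented in my draft, and one assumption I appear to be making."
)

def build_system_prompt(base_instructions: str) -> str:
    # Prepend the directive so every session starts with built-in pushback.
    return CHALLENGE_DIRECTIVE + "\n\n" + base_instructions
```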


Trust as Competitive Advantage: The New Dividing Line for Quality Content

In an era flooded with AI content, information overload paradoxically raises the value of credibility. Companies that can deliver content that is thoughtful, humane, and inclusive of diverse realities will stand out from the competition.

In the future, differentiation will no longer be just "whether you use AI," but:
– Were humans deeply involved?
– Does the content reflect an understanding of different groups' lived experiences?
– Does it dare to challenge the dominant narrative?

Content like this is no longer a box on a compliance checklist; it is a core asset for building long-term user trust.


Six Actions Business Leaders Can Take

  1. Audit output content regularly
    Check AI-generated material for representation gaps and latent bias, especially in how it depicts people, assigns occupational roles, and handles cultural expression. (A minimal audit sketch follows this list.)

  2. Actively bring in diverse voices
    Deliberately include representatives of underrepresented groups in leadership, user-feedback mechanisms, and content creation workflows, and make sure their perspectives are heard.

  3. Hold the line on human-in-the-loop
    Keep the necessary human review checkpoints, especially on critical paths such as customer service, content moderation, and brand communications.

  4. Break cognitive echo chambers
    Regularly expose decision-makers to views that run against their own positions, to avoid settling into a single mode of thinking.

  5. Set cultural competence standards
    When selecting AI tools, prioritize solutions with demonstrated cultural sensitivity, particularly for content generation and moderation.

  6. Redefine success metrics
    Measure not only efficiency and cost, but also the quality, inclusiveness, and social impact of the content.
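
As a minimal sketch of action 1, a recurring audit can compare the observed share of different groups in a sample of AI-generated material against explicit targets. The categories, target shares, and flagging ratio below are illustrative assumptions; a real audit would define them with trained annotators and the affected communities.

```python
from collections import Counter

# Illustrative targets for a sample of AI-generated personas or images.
TARGET_SHARE = {"women": 0.50, "people_of_color": 0.40, "over_50": 0.20}
FLAG_RATIO = 0.5  # flag any group at under half of its target share

def audit_representation(labels: list[str], sample_size: int) -> list[str]:
    """Return the groups whose observed share falls far below target."""
    counts = Counter(labels)
    gaps = []
    for group, target in TARGET_SHARE.items():
        observed = counts[group] / sample_size
        if observed < target * FLAG_RATIO:
            gaps.append(f"{group}: {observed:.0%} observed vs {target:.0%} target")
    return gaps

# Example: 100 generated "CEO" portraits, tagged by human reviewers.
print(audit_representation(["women"] * 12 + ["people_of_color"] * 9, 100))
# Flags all three groups; run monthly so the output is a trend, not a one-off.
```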


Conclusion: Not Rejecting AI, but Using It More Consciously

The path forward is not to deny AI's value, but to use it with more rigor and intent. Only companies willing to question their own assumptions and invest in human-centered, inclusive AI strategies can build products and brands that are genuinely worthy of trust.

They will not only reach broader markets, but also avoid the steep costs of exclusion and blindness. The AI future should not be written by the few; it must belong to everyone.

Original (English)
Original title: Ep 681: Who Gets Written Out of the AI Future?
Original content:

Who Gets Written Out of the AI Future? Unpacking the Business Risks and Opportunities
The rapid integration of AI into every aspect of work and decision-making has created new opportunities—and new blind spots. While many see large language models as highly smart and capable, a critical business risk emerges from an often overlooked fact: the outputs of these systems often mirror existing societal biases and gaps. For companies aiming to use AI responsibly and strategically, understanding precisely who gets “written out” of AI’s narratives is not a peripheral concern—it’s a central business issue.

AI Bias and Representation: The Cost of Overlooking Marginalized Voices
Current AI models are not neutral tools. Their effectiveness depends on the data, feedback, and perspectives that feed into them. Specific examples discussed in this episode highlight obstacles faced by marginalized groups—people of color, women, queer and trans individuals, older and younger people, and the working class—who frequently find themselves omitted from mainstream technology conversations.
The business cost: When AI outputs don’t reflect the full diversity of customers, employees, or stakeholders, business decisions risk alienating segments of the market, reinforcing structural biases in hiring or moderation, and undermining brand reputation.
Training Data and Accountability: Who Shapes AI Outcomes?
Responsibility for these gaps is multi-layered. Not only do developers and those selecting training data play a role, but the businesses deploying these models—and the broader ecosystem of influencers, trainers, and users—share ownership over outcomes.
This episode underscores that actionable change starts with those who choose whom to listen to and amplify in the tech space. Without conscious choices about training data, leadership, and feedback, organizations run the risk of embedding yesterday’s blind spots into tomorrow’s workflows.
Generative AI at Scale: The Dangers of “AI Slop”
The pressure to generate content at scale has accelerated what’s now often called “AI slop”: mass-produced, low-scrutiny content that carries forward unchecked biases and assumptions. As companies loosen editorial controls and “copy-paste” AI output into customer-facing interactions—sometimes without meaningful review—they can inadvertently propagate stereotypes or ignore crucial perspectives.
This is more than a reputational issue. It erodes user trust, diminishes content quality, and opens companies to regulatory or public backlash.
Content Moderation and Cultural Competence: Implications for Brand Safety
AI-powered content moderation tools already struggle with cultural competence. When automated systems are not adapted to recognize the nuances of marginalized identities and communities, there are concrete consequences: posts are erroneously flagged, creative content is limited, and some voices are silenced. A specific illustration: certain AI design tools were unable to generate images of Black women with natural hairstyles, categorizing them as “inappropriate.” Such blind spots are not simply glitches—they become systemic exclusions.
Human Oversight: A Business Imperative in the Age of Multi-Agent AI
Even as personalization features in advanced models reflect individual users and their preferences, the risk grows that businesses and their teams fall into echo chambers. AI may increasingly surface outputs aligning with a company’s existing biases simply because those are reinforced through repeated use and feedback. This “mirror effect” can stall innovation, cement groupthink, and undermine genuine customer insight.
Introducing structured human review, routine auditing, and actively seeking out challenging perspectives—both in training data and in output evaluation—are necessary steps for companies intent on using AI as an enabler rather than a limiter.
Trust, Signal vs. Noise, and Competitive Differentiation
The proliferation of AI-generated content risks devaluing trust. As AI-generated outputs flood marketing, support, and media, end-users become more skeptical, raising the bar for what wins attention and confidence. Paradoxically, the companies that win will be those able to produce high-quality content that demonstrates clear human oversight and trustworthy decision-making.
Quality, thoughtful, and inclusive content, grounded in robust human review and a clear understanding of diverse customer realities, becomes a competitive differentiator, not just a compliance checkbox.
Practical Actions for Business Leaders
Audit Outputs Regularly: Evaluate AI-generated content for representational gaps and unintended biases.
Curate Diverse Influences: Structure leadership and user feedback to include underrepresented voices.
Human-in-the-Loop: Maintain human review as a core part of AI deployment, especially in customer-facing or moderation workflows.
Challenge Echo Chambers: Regularly expose decision-makers to critical insights and perspectives outside their own inclination.
Set Cultural Competence Standards: Prioritize culturally aware moderation and creative tools.
The path forward is not about rejecting AI, but about using it with rigor and intentionality. Companies willing to challenge their assumptions and invest in human-guided, inclusive AI strategies will be best equipped to build products and brands that grow trust, reach broader markets, and avoid the costly pitfalls of exclusion.

Topics Covered in This Episode:
Over-Reliance on AI in Daily Life
Marginalized Groups Excluded from AI Future
AI Reflecting Societal Biases and Blind Spots
Responsibility for AI Training Data and Bias
Dangers of “AI Slop” and Unedited Content
Biased AI Moderation and Platform Challenges
Importance of Human Oversight in AI Outputs
Avoiding AI Echo Chambers and Algorithmic Divide
Trust and Quality Concerns with AI Content
Amplifying Diverse Voices in AI Leadership

Episode Transcript
Jordan Wilson [00:00:16]: It’s getting to the point where a lot of people are using AI for everything, helping them think and plan, helping them strategize, and really in their personal and professional lives. I think AI has gotten to the point now where people maybe become overly reliant on it. And, yeah, you might say that’s an okay thing because, you know, today’s large language models, are very capable and actually very smart, but have you thought about what comes out of these models and where it’s actually originating from? Right? I’ve talked about, probably dozens of times on this show. Large language models are not perfect. In fact, they’re often very bad because they can reflect, bad parts of society, sexism, racism, and really just, mirroring some of the worst that there is on the Internet. So if we, as a society are just blindly copying and pasting anything that comes out of a large language model, Whose story are we not telling and who might be getting written out of the AI future? I don’t have the answers, but our guest today is a lot more experienced in this area than me, and I’m excited to talk about it. Alright. Let’s get into it. Jordan Wilson [00:01:35]: If you’re new here, welcome. My name is Jordan. This is Everyday AI. This is your daily livestream podcast and free daily newsletter helping everyday business leaders like you and me not just keep up with everything that’s happening in the world of AI because it is nonstop, but how we can make sense of it. Right? Extract insights that matter and grow our companies and our career. So if that’s what you’re trying to do, awesome. Starts here with the unedited, unscripted livestream podcast. But if you wanna take it to the next level, make sure you go to our website at youreverydayai.com. Jordan Wilson [00:02:03]: There, we’re gonna be recapping the highlights from today’s show as well as all of the other AI news that you need to know. Enough of me chit chatting. I’m excited to talk to, today’s guests. So livestream audience, if you could please help me welcome to the show, Bridget Todd, who is the podcast host, at, the Mozilla Foundation. Bridget Todd [00:02:25]: Thank you so much for having me. I am so excited to be here. Jordan Wilson [00:02:29]: Alright. Yeah. Me as well. Alright. Tell everyone a little bit about your background because you do a lot on the on the tech scene, on the podcasting scene. So, yeah, tell everyone a little bit about your background. Bridget Todd [00:02:40]: Yeah. So I am the host of a couple of different podcasts about the intersection of technology and identity. One is one that I make with the Mozilla Foundation called IRL. It really is an examination of who has the power in AI, who is using AI and technology to to interrogate power. I also host iHeartRadio’s tech and culture podcast called There Are No Girls on the Internet, kind of about the same thing, but from a little bit different approach, all about the intersection of identity, social media , technology, and how it shows up in all of our lives. Jordan Wilson [00:03:10]: Alright. And, yeah, if if if you are a podcast fan listening, well, you probably are because you’re listening to us talk. Make sure to go, check out Bridget’s shows. They’re they’re great. So maybe let’s just start at the end here, Bridget. And I know this is maybe one of those, episodes where there’s no right and wrong answers. Right? We might get into more, philosophical quest discussions here. But, ultimately, right now, who is getting written out of the AI future? 
Bridget Todd [00:03:37]: Oh, my biggest concern is that it’s really people who are traditionally marginalized in all conversations about technology. Right? One of the kind of foundational ways that I think about this is obviously identity. So racialized folks, people of color, women, but it’s also queer folks, trans folks. It’s also older folks, youth, working class people. I really wanna make sure that all of us, everybody, is able to be included in these conversations that are so impactful to all of our lives. And so the same way that these people are often pushed to the sidelines in conversation about technology more broadly, the same thing is happening at AI. We’re not being reflected as meaningfully as we should be. Jordan Wilson [00:04:15]: And so what does that ultimately, lead to? Right? In in reality, right, and I kind of touched on that in in in the opening, of today’s show. Right? And I do think now more than ever, people are using AI everywhere. You you know, are there dangers maybe to using AI a little bit too much specifically, you know, for some of those reasons? Because, you know, not all large language models are going to be maybe properly, reflecting the goods in society. Bridget Todd [00:04:50]: Yeah. I love this question. It’s something that comes up a lot on the podcast that I make. You know, it’s very easy to talk about things like AI as this sort of all knowing computer brain that knows everything. It’s all powerful, all thing. But the reality is is that AI is built and trained and designed by all of us, humans. Right? And so all of the blind spots and foibles and biases that we already know humans have and can reflect. I don’t think I’m telling anybody listening what they don’t already know. Bridget Todd [00:05:19]: That the danger is that those same pitfalls are just reflected back at us through this powerful technology via AI. And so I think it really has to be a conversation of, like, understanding and then also interrogating what that actually means with humanity. If if if the same biases that we all walk around with every day are just being reflected back at us using this technology, it really behooves us to think about, like, how is this how are we going to use this technology? How is that reality gonna shape what role we allow it to play our everyday work lives? Jordan Wilson [00:05:51]: And, this this one might be a a loaded question as well because, you know, we get into things like, you know, training data and and reinforcement learning with human feedback and some of the more technical sides of AI. But who’s maybe ultimately the person or entity or company responsible? Right? Is it, people who are making decisions on training data for large language models? Is it the people, the companies, right, who are training these models? Is it, the the companies and the teams who are using them? Right? Who ultimately might have to play a, more pivotal role in order to, you know, have outputs ultimately that are more reflective? Bridget Todd [00:06:32]: Oh, what a good question. I think there is I mean, it’s kind of a a a good bad thing. Right? Like, I would say all of us, everyone, everybody that you just named, every entity, every company that you’re thinking of, I think, bears some responsibility here. But that’s also kind of a good thing because there’s a lot of people who can who can have a hand in being the solution as part of that. Right? 
Like, I really think that, you know, I have I don’t have a direct line to Sam Altman or anybody at Chad GPT or OpenAI, but I know that I can listen to folks who are sort of changing the conversation around AI and amplify those folks as leaders in my own mind. And so I think it really can start with all of us making a a sort of individual choice to kind of think differently when we think about AI and the conversation that we have about it. Right? Think differently about who it is that we that we think of as experts and leaders in these conversations. I think it can really it sounds kind of duh and and and, you know, basic, but I think starting there can be a really good way to just, like, start that conversation and start that change. Jordan Wilson [00:07:33]: Yeah. I think I think you bring up a good point. Right? There’s no, you know, one party or, you know, one, you you know, piece, in this that’s maybe more responsible than others. But, you know, I’m wondering today. Right? Because, you you know, improving training data, you you know, or changing how Frontier Labs maybe make models is probably a much bigger, and longer process, than we might think. But in terms of what we use, right, because maybe two, three years ago, right, when ChatGPT first came out, I don’t think companies, right, were just blindly copying and pasting things. But now we have this whole problem with AI slop. Right? Everyone’s trying to, you know, create content at scale and barely touching it. Jordan Wilson [00:08:17]: Maybe what are some of the dangers in that? Right? And companies maybe being too hands off and just using whatever the model spits out. Bridget Todd [00:08:25]: Yeah. I mean, I think it really comes back to the idea that you and I were talking about off mic where I think the Internet and technology have got to become a less hostile space for people who are traditionally marginalized. Right? If everybody cannot show up equally to to make their voices heard online, we’re already limiting the conversation that l LLMs are gonna be spitting back at us when it comes to those those post folks. Right? And so I really do think that we have to, you know it it it doesn’t just start with AI companies. It it really does start with what are the conditions that are allowing people to show up or not show up in technology and online, and how is that reflecting this sort of warped and biased view of marginalized people in AI? Jordan Wilson [00:09:10]: Is is is there an answer to that right now? Are there, like, so many answers? Like, what are some of those, you you know, conditions maybe for, you you know, myself and, you know, others in our audience that maybe just don’t know some of those obstacles and roadblocks? Bridget Todd [00:09:23]: Yeah. I mean, I think one of them is, like, the things like the use of AI to do, moderation on social media platforms. Right? We already know that a lot of that is super biased. It’s biased for a lot of reasons. It’s it’s not I when I have these conversations, I think people assume that I’m saying, oh, a big bad white guy who is rich is making evil decisions because he’s an evil person. That’s not what I’m saying at all. I I typically mean something like, oh, someone is is deploying AI based moderation tools. Those tools are not always culturally competent, and so they’re biased against cultures that these tools might not be able to understand. 
Bridget Todd [00:10:00]: And that things like that can relate to, marginalized people not being able to show up in a way that’s equitable online. Jordan Wilson [00:10:07]: Yeah. And, I think a lot of that just goes back to, you know, again, this isn’t a very technical show, but, you know, algorithms. Right? Whether you’re talking about social media , whether you’re talking about, you you know, machine learning, deep learning, some of the technologies that have led to today’s large language models. Right? There’s there’s this, you know, they’re they’re opaque. Right? Not everyone knows exactly how all of these things work, which maybe makes it a little bit more difficult to do something about it. Right? You you know, I’m curious. I’ve seen, you know, personal examples of this, and I’m happy to share those. But have you, like, you know, Bridget, do you have any, you know, first hit accounts when you’ve been using any AI tool. Jordan Wilson [00:10:51]: Right? Whether it’s it’s text, photo, video, where you’re like, wait. This doesn’t seem right. This doesn’t seem right. This output doesn’t really seem reflective of the society that I know. Bridget Todd [00:11:03]: Oh my gosh. I mean, I have a good example. This might sound like kind of a weird example, but when Canva was rolling out, their AI tools, there was a whole thing where you were unable to ask it to generate black women with black natural hairstyles because for whatever reason, those hairstyles hairstyles like the one I’m wearing right now were deemed, like, inappropriate by by their tool. And so, you know, I think it’s sometimes it’s these situations where I don’t think that is is I don’t think the reason those kind of things happen are because somebody is is, you know, trying to do something bad or nefarious. But it’s just one of those things where somebody who is culturally competent did not is not the reason why that that decision got made. Right? And so that kind of decision, it might seem small, but it leads to black women like myself being left out of the conversation when it comes to AI. If I if if I can’t go on to Canva and say, generate an image of a black woman with Bantu knots natural hairstyle, because it’s against because it it triggers whatever their they think is, like, against their community guidelines. I I don’t exist as it pertains to Canvas AI. Bridget Todd [00:12:11]: And so there are all these very little ways that I think that marginalized people are just being left out and unseen that I think reflect a larger hostility and bias that is sometimes present in our technology landscape more broadly. Jordan Wilson [00:12:25]: Yeah. I I think that’s a great a great call out and a great example. I’ve maybe talked about this once or twice, but, you you know, we kinda did a very informal, you know, study. I wouldn’t even call it that where, you you know, we said, hey. Give, you know, picture of a CEO, right, to earlier versions of of mid journey. Right? And almost every single time, it was, you know, white white guy probably in his fifties. Right? Without fail. You know, so with with these issues, right, where clearly, I think anyone that’s that that’s a power user of of any large language model, you you know, has seen their fair share of examples. Jordan Wilson [00:13:05]: Yet at the same time, they’ve probably overlooked more than they’ve spotted, right, which makes it hard. 
So, you know, there’s always this ongoing conversation about the importance of of human in the loop, right, and and how as, you know, large language models become more agentic and, you know, multi agentic orchestration and all these things when agents are working with other agents. Like, what role does the human play in all of this, and how should business leaders be, you know, maybe, in a reasonable and responsible way, how should they be auditing what these models spit out? Bridget Todd [00:13:43]: Oh my gosh. What a good question. I always say that, you know, we have to remember that the work that we’re putting out for the most part I mean, I I can speak for myself as a creative. I am a human that makes work for other humans. I use tools like AI as tools to help me produce that work, but ultimately remembering that there needs to be humans in the equation at the end of the day for this. Like, that’s why I’m doing this. And so the the future that you just laid out where it’s, you know, a AI agents communicating with other AI agents, to me, that’s really scary because that’s that really, I think, leaves out the humanity and the humanness that I think should be at the center of why we do any of this. And so, yeah, I would say it really comes back to making sure that we are remembering that this work the reason why we’re doing this work is at the end of the day about humans and our our humanity. Jordan Wilson [00:14:30]: Mhmm. Yeah. Humanity is such an important discussion right now. I I always have to remind myself as someone that talks about AI literally every single day, right, is is ultimately yes. Like, this the all the information that pulls out of these models originated with a human. Right? There is someone’s story, whether it’s a a boring business story or a a a story of of of courage and, you you know, something triumphant. It’s always goes back to a human. And, ultimately, whatever we create will be consumed also by humans. Jordan Wilson [00:15:02]: Right? Yes. You you know, what maybe advice, do you have, for people to focus on the human when it is AI everywhere? Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can’t really get traction to find ROI on GenAI. Hey. This is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you’re looking for chat g p t training for thousands or just need help building your front end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to your everydayai.com/partner to get in contact with our team, or you can just click on the partner section of our website. Jordan Wilson [00:16:10]: We’ll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on GenAI. Bridget Todd [00:16:21]: Do you remember this is gonna sound wild, but do you remember that old clothing brand from the nineties, FUBU, For Us, By Us? Oh, yeah. So I yeah. That is my sort of orientation in this that this work is For Us, By Us, and the Us and That is humans. Right? 
And so I can’t tell you how many times I’ve seen somebody so excited on Reddit or a message board to share something that they that they wrote using AI or that they built using AI. Like, I’m in the notebook l m community where people are really excited to share their AI generated podcast. And oftentimes, the top comment will be, you know, if it wasn’t worth your time to to create this as a human, why is it worth my time as a human to listen to it or read it or to engage with it? Right? And so I would say real that something about that comment, I think really changed my understanding of the role that AI can be playing for me as a creative professional that, you know, the word the reason why I do this work is about trust and connection and community, and those are traits that humans have. Right? And so really making sure that all of the things that that we do are grounded in the things that make humans great and not trying to outsource that to AI. Because when you do, your audience is gonna know, and they’re gonna be like, oh, this actually wasn’t worth your time to make, so I’m not gonna spend my twenty minutes listening to it. Jordan Wilson [00:17:35]: Yeah. Yeah. That’s that’s a great yeah. I I I I love reflecting back, you know, AI for us, by us. Yeah. I want to see if, Daymond John can sign off on that. Right? Bridget Todd [00:17:45]: Yeah. Jordan Wilson [00:17:46]: Yeah. Yeah. So, but great. Like, you bring up another topic that is actually really interesting. Right? So you mentioned, like, NotebookLM and, you know, obviously, the the AI generated podcast. But I think there’s also with AI, in much more now even than six months ago. It is starting to reflect the person using it more and more. Right? As, you know, all of the major models now have released things like, you know, personalization or memory or past chat history to where the responses are maybe even reflecting their own preferences and maybe even sometimes their own biases. Jordan Wilson [00:18:26]: Right? So throwing that on top of the, you know, issues with with training data. So even with that in mind, right, should, you know, individuals be going in and, you know, checking their settings and making sure, you know, things in their custom instructions, you know, maybe challenge their own preconceived notions. Bridget Todd [00:18:44]: Yes. Yes. Yes. Yes. Yes. Yes. I actually had an issue with this myself because, you know, I will use Chat EBT to help me write captions and, like, descriptions and metadata for my podcast. And I caught myself, you know, with with whatever Chat GPT spits back out at me being like, oh, this is so good. Bridget Todd [00:19:03]: This is this is phenomenal. I don’t even need to edit this. But I’ve come to realize, I think it’s good because it’s just mimicking how I sound. It’s not it’s not it’s not actually, challenging me or giving me anything to actually think about. And I’m just like the, like the myth of the person that’s falling in love with the mirror or the the the reflection. That’s what I’m doing. And like, that’s not good writing. Right? That’s not that’s not good creative work. Bridget Todd [00:19:26]: Good creative work is challenging. And I I guess I just as someone who talks about AI and creativity a lot, I was surprised how quickly I kind of fell in love with the sound of my own writing being reflect or the sound of my own voice being reflected back at me. 
And I think I say that to say that we should all be you know, if we’re gonna be using this tool that is so powerful, it it should really behoove us all to be using it in a way where it’s not just setting us up to fall in love with our own voice. Right? That it is challenging us, that it is you know, we are asking it to set us up to be to be able to to give us a little bit more pushback. I found that to be very useful in my own use of AI for my own creative work. Jordan Wilson [00:20:07]: Yeah. I think I think that’s a good call out. Right? And and, yeah, being being careful not to fall in love with the, you you know, the AI in the mirror, so to speak. You know, it I think it also, Bridget, ties ties a little bit back to, you know, how social media algorithms over the years, you know, have now just kind of done the same thing. Right? But more, getting very deeply ingrained, especially on social media , with just being an echo chamber. Right? And and everything maybe you see, is only getting you deeper and deeper, in whatever belief that you have, whether that’s for a good reason or a bad reason. Unfortunately, I think a lot of times it’s the latter. How can we look at AI and maybe prevent that? Right? How can we prevent the, echo chamber and, the algorithm divide that social media has caused? Right? If AI becomes as used or even more used than social media? Bridget Todd [00:21:04]: I think it’s really about being intentional about curating the voices that you consume and listen to and amplify when it comes to AI. Right? Like, I am someone who is just always gonna be a tech optimist. However, I need to make sure that I’m listening to voices that are critical about AI. Otherwise, I know myself, I know that I’m prone to be like, this technology is great. It’s gonna change all of our lives. No problems whatsoever. That’s not great. But I also don’t wanna be somebody who was only listening to voices that are super skeptical. Bridget Todd [00:21:34]: I think it it it’s tough, and it involves being okay with hearing opinions and attitudes and takes that you’re not always going to agree with, but that’s a part of it for me. And so I think really being intentional about who you’re curating when it comes to to saying what an AI. Like, really making sure that you’ve got a healthy, robust diet of, you know, folks in the conversation that don’t always look like the people that we think of being amplified as leaders when it comes to technology and AI. Jordan Wilson [00:22:00]: That’s good. Yeah. It’s almost like, kind of the political equivalent. Right? Like, if you’re always watching CNN, maybe you need to turn on Fox News and and vice versa. Right? I I think that’s a good call out. Right? Like, having to intentionally, listen to voices that are maybe against your current grain, is is something that’s maybe healthy for for a lot of us. Bridget Todd [00:22:19]: Yeah. I I gotta tell you, I find myself I I guess I’m only coming to realize how susceptible I am to the voices that I’ve I’ve kinda surround myself with because, you know, I if I if I listen to voices that are and and we should be listening to to critical voices when it comes to AI. But, like, I will believe I will train myself, talk myself out of use cases that I already know AI has been really helpful for me personally. Right? I’ll be like, oh, AI can’t do that. And it’s like, well, actually, you do you use AI for that all the time. Why are you believing that, like, it can’t do this when you you use it that way all the time? And so yeah. 
I’ve only come to realize I’m very susceptible to the voices that surround us when it comes to technology conversations. Jordan Wilson [00:23:00]: You know, Bridgette, you kind of already helped us, traverse the main topic of today’s episode. Right? Like, who gets written out of the AI future and a little bit as well, helping us be a little more cognizant of maybe not, too many of the same people getting written into the AI future. You you know, here we are at the end of 2025. I I know a lot of business leaders are looking, you know, forward to 2026 and starting to kind of get their AI agenda all in order. What are some other maybe, you know, either personal concerns, of AI specifically when it comes, to making sure the people using it, are maybe telling a story that is reflective. Right? I I know none of us know what’s gonna happen, right, in 2026, but maybe what are some of those other things as someone that covers, the technology so closely as yourself? What are some of the other things you’re you’re worried about or looking at? Bridget Todd [00:23:55]: Oh, I think one of the things I’m worried about is you you mentioned AI flop earlier. I am incredibly worried about just the general state of trust as it pertains to AI, you know, when approaching in more and more of our media spaces. It’s one of the reasons I’m I’m also really kind of excited about the rise of AI in some of these spaces because I think that people, humans who can make good, thoughtful, trustworthy content are gonna be at a premium. So I’m excited for that for that outcome. But I I’m worried about the devaluation of trust in our media and online spaces more generally because people just see things that are just badly generated AI, and they say, well, I don’t need to pay attention to that. And in a media climate where people have been trained by the ubiquity of bad AI content that they don’t need to pay attention to anything, I worry about the ability for good voices to rise. So that’s I don’t know if that answers your question, but it’s something that I spend a lot of time thinking about. Jordan Wilson [00:24:53]: Alright. So, we’ve covered a ton in today’s episode, Bridget. But, you know, as we wrap, maybe what is your one most important takeaway, when we think and hopefully, make actions, right, about who gets written into the AI future and maybe, unfortunately, who gets written out. What’s your one most important takeaway? Bridget Todd [00:25:14]: My one most important takeaway is I know I said this earlier, but really just make sure that you’re you’re challenging your own idea of who is a leader and who is a voice that you should be following and amplifying when it comes to conversations in technology more broadly, but specifically AI. Right? There are so many people and so many fascinating stories, activists, advocates, artists who are using technology like AI in groundbreaking ways that really are challenging what we think of as power and who holds it and how it is used. Right? I just left Barcelona for Mozilla’s, MOS Fest, and we had a live conversation with Barcelona based activists who were using AI to, do inverse surveillance of the government and people in power. And so often, we’re thinking about AI based surveillance as a government surveilling all of us little people, and they’re inverting that and flipping it on its head and saying, no. What if we use technology and AI to watch the watchers? Right? 
And so I would say really challenging what we think about when it comes to who is a leader in technology and AI and how they are using that technology to really shake things up and and change the conversation. Jordan Wilson [00:26:19]: It’s a great way to end today’s conversation. Yeah. Inverse AI. Love to see it. Bridget, thank you so much for your time and joining the Everyday AI Show. We really appreciate it. Bridget Todd [00:26:29]: Thanks so much for having me. This has been great. Jordan Wilson [00:26:30]: Alright. And if you miss anything, y’all don’t worry. It’s all gonna be in today’s newsletter. So if you haven’t already, please make sure to go to youreverydayai.com. We’re gonna be recapping today’s conversation there and a whole lot more. Thanks for tuning in. Hope to see you back tomorrow and every day for more everyday AI.