
Ep 671: From Automation to Agents: Why Weak Data Makes AI Guess


From Automation to Agents: Data Challenges and Response Strategies in the Enterprise Transition

As organizations plot their path toward 2026, a profound transformation is underway: classic automation is being "agentified", replaced by AI agents capable of autonomous decision-making. The shift now reaches across business operations, from routine task handling to core enterprise processes. Agent-based systems promise greater flexibility and autonomy, but they also surface new complexities and risks. Key insights from the Everyday AI podcast episode highlight specific dangers and best practices that business leaders should note to avoid costly missteps.

Automations vs. AI Agents: A Fundamental Difference in Workflow Reliability

In legacy automation, workflows rely on deterministic logic: rigid "if/then" rules. When an input deviates even slightly (an extra space, a missing comma), the process simply fails and produces no output at all. This rigidity limits flexibility, but it also prevents "silent errors" and makes root causes easy to trace.

AI agents, by contrast, are nondeterministic. Even when data is incomplete or policy definitions are ambiguous, they usually still produce a result. When information is missing, an agent does not stop; it may guess, make unsound assumptions, or even fabricate content. The consequence: without a solid data foundation, agentic systems readily produce misleading outputs.
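To make the contrast concrete, here is a minimal sketch in Python, assuming a hypothetical expense-approval check: the deterministic version fails loudly on a missing field, while the agent-like version falls back to a guessed default and still returns an answer. All names and values are illustrative, not taken from the episode.

```python
# Deterministic automation: a malformed or missing field fails loudly and
# produces no output, so the root cause is easy to trace.
def approve_expense_deterministic(record: dict) -> bool:
    amount = float(record["amount"])        # raises KeyError / ValueError on bad input
    limit = float(record["policy_limit"])
    return amount <= limit


# Agent-style behavior (simplified): missing data does not stop execution;
# a "guessed" default keeps the workflow moving, which is the silent-error risk.
def approve_expense_agent_like(record: dict) -> bool:
    amount = float(record.get("amount", 0))
    limit = float(record.get("policy_limit", 500))   # fabricated assumption, not real policy
    return amount <= limit


if __name__ == "__main__":
    incomplete = {"amount": "42.00"}                  # policy_limit is missing
    print(approve_expense_agent_like(incomplete))     # True, but based on a guess
    try:
        approve_expense_deterministic(incomplete)
    except KeyError as missing:
        print(f"deterministic path halted: missing field {missing}")
```

In a real agent the "guess" comes from a language model rather than a hard-coded default, but the failure mode is the same: an answer is produced whether or not the data supports it.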

The Pros and Cons of Agentification: Hidden Risks Behind the Productivity Gains

Upgrading classic automations to AI agents does deliver tangible productivity benefits. Workers are freed from mundane, repetitive tasks and can focus on higher-value work. In finance processes such as expense report approvals, an agent can understand what an employee submits through a conversational interface and interpret policy language dynamically, significantly shortening submission and approval cycles.

Yet this convenience comes at the cost of a heavy dependence on data quality. Agentic outputs are only trustworthy when they rest on accurate, well-governed, and continuously updated data. Poor inputs tend to produce erroneous outputs, and because there is less human intervention, those errors are harder to catch and correct in time. Organizations must therefore shift their focus from configuring rules to managing ongoing data quality and governance.
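One way to act on that shift is to place a simple data-quality guardrail in front of the agent so that it escalates instead of guessing when required inputs are missing or stale. The sketch below assumes hypothetical field names and a hypothetical freshness window.

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"amount", "category", "employee_id"}   # assumed expense schema
MAX_POLICY_AGE = timedelta(days=30)                       # assumed freshness window


def guard_agent_input(expense: dict, policy_last_updated: datetime) -> list:
    """Return reasons the agent should NOT act; an empty list means proceed."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - expense.keys())]
    if datetime.now(timezone.utc) - policy_last_updated > MAX_POLICY_AGE:
        problems.append("policy document is stale; refresh it before approval")
    return problems


issues = guard_agent_input(
    {"amount": 180.0, "employee_id": "E-102"},             # category is missing
    policy_last_updated=datetime(2024, 1, 5, tzinfo=timezone.utc),
)
if issues:
    print("escalate to a human reviewer:", issues)          # fail visibly instead of guessing
```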

Data Quality in AI Agent Workflows: Concrete Business Impact

As multi-agent orchestration becomes common, a data defect in any link of the chain can propagate and compound, eventually triggering cascading failures. For example, if agents in separate systems such as Salesforce and ServiceNow operate over unsynchronized datasets, they can end up "talking past" each other, leading to operational breakdowns or policy violations.
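A lightweight mitigation, sketched below with assumed field names rather than the real Salesforce or ServiceNow APIs, is to verify that the records two agents hand off actually agree before the handoff happens.

```python
# Compare the fields two agents both depend on before a handoff; any mismatch is
# surfaced instead of letting the downstream agent act on divergent data.
SHARED_KEYS = ("customer_id", "contract_status", "billing_tier")   # assumed shared fields


def handoff_conflicts(crm_record: dict, itsm_record: dict) -> dict:
    return {
        key: (crm_record.get(key), itsm_record.get(key))
        for key in SHARED_KEYS
        if crm_record.get(key) != itsm_record.get(key)
    }


conflicts = handoff_conflicts(
    {"customer_id": "C-77", "contract_status": "active", "billing_tier": "gold"},
    {"customer_id": "C-77", "contract_status": "suspended", "billing_tier": "gold"},
)
if conflicts:
    print("block the handoff, datasets diverge:", conflicts)
```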

To guard against this, organizations must double down on data governance, with priorities including:

  • Keeping core business datasets accurate and complete
  • Enforcing strict access controls so that agents only use authorized data (sketched below)
  • Continuously maintaining policy documents and internal wikis, since agents now draw decision criteria directly from them

Ignoring these steps quickly leads to a cycle of "bad data in, worse outcomes out", especially as human oversight recedes.
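To illustrate the access-control point above, the following sketch filters the context passed to an agent down to the fields its role is authorized to see; the roles and field lists are hypothetical, not from the article.

```python
# Hypothetical allow-list: which data fields each agent role may read.
AGENT_PERMISSIONS = {
    "expense_approver": {"amount", "category", "policy_limit"},
    "onboarding_bot": {"customer_id", "contract_status"},
}


def authorized_context(role: str, record: dict) -> dict:
    """Drop any field the agent's role is not entitled to before it reasons over the data."""
    allowed = AGENT_PERMISSIONS.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}


record = {"amount": 95.0, "category": "travel", "employee_ssn": "xxx-xx-1234"}
print(authorized_context("expense_approver", record))
# {'amount': 95.0, 'category': 'travel'}  -- the sensitive field never reaches the agent
```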

Observability, Governance, and Agent Control Towers

In multi-agent environments, monitoring agent behavior across systems becomes increasingly difficult. Traditional IT observability tools show how systems are running, but they are no longer sufficient for the complexity of agents' decision-making. Specialized AI governance solutions are now needed to track agent behavior and catch anomalous actions in real time.

"Agent control tower" technologies have emerged to fill this gap. They aggregate oversight across multiple platforms (such as Salesforce, AWS Bedrock, and Boomi) and watch for deviations from standard behavior patterns. If an agent acts unexpectedly, the system can immediately alert stakeholders, or even pause the questionable process automatically, adding a second line of defense against silent errors.
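The underlying logic resembles the toy monitor below, which pauses an agent that takes an unrecognized action and alerts on unusual volume; this is a generic sketch of the idea, not the interface of any actual control-tower product.

```python
EXPECTED_ACTIONS = {"read_policy", "draft_approval", "notify_employee"}   # assumed baseline
MAX_ACTIONS_PER_WINDOW = 20                                               # assumed threshold


def review_agent_activity(agent_id: str, recent_actions: list) -> str:
    """Return 'ok', 'alert', or 'pause' based on simple deviation checks."""
    unexpected = [a for a in recent_actions if a not in EXPECTED_ACTIONS]
    if unexpected:
        return "pause"          # unknown action type: stop the agent and notify its owners
    if len(recent_actions) > MAX_ACTIONS_PER_WINDOW:
        return "alert"          # unusual volume: flag for human review
    return "ok"


print(review_agent_activity("expense-agent-1", ["read_policy", "delete_records"]))  # pause
```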

Practical Steps: Connecting AI Innovators and Business Data Owners

Forward-thinking organizations are proactively bringing their AI innovators together with line-of-business data owners. Rather than trying to clean all enterprise data at once, these collaborations focus on identifying the precise data subsets that matter most to high-impact agentic workflows.

By concentrating data governance resources on the most critical business outcomes, companies reach ROI faster and avoid resource-draining blanket data-cleanup projects that cost much and deliver little. The approach supports iterative value creation: start with a small, targeted pilot, prove the improvement, and scale investment based on real returns.

Moving Forward: Measure ROI at the Outcome Level

To unlock the full potential of agentic workflows, companies need to reverse their project logic. Instead of chasing technology deployments and then searching for applications, they should start from a clearly defined business outcome, such as "reduce the cost of expense approvals" or "accelerate customer onboarding", define measurable success metrics, and then work backward to the data and AI technologies required.

This approach keeps technology investment tightly aligned with real business value, makes ROI measurement clearer, and turns data governance spending into a more strategic investment.


Conclusion

As organizations replace deterministic automation with AI agents, they must rethink how they manage their data foundations. Sustained ROI comes not from the inherent "intelligence" of the agents, but from the strength, governance, and ongoing hygiene of the data infrastructure they depend on. Meeting this challenge takes deliberate strategy, robust oversight, and targeted collaboration between data stewards and AI innovators to capture the opportunities of this shift while managing its risks.
