AI Agents vs Automation Workflows: What’s the Difference (and When to Use Each)?
Teams that already run workflow automation or RPA often ask the same question: are “AI steps in automation” the same thing as “AI agents”?
They’re not. An AI-assisted workflow still follows a predefined control flow (triggers – rules – actions). An AI agent is goal-driven and can decide what to do next (within guardrails), using a perceive – plan – act loop.
Terminology: what people mean by “AI steps” vs “AI agents”
When people say AI steps in automation, they usually mean adding AI to one or more workflow automation steps – for example, classifying an email or extracting fields from a PDF – while the overall workflow stays deterministic.
When people say AI agent, they mean a system that is defined by goal-seeking behavior. It can interpret inputs, plan, select tools, take actions, observe outcomes, and iterate – rather than executing a fixed sequence.
This guide compares three models you’ll see in real implementations:
- Deterministic workflow automation: fixed steps and branching rules.
- AI-assisted workflow automation: still deterministic control flow, but with AI used inside steps (classification, extraction, summarization).
- Agent-driven automation: a goal-directed loop that can choose actions dynamically (with safety constraints).
AI-assisted workflow step example: A workflow receives an invoice PDF, runs OCR + field extraction, then routes it to AP approval based on extracted amount and vendor.
Agentic loop example: An agent receives “this invoice looks wrong,” investigates purchase order history and receiving records, identifies the most likely issue, drafts an explanation, and escalates with evidence if it can’t resolve safely.
Definition box: AI agents vs automation workflows (fast, practical definitions)
Automation workflow (deterministic)
An automation workflow is a step-based process that runs when a trigger occurs, evaluates rules/conditions, executes a defined sequence of actions (API calls, RPA steps, notifications), and produces consistent outcomes for the same inputs and rules.
Typical logs: step-by-step execution logs (trigger fired, rule matched, action executed, retry occurred, error thrown).
Typical failure mode: a step fails (API error, validation error, missing required field) and the run stops or follows a predefined exception branch.
AI agent (goal-driven)
An AI agent is a goal-driven system that can plan and decide what to do next, choose tools (APIs, databases, ticketing, RPA), act, observe results, and iterate using context and (optionally) memory. Its behavior can vary run-to-run because it is probabilistic.
Typical logs: action traces (what it tried, tool calls, outputs), plus any captured rationale and evidence used for decisions.
Typical failure mode: it makes a wrong choice (uses the wrong tool, misinterprets text, drafts an incorrect message) unless constrained by permissions, approvals, and policy checks.
Practical takeaway: workflows are designed for predictability; agents are designed for adaptability, and therefore require stronger guardrails.
How a workflow works: the typical steps (triggers – rules – actions – monitoring)
A workflow is best thought of as a deterministic runbook you’ve encoded into software. It starts from an event, checks conditions, executes integrations, and records what happened.
Common workflow components:
- Trigger: webhook, schedule, new record, inbound email, file upload.
- Input validation: required fields, schema checks, data type checks.
- Branching rules: if/then logic, routing, approvals.
- Actions: API calls, database updates, RPA UI actions, notifications.
- Retries: backoff, idempotency keys, transient error handling.
- Exception handling: dead-letter queues, manual review tasks, fallback paths.
- Human approval gates: required sign-off for high-impact steps.
- Logging/alerts: step logs, failure alerts, SLA monitoring.
Determinism here means: for the same inputs and the same set of rules, the workflow follows the same path (within defined system behavior).
Where AI typically plugs in without changing the deterministic orchestration:
- Classification: categorize an email/ticket/document.
- Extraction: pull structured fields from PDFs, images, or free text.
- Routing: decide which queue/team should handle an item, then route using fixed rules.
- Summarization: generate a summary that a human (or later step) uses.
- Anomaly detection: flag outliers for review while the workflow remains stable.
Example: inbound support email triage workflow (with an AI classification step)
- Trigger: new email arrives in support@ inbox.
- Validate sender domain and required metadata (from, subject, body present).
- AI step: classify intent (billing issue, bug report, feature request, cancellation).
- Rule: if intent = billing issue, route to Billing queue; else route to Support queue.
- Rule: if customer tier = enterprise, set priority = high; else normal.
- Rule: if message contains “refund” and order ID is missing, request order ID via template reply.
- Create ticket in helpdesk tool with tags from classification.
- Assign to queue owner; start SLA timer.
- If ticket creation fails, retry up to configured limit; if still failing, alert Ops and create a manual task.
- Log run details (classification result, routing decision, ticket ID, errors if any).
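To make the control flow concrete, here is a minimal Python sketch of the triage workflow above. The `classify_intent` function is a hypothetical stand-in for whatever classification model or service you use; everything around it stays deterministic.

```python
# Minimal sketch of the email triage workflow described above.
# classify_intent() stands in for your AI classification service;
# the surrounding control flow stays deterministic.

def classify_intent(body: str) -> str:
    """Hypothetical AI step: returns 'billing', 'bug', 'feature_request', or 'cancellation'."""
    raise NotImplementedError("call your classification model or service here")

def triage_email(email: dict) -> dict:
    # Validation: required metadata must be present.
    for required in ("from", "subject", "body"):
        if not email.get(required):
            return {"status": "rejected", "reason": f"missing {required}"}

    intent = classify_intent(email["body"])                    # AI step inside a deterministic flow
    queue = "Billing" if intent == "billing" else "Support"    # fixed routing rule
    priority = "high" if email.get("tier") == "enterprise" else "normal"

    # Rule: refund mentioned but order ID missing -> request it via template reply.
    needs_order_id = "refund" in email["body"].lower() and not email.get("order_id")

    return {
        "status": "routed",
        "intent": intent,
        "queue": queue,
        "priority": priority,
        "next_action": "request_order_id" if needs_order_id else "create_ticket",
    }
```

Note that the AI step only produces a label; the routing, priority, and exception rules remain explicit and auditable.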
If you want a deeper refresher on classic mechanics, see workflow automation fundamentals (triggers, rules, and actions).
How an AI agent works: perceive – plan – act – observe – iterate (and when it stops)
An agent is a different control model. Instead of following a fixed DAG of steps, it runs a loop:
- Perceive: read the request, context, and relevant signals (tickets, emails, records).
- Plan: decide a sequence of actions that could achieve the goal.
- Select tools: choose which systems to query or update (within permissions).
- Act: call tools, draft messages, create/update records, request approvals.
- Observe: evaluate results, check whether constraints/policies are satisfied.
- Iterate: adjust the plan and try the next action.
Tool use is what makes agent behavior practical in enterprise settings. Tools can include APIs, databases, ticketing/CRM systems, retrieval/search, and even RPA bots. The critical point is that tool permissions define the agent’s true capability and limit blast radius.
Stopping conditions should be explicit and engineered, such as:
- Goal met (the task is completed with required evidence).
- Confidence threshold met (or not met, triggering escalation).
- Time/budget limits (max tool calls, max runtime).
- Policy boundary reached (requires approval or human review).
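As a rough illustration (not any specific framework's API), the loop plus explicit stopping conditions can be sketched as follows. `plan_next_action` and `execute_tool` are hypothetical placeholders for your planner (for example, an LLM call constrained by policy) and your permissioned tool layer.

```python
# Illustrative agent loop with engineered stopping conditions.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str                                   # "tool", "done", or "escalate"
    tool: str = ""
    params: dict = field(default_factory=dict)
    reason: str = ""
    evidence: list = field(default_factory=list)

MAX_TOOL_CALLS = 10                             # time/budget limit

def plan_next_action(goal, context, observations) -> Action:
    """Hypothetical planner (e.g., an LLM call constrained by policy)."""
    raise NotImplementedError

def execute_tool(tool: str, params: dict) -> dict:
    """Hypothetical permissioned tool layer (APIs, ERP, ticketing, RPA)."""
    raise NotImplementedError

def run_agent(goal: str, context: dict) -> dict:
    observations: list = []
    for _ in range(MAX_TOOL_CALLS):             # budget limit: max tool calls per run
        action = plan_next_action(goal, context, observations)
        if action.kind == "done":               # goal met with required evidence
            return {"status": "done", "evidence": action.evidence}
        if action.kind == "escalate":           # low confidence or policy boundary reached
            return {"status": "escalated", "reason": action.reason, "evidence": observations}
        result = execute_tool(action.tool, action.params)   # act
        observations.append(result)                          # observe, then iterate
    return {"status": "escalated", "reason": "budget exhausted", "evidence": observations}
```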
Concrete example: “customer is angry, refund request”
Goal: resolve the refund request correctly, fast, and within policy.
- Perceive: reads the email and identifies sentiment + refund intent.
- Plan: check order, confirm payment status, validate refund window, decide next best action.
- Act (tool calls): query order system; pull subscription status; retrieve refund policy; draft response.
- Observe: if refund is allowed and under an approval threshold, proceed to request approval or initiate the approved workflow; otherwise escalate.
Escalation example: If the agent finds policy conflict (e.g., unclear eligibility), it routes to a human with a concise packet: customer details, order ID, policy excerpt, timeline, and a proposed resolution draft.
If you want the conceptual foundations, see what agentic AI means (planning, tools, memory, feedback loops).
Comparison table: AI agents vs automation workflows (what changes in practice)
Many teams use the term “AI workflow” to mean a deterministic workflow that includes AI-assisted steps. That’s different from full agent autonomy.
| Dimension | Automation workflows (rule-based) | AI agents (agentic AI) |
|---|---|---|
| Primary objective | Execute predefined steps reliably. Example: route invoices by amount and cost center. | Achieve a goal under changing conditions. Example: resolve invoice exceptions by investigating context. |
| Control flow | Fixed order and branches (a defined runbook). | Dynamic planning + tool selection; steps may change per case. |
| Inputs | Mostly structured triggers, forms, and fields. | Mixed inputs (text, docs, events) plus retrieved context/memory. |
| Exception handling | Predefined branches and manual review queues. | Can investigate and adapt, but must be constrained to avoid unsafe actions. |
| Reliability & auditability | Deterministic run logs are straightforward to audit and reproduce. | Probabilistic outputs require guardrails, approvals, and trace/evidence capture. |
| Cost/risk profile | Lower variability; failures are usually integration or data issues. | Higher variability; risks include wrong decisions, policy violations, and unintended actions. |
| Governance fit | Strong fit for strict change control and predictable execution. | Needs stronger controls: scoped permissions, approvals, and monitoring. Caution: require approvals for high-impact actions. |
| Best fit | High-volume, repeatable tasks with stable rules. | Exception-heavy, ambiguous tasks requiring interpretation and cross-tool work. |
It’s not either/or. In production, the most reliable approach is often hybrid: agents decide and coordinate; workflows execute critical steps deterministically.
Side-by-side walkthrough: the same business process built as a workflow vs as an AI agent
Use case: invoice processing (AP intake – validation – routing – exception handling).
Version A: deterministic workflow automation (10-14 steps)
- Trigger: invoice arrives via AP inbox or vendor portal upload.
- Validate file type, scan for basic completeness (invoice number present, vendor name present).
- AI-assisted step (still deterministic control flow): extract header fields (vendor, invoice #, date, total, PO #) into structured fields.
- Rule: if vendor is not in vendor master, route to “Vendor setup” queue and stop.
- Rule: if PO # is missing, route to “Missing PO” exception queue.
- Rule: if invoice # already exists for that vendor, route to “Possible duplicate” queue.
- Rule: if total > approval threshold, create approval task for approver group.
- Match PO to receiving record (3-way match) using integrations.
- Rule: if totals mismatch beyond a configured tolerance, route to “Mismatch” exception queue.
- Create or update invoice record in ERP/AP system.
- If any API call fails, retry per policy; on repeated failure, alert IT/Ops and place item in a manual backlog queue.
- Log full run: trigger source, extracted fields, rule branches taken, approvals requested, system IDs created.
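One of the deterministic rules above, the tolerance check on the 3-way match, might look like the sketch below. The 2% tolerance is a placeholder; the real value comes from your AP policy.

```python
# Illustrative tolerance check for the 3-way match rule above.
# The 2% tolerance is a placeholder; real values come from AP policy.

def within_tolerance(invoice_total: float, po_total: float,
                     tolerance_pct: float = 2.0) -> bool:
    """Return True if the invoice total matches the PO total within tolerance."""
    if po_total == 0:
        return invoice_total == 0
    deviation_pct = abs(invoice_total - po_total) / po_total * 100
    return deviation_pct <= tolerance_pct

# Rule: if totals mismatch beyond tolerance, route to the "Mismatch" queue.
queue = "Post" if within_tolerance(1030.0, 1000.0) else "Mismatch"   # -> "Mismatch"
```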
How the workflow handles common exceptions
- Missing PO: deterministic branch to Missing PO queue; request PO from requester/vendor using a standard template.
- Mismatch totals: deterministic branch to Mismatch queue; AP analyst reviews receiving/PO and updates record.
- Duplicate suspicion: deterministic branch to Duplicate queue; compare invoice # + amount + date; require analyst confirmation.
- Vendor not in master data: deterministic branch to Vendor Setup process; require vendor onboarding checklist completion.
- Ambiguous line items: deterministic branch to Coding queue; ask cost center owner to code line items.
What happens when it’s wrong (failure containment): a workflow tends to fail in predictable ways, such as bad extraction, missing fields, or integration errors. Containment is usually achieved through validation rules, exception queues, and approval gates before posting or paying.
Version B: AI agent implementation (perceive – plan – act – observe)
In an agent model, you define the goal and constraints, then let the agent decide the path.
- Perceive: read the invoice and any accompanying email thread, plus AP policies.
- Plan: decide what checks to run (vendor validation, duplicate check, PO match, coding) based on what’s present and what’s missing.
- Act (tool use): query vendor master; look up PO; pull receiving records; search past invoices; extract line items if needed.
- Observe: evaluate whether evidence supports safe progression or whether it must escalate.
- Iterate: if a check fails (missing PO, mismatch, ambiguous item), decide the next best step: request info, propose coding, or route to the right team.
- Stop: when the invoice is posted (if allowed), or when it creates a structured exception case with a recommended resolution and evidence.
Deterministic steps vs agent decisions (clear boundary):
- Deterministic steps: creating ERP records, starting an approval flow, posting status updates, sending notifications via approved templates, writing audit logs.
- Agent decisions: interpreting messy invoices, determining likely vendor/entity matches, selecting which checks to run first, choosing which workflow to call next, composing a case summary for a human.
How the agent handles the same exceptions
- Missing PO: agent checks whether it’s a non-PO invoice category based on vendor history and policy; if uncertain, it drafts an email requesting PO and opens an exception case with evidence.
- Mismatch totals: agent identifies whether mismatch is likely tax/shipping/partial receipt; it attaches supporting documents and recommends the next action (request updated invoice vs update receiving vs escalate).
- Duplicate suspicion: agent looks for similar invoices and compares context; if risk remains, it escalates with the comparison set rather than proceeding.
- Vendor not in master data: agent initiates the vendor setup workflow, pre-filling what it can from the invoice and communications, then requests missing compliance documents from the vendor contact.
- Ambiguous line items: agent proposes coding based on past invoices and cost center mappings, then routes to the cost center owner for approval.
What happens when it’s wrong (failure containment): the main risk is an incorrect decision (for example, misclassifying an exception or drafting an incorrect justification). Containment comes from strict tool permissions, approval gates, evidence requirements, and a “recommend first” mode for sensitive actions.
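A simple way to enforce the “recommend first” mode described above is to have the agent emit a structured recommendation rather than calling a write tool directly, with a deterministic approval workflow doing the execution. A minimal sketch, with hypothetical field and action names:

```python
# "Recommend first": for sensitive actions the agent produces a structured
# recommendation with evidence; a deterministic approval workflow executes it.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str                                    # e.g., "request_updated_invoice"
    target: str                                    # e.g., an invoice or case ID
    rationale: str                                 # short explanation for the approver
    evidence: list = field(default_factory=list)   # links to PO, receipts, policy excerpts

SENSITIVE_ACTIONS = {"post_invoice", "submit_payment", "write_off"}

def handle(rec: Recommendation) -> dict:
    if rec.action in SENSITIVE_ACTIONS:
        # The agent never executes these directly; open an approval task instead.
        return {"status": "pending_approval", "recommendation": rec}
    return {"status": "executed", "action": rec.action, "target": rec.target}
```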
When to use automation workflows (and when not to)
Workflow automation excels when you can specify the process as steps and decision rules, and when you need predictable outcomes.
Best-fit criteria for workflows
- Stable business rules and clear decision points.
- High volume and repeatability.
- Strict SLAs and predictable queue management.
- Strong compliance/audit expectations and change control.
- Well-defined integrations where “success” is unambiguous.
Common tasks that are ideal for workflows
- Data sync between systems (CRM – billing, HRIS – IAM).
- Standard approvals (amount-based thresholds, role-based routing).
- Scheduled report generation and distribution.
- Provisioning/deprovisioning with fixed checks and tickets.
- Ticket routing based on known fields and categories.
Workflow anti-patterns (where rule-based automation struggles)
- Highly variable, unstructured inputs (free-form emails, PDFs with inconsistent formats) where rules explode in complexity.
- Frequent edge cases that require interpretation, negotiation, or policy judgment.
You can still add AI without making the system agentic: use AI for extraction or classification, then keep the orchestration deterministic with explicit rules and approval gates.
When to use AI agents (and when not to)
Agents fit best when the goal is clear but the path is variable, and when success depends on interpreting unstructured information or coordinating across multiple tools.
Best-fit criteria for agents
- Heavy unstructured data (emails, documents, chats) that needs interpretation.
- Exception handling where “what to do next” changes case-by-case.
- Multi-step investigation across systems (search, retrieval, correlation).
- Cross-tool coordination (CRM + billing + support + knowledge base).
Agent-friendly examples
- Incident triage: gather signals, suggest likely root cause, draft next actions, open the right tickets.
- Sales Ops research: enrich accounts from internal notes and structured sources, draft summaries for reps.
- Customer issue resolution: assemble context, propose next steps, draft responses with evidence links.
- AP/AR exception handling: investigate mismatches and prepare a resolution packet for approval.
- Knowledge base maintenance: propose article updates from repeated ticket patterns (with human review).
When not to use full autonomy (and safer alternatives)
- High-impact irreversible actions (sending legal communications, terminating accounts, submitting payments): use an agent to draft and recommend, then require approval before execution.
- Strict determinism requirements where reproducibility is essential: keep a workflow as the execution layer; allow the agent only to classify, summarize, or route.
Autonomy levels matter: start with “recommend/draft” mode, then move to “execute” only for low-risk actions with tight constraints.
Hybrid architecture: agents orchestrating deterministic workflows (the practical middle path)
Most mature designs use an agent on top and deterministic workflows underneath. The agent decides what to do; workflows execute critical actions with consistent logging and approvals.
Three reference patterns:
- Router agent: classify the case and select the right workflow to run.
- Supervisor agent: monitor workflow runs, detect repeated failures, and decide whether to retry, escalate, or open an incident.
- Exception-handler agent: keep the “happy path” as a workflow; invoke the agent only when a known exception is hit.
Router pattern: simple sequence of events
- Trigger: new invoice arrives.
- Agent classifies invoice type (PO, non-PO, credit note) and identifies likely exception category.
- Agent selects a deterministic workflow: “PO invoice workflow” vs “vendor setup workflow” vs “exception case workflow.”
- Workflow executes actions (create ERP record, start approvals, route tasks) with standard logs.
- Agent writes a summary back to the case: what was run, what happened, what’s next, and what evidence was used.
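A minimal router sketch, under the assumption that the classification comes from a model call and the deterministic workflows are existing automations reachable through a single dispatch function (both stubbed here):

```python
# Router pattern sketch: the agent classifies the case, then hands execution
# to an existing deterministic workflow. classify_invoice() is a hypothetical
# model call; start_workflow() stands in for your automation platform's API.

WORKFLOW_BY_TYPE = {
    "po_invoice": "po-invoice-workflow",
    "non_po_invoice": "non-po-invoice-workflow",
    "credit_note": "credit-note-workflow",
    "vendor_unknown": "vendor-setup-workflow",
}

def classify_invoice(document: bytes) -> str:
    """Hypothetical AI step returning one of the keys above."""
    raise NotImplementedError

def start_workflow(name: str, payload: dict) -> dict:
    """Hypothetical call into the workflow platform (idempotent, logged)."""
    raise NotImplementedError

def route_invoice(document: bytes, metadata: dict) -> dict:
    invoice_type = classify_invoice(document)                    # agent decision
    workflow = WORKFLOW_BY_TYPE.get(invoice_type, "exception-case-workflow")
    run = start_workflow(workflow, {"metadata": metadata})       # deterministic execution
    # The agent writes a summary back to the case for auditability.
    return {"invoice_type": invoice_type, "workflow": workflow, "run_id": run.get("id")}
```

The design point is the asymmetry: the agent chooses which workflow to run, but record creation, approvals, and logging all happen inside the workflow.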
Interface contracts you’ll want in a hybrid design
- Input/output schemas: workflows should accept structured inputs (even if the agent started from unstructured data).
- Idempotency: repeated calls should not create duplicate invoices/tickets.
- Retries: workflows handle transient failures predictably; the agent should not “thrash” systems with repeated tool calls.
- Safe tool permissions: separate read vs write vs irreversible actions.
Permissioning example: the agent can read all invoices and vendor data, but can only submit a payment request by invoking an approval workflow that requires a human approver before execution.
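The interface contract between agent and workflow can enforce both idempotency and the approval requirement in one place. A sketch, assuming the workflow platform accepts an idempotency key and a structured payload (function names are illustrative):

```python
# Sketch of an interface contract: the agent can request a payment, but the call
# is idempotent and always lands in an approval workflow, never a direct payment.
import hashlib
import json

def start_workflow(name: str, payload: dict, idempotency_key: str) -> dict:
    """Hypothetical call into the workflow platform."""
    raise NotImplementedError

def idempotency_key(payload: dict) -> str:
    """Derive a stable key so retries never create duplicate requests."""
    canonical = json.dumps(payload, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def request_payment_approval(invoice_id: str, amount: float, currency: str) -> dict:
    payload = {"invoice_id": invoice_id, "amount": amount, "currency": currency}
    # The workflow itself requires a human approver before any payment executes.
    return start_workflow(
        "payment-approval-workflow",
        payload,
        idempotency_key=idempotency_key(payload),
    )
```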
For practical examples of this structure, see hybrid agent + workflow automation architecture examples.
Governance, compliance, and risk: auditability differences and guardrails that actually work
Workflows and agents have different governance needs. Workflows are easier to audit because they are designed around explicit steps, versioning, and reproducible execution paths.
Agents add a decision layer that can vary run-to-run. That doesn’t make them unusable in enterprise settings, but it does mean you need stronger controls and better observability.
Workflow governance (what “good” looks like)
- Version control and change approvals for workflow definitions.
- Role-based access to edit, run, and view results.
- Deterministic run logs (trigger, branches, actions, errors).
- Separation of duties for approval steps and sensitive integrations.
Agent governance (controls you should design in)
- Permission scoping: narrow tool access and separate read/write capabilities.
- Action approvals: require human approval for high-impact actions.
- Policy constraints: explicit rules the agent must follow (what it can’t do, what evidence it must attach).
- Logging: record prompts/inputs (with redaction as needed), tool calls, outputs, and approval decisions.
- Evaluation and testing: test sets for common cases and edge cases, plus regression checks as prompts/tools change.
- Incident response: playbooks for policy violations, incorrect actions, and data exposure concerns.
Risk management programs often emphasize characteristics such as validity and reliability, safety, security, accountability and transparency, explainability and interpretability, privacy, and fairness. Designing agent controls around these themes helps align technical choices with governance expectations.
High-risk action checklist (require approvals)
- Submitting or scheduling payments.
- Deleting records or making irreversible data changes.
- Sending legal, compliance, or HR-sensitive communications.
- Changing access permissions, disabling users, or altering security settings.
- Closing incidents or tickets where closure has contractual impact.
Guardrails that work in practice
- Human-in-the-loop checkpoints for sensitive actions.
- Allowlists/denylists for tools, endpoints, and operations.
- Rate limits and budgets (tool calls per run, time limits).
- Sandboxing for “draft” actions (create a draft email, not a sent email).
- Rollback strategies where possible (or compensating workflows).
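Several of these guardrails are easiest to enforce in one place, at the tool-call boundary. A minimal sketch, assuming a per-run budget and an explicit allowlist of operations (the allowlist entries and limits are placeholders):

```python
# Guardrails enforced at the tool-call boundary: allowlist, per-run budget,
# and "draft only" sandboxing for outbound email.

ALLOWED_OPERATIONS = {
    ("erp", "read_invoice"),
    ("erp", "read_po"),
    ("email", "create_draft"),      # drafts only; "send" is deliberately not allowlisted
    ("ticketing", "create_case"),
}
MAX_CALLS_PER_RUN = 20

class GuardrailViolation(Exception):
    pass

def execute_tool(system: str, operation: str, params: dict) -> dict:
    """Hypothetical permissioned tool layer."""
    raise NotImplementedError

def guarded_call(run_state: dict, system: str, operation: str, params: dict) -> dict:
    if (system, operation) not in ALLOWED_OPERATIONS:
        raise GuardrailViolation(f"operation not allowed: {system}.{operation}")
    if run_state.setdefault("calls", 0) >= MAX_CALLS_PER_RUN:
        raise GuardrailViolation("tool-call budget exhausted")
    run_state["calls"] += 1
    return execute_tool(system, operation, params)
```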
Sample agent audit log fields (what to capture)
- Timestamp
- Actor (agent name/version) and initiating user/system
- Goal / task type
- Inputs summary (with sensitive fields redacted)
- Tools called (system + operation)
- Parameters (redacted as needed)
- Result (success/failure + returned IDs)
- Evidence attachments (links to records used for the decision)
- Approval required? (yes/no)
- Approver identity and decision
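Captured as a structured record, one agent action might look like the example below; all field names, IDs, and values are illustrative.

```python
# Illustrative agent audit log record; field names and IDs are made up.
audit_record = {
    "timestamp": "2024-05-14T09:32:11Z",
    "actor": {"agent": "ap-exception-agent", "version": "1.4.2"},
    "initiated_by": "ap-inbox-trigger",
    "goal": "resolve_invoice_exception",
    "inputs_summary": "Invoice INV-10231 from vendor ACME, total mismatch vs PO",  # redacted as needed
    "tool_calls": [
        {"system": "erp", "operation": "read_po", "result": "success"},
        {"system": "erp", "operation": "read_receiving", "result": "success"},
    ],
    "result": {"status": "escalated", "case_id": "CASE-5521"},
    "evidence": ["erp://po/PO-8842", "erp://receipt/GR-1190"],
    "approval_required": True,
    "approver": {"user": "j.doe", "decision": "approved"},
}
```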
If you’re designing these controls, see governance and approval flows for AI-driven processes.
KPIs and monitoring: how to measure success for workflows vs agents
Workflows and agents need different dashboards. If you measure an agent like an RPA bot, you’ll miss quality and safety signals. If you measure a workflow like an agent, you’ll overcomplicate basic operations.
Workflow KPIs (operations-first)
- Throughput: number of cases completed per time period.
- Cycle time / throughput time: how long end-to-end completion takes on average.
- Step failure rate: how often specific steps fail (by integration, endpoint, or validation rule).
- SLA adherence: percent of cases completed within SLA.
- Queue time: time spent waiting between steps (often indicates approval bottlenecks).
- Rework rate: how often cases bounce back for correction.
- Cost per transaction: useful for program-level planning (ensure you define it consistently).
Agent KPIs (quality, autonomy, and safety)
- Task success rate: percent of tasks completed to acceptance criteria.
- Escalation rate: percent of tasks routed to humans (and why).
- Autonomy rate: share of tasks the agent executes vs only recommends/drafts.
- Tool-call error rate: failed tool calls due to permissions, bad parameters, timeouts.
- Invalid action rate: attempted actions that violate policy or schema.
- User satisfaction for drafts: simple feedback capture from the approvers/operators.
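Most of these agent KPIs can be computed from the run records you already log. A rough sketch, assuming each record carries a `status` and a `mode` field (names are illustrative):

```python
# Rough sketch: computing agent KPIs from run records.
# Assumes each record has "status" ("accepted", "escalated", "rejected")
# and "mode" ("executed" or "recommended").

def agent_kpis(runs: list[dict]) -> dict:
    total = len(runs) or 1                                    # avoid division by zero
    accepted = sum(r.get("status") == "accepted" for r in runs)
    escalated = sum(r.get("status") == "escalated" for r in runs)
    executed = sum(r.get("mode") == "executed" for r in runs)
    return {
        "task_success_rate": accepted / total,
        "escalation_rate": escalated / total,
        "autonomy_rate": executed / total,
    }
```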
Responsible AI guidance commonly recommends defining metrics for potential harms and using both manual and automated measurement to evaluate and monitor systems over time, including monitoring for regression as systems and usage evolve.
Monitoring and alerting (what’s different)
- Workflow alerts: focus on step failures, backlogs, and SLA risk.
- Agent alerts: focus on policy violations, disallowed tool attempts, unusual action patterns, and quality regressions.
Dashboard spec (8-12 metrics) with owners
- Workflow cycle time (Ops): average end-to-end completion time.
- Workflow step failure rate by integration (IT): failures per step/action.
- SLA adherence (Ops): % within SLA.
- Exception queue volume (Ops): count of items awaiting manual action.
- Rework rate (Ops): % returned for correction.
- Agent task success rate (Product/Ops): % accepted outcomes.
- Agent escalation rate by reason (Ops): missing data, low confidence, policy boundary.
- Agent policy violation attempts (Compliance/Security): blocked actions and categories.
- Agent tool-call error rate (IT): failed calls by tool/system.
- Approval turnaround time (Ops): time from draft to approval/deny.
Example alert rules
- Workflow alert: if a specific step fails repeatedly within a short interval, page IT and temporarily route items to manual processing.
- Agent alert: if the agent attempts a disallowed action (e.g., payment submission without approval), block execution and notify Compliance/Security with the trace.
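Both rules can be expressed as simple checks over recent events; the thresholds below are placeholders to tune for your volumes, and timestamps are assumed to be timezone-aware.

```python
# Illustrative alert rules; thresholds and window sizes are placeholders.
from datetime import datetime, timedelta, timezone

def workflow_step_alert(failure_times: list[datetime], threshold: int = 5,
                        window_minutes: int = 15) -> bool:
    """Page IT if a step failed `threshold` or more times within the window."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    return sum(t >= cutoff for t in failure_times) >= threshold

def agent_policy_alert(event: dict) -> bool:
    """Notify Compliance/Security whenever a disallowed action is attempted."""
    return event.get("type") == "disallowed_action_attempt"
```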
Implementation roadmap: how to introduce agents safely into an existing automation stack
If you already have workflows and bots in production, don’t rip-and-replace. Introduce agent capabilities where they reduce exception burden and improve triage, while keeping deterministic execution for critical steps.
Step 1: map the existing workflow and find exception hotspots
Document the current process: triggers, branches, exception queues, approval points, and where manual work happens. Identify which exceptions are driven by unstructured inputs (emails, PDFs, chat transcripts).
Step 2: start with AI steps inside workflows
Add AI where it’s low-risk and measurable: classification, extraction, summarization, and routing suggestions. Keep the workflow control flow deterministic.
Step 3: add an agent in “copilot” mode
Have the agent draft resolutions, summaries, and recommended next steps. Require humans to approve actions, and capture feedback for evaluation.
Step 4: move to an orchestrator pattern
Let the agent route to deterministic workflows for execution. Keep permissions tight: the agent decides, but workflows enforce approvals, validations, and idempotent execution.
Step 5: continuous improvement
Maintain test sets, run targeted adversarial testing for policy boundaries, review governance regularly, and conduct post-incident reviews to refine guardrails and monitoring.
Pilot scope example (with rollback plan)
- In scope: invoice intake triage, exception categorization, drafting exception summaries, routing to existing AP workflows.
- Out of scope: payment submission, vendor bank detail changes, automatic write-offs.
- Rollback plan: disable agent routing and revert to current deterministic routing rules; keep AI extraction step if it remains stable and validated.
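Rollback is simpler if agent routing sits behind a single switch from day one. A minimal sketch of that kind of flag; the flag and routing function names are illustrative.

```python
# Minimal rollback switch: when agent routing is disabled, fall back to the
# existing deterministic routing rules. Names are illustrative.

AGENT_ROUTING_ENABLED = True   # flip to False to roll back to deterministic routing

def agent_route(case: dict) -> str:
    """Hypothetical agent-based routing (selects a workflow to run)."""
    raise NotImplementedError

def legacy_rules_route(case: dict) -> str:
    """Existing deterministic routing rules."""
    raise NotImplementedError

def route(case: dict) -> str:
    if AGENT_ROUTING_ENABLED:
        return agent_route(case)
    return legacy_rules_route(case)
```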
Minimum required artifacts
- Process map (happy path + exceptions)
- Policy rules and approval thresholds
- Tool permissions matrix (read/write/irreversible)
- Evaluation checklist (quality, safety, and operational readiness)
Need a second set of eyes on your design? Request a workflow vs agent assessment (process mapping + hybrid architecture recommendation).
FAQ: AI agents vs automation workflows
What’s the difference between an AI agent and an automation workflow?
An automation workflow executes predefined steps with triggers and rules. An AI agent is goal-driven and can plan and choose actions dynamically (within constraints). In practice, workflows are predictable execution; agents are adaptive decisioning.
Are AI agents the same as automation?
No. Agents can automate work, but they use a different control model. A workflow automates by executing a known procedure. An agent automates by deciding a procedure case-by-case and then acting through tools.
Do AI agents replace workflow automation tools?
Usually no. Many reliable implementations use agents to orchestrate existing workflow automation and RPA, especially for exception handling and cross-system coordination.
When should I use a rule-based workflow instead of an AI agent?
Use a workflow when the process is fully specifiable as steps and decision rules, when auditability and reproducibility are critical, and when the cost of a wrong action is high.
Can AI agents orchestrate existing workflows and RPA bots?
Yes. A common pattern is a router agent that interprets the request and selects which deterministic workflow or bot to run, then writes back a summary. Example: agent detects “missing PO” and routes to the Missing PO workflow rather than trying to improvise changes in the ERP.
What makes an AI agent truly “agentic” (planning, tools, memory, feedback)?
An agent is “agentic” when it can plan a sequence of actions, use tools to execute those actions, retain or retrieve context as it works, and adjust based on results (feedback). Without tool use and a loop, many systems are better described as AI-assisted steps inside a workflow.
Is an “AI workflow” the same as an “AI agent”?
Not necessarily. Many “AI workflows” are deterministic workflows that include AI-assisted steps (like extraction or classification). An AI agent implies a goal-driven loop that can choose actions dynamically, which changes governance, monitoring, and risk controls.
Decision checklist: when to use AI agents vs automations
- Is the process fully specifiable as steps and decision rules (yes/no)?
- How often do exceptions occur that require judgment or interpretation?
- Are inputs mostly structured (forms/fields) or unstructured (emails, PDFs, chats)?
- Do you need strict determinism, audit trails, and change control (regulated/SOX/PCI/HIPAA-like constraints)?
- What is the blast radius if the system takes a wrong action (low/medium/high)?
- Do you already have reliable automations that an agent could orchestrate rather than replace?
Key takeaways
- Workflow automation excels when the steps and rules are known and stable (predictable execution).
- AI agents excel when the goal is clear but the path is variable (planning + tool use + iteration).
- Most real deployments work best as hybrids: agents decide and coordinate; workflows execute critical steps deterministically.
- Governance differs: workflows are easier to audit; agents need constraints, approvals, monitoring, and fallbacks.
- Measure success differently: workflow KPIs focus on throughput and error rate; agent KPIs focus on task success, autonomy quality, and safe escalation.
References
- https://www.nist.gov/itl/ai-risk-management-framework
- https://www.nist.gov/itl/ai-risk-management-framework/ai-risk-management-framework-faqs
- https://docs.automationanywhere.com/bundle/enterprise-v11.3/page/enterprise/topics/security-architecture/audit-logs.html
- https://docs.uipath.com/process-mining/automation-suite/2.2510/user-guide/generic-kpis
- https://docs.uipath.com/process-mining/automation-suite/2.2510/user-guide/o2c-kpis
- https://learn.microsoft.com/en-us/legal/cognitive-services/openai/overview
- https://learn.microsoft.com/en-us/answers/questions/2156197/best-practices-for-securing-azure-openai-with-conf
- https://azure.microsoft.com/en-us/blog/agent-factory-top-5-agent-observability-best-practices-for-reliable-ai/
- https://azure.microsoft.com/en-us/blog/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications/
