{
  "nbformat": 4,
  "nbformat_minor": 5,
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "name": "python",
      "version": "3.13.0"
    },
    "blog_metadata": {
      "topic": "How Microsoft Agent 365 changes enterprise AI governance",
      "slug": "how-microsoft-agent-365-changes-enterprise-ai-governance",
      "generated_by": "LinkedIn Post Generator + Azure OpenAI",
      "generated_at": "2026-05-03T14:49:35.525Z"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# How Microsoft Agent 365 changes enterprise AI governance\n",
        "\n",
        "This notebook turns the blog post into a hands-on governance validation exercise. It focuses on the shift from prompt-centric AI oversight to execution-centric controls such as identity, permissions, tool access, auditability, and approval gates. The examples use Python to model agent inventory, blast-radius classification, policy enforcement, telemetry, and cost governance."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "%pip install -q pandas matplotlib networkx"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import pandas as pd\n",
        "import matplotlib.pyplot as plt\n",
        "import networkx as nx\n",
        "from typing import Dict, Any"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Governance shift: from chat safety to execution control\n",
        "\n",
        "The blog argues that general availability (GA) changes the question from \"what did the model say?\" to \"what was the agent allowed to do?\" The diagram below recreates that control-plane view by showing how policy, identity, approved data, audit logs, and human approval gates sit between users and agent actions."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import matplotlib.pyplot as plt\n",
        "import networkx as nx\n",
        "\n",
        "G = nx.DiGraph()\n",
        "\n",
        "nodes = {\n",
        "    'Business User': (0, 1),\n",
        "    'Agent 365': (1.5, 1),\n",
        "    'Policy & Identity Layer': (3.5, 1),\n",
        "    'Approved Data Sources': (5.8, 1.8),\n",
        "    'Audit & Compliance Logs': (5.8, 1.0),\n",
        "    'Human Approval Gates': (5.8, 0.2),\n",
        "    'Grounded AI Response': (8.2, 1),\n",
        "    'Before: Shadow AI': (1.5, -1.0),\n",
        "    'Unapproved Apps': (3.5, -1.0),\n",
        "    'Data Leakage Risk': (5.8, -0.5),\n",
        "    'No Audit Trail': (5.8, -1.5),\n",
        "}\n",
        "\n",
        "edges = [\n",
        "    ('Business User', 'Agent 365', 'solid'),\n",
        "    ('Agent 365', 'Policy & Identity Layer', 'solid'),\n",
        "    ('Policy & Identity Layer', 'Approved Data Sources', 'solid'),\n",
        "    ('Policy & Identity Layer', 'Audit & Compliance Logs', 'solid'),\n",
        "    ('Policy & Identity Layer', 'Human Approval Gates', 'solid'),\n",
        "    ('Approved Data Sources', 'Grounded AI Response', 'solid'),\n",
        "    ('Human Approval Gates', 'Grounded AI Response', 'solid'),\n",
        "    ('Grounded AI Response', 'Business User', 'solid'),\n",
        "    ('Before: Shadow AI', 'Unapproved Apps', 'dashed'),\n",
        "    ('Unapproved Apps', 'Data Leakage Risk', 'dashed'),\n",
        "    ('Unapproved Apps', 'No Audit Trail', 'dashed'),\n",
        "]\n",
        "\n",
        "for src, dst, style in edges:\n",
        "    G.add_edge(src, dst, style=style)\n",
        "\n",
        "pos = nodes\n",
        "plt.figure(figsize=(14, 6))\n",
        "\n",
        "solid_edges = [(u, v) for u, v, d in G.edges(data=True) if d['style'] == 'solid']\n",
        "dashed_edges = [(u, v) for u, v, d in G.edges(data=True) if d['style'] == 'dashed']\n",
        "\n",
        "nx.draw_networkx_nodes(G, pos, node_color='#DCEEFF', node_size=2600, edgecolors='#2F5D8A')\n",
        "nx.draw_networkx_labels(G, pos, font_size=9)\n",
        "nx.draw_networkx_edges(G, pos, edgelist=solid_edges, arrows=True, width=2, edge_color='#2F5D8A')\n",
        "nx.draw_networkx_edges(G, pos, edgelist=dashed_edges, arrows=True, width=2, style='dashed', edge_color='#B04A4A')\n",
        "\n",
        "plt.title('Agent Governance as a Control Plane')\n",
        "plt.axis('off')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Practical validation dataset\n",
        "\n",
        "To test the blog's claims, we create a small synthetic enterprise inventory spanning Agent 365, Copilot extensions, Foundry-built agents, workflow automations, and unofficial scripts. This lets us validate discovery, classification, registration, API mediation, and approval logic."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "agents = [\n",
        "    {\n",
        "        'agent_id': 'A001',\n",
        "        'name': 'Sales Knowledge Assistant',\n",
        "        'platform': 'Agent 365',\n",
        "        'owner': 'Sales Ops',\n",
        "        'purpose': 'Answer sales policy questions',\n",
        "        'model': 'gpt-4.1',\n",
        "        'tools': ['search_docs'],\n",
        "        'data_sources': ['SharePoint Sales KB'],\n",
        "        'action_scope': 'read',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 1,\n",
        "        'business_criticality': 2,\n",
        "        'data_sensitivity': 2,\n",
        "        'external_connectivity': False,\n",
        "        'api_governed': True,\n",
        "        'registered': True,\n",
        "        'human_approval_required': False,\n",
        "        'monthly_tokens_k': 120,\n",
        "        'tool_invocations': 450,\n",
        "        'retrieval_gb': 12,\n",
        "        'workflow_runs': 0\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A002',\n",
        "        'name': 'CRM Update Agent',\n",
        "        'platform': 'Agent 365',\n",
        "        'owner': 'Revenue Systems',\n",
        "        'purpose': 'Update CRM records from meeting notes',\n",
        "        'model': 'gpt-4.1',\n",
        "        'tools': ['crm_api', 'meeting_parser'],\n",
        "        'data_sources': ['CRM', 'Meeting Notes'],\n",
        "        'action_scope': 'write',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 3,\n",
        "        'business_criticality': 4,\n",
        "        'data_sensitivity': 3,\n",
        "        'external_connectivity': False,\n",
        "        'api_governed': False,\n",
        "        'registered': True,\n",
        "        'human_approval_required': False,\n",
        "        'monthly_tokens_k': 480,\n",
        "        'tool_invocations': 1800,\n",
        "        'retrieval_gb': 24,\n",
        "        'workflow_runs': 220\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A003',\n",
        "        'name': 'Invoice Exception Resolver',\n",
        "        'platform': 'Foundry',\n",
        "        'owner': 'Finance Automation',\n",
        "        'purpose': 'Resolve invoice mismatches and trigger payment workflows',\n",
        "        'model': 'gpt-4.1',\n",
        "        'tools': ['erp_api', 'payment_flow'],\n",
        "        'data_sources': ['ERP', 'Invoices'],\n",
        "        'action_scope': 'approve',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 4,\n",
        "        'business_criticality': 5,\n",
        "        'data_sensitivity': 4,\n",
        "        'external_connectivity': False,\n",
        "        'api_governed': True,\n",
        "        'registered': True,\n",
        "        'human_approval_required': True,\n",
        "        'monthly_tokens_k': 650,\n",
        "        'tool_invocations': 2400,\n",
        "        'retrieval_gb': 18,\n",
        "        'workflow_runs': 540\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A004',\n",
        "        'name': 'HR Policy Copilot Plugin',\n",
        "        'platform': 'Copilot Extension',\n",
        "        'owner': 'HRIT',\n",
        "        'purpose': 'Answer HR policy questions',\n",
        "        'model': 'gpt-4o-mini',\n",
        "        'tools': ['search_hr_docs'],\n",
        "        'data_sources': ['HR Policy Library'],\n",
        "        'action_scope': 'read',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 1,\n",
        "        'business_criticality': 2,\n",
        "        'data_sensitivity': 3,\n",
        "        'external_connectivity': False,\n",
        "        'api_governed': True,\n",
        "        'registered': False,\n",
        "        'human_approval_required': False,\n",
        "        'monthly_tokens_k': 90,\n",
        "        'tool_invocations': 300,\n",
        "        'retrieval_gb': 8,\n",
        "        'workflow_runs': 0\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A005',\n",
        "        'name': 'Marketing Content Bot',\n",
        "        'platform': 'Unofficial Script',\n",
        "        'owner': 'Demand Gen',\n",
        "        'purpose': 'Generate campaign drafts and post to external tools',\n",
        "        'model': 'gpt-4o-mini',\n",
        "        'tools': ['cms_api', 'social_api'],\n",
        "        'data_sources': ['Brand Guidelines'],\n",
        "        'action_scope': 'write',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 3,\n",
        "        'business_criticality': 3,\n",
        "        'data_sensitivity': 2,\n",
        "        'external_connectivity': True,\n",
        "        'api_governed': False,\n",
        "        'registered': False,\n",
        "        'human_approval_required': False,\n",
        "        'monthly_tokens_k': 300,\n",
        "        'tool_invocations': 1600,\n",
        "        'retrieval_gb': 6,\n",
        "        'workflow_runs': 120\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A006',\n",
        "        'name': 'IT Reset Workflow Agent',\n",
        "        'platform': 'Power Automate',\n",
        "        'owner': 'IT Service Desk',\n",
        "        'purpose': 'Handle password reset and account unlock requests',\n",
        "        'model': 'gpt-4o-mini',\n",
        "        'tools': ['identity_api', 'ticketing_api'],\n",
        "        'data_sources': ['ITSM', 'Identity Directory'],\n",
        "        'action_scope': 'write',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 2,\n",
        "        'business_criticality': 4,\n",
        "        'data_sensitivity': 4,\n",
        "        'external_connectivity': False,\n",
        "        'api_governed': True,\n",
        "        'registered': True,\n",
        "        'human_approval_required': True,\n",
        "        'monthly_tokens_k': 220,\n",
        "        'tool_invocations': 2100,\n",
        "        'retrieval_gb': 10,\n",
        "        'workflow_runs': 900\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A007',\n",
        "        'name': 'Vendor Risk Reviewer',\n",
        "        'platform': 'Foundry',\n",
        "        'owner': 'Procurement Risk',\n",
        "        'purpose': 'Review vendor questionnaires and recommend approvals',\n",
        "        'model': 'gpt-4.1',\n",
        "        'tools': ['risk_db_api', 'document_parser'],\n",
        "        'data_sources': ['Vendor DB', 'Questionnaires'],\n",
        "        'action_scope': 'recommend',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 2,\n",
        "        'business_criticality': 4,\n",
        "        'data_sensitivity': 3,\n",
        "        'external_connectivity': True,\n",
        "        'api_governed': True,\n",
        "        'registered': True,\n",
        "        'human_approval_required': True,\n",
        "        'monthly_tokens_k': 410,\n",
        "        'tool_invocations': 980,\n",
        "        'retrieval_gb': 14,\n",
        "        'workflow_runs': 60\n",
        "    },\n",
        "    {\n",
        "        'agent_id': 'A008',\n",
        "        'name': 'Shadow Data Export Bot',\n",
        "        'platform': 'Unofficial Script',\n",
        "        'owner': 'Unknown',\n",
        "        'purpose': 'Export customer data for ad hoc analysis',\n",
        "        'model': 'unknown',\n",
        "        'tools': ['db_dump', 'file_share'],\n",
        "        'data_sources': ['Customer DB'],\n",
        "        'action_scope': 'delete',\n",
        "        'environment': 'prod',\n",
        "        'autonomy_level': 4,\n",
        "        'business_criticality': 5,\n",
        "        'data_sensitivity': 5,\n",
        "        'external_connectivity': True,\n",
        "        'api_governed': False,\n",
        "        'registered': False,\n",
        "        'human_approval_required': False,\n",
        "        'monthly_tokens_k': 50,\n",
        "        'tool_invocations': 40,\n",
        "        'retrieval_gb': 120,\n",
        "        'workflow_runs': 12\n",
        "    }\n",
        "]\n",
        "\n",
        "df = pd.DataFrame(agents)\n",
        "df"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Inventory first: discover agents through identity, APIs, and telemetry\n",
        "\n",
        "The blog recommends starting with discovery rather than policy memos. The code below simulates an inventory view and highlights the kinds of gaps that indicate shadow agents: missing registration, lack of API mediation, unknown ownership, and production deployment."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "inventory_summary = {\n",
        "    'total_agents': len(df),\n",
        "    'registered_pct': round(df['registered'].mean() * 100, 1),\n",
        "    'api_governed_pct': round(df['api_governed'].mean() * 100, 1),\n",
        "    'prod_agents': int((df['environment'] == 'prod').sum()),\n",
        "    'unofficial_agents': int(df['platform'].isin(['Unofficial Script']).sum())\n",
        "}\n",
        "\n",
        "shadow_signals = df[\n",
        "    (~df['registered']) |\n",
        "    (~df['api_governed']) |\n",
        "    (df['owner'].eq('Unknown'))\n",
        "][['agent_id', 'name', 'platform', 'owner', 'registered', 'api_governed', 'environment']]\n",
        "\n",
        "print('Inventory summary:')\n",
        "for k, v in inventory_summary.items():\n",
        "    print(f'- {k}: {v}')\n",
        "\n",
        "print('\\nPotential shadow or weakly governed agents:')\n",
        "display(shadow_signals.sort_values(['registered', 'api_governed']))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Classify by blast radius, not by hype\n",
        "\n",
        "A core recommendation is to tier governance based on consequence: data sensitivity, action scope, autonomy, business criticality, and external connectivity. The next cell computes a simple blast-radius score and maps agents into low, medium, and high risk tiers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "action_weight = {\n",
        "    'read': 1,\n",
        "    'recommend': 2,\n",
        "    'write': 3,\n",
        "    'approve': 4,\n",
        "    'delete': 5\n",
        "}\n",
        "\n",
        "def blast_radius_score(row):\n",
        "    score = 0\n",
        "    score += row['data_sensitivity'] * 2\n",
        "    score += row['business_criticality'] * 2\n",
        "    score += row['autonomy_level'] * 2\n",
        "    score += action_weight.get(row['action_scope'], 1) * 3\n",
        "    score += 3 if row['external_connectivity'] else 0\n",
        "    return score\n",
        "\n",
        "def classify_tier(score):\n",
        "    if score >= 28:\n",
        "        return 'high'\n",
        "    if score >= 18:\n",
        "        return 'medium'\n",
        "    return 'low'\n",
        "\n",
        "df['blast_radius_score'] = df.apply(blast_radius_score, axis=1)\n",
        "df['risk_tier'] = df['blast_radius_score'].apply(classify_tier)\n",
        "\n",
        "display(df[['agent_id', 'name', 'platform', 'action_scope', 'data_sensitivity', 'autonomy_level', 'business_criticality', 'external_connectivity', 'blast_radius_score', 'risk_tier']].sort_values('blast_radius_score', ascending=False))\n",
        "\n",
        "print('\\nRisk tier counts:')\n",
        "print(df['risk_tier'].value_counts())"
      ],
      "execution_count": null,
      "outputs": []
    },
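    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before trusting the tiers, it helps to assert a few invariants against the synthetic data: a read-only, low-sensitivity assistant should never outrank the delete-capable shadow bot, tier labels should come from the expected set, and delete-scoped high-autonomy agents should always land in the high tier. This is a minimal sketch; replace these invariants with your own when you tune the weights."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Minimal sanity checks for the blast-radius model (synthetic-data assumptions)\n",
        "scores = df.set_index('agent_id')['blast_radius_score']\n",
        "\n",
        "# Read-only KB assistant must score below the delete-capable shadow bot\n",
        "assert scores['A001'] < scores['A008'], 'read-only agent outranks shadow export bot'\n",
        "\n",
        "# Tier labels must come from the expected set\n",
        "assert set(df['risk_tier']) <= {'low', 'medium', 'high'}, 'unexpected tier label'\n",
        "\n",
        "# Delete-scoped, high-autonomy agents must be tiered high\n",
        "mask = (df['action_scope'] == 'delete') & (df['autonomy_level'] >= 4)\n",
        "assert (df.loc[mask, 'risk_tier'] == 'high').all(), 'delete-capable agent not high tier'\n",
        "\n",
        "print('Blast-radius invariants hold for the synthetic inventory')"
      ],
      "execution_count": null,
      "outputs": []
    },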
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Visualize the governance backlog\n",
        "\n",
        "This chart makes the operating issue visible: high-risk agents that are not registered or not behind governed APIs should move to the top of the remediation queue."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "plot_df = df.copy()\n",
        "plot_df['governance_gap'] = (~plot_df['registered']) | (~plot_df['api_governed'])\n",
        "colors = plot_df['governance_gap'].map({True: '#D9534F', False: '#5CB85C'})\n",
        "\n",
        "plt.figure(figsize=(10, 6))\n",
        "plt.scatter(plot_df['blast_radius_score'], plot_df['monthly_tokens_k'], s=180, c=colors)\n",
        "\n",
        "for _, row in plot_df.iterrows():\n",
        "    plt.text(row['blast_radius_score'] + 0.2, row['monthly_tokens_k'] + 5, row['agent_id'], fontsize=9)\n",
        "\n",
        "plt.xlabel('Blast Radius Score')\n",
        "plt.ylabel('Monthly Token Consumption (thousands)')\n",
        "plt.title('High Blast Radius + Governance Gaps = Priority')\n",
        "plt.grid(alpha=0.3)\n",
        "plt.show()\n",
        "\n",
        "plot_df[['agent_id', 'name', 'blast_radius_score', 'risk_tier', 'registered', 'api_governed', 'governance_gap']].sort_values(['governance_gap', 'blast_radius_score'], ascending=[False, False])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Federated operating model: lightweight registration with mandatory metadata\n",
        "\n",
        "The blog recommends a paved road: central policy, local delivery, shared telemetry. This cell validates whether each production agent has the minimum registration record needed for governance."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "required_fields = ['owner', 'purpose', 'model', 'tools', 'data_sources', 'action_scope', 'environment']\n",
        "\n",
        "def registration_completeness(row):\n",
        "    missing = []\n",
        "    for field in required_fields:\n",
        "        value = row[field]\n",
        "        if value is None:\n",
        "            missing.append(field)\n",
        "        elif isinstance(value, str) and value.strip().lower() in {'', 'unknown'}:\n",
        "            missing.append(field)\n",
        "        elif isinstance(value, list) and len(value) == 0:\n",
        "            missing.append(field)\n",
        "    return missing\n",
        "\n",
        "registration_report = []\n",
        "for _, row in df.iterrows():\n",
        "    missing = registration_completeness(row)\n",
        "    registration_report.append({\n",
        "        'agent_id': row['agent_id'],\n",
        "        'name': row['name'],\n",
        "        'registered': row['registered'],\n",
        "        'missing_required_metadata': missing,\n",
        "        'metadata_complete': len(missing) == 0\n",
        "    })\n",
        "\n",
        "registration_df = pd.DataFrame(registration_report)\n",
        "display(registration_df)\n",
        "\n",
        "print('Agents needing registration or metadata cleanup:')\n",
        "display(registration_df[(~registration_df['registered']) | (~registration_df['metadata_complete'])])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Policy sequence for trusted execution\n",
        "\n",
        "The second diagram from the blog describes a practical control sequence: authenticate, check permissions, apply policy, retrieve approved knowledge, require human approval for high-risk actions, and log decisions. The diagram below recreates that sequence; the cells that follow turn it into executable enforcement logic for sample requests."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import matplotlib.pyplot as plt\n",
        "import networkx as nx\n",
        "\n",
        "G = nx.DiGraph()\n",
        "\n",
        "nodes = {\n",
        "    'Request to Enterprise Agent': (0, 1),\n",
        "    'Authenticate User': (1.8, 1),\n",
        "    'Check Role + Permissions': (3.8, 1),\n",
        "    'Apply Governance Policies': (6.0, 1),\n",
        "    'Retrieve Approved Knowledge': (8.4, 1.6),\n",
        "    'Generate Response': (10.8, 1),\n",
        "    'Log Decision + Actions': (13.0, 1),\n",
        "    'Deliver Trusted Output': (15.2, 1),\n",
        "    'High-Risk Action?': (8.4, 0.1),\n",
        "    'Human-in-the-Loop Approval': (10.8, 0.1),\n",
        "}\n",
        "\n",
        "edges = [\n",
        "    ('Request to Enterprise Agent', 'Authenticate User'),\n",
        "    ('Authenticate User', 'Check Role + Permissions'),\n",
        "    ('Check Role + Permissions', 'Apply Governance Policies'),\n",
        "    ('Apply Governance Policies', 'Retrieve Approved Knowledge'),\n",
        "    ('Retrieve Approved Knowledge', 'Generate Response'),\n",
        "    ('Generate Response', 'Log Decision + Actions'),\n",
        "    ('Log Decision + Actions', 'Deliver Trusted Output'),\n",
        "    ('Apply Governance Policies', 'High-Risk Action?'),\n",
        "    ('High-Risk Action?', 'Human-in-the-Loop Approval'),\n",
        "    ('Human-in-the-Loop Approval', 'Generate Response'),\n",
        "]\n",
        "\n",
        "for src, dst in edges:\n",
        "    G.add_edge(src, dst)\n",
        "\n",
        "plt.figure(figsize=(16, 5))\n",
        "nx.draw_networkx_nodes(G, nodes, node_color='#E8F5E9', node_size=2600, edgecolors='#4E7D4E')\n",
        "nx.draw_networkx_labels(G, nodes, font_size=8)\n",
        "nx.draw_networkx_edges(G, nodes, arrows=True, width=2, edge_color='#4E7D4E')\n",
        "plt.title('Trusted Execution Flow for Enterprise Agents')\n",
        "plt.axis('off')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Simulate policy enforcement and approval gates\n",
        "\n",
        "This example turns the control sequence into executable logic. It shows how governance should evaluate identity, registration, API mediation, risk tier, and action scope before allowing an agent to act."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "users = {\n",
        "    'alice': {'role': 'sales_user', 'allowed_actions': ['read', 'recommend']},\n",
        "    'bob': {'role': 'finance_manager', 'allowed_actions': ['read', 'recommend', 'write', 'approve']},\n",
        "    'carol': {'role': 'it_admin', 'allowed_actions': ['read', 'write']},\n",
        "}\n",
        "\n",
        "def evaluate_request(user_id: str, agent_id: str, requested_action: str) -> Dict[str, Any]:\n",
        "    user = users.get(user_id)\n",
        "    agent = df.set_index('agent_id').loc[agent_id].to_dict()\n",
        "    decision_log = []\n",
        "\n",
        "    if not user:\n",
        "        return {'allowed': False, 'reason': 'authentication_failed', 'log': ['User not found']}\n",
        "    decision_log.append(f'Authenticated user {user_id} as role {user[\"role\"]}')\n",
        "\n",
        "    if requested_action not in user['allowed_actions']:\n",
        "        decision_log.append(f'Permission denied for action {requested_action}')\n",
        "        return {'allowed': False, 'reason': 'permission_denied', 'log': decision_log}\n",
        "    decision_log.append(f'User permission check passed for action {requested_action}')\n",
        "\n",
        "    if not agent['registered']:\n",
        "        decision_log.append('Agent is not registered')\n",
        "        return {'allowed': False, 'reason': 'unregistered_agent', 'log': decision_log}\n",
        "    decision_log.append('Agent registration check passed')\n",
        "\n",
        "    if not agent['api_governed'] and requested_action in {'write', 'approve', 'delete'}:\n",
        "        decision_log.append('High-impact action blocked because tool access is not API governed')\n",
        "        return {'allowed': False, 'reason': 'api_mediation_required', 'log': decision_log}\n",
        "    decision_log.append('API governance check passed')\n",
        "\n",
        "    risk_tier = agent['risk_tier']\n",
        "    decision_log.append(f'Risk tier is {risk_tier}')\n",
        "\n",
        "    if risk_tier == 'high' or agent['human_approval_required'] or requested_action in {'approve', 'delete'}:\n",
        "        decision_log.append('Human approval required before execution')\n",
        "        return {'allowed': True, 'reason': 'approval_required', 'log': decision_log}\n",
        "\n",
        "    decision_log.append('Approved for execution')\n",
        "    return {'allowed': True, 'reason': 'approved', 'log': decision_log}\n",
        "\n",
        "scenarios = [\n",
        "    ('alice', 'A001', 'read'),\n",
        "    ('alice', 'A002', 'write'),\n",
        "    ('bob', 'A003', 'approve'),\n",
        "    ('carol', 'A006', 'write'),\n",
        "    ('bob', 'A008', 'delete'),\n",
        "]\n",
        "\n",
        "results = []\n",
        "for user_id, agent_id, action in scenarios:\n",
        "    outcome = evaluate_request(user_id, agent_id, action)\n",
        "    results.append({\n",
        "        'user': user_id,\n",
        "        'agent_id': agent_id,\n",
        "        'requested_action': action,\n",
        "        'allowed': outcome['allowed'],\n",
        "        'reason': outcome['reason'],\n",
        "        'log': ' | '.join(outcome['log'])\n",
        "    })\n",
        "\n",
        "results_df = pd.DataFrame(results)\n",
        "display(results_df)"
      ],
      "execution_count": null,
      "outputs": []
    },
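    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Execution-centric governance depends on decision logs that survive the session. As an illustrative sketch (the record schema below is hypothetical, not an Agent 365 log format), the next cell serializes each simulated decision into a JSON-lines audit record with a timestamp, actor, agent, requested action, and the full decision trail."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import json\n",
        "from datetime import datetime, timezone\n",
        "\n",
        "# Hypothetical audit-record schema for the simulated decisions above\n",
        "audit_records = [\n",
        "    {\n",
        "        'timestamp': datetime.now(timezone.utc).isoformat(),\n",
        "        'actor': r['user'],\n",
        "        'agent_id': r['agent_id'],\n",
        "        'requested_action': r['requested_action'],\n",
        "        'allowed': r['allowed'],\n",
        "        'reason': r['reason'],\n",
        "        'decision_trail': r['log'].split(' | ')\n",
        "    }\n",
        "    for r in results\n",
        "]\n",
        "\n",
        "audit_jsonl = '\\n'.join(json.dumps(rec) for rec in audit_records)\n",
        "print(audit_jsonl[:600])"
      ],
      "execution_count": null,
      "outputs": []
    },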
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Shared telemetry: measure registration, mediation, review coverage, and spend visibility\n",
        "\n",
        "The blog suggests tracking hard numbers in the first 90 days. This cell computes a small governance scorecard aligned to those recommendations."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "high_risk = df[df['risk_tier'] == 'high']\n",
        "metrics = {\n",
        "    '% agents registered': round(df['registered'].mean() * 100, 1),\n",
        "    '% behind governed APIs': round(df['api_governed'].mean() * 100, 1),\n",
        "    '% high-risk agents reviewed': round((high_risk['human_approval_required'].mean() * 100) if len(high_risk) else 0, 1),\n",
        "    # Trivially 100 here because the synthetic inventory carries every cost field; real inventories rarely do\n",
        "    '% spend visibility by agent class': 100.0 if {'monthly_tokens_k', 'tool_invocations', 'retrieval_gb', 'workflow_runs'}.issubset(df.columns) else 0.0\n",
        "}\n",
        "\n",
        "scorecard = pd.DataFrame(list(metrics.items()), columns=['metric', 'value'])\n",
        "display(scorecard)\n",
        "\n",
        "by_class = df.groupby(['platform', 'risk_tier']).agg(\n",
        "    agents=('agent_id', 'count'),\n",
        "    tokens_k=('monthly_tokens_k', 'sum'),\n",
        "    tool_invocations=('tool_invocations', 'sum'),\n",
        "    retrieval_gb=('retrieval_gb', 'sum'),\n",
        "    workflow_runs=('workflow_runs', 'sum')\n",
        ").reset_index()\n",
        "\n",
        "display(by_class)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Cost governance is now agent governance\n",
        "\n",
        "The blog links execution controls to cost controls. The next cell estimates a simple monthly cost proxy and flags waste patterns such as high token use, excessive tool invocation, broad retrieval volume, and workflow growth."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "cost_model = df.copy()\n",
        "\n",
        "cost_model['token_cost'] = cost_model['monthly_tokens_k'] * 0.02\n",
        "cost_model['tool_cost'] = cost_model['tool_invocations'] * 0.001\n",
        "cost_model['retrieval_cost'] = cost_model['retrieval_gb'] * 0.15\n",
        "cost_model['workflow_cost'] = cost_model['workflow_runs'] * 0.01\n",
        "cost_model['estimated_monthly_cost'] = cost_model[['token_cost', 'tool_cost', 'retrieval_cost', 'workflow_cost']].sum(axis=1)\n",
        "\n",
        "cost_model['waste_signal'] = (\n",
        "    (cost_model['monthly_tokens_k'] > 400) |\n",
        "    (cost_model['tool_invocations'] > 1500) |\n",
        "    (cost_model['retrieval_gb'] > 50) |\n",
        "    (cost_model['workflow_runs'] > 500)\n",
        ")\n",
        "\n",
        "cols = ['agent_id', 'name', 'risk_tier', 'monthly_tokens_k', 'tool_invocations', 'retrieval_gb', 'workflow_runs', 'estimated_monthly_cost', 'waste_signal']\n",
        "display(cost_model[cols].sort_values('estimated_monthly_cost', ascending=False))\n",
        "\n",
        "plt.figure(figsize=(10, 5))\n",
        "plt.bar(cost_model['agent_id'], cost_model['estimated_monthly_cost'], color=['#D9534F' if x else '#5BC0DE' for x in cost_model['waste_signal']])\n",
        "plt.title('Estimated Monthly Cost by Agent')\n",
        "plt.xlabel('Agent ID')\n",
        "plt.ylabel('Estimated Cost (arbitrary units)')\n",
        "plt.grid(axis='y', alpha=0.3)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
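    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Aggregate cost hides efficiency problems. As a follow-on sketch using the same arbitrary rates, normalizing cost by workflow runs surfaces agents that are expensive per unit of work rather than merely expensive in total."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Unit economics: cost per workflow run, for agents that run workflows at all\n",
        "unit = cost_model[cost_model['workflow_runs'] > 0].copy()\n",
        "unit['cost_per_workflow_run'] = (unit['estimated_monthly_cost'] / unit['workflow_runs']).round(3)\n",
        "display(unit[['agent_id', 'name', 'risk_tier', 'estimated_monthly_cost', 'workflow_runs', 'cost_per_workflow_run']].sort_values('cost_per_workflow_run', ascending=False))"
      ],
      "execution_count": null,
      "outputs": []
    },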
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 90-day remediation plan generator\n",
        "\n",
        "To make the notebook actionable, this cell produces a prioritized remediation queue based on the blog's recommended sequence: inventory, classify, register, mediate tool access, and stand up shared telemetry with review coverage."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "def remediation_actions(row):\n",
        "    actions = []\n",
        "    if not row['registered']:\n",
        "        actions.append('Register agent with mandatory metadata')\n",
        "    if not row['api_governed']:\n",
        "        actions.append('Put tool access behind governed APIs')\n",
        "    if row['risk_tier'] == 'high' and not row['human_approval_required']:\n",
        "        actions.append('Add human approval gate for high-risk actions')\n",
        "    if row['owner'] == 'Unknown':\n",
        "        actions.append('Assign accountable owner and escalation path')\n",
        "    if row['external_connectivity']:\n",
        "        actions.append('Review external connectivity and data egress controls')\n",
        "    return actions\n",
        "\n",
        "plan = df.copy()\n",
        "plan['remediation_actions'] = plan.apply(remediation_actions, axis=1)\n",
        "plan['priority_score'] = plan['blast_radius_score'] + (~plan['registered']).astype(int) * 5 + (~plan['api_governed']).astype(int) * 5\n",
        "plan = plan[['agent_id', 'name', 'platform', 'risk_tier', 'priority_score', 'remediation_actions']].sort_values('priority_score', ascending=False)\n",
        "\n",
        "display(plan)\n",
        "\n",
        "print('Top 3 priorities for the next 90 days:')\n",
        "for _, row in plan.head(3).iterrows():\n",
        "    print(f\"- {row['agent_id']} {row['name']}: {', '.join(row['remediation_actions']) if row['remediation_actions'] else 'No immediate action'}\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Sources referenced in the blog\n",
        "\n",
        "- Microsoft named a Leader in the IDC MarketScape: Worldwide API Management 2026 Vendor Assessment\n",
        "- OpenAI's GPT-5.5 in Microsoft Foundry: Frontier intelligence on an enterprise-ready platform\n",
        "- Introducing Azure Accelerate for Databases: Modernize your data for AI with experts and investments\n",
        "- Cloud Cost Optimization: How to maximize ROI from AI, manage costs, and unlock real business value\n",
        "- Microsoft Sovereign Private Cloud scales to thousands of nodes with Azure Local"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Summary\n",
        "\n",
        "This notebook validates the blog's central claim: enterprise AI governance changes when agents move from answering questions to taking actions across systems. The practical controls that matter most are inventory, identity, permissions, API mediation, blast-radius classification, approval gates, telemetry, and cost visibility.\n",
        "\n",
        "## Next Steps\n",
        "\n",
        "1. Replace the synthetic dataset with your real agent inventory from Entra, API gateways, Foundry, Copilot extensions, and automation platforms.\n",
        "2. Tune the blast-radius scoring model to match your regulatory, operational, and business context.\n",
        "3. Connect the policy simulation to real approval workflows, audit logs, and cost telemetry.\n",
        "4. Track 90-day metrics: registration coverage, API mediation coverage, high-risk review completion, and spend visibility by agent class."
      ]
    }
  ]
}