Manage Interactions

Workforce AI Security enforces security policies across several planes that operate on all Manage Interactions pages: Access, Chats, and Agents. These planes work together to provide layered protection for AI-driven workflows.

Policy configuration is hierarchical:

  1. App-level Access Policy defines the basic permissions for apps: which applications users can reach and what each app is allowed to do.

  2. Agent-level Access Policy (specific to the Agentic Platform) defines agent behavior: which operations an agent is allowed to perform.

  3. Data Loss Prevention (DLP) / Chat Policy defines rules for sensitive data and determines what must be blocked. At this level, you configure data-type restrictions, such as blocking specific information types, content categories, or other sensitive data.

Policy Layer Overview

Workforce AI Security uses a layered policy model to allow flexible governance depending on the use case.

1. Access Policies — Application-Level Control

These determine whether a user is allowed to access a specific AI application.

Examples:

  • Allow all AI apps except a blocklisted set

  • Allow only approved AI tools (allow-list mode)

  • Apply different rules to corporate vs. personal AI accounts

2. Chats / DLP Policies — Content-Level Control

These policies inspect the content being sent to AI systems and enforce actions such as:

  • Allow

  • Ask (require confirmation)

  • Block

  • Prevent

  • Detect (log only)

They use data types and classification rules to identify sensitive or regulated information.

3. Agent Policies — Tool-Level Control for MCP Agents

These policies govern the tools, operations, and external resources exposed by MCP servers. Administrators can:

  • Restrict or allow MCP tools

  • Enforce URL and file reputation checks

  • Apply prompt injection protection

  • Apply content moderation controls

This prevents unauthorized or risky operations from being executed by AI agents.

Access

When creating an Access policy, define which applications should be allowed or blocked according to organizational needs and practical use cases. Instead of applying blanket restrictions, consider the context in which each application is used. For example, you might allow access to ChatGPT for marketing content generation but block it for source code development to prevent intellectual property exposure. This approach ensures policies are tailored to business requirements while maintaining security.

In the Access page, select applications, assign rules to user groups, and set enforcement actions such as Allow or Block. Use filters like risk level and business function to fine-tune your policy for different scenarios.

To add a new rule:

  1. From the left menu, select Workforce > Manage Interactions > Access.

  2. On the toolbar, click Create new.

  3. On the right pane, edit the new rule:

    1. Enter the rule name.

    2. Set the Active slider to ON (green).

    3. Select Entire organization to apply the rule for all users.

    4. Select Selected users and groups to set the rule granularity:

      • Select the relevant groups from the list. The groups appear as defined in Active Directory.

      • Select the relevant users from the list. The users appear as defined in Active Directory.

    5. Set the destination where the rule applies:

      • Any - for all applications

      • Applications - select one or more discovered AI tools

    6. Select the Action to perform when the rule is triggered:

      • Allow

      • Ask

      • Block

    7. Select to enable or disable Logging.

  4. Click Save.
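Conceptually, an Access rule created with the steps above combines a scope, a destination, and an action. The sketch below represents such a rule as a plain data structure; the field names and values are illustrative only, not an export format of the product:

```python
# Hypothetical representation of an Access rule; field names are
# examples for illustration, not the product's actual configuration schema.
access_rule = {
    "name": "Block unapproved AI tools",
    "active": True,
    "scope": {"type": "selected", "groups": ["Marketing"], "users": []},
    "destination": {"type": "applications", "apps": ["ChatGPT"]},
    "action": "Block",   # one of: Allow, Ask, Block
    "logging": True,
}
```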

Chats

The Chats page focuses on interactions that take place within chat-based AI services, including web and desktop versions of conversational tools. It provides summaries of chat sessions, along with indicators that highlight prompt activity and potential risk. For each interaction, the view shows the results of applied policies, making it easier to understand how enforcement is being carried out across conversational AI usage.

Administrators use this area to apply DLP policies that prevent sensitive information from being included in prompts, to block or limit interactions that are considered risky, and to monitor overall compliance for chat-based AI tools.

To add a new rule:

  1. From the left menu, select Workforce > Manage Interactions > Chats.

  2. On the toolbar, click Create new.

  3. On the right pane, edit the new rule:

    1. Enter the rule name.

    2. Set the Active slider to ON (green).

    3. Select the event type:

      • Prompt

      • File Upload

      • Paste

    4. Select Entire organization to apply the rule for all users.

    5. Select Selected users and groups to set the rule granularity:

      • Select the relevant groups from the list. The groups appear as defined in Active Directory.

      • Select the relevant users from the list. The users appear as defined in Active Directory.

    6. Set the destination:

      • Any platform - For all applications.

      • Selected platforms - Select one or more standalone discovered AI tools.

      • Managed platforms - Select one or more application platforms managed by your organization. To set a platform as managed, see Managed Applications.

    7. From the Data Types list, select the sensitive data types you want the rule to monitor. If you need help choosing the appropriate categories, use the DLP details button to refine your selection. For more information about Data Types, see Data Types Classification.

      Note -

      Data Type selection is limited to a maximum of 100 Data Types across all policies in an account.

      To ensure effective enforcement, select the Data Types that are most relevant to your organization’s data protection needs.

    8. Select the Action to perform when the rule is triggered:

      • Allow

      • Ask

      • Block

      • Prevent

      • Detect

      • Redact (not supported for files)

    9. Select to enable or disable Logging.

    10. Optionally, add a comment.

  4. Click Save.
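To illustrate the effect of the Redact action on prompt text, the following is a minimal regex-based masking sketch. The pattern and function are toy examples only; the product's Data Types classifiers are far richer than a single regular expression:

```python
import re

# Toy example of redaction: mask US-SSN-shaped numbers in a prompt
# before it is sent on. Illustrative only, not the product's classifier.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def redact(prompt: str) -> str:
    """Return the prompt with SSN-shaped matches masked."""
    return SSN_PATTERN.sub("[REDACTED]", prompt)
```

With this sketch, a prompt containing an SSN-shaped number reaches the AI tool with the match replaced by a placeholder, while clean prompts pass through unchanged.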

Agents

The Agents view provides oversight of AI agents that operate through MCP servers and the tools they expose to users or automated workflows. It displays active MCP servers, the tools associated with them, and metrics that describe their behavior, including create, read, update, and delete operations. The view also offers risk analysis that helps administrators understand the capabilities of each agent and whether those capabilities introduce potential exposure.

From here, administrators can restrict access to tools that represent higher levels of risk, set policies that shape acceptable agent behavior, and monitor compliance for automated, agent-driven activity.

To add a new rule:

  1. From the left menu, select Workforce > Manage Interactions > Agents.

  2. On the toolbar, click Create new.

  3. On the right pane, edit the new rule:

    1. Enter the rule name.

    2. Set the Active slider to ON (green).

    3. Select the Source to define which users and groups the policy applies to:

      • Entire organization

      • Selected users and groups to set the rule granularity:

        • Select the relevant groups from the list. The groups appear as defined in Active Directory.

        • Select the relevant users from the list. The users appear as defined in Active Directory.

    4. Specify which MCP server(s) this policy governs:

      • All MCP servers

      • Select a server from the list

      • Enter the server name and type according to the configuration

    5. In the Operations & Tools selection section, select the relevant tools to allow or to block.

    6. Select a relevant operating system.

    7. Select Platforms.

    8. Select the Action to perform when the rule is triggered:

      • Allow

      • Ask

      • Block

    9. Select to enable or disable Logging.

  4. Click Save.
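An Agent rule like the one above effectively maps an MCP tool call to an action. The sketch below shows that matching logic under assumed names; the rule structure and function are hypothetical, not the product's schema:

```python
# Hypothetical check of an Agent rule against an MCP tool call;
# all names below are illustrative, not the product's actual schema.
agent_rule = {
    "mcp_servers": ["filesystem"],            # or "all" for every server
    "blocked_tools": {"delete_file", "write_file"},
    "action": "Block",                        # Allow, Ask, or Block
}


def tool_action(server: str, tool: str, rule: dict) -> str:
    """Return the action for a tool call, based on a single rule."""
    applies = rule["mcp_servers"] == "all" or server in rule["mcp_servers"]
    if applies and tool in rule["blocked_tools"]:
        return rule["action"]
    return "Allow"
```

In this sketch, destructive operations (delete, write) on the matched server are blocked while read-style tools pass through.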

Policy Actions

The following table summarizes the available policy actions:

| Policy  | Short Description         | Behavior                                                                           | Data Flow      | Data Control    |
|---------|---------------------------|------------------------------------------------------------------------------------|----------------|-----------------|
| Allow   | Always allow the action   | Accepts the entered data without restrictions. Action proceeds normally.           | Allowed        | Not restricted  |
| Ask     | User must confirm         | Prompts the user to approve or cancel the action before proceeding.                | Conditional    | Conditional     |
| Block   | Do not allow the action   | Rejects the entered data and stops the action.                                     | Not allowed    | Attempt blocked |
| Detect  | Log the event only        | Identifies and records the data event without changing or interrupting the action. | Allowed        | Not restricted  |
| Prevent | Strictly block the action | Actively stops the action and may disable the ability to submit data.              | Not allowed    | Action disabled |
| Redact  | Remove sensitive data     | Allows the action but removes or masks sensitive information before processing.    | Sanitized only | Sensitive data  |
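The data-flow behavior of the six actions above can be restated as a small mapping, which is sometimes useful when reasoning about what a rule will actually let through. This mapping simply restates the reference material; it is not product code:

```python
from enum import Enum


# The six policy actions described above; this mapping restates their
# data-flow behavior for illustration only, it is not product code.
class Action(Enum):
    ALLOW = "Allow"
    ASK = "Ask"
    BLOCK = "Block"
    DETECT = "Detect"
    PREVENT = "Prevent"
    REDACT = "Redact"


DATA_FLOWS = {
    Action.ALLOW: True,     # not restricted
    Action.ASK: None,       # conditional on user confirmation
    Action.BLOCK: False,    # attempt blocked
    Action.DETECT: True,    # logged only, not interrupted
    Action.PREVENT: False,  # action disabled
    Action.REDACT: True,    # proceeds with sanitized content only
}
```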