How Privacera AI Governance Supports AI TRiSM


Introduction

A recent Gartner survey revealed that 29% of organizations have experienced an AI security breach. Analysts Avivah Litan and Jeremy D'Hoinne attribute these breaches to internal or external data compromises and malicious attacks. At the same time, they emphasize that Generative AI has transformed data into a new security attack surface, making AI security a top board concern. Consequently, they advocate for increased funding in AI governance, security, privacy, and risk management, arguing that new measures are essential for managing the trust, risk, and security of AI deployments.

In this blog, I will delve into Gartner's proposed solution, TRiSM (Trust, Risk, and Security Management), and explore how Privacera's AI Governance framework supports TRiSM's requirements. In alignment with Gartner's definition of AI TRiSM, Privacera enhances AI security, privacy, and risk management, ensuring robust governance and protection against AI-related threats.

Gartner Proposed Solution Framework

Gartner suggests that addressing AI security issues requires a comprehensive approach involving content anomaly detection, data protection, and application protection. Named TRiSM, this framework's capabilities encompass enforcement of acceptable enterprise use cases, keyword matching, rule engines, and AI-based detection. Key features include anomaly detection, deep inspection, behavioral profiling, data obfuscation, and correlation with threat detection. These measures ensure robust AI governance and mitigate risks associated with AI deployment. I will explore how Privacera's AI Governance (PAIG) supports these requirements and enforces AI security, privacy, and risk management.

How PAIG Delivers TRiSM

PAIG bolsters the AI TRiSM framework by supporting critical functions through three key capabilities: enforcing and maintaining privacy controls on vector databases and enrichments, securing model inputs and outputs, and providing comprehensive compliance monitoring. These capabilities ensure robust data protection, enhanced AI security, and adherence to regulatory standards, aligning seamlessly with TRiSM's objectives for managing trust, risk, and security in AI deployments.

Digging in now, PAIG enables fine-grained data access and privacy controls for vector databases and embeddings. It retains and replicates all access policies from source systems into the vector database, ensuring consistent governance across data domains. Fine-grained authorization is implemented to comply with regulatory and compliance requirements.

During retrieval, PAIG filters results, returning only the data chunks for which users or groups have permissions and entitlements to access. This not only ensures users receive appropriate information but also limits the amount of data processed and returned, enhancing security and efficiency.
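The retrieval-time filtering described above can be illustrated with a minimal sketch. This is not PAIG's actual API; the `Chunk` schema, the group tags, and the filtering function are all hypothetical, standing in for entitlements replicated from a source system:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A retrieved vector-store chunk with entitlement tags (hypothetical schema)."""
    text: str
    allowed_groups: set = field(default_factory=set)

def filter_by_entitlement(results, user_groups):
    """Return only chunks that at least one of the user's groups may access."""
    return [c for c in results if c.allowed_groups & user_groups]

# Hypothetical results of a vector similarity search
results = [
    Chunk("Q3 revenue summary", {"finance", "exec"}),
    Chunk("Patient trial notes", {"clinical"}),
    Chunk("Public press release", {"everyone"}),
]

# A user in the 'finance' and 'everyone' groups never sees the clinical chunk
visible = filter_by_entitlement(results, {"finance", "everyone"})
print([c.text for c in visible])
```

Because unauthorized chunks are dropped before they reach the model's context window, the model cannot leak data the user was never entitled to see, and less data is processed per request.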

Additionally, PAIG manages GenAI access and security through fine-grained metadata filtering, adding specificity based on tag values. PAIG employs context-aware data protection by inspecting user-prompted queries and masking or redacting sensitive data before it enters the model. Attribute-Based Access Control (ABAC) ensures sensitive data in model outputs are masked, allowing users to see only authorized information. User prompts are inspected, and unauthorized queries that could expose sensitive data are denied. This approach safeguards data exposure while maintaining stringent security and privacy controls.
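The masking and prompt-inspection behavior described above can be sketched as follows. This is an illustrative simplification, not PAIG's implementation: the regex-based classifiers, the `may_view` attribute set, and both function names are assumptions, and real systems classify sensitive data with far more robust methods than regexes:

```python
import re

# Hypothetical tag-to-pattern registry; stands in for upstream data classification
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_output(text, user_attrs):
    """Mask any sensitive value class the user's attributes do not entitle them to view."""
    for tag, pattern in SENSITIVE_PATTERNS.items():
        if tag not in user_attrs.get("may_view", set()):
            text = pattern.sub(f"<<{tag}>>", text)
    return text

def inspect_prompt(prompt, user_attrs):
    """Deny prompts that explicitly probe for a restricted data class."""
    if "SSN" not in user_attrs.get("may_view", set()):
        if any(term in prompt.lower() for term in ("ssn", "social security")):
            raise PermissionError("query denied: requests restricted data")
    return prompt

# A user entitled to see emails but not SSNs gets a masked model response
reply = mask_output(
    "Contact jane@example.com, SSN 123-45-6789",
    {"may_view": {"EMAIL"}},
)
print(reply)
```

The same attribute set drives both controls: output masking limits what authorized queries reveal, while prompt inspection rejects queries that target restricted data outright.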

A Real World Example

This use case involved applying GenAI to data documenting patient interactions with new drugs or therapies. The pharmaceutical customer sought to harness the power of GenAI to streamline the analysis of rich, unstructured data from drug trials, enhancing researcher efficiency and accelerating access to crucial information. By automating the synthesis of conclusions from this data, GenAI democratized access to this information, eliminating the need for specialized tools. However, this potential comes with the challenge of managing the sensitive nature of patient data while reaping the productivity and analytical benefits of automation.

Integrating PAIG into this pharmaceutical data analysis process addressed the security and privacy concerns arising from the inclusion of protected health information (PHI). To mitigate these risks, PAIG established governance and controls to ensure safe connection of life science data to GenAI systems. Collaborating with Privacera, the customer acquired a secure method to transfer data from source systems like SharePoint and Confluence into GenAI, using attribute properties to maintain data integrity and confidentiality. This approach enables the identification of critical patterns, such as the 'purple toes' phenomenon, while ensuring users only access authorized information. Consequently, this life science company can safely utilize GenAI to alert patients and researchers about new therapy issues and opportunities, all while adhering to stringent data governance standards.

Parting Words

Gartner has found that AI security breaches are taking place, highlighting the urgent need for robust AI Trust, Risk, and Security Management (TRiSM). Analysts Avivah Litan and Jeremy D’Hoinne emphasize that Generative AI is the next attack surface, necessitating enhanced governance and protection measures. TRiSM responds to this risk with anomaly detection, deep inspection, behavioral profiling, data obfuscation, and threat detection correlation.

PAIG supports TRiSM objectives by enforcing fine-grained data access and privacy controls, retaining source system policies, and ensuring regulatory compliance. PAIG manages GenAI access and security with metadata filtering, context-aware data protection, and Attribute-Based Access Control (ABAC). The latter masks sensitive data and inspects user prompts to prevent unauthorized queries. Comprehensive dashboards and audits go a step further, allowing data governance personnel to conduct detailed compliance monitoring, tracking model access, sensitive data usage, and applied protections, ensuring robust oversight and security.

Interested in Learning More?

Subscribe today to stay informed and get regular updates from Privacera.