Virtual Networks for Azure AI services
Azure AI services offer a layered security model that lets you restrict access to your Azure AI services accounts to specific networks. When network rules are in place, only applications that request data over the designated networks can reach the account. You can limit access to your resources with request filtering, which permits requests only from specific IP addresses, IP ranges, or a list of subnets in Azure Virtual Networks.
Applications accessing an Azure AI services resource while network rules are active still need authorization, either with Azure Active Directory (Azure AD) credentials or a valid API key. Note that enabling firewall rules for your Azure AI services account blocks incoming data requests by default. To be allowed through, a request must originate either from a service operating within an Azure Virtual Network on the allowed subnet list of the target Azure AI services account or from an approved list of IP addresses.
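For illustration, the two credential styles translate into two different request headers; the values below are placeholders, not real credentials:

```python
def api_key_headers(key: str) -> dict:
    """Header style 1: a static API key in the 'api-key' header."""
    return {"api-key": key}

def bearer_headers(token: str) -> dict:
    """Header style 2: an Azure AD bearer token in the Authorization header."""
    return {"Authorization": f"Bearer {token}"}

# Placeholder values only:
print(api_key_headers("0123abcd"))    # {'api-key': '0123abcd'}
print(bearer_headers("eyJ0eXAi..."))  # {'Authorization': 'Bearer eyJ0eXAi...'}
```

Either header authorizes the call, but only after the request has already passed the network rules described above.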
To enhance the security of your Azure AI services resource, first set up a rule to deny access to all networks, including internet traffic. Then add rules that grant access to specific virtual networks, establishing a secure network boundary for your applications. You can also add rules that allow traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients.
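Conceptually, the firewall evaluates each request's source address against the allow list, with deny as the default action. A stand-alone illustration using Python's `ipaddress` module (the ranges below are made up):

```python
import ipaddress

# Hypothetical allow list: one public IP range and one VNet subnet.
ALLOWED = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("10.1.0.0/16"),
]

def is_allowed(source_ip: str) -> bool:
    """Default action is Deny; a request passes only if its source
    address falls inside one of the allowed ranges."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("203.0.113.42"))  # True  (inside the allowed public range)
print(is_allowed("198.51.100.7"))  # False (denied by default)
```

This is only a model of the evaluation logic; the actual rules are configured on the Azure AI services account, not in application code.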
Azure AI services also support private endpoints, which allow clients on a virtual network to securely access data over Azure Private Link. This private link ensures that network traffic between the clients and the resource travels only through the virtual network and a private link on the Microsoft Azure backbone network, eliminating any exposure to the public internet.
Azure OpenAI Service encryption of data at rest
Azure OpenAI ensures that your data is encrypted when it’s stored in the cloud. This encryption safeguards your data and helps you meet your organization’s security and compliance requirements. This article covers how Azure OpenAI manages the encryption of data at rest, focusing on training data and fine-tuned models.
Azure AI services encryption: Azure OpenAI, being a part of Azure AI services, encrypts and decrypts data using FIPS 140-2 compliant 256-bit AES encryption. This process is transparent, meaning that the encryption and access are managed for you. As a result, your data remains secure by default without any need for code modifications.
Encryption key management: By default, your data is encrypted with Microsoft-managed keys. There’s also the option to protect your resource with your own keys, known as customer-managed keys (CMK). CMK gives you more flexibility to create, rotate, disable, and revoke access controls, and allows auditing of the encryption keys that safeguard your data.
Customer-managed keys with Azure Key Vault: CMK, also termed as Bring Your Own Key (BYOK), offers enhanced flexibility. These keys are stored in Azure Key Vault. You can either create your keys and store them in the vault or utilize Azure Key Vault APIs for key generation. It’s essential that the Azure AI services resource and the key vault are in the same region and Azure Active Directory (Azure AD) tenant.
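The mechanics of CMK can be illustrated with a toy envelope-encryption sketch: a data encryption key protects the data, and the customer-managed key only wraps that data key. The XOR "wrapping" below is purely illustrative and not real cryptography (Key Vault wraps keys with algorithms such as RSA):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Toy 'wrap' operation; stands in for a real key-wrap algorithm."""
    return bytes(x ^ y for x, y in zip(a, b))

# A 32-byte data encryption key protects the stored data; the
# customer-managed key (CMK), held in Key Vault, protects the data key.
data_key = secrets.token_bytes(32)
cmk_v1 = secrets.token_bytes(32)          # CMK version 1

wrapped = xor_bytes(data_key, cmk_v1)     # stored alongside the data

# Rotation: unwrap with the old CMK version, rewrap with the new one.
cmk_v2 = secrets.token_bytes(32)
rewrapped = xor_bytes(xor_bytes(wrapped, cmk_v1), cmk_v2)

assert xor_bytes(rewrapped, cmk_v2) == data_key  # data key unchanged
```

A useful consequence of this design: rotating the CMK only rewraps the small data key; the encrypted data itself does not need to be re-encrypted.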
Enable customer-managed keys: To activate customer-managed keys, you need to navigate to your Azure AI services resource, select Encryption, and then choose Customer Managed Keys. After enabling, you can specify a key to associate with the Azure AI services resource.
Rotate and revoke customer-managed keys: You can rotate a customer-managed key in Key Vault based on your compliance policies. Revoking access to an active customer-managed key can impact various functionalities like downloading training data, fine-tuning new models, and deploying them.
Data deletion: Azure OpenAI allows users to delete their training data, fine-tuned models, and deployments. The data is stored in Azure Storage, and users can delete files using the DELETE API operation.
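As a sketch of the shape of such a call (the resource name, file ID, and api-version below are placeholders; check the current API reference before relying on them):

```python
def delete_file_request(endpoint: str, file_id: str, api_version: str) -> tuple[str, str]:
    """Build the method and URL for deleting an uploaded training file."""
    url = f"{endpoint}/openai/files/{file_id}?api-version={api_version}"
    return "DELETE", url

method, url = delete_file_request(
    "https://myresource.openai.azure.com",  # placeholder resource
    "file-abc123",                          # placeholder file ID
    "2023-05-15",                           # check the current api-version
)
print(method, url)
# A real call would add an api-key or Authorization header, e.g.:
# requests.delete(url, headers={"api-key": "<key>"})
```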
Disable customer-managed keys: If you decide to disable customer-managed keys, your Azure AI services resource will revert to being encrypted with Microsoft-managed keys.
Azure OpenAI Service with managed identities
Azure OpenAI provides a method to authenticate to your OpenAI resource using Azure Active Directory (Azure AD). This document offers guidance on how to use Azure CLI for role assignments and obtain a bearer token to access the OpenAI resource. The primary focus is on Azure role-based access control (Azure RBAC) for more intricate security scenarios.
Prerequisites: To proceed, you need an Azure subscription, access to the Azure OpenAI Service in the desired Azure subscription, custom subdomain names for features like Azure AD authentication, Azure CLI, and specific Python libraries.
Azure CLI Sign-in: Users can sign into the Azure CLI using the az login command. This sign-in may need to be repeated if the session remains idle for an extended period.
Role Assignment: Users can assign themselves to the “Cognitive Services User” role, allowing them to access specific Azure AI services resources.
Acquire Azure AD Access Token: Access tokens, which expire in an hour, can be obtained to authorize API calls. The token is used to set the Authorization header value for the API call.
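The expiry handling can be sketched as follows; the scope string is the one commonly used for Azure AI services, and the credential call assumes a signed-in Azure CLI session (azure-identity is an extra dependency, so its import is deferred to the guarded block):

```python
import time

SCOPE = "https://cognitiveservices.azure.com/.default"

def is_expired(expires_on, now=None, skew=60):
    """Treat a token as expired `skew` seconds early to avoid using a
    token that lapses mid-request."""
    current = now if now is not None else time.time()
    return current >= expires_on - skew

if __name__ == "__main__":
    # Requires `pip install azure-identity` and a prior `az login`.
    from azure.identity import AzureCliCredential
    token = AzureCliCredential().get_token(SCOPE)  # expires in about an hour
    headers = {"Authorization": f"Bearer {token.token}"}
    print(is_expired(token.expires_on))  # False for a fresh token
```

Long-running code should check `expires_on` and request a fresh token rather than caching one indefinitely.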
Authorize Access with Managed Identities: Azure OpenAI supports Azure AD authentication with managed identities for Azure resources. This feature allows applications running on Azure virtual machines (VMs), function apps, and other services to authorize access to Azure AI services resources using Azure AD credentials. This method eliminates the need to store credentials with cloud-based applications.
Enable Managed Identities on a VM: Before authorizing access to Azure AI services resources from a VM using managed identities, it’s essential to enable managed identities for Azure resources on the VM. Various methods, including the Azure portal, Azure PowerShell, Azure CLI, and Azure Resource Manager templates, can be used for this purpose.
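Once a managed identity is enabled, application code on the VM can request tokens without any stored secret. A rough sketch, assuming the azure-identity package and a placeholder endpoint (the `get_token` call only succeeds on an Azure resource with a managed identity):

```python
SCOPE = "https://cognitiveservices.azure.com/.default"

def build_request(endpoint: str, token: str) -> dict:
    """Assemble the URL and headers for a call to an Azure AI services
    resource; note that no secret is stored in the application itself."""
    return {"url": endpoint, "headers": {"Authorization": f"Bearer {token}"}}

if __name__ == "__main__":
    # Requires `pip install azure-identity`; succeeds only when running on
    # an Azure resource (VM, function app, ...) whose managed identity has
    # been enabled and granted a role on the target resource.
    from azure.identity import ManagedIdentityCredential
    token = ManagedIdentityCredential().get_token(SCOPE)
    request = build_request("https://myresource.openai.azure.com", token.token)  # placeholder endpoint
    print(request["url"])
```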
FAQ – Azure OpenAI Service
Q: What is Azure OpenAI Security?
A: Azure OpenAI Security refers to the set of security measures and features provided by Microsoft Azure for its OpenAI service. It ensures the confidentiality, integrity, and availability of data and resources used in OpenAI applications.
Q: What is network security in the context of Azure OpenAI?
A: Network security in Azure OpenAI refers to the protection of the network infrastructure and communication channels used by the OpenAI service. It includes measures like firewalls, access control, and encryption to safeguard against unauthorized access and data breaches.
Q: How is Microsoft involved in Azure OpenAI Security?
A: Microsoft is the provider of Azure cloud services, including Azure OpenAI. They are responsible for developing and maintaining the security features and updates for the service, ensuring that it meets industry standards.
Q: What is Azure Key Vault and its role in Azure OpenAI Security?
A: Azure Key Vault is a service provided by Microsoft Azure that enables secure storage and management of cryptographic keys, secrets, and certificates. It plays a crucial role in Azure OpenAI Security by allowing secure access to sensitive information and protecting data encryption keys.
Q: What is cloud security in the context of Azure OpenAI?
A: Cloud security in Azure OpenAI refers to the measures and practices implemented to protect the OpenAI service and its underlying cloud infrastructure. It includes identity and access management, threat detection, data encryption, and compliance with regulatory standards.
Q: What is a security baseline in Azure OpenAI?
A: A security baseline in Azure OpenAI refers to a predefined set of security configurations and best practices recommended by Microsoft. It helps organizations achieve a minimum level of security for their OpenAI applications and ensures compliance with industry standards.
Q: How does Azure OpenAI protect customer data?
A: Azure OpenAI provides several security features to protect customer data. These include data encryption, access control, and privacy controls. Microsoft follows strict data protection standards and regulations to safeguard customer data used in the OpenAI service.
Q: What are the best practices for Azure OpenAI Security?
A: Some best practices for Azure OpenAI Security include regularly applying security updates, implementing role-based access control, encrypting data at rest and in transit, using Azure Key Vault for secrets management, and monitoring for security incidents.
Q: How does Azure OpenAI handle sensitive data?
A: Azure OpenAI follows strict guidelines and industry best practices to handle sensitive data securely. It provides features like data encryption, access controls, and privacy controls to protect sensitive information used in OpenAI applications.
Q: Is customer data used by Azure OpenAI?
A: Azure OpenAI doesn’t use customer data without explicit permission or proper authorization. Customer data used in the OpenAI service remains under the control and ownership of the customer.