AI Risk Analysis & Governance
Configuring AI Risk Policies
AI Risk Policies enable automated monitoring and enforcement of compliance standards across all AI models in your organization's inventory. These policies help you proactively identify security, licensing, and operational risks before they impact your business.
For detailed guidance, see our Policy Framework for Safe Adoption of Open-Weight AI Models and Datasets white paper.
Setting Up Risk Policies
- Navigate to AI Risk Policies in the left-side navigation.
- Configure policies based on your organization's security and compliance needs.
- Set appropriate severity levels for each policy (Critical, High, Medium, Low).
- Click Save in the top right corner to save your changes.
AI Risk Policies let you configure automated alerts for:
- Model Updates: Alert when models haven't been updated within a given timeframe
- Model Age: Alert when models are too new for production use
- Country of Origin: Alert when models originate from high-risk countries
- Trusted Organizations: Alert when models come from non-approved organizations
- Dataset Requirements: Alert when models lack proper training data documentation
- License Compliance: Alert when models have problematic licenses or require review
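To make the time-based checks concrete, here is a minimal sketch of how the Model Updates and Model Age policies might evaluate a model's metadata. The threshold values and function names are illustrative assumptions, not the product's actual implementation; real thresholds are configured in the UI.

```python
from datetime import datetime, timedelta

# Hypothetical policy thresholds (assumptions for illustration only).
MAX_DAYS_SINCE_UPDATE = 365   # "Model Updates" policy: flag stale models
MIN_DAYS_SINCE_RELEASE = 30   # "Model Age" policy: flag models too new for production

def staleness_alerts(last_updated, released, now=None):
    """Return which time-based policies a model trips, given its metadata."""
    now = now or datetime.utcnow()
    alerts = []
    if now - last_updated > timedelta(days=MAX_DAYS_SINCE_UPDATE):
        alerts.append("model_updates")
    if now - released < timedelta(days=MIN_DAYS_SINCE_RELEASE):
        alerts.append("model_age")
    return alerts
```

Each tripped policy would then raise an alert at the severity you assigned it (Critical, High, Medium, or Low).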
Managing License Compliance
Automate license compliance by setting license alert statuses that flag problematic AI models before deployment.
- Navigate to Settings > Licenses page
- Review license list with current approval status:
- Approved (green badge) - License permitted for use
- Review (yellow badge) - License requires manual review
- Forbidden (red badge) - License not allowed
- Click the edit icon next to any license to update its status. Updated statuses immediately affect policy scanning across your model inventory.
Note: License compliance policies use the approval statuses to determine alert severity.
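The mapping from approval status to alert outcome can be sketched as follows. The status and finding names are assumptions mirroring the badges described above, not the product's actual API:

```python
# Hypothetical mapping from license approval status to policy finding;
# status names mirror the green/yellow/red badges described above.
LICENSE_ALERTS = {
    "approved": None,          # green badge: license permitted, no alert
    "review": "needs_review",  # yellow badge: flag for manual review
    "forbidden": "violation",  # red badge: license not allowed
}

def license_finding(license_status):
    """Return the finding a license compliance policy would raise."""
    # Unrecognized licenses conservatively default to manual review.
    return LICENSE_ALERTS.get(license_status, "needs_review")
```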
Analyzing AI Models
Generating a risk analysis for an open-weight AI model
- Navigate to AI Model Explorer.
- Optional (recommended): check "Enable deep search for datasets" next to the search bar to extract datasets from linked arXiv papers.
- In the search bar, enter a Hugging Face model name or URL.
- Click the result in the dropdown to start the model analysis. Once the analysis has completed, the model will appear in the Recent Models table below.
- Click the model name to view a detailed model card analysis.
Understanding Risk Scores
Model risk scores are determined by your configured AI policies. The highest severity finding sets the overall model risk:
- High = at least one violation finding
- Medium = at least one needs review finding
- Low = everything else
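The roll-up rule above can be sketched as a small function. The finding status names are assumptions for illustration; only the highest-severity rule matters:

```python
def overall_risk(findings):
    """Roll policy findings up to a model-level risk score.

    Each finding is assumed to carry a "status" of "violation",
    "needs_review", or "pass" (hypothetical names). The highest
    severity finding sets the overall model risk.
    """
    statuses = {f["status"] for f in findings}
    if "violation" in statuses:
        return "High"
    if "needs_review" in statuses:
        return "Medium"
    return "Low"
```

For example, a model with nine passing checks and one violation still scores High.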
Managing Your AI Inventory
The model inventory is a list of models that have been approved for use in your organization. Members may request to add new models to the inventory at any time. Only admins can approve which models may be officially added.
Requesting model approval & adding to inventory
- Analyze a model in AI Model Explorer, and click on a model name to view the full model analysis.
- Click "Request Approval" in the top right corner.
An administrator must approve the model before it is added to the organizational inventory.
Approving Models (Admins only)
- Navigate to Model Inventory and click the Model Requests tab.
- Click "Approve" for models you want to add to the Model Inventory, or "Reject" for models that are not approved for use.
Removing Models from Inventory
- Go to Model Inventory and click the Approved Models tab.
- Click the **⠇** icon next to the name of the model you wish to remove.
- Click "Remove model" from the menu options.