Understanding Manifest AI Risk

Manifest AI Risk offers full visibility into the AI models and data powering your software so you can govern AI use at every level. AI Risk continuously monitors both open-source and custom models to enforce AI governance policies, reduce risk, and ensure responsible development.

Overview

Manifest’s AI Risk module lets organizations set policies that make sense for them, giving visibility into their end-to-end AI supply chain risk.

What Manifest Researches for You

Manifest searches the internet and model hubs (e.g., Hugging Face) to find critical information you can use to build AI usage policy around regulatory frameworks and best practices. Manifest AI Risk accelerates diligence by analyzing READMEs and arXiv papers and consolidating the findings, giving you instant access to the following information on a per-model basis (see the sketch after this list):

  • Outdated Model Risk
  • Newly Released Model Risk
  • Country of Origin
  • Trusted Organizations/Suppliers
  • Datasets Used to Train the Model
  • Software Dependencies
  • License Usage Compliance
  • Research Papers for the Model
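
As a rough mental model, the consolidated findings for a single model can be pictured as one structured record. The sketch below is a hypothetical Python shape with illustrative field names and values; it is not Manifest’s actual API schema.

```python
from dataclasses import dataclass, field

# Hypothetical per-model record; field names and values are
# illustrative, not Manifest's actual API schema.
@dataclass
class ModelRiskReport:
    model_id: str                       # e.g., a Hugging Face repo ID
    released: str                       # first release date (ISO 8601)
    country_of_origin: str
    supplier: str
    training_datasets: list[str] = field(default_factory=list)
    dependencies: dict[str, str] = field(default_factory=dict)  # library -> version
    license: str = "unknown"
    papers: list[str] = field(default_factory=list)             # arXiv links

report = ModelRiskReport(
    model_id="meta-llama/Llama-3.1-8B",
    released="2024-07-23",
    country_of_origin="United States",
    supplier="Meta",
    training_datasets=["undisclosed"],
    dependencies={"transformers": "4.43.0", "torch": "2.3.1"},
    license="llama3.1",
    papers=["https://arxiv.org/abs/2407.21783"],
)
```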

How to Interpret Results

Unlike in software, where vulnerability databases are maintained by central authorities, there is no single way to determine the risk of an AI model.

Model Age: The age of an AI model can be indicative in two ways. Has the model been available long enough to be thoroughly tested before your organization adopts it? Conversely, using a model that is too old may leave your organization behind on critical upgrades.
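
One way to encode this as policy is an age window that flags models at both extremes. The thresholds below are illustrative defaults, not Manifest settings; tune them to your organization’s risk appetite.

```python
from datetime import date

# Illustrative thresholds, not Manifest defaults.
MIN_AGE_DAYS = 90        # long enough for community vetting
MAX_AGE_DAYS = 2 * 365   # old enough to be behind critical upgrades

def model_age_flags(released: date, today: date | None = None) -> list[str]:
    """Flag models that are either too new to be vetted or too old."""
    age = ((today or date.today()) - released).days
    flags = []
    if age < MIN_AGE_DAYS:
        flags.append("newly released: limited real-world testing")
    if age > MAX_AGE_DAYS:
        flags.append("outdated: may be behind critical upgrades")
    return flags

print(model_age_flags(date(2024, 7, 23)))
```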

Trusted Countries and Organizations: How AI models are trained and developed is often opaque and hidden from the end user. Knowing who made a model is a key data point for any organization. Is your data going to be used to train future models? Who can access your organization's or customers' data? Setting trusted countries and suppliers is one way to help minimize these risks.
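
A minimal sketch of such a policy is an allowlist check. The countries and suppliers below are examples of entries an organization might approve, not recommendations or Manifest defaults.

```python
# Example allowlists; populate these from your own governance policy.
TRUSTED_COUNTRIES = {"United States", "United Kingdom", "France"}
TRUSTED_SUPPLIERS = {"Meta", "Google", "Mistral AI"}

def passes_trust_policy(country: str, supplier: str) -> bool:
    """A model passes only if both its origin and supplier are approved."""
    return country in TRUSTED_COUNTRIES and supplier in TRUSTED_SUPPLIERS

print(passes_trust_policy("United States", "Meta"))  # True
print(passes_trust_policy("Unknown", "Acme Labs"))   # False
```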

Dependencies: Every AI model introduces software dependencies into your ecosystem. Knowing whether these libraries carry known vulnerabilities is crucial to keeping your applications secure. Manifest indicates which libraries each model requires and identifies the exact versions used before you build.
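
As a sketch, a dependency gate can compare a model’s pinned libraries against known-vulnerable versions. The advisory entries below are stand-ins; in practice this data would come from a vulnerability feed such as the OSV database.

```python
# Stand-in advisory data; real entries would come from a
# vulnerability feed (e.g., the OSV database).
KNOWN_VULNERABLE = {
    ("torch", "2.0.0"),         # illustrative entries only
    ("transformers", "4.30.0"),
}

def vulnerable_deps(dependencies: dict[str, str]) -> list[str]:
    """Return the model's dependencies that match a known advisory."""
    return [
        f"{lib}=={ver}"
        for lib, ver in dependencies.items()
        if (lib, ver) in KNOWN_VULNERABLE
    ]

print(vulnerable_deps({"torch": "2.0.0", "transformers": "4.43.0"}))
```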

Legal and Compliance: Review the model's license carefully to understand permitted uses. Some licenses allow full commercial and derivative use, while others restrict redistribution, fine-tuning, or deployment in sensitive domains.
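
A license gate can encode this review as policy. The allowlist and review list below are illustrative groupings, not legal guidance or Manifest defaults.

```python
# Illustrative license groupings; align these with your legal team.
ALLOWED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}
NEEDS_REVIEW = {"cc-by-nc-4.0", "llama3.1", "openrail"}

def license_decision(license_id: str) -> str:
    """Route a model's license to an approval outcome."""
    lid = license_id.lower()
    if lid in ALLOWED_LICENSES:
        return "approved"
    if lid in NEEDS_REVIEW:
        return "legal review required"
    return "blocked pending classification"

print(license_decision("Apache-2.0"))  # approved
print(license_decision("llama3.1"))    # legal review required
```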

Research and Traceability: When evaluating an open-weight AI model, first check whether the developers have disclosed the datasets used for training and fine-tuning; many releases omit this, limiting transparency and compliance verification. Next, look for accompanying research papers or technical reports (e.g., arXiv papers), which often detail the model's architecture, benchmarks, and limitations.
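
A simple traceability check can score a release on these two disclosures. The function below is a hypothetical sketch whose inputs mirror the ModelRiskReport example above.

```python
def traceability_gaps(training_datasets: list[str], papers: list[str]) -> list[str]:
    """Report which transparency disclosures are missing from a release."""
    gaps = []
    if not training_datasets or training_datasets == ["undisclosed"]:
        gaps.append("training data not disclosed")
    if not papers:
        gaps.append("no research paper or technical report linked")
    return gaps

print(traceability_gaps(["undisclosed"], []))
```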

Why You Should Care

Security leaders are accountable for making sure AI development is secure by design, compliant with evolving regulations, and aligned to the organization's risk appetite, so models don't introduce unacceptable business risk. That means setting policy and controls across the ML lifecycle: embedding privacy and threat modeling, validating third-party models and their supply chains, enforcing developers' use of approved AI models and curated training data, preventing use of unsanctioned models, monitoring usage and drift, and leading AI-specific incident response. Manifest AI Risk enables organizations to operationalize this by building a robust AI governance policy framework with developer-level guardrails and enforcement, turning governance into measurable, auditable practice.

Organizations should care about AI risk because models can introduce legal, ethical, and operational vulnerabilities that directly impact trust, compliance, and mission outcomes. Poorly governed AI systems can expose companies to data privacy violations, intellectual property disputes, or regulatory penalties. Beyond compliance, managing AI risk also protects brand reputation, ensures model reliability, and builds stakeholder confidence in responsible innovation.

User Roles & Permissions

Action                      Member   Admin
Analyze AI models              ✓       ✓
Add to inventory via API       ✓       ✓
Request AI model approval      ✓       ✓
Approve AI models                      ✓
Configure policies                     ✓