Learn how to craft disclosure-centric AI safety policies that emphasize reporting, whistleblower protections, and public accountability.
Not all AI systems demand the same level of reporting. Build a risk classification model so that obligations scale with risk rather than applying uniformly.

| Tier | Characteristics | Example Obligations |
|---|---|---|
| Tier 1 | Low-risk tools with limited autonomy | Basic registration, annual safety statement |
| Tier 2 | Systems affecting individuals or critical processes | Incident reporting, whistleblower program certification |
| Tier 3 | High-impact or frontier AI models | Quarterly safety audits, real-time incident reporting, independent oversight board |

Use factors such as scale of deployment, potential for societal harm, autonomy level, and sensitivity of data processed. Keep classification adaptive: organizations should be able to petition to move tiers when their risk profiles change.
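The sketch below shows one way such a classification might be wired up. It is a minimal illustration, not a prescribed method: the ordinal factor scores, the escalation rule, and the thresholds are all assumptions chosen for clarity, and a real policy would define them through rulemaking and consultation.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    TIER_1 = 1
    TIER_2 = 2
    TIER_3 = 3


@dataclass
class RiskProfile:
    # Hypothetical 0-3 ordinal scores for the factors named above.
    deployment_scale: int   # 0 = internal pilot .. 3 = population-scale
    societal_harm: int      # 0 = negligible .. 3 = severe or irreversible
    autonomy_level: int     # 0 = human-in-the-loop .. 3 = fully autonomous
    data_sensitivity: int   # 0 = public data .. 3 = special-category data


# Obligations mirror the example table; a real regime would enumerate them in rules.
OBLIGATIONS = {
    Tier.TIER_1: ["basic registration", "annual safety statement"],
    Tier.TIER_2: ["incident reporting", "whistleblower program certification"],
    Tier.TIER_3: ["quarterly safety audits", "real-time incident reporting",
                  "independent oversight board"],
}


def classify(profile: RiskProfile) -> Tier:
    """Map factor scores to a tier; any single severe factor can escalate.

    The total-score and worst-factor thresholds here are illustrative assumptions.
    """
    scores = [profile.deployment_scale, profile.societal_harm,
              profile.autonomy_level, profile.data_sensitivity]
    total = sum(scores)
    worst = max(scores)
    if worst == 3 or total >= 9:
        return Tier.TIER_3
    if worst == 2 or total >= 5:
        return Tier.TIER_2
    return Tier.TIER_1


def obligations_for(profile: RiskProfile) -> list[str]:
    """Return the example obligations attached to the classified tier."""
    return OBLIGATIONS[classify(profile)]


if __name__ == "__main__":
    # Example: a mid-scale screening tool processing sensitive personal data.
    example = RiskProfile(deployment_scale=2, societal_harm=2,
                          autonomy_level=1, data_sensitivity=2)
    print(classify(example).name, obligations_for(example))
```

Keeping the factor scores and thresholds in one place like this also makes the "adaptive" requirement concrete: a successful petition to move tiers amounts to re-scoring the profile and re-running the classification, with the change logged for public accountability.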