AI Governance & Ethics — Cheatsheet
24 Feb, 2026
Quick reference based on the Alison course "AI Governance and Ethics", plus some extra follow-up reading
Five Pillars of Responsible AI
| Pillar | Core Question |
| --- | --- |
| Fairness | Does the system produce equitable outcomes across demographics? |
| Transparency | Can stakeholders understand how decisions are made? |
| Accountability | Is there a clear owner for AI outcomes and harms? |
| Privacy | Is personal data protected throughout the AI lifecycle? |
| Societal Impact | Are broader economic and social effects being managed? |
1. Bias & Fairness
Where Bias Comes From
| Source | Example |
| --- | --- |
| Training data | Historical hiring data encoding past discrimination |
| Model design | Word embeddings reproducing stereotypes ("doctor" ↔ male) |
| Deployment context | Predictive policing reinforcing over-policing of minority areas |
Bias Mitigation Techniques
Data-level — Re-sampling, re-weighting, expanding data collection to underrepresented groups
Model-level — Fairness-aware algorithms, adversarial debiasing, fairness constraints in objective functions
Process-level — Regular audits of data and outputs, diverse development teams, independent oversight
Organizational — Ethics boards, cross-disciplinary collaboration (CS + ethics + law + sociology)
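The data-level re-weighting idea above can be sketched in a few lines. This is a minimal illustration, not a production fairness toolkit; the inverse-group-frequency weighting scheme and the toy group labels are assumptions for the example.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to the
    frequency of its demographic group, so that underrepresented
    groups contribute equally to the total training loss."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count[g]): each group's weights sum to n/k
    return [n / (k * counts[g]) for g in groups]

# Example: group "b" is underrepresented 4:1, so it gets 4x the weight
weights = inverse_frequency_weights(["a", "a", "a", "a", "b"])
# weights -> [0.625, 0.625, 0.625, 0.625, 2.5]
```

These weights can then be passed to any learner that accepts per-sample weights; the total weight still equals the number of samples, so the effective dataset size is unchanged.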
Key Evidence
| Study | Finding |
| --- | --- |
| Buolamwini & Gebru, "Gender Shades" (2018) | Facial recognition error rates up to 34.7% for darker-skinned women vs 0.8% for lighter-skinned men |
| ProPublica on COMPAS (2016) | Black defendants ~2x more likely to be falsely flagged as high recidivism risk |
| Obermeyer et al. (2019) | Healthcare algorithm underestimated illness in Black patients by using cost as a proxy for health |
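Disparities like the COMPAS finding can be surfaced with a per-group false-positive-rate audit. A minimal sketch with made-up labels and predictions; real audits would also compare false-negative rates, calibration, and base rates:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    neg = sum(1 for t in y_true if t == 0)
    return fp / neg if neg else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """Compute the false positive rate separately per demographic group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

# Toy data: all true negatives; group "B" is wrongly flagged more often
rates = fpr_by_group(y_true=[0, 0, 0, 0],
                     y_pred=[1, 0, 1, 1],
                     groups=["A", "A", "B", "B"])
# rates -> {"A": 0.5, "B": 1.0}
```

A large gap between groups is exactly the kind of disparity ProPublica measured on COMPAS.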
2. Transparency & Explainability
The Problem
Deep learning models act as "black boxes" — high performance but low interpretability, undermining trust in high-stakes domains (healthcare, criminal justice, finance).
Solutions
| Tool / Approach | What It Does |
| --- | --- |
| LIME | Approximates any model locally with an interpretable model to explain individual predictions |
| SHAP | Uses Shapley values (game theory) to attribute each feature's contribution to a prediction |
| Interpretable algorithms | Simpler models (decision trees, linear models) where appropriate |
| Explainability by design | Build explanation capabilities into the system from the start |
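SHAP's core idea — Shapley values from cooperative game theory — can be illustrated from scratch on a tiny model. The brute-force coalition enumeration below is exponential in the number of features and is only for intuition (real SHAP uses efficient approximations); the linear model and zero baseline are made up for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for one prediction. Features outside
    a coalition are replaced by their baseline value."""
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for s in combinations(others, size):
                # Shapley weight: |S|! (n-|S|-1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# For a linear model, each feature's Shapley value is coef * (x - baseline)
model = lambda z: 2 * z[0] + 3 * z[1] + 1
phi = shapley_values(model, x=[1.0, 2.0], baseline=[0.0, 0.0])
# phi -> [2.0, 6.0]
```

The attributions sum to the prediction minus the baseline prediction — the "local accuracy" property that makes Shapley-based explanations auditable.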
Why It Matters
GDPR gives individuals a right to meaningful information about the logic of automated decisions (commonly read as a "right to explanation")
Transparency enables auditing, accountability, and stakeholder trust
Regulators increasingly mandate explainability
3. Accountability
Key Principles
Clear responsibility — Define who owns AI outcomes (design, deployment, operation)
Human oversight — Maintain ability to intervene and override AI decisions
Auditability — Systems must be inspectable; keep logs of decisions and reasoning
Governance frameworks — Regular audits, impact assessments, continuous monitoring
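The auditability principle above can be made concrete with a minimal decision-log record: enough context to reconstruct what the system saw, what it decided, and whether a human intervened. The field names here are illustrative assumptions, not a standard schema.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable AI decision."""
    model_version: str
    inputs: dict
    output: str
    confidence: float
    human_override: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record, sink):
    """Append the record as one JSON line to an audit sink
    (a list here; an append-only store in practice)."""
    sink.append(json.dumps(asdict(record)))

audit_log = []
rec = DecisionRecord("credit-risk-2.1", {"income": 52000}, "approve", 0.93)
log_decision(rec, audit_log)
```

Append-only, versioned logs like this are what make later audits and override reviews possible at all.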
Challenges
Highly autonomous systems (self-driving, automated trading) blur responsibility lines
Probabilistic/non-deterministic outputs make tracing causation harder
Rapid AI evolution outpaces legal frameworks
4. Data Privacy & Security
Privacy Measures
| Measure | Description |
| --- | --- |
| Privacy by Design | Embed privacy protections into every stage of AI development |
| Privacy Impact Assessments | Identify data risks before deployment |
| Differential Privacy | Statistical guarantees that individual records don't significantly affect outputs |
| Anonymization | Remove personally identifiable information from datasets |
| Encryption + Access Controls | Protect data at rest and in transit |
| Informed Consent | Ensure individuals understand and agree to how their data is used |
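Differential privacy's workhorse is the Laplace mechanism: add noise scaled to sensitivity/ε to a query result. A toy sketch for a counting query (sensitivity 1); a hardened implementation would also handle floating-point attacks and privacy budgets.

```python
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale): the difference of two i.i.d.
    exponentials with mean `scale` is Laplace-distributed."""
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def dp_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """epsilon-DP noisy count. Smaller epsilon -> more noise and
    stronger privacy; a counting query has sensitivity 1."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

Each released count is randomized, so no individual record can shift the output distribution by more than a factor of e^ε — the formal guarantee behind the "individual records don't significantly affect outputs" row above.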
Security Priorities
Defend against breaches, hacking, and adversarial attacks
Maintain continuous patching and vulnerability assessments
Align with ISO standards and cybersecurity best practices
Partner with cybersecurity specialists
Cultural Practices
Train all staff on data protection and AI ethics
Foster a privacy- and security-first culture
Establish whistleblower channels for reporting unethical behavior
5. Societal Impact
Job displacement — McKinsey estimates up to 375M workers may need to switch occupations by 2030
Mitigation — Reskilling/upskilling programs, social safety nets, policy collaboration
Equitable distribution — Ensure AI benefits are shared broadly, not concentrated
Democratic integrity — Guard against AI-enabled manipulation (e.g. Cambridge Analytica)
Regulatory Landscape
| Regulation / Framework | Scope |
| --- | --- |
| GDPR | EU data protection: consent, right to explanation, data breach accountability |
| CCPA | California: consumer rights to know, delete, opt-out of data sales |
| EU AI Act | Risk-based AI regulation: prohibited, high-risk, limited-risk, minimal-risk tiers |
| IEEE Ethically Aligned Design | 8 principles + P7000 standards series for autonomous/intelligent systems |
| Asilomar AI Principles | 23 principles on research, ethics/values, and long-term AI issues |
| EU Ethics Guidelines for Trustworthy AI | Trustworthy AI = lawful + ethical + robust; 7 key requirements |
| OECD AI Principles | 5 values-based principles; adopted by 47 countries, endorsed by G20 |
| ISO/IEC 42001:2023 | First international standard for AI management systems |
| Google's AI Principles | 7 objectives: beneficial, unbiased, safe, accountable, private, rigorous, available |
Case Study Lessons
HealthCore (Healthcare AI)
Challenge: Predictive analytics on sensitive patient data
Approach: Privacy impact assessments → anonymization + encryption → explainable AI → independent oversight → staff training
Lesson: Multi-tiered governance (consent + transparency + security + cooperation) enables innovation while protecting patients
Challenge: AI diagnostic tool underperformed for minority patients
Approach: Expanded data collection → fairness-aware re-weighting → adversarial debiasing → clinician + ethicist collaboration → ongoing audits
Lesson: Bias mitigation requires coordinated technical, organizational, regulatory, and educational interventions — fixing data alone is not enough
TechNova (Regulatory Compliance)
Challenge: Navigating GDPR/CCPA while integrating AI at scale
Approach: Cross-functional team (data science + legal + compliance) → anonymization + consent → explainable models → IEEE-aligned audits → human override mechanisms
Lesson: Proactive compliance builds trust; embed ethics into governance from day one, not as an afterthought
Cambridge Analytica (Data Misuse)
What happened: Harvested millions of Facebook users' data without consent for political ad targeting
Lesson: Informed consent is non-negotiable; robust data governance and transparency prevent manipulation and preserve democratic integrity
PredPol (Predictive Policing)
What happened: Algorithm perpetuated racial bias by training on historically biased crime data
Lesson: Biased data in → biased predictions out; independent oversight and transparent algorithmic processes are essential
Watson for Oncology (Healthcare AI Failure)
What happened: Provided unsafe treatment recommendations based on hypothetical rather than real patient data
Lesson: Rigorous testing and validation before deployment; continuous monitoring post-deployment; AI must complement — not replace — human judgment
Organizational Action Checklist
Data — Audit training data for representativeness and historical bias
Models — Apply fairness-aware algorithms; use LIME/SHAP for explainability
Governance — Establish an ethics board or committee with cross-disciplinary membership
Privacy — Implement privacy by design, impact assessments, anonymization, and encryption
Human oversight — Design systems that allow human intervention and override
Audits — Schedule regular audits of AI performance, fairness, and compliance
Compliance — Map to GDPR, CCPA, EU AI Act, and relevant domain standards
Training — Educate all staff on AI ethics, data protection, and bias awareness
Diversity — Build diverse, inclusive development teams
Collaboration — Engage with policymakers, academia, and cross-industry forums (FAccT, IEEE, OECD)
Monitoring — Continuously monitor deployed systems for drift, bias, and security threats
Incident response — Maintain whistleblower channels and breach response procedures