Safe AI Research Commitment

Building an inclusive AI research community that prioritizes safety, ethics, and societal benefit

Safe and Responsible AI Plan

CAAI Workshop – NAIRR Pilot Program

University of Kentucky Center for Applied AI

Published in compliance with grant requirements

Executive Summary

The University of Kentucky’s Center for Applied AI (CAAI) is committed to conducting our NAIRR Pilot-funded workshops in alignment with the highest standards for safe, secure, and trustworthy AI research and education. This Safe and Responsible AI (SAI) plan outlines our comprehensive framework for responsible AI development, inclusive participation, and ethical research practices.

Our approach demonstrates leadership in responsible AI while fostering diverse participation and advancing safe AI research methodologies that serve the public good.

1. Workshop Overview and Alignment with NAIRR Goals

Workshop Title: [Insert Workshop Title]
Duration: [Insert Duration]
Participants: [Expected Number] researchers, educators, and students
NAIRR Focus Areas: Safe, Secure, and Trustworthy AI; AI Education and Training

Alignment with NAIRR Objectives:

  • Spur Innovation: Introduce cutting-edge responsible AI techniques and methodologies
  • Increase Diversity of Talent: Recruit participants from underrepresented groups and underserved institutions
  • Improve Capacity: Build AI research capabilities across diverse communities
  • Advance Trustworthy AI: Emphasize ethical, safe, and responsible AI development practices

2. Responsible AI Framework

2.1 Core Principles

Our workshop operates under the following responsible AI principles:

Fairness and Non-Discrimination:

  • Identify and mitigate algorithmic bias in AI systems
  • Ensure equitable representation in training data and model development
  • Address disparate impacts on protected and vulnerable populations

Transparency and Explainability:

  • Promote development of interpretable AI systems
  • Encourage documentation of model limitations and capabilities
  • Foster understanding of AI decision-making processes

Privacy and Data Protection:

  • Implement privacy-preserving techniques in AI research
  • Ensure compliance with data protection regulations
  • Protect sensitive information in datasets and models

Safety and Reliability:

  • Emphasize robust testing and validation of AI systems
  • Address potential failure modes and safety risks
  • Promote reliable performance across diverse contexts

Accountability and Governance:

  • Establish clear responsibility for AI system outcomes
  • Implement appropriate oversight and review processes
  • Ensure compliance with legal and ethical standards

2.2 Risk Assessment and Mitigation

Identified Risks:

Technical Risks:

  • Model bias and unfair outcomes
  • Privacy leakage from training data
  • Adversarial attacks and security vulnerabilities
  • Lack of model interpretability

Societal Risks:

  • Reinforcement of existing societal biases
  • Misuse of AI technologies
  • Economic displacement concerns
  • Erosion of human agency

Mitigation Strategies:

  • Implement bias detection and mitigation techniques
  • Use differential privacy and federated learning approaches
  • Conduct adversarial robustness testing
  • Develop explainable AI methodologies
  • Establish ethical review processes for research outputs
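As one concrete illustration of the differential privacy strategy above, workshop staff could release aggregate statistics (e.g., participant counts) through the Laplace mechanism rather than publishing raw values. The sketch below is a minimal, illustrative implementation; the function name and the specific epsilon value are our own choices, not prescribed by NAIRR.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return a differentially private estimate of a numeric query result.

    sensitivity: the most one individual's data can change the query result
                 (1 for a simple count).
    epsilon: the privacy budget; smaller values mean stronger privacy
             and more noise.
    """
    scale = sensitivity / epsilon
    # The difference of two i.i.d. Exponential(1/scale) draws is
    # Laplace(0, scale), which avoids log-of-zero edge cases.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Illustrative release: a count of 42 participants, sensitivity 1, epsilon 0.5.
private_count = laplace_mechanism(42, sensitivity=1, epsilon=0.5)
```

Because the noise is calibrated to sensitivity/epsilon, the released count is accurate on average while bounding what any single participant's presence can reveal.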

3. Participant Safety and Inclusion

3.1 Inclusive Participation Framework

Recruitment Strategy:

  • Partner with Minority-Serving Institutions (MSIs)
  • Engage institutions in EPSCoR jurisdictions and other underserved communities
  • Provide accessibility accommodations for participants with disabilities
  • Offer financial support for underrepresented participants

Accessibility Measures:

  • ADA-compliant venue and materials
  • Multiple format options for content delivery
  • Real-time captioning and interpretation services
  • Flexible participation options (in-person/virtual hybrid)

3.2 Ethical Guidelines for Participants

Research Ethics:

  • IRB approval for any research involving human subjects
  • Informed consent for data collection and use
  • Protection of participant privacy and confidentiality
  • Right to withdraw from research activities

Professional Conduct:

  • Adherence to professional codes of ethics
  • Respectful and inclusive behavior expectations
  • Prohibition of harassment and discrimination
  • Clear reporting mechanisms for concerns

4. Data Governance and Security

4.1 Data Management Plan

Data Types:

  • Workshop materials and presentations
  • Participant research data and models
  • Evaluation and assessment data
  • Collaboration outputs and documentation

Data Security Measures:

  • Encrypted storage and transmission
  • Access controls and authentication
  • Regular security audits and updates
  • Incident response procedures

Data Sharing and Privacy:

  • Clear data use agreements
  • Anonymization and de-identification procedures
  • Compliance with FERPA for student data
  • Respect for proprietary and confidential information
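The de-identification procedures above can be illustrated with keyed pseudonymization: direct identifiers (names, emails) are replaced by an HMAC digest before data leaves the workshop environment. The salt value and field names below are placeholders for illustration only; in practice the salt would live in a key vault, separate from the data, per the applicable data use agreement.

```python
import hashlib
import hmac

# Placeholder only: a real deployment stores this secret outside the dataset
# and rotates it per the governing data use agreement.
SECRET_SALT = b"example-workshop-salt"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash.

    HMAC (rather than a bare hash) prevents dictionary attacks on
    low-entropy identifiers by anyone who lacks the salt.
    """
    digest = hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# A record is stripped of its direct identifier before sharing.
record = {"email": "student@example.edu", "score": 0.87}
shared = {"participant_id": pseudonymize(record["email"]), "score": record["score"]}
```

The same identifier always maps to the same pseudonym, so records can still be linked across workshop datasets without exposing who they belong to.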

4.2 NAIRR Resource Usage Guidelines

Computational Resource Use:

  • Responsible allocation and usage of NAIRR computing resources
  • Compliance with acceptable use policies
  • Monitoring and reporting of resource utilization
  • Respect for shared infrastructure limitations

Open Science Commitments:

  • Publication of research results in open literature
  • Sharing of developed tools and methodologies
  • Contribution to public AI knowledge base
  • Compliance with federal open access requirements

5. AI Safety Research and Education

5.1 Safety-Focused Research Activities

Workshop Modules:

  • AI bias detection and mitigation techniques
  • Adversarial robustness and security testing
  • Privacy-preserving machine learning methods
  • Explainable AI and interpretability tools
  • AI governance and risk management frameworks

Hands-on Exercises:

  • Bias auditing of AI models
  • Implementation of differential privacy
  • Adversarial attack and defense demonstrations
  • Explainability tool development
  • Safety case development for AI systems
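The bias-auditing exercise above could start from a simple fairness metric such as the demographic parity gap: the difference in positive-prediction rates across groups. The sketch below is a self-contained toy version (the data and group labels are invented for illustration); real audits would use established toolkits and multiple metrics.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of group labels, aligned with predictions.
    """
    counts = {}
    for pred, grp in zip(predictions, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + pred)
    # Positive-prediction ("selection") rate per group.
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Toy audit: group "a" is approved 3/4 of the time, group "b" only 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap near zero suggests the model selects at similar rates across groups; a large gap flags the model for the mitigation techniques covered in the workshop modules.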

5.2 Educational Objectives

Learning Outcomes:

  • Understanding of responsible AI principles and practices
  • Technical skills in AI safety and security methods
  • Awareness of societal implications of AI technologies
  • Ability to implement ethical AI development processes
  • Knowledge of regulatory and governance frameworks

Assessment Methods:

  • Pre- and post-workshop knowledge assessments
  • Project-based evaluations
  • Peer review and feedback processes
  • Long-term follow-up on implementation

6. Compliance and Oversight

6.1 Regulatory Compliance

Federal Requirements:

  • NIST AI Risk Management Framework compliance
  • Export control regulations (ITAR/EAR) adherence
  • Research security policy compliance
  • Federal funding regulations and reporting

Institutional Requirements:

  • University of Kentucky IRB approval
  • CAAI safety and ethics review
  • Legal and compliance office coordination
  • Risk management assessment

6.2 Monitoring and Evaluation

Continuous Monitoring:

  • Real-time safety and ethical review
  • Participant feedback and concern reporting
  • Technical safety assessment of developed systems
  • Compliance audit and verification

Evaluation Metrics:

  • Participant safety and satisfaction measures
  • Learning outcome achievement rates
  • Diversity and inclusion effectiveness
  • Research output quality and impact
  • Long-term community building success

7. Crisis Management and Response

7.1 Incident Response Plan

Safety Incidents:

  • Immediate threat assessment and response
  • Participant safety and support measures
  • Communication with relevant authorities
  • Documentation and reporting procedures

Technical Incidents:

  • Security breach response protocols
  • Data protection and recovery measures
  • System integrity verification
  • Stakeholder notification procedures

Ethical Concerns:

  • Ethics review committee consultation
  • Participant protection measures
  • Corrective action implementation
  • Transparent communication and resolution

7.2 Emergency Contacts

For workshop-related safety concerns or incidents:

  • Workshop Director: [Contact via CAAI main office]
  • CAAI Director: [Contact via CAAI main office]
  • UK Emergency Services: 911
  • UK Campus Safety: (859) 257-1616

Non-Emergency Support:

  • UK Counseling Services: (859) 257-8701
  • UK Office of Institutional Equity: (859) 257-8927
  • CAAI General Inquiries: [Website contact form]

8. Sustainability and Long-term Impact

8.1 Community Building

Network Development:

  • Participant alumni network
  • Ongoing collaboration facilitation
  • Mentorship program establishment
  • Resource sharing platform maintenance

Knowledge Transfer:

  • Best practices documentation
  • Curriculum development and sharing
  • Train-the-trainer programs
  • Institutional partnership development

8.2 Continuous Improvement

Feedback Integration:

  • Regular participant survey collection
  • Expert advisory board consultation
  • Literature and practice review
  • Policy and regulation monitoring

Program Evolution:

  • Annual program assessment and updates
  • Emerging technology integration
  • New partnership development
  • Scaling and replication planning

9. Reporting and Documentation

9.1 NAIRR Reporting Requirements

Progress Reports:

  • Brief monthly updates to the NAIRR program office
  • Quarterly detailed progress reports
  • Annual comprehensive evaluation report
  • Final project summary and outcomes

Resource Utilization:

  • Computational resource usage tracking
  • Cost-benefit analysis documentation
  • Efficiency and impact measurements
  • Lessons learned compilation

9.2 Public Dissemination

Research Outputs:

  • Peer-reviewed publications
  • Conference presentations
  • Workshop proceedings
  • Best practices guides

Community Outreach:

  • Public workshops and seminars
  • Media and press engagement
  • Social media and web presence
  • Policy and stakeholder briefings

10. Budget and Resource Allocation

10.1 Safety and Compliance Costs

Personnel:

  • Ethics and safety coordinator: [Budget Amount]
  • Technical security specialist: [Budget Amount]
  • Accessibility support services: [Budget Amount]

Infrastructure:

  • Secure computing environment: [Budget Amount]
  • Accessibility accommodations: [Budget Amount]
  • Safety and security tools: [Budget Amount]

10.2 Monitoring and Evaluation

Assessment Tools:

  • Pre/post workshop evaluations: [Budget Amount]
  • Long-term impact studies: [Budget Amount]
  • Third-party safety audits: [Budget Amount]

Conclusion

This Safe and Responsible AI plan ensures our NAIRR-funded workshop advances the frontiers of AI research while maintaining the highest standards of safety, ethics, and responsibility. Through comprehensive risk management, inclusive participation, robust oversight, and commitment to open science, we will contribute to the development of trustworthy AI that serves the public good.

Our approach aligns with federal priorities for responsible AI development and positions our workshop as a model for safe and inclusive AI education and research. We are committed to continuous improvement and adaptation as the field evolves and new challenges emerge.

Plan Version: 1.0
Effective Date: [Date]
Next Review: [Date]
Approved By: [Name, Title, Date]

Contact Information:
For questions about this plan or our responsible AI initiatives:
Email: ai@uky.edu
Website: caai.ai.uky.edu

University of Kentucky Center for Applied AI
760 Press Avenue
Lexington, KY 40508