UCLA Health AI Council (HAIC)

Artificial intelligence and machine learning play critical roles in shaping the future of health care. To stay on top of these evolving technologies and adapt to this changing landscape, UCLA Health established the Health AI Council (HAIC).

The HAIC provides AI guidelines, oversight and strategic direction for the entire UCLA Health system. HAIC members work together to ensure that all leaders and providers are using AI ethically and responsibly.

What does the UCLA Health AI Council do?

The primary responsibilities of the HAIC are to provide AI governance and oversee all clinical and operational initiatives involving AI-based technology. The HAIC’s responsibilities include:

  • Adopting responsible AI principles that serve as a guide for every member of the UCLA Health team
  • Evaluating ethical, regulatory and operational implications of AI initiatives, policies and practices
  • Ensuring that these initiatives are in alignment with the overarching mission and values of UCLA Health
  • Facilitating the exchange of AI-related knowledge, expertise and best practices
  • Encouraging collaboration and interdisciplinary engagement across the institution
  • Promoting innovative, responsible and safe AI solutions to enhance health care
  • Serving as a central hub for AI discussion

Establishing responsible principles to guide AI practices

One of the objectives of the HAIC is to agree on a core set of values that serve as a guiding framework for AI initiatives. These core values ensure that everyone is on the same page when considering and evaluating how we use AI.

The HAIC adopted the Responsible AI Principles established in 2021 by the University of California (UC) Presidential Working Group on AI. These AI principles guide us as we procure, develop, implement and monitor AI technologies across UCLA Health:

Appropriateness: The potential benefits and risks of AI and the needs and priorities of those affected should be carefully evaluated to determine whether AI should be applied or prohibited.

Transparency: Individuals should be informed when AI-enabled tools are being used. The methods should be explainable, to the extent possible, and individuals should be able to understand AI-based outcomes, ways to challenge them and meaningful remedies to address any harms caused.

Accuracy, reliability and safety: AI-enabled tools should be effective, accurate and reliable for the intended use and verifiably safe and secure throughout their lifetime.

Fairness and nondiscrimination: AI-enabled tools should be assessed for bias and discrimination. Procedures should be put in place to proactively identify, mitigate and remedy these harms.

Privacy and security: AI-enabled tools should be designed in ways that maximize the privacy and security of persons and personal data.

Human values: AI-enabled tools should be developed and used in ways that uphold human values, such as human agency and dignity, and respect for civil and human rights. Adherence to civil rights laws and human rights principles must be examined when considering AI adoption in contexts where those rights could be violated.

Shared benefit and prosperity: AI-enabled tools should be inclusive and promote equitable benefits (e.g., social, economic, environmental) for all.

Accountability: UCLA Health should be held accountable for the performance of developed and procured AI systems in line with the above principles. Similarly, users of the AI systems, such as UCLA Health faculty and staff, should be held accountable for their individual use and for any decisions made with the support of these systems.

Visit the University of California Office of the President to learn more about the UC-wide Presidential Working Group on AI.

Developing an AI risk and impact assessment strategy

As part of UCLA Health’s commitment to responsible AI, the HAIC is developing a comprehensive risk and impact assessment strategy. This AI risk-management framework will:

  • Evaluate AI-enabled technologies from the initial procurement stage through implementation and throughout their operational lifetime
  • Stratify all AI systems by level of risk, ensuring that those with the highest anticipated risk receive more oversight
  • Collect, at a minimum, detailed documentation on all predictive AI systems for future reference to ensure algorithmic transparency
  • Make recommendations for mitigating risks and optimizing the positive impact on patient care and operational efficiency