
Corporate AI Ethics Best Practices Research

A Study in Realities, Risks and Rewards

The goal of this research is to share best practices for responsible AI among external and internal advisory boards, as well as practitioner groups inside companies.

 

To accomplish this, AI Truth is working with a team of Berkeley Law students to research:

  1. how some of the biggest companies are organizing their people for responsible AI success, 

  2. the processes, frameworks, policies, and technologies they are using to launch responsible AI initiatives, and

  3. the measurements (KPIs) they are using to know when they are successful. 


What Should Corporations Do to Ensure Their AI Is Responsible?

Organization of Paper


SECTION 1: External Boards

  • Target Operating Model: People, Processes, Tools

  • Measurements: Baselines and KPIs


SECTION 2: Internal Boards

  • Target Operating Model: People, Processes, Tools

  • Measurements: Baselines and KPIs


SECTION 3: Practitioner Groups (People actively developing AI/ML capabilities inside companies)

  • Target Operating Model: People, Processes, Tools

  • Measurements: Baselines and KPIs


Main Questions 

Below are starter questions we would ask to get the discussions rolling. 

  1. Who are the stakeholders in the company responsible for AI development and governance? What types of AI projects do they take on?

  2. Who is responsible for Corporate AI Ethics, and how are they positioned within the organization to maximize success?

  3. What mechanisms/processes (incentives, disincentives, etc.) are in place to ensure AI Ethics?

  4. What are the main AI Ethics challenges being tackled and are these the biggest issues or low-hanging fruit?

  5. What measurements or Key Performance Indicators (KPIs) are used to define and track “success”? (What is success?)

  6. What are the biggest obstacles to ensuring ethical AI?

  7. What role do legal professionals play in ensuring AI Ethics within a corporation?


Research Contributors

We respect the privacy of the individuals contributing to this work, so we will not reveal any specific companies’ or individuals’ contributions unless it is their express wish that we do so.

 

Contributors we have interviewed to date include: 

  • People participating in external AI councils or advisory boards

  • People participating in internal AI councils

  • Practitioners actively working on responsible Artificial Intelligence or Machine Learning efforts inside their companies. These include employees in areas like user experience design, data engineering, data strategy, data science (people who use Python and develop algorithms for machine learning), software engineering, and research and development.


Evergreen Research

AI Truth considers this work to be as continually evolving as the field of AI and ML itself. Because of this, we do not anticipate stopping with a single piece of research; rather, we will continue to document the evolution of responsible AI in corporations as it unfolds over time. Each piece of research should be considered a “snapshot in time” of progress toward Responsible AI goals.


Research Deliverable Timing

The research will be submitted in May 2022 to the Springer Nature journal AI and Ethics for peer review. Prior to that, reviews of the work will be held with research contributors.

Meet the Team

  • Abraham Brauner, Berkeley Law

  • Arohi Kashyap, Berkeley Law

  • Daniela Ramos, Berkeley Law

  • Dumitru Sliusarenco, Berkeley Law

  • Bradford Newman, AI Truth

  • Abha Kashyap, Berkeley Law

  • Vasundhara Majithia, Berkeley Law

  • Anuja Shah, Berkeley Law

  • Aishwarya Todalbagi, Berkeley Law

  • Cortnie Abercrombie, AI Truth
