Corporate AI Ethics Best Practices
Research
A Study in Realities, Risks, and Rewards
The goal of this research is to share best practices for responsible AI among external and internal advisory boards, as well as practitioner groups inside companies.
To accomplish this, AI Truth is working with a team of Berkeley Law students to research:
- how some of the biggest companies are organizing their people for responsible AI success,
- the processes, frameworks, policies, and tech they are using to launch responsible AI initiatives, and
- the measurements (KPIs) they are using to know when they are successful.
What should corporations do to ensure their AI is responsible?
Organization of Paper
SECTION 1: External Boards
- Target Operating Model: People, Processes, Tools
- Measurements: Baselines and KPIs

SECTION 2: Internal Boards
- Target Operating Model: People, Processes, Tools
- Measurements: Baselines and KPIs

SECTION 3: Practitioner Groups (People actively developing AI/ML capabilities inside companies)
- Target Operating Model: People, Processes, Tools
- Measurements: Baselines and KPIs
Main Questions
Below are starter questions we would ask to get the discussions rolling.
- Who are the stakeholders in the company responsible for AI development and governance? What types of AI projects do they take on?
- Who is responsible for Corporate AI Ethics, and how are they positioned within the organization to maximize success?
- What mechanisms and processes (incentives, disincentives, etc.) are in place to ensure AI Ethics?
- What are the main AI Ethics challenges being tackled, and are these the biggest issues or merely low-hanging fruit?
- What measurements or Key Performance Indicators (KPIs) are used to gauge “success”? (And what is success?)
- What are the biggest obstacles to ensuring ethical AI?
- What role do legal professionals play in ensuring AI Ethics within a corporation?
Research Contributors
We respect the privacy of the individuals contributing to this work, and so we will not reveal any specific company’s or individual’s contributions unless it is their express wish that we do so.
Contributors we have interviewed to date include:
- People participating in external AI councils or advisory boards
- People participating in internal AI councils
- Practitioners actively working on responsible Artificial Intelligence or Machine Learning efforts inside their companies (these include employees in areas like user experience design, data engineering, data strategy, software engineering, research and development, and data science, i.e., people who use Python and develop algorithms for machine learning purposes).
Evergreen Research
AI Truth considers this work to be as continually evolving as the field of AI and ML itself. Because of this, we do not anticipate stopping with one piece of research; rather, we will continue to document the evolution of responsible AI in corporations as it unfolds. The research should be considered a series of “snapshots in time” of progress toward Responsible AI goals.
Research Deliverable Timing
The research will be submitted in May 2022 to the Springer Nature journal AI and Ethics for peer review. Prior to that, reviews of the work will be held with research contributors.