Safe and ethical development, deployment, and utilization of Artificial Intelligence


To ensure the safe and ethical development, deployment, and utilization of AI technologies


To advance society’s use of beneficial AI, while working to mitigate economic harms, privacy vulnerabilities, risk of catastrophic events, algorithmic bias, and other forms of harm.

Founding Story

The Ethical AI Forum (EAIF) is a not-for-profit think tank founded by Noah M. Kenney, a renowned researcher, technologist, and practitioner working at the intersection of computing and the social sciences. Having developed AI models and algorithms for many years, Noah was keenly aware of the benefits of AI in society. However, it was impossible to ignore the potential for harm on an unprecedented scale, several orders of magnitude larger than anything we, as a society, have experienced before. Noah's goal was to see AI utilized in an ethical manner that serves the good of humanity. To that end, he founded the Ethical AI Forum as a global think tank and built an interdisciplinary team of experts to collaborate on frameworks, policy proposals, and public education surrounding the ethical use of AI.

Global Artificial Intelligence Framework (GAIF)

One major project of the EAIF has been the development of the Global Artificial Intelligence Framework (GAIF), an interdisciplinary project developed by top researchers and practitioners from around the world. Though it is not yet finished, the GAIF is intended to be one of the world's most comprehensive AI frameworks.

In recent years, Artificial Intelligence (AI) has become increasingly prevalent, and is now ingrained in the daily lives of most people. From simple AI-powered algorithms that recommend movies or music to large language models to autonomous vehicles, AI has reached nearly every aspect of society. Yet, despite its prevalence, artificial intelligence is largely unregulated. We find three primary reasons why governments may be ill-suited to developing AI regulations:

1. Speed of innovation: Many governments are unable to implement regulation without extensive review, approval, and implementation processes. In light of the rate of technological innovation within the field of AI, this often presents a significant challenge.

2. Lack of understanding: Government leaders are required to analyze and implement policy to address a wide range of issues. As such, it is impossible for government leaders to be experts in every policy area. This is especially true of policy related to artificial intelligence, given the complexity of the technology. Ultimately, this often results in regulations that are impractical to implement.

3. Decision bias: Government leaders, particularly in democratic governments, are generally concerned with gaining the approval of constituents. As such, they are biased toward policy that may be popular, even if that policy fails to meet the needs of constituents.

In light of this, we understand that it may not be possible for most governments to effectively regulate artificial intelligence. Accordingly, this framework was developed to provide a practical approach for individuals, businesses, and organizations to develop, deploy, and utilize artificial intelligence in an ethical manner.
To achieve our goal of developing a practical framework, we have outlined several governing principles that shape our analysis. While this framework considers many objectives, we deemed the following three most important:

1. Ensuring data privacy and reducing bias: What is the least amount of data that can be shared to effectively accomplish a task? How can a lack of data privacy lead to discrimination in algorithmic decision making within a given focus area?

2. Economic implications: How will a particular policy impact the economy? Each focus area will have different economic considerations, which must be taken into account, given that business feasibility must not be in conflict with other objectives.

3. Mass destruction: How can a particular type of AI be "weaponized" or used with malicious intent? How can we overcome this? Assuming we cannot mitigate the threat completely, at what level do we consider it "safe enough" to implement anyway?

Given that every business and organization is different, this framework is meant to provide flexibility in implementation. In some cases, we provide fixed parameters to govern the development, deployment, and use of artificial intelligence. We were particularly uncompromising in matters of ethics. In other cases, we acknowledge tradeoffs between business interests and user desires. For example, we recognize an inherent trade-off between ensuring data privacy or anonymity in AI model training and ensuring data integrity or verifiability. In these cases, we seek to acknowledge the trade-offs, challenges, and concerns posed by each extreme.

Our general guidance is that businesses should operate in a manner that prioritizes ethics, privacy, and anti-discrimination metrics without compromising business interests beyond what is deemed reasonable. In short, we give significant freedom and flexibility to businesses and organizations. We provide "optimal scenarios," with the understanding that what is optimal may not always be feasible.