A critical evaluation of Google’s seven principles on AI ethics

Google has announced seven principles for its uses of AI (https://blog.google/topics/ai/ai-principles/). Overall, I welcome the announcement. As far as I know, no other high-profile company has announced anything comparable. But there are reasons not to be satisfied with the seven principles, and I critically review them below.

1. Be socially beneficial.

  • “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides...” => [My concern: Google’s first principle is essentially a cost-benefit analysis. There is no deontological minimum line of respect and safety: as long as the benefits outweigh the downsides, Google will proceed. The sketch after this list illustrates the difference between such a rule and one with a deontological side constraint.]
  • “We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate.” => [My concern: Google can hide behind local norms. There is no mention of global standards, for instance, human rights. If China legally allows Google to use AI in a way that violates human rights, nothing in this principle stops Google from doing so.]
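
The difference I am pointing at can be made concrete. Below is a minimal, purely illustrative sketch in Python: the functions, numbers, and threshold are hypothetical and do not reflect any actual Google process; the point is only that a deontological side constraint changes the decision rule itself, not merely the weights.

```python
# Illustrative sketch only: hypothetical decision rules, not anything Google has published.

def proceed_pure_cost_benefit(expected_benefit: float, expected_risk: float,
                              margin: float = 1.0) -> bool:
    """Principle 1 as written: proceed whenever expected benefits 'substantially exceed' risks."""
    return expected_benefit > expected_risk * margin


def proceed_with_deontological_floor(expected_benefit: float, expected_risk: float,
                                     violates_rights: bool, margin: float = 1.0) -> bool:
    """The alternative I am asking for: a rights violation blocks the project
    no matter how large the expected net benefit is."""
    if violates_rights:
        return False
    return expected_benefit > expected_risk * margin


# Under the pure rule a high-benefit project proceeds even if it crosses a minimum
# standard; under the constrained rule it does not.
print(proceed_pure_cost_benefit(100.0, 10.0))                                # True
print(proceed_with_deontological_floor(100.0, 10.0, violates_rights=True))   # False
```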

2. Avoid creating or reinforcing unfair bias.

  • “We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies.” => [My concern: this wording lets Google excuse itself on the ground that there is no single standard of fairness. True, there is no perfect consensus, but there are widely shared notions of fairness, many of which are codified in statutes. Second, there is a growing literature on fair ML/AI, and Google should more actively conduct and support such research; the sketch after this list shows how concretely such notions can be operationalized.]
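
To show that fairness, although contested, is definable enough to measure and audit against, here is a minimal illustrative sketch of one standard metric from the fair-ML literature, the demographic parity difference. The predictions and groups below are made up for illustration; only the metric itself is standard.

```python
# Illustrative sketch only: demographic parity difference, one widely used
# (and debated) fairness metric, computed with plain numpy.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups (coded 0 and 1).
    A value near 0 means the model selects both groups at similar rates."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions for 8 people, 4 in each group.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5: group 0 is selected far more often
```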

3. Be built and tested for safety: no comment.

4. Be accountable to people. 

  • “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.” => [My concern: unlike responsibility, accountability is a voluntary act of taking responsibility (Watson, Scanlon, Enoch). In particular, the virtue of accountability is to take responsibility even without fault; strict liability is an example. Under the principle of reciprocity (those who enjoy the benefits should also bear the burdens), Google should accept liability without fault; see the “AI as scapegoat” section of my paper, “Ethics Implications of the Second Machine Age on Business and Society.”]

5. Incorporate privacy design principles:

  • “We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” => [My concern: only ex ante efforts (notice and consent) are mentioned; there is no commitment to ex post efforts (ex post explanation; see my paper on a right to explanation). “Relevant explanations” are mentioned under principle 4, but not here.]

6. Uphold high standards of scientific excellence: no comment.

7. Be made available for uses that accord with these principles: no comment.

“AI applications we will not pursue”:

  • “Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” => [My concern: this is a consequentialist standard without a minimum line of respect and safety. Second, whether the benefits outweigh the risks depends on Google’s own belief; an objective or reasonable-person standard would be better.]

Finally, Google states that it will not pursue “technologies whose purpose contravenes widely accepted principles of international law and human rights.” This is not part of the seven principles; it is an auxiliary rule. The rule can partially address my concern about the lack of a deontological minimum of respect and safety. But placing it at the end, as a side constraint rather than as the heart of AI ethics, will likely give employees the impression that the cost-benefit analysis is a higher principle than this constraint. Also tricky is that AI technologies can violate human rights and widely accepted moral and legal principles unintentionally. The rule above is concerned only with technologies whose purposes violate minimum standards. To me it sounds like, “We will not intentionally develop AI that violates human rights, but it is not our business if our AI systems recklessly, negligently, or unintentionally violate minimum standards.” AI becomes the scapegoat.

I hope I am overreacting.

Tae Wan Kim,