The ethics of AI ethics boards

Google dramatically dissolved its AI ethics board just one week after forming it. Many ask who should be included on an AI ethics board. Ethicists? Politicians? Who should sit on the board is an important question, but what principles the board should use is a more important and fundamental issue. I propose the following principles for an AI ethics board.

The framework is in line with the broadly Kantian/Habermasian procedural and discursive approach to justice and democracy. In particular, I am indebted to Daniels and Sabin's procedural approach to the fair allocation of limited resources.

First, determining whether X (a new product, policy, etc.) is ethical should not be a matter of power or voting. Instead, any normative judgment should be a matter of reasonableness and legitimacy.

Second, X is permissible when all of the following five conditions are met:

  1. Utility Condition: X satisfies the Utility Condition if it is rational to believe (on the best available theory and evidence) that the change maximizes (or at least enhances) the overall value for the stakeholders involved.

  2. Generalization Condition (No-cheating principle): A company acts unethically unless it is doing something that it could rationally will every company to do, in similar circumstances, if it can (adapted from Parfit).

  3. Publicity Condition: Decisions and the rationales behind them must be publicly accessible to all stakeholders. The Publicity Condition helps ensure the transparency of decision-making procedures, which allows the parties involved to examine whether the decision-making processes are coherent, sound, and grounded in evidence. Making the rationales publicly accessible also prompts decision-makers to clarify and double-check their reasoning and to relate it to the people affected. Such transparency keeps decisions open to scrutiny and debate, which in turn can improve the quality of public deliberation and facilitate social learning. Decision-making processes that meet the Publicity Condition show that the decision-makers are principled and responsive to the people, in particular to those affected by their decisions, thereby conferring legitimacy on the decision-makers. If trade secrets are involved, a trusted third party can represent the public.

  4. Revision and Appeals Condition: There must be mechanisms for challenge and dispute resolution regarding decisions, and, more broadly, opportunities for revision and improvement of policies in light of new evidence or arguments.

  5. Regulation Condition: There is regulation of the process to ensure that conditions 1-4 are met. An AI ethics board that has no power is not a board.
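To make the procedural character of these five conditions concrete, here is a minimal sketch of a board review treated as a checklist, written in Python. Everything in it (the Proposal and ConditionResult names, the example verdicts) is a hypothetical illustration of the procedure, not a substitute for actual deliberation.

```python
from dataclasses import dataclass, field


@dataclass
class ConditionResult:
    name: str
    met: bool
    rationale: str  # Publicity Condition: every verdict carries a publicly accessible rationale


@dataclass
class Proposal:
    description: str
    results: list = field(default_factory=list)

    def add(self, name: str, met: bool, rationale: str) -> None:
        self.results.append(ConditionResult(name, met, rationale))

    def permissible(self) -> bool:
        # X is permissible only when all five conditions are judged to be met.
        return len(self.results) == 5 and all(r.met for r in self.results)


review = Proposal("Hypothetical facial-recognition feature")
review.add("Utility", True, "Best available evidence suggests the change enhances overall stakeholder value.")
review.add("Generalization", True, "The company could rationally will every company to act this way.")
review.add("Publicity", False, "The decision rationale is not yet publicly accessible to stakeholders.")
review.add("Revision and Appeals", True, "A dispute-resolution and revision mechanism exists.")
review.add("Regulation", True, "An empowered board enforces conditions 1-4.")
print("Permissible:", review.permissible())  # False: the Publicity Condition fails
```

The point of the sketch is simply that the framework is conjunctive: a single failed condition, such as Publicity in the example, blocks permissibility no matter how much utility the proposal promises.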

A critical evaluation of Google's seven principles on AI ethics

Google announced seven principles for its uses of AI (https://blog.google/topics/ai/ai-principles/). Overall, I welcome the announcement. As far as I know, no other high-profile company has announced anything like it. But there are reasons not to be satisfied with the seven principles. I critically review them below.

1. Be socially beneficial.

  • “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.....” => [My concern: Google’s first principle is a kind of cost-benefit analysis. There is no deontological minimum safety line. As long as the benefits outweigh the downsides, Google will go ahead.]
  • “We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate.” => [My concern: Google can take advantage of local norms. There is no comment about global standards, for instance human rights. If China legally allows Google to use AI in a way that violates human rights, Google could go ahead.]

2. Avoid creating or reinforcing unfair bias.

  • “We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies.” => [My concern: Google will excuse itself by saying that there is no standard notion of fairness. There is no perfect consensus, yes, but there are widely shared notions of fairness, which for the most part are translated into statutes. Second, there is a growing literature on fair ML/AI. Google should more actively conduct and support such research.]

3. Be built and tested for safety: no comment.

4. Be accountable to people. 

  • “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.” => [My concern: Unlike responsibility, accountability is a voluntary act of taking responsibility (Watson, Scanlon, Enoch). In particular, the virtue of accountability is taking responsibility without fault; an example is strict liability. Under the principle of reciprocity, Google should accept liability without fault: the burdens must be borne by those who enjoy the benefits (see the “AI as scapegoat” section of my paper, Ethical implications of the second machine age on business and society).]

5. Incorporate privacy design principles:

  • “We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” => [My concern: only ex ante efforts (notice and consent) are mentioned. There is no commitment to ex post efforts (ex post explanation; see my paper about a right to explanation). “Relevant explanations” are mentioned under principle 4, but not here.]

6. Uphold high standards of scientific excellence: no comment.

7. Be made available for uses that accord with these principles: no comment.

“AI applications we will not pursue”:

  • “Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” => [My concern: this is a consequentialist standard without a minimum respect and safety line. Second, whether the benefits outweigh the risks depends upon Google’s own belief. It would be better to use an objective or reasonable-person standpoint.]

Finally, Google states, “Technologies whose purpose contravenes widely accepted principles of international law and human rights.” This is not part of the seven principles; it is an auxiliary rule. This rule can partially address my concerns about the lack of a deontological minimum of respect and safety. But placing it at the end as a side constraint, rather than at the heart of AI ethics, will likely give employees the impression that cost-benefit analysis is a higher principle than this side constraint. Another tricky point is that AI technologies can violate human rights and widely accepted moral and legal principles unintentionally. The rule above is concerned only with technologies whose purposes violate minimum standards. To me, this sounds like, “We will not intentionally develop AI that violates human rights, but it is not our business if our AI systems recklessly, negligently, or unintentionally violate minimum standards.” AI as scapegoat, again.

I hope I am overreacting.

Tae Wan Kim

MIT Media Lab is wrong about AI ethics (or Machine Ethics). The Carnegie approach is better.

A major paradigm in AI ethics (or machine ethics) is, ironically, a human-intuition-based model. An example is the MIT Media Lab's study of the ethics of autonomous vehicles.

But any intuition-based model has serious limitations.

The fact that humans think you ought to do X does not mean that you ought to do X. It is just an opinion.

So the MIT Media Lab's approach is wrong. It is possible to develop a non-intuition-based approach to machine ethics. If you are interested, click here (the paper was presented at the AIES conference in 2018).

Algorithmic authority and organizational legitimacy

Firms are racing to introduce algorithms to automate workplaces. Algorithms are replacing human authorities. Algorithms substitute for expert authorities: using algorithms is often better than hiring human experts to predict market share, diagnose cancer, and detect errors, and humans resist. Algorithms also replace managerial authorities: algorithms become the bosses who direct your performance. What makes firms distinct from markets is authority, major theorists say. When there is authority, it must be legitimate. Hence, the legitimacy of algorithmic authority is an important question for the coming machine age. If algorithmic authority is not legitimate, it loses its own foundation, and if authority is lost, what makes firms distinct from markets is lost. Over the last decade, research on organizational legitimacy has accumulated and grown more sophisticated. Regarding the legitimacy of algorithmic authority, questions abound. What is algorithmic authority? Why is it important for firms? Is it legitimate? Under what circumstances is algorithmic authority perceived as legitimate? Which aspect of legitimacy (pragmatic, cognitive, moral, or relational) is most relevant to algorithmic authority? Is algorithmic authority going to earn legitimacy for the coming machine age? Are firms that use algorithmic authorities going to survive the legitimacy crisis? To be continued.

So many Confucianism papers

So many Confucianism-focused papers are coming out, and editors often ask me to review them. Many of these papers study not Confucianism but what the sociologist Peter Berger called Vulgar Confucianism. Confucianism is a set of thoughts developed by Confucian philosophers (from Confucius to modern commentators), just as Kantianism is a set of thoughts developed by Kantian philosophers. The modern behavioral pattern of Germans is not necessarily Kantian; accordingly, the modern behavioral pattern of the Chinese is not necessarily Confucian. Guanxi is a term that many papers connect to Confucianism, but the term guanxi is not found in the major Confucian classics at all. Furthermore, I am extremely skeptical that Confucian values can condone corruption. The best interpretation of guanxi is as a friendship-based commercial relationship, and for Confucians friendship cannot coexist with corruption, because one who encourages his friends to become corrupt is not a genuine friend. Another example is hereditary CEO/chairperson succession. This practice is prevalent in East Asia, and people often connect it to Confucianism. But Confucianism in fact began by rejecting pedigree-based succession. For more discussion of this issue, please see my next blog post.

On autonomous machines and machine ethics

Major definitions of autonomous agents in engineering describe them as agents that can decide their own goals. Thus, in principle, fully autonomous machines can act in ways inconsistent with human interests. A promising solution is to teach codes of ethics to autonomous machines, but this does not fully resolve the problem: in theory, fully autonomous agents can choose whether to use those codes of ethics. This potential creates public anxiety about the singularity, a point at which super artificial intelligence destroys human civilization. The fundamental problem is the prevailing definition of autonomous agents in robotics. A totally different definition of autonomous agents is possible and has been studied in moral philosophy at least since Kant. For Kant, autonomous actions are actions based on generalizable action plans, so autonomy is the ability to act in a generalizable manner. If we use this notion of autonomy, autonomous super artificial intelligence will be super ethical. I am currently working on this thought with John Hooker. To be continued.
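
To give a rough sense of what a generalization test might look like computationally, here is a toy sketch in Python. It is only an illustration of the Kantian idea, not the framework John Hooker and I are developing; the ActionPlan class, its fields, and the queue-jumping example are all hypothetical. The test asks whether a plan's own goal would still be achieved if every agent in similar circumstances adopted the plan.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ActionPlan:
    description: str
    # Hypothetical world state that would result if every agent in similar
    # circumstances adopted the plan.
    outcome_if_universalized: Dict[str, bool]
    # Predicate over a world state: True if the plan's own goal is achieved there.
    goal: Callable[[Dict[str, bool]], bool]

    def is_generalizable(self) -> bool:
        # Kant-style test: the plan must not defeat its own purpose when universalized.
        return self.goal(self.outcome_if_universalized)


# Hypothetical example: "cut in line whenever it saves me time."
# If everyone adopts the plan, queues break down and no one saves time,
# so the plan defeats its own goal and fails the test.
queue_jumping = ActionPlan(
    description="Cut in line whenever it saves me time",
    outcome_if_universalized={"time_saved": False},
    goal=lambda world: world["time_saved"],
)
print(queue_jumping.is_generalizable())  # False: the plan is not generalizable
```

On this toy reading, an autonomous agent is ethical not because its goals happen to match ours, but because it acts only on plans that pass the generalization test.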