1. Tae Wan Kim & Bryan Routledge, "Algorithmic transparency, a right to explanation, and placing trust" (working paper). In 2016, the European Parliament adopted the General Data Protection Regulation (GDPR) to regulate automated algorithmic decision systems. Researchers debate whether the GDPR, in addition to “a right to be forgotten,” grants data subjects another novel kind of protection, a so-called “right to explanation.” If the GDPR grants such a right, companies (e.g., Facebook) that process the personal data of EU residents have a duty to provide the parties involved (e.g., users, customers, or employees) with a meaningful explanation of how their automated algorithmic systems reach decisions. The debate itself assumes that there ought to be such a right, yet the justification for that right is underexplored. My coauthor and I search for its ethical foundations, contending that the ethical foundation of informed consent, namely autonomy and intelligent trust, justifies a right to explanation.

2. "Toward non-intuition-based machine ethics." How can businesses that deploy autonomous artificial agents prevent those agents from harming or wronging humans? One way is to teach ethics to machines. A dominant model in machine ethics is a prima facie duty approach, computerized through inductive logic programming. But this approach relies too heavily on human moral intuition. I plan to develop a model of machine ethics that does not rest on moral intuition. For this project, I am currently collaborating with John Hooker.
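To make the target of the critique concrete, here is a minimal, hypothetical sketch of how a prima facie duty approach can be computerized: each candidate action receives satisfaction/violation scores for a handful of duties, and the system selects the action with the best weighted balance. The duty names, weights, and scores below are illustrative assumptions for exposition, not the actual inductive-logic-programming system discussed in the paper.

```python
# Hypothetical prima facie duty calculus, sketched for illustration.
# Duties, weights, and scores are invented; a real system would learn
# the decision principle from cases (e.g., via inductive logic programming).

DUTIES = ["nonmaleficence", "beneficence", "autonomy"]

def duty_balance(action):
    """Sum the weighted duty satisfactions (+) and violations (-)."""
    return sum(action["scores"][d] * action["weights"][d] for d in DUTIES)

def choose_action(actions):
    """Pick the candidate action with the highest weighted duty balance."""
    return max(actions, key=duty_balance)

actions = [
    {"name": "notify_overseer",
     "weights": {"nonmaleficence": 2, "beneficence": 1, "autonomy": 1},
     "scores":  {"nonmaleficence": +2, "beneficence": +1, "autonomy": -1}},
    {"name": "accept_refusal",
     "weights": {"nonmaleficence": 2, "beneficence": 1, "autonomy": 1},
     "scores":  {"nonmaleficence": -1, "beneficence": -1, "autonomy": +2}},
]

print(choose_action(actions)["name"])  # notify_overseer
```

The sketch makes the objection visible: every number in the table ultimately encodes a human moral intuition about how much each duty matters, which is precisely the dependence the project aims to remove.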

3. "Autonomy, reasons-responsiveness, and artificial intelligence: A moral case for explainable AIs." When an AI gives a response to a medical doctor (“Cancer”), a lawyer (“Guilty”), or an investment banker (“Buy”), the machine gives the professional the most likely answer, but it does not provide a rationale. There is a fundamental mismatch between the way some machine learning algorithms (e.g., neural networks) operate and the way humans think.[1] The mismatch challenges a core aspect of autonomy, namely “reasons-responsiveness”: one’s reasons-responsiveness is respected only if one is in a condition in which one is capable of acting for reasons that one endorses.[2] Because autonomy is a central aspect of humanity, the mismatch challenges humanity itself. I aim to develop this argument in support of explainable/interpretable AIs.
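The contrast the paper trades on can be sketched in a few lines: an opaque advisor that emits only a verdict, versus an explainable one that returns the verdict together with the reasons that drove it, which the professional can then endorse or reject. The rules, thresholds, and feature names below are invented for this illustration; they stand in for whatever an interpretable model would actually surface.

```python
# Illustrative contrast between an opaque and an explainable advisor.
# All features, thresholds, and rules are hypothetical.

def opaque_advisor(features):
    """Black-box style: a bare verdict, no rationale."""
    score = 0.4 * features["density"] + 0.6 * features["growth"]
    return "Cancer" if score > 0.5 else "Benign"

def explainable_advisor(features):
    """Returns a verdict together with the reasons behind it."""
    reasons = []
    if features["density"] > 0.7:
        reasons.append("tissue density above 0.7")
    if features["growth"] > 0.5:
        reasons.append("growth rate above 0.5")
    verdict = "Cancer" if reasons else "Benign"
    return verdict, reasons

case = {"density": 0.8, "growth": 0.6}
print(opaque_advisor(case))                      # verdict only
verdict, reasons = explainable_advisor(case)
print(verdict, "because:", "; ".join(reasons))   # verdict plus rationale
```

Only the second advisor leaves the doctor in a position to act for reasons she endorses, which is the reasons-responsiveness condition the argument appeals to.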

4. "Confucian meritocracy and hereditary CEO succession." It is often believed that the practice of hereditary CEO succession in East Asian firms is in accordance with Confucianism. This article draws upon classical Confucian texts to examine this popular belief critically. The analysis reveals why Confucianism does not justify the practice and shows that the Confucian account of governance and leadership change rests fundamentally on worthiness rather than pedigree. It urges management researchers, policymakers, and the public media to move beyond the ideology of “vulgar Confucianism” by carefully examining whether practices labeled “Confucian” are justifiably Confucian, especially when the term is used to explain and rationalize Chinese or, more broadly, Asian economic models and business practices.


[1] Jenna Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data & Society 3, no. 1 (2016): 1–12.

[2] Elizabeth Anderson, “Practical reason and incommensurable goods,” in Incommensurability, Incomparability, and Practical Reason, ed. Ruth Chang (Cambridge, MA: Harvard University Press, 1997).