A major paradigm in AI ethics (or machine ethics) is, ironically, a human intuition-based model. An example is MIT Media Lab's study of the ethics of autonomous vehicles.
But any intuition-based model has serious limitations.
The fact that humans think you ought to do X does not mean that you ought to do X. It is just an opinion.
Firms are racing to introduce algorithms to automate workplaces, and algorithms are replacing human authorities. They substitute for expert authorities: using algorithms is often better than hiring human experts to predict market share, diagnose cancer, and detect errors, yet humans resist. They also replace managerial authorities: algorithms become bosses who direct your performance.

What makes firms distinct from markets, major theorists say, is authority. And where there is authority, it must be legitimate. Hence the legitimacy of algorithmic authority is an important question for the coming machine age. If algorithmic authority is not legitimate, it loses its own foundation; and if authority is lost, what makes firms distinct from markets is lost. Over the last decade, research on organizational legitimacy has accumulated and grown more sophisticated.

Regarding the legitimacy of algorithmic authority, questions abound. What is algorithmic authority? Why is it important for firms? Is it legitimate? Under what circumstances is algorithmic authority perceived as legitimate? Which aspect of legitimacy (pragmatic, cognitive, moral, or relational) is most relevant to algorithmic authority? Will algorithmic authority earn legitimacy in the coming machine age? Will firms that use algorithmic authorities survive the legitimacy crisis? To be continued.
So many Confucianism-focused papers are coming out, and editors often ask me to review them. Many of these papers study not Confucianism but what sociologist Peter Berger called Vulgar Confucianism. Confucianism is a set of thoughts developed by Confucian philosophers (from Confucius to modern commentators), just as Kantianism is a set of thoughts developed by Kantian philosophers. The modern behavioral pattern of Germans is not necessarily Kantian; likewise, the modern behavioral pattern of the Chinese is not necessarily Confucian.

Guanxi is a term that many papers connect to Confucianism, but the term guanxi is not found in the major Confucian classics at all. Furthermore, I am extremely skeptical that Confucian values can condone corruption. The best interpretation of guanxi is as a friendship-based commercial relationship, and for Confucians friendship cannot coexist with corruption, because one who encourages his friends to become corrupt is not a genuine friend.

Another example is hereditary CEO/chairperson succession. This practice is prevalent in East Asia, and people often connect it to Confucianism. But Confucianism began by rejecting pedigree-based succession. For more discussion of this issue, please see my next blog.
Major definitions of autonomous agents in engineering hold that they are agents that can decide their own goals. Thus, in principle, fully autonomous machines can act in ways inconsistent with human interests. A promising solution is to teach codes of ethics to autonomous machines, but this does not fully dissolve the problem: in theory, fully autonomous agents can choose whether to follow those codes. This possibility fuels public anxiety about the singularity, a point at which artificial superintelligence destroys human civilization.

The fundamental problem lies in the prevailing definition of autonomous agents in robotics. A totally different definition of autonomy is possible and has been studied in moral philosophy at least since Kant. For Kant, autonomous actions are actions based on generalizable action plans, so autonomy is the ability to act in a generalizable manner. If we use this notion of autonomy, an autonomous artificial superintelligence would be super ethical. I am currently working on this thought with John Hooker. To be continued.
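The Kantian idea above can be made concrete with a toy sketch. This is my own illustration, not the formal model being developed with John Hooker: the plan names, the payoff function, and the 50% threshold are all hypothetical assumptions chosen only to show the shape of a universalizability test, under which an agent acts only on plans that would still achieve their purpose if every agent adopted them.

```python
# Toy sketch of Kantian autonomy as a universalizability filter.
# All plan names and payoff rules below are hypothetical illustrations.

def achieves_purpose(plan, adopters, population):
    """Hypothetical payoff model: deception works only while it is rare."""
    if plan == "deceive_to_gain":
        # Deception achieves its purpose only if most agents are honest,
        # so trust still exists to be exploited (threshold is illustrative).
        return adopters / population < 0.5
    if plan == "deal_honestly":
        return True  # honesty achieves its purpose no matter how many adopt it
    return False

def is_generalizable(plan, population=100):
    # Toy version of Kant's test: would the plan still achieve its
    # purpose if all agents in the population adopted it?
    return achieves_purpose(plan, adopters=population, population=population)

def autonomous_choice(candidate_plans):
    # A Kant-style autonomous agent filters out non-generalizable plans
    # before choosing among the remainder.
    return [p for p in candidate_plans if is_generalizable(p)]

chosen = autonomous_choice(["deceive_to_gain", "deal_honestly"])
print(chosen)
```

Here "deceive_to_gain" fails the test, because if everyone deceives, deception no longer achieves its purpose, while "deal_honestly" passes. On this notion, an agent is more autonomous the more strictly it restricts itself to generalizable plans, which is why, on the Kantian definition, greater autonomy points toward ethical rather than adversarial behavior.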