Machine Morality
New terms emerge every day, especially in this rapidly evolving web age. Machine morality concerns the ethical decision-making of machines. At first glance, many readers (myself included) might assume this belongs to science fiction novels. In fact, however, scientists have begun to study the issue seriously. Wendell Wallach, a lecturer at Yale's Interdisciplinary Center for Bioethics, claimed that "We are just a few years away from a catastrophic disaster from an autonomous computer system making a decision." Is this claim overstated?
Whether or not that claim is overstated, it is realistic to expect that people will rely on machines to make more and more decisions in the future. At present, these decision-making requests mostly stay within purely technical realms. In the future, however, it is quite possible that such questions will extend into the ethical realm. If we cannot stop people from posing these kinds of questions to machines, we had better think about how to handle them from the beginning. This is why machine morality is a real problem.
One open problem is whether morality can be quantitatively calculated at all. Where would the threshold lie between a moral action and an immoral one? There are many similar questions in the realm of computational philosophy, and progress in this field could matter a great deal for the future web.
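To make the difficulty concrete, here is a deliberately naive sketch of what "quantitative morality" might look like: score an action as a weighted sum of hand-picked ethical factors and compare it against a cut-off. Every factor name, weight, and threshold below is invented purely for illustration; nothing here reflects an accepted method of moral calculation.

```python
# Hypothetical "quantified morality": a weighted sum of ethical factors
# compared against an arbitrary threshold. All names, weights, and the
# threshold itself are invented for illustration only.

MORAL_THRESHOLD = 0.5  # hypothetical cut-off between "moral" and "immoral"

def moral_score(factors, weights):
    """Weighted average of factor values, each assumed to lie in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total_weight

# Invented weighting of invented ethical dimensions.
weights = {"harm_avoided": 0.5, "fairness": 0.3, "honesty": 0.2}

# An example action, scored on each dimension (values are made up).
action = {"harm_avoided": 0.9, "fairness": 0.6, "honesty": 0.8}

score = moral_score(action, weights)
verdict = "moral" if score >= MORAL_THRESHOLD else "immoral"
```

The sketch only sharpens the question it pretends to answer: who chooses the factors, the weights, and the threshold? Move `MORAL_THRESHOLD` slightly and the same action flips from "moral" to "immoral", which is precisely the kind of arbitrariness a serious computational philosophy would have to confront.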