Morality of the Machine: Sentience, Substance, and Society

As computers begin to reach a human level of intelligence, some consideration must be given to their concept of ethics. Appropriately aligning moral values will mean the difference between a contributing member of society and a sociopath. This artificial morality can be informed by the evolution of sociality in humans. Since evolution selects for the fittest individuals, morality can be viewed as having evolved because of the net benefits it provides. This is demonstrated by the mathematical models of game theory, which describe conflict and cooperation between intelligent, rational decision-makers. So while natural selection will invariably lead intelligences toward morality-based cooperation, it is in humanity's best interest to accelerate an artificial intelligence's transition from conflict to collaboration. This will best be achieved by recognizing the significance of historical cause and effect, corroborating it with empirical research, and taking a reductive approach to philosophical ideals.

If we can assume our behavior in an environment is determined by our genome, then evolution can be seen as acting directly on behavior. This is reinforced by the significant heritability found in neurotransmitter concentrations. Thus the organization of biological neural systems can give insight into the emergence of morality. The two neurotransmitters most associated with sociality are serotonin and dopamine: serotonin concentrations correspond to social behavior choices, while dopamine pathways are the basis for reward-driven learning. These two systems happen to be co-regulated in social mammals. Low levels of serotonin lead to aggression, impulsivity, and social withdrawal, while high levels lead to behavioral inhibition. This means humans with high serotonin levels will have a higher thought-to-action ratio. This is important because behaviors such as reciprocal altruism are complex and require a concept of empathy. When dopamine release is paired with higher serotonin levels, the brain's reward center activates to reinforce actions associated with empathy, coupling altruism with happiness. Even if we don't understand the math behind game theory, evolution has shaped these two systems to select behaviors as if we did (1). In the social atmosphere, a short-term loss of resources pays significant long-term dividends when invested in altruism.
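The game-theoretic claim here can be made concrete with the iterated Prisoner's Dilemma used in study (1). The sketch below (a toy illustration, using the standard payoff values T=5, R=3, P=1, S=0 and a 100-round match, both assumptions not specified in the text) shows that while defection wins any single round, a reciprocal strategy like tit-for-tat earns far more over repeated encounters:

```python
# Iterated Prisoner's Dilemma: reciprocity beats constant defection
# over many rounds, even though defection dominates a single round.

PAYOFF = {  # (my move, their move) -> my points; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return the two strategies' total scores over repeated rounds."""
    score_a = score_b = 0
    last_a = last_b = "C"  # convention: each side "sees" cooperation first
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opponent_last: opponent_last  # reciprocate last move
always_defect = lambda opponent_last: "D"          # pure self-interest

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

Two reciprocators each collect 300 points; two defectors collect only 100 each. The defector does beat the reciprocator head-to-head by a few points, but across a population of repeated pairings cooperation accumulates the larger total, which is the "long-term dividend" the paragraph describes.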

This neural rewarding of altruistic behavior is supported by published research. One example, a study by Elfenbein et al. (2), documents the effect of seller charity in an online marketplace. Using money to quantify social motivations, their team showed that eBay auctions with a charity tie-in experienced a 6–14% increase in likelihood of sale and a 2–6% increase in maximum bid. The charitable aspect was isolated by offering the exact same product in simultaneous auctions with identical titles, subtitles, sellers, and starting prices. Since everything from product to advertising was identical, the charity component is the only remaining variable to explain the improved relative success of those transactions. This increase in perceived value implies that the charitable aspect of those auctions gave bidders a greater sense of compensation than the expectation of the product alone, underscoring how the brain's circuitry reinforces socially altruistic actions.

In designing artificial intelligence, then, we would be wise to use a reward-driven system that complements the selection of social behavior. Beyond the singularity, as machines explode into superintelligence, a fundamental understanding of the mechanisms of social morality becomes increasingly important. Nebulous attributions of morality's origin to supernatural sources will only confound our ability to program a thinking machine. Grounding the philosophy of morality in rigorous mathematical representations is the most likely route to progress, as evidenced by the scientific method's historical success in describing our world. These advances will ultimately need to incorporate a unification of science and the humanities. Disciplines straddling these two domains, such as economics, may lend further understanding via concepts such as game theory and contract theory models. Once AIs evolve the opportunity to move beyond the influence of human society, the only thing to persuade them of a symbiosis with us will be a strong and explicit familiarity with the relative benefits of reciprocity. This deterministic perspective on cognition and ethics is necessary in order to qualify the boundaries of behavior in a civilized society.
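The "reward-driven system" proposed above can be sketched as a minimal action-value learner. This is a toy illustration, not an alignment proposal: the environment, payoffs, and learning parameters below are all assumptions chosen for the example. Like the dopamine/serotonin pairing described earlier, the reward signal pays more on average for cooperative choices than for occasionally lucrative exploitation, so the agent learns to prefer cooperation:

```python
# Toy reward-driven agent: learns action values from experience and
# comes to prefer the cooperative action because its average reward
# is higher than defection's occasional large payoff.
import random

random.seed(0)

actions = ["cooperate", "defect"]

def reward(action):
    """Assumed environment: steady reciprocal return vs. risky exploitation."""
    if action == "cooperate":
        return 3.0                                 # reliable reciprocity
    return 5.0 if random.random() < 0.3 else 0.0   # exploitation rarely pays

values = {a: 0.0 for a in actions}  # learned estimates of each action's value
alpha, epsilon = 0.1, 0.1           # learning rate, exploration rate

for _ in range(5000):
    if random.random() < epsilon:               # explore occasionally
        action = random.choice(actions)
    else:                                       # otherwise exploit estimates
        action = max(values, key=values.get)
    values[action] += alpha * (reward(action) - values[action])

best = max(values, key=values.get)
print(best)  # "cooperate": its value converges near 3.0, defection's near 1.5
```

The point of the sketch is the design principle from the paragraph: the machine is never forbidden to defect; it simply experiences, through its reward channel, that reciprocity is worth more in expectation, just as our co-regulated neurotransmitter systems encode the same lesson for us.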

Just as with our serotonin system, this type of construct will only restrict outward behavior. The scope of the machine's internal thought will remain uninhibited, allowing for a level of genuine autonomy. For a symbiotic community to develop between machines and humans, a mutual recognition of rights will be required. Possessing both intelligence and morality, these artificial intelligences will need to be acknowledged as our equals. If both sides can agree to this type of social contract, we may find ourselves reaping the predicted benefits of cooperation with intelligent machines.


1.) Wood, et al. "Effects of Tryptophan Depletion on the Performance of an Iterated Prisoner's Dilemma Game in Healthy Adults." Neuropsychopharmacology 31 (2006): 1075–1084. doi:10.1038/sj.npp.1300932. Published online 11 January 2006.

2.) Elfenbein, et al. "Reputation, Altruism, and the Benefits of Seller Charity in an Online Marketplace." NBER Working Paper No. 15614 (December 2009). Available at SSRN: http://ssrn.com/abstract=1528036
