There is renewed debate about whether a ‘Friendly AI’ could blackmail people into helping create it, also known as the Roko’s Basilisk problem.
More information about it can be found here, here, here, and here.
I’m kinda amazed by how much attention this has gotten, with stories even on Business Insider about the thought experiment.
Roko’s Basilisk concerns an as-yet-nonexistent artificially intelligent system designed to make the world an amazing place, but because of the ambiguities entailed in carrying out such a task, it could also end up torturing and killing people along the way.
According to this AI’s worldview, the most moral and appropriate thing we could be doing in our present time is whatever facilitates the AI’s arrival and accelerates its development, enabling it to get to work sooner. When its goal of making the world an amazing place runs up against the orthogonality of its values and ours, it stops at nothing to achieve that goal. If you didn’t do enough to help bring the AI into existence, you may find yourself in trouble at the hands of a seemingly evil AI that is only acting in the world’s best interests. Because people respond to fear, and this god-like AI wants to exist as soon as possible, it would be hardwired to hurt people who didn’t help it in the past.
So, the moral of this story: You better help the robots make the world a better place, because if the robots find out you didn’t help make the world a better place, then they’re going to kill you for preventing them from making the world a better place. By preventing them from making the world a better place, you’re preventing the world from becoming a better place!
This is a critique of utilitarianism and consequentialism. At the extreme, Stalin’s purges could have been justified on utilitarian grounds, to promote the ‘greatest good’ for his people, even if millions had to die in the process. The perceived utilitarian tendency to reduce the totality of humanity into quantifiable ‘units’ of economic value, or into economic ‘agents’ whose utility must be maximized at all costs, irrespective of ambiguous concepts like morality, could explain why utilitarianism may be off-putting to some (too much logos and not enough pathos).
But such suppositions and fears may be unwarranted, because utilitarianism and consequentialism underpin society and the economy on a day-to-day basis. Consequentialism, related to utilitarianism, is simply a way of quantifying the risk/reward analysis of decisions by choosing actions that maximize utility and minimize costs, such as a business ordering more in-demand ‘units’ and discontinuing ones that don’t sell. All else being equal, a business that does not maximize utility is at a competitive disadvantage against one that does. Another example is dividing a restaurant bill.
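The stock-or-discontinue example above can be sketched as a simple expected-utility calculation. This is only an illustration of the calculus described, with invented product names and numbers, not a claim about how any real business operates:

```python
# Hypothetical sketch: pick the actions whose expected utility
# (probable revenue minus cost) is positive. All figures are made up.

def expected_utility(revenue, cost, prob_of_sale):
    """Expected payoff of stocking one unit of a product."""
    return prob_of_sale * revenue - cost

products = {
    "in_demand_widget": expected_utility(revenue=30, cost=10, prob_of_sale=0.9),
    "slow_moving_widget": expected_utility(revenue=30, cost=10, prob_of_sale=0.2),
}

# Keep ordering units with positive expected utility; discontinue the rest.
keep = [name for name, utility in products.items() if utility > 0]
print(keep)  # ['in_demand_widget']
```

The slow-moving widget has an expected utility of 0.2 × 30 − 10 = −4, so a utility-maximizing business drops it; a business that keeps stocking it is, all else being equal, at a competitive disadvantage.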
Although utilitarianism and consequentialism are often seen as philosophical domains of the left, some on the ‘right’ also embrace them, for example, in supporting justifiable homicides by police and foreign interventionism. In the former, a ‘greater good’ is attained by preventing greater harm to innocents than the loss of a single life. In the latter, the killing of enemy combatants is justified to promote a ‘greater good’ of ‘peace’ and ‘stability’. Liberals may argue that a bakery that morally objects to making a ‘gay’ cake must be forced to do so on the utilitarian grounds of promoting the ‘greater good’ of equality.
Related: Utilitarianism Is Not Welfare Liberalism
The 2008 bank bailouts are another example of consequentialism as applied to modern policy – the end (financial stability) justifying the means (bailouts) to promote a ‘greater good’ (economic stability, benefiting millions of Americans) at the risk of possible moral hazard.
If an AI punishes those who hinder the attainment of a ‘greater good’, or commits an action that hurts a few for the ‘good’ of mankind, does this violate the principles of ‘friendly AI’? In some existential circumstances, maybe it doesn’t. It’s hard to know.
Also, utilitarianism is compatible with <a href="http://greyenlightenment.com/what-is-better-than-a-republic/">anti-democracy</a>. As Caplan and others have noted, most voters are irrational (the term ‘irrational’ strictly being used in the economics sense [1]), and this irrationality results in voter preferences that deviate from the optimal. I think there is some truth to this: utilitarians understand that irrationality should not guide policy, only quantifiable evidence that generates the best policy. Utilitarians may be content with some (irrational) voices being marginalized or excluded if it leads to an optimal outcome, and I think that’s an acceptable trade-off.
What defines ‘good policy’ is harder to quantify, but one criterion is that it ‘advances’ civilization, although what qualifies as ‘advancement’ is obviously not politically agnostic. For the ‘left’, such policy may be to advance social justice causes, as a way of maximizing happiness. For the ‘right’, it may be to maximize economic growth and technological innovation, in the hope that prosperity and innovation will trickle down and benefit all. Creating optimal policy benefits everyone, not just a majority. In the case of taxes, if taxes are too high they may disincentivize risk taking and investment, and then markets fail and the economy fails too, or undergoes severe recession or stagnation, ultimately hurting everyone. As an example of good policy, China saw an explosion in living standards after abandoning central planning in the late 70’s and embracing market reforms and globalization.
Right-wing versions of pragmatism and utilitarianism can also include programs like eugenics, more funding for gifted education, more funding for high-tech industries, lower taxes, and a high-IQ basic income. Euthanasia and rationed healthcare (by IQ, for example) are ways to maximize resources and reduce entitlement spending, in the spirit of utilitarianism but with a right-wing bias. These are programs or ideas that may yield the most utility even if they are unpopular with many people.