Diane Coyle wrote a piece on the Project Syndicate website arguing that computers are designed to think like economists. Artificial intelligence (AI) is a faultless version of homo economicus: a rationally calculating, logically consistent, ends-orientated agent capable of achieving its desired outcomes with finite computational resources. It is perceived as far more effective than a human at maximising an individual's utility. Coyle goes on to say, however, that economists today cannot offer a measure of actual utility.
Jeremy Bentham’s famous formulation of utilitarianism is known as the “greatest-happiness principle”. It holds that one must always act so as to produce the greatest aggregate happiness among all sentient beings, within reason. John Stuart Mill’s method of determining the best utility is that a moral agent, when given the choice between two or more actions, ought to choose the action that contributes most to (maximises) the total happiness in the world. However, these assumptions can produce some unease:
- Most of those designing algorithms are utilitarians who believe that if a ‘good’ is known, then it can be maximised. So how much thought is given to the possible societal impacts of algorithms, when they are designed to optimise efficiency and profitability?
- Algorithms are created using current and future data that is full of bias. The result could be the institutionalisation of biased and damaging decisions, with the excuse of, to quote ‘Little Britain’, ‘the computer says no’. See the video below.
- Algorithms make it easy for consumers to decide things, acting as a short-cut (heuristic). We therefore become slaves to the algorithm rather than taking ownership of our own thinking and reasoning. Those who control the algorithm hold an unfair advantage.
There is no doubt that in certain areas of society AI is extremely useful: it can cut down bureaucracy and improve efficiency in everyday life. But the real issue extends beyond the use of algorithmic decision-making in corporate and political governance, and strikes at the ethical foundations of our societies. As Coyle points out, we need to engage in self-reflection and decide whether we really want to encode current social arrangements into the future.