I000

Lat --°--'N   Long --°--'E

  The outcomes of the drones were largely predictable and beneficent when under the control of a trusted supervisor. Trust in the machine learning was what it had all come down to. The AI was not under-slept, fraught after arguing with a neighbour, hiding a chronic drug addiction, or falling in love with a woman from New Hampshire. The more the results kept improving, the more we let the AI drive. It became routine.

  It was not so much the optimism of technology that had won out. More so, it was a pessimism about human ability. The machine showed us our mistakes in ever sharper relief. Ethics, it had convinced us, was not a competition between right and wrong; it was a contest between wrong and more wrong. It was evidently best to ignore the arguments of the inferior ethical system. If the outcomes were more favourable this way, then it was better to rely more on the machine – rather than to insist on the lesser of two ethicists. Even when the AI’s argument was not well understood.

  The technology was sound as far as it went. The security could always have been better. It is hard to know when you have invested sufficiently in safety. Security tests were all about risks and likelihoods. In the end, an intelligent judgement is always to some extent a gamble. Hopefully the bet is a well-calculated one. But with AI the stakes had risen very high, and we got very unlucky.

  The machine we had built to reduce violence was, perversely, learning to encourage it.
