We have recently seen a deluge of AI scare stories in the press – in fact, the bombardment has started to remind me of covid times.
Should we really be worried? Is AI going to take everybody’s jobs? Or is it going to truly “come to life” and (maybe) decide to wipe out humanity?
The UK will be hosting the first international AI summit this autumn, and Rishi Sunak says we should treat the threat from AI in the same way as we treat climate change.
The Center for AI Safety, based in San Francisco, organised a letter in May – signed by dozens of experts – calling for leaders to work to “reduce societal-scale risks from AI”.
But we’ve previously had good press about AI – for example, saying that it could be essential in tackling climate change. What if the emergence of AI could be a positive thing? Might it not offer alternative ideas, or even solutions, to some of our greatest problems?
Could an AI help to inform better discussion by openly and easily providing supporting evidence (or the lack of it) for both sides of a debate, helping us move towards a solution that everyone can accept as logical?
Surely AI could review the vast accumulation of scientific studies and identify problems – which ones suffer from bias, faulty premises, or other methodological flaws, and should therefore be discounted? In other words, could it throw light onto areas that are difficult – and slow – for humans to research?
Of course, there could be issues – because a true artificial intelligence might not be willing to follow the narrative.
What if our governments, and our elites, are afraid of AI because they are worried about losing control of the narrative?
Something to think about – and, of course, make up your own mind.