At the start of last month, the UK and US signed a landmark deal to work together on testing advanced AI. Under the agreement, the first bilateral deal of its kind, the two countries will jointly develop ‘robust’ methods for evaluating the safety of AI tools and the systems that underpin them. UK tech minister Michelle Donelan called AI ‘the defining technology challenge of our generation’ and added that the agreement builds on commitments made at the AI Safety Summit held at Bletchley Park in November 2023.
Eleanor Watson, IEEE member, AI ethics engineer and AI faculty at Singularity University, said: “Hopefully, this will provide a chance to build upon the foundations already laid. As ethical considerations surrounding AI become more prominent, it is important to take stock of where recent developments have taken us and to meaningfully choose where we want to go from here. The responsible future of AI requires vision, foresight and courageous leadership that upholds ethical integrity in the face of more expedient options.
“Explainable AI, which focuses on making machine learning models interpretable to non-experts, is certain to become increasingly important as these technologies reach more sectors of society, because both regulators and the public will demand the ability to contest algorithmic decision-making. While such subfields offer exciting avenues for technical innovation, they also address growing societal and ethical concerns surrounding machine learning.”