
Fake news and gender bias spotlighted in AI symposium hosted by HMGCC

Gender bias, deepfake detection and machine learning ethics were just some of the hot topics discussed as part of a special event in Milton Keynes – all focused on artificial intelligence. 

The HMGCC-organised symposium, entitled ‘The Challenges of Responsible Generative Artificial Intelligence’, attracted a range of speakers from local and national organisations. 

The public event was staged as part of the Milton Keynes AI Festival 2024, which ran from 28 October until 1 November, marking the first anniversary of the government’s AI Safety Summit held at Bletchley Park.

The festival offered an opportunity for businesses and other organisations to showcase their AI work and share insights, from many different perspectives, on this growing technology. 

HMGCC’s Chief Technology Officer Paul introduced the event, setting the scene for a series of fascinating talks by volunteer speakers from an array of different organisations. These included the Alan Turing Institute, Works For Us, Gartner, Aiimi, Origami Labs, Fuzzy Labs and Faculty AI. 

He commented: “What we are finding is that emerging technologies such as AI cut across so much of what we do at HMGCC. While it is a significant enabler, AI also comes with risks that we all need to understand and manage. That is doubly true for work with national security where very significant real-world consequences are at stake. Ethics must always be at the forefront of what we do.” 

The event was attended by a 40-strong audience who had the opportunity to ask questions, prompting discussion of both the pluses and pitfalls of working with AI.

Thom Kirwan-Evans, co-founder of Origami Labs, was among the speakers. He said: “The key takeaway from this event I hope will be understanding the difference between trust and trustworthiness – and how this is applied in the digital space. This is about understanding where content is coming from and knowing that, just because someone we know is sharing something online, it doesn’t mean that content is correct or even real. We need to reevaluate how we attribute trust to what we view online, especially when we don’t know the creator.”