v3.7: Algorithms of life
It’s all about algorithms this week: what they are, how much influence they wield and what makes them tick.
It’s important that we understand these bits of code, because it’s not just Netflix recommendations they serve up. They also make decisions that affect your life – and the Dutch government learned the hard way just how badly that can go wrong.
In 2016, data scientist Cathy O’Neil dubbed them Weapons of Math Destruction and, unfortunately, in the apparent age of AI, there seems to be less focus on their potential to cause harm than on their potential to generate sub-standard content. 💣
So, in an effort to help us all better understand algorithms, we brought in Megan Nyhan, a PhD researcher at D-Real, where she is working on a framework for designing ethical and trustworthy AI recommender systems.
Megan gave us a grounding in collaborative filtering versus content-based filtering, implicit versus explicit signals, supervised versus unsupervised learning, black-box versus explainable AI, and how we end up coding in bias and polarisation, creating echo chambers and feedback loops. (She even answered one of our listeners’ questions, and if you have a burning tech question you want answered, you can reach us at fortechssakepod@gmail.com). 📧
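If you’d like to see the difference between those first two approaches in code, here’s a very rough Python sketch. The ratings and item features are entirely made up for illustration – this isn’t Megan’s framework or any real recommender – but it shows how collaborative filtering leans on what similar users liked, while content-based filtering leans on what the item itself is like.

```python
import numpy as np

# Toy user–item rating matrix (rows = users, columns = items, 0 = not rated).
# All numbers are invented purely for illustration.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Hand-labelled item features (e.g. genre flags) for the content-based side.
item_features = np.array([
    [1, 0],  # item 0: comedy
    [1, 0],  # item 1: comedy
    [0, 1],  # item 2: documentary
    [0, 1],  # item 3: documentary
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def collaborative_score(user, item):
    """Collaborative filtering: weight other users' ratings of `item`
    by how similarly those users have rated things to `user`."""
    sims, weighted = [], []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        s = cosine(ratings[user], ratings[other])
        sims.append(s)
        weighted.append(s * ratings[other, item])
    return sum(weighted) / sum(sims) if sims else 0.0

def content_score(user, item):
    """Content-based filtering: build a profile from the features of items
    the user has already rated, then compare it to the candidate item."""
    rated = ratings[user] > 0
    profile = ratings[user, rated] @ item_features[rated]
    return cosine(profile, item_features[item])

user, item = 0, 2  # user 0 hasn't rated item 2 yet
print("collaborative estimate:", round(collaborative_score(user, item), 2))
print("content-based similarity:", round(content_score(user, item), 2))
```

(The two scores are on different scales – a predicted rating versus a similarity – but the point is where each one gets its signal from.)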
And it’s not all negative. Algorithms can do useful, interesting work too, particularly in the area of healthcare. (Did you know you could track a pandemic through Google searches?) But reaping the benefits will require strict regulation and oversight to mitigate the risks. 🕵🏻‍♀️
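If you’re curious about that pandemic-tracking aside, here’s a tiny, hypothetical illustration of the idea behind projects like Google Flu Trends. The weekly figures below are made up, but they show how you might check whether search interest in a symptom term leads reported case counts by a week or two.

```python
import numpy as np

# Entirely made-up weekly figures, just to illustrate the idea:
# search interest often rises before (or alongside) officially reported cases.
search_volume  = np.array([12, 15, 22, 40, 75, 120, 160, 150, 110, 70])  # searches for a symptom term
reported_cases = np.array([ 3,  4,  6, 11, 25,  48,  80,  95,  70, 40])  # confirmed cases

def lagged_correlation(signal, cases, lag):
    """Pearson correlation between the search signal and case counts,
    with the search signal shifted `lag` weeks earlier."""
    if lag:
        signal, cases = signal[:-lag], cases[lag:]
    return np.corrcoef(signal, cases)[0, 1]

# If searches really do lead cases, the lagged correlation should be
# at least as strong as the same-week one.
for lag in (0, 1, 2):
    print(f"lag {lag} week(s): r = {lagged_correlation(search_volume, reported_cases, lag):.2f}")
```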
There is ongoing work in this area, with the EU’s Digital Services Act and AI Act, and Ireland’s own Online Safety Code – though the Irish Council for Civil Liberties has criticised the latter for not addressing “toxic” algorithms. And Megan says that to have real autonomy in a world guided by algorithms, we need to understand them – so we hope this episode helps with that effort.
And for her part, Megan is furthering the development of human-centred and explainable AI that empowers users. She’s also looking to set up an Irish chapter of Encode Justice and a youth advisory council for AI 2030 – if you have any interest in supporting this, connect with her on LinkedIn (or reach out to us for an introduction). 🤝
CONTENT NOTE: This episode includes some discussion of eating disorders. If you are affected by or seeking support on these issues, check out these resources from Bodywhys.
Want to hear more?
If you enjoyed this episode of For Tech’s Sake, there are others you may want to check out as good companions to some of the themes discussed here:
⚖️ Ethical AI with Ireland’s AI ambassador, Dr Patricia Scanlon
🤖 Bias in AI with UN advisor Dr Abeba Birhane
🔍 Search engines with Prof Gareth Jones
🔦 Dark patterns with Prof Owen Conlan
Next week, we’ll have a special bonus episode exclusively for HeadStuff+ Community members, so if you really want to hear from us every week, you’ll have to become a member here.