Episodes
Monday Dec 20, 2021
079 - Guest: John Zerilli, Cognitive Science Philosopher, part 2
This and all episodes at: https://aiandyou.net/ .
What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of The Adaptable Mind: John Zerilli is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge. We talk about his 2021 book A Citizen's Guide to Artificial Intelligence, which spells out the categories we should pay attention to. In part 2, we'll be talking about bias, how education should address AI, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Dec 13, 2021
078 - Guest: John Zerilli, Cognitive Science Philosopher, part 1
This and all episodes at: https://aiandyou.net/ .
What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of The Adaptable Mind: John Zerilli is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge. We talk about his 2021 book A Citizen's Guide to Artificial Intelligence, which spells out the categories we should pay attention to. In part 1, we'll be talking about what he wants to achieve with the book and how some of those issues, privacy in particular, impact the average person today. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Dec 06, 2021
077 - Guest: Alexandra Mousavizadeh, part 2
This and all episodes at: https://aiandyou.net/ .
How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The Global AI Index, the Responsibility100 Index, and the Global Disinformation Index. As Director of the Tortoise Intelligence team, she provides insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals. In part 2, we talk about China's race for AI gold and the Global Disinformation Index. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Nov 29, 2021
076 - Guest: Alexandra Mousavizadeh, part 1
This and all episodes at: https://aiandyou.net/ .
How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The Global AI Index, the Responsibility100 Index, and the Global Disinformation Index. As Director of the Tortoise Intelligence team, she provides insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals. In part 1, we talk about the methodologies behind the indexes and their relationship to ESG, inequality, and the labor markets. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Nov 22, 2021
075 - Guest: Michael Hind, IBM AI Explainability Expert, part 2
This and all episodes at: https://aiandyou.net/ .
Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is critical to the successful future of AI in critical applications. Michael Hind is a Distinguished Research Staff Member at IBM Research. In part 2, we talk about the Teaching Explainable Decisions project, some of Michael's experience with Watson, the difference between transparency and explainability, and a lot more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Nov 15, 2021
074 - Guest: Michael Hind, IBM AI Explainability Expert, part 1
This and all episodes at: https://aiandyou.net/ .
Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is critical to the successful future of AI in critical applications. Before I talked with Michael Hind, my usual remark on the subject was, "If you want a demonstration of the ultimate futility of explainability, try asking your kid how the vase got broken." But after this episode I've learned more than I thought possible about how we can teach AI what an explanation is and how to produce one. Michael is a Distinguished Research Staff Member at IBM Research. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Nov 08, 2021
073 - Guest: Kordel France, AI Engineer and CEO, part 2
This and all episodes at: https://aiandyou.net/ .
From AI in farming to AI in defense, Kordel France has done it all, as an AI engineer and now founder and CEO of Seekar Technologies, which is building the first clinical AI tool used to advise neuropsychologists in diagnosing mental disorders. There are a lot of surprises in this episode as we talk about explainability, artificial general intelligence, and the fragility of image recognition AI, among other things. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Nov 01, 2021
072 - Guest: Kordel France, AI Engineer and CEO, part 1
This and all episodes at: https://aiandyou.net/ .
From AI in farming to AI in defense, Kordel France has done it all, as an AI engineer and now founder and CEO of Seekar Technologies, which contributed AI to help ease demand on medical staffing and screen for COVID-19 faster. There are a lot of surprises in this episode (AI in hunting?). All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.