Episodes

Monday Nov 30, 2020
024 - The Biggest Question About AGI
This and all episodes at: https://aiandyou.net/ .
We tackle the most important question about Artificial General Intelligence: when will it happen? Everyone wants to know, but no one has a clue; estimates range from 5 to 500 years. So why talk about it? I describe how this question came up in a presentation and what it means to me and to all of us. We might not be able to get a date, but we'll explore why it's such a hard question and see what useful questions we can draw out of it. All that and our usual look at today's headlines in AI. Transcript and URLs referenced at HumanCusp Blog.

Monday Nov 23, 2020
023 - Guest: Pamela McCorduck, AI Historian, part 2
This and all episodes at: https://aiandyou.net/ .
Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from. In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow's "Two Cultures", and the interaction between AI and the humanities, along with more tales of the field's founding fathers. All that and our usual look at today's headlines in AI. Transcript and URLs referenced at HumanCusp Blog.

Monday Nov 16, 2020
022 - Guest: Pamela McCorduck, AI Historian
This and all episodes at: https://aiandyou.net/ .
Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from. In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and how thinking about intelligence has changed since the early days. All that and our usual look at today's headlines in AI. Transcript and URLs referenced at HumanCusp Blog.

Monday Nov 09, 2020
021 - Guest: David Wood, Futurist, part 2
This and all episodes at: https://aiandyou.net/ .
How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker, and writer. As Chair of the London Futurists, he has hosted over 200 public discussions on technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance. In the second half of our interview, we talk about OpenAI, economic fairness through the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future. Transcript and URLs referenced at HumanCusp Blog.

Monday Nov 02, 2020
020 - Guest: David Wood, Futurist
This and all episodes at: https://aiandyou.net/ .
How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker, and writer. As Chair of the London Futurists, he has hosted over 200 public discussions on technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance. In part 1 of our interview, we talk about David's singularitarian philosophy, the evolution and impact of Deep Learning, and the SingularityNET infrastructure for AI interoperation. Transcript and URLs referenced at HumanCusp Blog.

Monday Oct 26, 2020
019 - Guest: Tony Czarnecki, Futurist, part 2
This and all episodes at: http://aiandyou.net/ .
Tony Czarnecki is a futurist and a Managing Partner of Sustensis, London, a think tank providing inspirations for humanity's transition to coexistence with superintelligence: the final form of an ever faster, more intelligent, self-learning, and perhaps even conscious Artificial Intelligence. Tony is the author of several books on superintelligence. In this concluding part of our interview, we talk about the latest book in his Posthumans series, "Becoming a Butterfly," and about European and global economic structures for sustaining technological evolution. Transcript and URLs referenced at HumanCusp Blog.

Monday Oct 19, 2020
018 - Guest: Tony Czarnecki, Futurist
This and all episodes at: http://aiandyou.net/ .
Tony Czarnecki is a futurist and a Managing Partner of Sustensis, London, a think tank providing inspirations for humanity's transition to coexistence with superintelligence: the final form of an ever faster, more intelligent, self-learning, and perhaps even conscious Artificial Intelligence. Tony is the author of several books on superintelligence. In this first part of our interview, we talk about the latest book in his Posthumans series, "Becoming a Butterfly," and discuss the pandemic and what comes next. Transcript and URLs referenced at HumanCusp Blog.

Monday Oct 12, 2020
017 - Guest: Roman Yampolskiy, Professor of AI Safety, part 2
This and all episodes at: http://aiandyou.net/ .
What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab, and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over. In this second part of our interview, we talk about his latest paper: a comprehensive analysis of the Control Problem, the central issue of AI safety: how do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve. Transcript and URLs referenced at HumanCusp Blog.