Episodes

Monday Mar 18, 2024
196 - Guest: Roman Yampolskiy, AI Safety Professor, part 2
This and all episodes at: https://aiandyou.net/ .
Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems, and it's those rigorous methods we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman's coffee cup poses to humanity. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 11, 2024
195 - Guest: Roman Yampolskiy, AI Safety Professor, part 1
This and all episodes at: https://aiandyou.net/ .
Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems, and it's those rigorous methods we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about why this work matters to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, the urgency of the problems, and drill down into why today's AI is not safe and why it's getting worse. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 04, 2024
194 - Guest: Rachel St. Clair, AGI Scientist, part 2
This and all episodes at: https://aiandyou.net/ .
Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI. That was last year. Now AGI is an explicit goal of many enterprises, Simuli notable among them. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” In this concluding part, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 26, 2024
193 - Guest: Rachel St. Clair, AGI Scientist, part 1
This and all episodes at: https://aiandyou.net/ .
Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI. That was last year. Now AGI is an explicit goal of many enterprises, Simuli notable among them. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” In part 1 we talk about markers for AGI, distinctions between AGI and narrow artificial intelligence, self-driving cars, robotics and embodiment, and… disco balls. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 19, 2024
192 - Re-evaluating Existential Risk From AI
This and all episodes at: https://aiandyou.net/ .
Since I published my first book on AI in 2017, the public conversation and perception of the existential risk - the risk to our existence - from AI have evolved and broadened. I talk about how that conversation has changed, from Nick Bostrom's Superintelligence and the "hard take-off" (and what that means), through to the cryptic signatures now tossed about in the discourse. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 12, 2024
191 - Guest: Frank Sauer, AI arms control researcher, part 2
This and all episodes at: https://aiandyou.net/ .
Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't get past thinking of Skynet. Here to help us pick through those arguments, calling from Munich, is my guest Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control, and he also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In part two we talk about the psychology of combat decisions, AI in strategic defense, and nuclear conflict destabilization. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 05, 2024
190 - Guest: Frank Sauer, AI arms control researcher, part 1
This and all episodes at: https://aiandyou.net/ .
Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't get past thinking of Skynet. Here to help us pick through those arguments, calling from Munich, is my guest Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control, and he also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In this first part we talk about the ethics of autonomy in weapons systems and compare human and machine decision making in combat. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Monday Jan 29, 2024
189 - Guest: Peter Norvig, AI professor/author/researcher, part 2
This and all episodes at: https://aiandyou.net/ .
Literally writing the book on AI is my guest Peter Norvig, co-author of the standard text Artificial Intelligence: A Modern Approach, used at 1,500+ universities in 135 countries. Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at USC, Stanford, and Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006. He's also the author of the world's longest palindromic sentence. In this second half of the interview, we talk about how AI's rise to prominence among the general public has changed how he communicates about it, his feelings about the calls for a slowdown in model development, his thinking about general intelligence in large language models, and AI winters. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.