Episodes
Monday Nov 13, 2023
178 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 1
This and all episodes at: https://aiandyou.net/ .
The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk, in Cambridge, England, and the Future of Life Institute, in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the Machine Intelligence Research Institute. In this first part, we talk about the problems with current #AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Nov 06, 2023
177 - Guest: Bart Selman, Professor for responsible AI use, part 2
This and all episodes at: https://aiandyou.net/ .
Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence, a fellow of the American Association for the Advancement of Science, and a contributing scientist at the two Asilomar conferences on responsible AI development. In the conclusion of our interview we talk about self-driving cars, the capability of large language models to synthesize knowledge across many human domains, Richard Feynman, our understanding of language, Bertrand Russell, AIs as co-authors on research papers, and where Bart places us on a scale of artificial general intelligence ability. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Oct 30, 2023
176 - Guest: Bart Selman, Professor for responsible AI use, part 1
This and all episodes at: https://aiandyou.net/ .
Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence and a fellow of the American Association for the Advancement of Science. In the first part of the interview we talk about common sense, artificial general intelligence, computer vision, #LLMs and their impact on computer programming, and how much they might really understand. Bart will also give his take on how good they are, how to understand how they’re working, and his experiments in getting ChatGPT to understand geometry. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Oct 23, 2023
175 - AI and Education
This and all episodes at: https://aiandyou.net/ .
Education was the first area to see a dramatic impact from #ChatGPT, which crushed term papers and sent teachers scurrying for new ways to assess their students. Now that we've had nearly a year to evaluate the impact of #AI on #education, I look at how assessments and teaching have been affected and how schools might adapt to the incredible opportunities of generative AI. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Oct 16, 2023
174 - AI and Jobs
This and all episodes at: https://aiandyou.net/ .
What effect will #AI, especially large language models like #ChatGPT, have on jobs? The conversation is intense and fractious. I attempt to shed some light on those effects, and discuss some of the differing predictions and proposals for distributing the dividend that deploying AI creates by reducing costs and expanding markets. How will that capital get to where it is needed? All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Oct 09, 2023
173 - The UK AI Summit, Reflections
This and all episodes at: https://aiandyou.net/ .
The United Kingdom government is holding a Summit on Artificial Intelligence at the storied Bletchley Park on November 1 and 2. Luminaries of #AI will be helping government authorities understand the issues that could require regulation or other government intervention. Our invitation to attend may have been lost in the post. But I do have reflections on the AI risks that will (or should) be presented at this event, along with some analysis and thought-provoking questions prompted by excellent events on these topics, hosted by the London Futurists and MKAI, that I recently attended. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Oct 02, 2023
172 - Guest: Matthew Lungren, Chief Medical Information Officer, part 2
This and all episodes at: https://aiandyou.net/ .
Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at Nuance Communications, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains. In this interview conclusion, we talk about the details of how AI, including large language models, can be an effective part of a radiologist's workflow, how decisions about integrating AI into medicine can be made, and where we might be going with it in the future. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Monday Sep 25, 2023
171 - Guest: Matthew Lungren, Chief Medical Information Officer, part 1
This and all episodes at: https://aiandyou.net/ .
Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at Nuance Communications, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains. The basis for Hinton's assertion was that AI can be trained to find tumors in CT scans, for instance; we know how good AI is at image analysis when it has lots of labeled data to train on, and CT scans provide exactly that. We get to find out what's real about AI in #medicine in this episode. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.