Episodes

27 minutes ago
This and all episodes at: https://aiandyou.net/ .
Have you ever thought about your thoughts? About what or how you’re thinking? It gets real meta real fast, doesn’t it? That’s called metacognition, and humans and certain other creatures do it. But what about AI? Can it think about thinking? Here to help us understand this whole thing is artificial metacognition researcher Ricky Sethi. He is Professor of Computer Science at Fitchburg State University, Director of Research for the Madsci Network, and an Adjunct Professor at Worcester Polytechnic Institute. His research spans fact-checking misinformation, virtual communities, and artificial metacognition, where he focuses on designing GenAI systems that can monitor, evaluate, and regulate their own reasoning. Is that cool or what? Ricky has a bachelor’s in neurobiology and physics, an M.S. in physics/information systems, and a PhD in AI from UC Riverside. He has over 50 scholarly publications, and his work has been covered in outlets such as the Chicago Tribune, The Conversation, and Communications of the ACM. Recently, he has introduced the Metacognitive State Vector framework for quantifying key cognitive signals in ensembles of large language models. We talk about how this spans computer science, neuroscience, and psychology; System One and System Two thinking come up again, with a beautiful explanation. We also talk about testing and measuring metacognition in humans and AIs – and what about dolphins? All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 16, 2026
300 - Guest: Mark Peres, Civic Entrepreneur, part 2
This and all episodes at: https://aiandyou.net/ .
Because AI touches our lives down to our core where our emotions and subconscious reside, we need to be touched with the important lessons that our fellow humans wish to communicate about AI through vehicles like art, poetry, and, in the case of today’s guest, fiction. Mark Peres is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson & Wales University. He’s just published The Accord, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla. As much as that sounds like a description of any number of sensationalist and shallow works that you and I could name, this is not in that category. I found his book remarkable for the level of maturity it granted the reader and the no-holds-barred courage with which it tackled issues of the identity of a future artificial general intelligence - which may not be so far in the future any more. We talk about why the AI character of Lyla has a true sense of identity and mortality, whether control over advanced AI is possible, principles for human–AI coexistence, what responsible use, transparency, and “cognitive autonomy” look like for today’s university students, what it means to “humanize” AI before trying to regulate it, and how to take responsibility for our future with AI. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 09, 2026
299 - Guest: Mark Peres, Civic Entrepreneur, part 1
This and all episodes at: https://aiandyou.net/ .
We shouldn’t learn the important lessons of AI through just exposition, through just descriptions, explanations, lessons, and textbooks. But because AI touches our lives down to our core where our emotions and subconscious reside, we need to be similarly touched with the important lessons that our fellow humans wish to communicate about AI through art, poetry, and, in the case of today’s guest, fiction. Mark Peres is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson & Wales University. He’s just published The Accord, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla. I will say right now that whatever stereotypes that description is evoking in you – and god knows there are lots to choose from – I want you to throw them out. And you’ll soon see why as we talk about the big questions the story raises about consciousness, responsibility, and what we owe intelligent machines, the clash between universities, corporations, and government forces trying to control new technology, the growing role of AI as companion and confidant in everyday life, and the real-world headlines and classroom debates that inspired the book. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 02, 2026
298 - Guest: Holly Elmore, AI Pause Advocate, part 2
This and all episodes at: https://aiandyou.net/ .
In 2023 a global movement called Pause AI started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, Holly Elmore. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the Pause AI US group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard. We conclude the interview by talking about what would actually flip public opinion on AI safety, specific AI bills and regulations, why some leaders warn about risk while accelerating anyway, whether and when it would be safe to unpause, and how you can get involved. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 23, 2026
297 - Guest: Holly Elmore, AI Pause Advocate, part 1
This and all episodes at: https://aiandyou.net/ .
In 2023 a global movement called Pause AI started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, Holly Elmore. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the Pause AI US group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard. We talk about what the Pause movement stands for, overlaps with animal welfare strategies, why pausing is an effective aim and why we need it, the pros and cons of limiting AI training by compute metrics, and comparing AI safety to the airline industry. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 16, 2026
296 - Guest: Maya Ackerman, Creative AI Pioneer, part 2
This and all episodes at: https://aiandyou.net/ .
One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. I continue talking about this very sensitive subject with Maya Ackerman, author of the new book Creative Machines: AI, Art, and Us, which tackles it head on, full of emotion, vulnerability, and poetry. Maya is the CEO and co-founder of Wave AI, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter. We talk about experiments in machine creativity, the distinction between creative processes and creative products and the role of the observer in the creative experience, how bias against AI shows up, and how AI that’s constructed around compassion and ethical stewardship could support deeper human flourishing in the next few years. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 09, 2026
295 - Guest: Maya Ackerman, Creative AI Pioneer, part 1
This and all episodes at: https://aiandyou.net/ .
One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. Here to talk about this very sensitive subject is Maya Ackerman, author of the new book Creative Machines: AI, Art, and Us, which tackles it head on, full of emotion, vulnerability, and poetry. Maya is the CEO and co-founder of Wave AI, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter. We talk about how Maya’s interdisciplinary backgrounds of machine learning and computational creativity converged in her book, what Maya calls “humble creative machines” (AI collaborators) and how they can uplift us, her concept of a co-creative spectrum where humans are served by AI assisting us in being more creative, and – opera singing. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Feb 02, 2026
294 - Special: The Future of Work
This and all episodes at: https://aiandyou.net/ .
The fear, uncertainty, and doubt around the future of work have reached epidemic proportions. We'll attempt to shed some light and provide some relief with data, questions, and informed speculation on the topic. How is the generation of content going to evolve, and what's awaiting us in robotics? Who will make the decisions about the future workplace, and what will happen to creatives in marketing and other fields? All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.
