Episodes

17 minutes ago
307 - Guest: Jeremy Ney, Economic Policymaker, part 1
This and all episodes at: https://aiandyou.net/ .
What is AI doing, and going to do, to job opportunities? What does it mean to have enough, who has too little, and what’s fair? One answer is to look at inequality: how different are the financial circumstances of one set of people compared to another? Here to help us understand that is Jeremy Ney, Adjunct Professor at Columbia Business School and author of an upcoming book about opportunity and inequality in America. He writes the American Inequality newsletter and was previously a macroeconomic policymaker at the Federal Reserve. His work on regional divides and economic mobility has appeared in TIME Magazine, Business Insider, BBC, NPR, PBS, and on the TED stage. We talk about how AI affects inequality in job availability, particularly for recent college grads, and Jeremy has crunched a lot of current data about that. Is the answer to become a plumber or electrician? Where is the wealth dividend from automation going? We talk about the difference between low-wage and low-skill work, the Gini coefficient, socioeconomic mobility, the cost of higher education vs. the college wage premium, and how schools and AI might democratize that dilemma. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Apr 27, 2026
306 - Guest: Rob May, Intellectual Troublemaker, part 2
This and all episodes at: https://aiandyou.net/ .
I am talking with Rob May, Co-Founder and CEO of NeuroMetric AI, which optimizes inference for multi-model AI systems; he returns to the show after five years. We're making sense of how rapid advances in AI—especially the shift from training to real-world inference—are reshaping work, business strategy, and the roles of humans. These advances accelerate productivity while increasing cognitive demands and redefining where human judgment still matters. A serial entrepreneur, Rob has founded and scaled multiple successful companies including Backupify, Talla, Dianthus, and BrandGuard. He’s also an active angel investor with over 100 portfolio companies and brings firsthand insight into the evolution of AI entrepreneurship. His expertise lies at the intersection of AI research, applied inference, and startup strategy. He is also the co-host of the AI in NYC Show, where he discusses the latest breakthroughs in AI infrastructure and reasoning with top founders, investors, and researchers. We conclude the interview by talking about AI processing on local devices, natural limits to automation, the shift in competitive advantage from execution to creativity, and the restructuring of the economy and the workforce. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Apr 20, 2026
305 - Guest: Rob May, Intellectual Troublemaker, part 1
This and all episodes at: https://aiandyou.net/ .
Rapid advances in AI—especially the shift from training to real-world inference—are reshaping work, business strategy, and the roles of humans. They accelerate productivity while increasing cognitive demands and redefining where human judgment still matters. Here to help us get a handle on that and returning to the show after five years is Rob May, Co-Founder and CEO of NeuroMetric AI, a pioneering company optimizing inference for multi-model AI systems. A serial entrepreneur, Rob has founded and scaled multiple successful companies including Backupify, Talla, Dianthus, and BrandGuard. He’s also an active angel investor with over 100 portfolio companies and brings firsthand insight into the evolution of AI entrepreneurship. His expertise lies at the intersection of AI research, applied inference, and startup strategy. He is also the co-host of the AI in NYC Show, where he discusses the latest breakthroughs in AI infrastructure and reasoning with top founders, investors, and researchers. We talk about how the bottleneck in business is shifting from labor to cognition, the paradoxical way AI is increasing workloads and cognitive strain, the rise of inference as the core AI economy, and how infrastructure constraints are pushing towards a more distributed AI ecosystem. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Apr 13, 2026
304 - Guest: Virginia Dignum, Responsible AI Expert, part 2
This and all episodes at: https://aiandyou.net/ .
AI abounds with paradoxes. I have been talking about those with Virginia Dignum, professor of responsible artificial intelligence at Umeå University, about her just-released book, The AI Paradox: How to Make Sense of a Complex Future; links are in the show notes and transcript. Virginia is an internationally recognized expert in AI ethics and policy who has led initiatives for the European Commission, the United Nations, the World Economic Forum, UNESCO, and UNICEF, among others, and is the author of the 2019 book Responsible Artificial Intelligence. We conclude the interview by talking about some paradoxes in detail, like Moravec’s Paradox, questions about using AI in legal decisions, AI literacy, and the recent battle between Anthropic and the US Department of Defense, and OpenClaw. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Apr 06, 2026
303 - Guest: Virginia Dignum, Responsible AI Expert, part 1
This and all episodes at: https://aiandyou.net/ .
AI runs not just broadly across human interests, but deeply. And at every level it seems to create paradoxes. How can it be useful without power? How can it be safe with power? We want it to take over jobs, but still leave us with meaning and purpose. We deny it the possibility of becoming sentient, but how can we trust something that lacks compassion? My guest today has written a whole book on these paradoxes. Calling from Sweden is Virginia Dignum, professor of responsible artificial intelligence at Umeå University, and the author of the 2019 book Responsible Artificial Intelligence. She’s an internationally recognized expert in AI ethics and policy who has led initiatives for the European Commission, the United Nations, the World Economic Forum, UNESCO, and UNICEF, among others. And we are talking about her new book, The AI Paradox: How to Make Sense of a Complex Future. We talk about the issues of ethical choices and power that AI raises, where the difference between humans and LLMs matters, Virginia’s path to the work she’s doing and her book, and our trend toward techno-solutionism and what it means for us. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 30, 2026
302 - Guest: Ricky Sethi, Artificial Metacognition Researcher, part 2
This and all episodes at: https://aiandyou.net/ .
Have you ever thought about your thoughts? About what or how you’re thinking? It gets real meta real fast, doesn’t it? That’s called metacognition, and humans and certain other creatures do it. But what about AI? We’re coming back to the interview with Ricky Sethi, Professor of Computer Science at Fitchburg State University, and researcher into artificial metacognition, or whether and how machines can think about thinking. Ricky’s research spans fact-checking misinformation, virtual communities, and artificial metacognition, where he focuses on designing GenAI systems that can monitor, evaluate, and regulate their own reasoning. He is Director of Research for the Madsci Network, and an Adjunct Professor at Worcester Polytechnic Institute. He has over 50 scholarly publications, and his work has been covered in outlets such as the Chicago Tribune, The Conversation, and Communications of the ACM. We conclude the interview by talking about his research into disinformation, measuring the emotions associated with it, how clusters of models in different roles could assess AIs for lying and implement AI safety, and how AI is impacting job opportunities in research. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 23, 2026
301 - Guest: Ricky Sethi, Artificial Metacognition Researcher, part 1
This and all episodes at: https://aiandyou.net/ .
Have you ever thought about your thoughts? About what or how you’re thinking? It gets real meta real fast, doesn’t it? That’s called metacognition, and humans and certain other creatures do it. But what about AI? Can it think about thinking? Here to help us understand this whole thing is artificial metacognition researcher Ricky Sethi. He is Professor of Computer Science at Fitchburg State University, Director of Research for the Madsci Network, and an Adjunct Professor at Worcester Polytechnic Institute. His research spans fact-checking misinformation, virtual communities, and artificial metacognition, where he focuses on designing GenAI systems that can monitor, evaluate, and regulate their own reasoning. Is that cool or what? Ricky has a bachelor’s in neurobiology and physics, an M.S. in physics/information systems, and a PhD in AI from UC Riverside. He has over 50 scholarly publications, and his work has been covered in outlets such as the Chicago Tribune, The Conversation, and Communications of the ACM. Recently, he introduced the Metacognitive State Vector framework for quantifying key cognitive signals in ensembles of large language models. We talk about how this work spans computer science, neuroscience, and psychology; System One and System Two thinking come up again, with a beautiful explanation. We also talk about testing and measuring metacognition in humans and AIs – and what about dolphins? All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.

Monday Mar 16, 2026
300 - Guest: Mark Peres, Civic Entrepreneur, part 2
This and all episodes at: https://aiandyou.net/ .
Because AI touches our lives down to our core, where our emotions and subconscious reside, we need to be reached with the important lessons that our fellow humans wish to communicate about AI through vehicles like art, poetry, and, in the case of today’s guest, fiction. Mark Peres is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson & Wales University. He’s just published The Accord, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla. As much as that sounds like a description of any number of sensationalist and shallow works that you and I could name, this is not in that category. I found his book remarkable for the level of maturity it granted the reader and the no-holds-barred courage with which it tackled issues of the identity of a future artificial general intelligence - which may not be so far in the future any more. We talk about why the AI character of Lyla has a true sense of identity and mortality, whether control over advanced AI is possible, principles for human–AI coexistence, what responsible use, transparency, and “cognitive autonomy” look like for today’s university students, what it means to “humanize” AI before trying to regulate it, and how to take responsibility for our future with AI. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.
