<?xml version="1.0" encoding="UTF-8"?><!-- generator="podbean/5.5" -->
<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
     xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"
     xmlns:spotify="http://www.spotify.com/ns/rss"
     xmlns:podcast="https://podcastindex.org/namespace/1.0"
    xmlns:media="http://search.yahoo.com/mrss/">

<channel>
    <title>Artificial Intelligence and You</title>
    <atom:link href="https://aiandyou.net/feed.xml" rel="self" type="application/rss+xml"/>
    <link>https://aiandyou.net</link>
    <description>What is AI? How will it affect your life, your work, and your world?</description>
    <pubDate>Mon, 09 Mar 2026 10:00:00 -0700</pubDate>
    <generator>https://podbean.com/?v=5.5</generator>
    <language>en</language>
    <copyright>Copyright 2020. All rights reserved.</copyright>
    <category>Technology</category>
    <ttl>1440</ttl>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>What is AI? How will it affect your life, your work, and your world?</itunes:summary>
    <itunes:author>aiandyou</itunes:author>
    <itunes:category text="Technology" />
    <itunes:category text="Education" />
    <itunes:owner>
        <itunes:name>aiandyou</itunes:name>
    </itunes:owner>
    <itunes:block>No</itunes:block>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://pbcdn1.podbean.com/imglogo/image-logo/8283720/Podcast_Logo_new_edj6ca.jpg" />
    <image>
        <url>https://pbcdn1.podbean.com/imglogo/image-logo/8283720/Podcast_Logo_new_edj6ca.jpg</url>
        <title>Artificial Intelligence and You</title>
        <link>https://aiandyou.net</link>
        <width>144</width>
        <height>144</height>
    </image>
    <item>
        <title>299 - Guest: Mark Peres, Civic Entrepreneur, part 1</title>
        <itunes:title>299 - Guest: Mark Peres, Civic Entrepreneur, part 1</itunes:title>
        <link>https://aiandyou.net/e/299-guest-mark-peres-civic-entrepreneur-part-1/</link>
        <comments>https://aiandyou.net/e/299-guest-mark-peres-civic-entrepreneur-part-1/#comments</comments>
        <pubDate>Mon, 09 Mar 2026 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d15111a6-eb31-3216-ae6c-38e2743ca9c1</guid>
        <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>We shouldn’t learn the important lessons of AI through just exposition, through just descriptions, explanations, lessons, and textbooks. But because AI touches our lives down to our core where our emotions and subconscious reside, we need to be similarly touched with the important lessons that our fellow humans wish to communicate about AI through art, poetry, and, in the case of today’s guest, fiction. <a href='https://www.markperes.com/'>Mark Peres</a> is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson &amp; Wales University. He’s just published <a href='https://www.amazon.com/Accord-Novel-Mark-Peres/dp/B0FQR3MN66/'>The Accord</a>, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla.</p>
<p>I will say right now that whatever stereotypes that description is evoking in you – and god knows there are lots to choose from – I want you to throw them out. And you’ll soon see why as we talk about the big questions the story raises about consciousness, responsibility, and what we owe intelligent machines, the clash between universities, corporations, and government forces trying to control new technology, the growing role of AI as companion and confidant in everyday life, and the real-world headlines and classroom debates that inspired the book.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/03/transcript-299-5531.pdf'>HumanCusp Blog</a>.</p>
]]></description>
        <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>We shouldn’t learn the important lessons of AI through just exposition, through just descriptions, explanations, lessons, and textbooks. But because AI touches our lives down to our core where our emotions and subconscious reside, we need to be similarly touched with the important lessons that our fellow humans wish to communicate about AI through art, poetry, and, in the case of today’s guest, fiction. <a href='https://www.markperes.com/'>Mark Peres</a> is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson &amp; Wales University. He’s just published <a href='https://www.amazon.com/Accord-Novel-Mark-Peres/dp/B0FQR3MN66/'><em>The Accord</em></a>, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla.</p>
<p>I will say right now that whatever stereotypes that description is evoking in you – and god knows there are lots to choose from – I want you to throw them out. And you’ll soon see why as we talk about the big questions the story raises about consciousness, responsibility, and what we owe intelligent machines, the clash between universities, corporations, and government forces trying to control new technology, the growing role of AI as companion and confidant in everyday life, and the real-world headlines and classroom debates that inspired the book.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/03/transcript-299-5531.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/92nke2hqs3ywyeay/299_AI-and-You.mp3" length="28167342" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
We shouldn’t learn the important lessons of AI through just exposition, through just descriptions, explanations, lessons, and textbooks. But because AI touches our lives down to our core where our emotions and subconscious reside, we need to be similarly touched with the important lessons that our fellow humans wish to communicate about AI through art, poetry, and, in the case of today’s guest, fiction. Mark Peres is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson &amp; Wales University. He’s just published The Accord, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla.
I will say right now that whatever stereotypes that description is evoking in you – and god knows there are lots to choose from – I want you to throw them out. And you’ll soon see why as we talk about the big questions the story raises about consciousness, responsibility, and what we owe intelligent machines, the clash between universities, corporations, and government forces trying to control new technology, the growing role of AI as companion and confidant in everyday life, and the real-world headlines and classroom debates that inspired the book.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1718</itunes:duration>
        <itunes:episode>300</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
    </item>
    <item>
        <title>298 - Guest: Holly Elmore, AI Pause Advocate, part 2</title>
        <itunes:title>298 - Guest: Holly Elmore, AI Pause Advocate, part 2</itunes:title>
        <link>https://aiandyou.net/e/298-guest-holly-elmore-ai-pause-advocate-part-2/</link>
        <comments>https://aiandyou.net/e/298-guest-holly-elmore-ai-pause-advocate-part-2/#comments</comments>
        <pubDate>Mon, 02 Mar 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/6732ab20-5b7d-347c-8af2-9e9f19ecc4ba</guid>
        <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>In 2023 a global movement called <a href='https://pauseai.info/'>Pause AI</a> started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, <a href='https://hollyelmore.substack.com/'>Holly Elmore</a>. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the <a href='https://www.pauseai-us.org/'>Pause AI US</a> group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard.</p>
<p>We conclude the interview by talking about what would actually flip public opinion on AI safety, specific AI bills and regulations, why some leaders warn about risk while accelerating anyway, whether and when it would be safe to unpause, and how you can get involved. </p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-297-3032.pdf'>HumanCusp Blog</a>.</p>
]]></description>
        <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>In 2023 a global movement called <a href='https://pauseai.info/'>Pause AI</a> started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, <a href='https://hollyelmore.substack.com/'>Holly Elmore</a>. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the <a href='https://www.pauseai-us.org/'>Pause AI US</a> group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard.</p>
<p>We conclude the interview by talking about what would actually flip public opinion on AI safety, specific AI bills and regulations, why some leaders warn about risk while accelerating anyway, whether and when it would be safe to unpause, and how you can get involved. </p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-297-3032.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/ectpthcpc7wz5zi5/298_AI-and-You.mp3" length="36599250" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
In 2023 a global movement called Pause AI started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, Holly Elmore. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the Pause AI US group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard.
We conclude the interview by talking about what would actually flip public opinion on AI safety, specific AI bills and regulations, why some leaders warn about risk while accelerating anyway, whether and when it would be safe to unpause, and how you can get involved. 
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2245</itunes:duration>
        <itunes:episode>299</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
    </item>
    <item>
        <title>297 - Guest: Holly Elmore, AI Pause Advocate, part 1</title>
        <itunes:title>297 - Guest: Holly Elmore, AI Pause Advocate, part 1</itunes:title>
        <link>https://aiandyou.net/e/297-guest-holly-elmore-ai-pause-advocate-part-1/</link>
        <comments>https://aiandyou.net/e/297-guest-holly-elmore-ai-pause-advocate-part-1/#comments</comments>
        <pubDate>Mon, 23 Feb 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/4d997e0b-37d0-3f93-9a31-85275ecd999f</guid>
        <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>In 2023 a global movement called <a href='https://pauseai.info/'>Pause AI</a> started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, <a href='https://hollyelmore.substack.com/'>Holly Elmore</a>. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the <a href='https://www.pauseai-us.org/'>Pause AI US</a> group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard.</p>
<p>We talk about what the Pause movement stands for, overlaps with animal welfare strategies, why pausing is an effective aim and why we need it, the pros and cons of limiting AI training by compute metrics, and comparing AI safety to the airline industry.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-297-3032.pdf'>HumanCusp Blog</a>.</p>
]]></description>
        <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>In 2023 a global movement called <a href='https://pauseai.info/'>Pause AI</a> started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, <a href='https://hollyelmore.substack.com/'>Holly Elmore</a>. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the <a href='https://www.pauseai-us.org/'>Pause AI US</a> group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard.</p>
<p>We talk about what the Pause movement stands for, overlaps with animal welfare strategies, why pausing is an effective aim and why we need it, the pros and cons of limiting AI training by compute metrics, and comparing AI safety to the airline industry.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-297-3032.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/eumxq8a9dj4imkdb/297_AI-and-You.mp3" length="37498280" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
In 2023 a global movement called Pause AI started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, Holly Elmore. Their website says “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the Pause AI US group, and has organized protests in their name. She was formerly an evolutionary biologist, with a PhD from Harvard.
We talk about what the Pause movement stands for, overlaps with animal welfare strategies, why pausing is an effective aim and why we need it, the pros and cons of limiting AI training by compute metrics, and comparing AI safety to the airline industry.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2302</itunes:duration>
        <itunes:episode>298</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
    </item>
    <item>
        <title>296 - Guest: Maya Ackerman, Creative AI Pioneer, part 2</title>
        <itunes:title>296 - Guest: Maya Ackerman, Creative AI Pioneer, part 2</itunes:title>
        <link>https://aiandyou.net/e/296-guest-maya-ackerman-creative-ai-pioneer-part-2/</link>
        <comments>https://aiandyou.net/e/296-guest-maya-ackerman-creative-ai-pioneer-part-2/#comments</comments>
        <pubDate>Mon, 16 Feb 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ab34cd14-b0eb-3b3b-86e8-4ffd51cb8a3b</guid>
        <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. I continue talking about this very sensitive subject with <a href='https://maya-ackerman.com/'>Maya Ackerman</a>, author of the new book <a href='https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267'>Creative Machines: AI, Art, and Us</a>, which tackles it head on, full of emotion, vulnerability, and poetry. </p>
<p>Maya is the CEO and co-founder of <a href='https://wave-ai.net'>Wave AI</a>, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.</p>
<p>We talk about experiments in machine creativity, the distinction between creative processes and creative products and the role of the observer in the creative experience, how bias against AI shows up, and how AI that’s constructed around compassion and ethical stewardship could support deeper human flourishing in the next few years.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-296-0475.pdf'>HumanCusp Blog</a>.</p>
]]></description>
        <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. I continue talking about this very sensitive subject with <a href='https://maya-ackerman.com/'>Maya Ackerman</a>, author of the new book <a href='https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267'><em>Creative Machines: AI, Art, and Us</em></a>, which tackles it head on, full of emotion, vulnerability, and poetry. </p>
<p>Maya is the CEO and co-founder of <a href='https://wave-ai.net'>Wave AI</a>, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.</p>
<p>We talk about experiments in machine creativity, the distinction between creative processes and creative products and the role of the observer in the creative experience, how bias against AI shows up, and how AI that’s constructed around compassion and ethical stewardship could support deeper human flourishing in the next few years.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-296-0475.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/4zgpmjf9ccxw7uqg/296_AI-and-You.mp3" length="30093298" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. I continue talking about this very sensitive subject with Maya Ackerman, author of the new book Creative Machines: AI, Art, and Us, which tackles it head on, full of emotion, vulnerability, and poetry. 
Maya is the CEO and co-founder of Wave AI, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.
We talk about experiments in machine creativity, the distinction between creative processes and creative products and the role of the observer in the creative experience, how bias against AI shows up, and how AI that’s constructed around compassion and ethical stewardship could support deeper human flourishing in the next few years.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1839</itunes:duration>
        <itunes:episode>297</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
    </item>
    <item>
        <title>295 - Guest: Maya Ackerman, Creative AI Pioneer, part 1</title>
        <itunes:title>295 - Guest: Maya Ackerman, Creative AI Pioneer, part 1</itunes:title>
        <link>https://aiandyou.net/e/295-guest-maya-ackerman-creative-ai-pioneer-part-1/</link>
        <comments>https://aiandyou.net/e/295-guest-maya-ackerman-creative-ai-pioneer-part-1/#comments</comments>
        <pubDate>Mon, 09 Feb 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/aa81604f-2004-3058-b1b0-f565ede964ed</guid>
        <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. Here to talk about this very sensitive subject is <a href='https://maya-ackerman.com/'>Maya Ackerman</a>, author of the new book <a href='https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267'>Creative Machines: AI, Art, and Us</a>, which tackles it head on, full of emotion, vulnerability, and poetry. </p>
<p>Maya is the CEO and co-founder of <a href='https://wave-ai.net'>Wave AI</a>, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.</p>
<p>We talk about how Maya’s interdisciplinary backgrounds of machine learning and computational creativity converged in her book, what Maya calls “humble creative machines” (AI collaborators) and how they can uplift us, her concept of a co-creative spectrum where humans are served by AI assisting us in being more creative, and – opera singing.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-295-4307.pdf'>HumanCusp Blog</a>.</p>
]]></description>
        <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. Here to talk about this very sensitive subject is <a href='https://maya-ackerman.com/'>Maya Ackerman</a>, author of the new book <a href='https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267'><em>Creative Machines: AI, Art, and Us</em></a>, which tackles it head on, full of emotion, vulnerability, and poetry. </p>
<p>Maya is the CEO and co-founder of <a href='https://wave-ai.net'>Wave AI</a>, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.</p>
<p>We talk about how Maya’s interdisciplinary backgrounds of machine learning and computational creativity converged in her book, what Maya calls “humble creative machines” (AI collaborators) and how they can uplift us, her concept of a co-creative spectrum where humans are served by AI assisting us in being more creative, and – opera singing.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/02/transcript-295-4307.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/ng6zmszm472c5msh/295_AI-and-You.mp3" length="27022969" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. Here to talk about this very sensitive subject is Maya Ackerman, author of the new book Creative Machines: AI, Art, and Us, which tackles it head on, full of emotion, vulnerability, and poetry. 
Maya is the CEO and co-founder of Wave AI, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.
We talk about how Maya’s interdisciplinary backgrounds of machine learning and computational creativity converged in her book, what Maya calls “humble creative machines” (AI collaborators) and how they can uplift us, her concept of a co-creative spectrum where humans are served by AI assisting us in being more creative, and – opera singing.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1647</itunes:duration>
        <itunes:episode>296</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
    </item>
    <item>
        <title>294 - Special: The Future of Work</title>
        <itunes:title>294 - Special: The Future of Work</itunes:title>
        <link>https://aiandyou.net/e/294-special-the-future-of-work/</link>
        <comments>https://aiandyou.net/e/294-special-the-future-of-work/#comments</comments>
        <pubDate>Mon, 02 Feb 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/cb6739f7-0fd4-3517-aa7f-9bfcee562910</guid>
        <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>The fear, uncertainty, and doubt around the future of work have reached epidemic proportions. We'll attempt to shed some light and provide some relief with data, questions, and informed speculation on the topic. How is the generation of content going to evolve, and what's awaiting us in robotics? Who will make the decisions about the future workplace, and what will happen to creatives in marketing and other fields?</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-294-6251.pdf'>HumanCusp Blog</a>.</p>
]]></description>
        <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>The fear, uncertainty, and doubt around the future of work have reached epidemic proportions. We'll attempt to shed some light and provide some relief with data, questions, and informed speculation on the topic. How is the generation of content going to evolve, and what's awaiting us in robotics? Who will make the decisions about the future workplace, and what will happen to creatives in marketing and other fields?</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-294-6251.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/j8gwsa2pfqc6i3pj/294_AI-and-You.mp3" length="24989180" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
The fear, uncertainty, and doubt around the future of work have reached epidemic proportions. We'll attempt to shed some light and provide some relief with data, questions, and informed speculation on the topic. How is the generation of content going to evolve, and what's awaiting us in robotics? Who will make the decisions about the future workplace, and what will happen to creatives in marketing and other fields?
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1520</itunes:duration>
                <itunes:episode>295</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>293 - Guests: José Antonio Bowen and C. Edward Watson, AI in education authors, part 2</title>
        <itunes:title>293 - Guests: José Antonio Bowen and C. Edward Watson, AI in education authors, part 2</itunes:title>
        <link>https://aiandyou.net/e/293-guests-jose-antonio-bowen-and-c-edward-watson-ai-in-education-authors-part-2/</link>
                    <comments>https://aiandyou.net/e/293-guests-jose-antonio-bowen-and-c-edward-watson-ai-in-education-authors-part-2/#comments</comments>        <pubDate>Mon, 26 Jan 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0fd4ace4-f511-3f4c-8e64-a8d8f62e7fc6</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>I am talking with José Antonio Bowen and C. Edward Watson about AI in postsecondary education, because they are the authors of the new book <a href='https://www.aacu.org/publication/teaching-with-ai'>Teaching with AI: A Practical Guide to a New Era of Human Learning</a>. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his book Teaching Naked reshaped conversations about technology and pedagogy. He is an international jazz pianist and edited the Cambridge Companion to Conducting.</p>
<p>Eddie Watson is Vice President for Digital Innovation at the American Association of Colleges and Universities and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum.  He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.</p>
<p>In our conclusion, we talk about the future of textbooks, José and Eddie’s meta-analysis of AI literacy frameworks and standardizing AI literacy training, the evolution of teaching models and practices like lectures, and the future of degrees themselves.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-293-2271.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>I am talking with José Antonio Bowen and C. Edward Watson about AI in postsecondary education, because they are the authors of the new book <a href='https://www.aacu.org/publication/teaching-with-ai'><em>Teaching with AI: A Practical Guide to a New Era of Human Learning</em></a>. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his book <em>Teaching Naked</em> reshaped conversations about technology and pedagogy. He is an international jazz pianist and edited the <em>Cambridge Companion to Conducting</em>.</p>
<p>Eddie Watson is Vice President for Digital Innovation at the American Association of Colleges and Universities and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum.  He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.</p>
<p>In our conclusion, we talk about the future of textbooks, José and Eddie’s meta-analysis of AI literacy frameworks and standardizing AI literacy training, the evolution of teaching models and practices like lectures, and the future of degrees themselves.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-293-2271.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bb8qpkdd7in52k2i/293_AI-and-You.mp3" length="34751035" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




I am talking with José Antonio Bowen and C. Edward Watson about AI in postsecondary education, because they are the authors of the new book Teaching with AI: A Practical Guide to a New Era of Human Learning. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his book Teaching Naked reshaped conversations about technology and pedagogy. He is an international jazz pianist and edited the Cambridge Companion to Conducting.
Eddie Watson is Vice President for Digital Innovation at the American Association of Colleges and Universities and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum.  He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.
In our conclusion, we talk about the future of textbooks, José and Eddie’s meta-analysis of AI literacy frameworks and standardizing AI literacy training, the evolution of teaching models and practices like lectures, and the future of degrees themselves.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2130</itunes:duration>
                <itunes:episode>294</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>292 - Guests: José Antonio Bowen and C. Edward Watson, AI in education authors, part 1</title>
        <itunes:title>292 - Guests: José Antonio Bowen and C. Edward Watson, AI in education authors, part 1</itunes:title>
        <link>https://aiandyou.net/e/292-guests-jose-antonio-bowen-and-c-edward-watson-ai-in-education-authors-part-1/</link>
                    <comments>https://aiandyou.net/e/292-guests-jose-antonio-bowen-and-c-edward-watson-ai-in-education-authors-part-1/#comments</comments>        <pubDate>Mon, 19 Jan 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c37edef6-7638-3c10-8347-86ca2077034c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>After last week’s exploration of AI in secondary education, it’s time to look at how it’s landing in the universities, and so I am talking with José Antonio Bowen and C. Edward Watson, authors of the brand new book <a href='https://www.aacu.org/publication/teaching-with-ai'>Teaching with AI: A Practical Guide to a New Era of Human Learning</a>. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his influential book Teaching Naked reshaped conversations about technology and pedagogy. He edited the Cambridge Companion to Conducting, and is an international jazz pianist.</p>
<p>C. Edward Watson - Eddie on our show - is Vice President for Digital Innovation at the <a href='https://www.aacu.org'>American Association of Colleges and Universities</a> and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum. He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.</p>
<p>We talk about how students and teachers are reacting to AI, threats to jobs – particularly teaching jobs – and changes to how we work, what really matters in the practice of teaching in an AI world, cheating, changes to relationships between teachers and students, and the importance of caring.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-292-4833.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>After last week’s exploration of AI in secondary education, it’s time to look at how it’s landing in the universities, and so I am talking with José Antonio Bowen and C. Edward Watson, authors of the brand new book <a href='https://www.aacu.org/publication/teaching-with-ai'><em>Teaching with AI: A Practical Guide to a New Era of Human Learning</em></a>. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his influential book <em>Teaching Naked</em> reshaped conversations about technology and pedagogy. He edited the <em>Cambridge Companion to Conducting</em>, and is an international jazz pianist.</p>
<p>C. Edward Watson - Eddie on our show - is Vice President for Digital Innovation at the <a href='https://www.aacu.org'>American Association of Colleges and Universities</a> and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum. He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.</p>
<p>We talk about how students and teachers are reacting to AI, threats to jobs – particularly teaching jobs – and changes to how we work, what really matters in the practice of teaching in an AI world, cheating, changes to relationships between teachers and students, and the importance of caring.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-292-4833.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/p38sqw8tayvk4k4s/292_AI-and-You.mp3" length="34413324" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




After last week’s exploration of AI in secondary education, it’s time to look at how it’s landing in the universities, and so I am talking with José Antonio Bowen and C. Edward Watson, authors of the brand new book Teaching with AI: A Practical Guide to a New Era of Human Learning. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his influential book Teaching Naked reshaped conversations about technology and pedagogy. He edited the Cambridge Companion to Conducting, and is an international jazz pianist.
C. Edward Watson - Eddie on our show - is Vice President for Digital Innovation at the American Association of Colleges and Universities and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum. He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.
We talk about how students and teachers are reacting to AI, threats to jobs – particularly teaching jobs – and changes to how we work, what really matters in the practice of teaching in an AI world, cheating, changes to relationships between teachers and students, and the importance of caring.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2109</itunes:duration>
                <itunes:episode>293</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>291 - Guest: Jeff Riley, Former Commissioner of Education, part 2</title>
        <itunes:title>291 - Guest: Jeff Riley, Former Commissioner of Education, part 2</itunes:title>
        <link>https://aiandyou.net/e/291-guest-jeff-riley-former-commissioner-of-education-part-2/</link>
                    <comments>https://aiandyou.net/e/291-guest-jeff-riley-former-commissioner-of-education-part-2/#comments</comments>        <pubDate>Mon, 12 Jan 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9f07157e-8522-3054-bda1-3a47e2874d60</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What’s going on with getting AI education into America’s classrooms? We're finding out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – <a href='https://dayofai.org/'>Day of AI</a>, started by MIT’s <a href='https://raise.mit.edu/'>Responsible AI for Social Empowerment and Education institute</a>. And they are mounting a campaign called <a href='https://www.dayofaiusa.org/'>Responsible AI for America’s Youth</a>, which is now running across all 50 states and will run an event called America’s Youth AI Festival in July 2026 in Boston.</p>
<p>Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.</p>
<p>In our conclusion, we talk about how Jeff sees the AI education of teachers evolving, responsible use of AI by students, differentiated learning, making AI in classrooms safe for teachers and students, the impact of AI on educational inequalities, the future of educational reform, and how you can get involved in AI in schools. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-291-5451.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What’s going on with getting AI education into America’s classrooms? We're finding out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – <a href='https://dayofai.org/'>Day of AI</a>, started by MIT’s <a href='https://raise.mit.edu/'>Responsible AI for Social Empowerment and Education institute</a>. And they are mounting a campaign called <a href='https://www.dayofaiusa.org/'>Responsible AI for America’s Youth</a>, which is now running across all 50 states and will run an event called America’s Youth AI Festival in July 2026 in Boston.</p>
<p>Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.</p>
<p>In our conclusion, we talk about how Jeff sees the AI education of teachers evolving, responsible use of AI by students, differentiated learning, making AI in classrooms safe for teachers and students, the impact of AI on educational inequalities, the future of educational reform, and how you can get involved in AI in schools. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-291-5451.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6ffj8f35amq26kfq/291_AI-and-You.mp3" length="32522894" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What’s going on with getting AI education into America’s classrooms? We're finding out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – Day of AI, started by MIT’s Responsible AI for Social Empowerment and Education institute. And they are mounting a campaign called Responsible AI for America’s Youth, which is now running across all 50 states and will run an event called America’s Youth AI Festival in July 2026 in Boston.
Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.
In our conclusion, we talk about how Jeff sees the AI education of teachers evolving, responsible use of AI by students, differentiated learning, making AI in classrooms safe for teachers and students, the impact of AI on educational inequalities, the future of educational reform, and how you can get involved in AI in schools. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1991</itunes:duration>
                <itunes:episode>292</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>290 - Guest: Jeff Riley, Former Commissioner of Education, part 1</title>
        <itunes:title>290 - Guest: Jeff Riley, Former Commissioner of Education, part 1</itunes:title>
        <link>https://aiandyou.net/e/290-guest-jeff-riley-former-commissioner-of-education-part-1/</link>
                    <comments>https://aiandyou.net/e/290-guest-jeff-riley-former-commissioner-of-education-part-1/#comments</comments>        <pubDate>Mon, 05 Jan 2026 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5e93aaa6-c886-3832-9f21-7729c92bec91</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What’s going on with getting AI education into America’s classrooms? We’re going to find out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – <a href='https://dayofai.org/'>Day of AI</a>, started by MIT’s <a href='https://raise.mit.edu/'>Responsible AI for Social Empowerment and Education institute</a>. And they are mounting a campaign called <a href='https://www.dayofaiusa.org/'>Responsible AI for America’s Youth</a>, which is now running across all 50 states and will run an event called America’s Youth AI Festival in July 2026 in Boston.</p>
<p>Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.</p>
<p>We talk about what the campaign is doing and how teachers are responding to it, risks of AI and social media to kids, what to do about cheating and AI detectors, and much more!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-290-9218.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What’s going on with getting AI education into America’s classrooms? We’re going to find out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – <a href='https://dayofai.org/'>Day of AI</a>, started by MIT’s <a href='https://raise.mit.edu/'>Responsible AI for Social Empowerment and Education institute</a>. And they are mounting a campaign called <a href='https://www.dayofaiusa.org/'>Responsible AI for America’s Youth</a>, which is now running across all 50 states and will run an event called America’s Youth AI Festival in July 2026 in Boston.</p>
<p>Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.</p>
<p>We talk about what the campaign is doing and how teachers are responding to it, risks of AI and social media to kids, what to do about cheating and AI detectors, and much more!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2026/01/transcript-290-9218.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/zs7etwimw4m3uakz/290_AI-and-You.mp3" length="28107991" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What’s going on with getting AI education into America’s classrooms? We’re going to find out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – Day of AI, started by MIT’s Responsible AI for Social Empowerment and Education institute. And they are mounting a campaign called Responsible AI for America’s Youth, which is now running across all 50 states and will run an event called America’s Youth AI Festival in July 2026 in Boston.
Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.
We talk about what the campaign is doing and how teachers are responding to it, risks of AI and social media to kids, what to do about cheating and AI detectors, and much more!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1715</itunes:duration>
                <itunes:episode>291</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>289 - Special Panel: AI 2025 Retrospective/2026 Predictions</title>
        <itunes:title>289 - Special Panel: AI 2025 Retrospective/2026 Predictions</itunes:title>
        <link>https://aiandyou.net/e/289-special-panel-ai-2025-retrospective2026-predictions/</link>
                    <comments>https://aiandyou.net/e/289-special-panel-ai-2025-retrospective2026-predictions/#comments</comments>        <pubDate>Mon, 29 Dec 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/51d6435a-6cc3-3f11-8778-9ad769b9f985</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It's that time to visit the - ghosts? - of AI past, present, and future, in our traditional retrospective/predictions episode. Forming the panel are <a href='https://councils.forbes.com/profile/Dan-Turchin-Chief-Executive-Officer-PeopleReign/54f16bca-ddd3-47d4-a4df-ea2a55513639'>Dan Turchin</a>, CEO of <a href='https://peoplereign.io/'>PeopleReign</a>, the AI platform automating HR, and host of the “AI and the Future of Work” <a href='https://peoplereign.io/podcast'>podcast</a>; and <a href='https://fosterfletcher.com/'>Richard Foster-Fletcher</a>, Founder and Executive Chair of <a href='https://mkai.org/'>MKAI</a>, the inclusive Artificial Intelligence Community, leader of the MKAI Centre for Digital Trust, and host of the Boundless <a href='https://open.spotify.com/show/1yCV03eoEYX2FtbQy87mX8'>podcast</a>. Both have been on the show before, as guest experts and year-end panelists, and both are good friends.</p>
<p>We'll talk about surprises from 2025, ways companies have used AI productively and ways it has been oversold, whether an AI bubble may pop next year, what'll happen next with AI slop, how much AI may advance human progress next year, and the emergence of AI nation states.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/12/transcript-289-7902.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It's that time to visit the - ghosts? - of AI past, present, and future, in our traditional retrospective/predictions episode. Forming the panel are <a href='https://councils.forbes.com/profile/Dan-Turchin-Chief-Executive-Officer-PeopleReign/54f16bca-ddd3-47d4-a4df-ea2a55513639'>Dan Turchin</a>, CEO of <a href='https://peoplereign.io/'>PeopleReign</a>, the AI platform automating HR, and host of the “AI and the Future of Work” <a href='https://peoplereign.io/podcast'>podcast</a>; and <a href='https://fosterfletcher.com/'>Richard Foster-Fletcher</a>, Founder and Executive Chair of <a href='https://mkai.org/'>MKAI</a>, the inclusive Artificial Intelligence Community, leader of the MKAI Centre for Digital Trust, and host of the Boundless <a href='https://open.spotify.com/show/1yCV03eoEYX2FtbQy87mX8'>podcast</a>. Both have been on the show before, as guest experts and year-end panelists, and both are good friends.</p>
<p>We'll talk about surprises from 2025, ways companies have used AI productively and ways it has been oversold, whether an AI bubble may pop next year, what'll happen next with AI slop, how much AI may advance human progress next year, and the emergence of AI nation states.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/12/transcript-289-7902.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/wqw444pit6p3xmfi/289_AI-and-You.mp3" length="32445572" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




It's that time to visit the - ghosts? - of AI past, present, and future, in our traditional retrospective/predictions episode. Forming the panel are Dan Turchin, CEO of PeopleReign, the AI platform automating HR, and host of the “AI and the Future of Work” podcast; and Richard Foster-Fletcher, Founder and Executive Chair of MKAI, the inclusive Artificial Intelligence Community, leader of the MKAI Centre for Digital Trust, and host of the Boundless podcast. Both have been on the show before, as guest experts and year-end panelists, and both are good friends.
We'll talk about surprises from 2025, ways companies have used AI productively and ways it has been oversold, whether an AI bubble may pop next year, what'll happen next with AI slop, how much AI may advance human progress next year, and the emergence of AI nation states.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1986</itunes:duration>
                <itunes:episode>290</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>288 - Guest: Suzanne Gildert, Quantum Entrepreneur, part 2</title>
        <itunes:title>288 - Guest: Suzanne Gildert, Quantum Entrepreneur, part 2</itunes:title>
        <link>https://aiandyou.net/e/288-guest-suzanne-gildert-quantum-entrepreneur-part-2/</link>
                    <comments>https://aiandyou.net/e/288-guest-suzanne-gildert-quantum-entrepreneur-part-2/#comments</comments>        <pubDate>Mon, 22 Dec 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/eb24eb2a-4d02-3b90-87a5-daa13d0f99f7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s a more profound question that many would say is impossible. We are talking with <a href='https://www.suzannegildert.com/'>Suzanne Gildert</a>, founder of <a href='https://www.nirvanic.ai/'>Nirvanic</a>, a Quantum-AI research company, who is not just talking about consciousness; she’s doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics and founded two robot companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and develop conscious AI using quantum computing.</p>
<p>In the conclusion of our interview, we talk about how robots will use quantum computing, world models and what robots really can and can’t do right now, form factors for robots, and the connection between robot consciousness and finding our purpose in the world.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/12/transcript-288-3773.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s a more profound question that many would say is impossible. We are talking with <a href='https://www.suzannegildert.com/'>Suzanne Gildert</a>, founder of <a href='https://www.nirvanic.ai/'>Nirvanic</a>, a Quantum-AI research company, who is not just talking about consciousness; she’s doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics and founded two robot companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and develop conscious AI using quantum computing.</p>
<p>In the conclusion of our interview, we talk about how robots will use quantum computing, world models and what robots really can and can’t do right now, form factors for robots, and the connection between robot consciousness and finding our purpose in the world.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/12/transcript-288-3773.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/fsqmjcqnv49d53s6/288_AI-and-You.mp3" length="29917755" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s an even more profound question that many would say is impossible. We are talking with Suzanne Gildert, founder of Nirvanic, a Quantum-AI research company, who is not just talking about consciousness but doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics and founded two robotics companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and develop conscious AI using quantum computing.
In the conclusion of our interview, we talk about how robots will use quantum computing, world models, what robots really can and can’t do right now, form factors for robots, and the connection between robot consciousness and finding our purpose in the world.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1828</itunes:duration>
        <itunes:episode>288</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>287 - Guest: Suzanne Gildert, Quantum Entrepreneur, part 1</title>
        <itunes:title>287 - Guest: Suzanne Gildert, Quantum Entrepreneur, part 1</itunes:title>
        <link>https://aiandyou.net/e/287-guest-suzanne-gildert-quantum-entrepreneur-part-1/</link>
        <comments>https://aiandyou.net/e/287-guest-suzanne-gildert-quantum-entrepreneur-part-1/#comments</comments>
        <pubDate>Mon, 15 Dec 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/13442ce3-a9bf-380f-84a6-efeb9e5bca4d</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s an even more profound question that many would say is impossible. Tackling both of those head-on is <a href='https://www.suzannegildert.com/'>Suzanne Gildert</a>, founder of <a href='https://www.nirvanic.ai/'>Nirvanic</a>, a Quantum-AI research company. She’s not just talking about consciousness; she’s doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics – I mean, how cool is that – and founded two robotics companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and develop conscious AI using quantum computing.</p>
<p>We talk about quantum computing and consciousness, the nature of reality and its connection to quantum physics, the Observer Effect and Schrödinger’s Box, panpsychism, the state of the art of quantum computing, quantum supremacy, the present and future of general-purpose robotics, and the connection between reward functions and the consciousness of the universe.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/12/transcript-287-3078.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s an even more profound question that many would say is impossible. Tackling both of those head-on is <a href='https://www.suzannegildert.com/'>Suzanne Gildert</a>, founder of <a href='https://www.nirvanic.ai/'>Nirvanic</a>, a Quantum-AI research company. She’s not just talking about consciousness; she’s doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics – I mean, how cool is that – and founded two robotics companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and develop conscious AI using quantum computing.</p>
<p>We talk about quantum computing and consciousness, the nature of reality and its connection to quantum physics, the Observer Effect and Schrödinger’s Box, panpsychism, the state of the art of quantum computing, quantum supremacy, the present and future of general-purpose robotics, and the connection between reward functions and the consciousness of the universe.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/12/transcript-287-3078.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/myzvpvsd3h9p8v44/287_AI-and-You.mp3" length="34034653" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s an even more profound question that many would say is impossible. Tackling both of those head-on is Suzanne Gildert, founder of Nirvanic, a Quantum-AI research company. She’s not just talking about consciousness; she’s doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics – I mean, how cool is that – and founded two robotics companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and develop conscious AI using quantum computing.
We talk about quantum computing and consciousness, the nature of reality and its connection to quantum physics, the Observer Effect and Schrödinger’s Box, panpsychism, the state of the art of quantum computing, quantum supremacy, the present and future of general-purpose robotics, and the connection between reward functions and the consciousness of the universe.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2085</itunes:duration>
        <itunes:episode>287</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>286 - Guest: Craig Kaplan, Artificial Superintelligence Expert, part 2</title>
        <itunes:title>286 - Guest: Craig Kaplan, Artificial Superintelligence Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/286-guest-craig-kaplan-artificial-superintelligence-expert-part-2/</link>
        <comments>https://aiandyou.net/e/286-guest-craig-kaplan-artificial-superintelligence-expert-part-2/#comments</comments>
        <pubDate>Mon, 08 Dec 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8541de91-1ae8-3172-921c-2de0369f0b89</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What if artificial superintelligence - ASI - could be made both safer and more profitable? I'm talking with Craig Kaplan, who has the website <a href='https://superintelligence.com'>superintelligence.com</a>, about his concept of "democratic AI." Craig is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.</p>
<p>Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>In part 2, we talk about rights of AIs, safe superintelligence, where AI gets its values, and how model vendors might be incentivized to put their products into the collective AI intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-286-3376.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What if artificial superintelligence - ASI - could be made both safer <em>and</em> more profitable? I'm talking with Craig Kaplan, who has the website <a href='https://superintelligence.com'>superintelligence.com</a>, about his concept of "democratic AI." Craig is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.</p>
<p>Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>In part 2, we talk about rights of AIs, safe superintelligence, where AI gets its values, and how model vendors might be incentivized to put their products into the collective AI intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-286-3376.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/42mmgeqi4xic4k3g/286_AI-and-You.mp3" length="36837487" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What if artificial superintelligence - ASI - could be made both safer and more profitable? I'm talking with Craig Kaplan, who has the website superintelligence.com, about his concept of "democratic AI." Craig is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.
Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.
In part 2, we talk about rights of AIs, safe superintelligence, where AI gets its values, and how model vendors might be incentivized to put their products into the collective AI intelligence.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2260</itunes:duration>
        <itunes:episode>286</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>285 - Guest: Craig Kaplan, Artificial Superintelligence Expert, part 1</title>
        <itunes:title>285 - Guest: Craig Kaplan, Artificial Superintelligence Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/285-guest-craig-kaplan-artificial-superintelligence-expert-part-1/</link>
        <comments>https://aiandyou.net/e/285-guest-craig-kaplan-artificial-superintelligence-expert-part-1/#comments</comments>
        <pubDate>Mon, 01 Dec 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/094f9746-c291-3174-998d-0f32a671e13f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What if artificial superintelligence - ASI - could be made both safer and more profitable? Returning to the show after a year is Craig Kaplan, talking about how "democratic AI" can do that. Craig, who has the website <a href='https://superintelligence.com'>superintelligence.com</a>, is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.</p>
<p>Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>We talk about democratic AI, a kind of hive mind of AIs that combine to work together safely: how they talk to each other, what they are made up of, and systems for solving ethical problems.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-285-4358.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What if artificial superintelligence - ASI - could be made both safer <em>and</em> more profitable? Returning to the show after a year is Craig Kaplan, talking about how "democratic AI" can do that. Craig, who has the website <a href='https://superintelligence.com'>superintelligence.com</a>, is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.</p>
<p>Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>We talk about democratic AI, a kind of hive mind of AIs that combine to work together safely: how they talk to each other, what they are made up of, and systems for solving ethical problems.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-285-4358.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/aeq6btxkeys5krrz/285_AI-and-You.mp3" length="31274450" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What if artificial superintelligence - ASI - could be made both safer and more profitable? Returning to the show after a year is Craig Kaplan, talking about how "democratic AI" can do that. Craig, who has the website superintelligence.com, is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.
Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.
We talk about democratic AI, a kind of hive mind of AIs that combine to work together safely: how they talk to each other, what they are made up of, and systems for solving ethical problems.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1913</itunes:duration>
        <itunes:episode>285</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>284 - Guests: Bruce Schneier &amp; Nathan Sanders, AI in Democracy authors, part 2</title>
        <itunes:title>284 - Guests: Bruce Schneier &amp; Nathan Sanders, AI in Democracy authors, part 2</itunes:title>
        <link>https://aiandyou.net/e/284-guests-bruce-schneier-nathan-sanders-ai-in-democracy-authors-part-2/</link>
        <comments>https://aiandyou.net/e/284-guests-bruce-schneier-nathan-sanders-ai-in-democracy-authors-part-2/#comments</comments>
        <pubDate>Mon, 24 Nov 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0ccc2555-c1f5-311b-8810-142f7421f2ca</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How should AI change democracy? That’s the topic of <a href='https://mitpress.mit.edu/9780262049948/rewiring-democracy/'>Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship</a>, and I am continuing my talk with its authors. <a href='http://schneier.com/'>Bruce Schneier</a> is an internationally renowned security technologist and the bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. <a href='http://nsanders.me/'>Nathan Sanders</a> is a data scientist who has held fellowships in the Massachusetts legislature and at the Berkman-Klein Center at Harvard. He writes for The New York Times and The Atlantic.</p>
<p>We talk about whether wealthy entities might subvert the use of AI in democracy, how smaller countries are engaging with AI in government, the utility of open weight and open source models, digital twins in government, the future of surveillance, and what makes Bruce and Nathan optimistic about the future.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-284-3484.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How should AI change democracy? That’s the topic of <a href='https://mitpress.mit.edu/9780262049948/rewiring-democracy/'><em>Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship</em></a>, and I am continuing my talk with its authors. <a href='http://schneier.com/'>Bruce Schneier</a> is an internationally renowned security technologist and the bestselling author of fourteen books, including <em>Data and Goliath</em> and <em>A Hacker’s Mind</em>. <a href='http://nsanders.me/'>Nathan Sanders</a> is a data scientist who has held fellowships in the Massachusetts legislature and at the Berkman-Klein Center at Harvard. He writes for <em>The New York Times</em> and <em>The Atlantic</em>.</p>
<p>We talk about whether wealthy entities might subvert the use of AI in democracy, how smaller countries are engaging with AI in government, the utility of open weight and open source models, digital twins in government, the future of surveillance, and what makes Bruce and Nathan optimistic about the future.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-284-3484.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ttrkerye98hfbnfv/284_AI-and-You.mp3" length="25985226" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How should AI change democracy? That’s the topic of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, and I am continuing my talk with its authors. Bruce Schneier is an internationally renowned security technologist and the bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. Nathan Sanders is a data scientist who has held fellowships in the Massachusetts legislature and at the Berkman-Klein Center at Harvard. He writes for The New York Times and The Atlantic.
We talk about whether wealthy entities might subvert the use of AI in democracy, how smaller countries are engaging with AI in government, the utility of open weight and open source models, digital twins in government, the future of surveillance, and what makes Bruce and Nathan optimistic about the future.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1658</itunes:duration>
        <itunes:episode>284</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>283 - Guests: Bruce Schneier &amp; Nathan Sanders, AI in Democracy authors, part 1</title>
        <itunes:title>283 - Guests: Bruce Schneier &amp; Nathan Sanders, AI in Democracy authors, part 1</itunes:title>
        <link>https://aiandyou.net/e/283-guests-bruce-schneier-nathan-sanders-ai-in-democracy-authors-part-1/</link>
        <comments>https://aiandyou.net/e/283-guests-bruce-schneier-nathan-sanders-ai-in-democracy-authors-part-1/#comments</comments>
        <pubDate>Mon, 17 Nov 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e0369254-8a90-3cfd-9ff3-a254606a1ed0</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How should AI change democracy? That’s the topic of <a href='https://mitpress.mit.edu/9780262049948/rewiring-democracy/'>Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship</a>, and I am talking today with its authors. <a href='http://schneier.com/'>Bruce Schneier</a> is an internationally renowned security technologist and the bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. He is a lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and Chief of Security Architecture at Inrupt. <a href='http://nsanders.me/'>Nathan Sanders</a> is a data scientist researching machine learning, astrophysics, public health, environmental justice, and more. He has held fellowships in the Massachusetts legislature and at the Berkman-Klein Center at Harvard. He writes on AI and democracy in The New York Times and The Atlantic.</p>
<p>We talk about this fascinating and scary intersection of AI and government: AI being used to make legislation, the concept of democracy as an information system, ways AI can transform how citizens engage with their governments, regulatory responses to AI from the US and around the world, and how the judicial branch can use AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-283-2354.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How should AI change democracy? That’s the topic of <a href='https://mitpress.mit.edu/9780262049948/rewiring-democracy/'><em>Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship</em></a>, and I am talking today with its authors. <a href='http://schneier.com/'>Bruce Schneier</a> is an internationally renowned security technologist and the bestselling author of fourteen books, including <em>Data and Goliath</em> and <em>A Hacker’s Mind</em>. He is a lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and Chief of Security Architecture at Inrupt. <a href='http://nsanders.me/'>Nathan Sanders</a> is a data scientist researching machine learning, astrophysics, public health, environmental justice, and more. He has held fellowships in the Massachusetts legislature and at the Berkman-Klein Center at Harvard. He writes on AI and democracy in <em>The New York Times</em> and <em>The Atlantic</em>.</p>
<p>We talk about this fascinating and scary intersection of AI and government: AI being used to make legislation, the concept of democracy as an information system, ways AI can transform how citizens engage with their governments, regulatory responses to AI from the US and around the world, and how the judicial branch can use AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/11/transcript-283-2354.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8rpiz7usn36dkdii/283_AI-and-You.mp3" length="30232896" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How should AI change democracy? That’s the topic of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, and I am talking today with its authors. Bruce Schneier is an internationally renowned security technologist and the bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. He is a lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and Chief of Security Architecture at Inrupt. Nathan Sanders is a data scientist researching machine learning, astrophysics, public health, environmental justice, and more. He has held fellowships in the Massachusetts legislature and at the Berkman-Klein Center at Harvard. He writes on AI and democracy in The New York Times and The Atlantic.
We talk about this fascinating and scary intersection of AI and government: AI being used to make legislation, the concept of democracy as an information system, ways AI can transform how citizens engage with their governments, regulatory responses to AI from the US and around the world, and how the judicial branch can use AI.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1848</itunes:duration>
        <itunes:episode>283</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>282 - Guest: Gerry White, Dean of Academic Technology, part 2</title>
        <itunes:title>282 - Guest: Gerry White, Dean of Academic Technology, part 2</itunes:title>
        <link>https://aiandyou.net/e/282-guest-gerry-white-dean-of-academic-technology-part-2/</link>
        <comments>https://aiandyou.net/e/282-guest-gerry-white-dean-of-academic-technology-part-2/#comments</comments>
        <pubDate>Mon, 10 Nov 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/09439db9-212f-3f59-ab42-c9f84553f8fb</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What's really going on in classrooms with AI right now? I'm talking with <a href='https://gerrywhite.substack.com/'>Gerry White</a>, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at <a href='https://www.ecpi.edu/'>ECPI University in Virginia</a> and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His <a href='https://gerrywhite.substack.com/'>Substack</a> articles and <a href='https://gerrywhitebooks.com/'>books</a> unpack the ethical, emotional, and societal consequences of AI.</p>
<p>We talk about how cultural bias in GenAI affects the classroom, what school leadership should be doing, AI in group work, assessment, how AI might accelerate learning and augment the human experience, interactions with parents, and kids’ social uses of AI, all in the context of real experiences in school.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-282-1614.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What's really going on in classrooms with AI right now? I'm talking with <a href='https://gerrywhite.substack.com/'>Gerry White</a>, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at <a href='https://www.ecpi.edu/'>ECPI University in Virginia</a> and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His <a href='https://gerrywhite.substack.com/'>Substack</a> articles and <a href='https://gerrywhitebooks.com/'>books</a> unpack the ethical, emotional, and societal consequences of AI.</p>
<p>We talk about how cultural bias in GenAI affects the classroom, what school leadership should be doing, AI in group work, assessment, how AI might accelerate learning and augment the human experience, interactions with parents, and kids’ social uses of AI, all in the context of real experiences in school.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-282-1614.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/wdu6x7rtc7zb48k2/282_AI-and-You.mp3" length="32204409" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
What's really going on in classrooms with AI right now? I'm talking with Gerry White, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at ECPI University in Virginia and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His Substack articles and books unpack the ethical, emotional, and societal consequences of AI.
We talk about how cultural bias in GenAI affects the classroom, what school leadership should be doing, AI in group work, assessment, how AI might accelerate learning and augment the human experience, interactions with parents, and kids’ social uses of AI, all in the context of real experiences in school.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1971</itunes:duration>
                <itunes:episode>283</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>281 - Guest: Gerry White, Dean of Academic Technology, part 1</title>
        <itunes:title>281 - Guest: Gerry White, Dean of Academic Technology, part 1</itunes:title>
        <link>https://aiandyou.net/e/281-guest-gerry-white-dean-of-academic-technology-part-1/</link>
        <comments>https://aiandyou.net/e/281-guest-gerry-white-dean-of-academic-technology-part-1/#comments</comments>
        <pubDate>Mon, 03 Nov 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a603879e-5ace-3a43-814d-063714319e7b</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>We're continuing to focus on AI in education because it's so pivotal to the future of the human race. What's really going on in classrooms with AI right now? We are learning that from <a href='https://gerrywhite.substack.com/'>Gerry White</a>, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at <a href='https://www.ecpi.edu/'>ECPI University in Virginia</a> and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His <a href='https://gerrywhite.substack.com/'>Substack</a> articles and <a href='https://gerrywhitebooks.com/'>books</a> unpack the ethical, emotional, and societal consequences of AI.</p>
<p>We’re going to talk about how generative AI first showed up in Gerry’s classrooms, the importance of preserving students’ voices, confronting the cheating and plagiarism problems, optimal ways of engaging students’ use of AI, and finding the unique value of humans in the workplace, all in the context of real experiences in real classes.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-281-0188.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>We're continuing to focus on AI in education because it's so pivotal to the future of the human race. What's really going on in classrooms with AI right now? We are learning that from <a href='https://gerrywhite.substack.com/'>Gerry White</a>, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at <a href='https://www.ecpi.edu/'>ECPI University in Virginia</a> and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His <a href='https://gerrywhite.substack.com/'>Substack</a> articles and <a href='https://gerrywhitebooks.com/'>books</a> unpack the ethical, emotional, and societal consequences of AI.</p>
<p>We’re going to talk about how generative AI first showed up in Gerry’s classrooms, the importance of preserving students’ voices, confronting the cheating and plagiarism problems, optimal ways of engaging students’ use of AI, and finding the unique value of humans in the workplace, all in the context of real experiences in real classes.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-281-0188.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bimepxtfip6dkd44/281_AI-and-You.mp3" length="30814695" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
We're continuing to focus on AI in education because it's so pivotal to the future of the human race. What's really going on in classrooms with AI right now? We are learning that from Gerry White, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at ECPI University in Virginia and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His Substack articles and books unpack the ethical, emotional, and societal consequences of AI.
We’re going to talk about how generative AI first showed up in Gerry’s classrooms, the importance of preserving students’ voices, confronting the cheating and plagiarism problems, optimal ways of engaging students’ use of AI, and finding the unique value of humans in the workplace, all in the context of real experiences in real classes.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1884</itunes:duration>
                <itunes:episode>282</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>280 - Guest: Alyson King, Researcher in Academic Integrity, part 2</title>
        <itunes:title>280 - Guest: Alyson King, Researcher in Academic Integrity, part 2</itunes:title>
        <link>https://aiandyou.net/e/280-guest-alyson-king-researcher-in-academic-integrity-part-2/</link>
        <comments>https://aiandyou.net/e/280-guest-alyson-king-researcher-in-academic-integrity-part-2/#comments</comments>
        <pubDate>Mon, 27 Oct 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d98a115b-80ab-3ba0-ada5-11e19785cd5a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>Students using AI to cheat on homework, and students being inaccurately flagged as cheating, both fall under the heading of 'academic integrity,' so I am talking with Alyson King, Professor of Political Science at Ontario Tech University in Canada and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” which collects 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.</p>
<p>Alyson earned her PhD in the History of Education at the University of Toronto, and she currently engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.</p>
<p>We’re going to talk about teachers getting to know their students’ voices, AI detectors, and the place of AI in education.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-280-8529.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>Students using AI to cheat on homework, and students being inaccurately flagged as cheating, both fall under the heading of 'academic integrity,' so I am talking with Alyson King, Professor of Political Science at Ontario Tech University in Canada and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” which collects 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.</p>
<p>Alyson earned her PhD in the History of Education at the University of Toronto, and she currently engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.</p>
<p>We’re going to talk about teachers getting to know their students’ voices, AI detectors, and the place of AI in education.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-280-8529.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/exzcsnr2hwn5vfdf/280_AI-and-You.mp3" length="25645794" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
Students using AI to cheat on homework, and students being inaccurately flagged as cheating, both fall under the heading of 'academic integrity,' so I am talking with Alyson King, Professor of Political Science at Ontario Tech University in Canada and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” which collects 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.
Alyson earned her PhD in the History of Education at the University of Toronto, and she currently engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.
We’re going to talk about teachers getting to know their students’ voices, AI detectors, and the place of AI in education.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1561</itunes:duration>
                <itunes:episode>281</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>279 - Guest: Alyson King, Researcher in Academic Integrity, part 1</title>
        <itunes:title>279 - Guest: Alyson King, Researcher in Academic Integrity, part 1</itunes:title>
        <link>https://aiandyou.net/e/279-guest-alyson-king-researcher-in-academic-integrity-part-1/</link>
        <comments>https://aiandyou.net/e/279-guest-alyson-king-researcher-in-academic-integrity-part-1/#comments</comments>
        <pubDate>Mon, 20 Oct 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/94836d0a-4375-3ebd-843e-167b22042462</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>Students using AI to cheat on homework, and students being inaccurately flagged as cheating, both fall under the heading of 'academic integrity,' so I am talking with Alyson King, Professor of Political Science at Ontario Tech University in Canada and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” which collects 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.</p>
<p>Alyson earned her PhD in the History of Education at the University of Toronto, and she currently engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.</p>
<p>We’re going to talk about plagiarism, AI-proofing assignments, motivating students, threats to critical thinking, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-279-4523.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>Students using AI to cheat on homework, and students being inaccurately flagged as cheating, both fall under the heading of 'academic integrity,' so I am talking with Alyson King, Professor of Political Science at Ontario Tech University in Canada and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” which collects 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.</p>
<p>Alyson earned her PhD in the History of Education at the University of Toronto, and she currently engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.</p>
<p>We’re going to talk about plagiarism, AI-proofing assignments, motivating students, threats to critical thinking, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-279-4523.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/m29vp35qumsf37e7/279_AI-and-You.mp3" length="29801562" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
Students using AI to cheat on homework, and students being inaccurately flagged as cheating, both fall under the heading of 'academic integrity,' so I am talking with Alyson King, Professor of Political Science at Ontario Tech University in Canada and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” which collects 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.
Alyson earned her PhD in the History of Education at the University of Toronto, and she currently engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.
We’re going to talk about plagiarism, AI-proofing assignments, motivating students, threats to critical thinking, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1821</itunes:duration>
                <itunes:episode>280</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>278 - Guest: Becky Keene, AI in Education Author</title>
        <itunes:title>278 - Guest: Becky Keene, AI in Education Author</itunes:title>
        <link>https://aiandyou.net/e/278-guest-becky-keene-ai-in-education-author/</link>
        <comments>https://aiandyou.net/e/278-guest-becky-keene-ai-in-education-author/#comments</comments>
        <pubDate>Mon, 13 Oct 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9f881acc-07f3-3f9d-b391-a8d18631903f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>We are again focusing on AI in education, because that is where the rubber meets the road for nearly every issue in AI, and where we need to get it right: that's where we're training the generation that will save the world. You could be very pessimistic about that, or very optimistic, and one person who is optimistic is <a href='https://beckykeene.com/'>Becky Keene</a>, an educator, author, and speaker focused on innovative teaching and learning, and author of the new book, <a href='https://www.amazon.com/Optimism-Redefining-Artificial-Intelligence-Education/dp/1956306978'><em>AI Optimism</em></a>, about all the good possibilities of AI in education. She specializes in instructional coaching, game-based learning, and integrating AI into education to empower students as creators.</p>
<p>We talk about the conflict between fear and hope about AI in education, changing our focus from product to process, how to reshape education to leverage AI, what role school leadership should play, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-278-0254.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>We are again focusing on AI in education, because that is where the rubber meets the road for nearly every issue in AI, and where we need to get it right: that's where we're training the generation that will save the world. You could be very pessimistic about that, or very optimistic, and one person who is optimistic is <a href='https://beckykeene.com/'>Becky Keene</a>, an educator, author, and speaker focused on innovative teaching and learning, and author of the new book, <a href='https://www.amazon.com/Optimism-Redefining-Artificial-Intelligence-Education/dp/1956306978'><em>AI Optimism</em></a>, about all the good possibilities of AI in education. She specializes in instructional coaching, game-based learning, and integrating AI into education to empower students as creators.</p>
<p>We talk about the conflict between fear and hope about AI in education, changing our focus from product to process, how to reshape education to leverage AI, what role school leadership should play, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/10/transcript-278-0254.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/p9kvz7irz6vwz48q/278_AI-and-You.mp3" length="26756311" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
We are again focusing on AI in education, because that is where the rubber meets the road for nearly every issue in AI, and where we need to get it right: that's where we're training the generation that will save the world. You could be very pessimistic about that, or very optimistic, and one person who is optimistic is Becky Keene, an educator, author, and speaker focused on innovative teaching and learning, and author of the new book, AI Optimism, about all the good possibilities of AI in education. She specializes in instructional coaching, game-based learning, and integrating AI into education to empower students as creators.
We talk about the conflict between fear and hope about AI in education, changing our focus from product to process, how to reshape education to leverage AI, what role school leadership should play, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1630</itunes:duration>
                <itunes:episode>279</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>277 - Guest: Michael Gerlich, Adaptability Thought Leader, part 2</title>
        <itunes:title>277 - Guest: Michael Gerlich, Adaptability Thought Leader, part 2</itunes:title>
        <link>https://aiandyou.net/e/277-guest-michael-gerlich-adaptability-thought-leader-part-2/</link>
        <comments>https://aiandyou.net/e/277-guest-michael-gerlich-adaptability-thought-leader-part-2/#comments</comments>
        <pubDate>Mon, 06 Oct 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/4046f330-3af2-31e2-980d-e2baa5b1387c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>As we use AI more and more as a critical assistant, what might that be doing to our critical thinking? Professor <a href='https://www.linkedin.com/in/prof-dr-michael-gerlich-a2940619a/'>Michael Gerlich</a> has published his research in the paper “<a href='https://www.mdpi.com/2075-4698/15/1/6'>AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking</a>” in the journal Societies. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.</p>
<p>Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He’s taught at the London School of Economics and Political Science, Cambridge, and other institutions, and he has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and Ministers of economic affairs in Azerbaijan.</p>
<p>In part 2, we talk about whether or how we can tell that our cognition has been impaired, how the future of work will change with cognitive offloading, and what employers need to beware of and leverage.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-277-7914.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>As we use AI more and more as a critical <em>assistant,</em> what might that be doing to our critical <em>thinking? </em>Professor <a href='https://www.linkedin.com/in/prof-dr-michael-gerlich-a2940619a/'>Michael Gerlich</a> has published his research in the paper “<a href='https://www.mdpi.com/2075-4698/15/1/6'>AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking</a>” in the journal <em>Societies</em>. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.</p>
<p>Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He’s taught at the London School of Economics and Political Science, Cambridge, and other institutions, and he has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and Ministers of economic affairs in Azerbaijan.</p>
<p>In part 2, we talk about whether or how we can tell that our cognition has been impaired, how the future of work will change with cognitive offloading, and what employers need to beware of and leverage.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-277-7914.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/4w2ersbxqer9t3qq/277_AI-and-You.mp3" length="32906581" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
As we use AI more and more as a critical assistant, what might that be doing to our critical thinking? Professor Michael Gerlich has published his research in the paper “AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking” in the journal Societies. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.
Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He’s taught at the London School of Economics and Political Science, Cambridge, and other institutions, and he has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and Ministers of economic affairs in Azerbaijan.
In part 2, we talk about whether or how we can tell that our cognition has been impaired, how the future of work will change with cognitive offloading, and what employers need to beware of and leverage.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2015</itunes:duration>
                <itunes:episode>278</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>276 - Guest: Michael Gerlich, Adaptability Thought Leader, part 1</title>
        <itunes:title>276 - Guest: Michael Gerlich, Adaptability Thought Leader, part 1</itunes:title>
        <link>https://aiandyou.net/e/276-guest-michael-gerlich-adaptability-thought-leader-part-1/</link>
        <comments>https://aiandyou.net/e/276-guest-michael-gerlich-adaptability-thought-leader-part-1/#comments</comments>
        <pubDate>Mon, 29 Sep 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/11c59d9c-f049-3d0e-a1c1-aaa2ffd9f659</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p>As we use AI more and more as a critical assistant, what might that be doing to our critical thinking? Professor <a href='https://www.linkedin.com/in/prof-dr-michael-gerlich-a2940619a/'>Michael Gerlich</a> has published his research in the paper “<a href='https://www.mdpi.com/2075-4698/15/1/6'>AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking</a>” in the journal Societies. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.</p>
<p>Michael is the Head of Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He’s also taught at the London School of Economics and Political Science, Cambridge, and other institutions. He’s also been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and Ministers of economic affairs in Azerbaijan.</p>
<p>We talk about “cognitive offloading” and the use of GenAI. How is it different from using calculators, which were widely forecast to cause math skills to atrophy and were banned from schools before we learned better? Michael looks at how AI, such as the big agents that may come with workplace IT systems, can help or hinder knowledge work, and at the consequences for on-the-job training.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-276-8275.pdf'>HumanCusp Blog</a>.</p>





<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>As we use AI more and more as a critical <em>assistant,</em> what might that be doing to our critical <em>thinking? </em>Professor <a href='https://www.linkedin.com/in/prof-dr-michael-gerlich-a2940619a/'>Michael Gerlich</a> has published his research in the paper “<a href='https://www.mdpi.com/2075-4698/15/1/6'>AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking</a>” in the journal <em>Societies</em>. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.</p>
<p>Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He has taught at the London School of Economics and Political Science, Cambridge, and other institutions, and has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and ministers of economic affairs in Azerbaijan.</p>
<p>We talk about “cognitive offloading” and the use of GenAI. How is it different from using calculators, which were widely forecast to cause math skills to atrophy and were banned from schools before we learned better? Michael looks at how AI, such as the big agents that may come with workplace IT systems, can help or hinder knowledge work, and at the consequences for on-the-job training.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-276-8275.pdf'>HumanCusp Blog</a>.</p>





<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8jhqvd5rckx38ji5/276_AI-and-You.mp3" length="36208458" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




As we use AI more and more as a critical assistant, what might that be doing to our critical thinking? Professor Michael Gerlich has published his research in the paper “AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking” in the journal Societies. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.
Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He has taught at the London School of Economics and Political Science, Cambridge, and other institutions, and has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and ministers of economic affairs in Azerbaijan.
We talk about “cognitive offloading” and the use of GenAI. How is it different from using calculators, which were widely forecast to cause math skills to atrophy and were banned from schools before we learned better? Michael looks at how AI, such as the big agents that may come with workplace IT systems, can help or hinder knowledge work, and at the consequences for on-the-job training.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2221</itunes:duration>
                <itunes:episode>277</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>275 - Guest: Carl Benedikt Frey, Professor of AI and Work, part 2</title>
        <itunes:title>275 - Guest: Carl Benedikt Frey, Professor of AI and Work, part 2</itunes:title>
        <link>https://aiandyou.net/e/275-guest-carl-benedikt-frey-professor-of-ai-and-work-part-2/</link>
                    <comments>https://aiandyou.net/e/275-guest-carl-benedikt-frey-professor-of-ai-and-work-part-2/#comments</comments>        <pubDate>Mon, 22 Sep 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/62374a8b-92ff-3638-b88d-a6cf54d2e598</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>



 
 



<p>"The book seems to be more timely than originally anticipated."  I'm talking with <a href='https://www.carlbenediktfrey.com/'>Carl Benedikt Frey</a> about his new book, <a href='https://press.princeton.edu/books/hardcover/9780691233079/how-progress-ends'>How Progress Ends: Technology, Innovation, and the Fate of Nations</a>, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study <a href='https://www.oxfordmartin.ox.ac.uk/publications/the-future-of-employment'>The Future of Employment: How Susceptible Are Jobs to Computerization</a>, which has received over 12,000 citations.</p>
<p>He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the <a href='https://www.oxfordmartin.ox.ac.uk/'>Oxford Martin School</a>, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.  </p>
<p>In the conclusion, we talk about the links between innovation and industry productivity, why AI hasn’t yet delivered broad gains, automation’s uneven effects on workers, the role of antitrust in sustaining competition, and the need for institutions like Oxford to adapt.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-275-4249.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>



 
 



<p>"The book seems to be more timely than originally anticipated."  I'm talking with <a href='https://www.carlbenediktfrey.com/'>Carl Benedikt Frey</a> about his new book, <a href='https://press.princeton.edu/books/hardcover/9780691233079/how-progress-ends'><em>How Progress Ends: Technology, Innovation, and the Fate of Nations</em></a>, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study <a href='https://www.oxfordmartin.ox.ac.uk/publications/the-future-of-employment'><em>The Future of Employment: How Susceptible Are Jobs to Computerization</em></a>, which has received over 12,000 citations.</p>
<p>He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the <a href='https://www.oxfordmartin.ox.ac.uk/'>Oxford Martin School</a>, both at the University of Oxford. His 2019 book, <em>The Technology Trap: Capital, Labor, and Power in the Age of Automation</em>, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.</p>
<p>In the conclusion, we talk about the links between innovation and industry productivity, why AI hasn’t yet delivered broad gains, automation’s uneven effects on workers, the role of antitrust in sustaining competition, and the need for institutions like Oxford to adapt.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-275-4249.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qh8twqwktggzdybd/275_AI-and-You.mp3" length="27842169" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .



 
 



"The book seems to be more timely than originally anticipated."  I'm talking with Carl Benedikt Frey about his new book, How Progress Ends: Technology, Innovation, and the Fate of Nations, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study The Future of Employment: How Susceptible Are Jobs to Computerization, which has received over 12,000 citations.
He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the Oxford Martin School, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.  
In the conclusion, we talk about the links between innovation and industry productivity, why AI hasn’t yet delivered broad gains, automation’s uneven effects on workers, the role of antitrust in sustaining competition, and the need for institutions like Oxford to adapt.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 

 



 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1698</itunes:duration>
                <itunes:episode>276</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>274 - Guest: Carl Benedikt Frey, Professor of AI and Work, part 1</title>
        <itunes:title>274 - Guest: Carl Benedikt Frey, Professor of AI and Work, part 1</itunes:title>
        <link>https://aiandyou.net/e/274-guest-carl-benedikt-frey-professor-of-ai-and-work-part-1/</link>
                    <comments>https://aiandyou.net/e/274-guest-carl-benedikt-frey-professor-of-ai-and-work-part-1/#comments</comments>        <pubDate>Mon, 15 Sep 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/73e1bbc7-2d2d-351b-b051-415f159466c6</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>



 
 



<p>"The book seems to be more timely than originally anticipated."  I'm talking with <a href='https://www.carlbenediktfrey.com/'>Carl Benedikt Frey</a> about his new book, <a href='https://press.princeton.edu/books/hardcover/9780691233079/how-progress-ends'>How Progress Ends: Technology, Innovation, and the Fate of Nations</a>, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study <a href='https://www.oxfordmartin.ox.ac.uk/publications/the-future-of-employment'>The Future of Employment: How Susceptible Are Jobs to Computerization</a>, which has received over 12,000 citations.</p>
<p>He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the <a href='https://www.oxfordmartin.ox.ac.uk/'>Oxford Martin School</a>, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.  </p>
<p>We talk about whether progress is inevitable, how growth depends on the interplay of technology and institutions, the link between productivity and innovation, the importance of institutional flexibility and decentralized funding, the effects of tariffs, the risks of China’s increasingly centralized model, and why the US and China are both triggering declining dynamism in each other.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-274-4857.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>



 
 



<p>"The book seems to be more timely than originally anticipated."  I'm talking with <a href='https://www.carlbenediktfrey.com/'>Carl Benedikt Frey</a> about his new book, <a href='https://press.princeton.edu/books/hardcover/9780691233079/how-progress-ends'><em>How Progress Ends: Technology, Innovation, and the Fate of Nations</em></a>, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study <a href='https://www.oxfordmartin.ox.ac.uk/publications/the-future-of-employment'><em>The Future of Employment: How Susceptible Are Jobs to Computerization</em></a>, which has received over 12,000 citations.</p>
<p>He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the <a href='https://www.oxfordmartin.ox.ac.uk/'>Oxford Martin School</a>, both at the University of Oxford. His 2019 book, <em>The Technology Trap: Capital, Labor, and Power in the Age of Automation</em>, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.</p>
<p>We talk about whether progress is inevitable, how growth depends on the interplay of technology and institutions, the link between productivity and innovation, the importance of institutional flexibility and decentralized funding, the effects of tariffs, the risks of China’s increasingly centralized model, and why the US and China are both triggering declining dynamism in each other.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/09/transcript-274-4857.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/99zxd9dsnrstx364/274_AI-and-You.mp3" length="34247812" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .



 
 



"The book seems to be more timely than originally anticipated."  I'm talking with Carl Benedikt Frey about his new book, How Progress Ends: Technology, Innovation, and the Fate of Nations, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study The Future of Employment: How Susceptible Are Jobs to Computerization, which has received over 12,000 citations.
He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the Oxford Martin School, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.  
We talk about whether progress is inevitable, how growth depends on the interplay of technology and institutions, the link between productivity and innovation, the importance of institutional flexibility and decentralized funding, the effects of tariffs, the risks of China’s increasingly centralized model, and why the US and China are both triggering declining dynamism in each other.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 

 



 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2099</itunes:duration>
                <itunes:episode>275</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>273 - Guest: Megan Peters, Computational Cognitive Scientist, part 2</title>
        <itunes:title>273 - Guest: Megan Peters, Computational Cognitive Scientist, part 2</itunes:title>
        <link>https://aiandyou.net/e/273-guest-megan-peters-computational-cognitive-scientist-part-2/</link>
                    <comments>https://aiandyou.net/e/273-guest-megan-peters-computational-cognitive-scientist-part-2/#comments</comments>        <pubDate>Mon, 08 Sep 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/64dcc894-1d89-3c62-877a-990822aa8006</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>



 
 



<p>I'm talking with <a href='https://sites.google.com/site/meganakpeters'>Megan Peters</a>, who researches thinking about thinking, or metacognition. She is an Associate Professor in the UC Irvine Department of Cognitive Sciences, studying how the brain represents and uses uncertainty, focusing on how these abilities support metacognitive evaluations of the quality of our decisions. She’s a Fellow in the UCI Center for the Neurobiology of Learning and Memory, the UCI Center for Theoretical Behavioral Sciences, and the Canadian Institute for Advanced Research (CIFAR) Brain Mind &amp; Consciousness program. She’s also President and Co-founder of <a href='http://www.neuromatch.io/'>Neuromatch</a>, an educational platform serving over 30,000 students in over 120 countries across computational neurosciences, deep learning, computational climate science, and neuroAI.</p>
<p>In our conclusion, we talk about Turing Tests, measuring the brain, the Haunted Mansion, some cool experiments on brains, and… cats.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-273-5592.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>



 
 



<p>I'm talking with <a href='https://sites.google.com/site/meganakpeters'>Megan Peters</a>, who researches thinking about thinking, or <em>metacognition</em>. She is an Associate Professor in the UC Irvine Department of Cognitive Sciences, studying how the brain represents and uses uncertainty, focusing on how these abilities support metacognitive evaluations of the quality of our decisions. She’s a Fellow in the UCI Center for the Neurobiology of Learning and Memory, the UCI Center for Theoretical Behavioral Sciences, and the Canadian Institute for Advanced Research (CIFAR) Brain Mind &amp; Consciousness program. She’s also President and Co-founder of <a href='http://www.neuromatch.io/'>Neuromatch</a>, an educational platform serving over 30,000 students in over 120 countries across computational neurosciences, deep learning, computational climate science, and neuroAI.</p>
<p>In our conclusion, we talk about Turing Tests, measuring the brain, the Haunted Mansion, some cool experiments on brains, and… cats.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-273-5592.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ww43ydcdpthdtbbe/273_AI-and-You.mp3" length="32943779" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .



 
 



I'm talking with Megan Peters, who researches thinking about thinking, or metacognition. She is an Associate Professor in the UC Irvine Department of Cognitive Sciences, studying how the brain represents and uses uncertainty, focusing on how these abilities support metacognitive evaluations of the quality of our decisions. She’s a Fellow in the UCI Center for the Neurobiology of Learning and Memory, the UCI Center for Theoretical Behavioral Sciences, and the Canadian Institute for Advanced Research (CIFAR) Brain Mind &amp; Consciousness program. She’s also President and Co-founder of Neuromatch, an educational platform serving over 30,000 students in over 120 countries across computational neurosciences, deep learning, computational climate science, and neuroAI.
In our conclusion, we talk about Turing Tests, measuring the brain, the Haunted Mansion, some cool experiments on brains, and… cats.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 

 



 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2017</itunes:duration>
                <itunes:episode>274</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>272 - Guest: Megan Peters, Computational Cognitive Scientist, part 1</title>
        <itunes:title>272 - Guest: Megan Peters, Computational Cognitive Scientist, part 1</itunes:title>
        <link>https://aiandyou.net/e/272-guest-megan-peters-computational-cognitive-scientist-part-1/</link>
                    <comments>https://aiandyou.net/e/272-guest-megan-peters-computational-cognitive-scientist-part-1/#comments</comments>        <pubDate>Mon, 01 Sep 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/30080ade-67bb-35f7-99af-9d09541dc090</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>Have you ever thought about thinking? That’s called metacognition, and <a href='https://sites.google.com/site/meganakpeters'>Megan Peters</a> thinks about that, a lot. She is an Associate Professor in the UC Irvine Department of Cognitive Sciences, researching how the brain represents and uses uncertainty, focusing on how these abilities support metacognitive evaluations of the quality of our decisions. She’s a Fellow in the UCI Center for the Neurobiology of Learning and Memory, the UCI Center for Theoretical Behavioral Sciences, and the Canadian Institute for Advanced Research (CIFAR) Brain Mind &amp; Consciousness program. She’s also President and Co-founder of <a href='http://www.neuromatch.io/'>Neuromatch</a>, an educational platform serving over 30,000 students in over 120 countries across computational neurosciences, deep learning, computational climate science, and neuroAI.</p>
<p>We get really meta here: talking about thinking about thinking, how we build models of the world, how language shapes our thinking, whether AI is doing metacognition in its chains of thought, statistical learning in AIs and humans, consciousness in humans and animals and AIs, and theories of consciousness.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-272-5638.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>Have you ever thought about thinking? That’s called <em>metacognition, </em>and <a href='https://sites.google.com/site/meganakpeters'>Megan Peters</a> thinks about that, a lot. She is an Associate Professor in the UC Irvine Department of Cognitive Sciences, researching how the brain represents and uses uncertainty, focusing on how these abilities support metacognitive evaluations of the quality of our decisions. She’s a Fellow in the UCI Center for the Neurobiology of Learning and Memory, the UCI Center for Theoretical Behavioral Sciences, and the Canadian Institute for Advanced Research (CIFAR) Brain Mind &amp; Consciousness program. She’s also President and Co-founder of <a href='http://www.neuromatch.io/'>Neuromatch</a>, an educational platform serving over 30,000 students in over 120 countries across computational neurosciences, deep learning, computational climate science, and neuroAI.</p>
<p>We get really meta here: talking about thinking about thinking, how we build models of the world, how language shapes our thinking, whether AI is doing metacognition in its chains of thought, statistical learning in AIs and humans, consciousness in humans and animals and AIs, and theories of consciousness.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-272-5638.pdf'>HumanCusp Blog</a>.</p>





<p> </p>

 



<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6ra2zcwhd5arqc4n/272_AI-and-You.mp3" length="39565924" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Have you ever thought about thinking? That’s called metacognition, and Megan Peters thinks about that, a lot. She is an Associate Professor in the UC Irvine Department of Cognitive Sciences, researching how the brain represents and uses uncertainty, focusing on how these abilities support metacognitive evaluations of the quality of our decisions. She’s a Fellow in the UCI Center for the Neurobiology of Learning and Memory, the UCI Center for Theoretical Behavioral Sciences, and the Canadian Institute for Advanced Research (CIFAR) Brain Mind &amp; Consciousness program. She’s also President and Co-founder of Neuromatch, an educational platform serving over 30,000 students in over 120 countries across computational neurosciences, deep learning, computational climate science, and neuroAI.
We get really meta here: talking about thinking about thinking,  how we build models of the world, how language shapes our thinking, whether AI is doing metacognition in its chains of thought, statistical learning in AIs and humans, consciousness in humans and animals and AIs, and theories of consciousness.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 

 



 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2431</itunes:duration>
                <itunes:episode>273</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>271 - Guest: Christof Koch, Cognitive Scientist, part 2</title>
        <itunes:title>271 - Guest: Christof Koch, Cognitive Scientist, part 2</itunes:title>
        <link>https://aiandyou.net/e/271-guest-christof-koch-cognitive-scientist-part-2/</link>
                    <comments>https://aiandyou.net/e/271-guest-christof-koch-cognitive-scientist-part-2/#comments</comments>        <pubDate>Mon, 25 Aug 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/bfb7c6e4-94e6-3ab8-bd5b-0dc48d546027</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>I am talking with neuroscientist <a href='https://christofkoch.com/'>Christof Koch</a>, and as he says, "How is it that we, a piece of furniture of the universe like a rock or a star or a tree, can love or hate or see or hear?" What, in other words, makes us conscious, and what does that mean? He is known for his work exploring the substrate of consciousness in humans, animals, and machines and is the author of more than 350 peer-reviewed publications and five books, the latest of which is <a href='https://www.goodreads.com/book/show/198493988-then-i-am-myself-the-world'>Then I Am Myself the World: What Consciousness Is and How to Expand It</a>. A physicist and neurobiologist, he was for more than a quarter of a century a professor of biology and engineering at the California Institute of Technology in Pasadena. In 2011, he became the Chief Scientist at the Allen Institute for Brain Science in Seattle and, in 2015, its president; he is now a Meritorious Investigator there. He is also the Chief Scientist of the <a href='https://tinybluedotfoundation.org/'>Tiny Blue Dot Foundation</a> in Santa Monica, seeking to understand consciousness, its place in nature, and how this knowledge can benefit all of humanity.</p>
<p>In part 2, we talk about Integrated Information Theory, a theory of consciousness of which Christof is a primary researcher, as well as tools for detecting and measuring consciousness, the magic number φ, the possibility of consciousness transfer, philosophical zombies, and neural correlates of consciousness.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-271-9032.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>I am talking with neuroscientist <a href='https://christofkoch.com/'>Christof Koch</a>, and as he says, "How is it that we, a piece of furniture of the universe like a rock or a star or a tree, can love or hate or see or hear?" What, in other words, makes us conscious, and what does that mean? He is known for his work exploring the substrate of consciousness in humans, animals, and machines and is the author of more than 350 peer-reviewed publications and five books, the latest of which is <a href='https://www.goodreads.com/book/show/198493988-then-i-am-myself-the-world'><em>Then I Am Myself the World: What Consciousness Is and How to Expand It</em></a>. A physicist and neurobiologist, he was for more than a quarter of a century a professor of biology and engineering at the California Institute of Technology in Pasadena. In 2011, he became the Chief Scientist at the Allen Institute for Brain Science in Seattle and in 2015, its president; he is now a Meritorious Investigator there. He is also the Chief Scientist of the <a href='https://tinybluedotfoundation.org/'>Tiny Blue Dot Foundation</a> in Santa Monica, seeking to understand consciousness, its place in nature, and how this knowledge can benefit all of humanity.</p>
<p>In part 2, we talk about Integrated Information Theory, a theory of consciousness on which Christof is a primary researcher, along with tools for detecting and measuring consciousness, the magic number φ, the possibility of consciousness transfer, philosophical zombies, and the neural correlates of consciousness.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-271-9032.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ui2eprqu8rb6rgim/271_AI-and-You.mp3" length="40025262" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/.




I am talking with neuroscientist Christof Koch, and as he says, "How is it that we, a piece of furniture of the universe like a rock or a star or a tree, can love or hate or see or hear?" What, in other words, makes us conscious, and what does that mean? He is known for his work exploring the substrate of consciousness in humans, animals, and machines and is the author of more than 350 peer-reviewed publications and five books, the latest of which is Then I Am Myself the World: What Consciousness Is and How to Expand It. A physicist and neurobiologist, he was for more than a quarter of a century a professor of biology and engineering at the California Institute of Technology in Pasadena. In 2011, he became the Chief Scientist at the Allen Institute for Brain Science in Seattle and in 2015, its president; he is now a Meritorious Investigator there. He is also the Chief Scientist of the Tiny Blue Dot Foundation in Santa Monica, seeking to understand consciousness, its place in nature, and how this knowledge can benefit all of humanity.
In part 2, we talk about Integrated Information Theory, a theory of consciousness on which Christof is a primary researcher, along with tools for detecting and measuring consciousness, the magic number φ, the possibility of consciousness transfer, philosophical zombies, and the neural correlates of consciousness.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2460</itunes:duration>
                <itunes:episode>272</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>270 - Guest: Christof Koch, Cognitive Scientist, part 1</title>
        <itunes:title>270 - Guest: Christof Koch, Cognitive Scientist, part 1</itunes:title>
        <link>https://aiandyou.net/e/270-guest-christof-koch-cognitive-scientist-part-1/</link>
                    <comments>https://aiandyou.net/e/270-guest-christof-koch-cognitive-scientist-part-1/#comments</comments>        <pubDate>Mon, 18 Aug 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7be10f32-c294-360e-8bb3-8fae4446b067</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>As my guest today says, "How is it that we, a piece of furniture of the universe like a rock or a star or a tree, can love or hate or see or hear?" What, in other words, makes us conscious, and what does that mean? He is the cognitive scientist <a href='https://christofkoch.com/'>Christof Koch</a>, known for his work exploring the substrate of consciousness in humans, animals, and machines. He is the author of more than 350 peer-reviewed publications and five books, the latest of which is <a href='https://www.goodreads.com/book/show/198493988-then-i-am-myself-the-world'>Then I Am Myself the World: What Consciousness Is and How to Expand It</a>. A physicist and neurobiologist, he was for more than a quarter of a century a professor of biology and engineering at the California Institute of Technology in Pasadena. In 2011, he became the Chief Scientist at the Allen Institute for Brain Science in Seattle and in 2015, its president; he is now a Meritorious Investigator there. He is also the Chief Scientist of the <a href='https://tinybluedotfoundation.org/'>Tiny Blue Dot Foundation</a> in Santa Monica, seeking to understand consciousness, its place in nature, and how this knowledge can benefit all of humanity.</p>
<p>Why is an AI show interested in consciousness? Because the questions constantly arise: Is AI conscious? How will we know when it is? How can or should we make it conscious? And if we can’t answer those questions for human beings, how will we answer them for anything else?</p>
<p>We talk about the relationships between existence, identity, quantum mechanics, language, and consciousness; cosmic consciousness; how conscious parts of your body might be; connecting brains to each other, including an example that’s already happened; and… opera. It is possibly literally mind-blowing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-270-7905.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>As my guest today says, "How is it that we, a piece of furniture of the universe like a rock or a star or a tree, can love or hate or see or hear?" What, in other words, makes us conscious, and what does that mean? He is the cognitive scientist <a href='https://christofkoch.com/'>Christof Koch</a>, known for his work exploring the substrate of consciousness in humans, animals, and machines. He is the author of more than 350 peer-reviewed publications and five books, the latest of which is <a href='https://www.goodreads.com/book/show/198493988-then-i-am-myself-the-world'><em>Then I Am Myself the World: What Consciousness Is and How to Expand It</em></a>. A physicist and neurobiologist, he was for more than a quarter of a century a professor of biology and engineering at the California Institute of Technology in Pasadena. In 2011, he became the Chief Scientist at the Allen Institute for Brain Science in Seattle and in 2015, its president; he is now a Meritorious Investigator there. He is also the Chief Scientist of the <a href='https://tinybluedotfoundation.org/'>Tiny Blue Dot Foundation</a> in Santa Monica, seeking to understand consciousness, its place in nature, and how this knowledge can benefit all of humanity.</p>
<p>Why is an AI show interested in consciousness? Because the questions constantly arise: Is AI conscious? How will we know when it is? How can or should we make it conscious? And if we can’t answer those questions for human beings, how will we answer them for anything else?</p>
<p>We talk about the relationships between existence, identity, quantum mechanics, language, and consciousness; cosmic consciousness; how conscious parts of your body might be; connecting brains to each other, including an example that’s already happened; and… opera. It is possibly literally mind-blowing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/08/transcript-270-7905.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/mi2sxmib2mjpfzi8/270_AI-and-You.mp3" length="36315038" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/.




As my guest today says, "How is it that we, a piece of furniture of the universe like a rock or a star or a tree, can love or hate or see or hear?" What, in other words, makes us conscious, and what does that mean? He is the cognitive scientist Christof Koch, known for his work exploring the substrate of consciousness in humans, animals, and machines. He is the author of more than 350 peer-reviewed publications and five books, the latest of which is Then I Am Myself the World: What Consciousness Is and How to Expand It. A physicist and neurobiologist, he was for more than a quarter of a century a professor of biology and engineering at the California Institute of Technology in Pasadena. In 2011, he became the Chief Scientist at the Allen Institute for Brain Science in Seattle and in 2015, its president; he is now a Meritorious Investigator there. He is also the Chief Scientist of the Tiny Blue Dot Foundation in Santa Monica, seeking to understand consciousness, its place in nature, and how this knowledge can benefit all of humanity.
Why is an AI show interested in consciousness? Because the questions constantly arise: Is AI conscious? How will we know when it is? How can or should we make it conscious? And if we can’t answer those questions for human beings, how will we answer them for anything else?
We talk about the relationships between existence, identity, quantum mechanics, language, and consciousness; cosmic consciousness; how conscious parts of your body might be; connecting brains to each other, including an example that’s already happened; and… opera. It is possibly literally mind-blowing.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2228</itunes:duration>
                <itunes:episode>271</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>269 - Guest: De Kai, Pioneer of Google Translate, part 2</title>
        <itunes:title>269 - Guest: De Kai, Pioneer of Google Translate, part 2</itunes:title>
        <link>https://aiandyou.net/e/269-guest-de-kai-pioneer-of-google-translate-part-2/</link>
                    <comments>https://aiandyou.net/e/269-guest-de-kai-pioneer-of-google-translate-part-2/#comments</comments>        <pubDate>Mon, 11 Aug 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/fe97b2db-3ab7-3708-b8f4-2f7833382d75</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>"We are in the privileged - or unfortunate - situation of being the last generation of humans to be parenting AIs. All the future generations of AIs are going to be parented mainly by AIs. And so even more than with our human children and grandchildren, we have one shot at raising this next generation correctly."</p>
<p>I am talking with <a href='https://dek.ai'>De Kai</a>, a pioneering professor of AI who built the web’s first global online language translator that spawned Google Translate and Microsoft Bing Translator, and the author of the new book <a href='https://mitpress.mit.edu/9780262049764/raising-ai/'>Raising AI: An Essential Guide to Parenting Our Future</a>.</p>
<p>De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at HKUST’s Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley’s International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. So he’s helped create some of the most important mechanisms and institutions of the modern AI age.</p>
<p>In the conclusion of our interview, we talk about how to parent AI and what that means, responsibilities of the AI companies, a kind of parent-teacher association for AI and how to get involved, and our responsibilities to the next generation.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-269-6215.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>"We are in the privileged - or unfortunate - situation of being the last generation of humans to be parenting AIs. All the future generations of AIs are going to be parented mainly by AIs. And so even more than with our human children and grandchildren, we have one shot at raising this next generation correctly."</p>
<p>I am talking with <a href='https://dek.ai'>De Kai</a>, a pioneering professor of AI who built the web’s first global online language translator that spawned Google Translate and Microsoft Bing Translator, and the author of the new book <a href='https://mitpress.mit.edu/9780262049764/raising-ai/'><em>Raising AI: An Essential Guide to Parenting Our Future</em></a>.</p>
<p>De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at HKUST’s Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley’s International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. So he’s helped create some of the most important mechanisms and institutions of the modern AI age.</p>
<p>In the conclusion of our interview, we talk about how to parent AI and what that means, responsibilities of the AI companies, a kind of parent-teacher association for AI and how to get involved, and our responsibilities to the next generation.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-269-6215.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ae72qpi33fjqrcj2/269_AI-and-You.mp3" length="35242555" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/.




"We are in the privileged - or unfortunate - situation of being the last generation of humans to be parenting AIs. All the future generations of AIs are going to be parented mainly by AIs. And so even more than with our human children and grandchildren, we have one shot at raising this next generation correctly."
I am talking with De Kai, a pioneering professor of AI who built the web’s first global online language translator that spawned Google Translate and Microsoft Bing Translator, and the author of the new book Raising AI: An Essential Guide to Parenting Our Future.
De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at HKUST’s Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley’s International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. So he’s helped create some of the most important mechanisms and institutions of the modern AI age.
In the conclusion of our interview, we talk about how to parent AI and what that means, responsibilities of the AI companies, a kind of parent-teacher association for AI and how to get involved, and our responsibilities to the next generation.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2161</itunes:duration>
                <itunes:episode>270</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>268 - Guest: De Kai, Pioneer of Google Translate, part 1</title>
        <itunes:title>268 - Guest: De Kai, Pioneer of Google Translate, part 1</itunes:title>
        <link>https://aiandyou.net/e/268-guest-de-kai-pioneer-of-google-translate-part-1/</link>
                    <comments>https://aiandyou.net/e/268-guest-de-kai-pioneer-of-google-translate-part-1/#comments</comments>        <pubDate>Mon, 04 Aug 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/cc711a83-e318-34bd-8dc5-704d0619131e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>As AI becomes more and more powerful, what is our responsibility, collectively? I am joined by <a href='https://dek.ai'>De Kai</a>, a pioneering professor of AI who built the web’s first global online language translator that spawned Google Translate and Microsoft Bing Translator. And he has answered those questions with his new book, <a href='https://mitpress.mit.edu/9780262049764/raising-ai/'>Raising AI: An Essential Guide to Parenting Our Future</a>.</p>
<p>De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at Hong Kong University of Science and Technology’s Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley’s International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. So he’s helped create some of the most important mechanisms and institutions of the modern AI age.</p>
<p>We talk about why we should parent AI, the existential issues that drove him to write the book, seeing AI as neuro-atypical, and the architecture and features of AI that are important to consider in how we relate to it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-268-3785.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>As AI becomes more and more powerful, what is our responsibility, collectively? I am joined by <a href='https://dek.ai'>De Kai</a>, a pioneering professor of AI who built the web’s first global online language translator that spawned Google Translate and Microsoft Bing Translator. And he has answered those questions with his new book, <a href='https://mitpress.mit.edu/9780262049764/raising-ai/'><em>Raising AI: An Essential Guide to Parenting Our Future</em></a>.</p>
<p>De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at Hong Kong University of Science and Technology’s Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley’s International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. So he’s helped create some of the most important mechanisms and institutions of the modern AI age.</p>
<p>We talk about why we should parent AI, the existential issues that drove him to write the book, seeing AI as neuro-atypical, and the architecture and features of AI that are important to consider in how we relate to it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-268-3785.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/4m7aky65k2a2p9yy/268_AI-and-You.mp3" length="31009464" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/.




As AI becomes more and more powerful, what is our responsibility, collectively? I am joined by De Kai, a pioneering professor of AI who built the web’s first global online language translator that spawned Google Translate and Microsoft Bing Translator. And he has answered those questions with his new book, Raising AI: An Essential Guide to Parenting Our Future.
De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at Hong Kong University of Science and Technology’s Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley’s International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI ethics council. So he’s helped create some of the most important mechanisms and institutions of the modern AI age.
We talk about why we should parent AI, the existential issues that drove him to write the book, seeing AI as neuro-atypical, and the architecture and features of AI that are important to consider in how we relate to it.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1896</itunes:duration>
                <itunes:episode>269</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>267 - Joint Episode with the London Futurists Podcast</title>
        <itunes:title>267 - Joint Episode with the London Futurists Podcast</itunes:title>
        <link>https://aiandyou.net/e/267-joint-episode-with-the-london-futurists-podcast/</link>
                    <comments>https://aiandyou.net/e/267-joint-episode-with-the-london-futurists-podcast/#comments</comments>        <pubDate>Mon, 28 Jul 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9a4cf975-69f3-3fb0-a290-2e3fb493de6e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Welcome to a joint episode with the London Futurists podcast, hosted by David Wood and Calum Chace, who have both been individual guests on this show in the past. </p>
<p>David is chair of the <a href='https://londonfuturists.com/'>London Futurists</a> and was named by T3 as one of the 100 most influential people in technology. He is the author of the books “The Singularity Principles: Anticipating and managing cataclysmically disruptive technologies” and, more recently, “The Death of Death.” Calum is the author of “Surviving AI: The promise and peril of artificial intelligence” and “The Economic Singularity: Artificial intelligence and the death of capitalism.” He was recently ranked sixth in a list of the world’s top futurist professionals by Global Gurus TOP30.</p>
<p>Together, David and Calum are key players at a new company called <a href='https://conscium.com'>Conscium</a>, which describes itself as “the world’s first applied AI consciousness research organisation.”</p>
<p>I follow every episode of the <a href='https://podcasts.apple.com/gb/podcast/london-futurists/id1642832645'>London Futurists podcast</a> and I recommend you do too. There’s a link to it in the show notes and transcript. And if you can attend any of the London Futurists meetings, they are very useful and thought-provoking, and many of them are online and free.</p>
<p>We are all talking together today about AI agents, AI safety and security, its effects on social cohesion and communication, and decentralization of control.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-267-7347.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Welcome to a joint episode with the London Futurists podcast, hosted by David Wood and Calum Chace, who have both been individual guests on this show in the past. </p>
<p>David is chair of the <a href='https://londonfuturists.com/'>London Futurists</a> and was named by T3 as one of the 100 most influential people in technology. He is the author of the books “The Singularity Principles: Anticipating and managing cataclysmically disruptive technologies” and, more recently, “The Death of Death.” Calum is the author of “Surviving AI: The promise and peril of artificial intelligence” and “The Economic Singularity: Artificial intelligence and the death of capitalism.” He was recently ranked sixth in a list of the world’s top futurist professionals by Global Gurus TOP30.</p>
<p>Together, David and Calum are key players at a new company called <a href='https://conscium.com'>Conscium</a>, which describes itself as “the world’s first applied AI consciousness research organisation.”</p>
<p>I follow every episode of the <a href='https://podcasts.apple.com/gb/podcast/london-futurists/id1642832645'>London Futurists podcast</a> and I recommend you do too. There’s a link to it in the show notes and transcript. And if you can attend any of the London Futurists meetings, they are very useful and thought-provoking, and many of them are online and free.</p>
<p>We are all talking together today about AI agents, AI safety and security, its effects on social cohesion and communication, and decentralization of control.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-267-7347.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/cejex7ezyrjc4pb7/267_AI-and-You.mp3" length="55228527" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/.




Welcome to a joint episode with the London Futurists podcast, hosted by David Wood and Calum Chace, who have both been individual guests on this show in the past. 
David is chair of the London Futurists and was named by T3 as one of the 100 most influential people in technology. He is the author of the books “The Singularity Principles: Anticipating and managing cataclysmically disruptive technologies” and, more recently, “The Death of Death.” Calum is the author of “Surviving AI: The promise and peril of artificial intelligence” and “The Economic Singularity: Artificial intelligence and the death of capitalism.” He was recently ranked sixth in a list of the world’s top futurist professionals by Global Gurus TOP30.
Together, David and Calum are key players at a new company called Conscium, which describes itself as “the world’s first applied AI consciousness research organisation.”
I follow every episode of the London Futurists podcast and I recommend you do too. There’s a link to it in the show notes and transcript. And if you can attend any of the London Futurists meetings, they are very useful and thought-provoking, and many of them are online and free.
We are all talking together today about AI agents, AI safety and security, its effects on social cohesion and communication, and decentralization of control.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3410</itunes:duration>
                <itunes:episode>268</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>266 - Guest: Kate Hayles, Literary and Technological Analyst, part 2</title>
        <itunes:title>266 - Guest: Kate Hayles, Literary and Technological Analyst, part 2</itunes:title>
        <link>https://aiandyou.net/e/266-guest-kate-hayles-literary-and-technological-analyst-part-2/</link>
                    <comments>https://aiandyou.net/e/266-guest-kate-hayles-literary-and-technological-analyst-part-2/#comments</comments>        <pubDate>Mon, 21 Jul 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/87578063-1c10-37f2-aa81-b8611ada769c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It’s more important than ever to define just what we mean by words like intelligence, consciousness, and thinking. Here to help us is Kate Hayles, Distinguished Research Professor at UCLA and the James B. Duke Professor Emerita from Duke University. Her research focuses on the relations of literature, science, and technology, and her books include <a href='https://cup.columbia.edu/book/postprint/9780231198257/'>Postprint: Books and Becoming Computational</a>, <a href='https://press.uchicago.edu/ucp/books/book/chicago/U/bo25861765.html'>Unthought: The Power of the Cognitive Nonconscious</a>, and <a href='https://press.uchicago.edu/ucp/books/book/chicago/H/bo5437533.html'>How We Think: Digital Media and Contemporary Technogenesis</a>. She has held fellowships from the National Endowment for the Humanities, the Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships, and she is a member of the American Academy of Arts and Sciences. We are focusing on her new book, <a href='https://press.uchicago.edu/ucp/books/book/chicago/B/bo238941793.html'>Bacteria to AI: Human Futures with our Nonhuman Symbionts</a>, where she lays out a new theory of mind—what she calls an integrated cognitive framework—that includes the meaning-making practices of lifeforms from bacteria to plants, animals, humans, and some forms of artificial intelligence.</p>
<p>In part 2, we talk about where meaning resides, for instance in poetry and literature; how students’ attention spans have shortened as a result of multitasking across multiple information streams, and how educational models need to change in response; how our cognitive symbiosis with AI might evolve; and markers of whether AI has consciousness or sentience, or deserves any individual rights.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-266-5508.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It’s more important than ever to define just what we mean by words like <em>intelligence</em>, <em>consciousness</em>, and <em>thinking</em>. Here to help us is Kate Hayles, Distinguished Research Professor at UCLA and the James B. Duke Professor Emerita from Duke University. Her research focuses on the relations of literature, science and technology, and her books include <em><a href='https://cup.columbia.edu/book/postprint/9780231198257/'>Postprint: Books and Becoming Computational</a></em>, <em><a href='https://press.uchicago.edu/ucp/books/book/chicago/U/bo25861765.html'>Unthought: The Power of the Cognitive Nonconscious</a>,</em> and <em><a href='https://press.uchicago.edu/ucp/books/book/chicago/H/bo5437533.html'>How We Think: Digital Media and Contemporary Technogenesis</a>.</em> She has fellowships from the National Endowment for the Humanities, the Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships, and she is a member of the American Academy of Arts and Sciences. We are focusing on her new book,<em> <a href='https://press.uchicago.edu/ucp/books/book/chicago/B/bo238941793.html'>Bacteria to AI:  Human Futures with our Nonhuman Symbionts</a>, </em>where she lays out a new theory of mind—what she calls an integrated cognitive framework—that includes the meaning-making practices of lifeforms from bacteria to plants, animals, humans, and some forms of artificial intelligence.</p>
<p>In part 2, we talk about where meaning resides, for instance in poetry and literature; how students’ attention spans have shortened as a result of multitasking across multiple information streams, and how educational models need to change in response; how our cognitive symbiosis with AI might evolve; and markers of whether AI has consciousness or sentience, or deserves any individual rights.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-266-5508.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ewvrvrcyyhznqwr6/266_AI-and-You.mp3" length="38238904" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




It’s more important than ever to define just what we mean by words like intelligence, consciousness, and thinking. Here to help us is Kate Hayles, Distinguished Research Professor at UCLA and the James B. Duke Professor Emerita from Duke University. Her research focuses on the relations of literature, science and technology, and her books include Postprint: Books and Becoming Computational, Unthought: The Power of the Cognitive Nonconscious, and How We Think: Digital Media and Contemporary Technogenesis. She has fellowships from the National Endowment for the Humanities, the Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships, and she is a member of the American Academy of Arts and Sciences. We are focusing on her new book, Bacteria to AI:  Human Futures with our Nonhuman Symbionts, where she lays out a new theory of mind—what she calls an integrated cognitive framework—that includes the meaning-making practices of lifeforms from bacteria to plants, animals, humans, and some forms of artificial intelligence.
In part 2, we talk about where meaning resides, for instance in poetry and literature; how students’ attention spans have shortened as a result of multitasking across multiple information streams, and how educational models need to change in response; how our cognitive symbiosis with AI might evolve; and markers of whether AI has consciousness or sentience, or deserves any individual rights.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2348</itunes:duration>
                <itunes:episode>267</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>265 - Guest: Kate Hayles, Literary and Technological Analyst, part 1</title>
        <itunes:title>265 - Guest: Kate Hayles, Literary and Technological Analyst, part 1</itunes:title>
        <link>https://aiandyou.net/e/265-guest-kate-hayles-literary-and-technological-analyst-part-1/</link>
        <comments>https://aiandyou.net/e/265-guest-kate-hayles-literary-and-technological-analyst-part-1/#comments</comments>
        <pubDate>Mon, 14 Jul 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5c8bfd2f-1d62-36b9-93e7-dcc35c18a850</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It’s more important than ever to define just what we mean by words like intelligence, consciousness, and thinking. Here to help us is Kate Hayles, Distinguished Research Professor at the University of California at Los Angeles. Her research focuses on the relations of literature, science and technology, and a new theory of mind—what she calls an integrated cognitive framework (ICF)—that includes the meaning-making practices of lifeforms from bacteria to plants, animals, humans, and some forms of artificial intelligence. That’s the topic of her new book, <a href='https://press.uchicago.edu/ucp/books/book/chicago/B/bo238941793.html'>Bacteria to AI:  Human Futures with our Nonhuman Symbionts</a>. A symbiont is an organism living in symbiosis with another, with a closely-coupled mutual relationship between them. Kate’s other books include <a href='https://cup.columbia.edu/book/postprint/9780231198257/'>Postprint: Books and Becoming Computational</a>, <a href='https://press.uchicago.edu/ucp/books/book/chicago/U/bo25861765.html'>Unthought: The Power of the Cognitive Nonconscious</a>, and <a href='https://press.uchicago.edu/ucp/books/book/chicago/H/bo5437533.html'>How We Think: Digital Media and Contemporary Technogenesis</a>. She has many fellowships, including two University of California Presidential Research Fellowships, and is a member of the American Academy of Arts and Sciences.</p>
<p>We talk about the relationship between cognition and consciousness, and between cognition and computation; our dependency and codependency on technology; concepts like anthropocentrism and technosymbiosis, which Kate unpacks in the service of laying out a really novel way of thinking about thinking; and whether AI is thinking, or feeling.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-265-0948.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It’s more important than ever to define just what we mean by words like <em>intelligence</em>, <em>consciousness</em>, and <em>thinking</em>. Here to help us is Kate Hayles, Distinguished Research Professor at the University of California at Los Angeles. Her research focuses on the relations of literature, science and technology, and a new theory of mind—what she calls an integrated cognitive framework (ICF)—that includes the meaning-making practices of lifeforms from bacteria to plants, animals, humans, and some forms of artificial intelligence. That’s the topic of her new book,<em> </em><a href='https://press.uchicago.edu/ucp/books/book/chicago/B/bo238941793.html'><em>Bacteria to AI:  Human Futures with our Nonhuman Symbionts</em></a><em>. </em>A <em>symbiont </em>is an organism living in symbiosis with another, with a closely-coupled mutual relationship between them. Kate’s other books include <a href='https://cup.columbia.edu/book/postprint/9780231198257/'><em>Postprint: Books and Becoming Computational</em></a>, <a href='https://press.uchicago.edu/ucp/books/book/chicago/U/bo25861765.html'><em>Unthought: The Power of the Cognitive Nonconscious</em></a><em>,</em> and <a href='https://press.uchicago.edu/ucp/books/book/chicago/H/bo5437533.html'><em>How We Think: Digital Media and Contemporary Technogenesis</em></a><em>.</em> She has many fellowships, including two University of California Presidential Research Fellowships, and is a member of the American Academy of Arts and Sciences.</p>
<p>We talk about the relationship between cognition and consciousness, and between cognition and computation; our dependency and codependency on technology; concepts like anthropocentrism and <em>technosymbiosis</em>, which Kate unpacks in the service of laying out a really novel way of thinking about thinking; and whether AI is thinking, or feeling.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/07/transcript-265-0948.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/g92b8pdp8np6giz3/265_AI-and-You.mp3" length="33382636" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




It’s more important than ever to define just what we mean by words like intelligence, consciousness, and thinking. Here to help us is Kate Hayles, Distinguished Research Professor at the University of California at Los Angeles. Her research focuses on the relations of literature, science and technology, and a new theory of mind—what she calls an integrated cognitive framework (ICF)—that includes the meaning-making practices of lifeforms from bacteria to plants, animals, humans, and some forms of artificial intelligence. That’s the topic of her new book, Bacteria to AI:  Human Futures with our Nonhuman Symbionts. A symbiont is an organism living in symbiosis with another, with a closely-coupled mutual relationship between them. Kate’s other books include Postprint: Books and Becoming Computational, Unthought: The Power of the Cognitive Nonconscious, and How We Think: Digital Media and Contemporary Technogenesis. She has many fellowships, including two University of California Presidential Research Fellowships, and is a member of the American Academy of Arts and Sciences.
We talk about the relationship between cognition and consciousness, and between cognition and computation; our dependency and codependency on technology; concepts like anthropocentrism and technosymbiosis, which Kate unpacks in the service of laying out a really novel way of thinking about thinking; and whether AI is thinking, or feeling.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2044</itunes:duration>
                <itunes:episode>266</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>264 - Guest: Dagan Shani, Filmmaker, part 2</title>
        <itunes:title>264 - Guest: Dagan Shani, Filmmaker, part 2</itunes:title>
        <link>https://aiandyou.net/e/264-guest-dagan-shani-filmmaker-part-2/</link>
        <comments>https://aiandyou.net/e/264-guest-dagan-shani-filmmaker-part-2/#comments</comments>
        <pubDate>Mon, 07 Jul 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/008bc44e-a8a6-3768-a335-9f95657377fe</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We get at this AI thing from many different angles: it changes us in so many ways that we're not going to understand it through just a technical discussion. So today we look at what it means to be human in the AI age through the eyes of <a href='https://www.youtube.com/@DaganOnAI'>Dagan Shani</a>, an independent filmmaker who has focused on the risks associated with AI through movies that evoke our emotions. His documentary <a href='https://www.youtube.com/watch?v=U1eyUjVRir4'>Don't Look Up - The Documentary: The Case for AI as an Existential Threat</a> was described by Max Tegmark as “the most important film of the year.” Shani’s short documentary from 2024, <a href='https://www.youtube.com/watch?v=S2lI_5pydKg'>Moloch - AI and the Deadly Force Driving Us to the Brink</a>, and his latest film, <a href='https://www.youtube.com/watch?v=jz5mWn3JMqM'>Obsolete - Human Work in the Age of AI</a>, both raise pressing questions about the rapid advancement of AI and its far-reaching implications.</p>
<p>In part 2, we talk about Universal Basic Income, more about Moloch, the Studio Ghibli incident, and the future of AI-generated visual media.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-264-0811.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We get at this AI thing from many different angles: it changes us in so many ways that we're not going to understand it through just a technical discussion. So today we look at what it means to be human in the AI age through the eyes of <a href='https://www.youtube.com/@DaganOnAI'>Dagan Shani</a>, an independent filmmaker who has focused on the risks associated with AI through movies that evoke our emotions. His documentary <a href='https://www.youtube.com/watch?v=U1eyUjVRir4'><em>Don't Look Up - The Documentary: The Case for AI as an Existential Threat</em></a> was described by Max Tegmark as “the most important film of the year.” Shani’s short documentary from 2024, <a href='https://www.youtube.com/watch?v=S2lI_5pydKg'><em>Moloch - AI and the Deadly Force Driving Us to the Brink</em></a>, and his latest film, <a href='https://www.youtube.com/watch?v=jz5mWn3JMqM'><em>Obsolete - Human Work in the Age of AI</em></a>, both raise pressing questions about the rapid advancement of AI and its far-reaching implications.</p>
<p>In part 2, we talk about Universal Basic Income, more about Moloch, the Studio Ghibli incident, and the future of AI-generated visual media.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-264-0811.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ktdpg3jbx5jy5tka/264_AI-and-You.mp3" length="35399707" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We get at this AI thing from many different angles: it changes us in so many ways that we're not going to understand it through just a technical discussion. So today we look at what it means to be human in the AI age through the eyes of Dagan Shani, an independent filmmaker who has focused on the risks associated with AI through movies that evoke our emotions. His documentary Don't Look Up - The Documentary: The Case for AI as an Existential Threat was described by Max Tegmark as “the most important film of the year.” Shani’s short documentary from 2024, Moloch - AI and the Deadly Force Driving Us to the Brink, and his latest film, Obsolete - Human Work in the Age of AI, both raise pressing questions about the rapid advancement of AI and its far-reaching implications.
In part 2, we talk about Universal Basic Income, more about Moloch, the Studio Ghibli incident, and the future of AI-generated visual media.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2171</itunes:duration>
                <itunes:episode>265</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>263 - Guest: Dagan Shani, Filmmaker, part 1</title>
        <itunes:title>263 - Guest: Dagan Shani, Filmmaker, part 1</itunes:title>
        <link>https://aiandyou.net/e/263-guest-dagan-shani-filmmaker-part-1/</link>
        <comments>https://aiandyou.net/e/263-guest-dagan-shani-filmmaker-part-1/#comments</comments>
        <pubDate>Mon, 30 Jun 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/65a15ab9-9e68-34d4-910c-9eb8030b7624</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We get at this AI thing from many different angles: it changes us in so many ways that we're not going to understand it through just a technical discussion. So today we look at what it means to be human in the AI age through the eyes of <a href='https://www.youtube.com/@DaganOnAI'>Dagan Shani</a>, an independent filmmaker who has focused on the risks associated with AI through movies that evoke our emotions. His documentary <a href='https://www.youtube.com/watch?v=U1eyUjVRir4'>Don't Look Up - The Documentary: The Case for AI as an Existential Threat</a> was described by Max Tegmark as “the most important film of the year.” Shani’s latest film, <a href='https://www.youtube.com/watch?v=jz5mWn3JMqM'>Obsolete - Human Work in the Age of AI</a>, raises pressing questions about the rapid advancement of AI. </p>
<p>We talk about Moloch, which is the legendary personification of competition that drives us to be our worst selves, and the subject of Shani’s short documentary from 2024, <a href='https://www.youtube.com/watch?v=S2lI_5pydKg'>Moloch - AI and the Deadly Force Driving Us to the Brink</a>. We also talk about game theory, AI hype, impact on jobs, and AI in movie making.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-263-8690.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We get at this AI thing from many different angles: it changes us in so many ways that we're not going to understand it through just a technical discussion. So today we look at what it means to be human in the AI age through the eyes of <a href='https://www.youtube.com/@DaganOnAI'>Dagan Shani</a>, an independent filmmaker who has focused on the risks associated with AI through movies that evoke our emotions. His documentary <a href='https://www.youtube.com/watch?v=U1eyUjVRir4'><em>Don't Look Up - The Documentary: The Case for AI as an Existential Threat</em></a> was described by Max Tegmark as “the most important film of the year.” Shani’s latest film, <a href='https://www.youtube.com/watch?v=jz5mWn3JMqM'><em>Obsolete - Human Work in the Age of AI</em></a>, raises pressing questions about the rapid advancement of AI. </p>
<p>We talk about <em>Moloch</em>, which is the legendary personification of competition that drives us to be our worst selves, and the subject of Shani’s short documentary from 2024, <a href='https://www.youtube.com/watch?v=S2lI_5pydKg'><em>Moloch - AI and the Deadly Force Driving Us to the Brink</em></a>. We also talk about game theory, AI hype, impact on jobs, and AI in movie making.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-263-8690.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/5qrjnncebyyniwwf/263_AI-and-You.mp3" length="38652684" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We get at this AI thing from many different angles: it changes us in so many ways that we're not going to understand it through just a technical discussion. So today we look at what it means to be human in the AI age through the eyes of Dagan Shani, an independent filmmaker who has focused on the risks associated with AI through movies that evoke our emotions. His documentary Don't Look Up - The Documentary: The Case for AI as an Existential Threat was described by Max Tegmark as “the most important film of the year.” Shani’s latest film, Obsolete - Human Work in the Age of AI, raises pressing questions about the rapid advancement of AI. 
We talk about Moloch, which is the legendary personification of competition that drives us to be our worst selves, and the subject of Shani’s short documentary from 2024, Moloch - AI and the Deadly Force Driving Us to the Brink. We also talk about game theory, AI hype, impact on jobs, and AI in movie making.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2374</itunes:duration>
                <itunes:episode>264</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>262 - Guests: Ja-Nae Duane and Steve Fisher, Futurists, part 2</title>
        <itunes:title>262 - Guests: Ja-Nae Duane and Steve Fisher, Futurists, part 2</itunes:title>
        <link>https://aiandyou.net/e/262-guests-ja-nae-duane-and-steve-fisher-futurists-part-2/</link>
        <comments>https://aiandyou.net/e/262-guests-ja-nae-duane-and-steve-fisher-futurists-part-2/#comments</comments>
        <pubDate>Mon, 23 Jun 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ab67b4c7-2f7e-3420-82d1-6a06c55d89ba</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do you deal personally, and organizationally, with exponential change? That’s the subject of a new book, <a href='https://tfn.kit.com/bb0223ff53'>Super Shifts: Transforming How We Live, Learn, And Work In The Age Of Intelligence</a>, and both of its authors are here. Dr. Ja-Naé Duane is a behavioral scientist who has worked with companies such as PwC, Saudi Aramco, AIG, and Deloitte. She is a member of the Loomis Council at the Stimson Center, a collaborator with the National Institutes of Health, and holds appointments at Brown University and MIT’s Center for Information Systems Research. Steve Fisher co-founded the Futures Practice at McKinsey &amp; Company and is the Managing Partner of the consultancy Revolution Factory. At FTI Consulting, he led the adoption of Generative AI for business model transformation, and is Chief Futurist at the Human Frontier Institute. Together, they have previously authored the bestseller <a href='https://www.amazon.com/Startup-Equation-Visual-Guidebook-Building/dp/007183236X'>The Startup Equation</a>.</p>
<p>In the conclusion of the interview, we’re going to talk about Asimov’s Laws of Robotics, AI’s future enhancements to our lives, the different new species of humans that will emerge, and how the educational system needs to evolve.</p>
<p>Steve and Ja-Naé have extended a special offer to the listeners of this show, to get two chapters of their book free via <a href='https://tfn.kit.com/bb0223ff53'>this link</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-262-7342.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do you deal personally, and organizationally, with exponential change? That’s the subject of a new book, <a href='https://tfn.kit.com/bb0223ff53'><em>Super Shifts: Transforming How We Live, Learn, And Work In The Age Of Intelligence</em></a>, and both of its authors are here. Dr. Ja-Naé Duane is a behavioral scientist who has worked with companies such as PwC, Saudi Aramco, AIG, and Deloitte. She is a member of the Loomis Council at the Stimson Center, a collaborator with the National Institutes of Health, and holds appointments at Brown University and MIT’s Center for Information Systems Research. Steve Fisher co-founded the Futures Practice at McKinsey &amp; Company and is the Managing Partner of the consultancy Revolution Factory. At FTI Consulting, he led the adoption of Generative AI for business model transformation, and is Chief Futurist at the Human Frontier Institute. Together, they have previously authored the bestseller <a href='https://www.amazon.com/Startup-Equation-Visual-Guidebook-Building/dp/007183236X'><em>The Startup Equation</em></a>.</p>
<p>In the conclusion of the interview, we’re going to talk about Asimov’s Laws of Robotics, AI’s future enhancements to our lives, the different new species of humans that will emerge, and how the educational system needs to evolve.</p>
<p>Steve and Ja-Naé have extended a special offer to the listeners of this show, to get two chapters of their book free via <a href='https://tfn.kit.com/bb0223ff53'>this link</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-262-7342.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/reqtveafrivibrp7/262_AI-and-You.mp3" length="35283515" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How do you deal personally, and organizationally, with exponential change? That’s the subject of a new book, Super Shifts: Transforming How We Live, Learn, And Work In The Age Of Intelligence, and both of its authors are here. Dr. Ja-Naé Duane is a behavioral scientist who has worked with companies such as PwC, Saudi Aramco, AIG, and Deloitte. She is a member of the Loomis Council at the Stimson Center, a collaborator with the National Institutes of Health, and holds appointments at Brown University and MIT’s Center for Information Systems Research. Steve Fisher co-founded the Futures Practice at McKinsey &amp; Company and is the Managing Partner of the consultancy Revolution Factory. At FTI Consulting, he led the adoption of Generative AI for business model transformation, and is Chief Futurist at the Human Frontier Institute. Together, they have previously authored the bestseller The Startup Equation.
In the conclusion of the interview, we’re going to talk about Asimov’s Laws of Robotics, AI’s future enhancements to our lives, the different new species of humans that will emerge, and how the educational system needs to evolve.
Steve and Ja-Naé have extended a special offer to the listeners of this show, to get two chapters of their book free via this link.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2163</itunes:duration>
                <itunes:episode>263</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>261 - Guests: Ja-Nae Duane and Steve Fisher, Futurists, part 1</title>
        <itunes:title>261 - Guests: Ja-Nae Duane and Steve Fisher, Futurists, part 1</itunes:title>
        <link>https://aiandyou.net/e/261-guests-ja-nae-duane-and-steve-fisher-futurists-part-1/</link>
        <comments>https://aiandyou.net/e/261-guests-ja-nae-duane-and-steve-fisher-futurists-part-1/#comments</comments>
        <pubDate>Mon, 16 Jun 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f94848c6-db65-3aee-a254-8ab3d3c603f4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do you deal personally, and organizationally, with exponential change? That’s the subject of a new book, <a href='https://www.supershiftsbook.com'>Super Shifts: Transforming How We Live, Learn, And Work In The Age Of Intelligence</a>, and both of its authors are here. Dr. Ja-Naé Duane is a behavioral scientist who has worked with companies such as PwC, Saudi Aramco, AIG, and Deloitte. She is a member of the Loomis Council at the Stimson Center, a collaborator with the National Institutes of Health, and holds appointments at Brown University and MIT’s Center for Information Systems Research. Steve Fisher co-founded the Futures Practice at McKinsey &amp; Company and is the Managing Partner of the consultancy Revolution Factory. At FTI Consulting, he led the adoption of Generative AI for business model transformation, and is Chief Futurist at the Human Frontier Institute. Together, they have previously authored the bestseller <a href='https://www.amazon.com/Startup-Equation-Visual-Guidebook-Building/dp/007183236X'>The Startup Equation</a>.</p>
<p>This week, we’re going to talk about what shaped their careers in this work; the definition of a super shift and how people react to such shifts over different timescales; human patterns of change; how a family might be dealing with all this in 15 years; and… opera.</p>


<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-261-5197.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do you deal personally, and organizationally, with exponential change? That’s the subject of a new book, <a href='https://www.supershiftsbook.com'><em>Super Shifts: Transforming How We Live, Learn, and Work in the Age of Intelligence</em></a>, and both of its authors are here. Dr. Ja-Naé Duane is a behavioral scientist who has worked with companies such as PwC, Saudi Aramco, AIG, and Deloitte. She is a member of the Loomis Council at the Stimson Center, a collaborator with the National Institutes of Health, and holds appointments at Brown University and MIT’s Center for Information Systems Research. Steve Fisher co-founded the Futures Practice at McKinsey &amp; Company and is the Managing Partner of the consultancy Revolution Factory. At FTI Consulting, he led the adoption of generative AI for business model transformation, and he is Chief Futurist at the Human Frontier Institute. Together, they previously authored the bestseller <a href='https://www.amazon.com/Startup-Equation-Visual-Guidebook-Building/dp/007183236X'><em>The Startup Equation</em></a>.</p>
<p>This week, we’re going to talk about what shaped their careers in this work, the definition of a super shift and how people react to them over different timescales, human patterns of change, how a family might be dealing with all this in 15 years, and… opera.</p>

<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/06/transcript-261-5197.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/7899s85vhbvte8sj/261_AI-and-You.mp3" length="28911727" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How do you deal personally, and organizationally, with exponential change? That’s the subject of a new book, Super Shifts: Transforming How We Live, Learn, and Work in the Age of Intelligence, and both of its authors are here. Dr. Ja-Naé Duane is a behavioral scientist who has worked with companies such as PwC, Saudi Aramco, AIG, and Deloitte. She is a member of the Loomis Council at the Stimson Center, a collaborator with the National Institutes of Health, and holds appointments at Brown University and MIT’s Center for Information Systems Research. Steve Fisher co-founded the Futures Practice at McKinsey &amp; Company and is the Managing Partner of the consultancy Revolution Factory. At FTI Consulting, he led the adoption of generative AI for business model transformation, and he is Chief Futurist at the Human Frontier Institute. Together, they previously authored the bestseller The Startup Equation.
This week, we’re going to talk about what shaped their careers in this work, the definition of a super shift and how people react to them over different timescales, human patterns of change, how a family might be dealing with all this in 15 years, and… opera.

All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1765</itunes:duration>
                <itunes:episode>262</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>260 - Guest: Nada Sanders, Global Business Futurist, part 2</title>
        <itunes:title>260 - Guest: Nada Sanders, Global Business Futurist, part 2</itunes:title>
        <link>https://aiandyou.net/e/260-guest-nada-sanders-global-business-futurist-part-2/</link>
        <comments>https://aiandyou.net/e/260-guest-nada-sanders-global-business-futurist-part-2/#comments</comments>
        <pubDate>Mon, 09 Jun 2025 12:25:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b43a2fad-1816-3ffa-9521-28e63613da7c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What does the ideal integration of humans and technology look like in business in the future? <a href='https://nadasanders.com'>Nada Sanders</a> calls that a “Humachine.” She is a thought leader and expert in forecasting, human-technology integration, and digital transformation, and holds an MBA and a PhD in supply chain management. She is the author of seven books, including <a href='https://www.routledge.com/The-Humachine-Humankind-Machines-and-the-Future-of-Enterprise/Sanders-Wood/p/book/9781138571341'>The Humachine: Humankind, Machines, and the Future of Enterprise</a>, and a Fellow of the Decision Sciences Institute. In 2022 she was awarded the prestigious Robert D. Klein Lecturer Award by Northeastern University.</p>
<p>In part 2, we talk about intentionality, integration, implementation, and indication, what to digitize in digital transformation, KPIs and other indicators of success in an AI-first business, and how culture needs to shift in the enterprise. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-260-7120.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What does the ideal integration of humans and technology look like in business in the future? <a href='https://nadasanders.com'>Nada Sanders</a> calls that a “Humachine.” She is a thought leader and expert in forecasting, human-technology integration, and digital transformation, and holds an MBA and a PhD in supply chain management. She is the author of seven books, including <a href='https://www.routledge.com/The-Humachine-Humankind-Machines-and-the-Future-of-Enterprise/Sanders-Wood/p/book/9781138571341'><em>The Humachine: Humankind, Machines, and the Future of Enterprise</em></a>, and a Fellow of the Decision Sciences Institute. In 2022 she was awarded the prestigious Robert D. Klein Lecturer Award by Northeastern University.</p>
<p>In part 2, we talk about intentionality, integration, implementation, and indication, what to digitize in digital transformation, KPIs and other indicators of success in an AI-first business, and how culture needs to shift in the enterprise. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-260-7120.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3m4s4jbue9zxyhq7/260_AI-and-You.mp3" length="29843776" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What does the ideal integration of humans and technology look like in business in the future? Nada Sanders calls that a “Humachine.” She is a thought leader and expert in forecasting, human-technology integration, and digital transformation, and holds an MBA and a PhD in supply chain management. She is the author of seven books, including The Humachine: Humankind, Machines, and the Future of Enterprise, and a Fellow of the Decision Sciences Institute. In 2022 she was awarded the prestigious Robert D. Klein Lecturer Award by Northeastern University.
In part 2, we talk about intentionality, integration, implementation, and indication, what to digitize in digital transformation, KPIs and other indicators of success in an AI-first business, and how culture needs to shift in the enterprise. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1823</itunes:duration>
                <itunes:episode>261</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>259 - Guest: Nada Sanders, Global Business Futurist, part 1</title>
        <itunes:title>259 - Guest: Nada Sanders, Global Business Futurist, part 1</itunes:title>
        <link>https://aiandyou.net/e/259-guest-nada-sanders-global-business-futurist-part-1/</link>
        <comments>https://aiandyou.net/e/259-guest-nada-sanders-global-business-futurist-part-1/#comments</comments>
        <pubDate>Mon, 02 Jun 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/bd6dafbf-cd1a-354d-8f11-135ff4a3dccd</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What does the ideal integration of humans and technology look like in business in the future? <a href='https://nadasanders.com'>Nada Sanders</a> calls that a “Humachine.” She is a thought leader and expert in forecasting, human-technology integration, and digital transformation, and holds an MBA and a PhD in supply chain management. She is the author of seven books, including <a href='https://www.routledge.com/The-Humachine-Humankind-Machines-and-the-Future-of-Enterprise/Sanders-Wood/p/book/9781138571341'>The Humachine: Humankind, Machines, and the Future of Enterprise</a>, and a Fellow of the Decision Sciences Institute. In 2022 she was awarded the prestigious Robert D. Klein Lecturer Award by Northeastern University.</p>
<p>In part 1, Nada defines the Humachine, and we talk about the ideal relationship between humans and AI, Kasparov’s Law, what skills have atrophied in the younger workforce, how software jobs are changing, and where to set the boundary between AI assistants and human assistants.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-259-6367.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What does the ideal integration of humans and technology look like in business in the future? <a href='https://nadasanders.com'>Nada Sanders</a> calls that a “Humachine.” She is a thought leader and expert in forecasting, human-technology integration, and digital transformation, and holds an MBA and a PhD in supply chain management. She is the author of seven books, including <a href='https://www.routledge.com/The-Humachine-Humankind-Machines-and-the-Future-of-Enterprise/Sanders-Wood/p/book/9781138571341'><em>The Humachine: Humankind, Machines, and the Future of Enterprise</em></a>, and a Fellow of the Decision Sciences Institute. In 2022 she was awarded the prestigious Robert D. Klein Lecturer Award by Northeastern University.</p>
<p>In part 1, Nada defines the Humachine, and we talk about the ideal relationship between humans and AI, Kasparov’s Law, what skills have atrophied in the younger workforce, how software jobs are changing, and where to set the boundary between AI assistants and human assistants.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-259-6367.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/pjxicmabxx8q3qav/259_AI-and-You.mp3" length="33108455" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What does the ideal integration of humans and technology look like in business in the future? Nada Sanders calls that a “Humachine.” She is a thought leader and expert in forecasting, human-technology integration, and digital transformation, and holds an MBA and a PhD in supply chain management. She is the author of seven books, including The Humachine: Humankind, Machines, and the Future of Enterprise, and a Fellow of the Decision Sciences Institute. In 2022 she was awarded the prestigious Robert D. Klein Lecturer Award by Northeastern University.
In part 1, Nada defines the Humachine, and we talk about the ideal relationship between humans and AI, Kasparov’s Law, what skills have atrophied in the younger workforce, how software jobs are changing, and where to set the boundary between AI assistants and human assistants.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2027</itunes:duration>
                <itunes:episode>260</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>258 - Guests: Emily Bender &amp; Alex Hanna, Authors, part 2</title>
        <itunes:title>258 - Guests: Emily Bender &amp; Alex Hanna, Authors, part 2</itunes:title>
        <link>https://aiandyou.net/e/258-guests-emily-bender-alex-hanna-authors-part-2/</link>
        <comments>https://aiandyou.net/e/258-guests-emily-bender-alex-hanna-authors-part-2/#comments</comments>
        <pubDate>Mon, 26 May 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8e957c8d-7498-331f-a27f-22a8a091c13a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>I am talking with Drs. Emily Bender and Alex Hanna, authors of the upcoming book, <a href='https://thecon.ai/'>The AI Con: How to Fight Big Tech's Hype and Create the Future We Want</a>, and also co-hosts of the live podcast <a href='https://www.dair-institute.org/maiht3k/'>Mystery AI Hype Theater 3000</a>.</p>
<p>Emily is well known for coining the term “stochastic parrots” in a 2021 paper as a label for generative AI. She is a linguistics professor and director of the <a href='https://depts.washington.edu/uwcl/twiki/bin/view.cgi/Main/WebHome'>Computational Linguistics Laboratory</a> at the University of Washington and was among the inaugural <a href='https://time.com/collection/time100-ai/6308275/emily-m-bender/'>TIME100 AI</a>. Alex is a sociologist who looks at how the data that fuels AI technologies exacerbates racial, gender, and class inequality. She is Director of Research at the <a href='https://www.dair-institute.org/'>Distributed AI Research Institute</a> and a Senior Fellow at the <a href='https://www.appliedtransstudies.org/'>Center for Applied Transgender Studies</a>.</p>
<p>In part 2, we talk about the dangers of uncritical naming, anthropomorphizing, Luddites and bespoke crafting, the effects of synthetic content on interpersonal communications, capitalism, and collective action strategies. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-258-0307.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>I am talking with Drs. Emily Bender and Alex Hanna, authors of the upcoming book, <a href='https://thecon.ai/'><em>The AI Con: How to Fight Big Tech's Hype and Create the Future We Want</em></a>, and also co-hosts of the live podcast <a href='https://www.dair-institute.org/maiht3k/'><em>Mystery AI Hype Theater 3000</em></a>.</p>
<p>Emily is well known for coining the term “stochastic parrots” in a 2021 paper as a label for generative AI. She is a linguistics professor and director of the <a href='https://depts.washington.edu/uwcl/twiki/bin/view.cgi/Main/WebHome'>Computational Linguistics Laboratory</a> at the University of Washington and was among the inaugural <a href='https://time.com/collection/time100-ai/6308275/emily-m-bender/'>TIME100 AI</a>. Alex is a sociologist who looks at how the data that fuels AI technologies exacerbates racial, gender, and class inequality. She is Director of Research at the <a href='https://www.dair-institute.org/'>Distributed AI Research Institute</a> and a Senior Fellow at the <a href='https://www.appliedtransstudies.org/'>Center for Applied Transgender Studies</a>.</p>
<p>In part 2, we talk about the dangers of uncritical naming, anthropomorphizing, Luddites and bespoke crafting, the effects of synthetic content on interpersonal communications, capitalism, and collective action strategies. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-258-0307.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/pz37mdf2avr7gzsy/258_AI-and-You.mp3" length="33141474" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




I am talking with Drs. Emily Bender and Alex Hanna, authors of the upcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, and also co-hosts of the live podcast Mystery AI Hype Theater 3000.
Emily is well known for coining the term “stochastic parrots” in a 2021 paper as a label for generative AI. She is a linguistics professor and director of the Computational Linguistics Laboratory at the University of Washington and was among the inaugural TIME100 AI. Alex is a sociologist who looks at how the data that fuels AI technologies exacerbates racial, gender, and class inequality. She is Director of Research at the Distributed AI Research Institute and a Senior Fellow at the Center for Applied Transgender Studies.
In part 2, we talk about the dangers of uncritical naming, anthropomorphizing, Luddites and bespoke crafting, the effects of synthetic content on interpersonal communications, capitalism, and collective action strategies. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2029</itunes:duration>
                <itunes:episode>259</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>257 - Guests: Emily Bender &amp; Alex Hanna, Authors, part 1</title>
        <itunes:title>257 - Guests: Emily Bender &amp; Alex Hanna, Authors, part 1</itunes:title>
        <link>https://aiandyou.net/e/257-guests-emily-bender-alex-hanna-authors-part-1/</link>
        <comments>https://aiandyou.net/e/257-guests-emily-bender-alex-hanna-authors-part-1/#comments</comments>
        <pubDate>Mon, 19 May 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7ffc8c84-53d2-3835-a955-c51701a64cb7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>When people take a unipolar position that AI is going to be wonderful, or terrible, or inconsequential, they end up painting themselves into a corner where that’s the only story they can allow themselves to express, and that obscures the truth. So for us to do our due diligence in exploring the dimensions of AI, today I am talking with Drs. Emily Bender and Alex Hanna, authors of the upcoming book, <a href='https://thecon.ai/'>The AI Con: How to Fight Big Tech's Hype and Create the Future We Want</a>, and also co-hosts of the live podcast <a href='https://www.dair-institute.org/maiht3k/'>Mystery AI Hype Theater 3000</a>.</p>
<p>Emily is well known for coining the term “stochastic parrots” in a 2021 paper as a label for generative AI. She is a linguistics professor and director of the <a href='https://depts.washington.edu/uwcl/twiki/bin/view.cgi/Main/WebHome'>Computational Linguistics Laboratory</a> at the University of Washington and was among the inaugural <a href='https://time.com/collection/time100-ai/6308275/emily-m-bender/'>TIME100 AI</a>. Alex is a sociologist who looks at how the data that fuels AI technologies exacerbates racial, gender, and class inequality. She is Director of Research at the <a href='https://www.dair-institute.org/'>Distributed AI Research Institute</a> and a Senior Fellow at the <a href='https://www.appliedtransstudies.org/'>Center for Applied Transgender Studies</a>.</p>
<p>In part 1, we talk about their intentions with the book, cycles of hype and the effects of hype, the dangers of uncritical use of LLMs, “Slow Science”, and academic institutional culture. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-257-8757.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>When people take a unipolar position that AI is going to be wonderful, or terrible, or inconsequential, they end up painting themselves into a corner where that’s the only story they can allow themselves to express, and that obscures the truth. So for us to do our due diligence in exploring the dimensions of AI, today I am talking with Drs. Emily Bender and Alex Hanna, authors of the upcoming book, <a href='https://thecon.ai/'><em>The AI Con: How to Fight Big Tech's Hype and Create the Future We Want</em></a>, and also co-hosts of the live podcast <a href='https://www.dair-institute.org/maiht3k/'><em>Mystery AI Hype Theater 3000</em></a>.</p>
<p>Emily is well known for coining the term “stochastic parrots” in a 2021 paper as a label for generative AI. She is a linguistics professor and director of the <a href='https://depts.washington.edu/uwcl/twiki/bin/view.cgi/Main/WebHome'>Computational Linguistics Laboratory</a> at the University of Washington and was among the inaugural <a href='https://time.com/collection/time100-ai/6308275/emily-m-bender/'>TIME100 AI</a>. Alex is a sociologist who looks at how the data that fuels AI technologies exacerbates racial, gender, and class inequality. She is Director of Research at the <a href='https://www.dair-institute.org/'>Distributed AI Research Institute</a> and a Senior Fellow at the <a href='https://www.appliedtransstudies.org/'>Center for Applied Transgender Studies</a>.</p>
<p>In part 1, we talk about their intentions with the book, cycles of hype and the effects of hype, the dangers of uncritical use of LLMs, “Slow Science”, and academic institutional culture. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-257-8757.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/fpayp8zdp8yvbhi9/257_AI-and-You.mp3" length="33599668" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




When people take a unipolar position that AI is going to be wonderful, or terrible, or inconsequential, they end up painting themselves into a corner where that’s the only story they can allow themselves to express, and that obscures the truth. So for us to do our due diligence in exploring the dimensions of AI, today I am talking with Drs. Emily Bender and Alex Hanna, authors of the upcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, and also co-hosts of the live podcast Mystery AI Hype Theater 3000.
Emily is well known for coining the term “stochastic parrots” in a 2021 paper as a label for generative AI. She is a linguistics professor and director of the Computational Linguistics Laboratory at the University of Washington and was among the inaugural TIME100 AI. Alex is a sociologist who looks at how the data that fuels AI technologies exacerbates racial, gender, and class inequality. She is Director of Research at the Distributed AI Research Institute and a Senior Fellow at the Center for Applied Transgender Studies.
In part 1, we talk about their intentions with the book, cycles of hype and the effects of hype, the dangers of uncritical use of LLMs, “Slow Science”, and academic institutional culture. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2099</itunes:duration>
                <itunes:episode>258</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>256 - Guest: Diane Gutiw, AI Research Center Lead, part 2</title>
        <itunes:title>256 - Guest: Diane Gutiw, AI Research Center Lead, part 2</itunes:title>
        <link>https://aiandyou.net/e/256-guest-diane-gutiw-ai-research-center-lead-part-2/</link>
        <comments>https://aiandyou.net/e/256-guest-diane-gutiw-ai-research-center-lead-part-2/#comments</comments>
        <pubDate>Mon, 12 May 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c7e315ad-5926-32fd-a358-134d5b820c46</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy <a href='https://cgi.com'>CGI</a>. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors. </p>
<p>In part 2, we talk about synthetic data, digital triplets, agentic AI and continuous autonomous improvement, and best practices for compliance. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-256-6745.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy <a href='https://cgi.com'>CGI</a>. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors. </p>
<p>In part 2, we talk about synthetic data, digital triplets, agentic AI and continuous autonomous improvement, and best practices for compliance. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/05/transcript-256-6745.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/brt3pg64b2k6kdhs/256_AI-and-You.mp3" length="28398055" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy CGI. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors. 
In part 2, we talk about synthetic data, digital triplets, agentic AI and continuous autonomous improvement, and best practices for compliance. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1733</itunes:duration>
                <itunes:episode>257</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>255 - Guest: Diane Gutiw, AI Research Center Lead, part 1</title>
        <itunes:title>255 - Guest: Diane Gutiw, AI Research Center Lead, part 1</itunes:title>
        <link>https://aiandyou.net/e/255-guest-diane-gutiw-ai-research-center-lead-part-1/</link>
        <comments>https://aiandyou.net/e/255-guest-diane-gutiw-ai-research-center-lead-part-1/#comments</comments>
        <pubDate>Mon, 05 May 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d0e5eb4a-9819-33ca-8473-e390da5499ca</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy CGI. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors. </p>
<p>We talk about how enterprises manage the integration of AI at the dizzying speeds of change today, where AI does and does not impact employment, how the HR department should change in those enterprises, how to deal with hallucinations, and how to manage the risks of deploying generative AI in customer solutions. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-255-2562.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy CGI. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors. </p>
<p>We talk about how enterprises manage the integration of AI at the dizzying speeds of change today, where AI does and does not impact employment, how the HR department should change in those enterprises, how to deal with hallucinations, and how to manage the risks of deploying generative AI in customer solutions. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-255-2562.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/zvgk7gsb8qsdvevs/255_AI-and-You.mp3" length="32572213" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How to manage the integration of AI at scale into the enterprise is the territory of today's guest, Diane Gutiw, Vice President and leader of the AI research center at the global business consultancy CGI. She holds a PhD in Medical Information Technology Management and has led collaborative strategy design and implementation planning for advanced analytics and AI for large organizations in the energy and utilities, railway, and government healthcare sectors. 
We talk about how enterprises manage the integration of AI at the dizzying speeds of change today, where AI does and does not impact employment, how the HR department should change in those enterprises, how to deal with hallucinations, and how to manage the risks of deploying generative AI in customer solutions. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1994</itunes:duration>
                <itunes:episode>256</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>254 - Guest: Seth Baum, Global Catastrophic Risks Institute, part 2</title>
        <itunes:title>254 - Guest: Seth Baum, Global Catastrophic Risks Institute, part 2</itunes:title>
        <link>https://aiandyou.net/e/254-guest-seth-baum-global-catastrophic-risks-institute-part-2/</link>
                    <comments>https://aiandyou.net/e/254-guest-seth-baum-global-catastrophic-risks-institute-part-2/#comments</comments>
        <pubDate>Mon, 28 Apr 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a9be6307-106e-3069-90c1-7a52cc156c3c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're talking about catastrophic risks, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful when discussing them with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the <a href='https://gcrinstitute.org/'>Global Catastrophic Risks Institute</a> in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.</p>
<p>We talk about national bias in models, coherent extrapolated volition – like, what is it – the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-254-4507.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're talking about <em>catastrophic risks</em>, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful when discussing them with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the <a href='https://gcrinstitute.org/'>Global Catastrophic Risks Institute</a> in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.</p>
<p>We talk about national bias in models, coherent extrapolated volition – like, what is it – the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-254-4507.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/s4g2h4zu6dchs67k/254_AI-and-You.mp3" length="31796481" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We're talking about catastrophic risks, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful when discussing them with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risks Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.
We talk about national bias in models, coherent extrapolated volition – like, what is it – the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1945</itunes:duration>
                <itunes:episode>255</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>253 - Guest: Seth Baum, Global Catastrophic Risks Institute, part 1</title>
        <itunes:title>253 - Guest: Seth Baum, Global Catastrophic Risks Institute, part 1</itunes:title>
        <link>https://aiandyou.net/e/253-guest-seth-baum-global-catastrophic-risks-institute-part-1/</link>
                    <comments>https://aiandyou.net/e/253-guest-seth-baum-global-catastrophic-risks-institute-part-1/#comments</comments>
        <pubDate>Mon, 21 Apr 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/732e4d7a-c09f-3cbe-84e9-d63ea05897df</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're talking about catastrophic risks, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful when discussing them with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the <a href='https://gcrinstitute.org/'>Global Catastrophic Risks Institute</a> in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.</p>
<p>We talk about how it feels to work on existential threats every day, AI as a horizontal risk as well as a vertical one, near-term value versus long-term value, AI being used to change the decisions of populations or voting blocs, and AI as a dual-use technology.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-253-4712.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're talking about <em>catastrophic risks</em>, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful when discussing them with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the <a href='https://gcrinstitute.org/'>Global Catastrophic Risks Institute</a> in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.</p>
<p>We talk about how it feels to work on existential threats every day, AI as a horizontal risk as well as a vertical one, near-term value versus long-term value, AI being used to change the decisions of populations or voting blocs, and AI as a dual-use technology.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-253-4712.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/zdzzt84e4ax5qjgq/253_AI-and-You.mp3" length="25319263" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We're talking about catastrophic risks, something that can be depressing for people who haven’t confronted these things before, and so I have had to be careful when discussing them with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that’s a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risks Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He’s authored papers on pandemics, nuclear winter, and notably for our show, AI.
We talk about how it feels to work on existential threats every day, AI as a horizontal risk as well as a vertical one, near-term value versus long-term value, AI being used to change the decisions of populations or voting blocs, and AI as a dual-use technology.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1900</itunes:duration>
                <itunes:episode>254</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>252 - Special: AI in Customer Service</title>
        <itunes:title>252 - Special: AI in Customer Service</itunes:title>
        <link>https://aiandyou.net/e/252-special-ai-in-customer-service/</link>
                    <comments>https://aiandyou.net/e/252-special-ai-in-customer-service/#comments</comments>
        <pubDate>Mon, 14 Apr 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c2c9f8fe-0860-3b54-b2c1-fcb97789f484</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>AI will transform customer service. Large language models will provide superior customer interaction, whether by web or by telephone. They’ll answer instantly, always be polite and full of energy, talk with the customer in every known language, and make all the information the customer wants from the company personalized and accessible. All of this is provably achievable.</p>
<p>So where is it? Has your customer service experience gotten better as a result of AI being added... or worse? I dig into that question on this special episode.</p>


<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-252-9316.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>AI will transform customer service. Large language models will provide superior customer interaction, whether by web or by telephone. They’ll answer instantly, always be polite and full of energy, talk with the customer in every known language, and make all the information the customer wants from the company personalized and accessible. All of this is provably achievable.</p>
<p>So where is it? Has your customer service experience gotten better as a result of AI being added... or worse? I dig into that question on this special episode.</p>
<br>

<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-252-9316.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nzg62dzka9wba4sb/252_AI-and-You.mp3" length="17814255" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




AI will transform customer service. Large language models will provide superior customer interaction, whether by web or by telephone. They’ll answer instantly, always be polite and full of energy, talk with the customer in every known language, and make all the information the customer wants from the company personalized and accessible. All of this is provably achievable.
So where is it? Has your customer service experience gotten better as a result of AI being added... or worse? I dig into that question on this special episode.

All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1113</itunes:duration>
                <itunes:episode>253</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>251 - Special: AI's Existential Threat and Hope: Deconstructing TEDx</title>
        <itunes:title>251 - Special: AI's Existential Threat and Hope: Deconstructing TEDx</itunes:title>
        <link>https://aiandyou.net/e/251-special-ais-existential-threat-and-hope-deconstructing-tedx/</link>
                    <comments>https://aiandyou.net/e/251-special-ais-existential-threat-and-hope-deconstructing-tedx/#comments</comments>
        <pubDate>Mon, 07 Apr 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a5b0e179-7067-3ca4-b2d7-795359fc99a1</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What if… aliens came to visit the Earth? And what does that have to do with AI? I’ve deconstructed two of my TEDx talks on this show, but before both of those I did one in 2017, and here I take that one apart.</p>
<p>Why didn’t I do this before? It seemed a bit… out there. Too sensationalist. Making claims that were too extravagant. </p>
<p>But when I looked at it again recently, I thought: we’ve actually caught up with what I was saying there, and those ideas are more acceptable than they were in 2017. So this seemed a good time to see how it’s aged and how on point it is. I’ll go through it and give a commentary.</p>
<p>I'll talk about the dichotomy of AI's existential promise vs. peril, what it could mean for jobs, the motivations to create general AI, and the part we all play in establishing the values of what will become tomorrow's artificial superintelligences; I'll also examine the interesting ways these narratives have changed in the last eight years. Plus, aliens.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-251-4290.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>What if… aliens came to visit the Earth? And what does that have to do with AI? I’ve deconstructed two of my TEDx talks on this show, but before both of those I did one in 2017, and here I take that one apart.</p>
<p>Why didn’t I do this before? It seemed a bit… out there. Too sensationalist. Making claims that were too extravagant. </p>
<p>But when I looked at it again recently, I thought: we’ve actually caught up with what I was saying there, and those ideas are more acceptable than they were in 2017. So this seemed a good time to see how it’s aged and how on point it is. I’ll go through it and give a commentary.</p>
<p>I'll talk about the dichotomy of AI's existential promise vs. peril, what it could mean for jobs, the motivations to create general AI, and the part we all play in establishing the values of what will become tomorrow's artificial superintelligences; I'll also examine the interesting ways these narratives have changed in the last eight years. Plus, aliens.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/04/transcript-251-4290.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/cqs3vbhch6irsjfq/251_AI-and-You.mp3" length="35079133" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




What if… aliens came to visit the Earth? And what does that have to do with AI? I’ve deconstructed two of my TEDx talks on this show, but before both of those I did one in 2017, and here I take that one apart.
Why didn’t I do this before? It seemed a bit… out there. Too sensationalist. Making claims that were too extravagant. 
But when I looked at it again recently, I thought: we’ve actually caught up with what I was saying there, and those ideas are more acceptable than they were in 2017. So this seemed a good time to see how it’s aged and how on point it is. I’ll go through it and give a commentary.
I'll talk about the dichotomy of AI's existential promise vs. peril, what it could mean for jobs, the motivations to create general AI, and the part we all play in establishing the values of what will become tomorrow's artificial superintelligences; I'll also examine the interesting ways these narratives have changed in the last eight years. Plus, aliens.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2150</itunes:duration>
                <itunes:episode>252</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>250 - Special: Military Use of AI</title>
        <itunes:title>250 - Special: Military Use of AI</itunes:title>
        <link>https://aiandyou.net/e/250-special-military-use-of-ai/</link>
                    <comments>https://aiandyou.net/e/250-special-military-use-of-ai/#comments</comments>
        <pubDate>Mon, 31 Mar 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/808e8910-39ba-3638-9743-0d20c994b5a7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine:</p>
<ul>
<li>Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control;</li>
<li>Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert;</li>
<li>Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control;</li>
<li>Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory;</li>
<li>Rajiv Malhotra, author of “Artificial Intelligence and the Future of Power: 5 Battlegrounds” and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts;</li>
<li>David Brin, scientist and science fiction author famous for the Uplift series and Earth;</li>
<li>Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable;</li>
<li>Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute;</li>
<li>Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI;</li>
</ul>
<p>I've collected portions of their appearances on earlier episodes of this show into one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, the ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-250-6130.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine:</p>
<ul>
<li>Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control;</li>
<li>Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert;</li>
<li>Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control;</li>
<li>Tony Gillespie, author of <em>Systems Engineering for Ethical Autonomous Systems</em>, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory;</li>
<li>Rajiv Malhotra, author of <em>Artificial Intelligence and the Future of Power: 5 Battlegrounds</em> and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts;</li>
<li>David Brin, scientist and science fiction author famous for the <em>Uplift </em>series and <em>Earth</em>;</li>
<li>Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of <em>AI: Unexplainable, Unpredictable, Uncontrollable</em>;</li>
<li>Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute;</li>
<li>Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI;</li>
</ul>
<p>I've collected portions of their appearances on earlier episodes of this show into one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, the ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-250-6130.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/k2nr47ma2rrn6g7u/250_AI-and-You.mp3" length="48717559" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine:

Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control;
Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert;
Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control;
Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory;
Rajiv Malhotra, author of “Artificial Intelligence and the Future of Power: 5 Battlegrounds” and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts;
David Brin, scientist and science fiction author famous for the Uplift series and Earth;
Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable;
Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute;
Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI.

I've collected portions of their appearances on earlier episodes of this show into one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, the ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3003</itunes:duration>
                <itunes:episode>251</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>249 - Guest: Adam Unikowsky, Attorney</title>
        <itunes:title>249 - Guest: Adam Unikowsky, Attorney</itunes:title>
        <link>https://aiandyou.net/e/249-guest-adam-unikowsky-attorney/</link>
                    <comments>https://aiandyou.net/e/249-guest-adam-unikowsky-attorney/#comments</comments>
        <pubDate>Mon, 24 Mar 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/59468cf6-681b-3b4f-aa83-8f9e46f25dff</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>The use of generative AI in legal practice has been in the news since lawyers filed briefs written by AI that contained completely fictional citations. But AI has moved past those faux pas to be of real benefit, used by some judges in writing their decisions. Here with his finger on that pulse is <a href='https://adamunikowsky.substack.com/'>Adam Unikowsky</a>, partner in the Appellate &amp; Supreme Court practice group at Jenner &amp; Block in Washington, DC. He handles cases in numerous subject matter areas, including administrative law and patent law. He has argued 12 cases in the U.S. Supreme Court, as well as numerous cases in federal courts of appeals, federal district courts, and state supreme courts. He writes a newsletter on AI in law and other legal issues. Adam graduated from Harvard Law School and clerked for Justice Antonin Scalia.</p>
<p>We talk about how AI could litigate or even judge cases, and whether it should, how it can emulate specific judges, the current uptake, reliability, and reputation of AI in the legal profession, the best ways to use AI in litigation and what’s driving its adoption, something AI can do that humans can’t, how politics comes in, the future roles of litigators and AI’s effects on the apprenticeship of lawyers, AI in the appellate system, and upcoming innovation in AI and the law. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-248-2911.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>The use of generative AI in legal practice has been in the news since lawyers filed briefs written by AI that contained completely fictional citations. But AI has moved past those <em>faux pas</em> to be of real benefit, used by some judges in writing their decisions. Here with his finger on that pulse is <a href='https://adamunikowsky.substack.com/'>Adam Unikowsky</a>, partner in the Appellate &amp; Supreme Court practice group at Jenner &amp; Block in Washington, DC.  He handles cases in numerous subject matter areas, including administrative law and patent law.  He has argued 12 cases in the U.S. Supreme Court, as well as numerous cases in federal courts of appeals, federal district courts, and state supreme courts. He writes a newsletter on AI in law and other legal issues. Adam graduated from Harvard Law School and clerked for Justice Antonin Scalia.</p>
<p>We talk about how AI could litigate or even judge cases, and whether it should, how it can emulate specific judges, the current uptake, reliability, and reputation of AI in the legal profession, the best ways to use AI in litigation and what’s driving its adoption, something AI can do that humans can’t, how politics comes in, the future roles of litigators and AI’s effects on the apprenticeship of lawyers, AI in the appellate system, and upcoming innovation in AI and the law. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-248-2911.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/quuy63wj5qtvmxj3/249_AI-and-You.mp3" length="40272693" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




The use of generative AI in legal practice has been in the news since lawyers filed briefs written by AI that contained completely fictional citations. But AI has moved past those faux pas to be of real benefit, used by some judges in writing their decisions. Here with his finger on that pulse is Adam Unikowsky, partner in the Appellate &amp; Supreme Court practice group at Jenner &amp; Block in Washington, DC.  He handles cases in numerous subject matter areas, including administrative law and patent law.  He has argued 12 cases in the U.S. Supreme Court, as well as numerous cases in federal courts of appeals, federal district courts, and state supreme courts. He writes a newsletter on AI in law and other legal issues. Adam graduated from Harvard Law School and clerked for Justice Antonin Scalia.
We talk about how AI could litigate or even judge cases, and whether it should, how it can emulate specific judges, the current uptake, reliability, and reputation of AI in the legal profession, the best ways to use AI in litigation and what’s driving its adoption, something AI can do that humans can’t, how politics comes in, the future roles of litigators and AI’s effects on the apprenticeship of lawyers, AI in the appellate system, and upcoming innovation in AI and the law. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2475</itunes:duration>
                <itunes:episode>250</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>248 - Guest: Tim O'Reilly, Entrepreneur of Ideas, part 2</title>
        <itunes:title>248 - Guest: Tim O'Reilly, Entrepreneur of Ideas, part 2</itunes:title>
        <link>https://aiandyou.net/e/248-guest-tim-oreilly-entrepreneur-of-ideas-part-2/</link>
                    <comments>https://aiandyou.net/e/248-guest-tim-oreilly-entrepreneur-of-ideas-part-2/#comments</comments>        <pubDate>Mon, 17 Mar 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ac650a1d-7dee-397f-bfa6-f1aef4ec05e4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of <a href='https://oreilly.com'>O’Reilly Media</a>, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years, a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. And the man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world, notably through the Open Source Conference, which was like Woodstock for developers and ran for many years from the beginning of that era, and at which I personally presented many times. Named by Inc. magazine as the “Oracle of Silicon Valley,” Tim created the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and which is now just part of the digital water we swim in. His 2017 book, <a href='https://www.amazon.com/WTF-Whats-Future-Why-Its/dp/0062565710'>WTF: What’s the Future and Why It’s Up to Us</a>, explores the technological forces on our world and how to harness them for a better future.</p>
<p>We talk about the effects of generative AI on our work processes, what AI does to the value model of accessing information, meme stocks in the new economy, AGI, and preference alignment and market influencing leading to collective intelligence. Really.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-248-2911.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of <a href='https://oreilly.com'>O’Reilly Media</a>, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years, a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. And the man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world, notably through the Open Source Conference, which was like Woodstock for developers and ran for many years from the beginning of that era, and at which I personally presented many times. Named by <em>Inc.</em> magazine as the “Oracle of Silicon Valley,” Tim created the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and which is now just part of the digital water we swim in. His 2017 book, <a href='https://www.amazon.com/WTF-Whats-Future-Why-Its/dp/0062565710'><em>WTF: What’s the Future and Why It’s Up to Us</em></a>, explores the technological forces on our world and how to harness them for a better future.</p>
<p>We talk about the effects of generative AI on our work processes, what AI does to the value model of accessing information, meme stocks in the new economy, AGI, and preference alignment and market influencing leading to collective intelligence. Really.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-248-2911.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/fzucu3xbmbmfuq8b/248_AI-and-You.mp3" length="26495923" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of O’Reilly Media, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years, a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. And the man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world, notably through the Open Source Conference, which was like Woodstock for developers and ran for many years from the beginning of that era, and at which I personally presented many times. Named by Inc. magazine as the “Oracle of Silicon Valley,” Tim created the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and which is now just part of the digital water we swim in. His 2017 book, WTF: What’s the Future and Why It’s Up to Us, explores the technological forces on our world and how to harness them for a better future.
We talk about the effects of generative AI on our work processes, what AI does to the value model of accessing information, meme stocks in the new economy, AGI, and preference alignment and market influencing leading to collective intelligence. Really.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1614</itunes:duration>
                <itunes:episode>249</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>247 - Guest: Tim O'Reilly, Entrepreneur of Ideas, part 1</title>
        <itunes:title>247 - Guest: Tim O'Reilly, Entrepreneur of Ideas, part 1</itunes:title>
        <link>https://aiandyou.net/e/247-guest-tim-oreilly-entrepreneur-of-ideas-part-1/</link>
                    <comments>https://aiandyou.net/e/247-guest-tim-oreilly-entrepreneur-of-ideas-part-1/#comments</comments>        <pubDate>Mon, 10 Mar 2025 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/92fb9806-1204-37db-93a8-c257718efc94</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of <a href='https://oreilly.com'>O’Reilly Media</a>, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years, a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. And the man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world, notably through the Open Source Conference, which was like Woodstock for developers and ran for many years from the beginning of that era, and at which I personally presented many times. Named by Inc. magazine as the “Oracle of Silicon Valley,” Tim created the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and which is now just part of the digital water we swim in. His 2017 book, <a href='https://www.amazon.com/WTF-Whats-Future-Why-Its/dp/0062565710'>WTF: What’s the Future and Why It’s Up to Us</a>, explores the technological forces on our world and how to harness them for a better future.</p>
<p>We talk about intellectual property rights in the generative AI era – Taylor Swift will make an appearance again – and Tim’s conversations with Sam Altman, parallels with the evolution of Linux, comparing incentives with social media, the future of content-generating work, and opportunities for entrepreneurial flowering.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-247-4150.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of <a href='https://oreilly.com'>O’Reilly Media</a>, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years, a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. And the man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world, notably through the Open Source Conference, which was like Woodstock for developers and ran for many years from the beginning of that era, and at which I personally presented many times. Named by <em>Inc.</em> magazine as the “Oracle of Silicon Valley,” Tim created the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and which is now just part of the digital water we swim in. His 2017 book, <a href='https://www.amazon.com/WTF-Whats-Future-Why-Its/dp/0062565710'><em>WTF: What’s the Future and Why It’s Up to Us</em></a>, explores the technological forces on our world and how to harness them for a better future.</p>
<p>We talk about intellectual property rights in the generative AI era – Taylor Swift will make an appearance again – and Tim’s conversations with Sam Altman, parallels with the evolution of Linux, comparing incentives with social media, the future of content-generating work, and opportunities for entrepreneurial flowering.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/03/transcript-247-4150.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3nx67dfec9xd9xxn/247_AI-and-You.mp3" length="28480811" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of O’Reilly Media, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years, a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. And the man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world, notably through the Open Source Conference, which was like Woodstock for developers and ran for many years from the beginning of that era, and at which I personally presented many times. Named by Inc. magazine as the “Oracle of Silicon Valley,” Tim created the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and which is now just part of the digital water we swim in. His 2017 book, WTF: What’s the Future and Why It’s Up to Us, explores the technological forces on our world and how to harness them for a better future.
We talk about intellectual property rights in the generative AI era – Taylor Swift will make an appearance again – and Tim’s conversations with Sam Altman, parallels with the evolution of Linux, comparing incentives with social media, the future of content-generating work, and opportunities for entrepreneurial flowering.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1738</itunes:duration>
                <itunes:episode>248</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>246 - Guest: Paul Reber, Professor of Cognitive Neuroscience, part 2</title>
        <itunes:title>246 - Guest: Paul Reber, Professor of Cognitive Neuroscience, part 2</itunes:title>
        <link>https://aiandyou.net/e/246-guest-paul-reber-professor-of-cognitive-neuroscience-part-2/</link>
                    <comments>https://aiandyou.net/e/246-guest-paul-reber-professor-of-cognitive-neuroscience-part-2/#comments</comments>        <pubDate>Mon, 03 Mar 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/fa1d9675-2089-3aee-8eaf-a67a74c4886a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do our brains produce thinking? My guest is an expert in cognitive neuroscience, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on human memory—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal Cortex and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.</p>
<p>In part 2, we talk about how to memorize something like a TED talk, the difference between human and computer memory, how humans make memories more resilient, catastrophic interference, and just how big is the human brain and can we fill it up?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-246-0559.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do our brains produce thinking? My guest is an expert in <em>cognitive neuroscience</em>, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on <em>human memory</em>—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal <em>Cortex</em> and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.</p>
<p>In part 2, we talk about how to memorize something like a TED talk, the difference between human and computer memory, how humans make memories more resilient, catastrophic interference, and just how big is the human brain and can we fill it up?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-246-0559.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/mi5zjqzfe6vnzf3q/246_AI-and-You.mp3" length="34601405" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How do our brains produce thinking? My guest is an expert in cognitive neuroscience, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on human memory—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal Cortex and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.
In part 2, we talk about how to memorize something like a TED talk, the difference between human and computer memory, how humans make memories more resilient, catastrophic interference, and just how big is the human brain and can we fill it up?
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2121</itunes:duration>
                <itunes:episode>247</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>245 - Guest: Paul Reber, Professor of Cognitive Neuroscience, part 1</title>
        <itunes:title>245 - Guest: Paul Reber, Professor of Cognitive Neuroscience, part 1</itunes:title>
        <link>https://aiandyou.net/e/245-guest-paul-reber-professor-of-cognitive-neuroscience-part-1/</link>
                    <comments>https://aiandyou.net/e/245-guest-paul-reber-professor-of-cognitive-neuroscience-part-1/#comments</comments>        <pubDate>Mon, 24 Feb 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b3bae5c4-add0-3bd8-8558-ed250dafa336</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do our brains produce thinking? My guest is an expert in cognitive neuroscience, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on human memory—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal Cortex and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.</p>
<p>In part 1, we talk about distinguishing neuroscience from cognitive neuroscience, the physical structure of the brain, how we learn physical skills, comparing the brain to AI, and foundational problems in neuroscience.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-245-9137.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>How do our brains produce thinking? My guest is an expert in <em>cognitive neuroscience</em>, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on <em>human memory</em>—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal <em>Cortex</em> and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.</p>
<p>In part 1, we talk about distinguishing neuroscience from cognitive neuroscience, the physical structure of the brain, how we learn physical skills, comparing the brain to AI, and foundational problems in neuroscience.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-245-9137.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qxn7xegy5h953z65/245_AI-and-You.mp3" length="32145477" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




How do our brains produce thinking? My guest is an expert in cognitive neuroscience, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on human memory—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal Cortex and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.
In part 1, we talk about distinguishing neuroscience from cognitive neuroscience, the physical structure of the brain, how we learn physical skills, comparing the brain to AI, and foundational problems in neuroscience.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1967</itunes:duration>
                <itunes:episode>246</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>244 - Guest: Beth Singler, Anthropologist of Religion in AI, part 2</title>
        <itunes:title>244 - Guest: Beth Singler, Anthropologist of Religion in AI, part 2</itunes:title>
        <link>https://aiandyou.net/e/244-guest-beth-singler-anthropologist-of-religion-in-ai-part-2/</link>
                    <comments>https://aiandyou.net/e/244-guest-beth-singler-anthropologist-of-religion-in-ai-part-2/#comments</comments>        <pubDate>Mon, 17 Feb 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5ea580ae-93a8-38b5-a3da-532ab1a29063</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!</p>
<p><a href='https://bvlsingler.com/'>Beth</a> is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics.  She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of <a href='https://bvlsingler.com/rise-of-the-machines-short-films-on-ai-and-robotics-available-online/'>award-winning short documentaries on AI</a>. She is co-editor of the Cambridge Companion to Religion and AI, and author of <a href='https://www.routledge.com/Religion-and-Artificial-Intelligence-An-Introduction/Singler/p/book/9781032187648'>Religion and AI: An Introduction</a>, both published last year. </p>
<p>In part 2, we talk about Roko’s Basilisk, which is a concept that changes your life the moment you find out what it is, experiences of AI saying that it’s a God, the reverse Garland test (that’s based on Ex Machina), simulation theories starting with Plato’s Cave, more chatbot priests, how Beth does research, and… Battlestar Galactica.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-243-8015.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!</p>
<p><a href='https://bvlsingler.com/'>Beth</a> is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics. She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of <a href='https://bvlsingler.com/rise-of-the-machines-short-films-on-ai-and-robotics-available-online/'>award-winning short documentaries on AI</a>. She is co-editor of the <em>Cambridge Companion to Religion and AI</em>, and author of <a href='https://www.routledge.com/Religion-and-Artificial-Intelligence-An-Introduction/Singler/p/book/9781032187648'><em>Religion and AI: An Introduction</em></a>, both published last year.</p>
<p>In part 2, we talk about Roko’s Basilisk, which is a concept that changes your life the moment you find out what it is, experiences of AI saying that it’s a God, the reverse Garland test (that’s based on <em>Ex Machina</em>), simulation theories starting with Plato’s Cave, more chatbot priests, how Beth does research, and… <em>Battlestar Galactica</em>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-243-8015.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/hakyuap2mr7qgjr7/244_AI-and-You.mp3" length="33056210" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!
Beth is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics.  She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of award-winning short documentaries on AI. She is co-editor of the Cambridge Companion to Religion and AI, and author of Religion and AI: An Introduction, both published last year. 
In part 2, we talk about Roko’s Basilisk, which is a concept that changes your life the moment you find out what it is, experiences of AI saying that it’s a God, the reverse Garland test (that’s based on Ex Machina), simulation theories starting with Plato’s Cave, more chatbot priests, how Beth does research, and… Battlestar Galactica.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2024</itunes:duration>
                <itunes:episode>245</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>243 - Guest: Beth Singler, Anthropologist of Religion in AI, part 1</title>
        <itunes:title>243 - Guest: Beth Singler, Anthropologist of Religion in AI, part 1</itunes:title>
        <link>https://aiandyou.net/e/243-guest-beth-singler-anthropologist-of-religion-in-ai-part-1/</link>
                    <comments>https://aiandyou.net/e/243-guest-beth-singler-anthropologist-of-religion-in-ai-part-1/#comments</comments>        <pubDate>Mon, 10 Feb 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/28e309d2-8ec0-33e9-8697-fbe6b81afeff</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!</p>
<p><a href='https://bvlsingler.com/'>Beth</a> is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics. She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of <a href='https://bvlsingler.com/rise-of-the-machines-short-films-on-ai-and-robotics-available-online/'>award-winning short documentaries on AI</a>. She is co-editor of the <em>Cambridge Companion to Religion and AI</em>, and author of <a href='https://www.routledge.com/Religion-and-Artificial-Intelligence-An-Introduction/Singler/p/book/9781032187648'><em>Religion and AI: An Introduction</em></a>, both published last year.</p>
<p>In part 1, we talk about why religion and AI is a thing and what its dimensions are, the influence of science fiction, tropes like End Times, AI used in religious roles, and the Singularity.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-243-8015.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!</p>
<p><a href='https://bvlsingler.com/'>Beth</a> is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics. She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of <a href='https://bvlsingler.com/rise-of-the-machines-short-films-on-ai-and-robotics-available-online/'>award-winning short documentaries on AI</a>. She is co-editor of the <em>Cambridge Companion to Religion and AI</em>, and author of <a href='https://www.routledge.com/Religion-and-Artificial-Intelligence-An-Introduction/Singler/p/book/9781032187648'><em>Religion and AI: An Introduction</em></a>, both published last year.</p>
<p>In part 1, we talk about why religion and AI is a thing and what its dimensions are, the influence of science fiction, tropes like End Times, AI used in religious roles, and the Singularity.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-243-8015.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/v43yeidjtyzuzvjc/243_AI-and-You.mp3" length="33496756" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!
Beth is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics.  She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of award-winning short documentaries on AI. She is co-editor of the Cambridge Companion to Religion and AI, and author of Religion and AI: An Introduction, both published last year. 
In part 1, we talk about why religion and AI is a thing and what its dimensions are, the influence of science fiction, tropes like End Times, AI used in religious roles, and the Singularity.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2052</itunes:duration>
                <itunes:episode>244</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>242 - Guest: Nick Potkalitsky, AI Integration Expert</title>
        <itunes:title>242 - Guest: Nick Potkalitsky, AI Integration Expert</itunes:title>
        <link>https://aiandyou.net/e/242-guest-nick-potkalitsky-ai-integration-expert/</link>
                    <comments>https://aiandyou.net/e/242-guest-nick-potkalitsky-ai-integration-expert/#comments</comments>        <pubDate>Mon, 03 Feb 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b8d801bb-617d-3b8d-ae11-b22d68165883</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Continuing our exploration of AI in education, I am joined by Nick Potkalitsky, founder of Pragmatic AI Solutions and co-author of <a href='https://www.amazon.com/AI-Education-Roadmap-Teacher-Led-Transformation/dp/0970357141'><em>AI in Education: A Roadmap For Teacher-Led Transformation</em></a>. With a doctorate in narrative and rhetorical studies, he leads AI curriculum integration at The Miami Valley School and develops pioneering AI literacy programs. His Substack “Educating AI” offers curriculum guidance and expert insights.</p>
<p>We talk about how AI has landed emotionally for teachers, whether there’s a generational divide in the different reactions teachers have had to AI, how students are using AI tools and the homework problem, the changing landscape of policies in schools, how university requirements are evolving, and the teacher-led transformation of education that Nick foresees.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-242-9030.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Continuing our exploration of AI in education, I am joined by Nick Potkalitsky, founder of Pragmatic AI Solutions and co-author of <a href='https://www.amazon.com/AI-Education-Roadmap-Teacher-Led-Transformation/dp/0970357141'><em>AI in Education: A Roadmap For Teacher-Led Transformation</em></a>. With a doctorate in narrative and rhetorical studies, he leads AI curriculum integration at The Miami Valley School and develops pioneering AI literacy programs. His Substack “Educating AI” offers curriculum guidance and expert insights.</p>
<p>We talk about how AI has landed emotionally for teachers, whether there’s a generational divide in the different reactions teachers have had to AI, how students are using AI tools and the homework problem, the changing landscape of policies in schools, how university requirements are evolving, and the teacher-led transformation of education that Nick foresees.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/02/transcript-242-9030.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xv73jpk3mxb759kx/242_AI-and-You.mp3" length="44081555" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Continuing our exploration of AI in education, I am joined by Nick Potkalitsky, founder of Pragmatic AI Solutions and co-author of AI in Education: A Roadmap For Teacher-Led Transformation. With a doctorate in narrative and rhetorical studies, he leads AI curriculum integration at The Miami Valley School and develops pioneering AI literacy programs. His Substack “Educating AI” offers curriculum guidance and expert insights.
We talk about how AI has landed emotionally for teachers, whether there’s a generational divide in the different reactions teachers have had to AI, how students are using AI tools and the homework problem, the changing landscape of policies in schools, how university requirements are evolving, and the teacher-led transformation of education that Nick foresees.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2713</itunes:duration>
                <itunes:episode>243</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>241 - Guest: Mutlu Cukurova, AI in Education Expert, part 2</title>
        <itunes:title>241 - Guest: Mutlu Cukurova, AI in Education Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/241-guest-mutlu-cukurova-ai-in-education-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/241-guest-mutlu-cukurova-ai-in-education-expert-part-2/#comments</comments>        <pubDate>Mon, 27 Jan 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e5dfa7d0-87db-3d70-a69d-1292fc9833fc</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an <a href='https://www.unesco.org/en/articles/ai-competency-framework-teachers'>AI Competency Framework</a> for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and by traditional approaches to automation with AI. He is part of UCL’s Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the <em>British Journal of Educational Technology</em> and Associate Editor of the <em>International Journal of Child-Computer Interaction</em>.</p>
<p>In part 2, we talk about how the competency framework helps teachers use large language models, intelligent tutoring systems, the distinctions between human and machine intelligence, how to find the place to be human in a world of expanding AI capabilities, and the opportunities for teachers in this world.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/01/transcript-241-7418-2.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an <a href='https://www.unesco.org/en/articles/ai-competency-framework-teachers'>AI Competency Framework</a> for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and by traditional approaches to automation with AI. He is part of UCL’s Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the <em>British Journal of Educational Technology</em> and Associate Editor of the <em>International Journal of Child-Computer Interaction</em>.</p>
<p>In part 2, we talk about how the competency framework helps teachers use large language models, intelligent tutoring systems, the distinctions between human and machine intelligence, how to find the place to be human in a world of expanding AI capabilities, and the opportunities for teachers in this world.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/01/transcript-241-7418-2.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/fdwavgd489c2j3cu/241_AI-and-You.mp3" length="36783988" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an AI Competency Framework for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and by traditional approaches to automation with AI. He is part of UCL’s Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the British Journal of Educational Technology and Associate Editor of the International Journal of Child-Computer Interaction.
In part 2, we talk about how the competency framework helps teachers use large language models, intelligent tutoring systems, the distinctions between human and machine intelligence, how to find the place to be human in a world of expanding AI capabilities, and the opportunities for teachers in this world.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2257</itunes:duration>
                <itunes:episode>242</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>240 - Guest: Mutlu Cukurova, AI in Education Expert, part 1</title>
        <itunes:title>240 - Guest: Mutlu Cukurova, AI in Education Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/240-guest-mutlu-cukurova-ai-in-education-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/240-guest-mutlu-cukurova-ai-in-education-expert-part-1/#comments</comments>        <pubDate>Mon, 20 Jan 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/71dd2372-1bc3-3b89-b547-7a8412004d8e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an <a href='https://www.unesco.org/en/articles/ai-competency-framework-teachers'>AI Competency Framework</a> for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and by traditional approaches to automation with AI. He is part of UCL’s Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the <em>British Journal of Educational Technology</em> and Associate Editor of the <em>International Journal of Child-Computer Interaction</em>.</p>
<p>We talk about the role of UNESCO with respect to AI in education, societal and ethical issues of large language models in developing countries, the types of competencies assessed in classrooms that are affected by AI, what the AI Competency Framework for Teachers is, and how to use it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/01/transcript-240-4324.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an <a href='https://www.unesco.org/en/articles/ai-competency-framework-teachers'>AI Competency Framework</a> for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and by traditional approaches to automation with AI. He is part of UCL’s Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the <em>British Journal of Educational Technology</em> and Associate Editor of the <em>International Journal of Child-Computer Interaction</em>.</p>
<p>We talk about the role of UNESCO with respect to AI in education, societal and ethical issues of large language models in developing countries, the types of competencies assessed in classrooms that are affected by AI, what the AI Competency Framework for Teachers is, and how to use it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/01/transcript-240-4324.pdf'>HumanCusp Blog</a>.</p>





]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/vhai6sxddhfthp3u/240_AI-and-You.mp3" length="37295152" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an AI Competency Framework for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and by traditional approaches to automation with AI. He is part of UCL’s Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the British Journal of Educational Technology and Associate Editor of the International Journal of Child-Computer Interaction.
We talk about the role of UNESCO with respect to AI in education, societal and ethical issues of large language models in developing countries, the types of competencies assessed in classrooms that are affected by AI, what the AI Competency Framework for Teachers is, and how to use it.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 





 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2289</itunes:duration>
                <itunes:episode>241</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>239 - Guest: Neil Brown, Ethics in Journalism Expert, part 2</title>
        <itunes:title>239 - Guest: Neil Brown, Ethics in Journalism Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/239-guest-neil-brown-ethics-in-journalism-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/239-guest-neil-brown-ethics-in-journalism-expert-part-2/#comments</comments>        <pubDate>Mon, 13 Jan 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/284f8a39-e816-386a-ac56-b62b95f10c8f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the <em>Tampa Bay Times</em> while it won six Pulitzers, and president of the <a href='https://www.poynter.org/'>Poynter Institute</a> for Media Studies. For over 50 years Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society’s most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!</p>
<p>We talk about the use of AI in journalism, in writing stories, its effect on our writing standards, different levels of stories in journalism, and the potential use of AI in interactive news publishing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/01/transcript-239-8200.pdf'>HumanCusp Blog</a>.</p>





]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='https://aiandyou.net/'>https://aiandyou.net/</a>.</p>




<p>Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the <em>Tampa Bay Times</em> while it won six Pulitzers, and president of the <a href='https://www.poynter.org/'>Poynter Institute</a> for Media Studies. For over 50 years Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society’s most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!</p>
<p>We talk about the use of AI in journalism, in writing stories, its effect on our writing standards, different levels of stories in journalism, and the potential use of AI in interactive news publishing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2025/01/transcript-239-8200.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/356cx7vzghfcqt3c/239_AI-and-You.mp3" length="29250036" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the Tampa Bay Times while they won six Pulitzers, and president of the Poynter Institute for Media Studies. For over 50 years Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society's most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!
We talk about the use of AI in journalism, in writing stories, its effect on our writing standards, different levels of stories in journalism, and the potential use of AI in interactive news publishing.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1828</itunes:duration>
                <itunes:episode>240</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>238 - Guest: Neil Brown, Ethics in Journalism Expert, part 1</title>
        <itunes:title>238 - Guest: Neil Brown, Ethics in Journalism Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/238-guest-neil-brown-ethics-in-journalism-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/238-guest-neil-brown-ethics-in-journalism-expert-part-1/#comments</comments>        <pubDate>Mon, 06 Jan 2025 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0511d481-0028-3763-923c-77571beb5ec2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the Tampa Bay Times while they won six Pulitzers, and president of the <a href='https://www.poynter.org/'>Poynter Institute</a> for Media Studies. For over 50 years Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society's most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!</p>
<p>We talk about pressures on news organizations, the evolution of the relationship between journalism and publishing, how revenue models are changing, the impact and use of AI or psychometric analysis tools, and much more. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-238-8911.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the <em>Tampa Bay Times </em>while they won six Pulitzers, and president of the <a href='https://www.poynter.org/'>Poynter Institute</a> for Media Studies. For over 50 years Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society's most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!</p>
<p>We talk about pressures on news organizations, the evolution of the relationship between journalism and publishing, how revenue models are changing, the impact and use of AI or psychometric analysis tools, and much more. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-238-8911.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/tcgrmu6zqhp22382/238_AI-and-You.mp3" length="35975655" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the Tampa Bay Times while they won six Pulitzers, and president of the Poynter Institute for Media Studies. For over 50 years Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society's most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!
We talk about pressures on news organizations, the evolution of the relationship between journalism and publishing, how revenue models are changing, the impact and use of AI or psychometric analysis tools, and much more. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2207</itunes:duration>
                <itunes:episode>239</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>237 - Special Panel: AI 2024 Cultural Retrospective/2025 Predictions</title>
        <itunes:title>237 - Special Panel: AI 2024 Cultural Retrospective/2025 Predictions</itunes:title>
        <link>https://aiandyou.net/e/237-special-panel-ai-2024-cultural-retrospective2025-predictions/</link>
                    <comments>https://aiandyou.net/e/237-special-panel-ai-2024-cultural-retrospective2025-predictions/#comments</comments>        <pubDate>Mon, 30 Dec 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/efda6485-d4cb-38f5-89ce-84873221c7e5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>In our last episode of 2024, we have our traditional end-of-year retrospective/prediction episode. We’ll be taking a look back over the year just ending and forward to 2025, but we’re not going to focus on technology, when GPT-5 is going to drop, etc. The space is already stuffed full of that sort of thing. We’re going to look at the year through an anthropological lens, for which I am rejoined by two former guests, anthropologist Beth Singler, who was in episodes <a href='https://aiandyou.net/e/038-guest-beth-singler/'>38</a> and <a href='https://aiandyou.net/e/039-guest-beth-singler-anthropologist-and-filmmaker-part-2/'>39</a>, and philosopher John Zerilli, who was in episodes <a href='https://aiandyou.net/e/078-guest-john-zerilli-cognitive-science-philosopher-part-1/'>78</a> and <a href='https://aiandyou.net/e/079-guest-john-zerilli-cognitive-science-philosopher-part-2/'>79</a>. Beth is Assistant Professor in Digital Religion(s) and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. Her most recent books are <a href='https://www.taylorfrancis.com/books/oa-mono/10.4324/9781003256113/religion-artificial-intelligence-beth-singler'>Religion and Artificial Intelligence</a> and <a href='https://www.cambridge.org/core/books/cambridge-companion-to-religion-and-artificial-intelligence/867462871532038FDE79B253EA9A3C15'>The Cambridge Companion to Religion and Artificial Intelligence</a>. John is a Lecturer at the University of Edinburgh with a PhD in cognitive science and philosophy, and has carried out research at the universities of Oxford and Cambridge. His most recent book, <a href='https://mitpress.mit.edu/9780262044813/a-citizens-guide-to-artificial-intelligence/'>A Citizen’s Guide to Artificial Intelligence</a>, was published in 2021.</p>
<p>We consider how AI has been reshaping public narratives and attitudes over questions like job replacement, creativity, education, law, and religion.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-237-6138.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>In our last episode of 2024, we have our traditional end-of-year retrospective/prediction episode. We’ll be taking a look back over the year just ending and forward to 2025, but we’re not going to focus on technology, when GPT-5 is going to drop, etc. The space is already stuffed full of that sort of thing. We’re going to look at the year through an anthropological lens, for which I am rejoined by two former guests, anthropologist Beth Singler, who was in episodes <a href='https://aiandyou.net/e/038-guest-beth-singler/'>38</a> and <a href='https://aiandyou.net/e/039-guest-beth-singler-anthropologist-and-filmmaker-part-2/'>39</a>, and philosopher John Zerilli, who was in episodes <a href='https://aiandyou.net/e/078-guest-john-zerilli-cognitive-science-philosopher-part-1/'>78</a> and <a href='https://aiandyou.net/e/079-guest-john-zerilli-cognitive-science-philosopher-part-2/'>79</a>. Beth is Assistant Professor in Digital Religion(s) and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. Her most recent books are <a href='https://www.taylorfrancis.com/books/oa-mono/10.4324/9781003256113/religion-artificial-intelligence-beth-singler'><em>Religion and Artificial Intelligence</em></a> and <a href='https://www.cambridge.org/core/books/cambridge-companion-to-religion-and-artificial-intelligence/867462871532038FDE79B253EA9A3C15'><em>The Cambridge Companion to Religion and Artificial Intelligence</em></a>. John is a Lecturer at the University of Edinburgh with a PhD in cognitive science and philosophy, and has carried out research at the universities of Oxford and Cambridge. His most recent book, <em><a href='https://mitpress.mit.edu/9780262044813/a-citizens-guide-to-artificial-intelligence/'>A Citizen’s Guide to Artificial Intelligence</a></em>, was published in 2021.</p>
<p>We consider how AI has been reshaping public narratives and attitudes over questions like job replacement, creativity, education, law, and religion.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-237-6138.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/gq7zdj5nzzivhh6m/237_AI-and-You.mp3" length="54377562" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




In our last episode of 2024, we have our traditional end-of-year retrospective/prediction episode. We’ll be taking a look back over the year just ending and forward to 2025, but we’re not going to focus on technology, when GPT-5 is going to drop, etc. The space is already stuffed full of that sort of thing. We’re going to look at the year through an anthropological lens, for which I am rejoined by two former guests, anthropologist Beth Singler, who was in episodes 38 and 39, and philosopher John Zerilli, who was in episodes 78 and 79. Beth is Assistant Professor in Digital Religion(s) and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. Her most recent books are Religion and Artificial Intelligence and The Cambridge Companion to Religion and Artificial Intelligence. John is a Lecturer at the University of Edinburgh with a PhD in cognitive science and philosophy, and has carried out research at the universities of Oxford and Cambridge. His most recent book, A Citizen’s Guide to Artificial Intelligence, was published in 2021.
We consider how AI has been reshaping public narratives and attitudes over questions like job replacement, creativity, education, law, and religion.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3357</itunes:duration>
                <itunes:episode>238</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>236 - Guest: Alexandra Belias, Head of Public Policy, DeepMind</title>
        <itunes:title>236 - Guest: Alexandra Belias, Head of Public Policy, DeepMind</itunes:title>
        <link>https://aiandyou.net/e/236-guest-alexandra-belias-head-of-public-policy-deepmind/</link>
                    <comments>https://aiandyou.net/e/236-guest-alexandra-belias-head-of-public-policy-deepmind/#comments</comments>        <pubDate>Mon, 23 Dec 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3da115cd-151d-3e52-95cd-1d075b3e86f7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>Here to give us insights into some of the really cool stuff Google DeepMind is doing is Alexandra Belias, Head of product policy &amp; partnerships. She serves as a bridge between DeepMind’s product policy organization and the policy community. She previously led their international public policy work. She has an MPA in Economic Policy from LSE and is currently a tech fellow at the Harvard Carr Center for Human Rights.</p>
<p>We talk about Google DeepMind's science policy, the emerging network of national AI safety institutes, the tension between regulation and innovation, AlphaFold and its successors, AlphaMissense and AlphaProteo, their SynthID watermarking detection tool, reducing contrail pollution through AI, and safety frameworks for frontier AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-236-5087.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>Here to give us insights into some of the really cool stuff <em>Google DeepMind</em> is doing is Alexandra Belias, Head of product policy &amp; partnerships. She serves as a bridge between DeepMind’s product policy organization and the policy community. She previously led their international public policy work. She has an MPA in Economic Policy from LSE and is currently a tech fellow at the Harvard Carr Center for Human Rights.</p>
<p>We talk about Google DeepMind's science policy, the emerging network of national AI safety institutes, the tension between regulation and innovation, AlphaFold and its successors, AlphaMissense and AlphaProteo, their SynthID watermarking detection tool, reducing contrail pollution through AI, and safety frameworks for frontier AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-236-5087.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ei757t346kztfd34/236_AI-and-You.mp3" length="51068997" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




Here to give us insights into some of the really cool stuff Google DeepMind is doing is Alexandra Belias, Head of product policy &amp; partnerships. She serves as a bridge between DeepMind’s product policy organization and the policy community. She previously led their international public policy work. She has an MPA in Economic Policy from LSE and is currently a tech fellow at the Harvard Carr Center for Human Rights.
We talk about Google DeepMind's science policy, the emerging network of national AI safety institutes, the tension between regulation and innovation, AlphaFold and its successors, AlphaMissense and AlphaProteo, their SynthID watermarking detection tool, reducing contrail pollution through AI, and safety frameworks for frontier AI.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3150</itunes:duration>
                <itunes:episode>237</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>235 - Guest: Dwayne Wood, Professor of Education, part 2</title>
        <itunes:title>235 - Guest: Dwayne Wood, Professor of Education, part 2</itunes:title>
        <link>https://aiandyou.net/e/235-guest-dwayne-wood-professor-of-education-part-2/</link>
                    <comments>https://aiandyou.net/e/235-guest-dwayne-wood-professor-of-education-part-2/#comments</comments>        <pubDate>Mon, 16 Dec 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/4d00c2fa-c7ed-3d8b-90b8-07d39a33a4de</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at <a href='https://www.nu.edu'>National University of San Diego</a>, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center of the question of how teachers deal with AI in the classroom and has been working on addressing the current shortage of teachers.</p>
<p>We talk about the possible impact of AI on essential learning skills, the difference between technical and tactical competence, the in-person educational experience, and how Dwayne sees things changing in the next year.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-235-0806.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at <a href='https://www.nu.edu'>National University of San Diego</a>, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center of the question of how teachers deal with AI in the classroom and has been working on addressing the current shortage of teachers.</p>
<p>We talk about the possible impact of AI on essential learning skills, the difference between technical and tactical competence, the in-person educational experience, and how Dwayne sees things changing in the next year.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-235-0806.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/et673bfc87rfv2ix/235_AI-and-You.mp3" length="33264354" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at National University of San Diego, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center of the question of how teachers deal with AI in the classroom and has been working on addressing the current shortage of teachers.
We talk about the possible impact of AI on essential learning skills, the difference between technical and tactical competence, the in-person educational experience, and how Dwayne sees things changing in the next year.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2037</itunes:duration>
                <itunes:episode>236</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>234 - Guest: Dwayne Wood, Professor of Education, part 1</title>
        <itunes:title>234 - Guest: Dwayne Wood, Professor of Education, part 1</itunes:title>
        <link>https://aiandyou.net/e/234-guest-dwayne-wood-professor-of-education-part-1/</link>
                    <comments>https://aiandyou.net/e/234-guest-dwayne-wood-professor-of-education-part-1/#comments</comments>        <pubDate>Mon, 09 Dec 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0bac0ea4-1d33-39d4-9998-7fdc971ff006</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at <a href='https://www.nu.edu'>National University of San Diego</a>, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center of the question of how teachers deal with AI in the classroom and has been working on addressing the current shortage of teachers.</p>
<p>We talk about the relationships between teachers and students, the shifting base of fundamental skills in an AI world, the skills needed by instructional designers, how to teach effective and safe use of generative AI, and how to place the guardrails around learners using it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-234-2345.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at <a href='https://www.nu.edu'>National University of San Diego</a>, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center of the question of how teachers deal with AI in the classroom and has been working on addressing the current shortage of teachers.</p>
<p>We talk about the relationships between teachers and students, the shifting base of fundamental skills in an AI world, the skills needed by instructional designers, how to teach effective and safe use of generative AI, and how to place the guardrails around learners using it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/12/transcript-234-2345.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/p9cqxhwptudk9asv/234_AI-and-You.mp3" length="32666672" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at National University of San Diego, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center of the question of how teachers deal with AI in the classroom and has been working on addressing the current shortage of teachers.
We talk about the relationships between teachers and students, the shifting base of fundamental skills in an AI world, the skills needed by instructional designers, how to teach effective and safe use of generative AI, and how to place the guardrails around learners using it.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2000</itunes:duration>
                <itunes:episode>235</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>233 - Guest: J. Craig Wheeler, Astrophysics Professor</title>
        <itunes:title>233 - Guest: J. Craig Wheeler, Astrophysics Professor</itunes:title>
        <link>https://aiandyou.net/e/233-guest-j-craig-wheeler-astrophysics-professor/</link>
                    <comments>https://aiandyou.net/e/233-guest-j-craig-wheeler-astrophysics-professor/#comments</comments>        <pubDate>Mon, 02 Dec 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2334788a-b5f2-35c0-88c5-c987c23d7e65</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We are going big on the show this time, with astrophysicist J. Craig Wheeler, Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, Emeritus, at the University of Texas at Austin, and author of the book <a href='https://www.amazon.com/Path-Singularity-Technology-Challenge-Humanity/dp/1493085433'>The Path to Singularity: How Technology Will Challenge the Future of Humanity</a>, released on November 19. He is a Fellow of the American Physical Society and a Legacy Fellow of the American Astronomical Society, has published nearly 400 scientific papers, authored both professional and popular books on supernovae, and served on advisory committees for the NSF, NASA, and the National Research Council. His new book, spanning the range of technologies propelling us towards the singularity, from robots to space colonization, has a foreword by Neil deGrasse Tyson, who says, “The world is long overdue for a peek at the state of society and what its future looks like through the lens of a scientist. And when that scientist is also an astrophysicist, you can guarantee the perspectives shared will be as deep and as vast as the universe itself.”</p>
<p>We talk about the evolution of Homo sapiens, high-reliability organizations, brain-computer interfaces, and transhumanism, among other topics.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/11/transcript-233-8045.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We are going big on the show this time, with astrophysicist J. Craig Wheeler, Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, Emeritus, at the University of Texas at Austin, and author of the book <a href='https://www.amazon.com/Path-Singularity-Technology-Challenge-Humanity/dp/1493085433'><em>The Path to Singularity: How Technology Will Challenge the Future of Humanity</em></a>, released on November 19. He is a Fellow of the American Physical Society and a Legacy Fellow of the American Astronomical Society, has published nearly 400 scientific papers, authored both professional and popular books on supernovae, and served on advisory committees for NSF, NASA, and the National Research Council. His new book, spanning the range of technologies propelling us toward the singularity, from robots to space colonization, has a foreword by Neil deGrasse Tyson, who says, “The world is long overdue for a peek at the state of society and what its future looks like through the lens of a scientist. And when that scientist is also an astrophysicist, you can guarantee the perspectives shared will be as deep and as vast as the universe itself.”</p>
<p>We talk about the evolution of <em>Homo sapiens</em>, high-reliability organizations, brain-computer interfaces, and transhumanism, among other topics.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/11/transcript-233-8045.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/gg38nb6muuj9wd4h/233_AI-and-You.mp3" length="40434026" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We are going big on the show this time, with astrophysicist J. Craig Wheeler, Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, Emeritus, at the University of Texas at Austin, and author of the book The Path to Singularity: How Technology Will Challenge the Future of Humanity, released on November 19. He is a Fellow of the American Physical Society and a Legacy Fellow of the American Astronomical Society, has published nearly 400 scientific papers, authored both professional and popular books on supernovae, and served on advisory committees for NSF, NASA, and the National Research Council. His new book, spanning the range of technologies propelling us toward the singularity, from robots to space colonization, has a foreword by Neil deGrasse Tyson, who says, “The world is long overdue for a peek at the state of society and what its future looks like through the lens of a scientist. And when that scientist is also an astrophysicist, you can guarantee the perspectives shared will be as deep and as vast as the universe itself.”
We talk about the evolution of Homo sapiens, high-reliability organizations, brain-computer interfaces, and transhumanism, among other topics.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2485</itunes:duration>
                <itunes:episode>234</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>232 - Special Panel: Educators on AI, part 2</title>
        <itunes:title>232 - Special Panel: Educators on AI, part 2</itunes:title>
        <link>https://aiandyou.net/e/232-special-panel-educators-on-ai-part-2/</link>
                    <comments>https://aiandyou.net/e/232-special-panel-educators-on-ai-part-2/#comments</comments>        <pubDate>Mon, 25 Nov 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0b99aa27-2b3a-311b-9cab-6017a7322d8a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.</p>
<ul>
<li>Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District.</li>
<li>Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals.</li>
<li>Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership.</li>
<li>Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.</li>
</ul>
<p>In the conclusion, we talk about whether students need to read as much as they used to now that they have AI, fact-checking, some disturbing stories about the use of AI detectors in schools, where the panel sees these trends heading, what they’re doing to help students learn better in an AI world, and… Iron Man.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/11/transcript-232-5282.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.</p>
<ul>
<li>Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District.</li>
<li>Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals.</li>
<li>Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership.</li>
<li>Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.</li>
</ul>
<p>In the conclusion, we talk about whether students need to read as much as they used to now that they have AI, fact-checking, some disturbing stories about the use of AI detectors in schools, where the panel sees these trends heading, what they’re doing to help students learn better in an AI world, and… <em>Iron Man</em>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/11/transcript-232-5282.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8y9627wya2kasejq/232_AI-and-You.mp3" length="34628991" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.

Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District.
Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals.
Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership.
Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.

In the conclusion, we talk about whether students need to read as much as they used to now that they have AI, fact-checking, some disturbing stories about the use of AI detectors in schools, where the panel sees these trends heading, what they’re doing to help students learn better in an AI world, and… Iron Man.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2122</itunes:duration>
                <itunes:episode>233</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>231 - Special Panel: Educators on AI, part 1</title>
        <itunes:title>231 - Special Panel: Educators on AI, part 1</itunes:title>
        <link>https://aiandyou.net/e/231-special-panel-educators-on-ai-part-1/</link>
                    <comments>https://aiandyou.net/e/231-special-panel-educators-on-ai-part-1/#comments</comments>        <pubDate>Mon, 18 Nov 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b529d68a-5f41-33cd-a2de-eac8e3153489</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.</p>
<ul>
<li>Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District.</li>
<li>Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals.</li>
<li>Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership.</li>
<li>Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.</li>
</ul>
<p>We talk about how much kids were using GenAI without our knowing, how to turn GenAI in schools from a threat to an opportunity, the issue of cheating with ChatGPT, the discrepancy between how many workers are using AI and how many teachers are using it, how rules get made, confirmation bias and AI, using tools versus gaining competencies, and whether teachers will quit.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/11/transcript-231-3929.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>




<p>We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.</p>
<ul>
<li>Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District.</li>
<li>Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals.</li>
<li>Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership.</li>
<li>Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.</li>
</ul>
<p>We talk about how much kids were using GenAI without our knowing, how to turn GenAI in schools from a threat to an opportunity, the issue of cheating with ChatGPT, the discrepancy between how many workers are using AI and how many teachers are using it, how rules get made, confirmation bias and AI, using tools versus gaining competencies, and whether teachers will quit.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/11/transcript-231-3929.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/pcqg5y82bknxe3t9/231_AI-and-You.mp3" length="33518473" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .




We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.

Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District.
Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals.
Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership.
Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.

We talk about how much kids were using GenAI without our knowing, how to turn GenAI in schools from a threat to an opportunity, the issue of cheating with ChatGPT, the discrepancy between how many workers are using AI and how many teachers are using it, how rules get made, confirmation bias and AI, using tools versus gaining competencies, and whether teachers will quit.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2053</itunes:duration>
                <itunes:episode>232</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>230 - Guest: Caroline Bassett, Digital Humanities Professor, part 2</title>
        <itunes:title>230 - Guest: Caroline Bassett, Digital Humanities Professor, part 2</itunes:title>
        <link>https://aiandyou.net/e/230-guest-caroline-bassett-digital-humanities-professor-part-2/</link>
                    <comments>https://aiandyou.net/e/230-guest-caroline-bassett-digital-humanities-professor-part-2/#comments</comments>        <pubDate>Mon, 11 Nov 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/007fa977-1070-3b0f-a44a-885821a0f9d4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.</p>
<p>Professor Caroline Bassett is the first Director of <a href='https://www.cdh.cam.ac.uk/'>Cambridge Digital Humanities</a>, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.</p>
<p>In the conclusion, we talk about how technology shapes our psychology, how it enables mass movements, science fiction, the role of big Silicon Valley companies, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-230-1993.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p><em>Digital Humanities</em> sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.</p>
<p>Professor Caroline Bassett is the first Director of <a href='https://www.cdh.cam.ac.uk/'><em>Cambridge Digital Humanities</em></a>, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.</p>
<p>In the conclusion, we talk about how technology shapes our psychology, how it enables mass movements, science fiction, the role of big Silicon Valley companies, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-230-1993.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nuqgvzcesmd6atsr/230_AI-and-You.mp3" length="29756004" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.
Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.
In the conclusion, we talk about how technology shapes our psychology, how it enables mass movements, science fiction, the role of big Silicon Valley companies, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1818</itunes:duration>
                <itunes:episode>231</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>229 - Guest: Caroline Bassett, Digital Humanities Professor, part 1</title>
        <itunes:title>229 - Guest: Caroline Bassett, Digital Humanities Professor, part 1</itunes:title>
        <link>https://aiandyou.net/e/229-guest-caroline-bassett-digital-humanities-professor-part-1/</link>
                    <comments>https://aiandyou.net/e/229-guest-caroline-bassett-digital-humanities-professor-part-1/#comments</comments>        <pubDate>Mon, 04 Nov 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/84da1981-aa50-3dd6-9514-f9c948088288</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.</p>
<p>Professor Caroline Bassett is the first Director of <a href='https://www.cdh.cam.ac.uk/'>Cambridge Digital Humanities</a>, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.</p>
<p>In part 1 we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see how what she calls "technocratic rationality," a way of thinking born of a technological culture accelerated by AI, erodes the novelty we can experience in the world, something we should certainly preserve.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-229-8462.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p><em>Digital Humanities</em> sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.</p>
<p>Professor Caroline Bassett is the first Director of <a href='https://www.cdh.cam.ac.uk/'><em>Cambridge Digital Humanities</em></a>, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.</p>
<p>In part 1 we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see how what she calls "technocratic rationality," a way of thinking born of a technological culture accelerated by AI, erodes the novelty we can experience in the world, something we should certainly preserve.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-229-8462.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/66edmyzd9q43tvrb/229_AI-and-You.mp3" length="40847387" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.
Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.
In part 1 we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see how what she calls "technocratic rationality," a way of thinking born of a technological culture accelerated by AI, erodes the novelty we can experience in the world, something we should certainly preserve.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2511</itunes:duration>
                <itunes:episode>230</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>228 - Guest: John Laird, Cognitive architect, part 2</title>
        <itunes:title>228 - Guest: John Laird, Cognitive architect, part 2</itunes:title>
        <link>https://aiandyou.net/e/228-guest-john-laird-cognitive-architect-part-2/</link>
                    <comments>https://aiandyou.net/e/228-guest-john-laird-cognitive-architect-part-2/#comments</comments>        <pubDate>Mon, 28 Oct 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c73bfa9b-7542-3b30-96f9-5d65971bc576</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of <a href='https://laird.engin.umich.edu/'>John Laird</a>. Is cognitive architecture the gateway to artificial general intelligence?</p>
<p>John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his PhD from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.</p>
<p>We talk about relationships between cognitive architectures and AGI, where explainability and transparency come in, Turing tests, where we could be in 10 years, how to recognize AGI, metacognition, and the SOAR architecture.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-228-7013.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p><em>Cognitive architecture</em> deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of <a href='https://laird.engin.umich.edu/'>John Laird</a>. Is cognitive architecture the gateway to artificial general intelligence?</p>
<p>John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his PhD from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.</p>
<p>We talk about relationships between cognitive architectures and AGI, where explainability and transparency come in, Turing tests, where we could be in 10 years, how to recognize AGI, metacognition, and the SOAR architecture.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-228-7013.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/57xasr8pfh9js8j9/228_AI-and-You.mp3" length="34150845" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence?
John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his PhD from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.
We talk about relationships between cognitive architectures and AGI, where explainability and transparency come in, Turing tests, where we could be in 10 years, how to recognize AGI, metacognition, and the Soar architecture.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2092</itunes:duration>
                <itunes:episode>229</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>227 - Guest: John Laird, Cognitive architect, part 1</title>
        <itunes:title>227 - Guest: John Laird, Cognitive architect, part 1</itunes:title>
        <link>https://aiandyou.net/e/227-guest-john-laird-cognitive-architect-part-1/</link>
                    <comments>https://aiandyou.net/e/227-guest-john-laird-cognitive-architect-part-1/#comments</comments>
        <pubDate>Mon, 21 Oct 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/868f8493-fd3d-3953-9e7c-b7e969297de0</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of <a href='https://laird.engin.umich.edu/'>John Laird</a>. Is cognitive architecture the gateway to artificial general intelligence?</p>
<p>John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his Ph.D. from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.</p>
<p>We talk about decision loops, models of the mind, symbolic versus neural models, and how large language models do reasoning.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-227-7042.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p><em>Cognitive architecture</em> deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of <a href='https://laird.engin.umich.edu/'>John Laird</a>. Is cognitive architecture the gateway to artificial general intelligence?</p>
<p>John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his Ph.D. from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.</p>
<p>We talk about decision loops, models of the mind, symbolic versus neural models, and how large language models do reasoning.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-227-7042.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/pam7kc2hqdgq7gy2/227_AI-and-You.mp3" length="35284769" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence?
John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his Ph.D. from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.
We talk about decision loops, models of the mind, symbolic versus neural models, and how large language models do reasoning.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2163</itunes:duration>
                <itunes:episode>228</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>226 - Guest: Sir Anthony Seldon, Historian, Author, Educator</title>
        <itunes:title>226 - Guest: Sir Anthony Seldon, Historian, Author, Educator</itunes:title>
        <link>https://aiandyou.net/e/226-guest-sir-anthony-seldon-historian-author-educator/</link>
                    <comments>https://aiandyou.net/e/226-guest-sir-anthony-seldon-historian-author-educator/#comments</comments>
        <pubDate>Mon, 14 Oct 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/32fb95c1-ddbf-38a2-92bb-108e0b229357</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest today founded the United Kingdom's <a href='https://www.ai-in-education.co.uk/'>AI in Education initiative</a>, but <a href='https://x.com/AnthonySeldon'>Sir Anthony Seldon</a> is known to millions more there as the author of books about prime ministers, having just published one about <a href='https://atlantic-books.co.uk/book/truss-at-10/'>Liz Truss</a>.</p>
<p>Sir Anthony is one of Britain’s leading contemporary historians, educationalists, commentators and political authors. For 20 years he was a transformative headmaster (“principal” in North American lingo) first at Brighton College and then Wellington College, one of the country’s leading independent schools. From 2015 to 2020 he served as Vice-Chancellor of the University of Buckingham. He is now head of Epsom College. He is the author or editor of over 35 books on contemporary history, including insider accounts on the last six prime ministers. In 2018 he wrote <a href='https://www.unibuckinghampress.com/fourth-education-revolution'>The Fourth Education Revolution</a>, which anticipates stunning, unprecedented effects of AI on education. He was knighted in 2014 for services to education and modern political history. </p>
<p>Managing to avoid nearly all the potential Truss references, I talked with him about how teachers should think about the size of the impact of AI on education, the benefits of AI to students and teachers, what the AI in Education initiative is doing, and what the best role of teachers in the classroom is in the AI age.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-226-9099.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest today founded the United Kingdom's <a href='https://www.ai-in-education.co.uk/'>AI in Education initiative</a>, but <a href='https://x.com/AnthonySeldon'>Sir Anthony Seldon</a> is known to millions more there as the author of books about prime ministers, having just published one about <a href='https://atlantic-books.co.uk/book/truss-at-10/'>Liz Truss</a>.</p>
<p>Sir Anthony is one of Britain’s leading contemporary historians, educationalists, commentators and political authors. For 20 years he was a transformative headmaster (“principal” in North American lingo) first at Brighton College and then Wellington College, one of the country’s leading independent schools. From 2015 to 2020 he served as Vice-Chancellor of the University of Buckingham. He is now head of Epsom College. He is the author or editor of over 35 books on contemporary history, including insider accounts on the last six prime ministers. In 2018 he wrote <a href='https://www.unibuckinghampress.com/fourth-education-revolution'><em>The Fourth Education Revolution</em></a>, which anticipates stunning, unprecedented effects of AI on education. He was knighted in 2014 for services to education and modern political history. </p>
<p>Managing to avoid nearly all the potential Truss references, I talked with him about how teachers should think about the size of the impact of AI on education, the benefits of AI to students and teachers, what the AI in Education initiative is doing, and what the best role of teachers in the classroom is in the AI age.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/10/transcript-226-9099.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/utggzbtr8knx5xvz/226_AI-and-You.mp3" length="22413715" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

My guest today founded the United Kingdom's AI in Education initiative, but Sir Anthony Seldon is known to millions more there as the author of books about prime ministers, having just published one about Liz Truss.
Sir Anthony is one of Britain’s leading contemporary historians, educationalists, commentators and political authors. For 20 years he was a transformative headmaster (“principal” in North American lingo) first at Brighton College and then Wellington College, one of the country’s leading independent schools. From 2015 to 2020 he served as Vice-Chancellor of the University of Buckingham. He is now head of Epsom College. He is the author or editor of over 35 books on contemporary history, including insider accounts on the last six prime ministers. In 2018 he wrote The Fourth Education Revolution, which anticipates stunning, unprecedented effects of AI on education. He was knighted in 2014 for services to education and modern political history. 
Managing to avoid nearly all the potential Truss references, I talked with him about how teachers should think about the size of the impact of AI on education, the benefits of AI to students and teachers, what the AI in Education initiative is doing, and what the best role of teachers in the classroom is in the AI age.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1359</itunes:duration>
                <itunes:episode>227</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>225 - Guest: Ravin Jesuthasan, Bestselling Futurist, part 2</title>
        <itunes:title>225 - Guest: Ravin Jesuthasan, Bestselling Futurist, part 2</itunes:title>
        <link>https://aiandyou.net/e/225-guest-ravin-jesuthasan-bestselling-futurist-part-2/</link>
                    <comments>https://aiandyou.net/e/225-guest-ravin-jesuthasan-bestselling-futurist-part-2/#comments</comments>
        <pubDate>Mon, 07 Oct 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9787d9cd-fd5e-3170-b343-b837aaaf75b9</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with <a href='https://www.ravinjesuthasan.com/'>Ravin Jesuthasan</a>, co-author with Tanuj Kapilashrami of the new book, <a href='https://www.skillspoweredorganization.com'>The Skills-Powered Organization: The Journey to The Next Generation Enterprise</a>, released on October 1.</p>
<p>Ravin is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the bestselling book Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs.</p>
<p>He was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive.</p>
<p>In the conclusion, we talk about how AI is reshaping HR functions, including hiring, staffing, and restructuring processes, the role of AI in mentoring and augmenting work, the relationship between the future of work and the future of education, the real value of a degree today, and how AI affects privilege and inequality in the new work environment.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-225-0140.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with <a href='https://www.ravinjesuthasan.com/'>Ravin Jesuthasan</a>, co-author with Tanuj Kapilashrami of the new book, <a href='https://www.skillspoweredorganization.com'><em>The Skills-Powered Organization: The Journey to The Next Generation Enterprise</em></a>, released on October 1.</p>
<p>Ravin is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the bestselling book <em>Work without Jobs</em>, as well as the books <em>Transformative HR</em>, <em>Lead the Work</em>, and <em>Reinventing Jobs</em>.</p>
<p>He was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by <em>Tech News</em>, and as one of the top 100 HR influencers by <em>HR Executive</em>.</p>
<p>In the conclusion, we talk about how AI is reshaping HR functions, including hiring, staffing, and restructuring processes, the role of AI in mentoring and augmenting work, the relationship between the future of work and the future of education, the real value of a degree today, and how AI affects privilege and inequality in the new work environment.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-225-0140.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/gg7ewi4zanguhces/225_AI-and-You.mp3" length="27545418" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with Ravin Jesuthasan, co-author with Tanuj Kapilashrami of the new book, The Skills-Powered Organization: The Journey to The Next Generation Enterprise, released on October 1.
Ravin is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the bestselling book Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs.
He was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive.
In the conclusion, we talk about how AI is reshaping HR functions, including hiring, staffing, and restructuring processes, the role of AI in mentoring and augmenting work, the relationship between the future of work and the future of education, the real value of a degree today, and how AI affects privilege and inequality in the new work environment.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1680</itunes:duration>
                <itunes:episode>226</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>224 - Guest: Ravin Jesuthasan, Bestselling Futurist, part 1</title>
        <itunes:title>224 - Guest: Ravin Jesuthasan, Bestselling Futurist, part 1</itunes:title>
        <link>https://aiandyou.net/e/224-guest-ravin-jesuthasan-bestselling-futurist-part-1/</link>
                    <comments>https://aiandyou.net/e/224-guest-ravin-jesuthasan-bestselling-futurist-part-1/#comments</comments>
        <pubDate>Mon, 30 Sep 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a11778e0-fc3c-3c88-948a-e97fafb99869</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with <a href='https://www.ravinjesuthasan.com/'>Ravin Jesuthasan</a>, co-author with Tanuj Kapilashrami of the new book, <a href='https://www.skillspoweredorganization.com'>The Skills-Powered Organization: The Journey to The Next Generation Enterprise</a>, released on October 1.</p>
<p>Ravin is a futurist and authority on the future of work, human capital, and AI, and is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the Wall Street Journal bestseller Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs.</p>
<p>Ravin was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive.</p>
<p>In this first part, we talk about the impact of AI on work processes, the role of HR in adapting to these changes, and the evolving organizational models that focus on agility, flexibility, and skill-based work transitions. We also discuss AI's role in healthcare, and the importance of transferable skills in an AI-driven world.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-224-2952.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with <a href='https://www.ravinjesuthasan.com/'>Ravin Jesuthasan</a>, co-author with Tanuj Kapilashrami of the new book, <a href='https://www.skillspoweredorganization.com'><em>The Skills-Powered Organization: The Journey to The Next Generation Enterprise</em></a>, released on October 1.</p>
<p>Ravin is a futurist and authority on the future of work, human capital, and AI, and is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the <em>Wall Street Journal</em> bestseller <em>Work without Jobs</em>, as well as the books <em>Transformative HR</em>, <em>Lead the Work</em>, and <em>Reinventing Jobs</em>.</p>
<p>Ravin was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by <em>Tech News</em>, and as one of the top 100 HR influencers by <em>HR Executive</em>.</p>
<p>In this first part, we talk about the impact of AI on work processes, the role of HR in adapting to these changes, and the evolving organizational models that focus on agility, flexibility, and skill-based work transitions. We also discuss AI's role in healthcare, and the importance of transferable skills in an AI-driven world.</p>
<p>All this plus our usual look at today's AI headlines!</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-224-2952.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/s79cdyi28nyx9p93/224_AI-and-You.mp3" length="33423596" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with Ravin Jesuthasan, co-author with Tanuj Kapilashrami of the new book, The Skills-Powered Organization: The Journey to The Next Generation Enterprise, released on October 1.
Ravin is a futurist and authority on the future of work, human capital, and AI, and is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the Wall Street Journal bestseller Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs.
Ravin was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive.
In this first part, we talk about the impact of AI on work processes, the role of HR in adapting to these changes, and the evolving organizational models that focus on agility, flexibility, and skill-based work transitions. We also discuss AI's role in healthcare, and the importance of transferable skills in an AI-driven world.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2047</itunes:duration>
                <itunes:episode>225</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>223 - Guest: Craig A. Kaplan, AGI Expert, part 2</title>
        <itunes:title>223 - Guest: Craig A. Kaplan, AGI Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/223-guest-craig-a-kaplan-agi-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/223-guest-craig-a-kaplan-agi-expert-part-2/#comments</comments>
        <pubDate>Mon, 23 Sep 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d916a049-1f1d-3844-8b03-74c9eb02ae9a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of <a href='https://superintelligence.com'>iQ Company</a>, where he invents advanced intelligence systems.<br>
He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.</p>
<p>Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>In the conclusion of the interview, we talk about the details of the collective intelligence architecture of agents, why Craig says it’s safe, morality of superintelligence, the risks of bad actors, and leading indicators of AGI. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-223-9614.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of <a href='https://superintelligence.com'>iQ Company</a>, where he invents advanced intelligence systems.<br>
He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.</p>
<p>Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>In the conclusion of the interview, we talk about the details of the collective intelligence architecture of agents, why Craig says it’s safe, morality of superintelligence, the risks of bad actors, and leading indicators of AGI. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-223-9614.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nh33m3bjbrw9pavh/223_AI-and-You.mp3" length="35130124" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of iQ Company, where he invents advanced intelligence systems. He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.
Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.
In the conclusion of the interview, we talk about the details of the collective intelligence architecture of agents, why Craig says it’s safe, morality of superintelligence, the risks of bad actors, and leading indicators of AGI. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2154</itunes:duration>
                <itunes:episode>224</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>222 - Guest: Craig A. Kaplan, AGI Expert, part 1</title>
        <itunes:title>222 - Guest: Craig A. Kaplan, AGI Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/222-guest-craig-a-kaplan-agi-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/222-guest-craig-a-kaplan-agi-expert-part-1/#comments</comments>
        <pubDate>Mon, 16 Sep 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8554493c-8511-3938-ae6f-a65ddb864b9a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of <a href='https://superintelligence.com'>iQ Company</a>, where he invents advanced intelligence systems.
He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.</p>
<p>Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>We talk about his work with Herb Simon, bounded rationality, connectionist vs symbolic architectures, jailbreaking large language models, collective intelligence architectures for AI, and a lot more!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-222-6714.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of <a href='https://superintelligence.com'>iQ Company</a>, where he invents advanced intelligence systems.<br>
He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.</p>
<p>Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.</p>
<p>We talk about his work with Herb Simon, bounded rationality, connectionist vs symbolic architectures, jailbreaking large language models, collective intelligence architectures for AI, and a lot more!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-222-6714.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kv733mib42zkc4sq/222_AI-and-You.mp3" length="42109206" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of iQ Company, where he invents advanced intelligence systems. He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.
Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.
We talk about his work with Herb Simon, bounded rationality, connectionist vs symbolic architectures, jailbreaking large language models, collective intelligence architectures for AI, and a lot more!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2590</itunes:duration>
                <itunes:episode>223</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>221 - Guest: Markus Anderljung, AI Regulation Researcher, part 2</title>
        <itunes:title>221 - Guest: Markus Anderljung, AI Regulation Researcher, part 2</itunes:title>
        <link>https://aiandyou.net/e/221-guest-markus-anderljung-ai-regulation-researcher-part-2/</link>
                    <comments>https://aiandyou.net/e/221-guest-markus-anderljung-ai-regulation-researcher-part-2/#comments</comments>        <pubDate>Mon, 09 Sep 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1bd80f1a-4532-34d0-838f-fa4a6b00d9c0</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the <a href='https://www.governance.ai/'>Centre for the Governance of AI</a> in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.</p>
<p>I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.</p>
<p>In the conclusion, we talk about verification processes, ingenious schemes to verify hardware platforms, the frontier AI safety commitments, and who should set safety standards for the industry.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-221-6692.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the <a href='https://www.governance.ai/'>Centre for the Governance of AI</a> in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.</p>
<p>I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.</p>
<p>In the conclusion, we talk about verification processes, ingenious schemes to verify hardware platforms, the frontier AI safety commitments, and who should set safety standards for the industry.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/09/transcript-221-6692.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/uqhktg5dwg98v7ew/221_AI-and-You.mp3" length="28954777" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.
I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.
In the conclusion, we talk about verification processes, ingenious schemes to verify hardware platforms, the frontier AI safety commitments, and who should set safety standards for the industry.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1768</itunes:duration>
                <itunes:episode>222</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>220 - Guest: Markus Anderljung, AI Regulation Researcher, part 1</title>
        <itunes:title>220 - Guest: Markus Anderljung, AI Regulation Researcher, part 1</itunes:title>
        <link>https://aiandyou.net/e/220-guest-markus-anderljung-ai-regulation-researcher-part-1/</link>
                    <comments>https://aiandyou.net/e/220-guest-markus-anderljung-ai-regulation-researcher-part-1/#comments</comments>        <pubDate>Mon, 02 Sep 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5115966b-2dfd-3567-880e-b35a779ba023</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the <a href='https://www.governance.ai/'>Centre for the Governance of AI</a> in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.</p>
<p>I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.</p>
<p>We talk about just what the Centre is, what it does and how it does it, and definitions of artificial general intelligence insofar as they affect governance – just what is the difference between training a system with 10<sup>25</sup> and 10<sup>26</sup> FLOPs, for instance? And also in this part Markus will talk about how monitoring and verification might specifically work.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-220-2431.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the <a href='https://www.governance.ai/'>Centre for the Governance of AI</a> in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.</p>
<p>I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.</p>
<p>We talk about just what the Centre is, what it does and how it does it, and definitions of artificial general intelligence insofar as they affect governance – just what is the difference between training a system with 10<sup>25</sup> and 10<sup>26</sup> FLOPs, for instance? And also in this part Markus will talk about how monitoring and verification might specifically work.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-220-2431.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bktrhe3r27rj7jbn/220_AI-and-You.mp3" length="36803214" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.
I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.
We talk about just what the Centre is, what it does and how it does it, and definitions of artificial general intelligence insofar as they affect governance – just what is the difference between training a system with 10^25 and 10^26 FLOPs, for instance? And also in this part Markus will talk about how monitoring and verification might specifically work.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2258</itunes:duration>
                <itunes:episode>221</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>219 - Guest: Sophie Kleber, Human-AI Relationship Expert, part 2</title>
        <itunes:title>219 - Guest: Sophie Kleber, Human-AI Relationship Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/219-guest-sophie-kleber-human-ai-relationship-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/219-guest-sophie-kleber-human-ai-relationship-expert-part-2/#comments</comments>        <pubDate>Mon, 26 Aug 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/92a629c8-ea63-3bbc-9ab1-debc5977cf16</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem.</p>
<p>And that's where <a href='https://www.sophisticatedberlin.com/'>Sophie Kleber</a> comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. </p>
<p>In the conclusion of our interview, we talk about how she got into the user experience field, the emergence of a third paradigm of user interfaces, the future of smart homes, privacy, large language models coming to consumer devices, and brain-computer interfaces.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-219-0474.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. <em>How </em>we interact with them <em>is </em>the problem.</p>
<p>And that's where <a href='https://www.sophisticatedberlin.com/'>Sophie Kleber</a> comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. </p>
<p>In the conclusion of our interview, we talk about how she got into the user experience field, the emergence of a third paradigm of user interfaces, the future of smart homes, privacy, large language models coming to consumer devices, and brain-computer interfaces.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-219-0474.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/9mh533afwcpjc8z7/219_AI-and-You.mp3" length="35866150" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem.
And that's where Sophie Kleber comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. 
In the conclusion of our interview, we talk about how she got into the user experience field, the emergence of a third paradigm of user interfaces, the future of smart homes, privacy, large language models coming to consumer devices, and brain-computer interfaces.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2200</itunes:duration>
                <itunes:episode>220</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>218 - Guest: Sophie Kleber, Human-AI Relationship Expert, part 1</title>
        <itunes:title>218 - Guest: Sophie Kleber, Human-AI Relationship Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/218-guest-sophie-kleber-human-ai-relationship-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/218-guest-sophie-kleber-human-ai-relationship-expert-part-1/#comments</comments>        <pubDate>Mon, 19 Aug 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/69c2a174-0ebf-3a9d-b372-7b9569237e97</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem.</p>
<p>And that's where <a href='https://www.sophisticatedberlin.com/'>Sophie Kleber</a> comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. </p>
<p>We talk about the Uncanny Valley and how we relate to computers as though they were human or inhuman, and what if they looked like Bugs Bunny. We talk about the environments and situations where some people have intimate relationships with AIs, gender stereotyping in large language models, and where emotional interactions with computers help or hinder.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-218-5720.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. <em>How </em>we interact with them <em>is </em>the problem.</p>
<p>And that's where <a href='https://www.sophisticatedberlin.com/'>Sophie Kleber</a> comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. </p>
<p>We talk about the Uncanny Valley and how we relate to computers as though they were human or inhuman, and what if they looked like Bugs Bunny. We talk about the environments and situations where some people have intimate relationships with AIs, gender stereotyping in large language models, and where emotional interactions with computers help or hinder.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-218-5720.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/j4e4zb7zqeq8zspe/218_AI-and-You.mp3" length="34461807" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem.
And that's where Sophie Kleber comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. 
We talk about the Uncanny Valley and how we relate to computers as though they were human or inhuman, and what if they looked like Bugs Bunny. We talk about the environments and situations where some people have intimate relationships with AIs, gender stereotyping in large language models, and where emotional interactions with computers help or hinder.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2112</itunes:duration>
                <itunes:episode>219</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>217 - AI in Education</title>
        <itunes:title>217 - AI in Education</itunes:title>
        <link>https://aiandyou.net/e/217-ai-in-education/</link>
                    <comments>https://aiandyou.net/e/217-ai-in-education/#comments</comments>        <pubDate>Mon, 12 Aug 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/fe6505c1-8013-39e0-a6a0-3ac0f0c69083</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Teachers all over the world right now are having similar thoughts: Is AI going to take my job? How do I deal with homework that might have been done by ChatGPT? I know, because I've talked with many teachers, and these are universal concerns.</p>
<p>So I'm visiting the topic of AI in education - not for the first time, not for the last. There are important and urgent issues to tackle; they become most acute at the high school level, but this episode will be useful for all levels.</p>
<p>The reason it's so important for me to work with schools so much as an AI change management consultant is that there's no need for teachers to fear for their jobs. They are doing the most important job on the planet right now because they are literally educating the generation that is going to save the world. And generative AI has not created a learning problem: it's created learning opportunities. It's not created a teaching problem; it's created teaching opportunities. It has, however, created an assessment problem, and I'll talk about that.</p>
<p>Kids need their human teachers more than ever before to model for them how to deal with disruption from technology, because change will never again happen as slowly as it does today, and all of their careers will be disrupted far more than anyone's is today. No student is going to remember something ChatGPT said for the rest of their life. The teacher’s job is to focus on the qualities that the AI cannot embody—the personal interactions that occur face to face when the teacher makes that lasting impression that inspires the student.</p>
<p>Let's have honest, deep, and productive conversations about these issues now. A new school year is approaching and this is the time.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-217-8986.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Teachers all over the world right now are having similar thoughts: <em>Is AI going to take my job? How do I deal with homework that might have been done by ChatGPT?</em> I know, because I've talked with many teachers, and these are universal concerns.</p>
<p>So I'm revisiting the topic of AI in education, not for the first time and not for the last. There are important and urgent issues to tackle; they become most acute at the high school level, but this episode will be useful for all levels.</p>
<p>The reason I work so much with schools as an AI change management consultant is that there's no need for teachers to fear for their jobs. They are doing the most important job on the planet right now because they are literally educating the generation that is going to save the world. And generative AI has not created a learning problem: it's created learning opportunities. It's not created a teaching problem; it's created teaching opportunities. It has, however, created an <em>assessment</em> problem, and I'll talk about that.</p>
<p>Kids need their human teachers more than ever before to model for them how to deal with disruption from technology, because change will never again happen as slowly as it does today, and all of their careers will be disrupted far more than anyone's is today. No student is going to remember something ChatGPT said for the rest of their life. The teacher’s job is to focus on the qualities that the AI cannot embody—the personal interactions that occur face to face when the teacher makes that lasting impression that inspires the student.</p>
<p>Let's have honest, deep, and productive conversations about these issues now. A new school year is approaching and this is the time.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-217-8986.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/im28nzt43swgzj3y/217_AI-and-You.mp3" length="27021715" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Teachers all over the world right now are having similar thoughts: Is AI going to take my job? How do I deal with homework that might have been done by ChatGPT? I know, because I've talked with many teachers, and these are universal concerns.
So I'm revisiting the topic of AI in education, not for the first time and not for the last. There are important and urgent issues to tackle; they become most acute at the high school level, but this episode will be useful for all levels.
The reason I work so much with schools as an AI change management consultant is that there's no need for teachers to fear for their jobs. They are doing the most important job on the planet right now because they are literally educating the generation that is going to save the world. And generative AI has not created a learning problem: it's created learning opportunities. It's not created a teaching problem; it's created teaching opportunities. It has, however, created an assessment problem, and I'll talk about that.
Kids need their human teachers more than ever before to model for them how to deal with disruption from technology, because change will never again happen as slowly as it does today, and all of their careers will be disrupted far more than anyone's is today. No student is going to remember something ChatGPT said for the rest of their life. The teacher’s job is to focus on the qualities that the AI cannot embody—the personal interactions that occur face to face when the teacher makes that lasting impression that inspires the student.
Let's have honest, deep, and productive conversations about these issues now. A new school year is approaching and this is the time.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1647</itunes:duration>
                <itunes:episode>218</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>216 - Guest: John Danaher, Law Professor in AI Ethics, part 2</title>
        <itunes:title>216 - Guest: John Danaher, Law Professor in AI Ethics, part 2</itunes:title>
        <link>https://aiandyou.net/e/216-guest-john-danaher-law-professor-in-ai-ethics-part-2/</link>
        <comments>https://aiandyou.net/e/216-guest-john-danaher-law-professor-in-ai-ethics-part-2/#comments</comments>
        <pubDate>Mon, 05 Aug 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0b54819f-b5f5-30cb-8c3c-9671b3a15072</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, <a href='https://www.amazon.com/Automation-Utopia-Human-Flourishing-without/dp/0674984242'>Automation and Utopia: Human Flourishing in a World without Work</a>, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. </p>
<p>John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social And Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine.</p>
<p>In the conclusion of the interview we talk about generative AI extending our minds, the Luddite Fallacy and why this time things will be different, the effects of automation on class structure, and… Taylor Swift.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-216-2766.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, <a href='https://www.amazon.com/Automation-Utopia-Human-Flourishing-without/dp/0674984242'>Automation and Utopia: Human Flourishing in a World without Work</a>, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. </p>
<p>John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of <em>Robot Sex: Social And Ethical Implications</em> from MIT Press, and his work has appeared in <em>The Guardian</em>, <em>Aeon</em>, and <em>The Philosopher’s Magazine</em>.</p>
<p>In the conclusion of the interview we talk about generative AI extending our minds, the Luddite Fallacy and why this time things will be different, the effects of automation on class structure, and… Taylor Swift.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/08/transcript-216-2766.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/kinfzbayigizrvdc/216_AI-and-You.mp3" length="36836233" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, Automation and Utopia: Human Flourishing in a World without Work, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. 
John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social And Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine.
In the conclusion of the interview we talk about generative AI extending our minds, the Luddite Fallacy and why this time things will be different, the effects of automation on class structure, and… Taylor Swift.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2260</itunes:duration>
                <itunes:episode>217</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>215 - Guest: John Danaher, Law Professor in AI Ethics, part 1</title>
        <itunes:title>215 - Guest: John Danaher, Law Professor in AI Ethics, part 1</itunes:title>
        <link>https://aiandyou.net/e/215-guest-john-danaher-law-professor-in-ai-ethics-part-1/</link>
        <comments>https://aiandyou.net/e/215-guest-john-danaher-law-professor-in-ai-ethics-part-1/#comments</comments>
        <pubDate>Mon, 29 Jul 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/45e2ba55-b8a6-3078-8f62-fa7bc46045ad</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, <a href='https://www.amazon.com/Automation-Utopia-Human-Flourishing-without/dp/0674984242'>Automation and Utopia: Human Flourishing in a World without Work</a>, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. </p>
<p>John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social And Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine.</p>
<p>In the first part of the interview we talk about how much jobs may be automated and the methodology behind studies of that, the impact of automation on job satisfaction, what’s happening in academia, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-215-6129.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, <a href='https://www.amazon.com/Automation-Utopia-Human-Flourishing-without/dp/0674984242'>Automation and Utopia: Human Flourishing in a World without Work</a>, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. </p>
<p>John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of <em>Robot Sex: Social And Ethical Implications</em> from MIT Press, and his work has appeared in <em>The Guardian</em>, <em>Aeon</em>, and <em>The Philosopher’s Magazine</em>.</p>
<p>In the first part of the interview we talk about how much jobs may be automated and the methodology behind studies of that, the impact of automation on job satisfaction, what’s happening in academia, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-215-6129.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/2ijgred9tvggb743/215_AI-and-You.mp3" length="31265255" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, Automation and Utopia: Human Flourishing in a World without Work, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. 
John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social And Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine.
In the first part of the interview we talk about how much jobs may be automated and the methodology behind studies of that, the impact of automation on job satisfaction, what’s happening in academia, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1912</itunes:duration>
                <itunes:episode>216</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>214 - Guest: Lord Tim Clement-Jones, Government AI Advisory Chair, part 2</title>
        <itunes:title>214 - Guest: Lord Tim Clement-Jones, Government AI Advisory Chair, part 2</itunes:title>
        <link>https://aiandyou.net/e/214-guest-lord-tim-clement-jones-government-ai-advisory-chair-part-2/</link>
        <comments>https://aiandyou.net/e/214-guest-lord-tim-clement-jones-government-ai-advisory-chair-part-2/#comments</comments>
        <pubDate>Mon, 22 Jul 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/25b5be2d-e66b-3d23-991c-5563ed97b670</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Helping the British Government understand AI since 2016 is our guest, <a href='https://www.lordclementjones.org/'>Lord Tim Clement-Jones</a>, co-founder and co-chair of Britain's <a href='https://publications.parliament.uk/pa/cm/cmallparty/240530/artificial-intelligence.htm'>All-Party Parliamentary Group on Artificial Intelligence</a>. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the <a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10001.htm'>House of Lords Select Committee on Artificial Intelligence</a>, which reported in 2018 with “<a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf'>AI in the UK: Ready Willing and Able?</a>” and its follow-up report in 2020, “<a href='https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf'>AI in the UK: No Room for Complacency</a>.” His new book, "<a href='https://www.amazon.com/Living-Algorithm-Servant-Master-Governance/dp/1911397923'>Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future</a>," came out in the UK in March, with a North American release date of July 18.</p>
<p>In the second half, we talk about elections, including the one just held in the UK, and disinformation, what AI and robots do to the flow of capital, the effects of AI upon education and enterprise culture, privacy and making AI accountable and trustworthy.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-214-1100.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Helping the British Government understand AI since 2016 is our guest, <a href='https://www.lordclementjones.org/'>Lord Tim Clement-Jones</a>, co-founder and co-chair of Britain's <a href='https://publications.parliament.uk/pa/cm/cmallparty/240530/artificial-intelligence.htm'>All-Party Parliamentary Group on Artificial Intelligence</a>. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the <a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10001.htm'>House of Lords Select Committee on Artificial Intelligence</a>, which reported in 2018 with “<a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf'>AI in the UK: Ready Willing and Able?</a>” and its follow-up report in 2020, “<a href='https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf'>AI in the UK: No Room for Complacency</a>.” His new book, "<a href='https://www.amazon.com/Living-Algorithm-Servant-Master-Governance/dp/1911397923'>Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future</a>," came out in the UK in March, with a North American release date of July 18.</p>
<p>In the second half, we talk about elections, including the one just held in the UK, and disinformation, what AI and robots do to the flow of capital, the effects of AI upon education and enterprise culture, privacy and making AI accountable and trustworthy.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-214-1100.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/hcipgy26nxjhv77h/214_AI-and-You.mp3" length="30630365" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Helping the British Government understand AI since 2016 is our guest, Lord Tim Clement-Jones, co-founder and co-chair of Britain's All-Party Parliamentary Group on Artificial Intelligence. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with “AI in the UK: Ready Willing and Able?” and its follow-up report in 2020, “AI in the UK: No Room for Complacency.” His new book, "Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future," came out in the UK in March, with a North American release date of July 18.
In the second half, we talk about elections, including the one just held in the UK, and disinformation, what AI and robots do to the flow of capital, the effects of AI upon education and enterprise culture, privacy and making AI accountable and trustworthy.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1872</itunes:duration>
                <itunes:episode>215</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>213 - Guest: Lord Tim Clement-Jones, Government AI Advisory Chair, part 1</title>
        <itunes:title>213 - Guest: Lord Tim Clement-Jones, Government AI Advisory Chair, part 1</itunes:title>
        <link>https://aiandyou.net/e/213-guest-lord-tim-clement-jones-government-ai-advisory-chair-part-1/</link>
        <comments>https://aiandyou.net/e/213-guest-lord-tim-clement-jones-government-ai-advisory-chair-part-1/#comments</comments>
        <pubDate>Mon, 15 Jul 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9cbabd63-c619-36fa-af99-7634101b8aa6</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Helping the British Government understand AI since 2016 is our guest, <a href='https://www.lordclementjones.org/'>Lord Tim Clement-Jones</a>, co-founder and co-chair of Britain's <a href='https://publications.parliament.uk/pa/cm/cmallparty/240530/artificial-intelligence.htm'>All-Party Parliamentary Group on Artificial Intelligence</a>. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the <a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10001.htm'>House of Lords Select Committee on Artificial Intelligence</a>, which reported in 2018 with “<a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf'>AI in the UK: Ready Willing and Able?</a>” and its follow-up report in 2020, “<a href='https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf'>AI in the UK: No Room for Complacency</a>.” His new book, "<a href='https://www.amazon.com/Living-Algorithm-Servant-Master-Governance/dp/1911397923'>Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future</a>," came out in the UK in March, with a North American release date of July 18.</p>
<p>In this first part, Tim gives a big picture of how AI regulation has been proceeding on the global stage since before large language models were a thing, giving us the context that took us from the Asilomar Principles to today’s Hiroshima principles, the EU AI Act, and the new ISO/IEC 42001 standard for AI. And we talk about long-term planning, intellectual property rights, the effects of the open letters that called for a pause or moratorium on model training, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-213-8610.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Helping the British Government understand AI since 2016 is our guest, <a href='https://www.lordclementjones.org/'>Lord Tim Clement-Jones</a>, co-founder and co-chair of Britain's <a href='https://publications.parliament.uk/pa/cm/cmallparty/240530/artificial-intelligence.htm'>All-Party Parliamentary Group on Artificial Intelligence</a>. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the <a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10001.htm'>House of Lords Select Committee on Artificial Intelligence</a>, which reported in 2018 with “<a href='https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf'>AI in the UK: Ready Willing and Able?</a>” and its follow-up report in 2020, “<a href='https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf'>AI in the UK: No Room for Complacency</a>.” His new book, "<a href='https://www.amazon.com/Living-Algorithm-Servant-Master-Governance/dp/1911397923'>Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future</a>," came out in the UK in March, with a North American release date of July 18.</p>
<p>In this first part, Tim gives a big picture of how AI regulation has been proceeding on the global stage since before large language models were a thing, giving us the context that took us from the Asilomar Principles to today’s Hiroshima principles, the EU AI Act, and the new ISO/IEC 42001 standard for AI. And we talk about long-term planning, intellectual property rights, the effects of the open letters that called for a pause or moratorium on model training, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-213-8610.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
        <enclosure url="https://mcdn.podbean.com/mf/web/33wp2kv4by96z4yi/213_AI-and-You.mp3" length="37510819" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Helping the British Government understand AI since 2016 is our guest, Lord Tim Clement-Jones, co-founder and co-chair of Britain's All-Party Parliamentary Group on Artificial Intelligence. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with “AI in the UK: Ready Willing and Able?” and its follow-up report in 2020, “AI in the UK: No Room for Complacency.” His new book, "Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future," came out in the UK in March, with a North American release date of July 18.
In this first part, Tim gives a big picture of how AI regulation has been proceeding on the global stage since before large language models were a thing, giving us the context that took us from the Asilomar Principles to today’s Hiroshima principles, the EU AI Act, and the new ISO/IEC 42001 standard for AI. And we talk about long-term planning, intellectual property rights, the effects of the open letters that called for a pause or moratorium on model training, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2302</itunes:duration>
                <itunes:episode>214</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>212 - Guest: Antonina Burlachenko, AI Regulatory Consultant</title>
        <itunes:title>212 - Guest: Antonina Burlachenko, AI Regulatory Consultant</itunes:title>
        <link>https://aiandyou.net/e/212-guest-antonina-burlachenko-ai-regulatory-consultant/</link>
        <comments>https://aiandyou.net/e/212-guest-antonina-burlachenko-ai-regulatory-consultant/#comments</comments>
        <pubDate>Mon, 08 Jul 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/02766e74-ce84-3685-8965-e13120497927</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>As the European Union AI Act rolls out, there are so many questions about what it will mean to businesses trying to navigate the incredibly volatile and complex field of AI regulation. Here to answer those questions is <a href='https://www.linkedin.com/in/antonina-burlachenko/'>Antonina Burlachenko</a>, Head of Quality and Regulatory Consulting at Star Global Consulting, calling from Poland. She explains what the Act really means for businesses and consumers, comparing it with GDPR, and providing some technical information around standards and regulations and other aspects of what it’s like for businesses to engage with the Act at a practical level. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-212-4004.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>As the European Union AI Act rolls out, there are many questions about what it will mean for businesses trying to navigate the incredibly volatile and complex field of AI regulation. Here to answer those questions is <a href='https://www.linkedin.com/in/antonina-burlachenko/'>Antonina Burlachenko</a>, Head of Quality and Regulatory Consulting at Star Global Consulting, calling from Poland. She explains what the Act really means for businesses and consumers, compares it with GDPR, and offers practical technical detail on the standards and regulations businesses must engage with under the Act.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/07/transcript-212-4004.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/wwryehwjp2beu48f/212_AI-and-You.mp3" length="33566522" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

As the European Union AI Act rolls out, there are many questions about what it will mean for businesses trying to navigate the incredibly volatile and complex field of AI regulation. Here to answer those questions is Antonina Burlachenko, Head of Quality and Regulatory Consulting at Star Global Consulting, calling from Poland. She explains what the Act really means for businesses and consumers, compares it with GDPR, and offers practical technical detail on the standards and regulations businesses must engage with under the Act.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2056</itunes:duration>
                <itunes:episode>213</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>211 - Guest: Matt Beane, Future of Work Author, part 2</title>
        <itunes:title>211 - Guest: Matt Beane, Future of Work Author, part 2</itunes:title>
        <link>https://aiandyou.net/e/211-guest-matt-beane-future-of-work-author-part-2/</link>
                    <comments>https://aiandyou.net/e/211-guest-matt-beane-future-of-work-author-part-2/#comments</comments>        <pubDate>Mon, 01 Jul 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2f8142db-e7dd-3a9f-aff8-7470a556acf0</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations and even AI defy norms and rules in the 21st century. </p>
<p>His new book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, was just published by Harper Business, and he has a special deal for you as a listener: get a free copy of the first chapter by going to <a href='https://aiandyou.theskillcodebook.com'>aiandyou.theskillcodebook.com</a>. The book lays out a plan for us to protect our skills, and by extension the human connection between experts and novices (which is the foundation of skill-building), even as AI continues to take hold in our lives.</p>
<p>In the conclusion, we talk more about what AIs do to the mentoring and learning pipelines in the workplace, and how education should pivot to deal with the changes to the future of work. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-211-8964.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations and even AI defy norms and rules in the 21st century. </p>
<p>His new book, <em>The Skill Code: How to Save Human Ability in an Age of Intelligent Machines</em>, was just published by Harper Business, and he has a special deal for you as a listener: get a free copy of the first chapter by going to <a href='https://aiandyou.theskillcodebook.com'>aiandyou.theskillcodebook.com</a>. The book lays out a plan for us to protect our skills, and by extension the human connection between experts and novices (which is the foundation of skill-building), even as AI continues to take hold in our lives.</p>
<p>In the conclusion, we talk more about what AIs do to the mentoring and learning pipelines in the workplace, and how education should pivot to deal with the changes to the future of work. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-211-8964.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/aiyu6f8zdkqnxc7g/211_AI-and-You.mp3" length="38426150" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations and even AI defy norms and rules in the 21st century. 
His new book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, was just published by Harper Business, and he has a special deal for you as a listener: get a free copy of the first chapter by going to http://aiandyou.theskillcodebook.com. The book lays out a plan for us to protect our skills, and by extension the human connection between experts and novices (which is the foundation of skill-building), even as AI continues to take hold in our lives.
In the conclusion, we talk more about what AIs do to the mentoring and learning pipelines in the workplace, and how education should pivot to deal with the changes to the future of work. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2360</itunes:duration>
                <itunes:episode>212</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>210 - Guest: Matt Beane, Future of Work Author, part 1</title>
        <itunes:title>210 - Guest: Matt Beane, Future of Work Author, part 1</itunes:title>
        <link>https://aiandyou.net/e/210-guest-matt-beane-future-of-work-author-part-1/</link>
                    <comments>https://aiandyou.net/e/210-guest-matt-beane-future-of-work-author-part-1/#comments</comments>        <pubDate>Mon, 24 Jun 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3550cb20-a490-3951-af5e-868862e79018</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations and even AI defy norms and rules in the 21st century. </p>
<p>His new book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, was just published by Harper Business, and he has a special deal for you as a listener: get a free copy of the first chapter by going to <a href='https://aiandyou.theskillcodebook.com'>aiandyou.theskillcodebook.com</a>. The book lays out a plan for us to protect our skills, and by extension the human connection between experts and novices (which is the foundation of skill-building), even as AI continues to take hold in our lives.</p>
<p>In this first part, we talk about how Matt studied surgeons in operating rooms for his PhD thesis, saw how the introduction of a robotic surgical system stifled the time-honored process of mentoring new surgeons, generalized this finding to other fields, and observed the rise of “shadow learning,” where people bend or break the rules to get the learning they need.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-210-1004.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations and even AI defy norms and rules in the 21st century. </p>
<p>His new book, <em>The Skill Code: How to Save Human Ability in an Age of Intelligent Machines</em>, was just published by Harper Business, and he has a special deal for you as a listener: get a free copy of the first chapter by going to <a href='https://aiandyou.theskillcodebook.com'>aiandyou.theskillcodebook.com</a>. The book lays out a plan for us to protect our skills, and by extension the human connection between experts and novices (which is the foundation of skill-building), even as AI continues to take hold in our lives.</p>
<p>In this first part, we talk about how Matt studied surgeons in operating rooms for his PhD thesis, saw how the introduction of a robotic surgical system stifled the time-honored process of mentoring new surgeons, generalized this finding to other fields, and observed the rise of “shadow learning,” where people bend or break the rules to get the learning they need.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-210-1004.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/t2fk784kff9r49bv/210_AI-and-You.mp3" length="32354457" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations and even AI defy norms and rules in the 21st century. 
His new book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, was just published by Harper Business, and he has a special deal for you as a listener: get a free copy of the first chapter by going to http://aiandyou.theskillcodebook.com. The book lays out a plan for us to protect our skills, and by extension the human connection between experts and novices (which is the foundation of skill-building), even as AI continues to take hold in our lives.
In this first part, we talk about how Matt studied surgeons in operating rooms for his PhD thesis, saw how the introduction of a robotic surgical system stifled the time-honored process of mentoring new surgeons, generalized this finding to other fields, and observed the rise of “shadow learning,” where people bend or break the rules to get the learning they need. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1980</itunes:duration>
                <itunes:episode>211</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>209 - Guest: William A. Adams, Technologist</title>
        <itunes:title>209 - Guest: William A. Adams, Technologist</itunes:title>
        <link>https://aiandyou.net/e/209-guest-william-a-adams-technologist/</link>
                    <comments>https://aiandyou.net/e/209-guest-william-a-adams-technologist/#comments</comments>        <pubDate>Mon, 17 Jun 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/fd21a920-1dea-3bdc-9ced-159b5d1bbeef</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest is <a href='https://www.william-a-adams.com/'>William A. Adams</a>, technologist, philanthropist, and recorded by the Computer History Museum as one of the first Black entrepreneurs in Silicon Valley. He was the first technical advisor to Microsoft’s CTO Kevin Scott and has founded and overseen global initiatives at Microsoft, from XML technologies as early as 1998 to DE&amp;I initiatives in 2015. The Leap program, with its focus on diverse hiring, was named Microsoft’s D&amp;I Program of the Year in 2020.</p>
<p>We talk about William’s experience creating the Leap program, its impact, the relationship between AI and diversity, equity, and inclusion programs like Leap, and creating personalized chatbots. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-209-4071.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest is <a href='https://www.william-a-adams.com/'>William A. Adams</a>, technologist, philanthropist, and recorded by the Computer History Museum as one of the first Black entrepreneurs in Silicon Valley. He was the first technical advisor to Microsoft’s CTO Kevin Scott and has founded and overseen global initiatives at Microsoft, from XML technologies as early as 1998 to DE&amp;I initiatives in 2015. The Leap program, with its focus on diverse hiring, was named Microsoft’s D&amp;I Program of the Year in 2020.</p>
<p>We talk about William’s experience creating the Leap program, its impact, the relationship between AI and diversity, equity, and inclusion programs like Leap, and creating personalized chatbots. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-209-4071.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ap6q2ywwvbhg6u88/209_AI-and-You.mp3" length="35849431" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

My guest is William A. Adams, technologist, philanthropist, and recorded by the Computer History Museum as one of the first Black entrepreneurs in Silicon Valley. He was the first technical advisor to Microsoft’s CTO Kevin Scott and has founded and overseen global initiatives at Microsoft, from XML technologies as early as 1998 to DE&I initiatives in 2015. The Leap program, with its focus on diverse hiring, was named Microsoft’s D&I Program of the Year in 2020.
We talk about William’s experience creating the Leap program, its impact, the relationship between AI and diversity, equity, and inclusion programs like Leap, and creating personalized chatbots. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2199</itunes:duration>
                <itunes:episode>210</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>208 - Guest: Oliver Burkeman, Philosophy Writer, part 2</title>
        <itunes:title>208 - Guest: Oliver Burkeman, Philosophy Writer, part 2</itunes:title>
        <link>https://aiandyou.net/e/208-guest-oliver-burkeman-philosophy-writer-part-2/</link>
                    <comments>https://aiandyou.net/e/208-guest-oliver-burkeman-philosophy-writer-part-2/#comments</comments>        <pubDate>Mon, 10 Jun 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b9a36cf6-62f5-35e2-8182-40de228a3910</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book <a href='https://www.amazon.com/Four-Thousand-Weeks-Management-Mortals/dp/0374159122'>Four Thousand Weeks: Time Management for Mortals</a> and former author of the psychology column “This Column Will Change Your Life” in The Guardian.</p>
<p>Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the huge amount of technology introduced into our lives, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI, which accelerates everything it touches, have on our work life?</p>
<p>This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at <a href='https://www.oliverburkeman.com/'>OliverBurkeman.com</a>.</p>
<p>In the conclusion of the interview, we talk about whether this is Luddism, the influence of the Silicon Valley billionaires’ pursuit of immortality, the appropriate use of AI to save us time, and what will remain constant throughout any amount of technological evolution. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-208-5393.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Our relationship with time is <em>dysfunctional</em>. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book <a href='https://www.amazon.com/Four-Thousand-Weeks-Management-Mortals/dp/0374159122'><em>Four Thousand Weeks: Time Management for Mortals</em></a> and former author of the psychology column “This Column Will Change Your Life” in <em>The Guardian</em>.</p>
<p>Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the huge amount of technology introduced into our lives, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI, which accelerates everything it touches, have on our work life?</p>
<p>This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at <a href='https://www.oliverburkeman.com/'>OliverBurkeman.com</a>.</p>
<p>In the conclusion of the interview, we talk about whether this is Luddism, the influence of the Silicon Valley billionaires’ pursuit of immortality, the appropriate use of AI to save us time, and what will remain constant throughout any amount of technological evolution. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/06/transcript-208-5393.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/j4uippznp32u6afk/208_AI-and-You.mp3" length="28075391" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book Four Thousand Weeks: Time Management for Mortals and former author of the psychology column “This Column Will Change Your Life” in The Guardian.
Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the huge amount of technology introduced into our lives, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI, which accelerates everything it touches, have on our work life? 
This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at OliverBurkeman.com.
In the conclusion of the interview, we talk about whether this is Luddism, the influence of the Silicon Valley billionaires’ pursuit of immortality, the appropriate use of AI to save us time, and what will remain constant throughout any amount of technological evolution. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1713</itunes:duration>
                <itunes:episode>209</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>207 - Guest: Oliver Burkeman, Philosophy Writer, part 1</title>
        <itunes:title>207 - Guest: Oliver Burkeman, Philosophy Writer, part 1</itunes:title>
        <link>https://aiandyou.net/e/207-guest-oliver-burkeman-philosophy-writer-part-1/</link>
                    <comments>https://aiandyou.net/e/207-guest-oliver-burkeman-philosophy-writer-part-1/#comments</comments>        <pubDate>Mon, 03 Jun 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3f0d8905-9944-398e-9e76-10249a0dd588</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book <a href='https://www.amazon.com/Four-Thousand-Weeks-Management-Mortals/dp/0374159122'>Four Thousand Weeks: Time Management for Mortals</a> and former author of the psychology column “This Column Will Change Your Life” in The Guardian.</p>
<p>Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the huge amount of technology introduced into our lives, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI, which accelerates everything it touches, have on our work life?</p>
<p>This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at <a href='https://www.oliverburkeman.com/'>OliverBurkeman.com</a>.</p>
<p>In this first half of the interview, we talk about the parable of the rocks in the jar and why it’s a pernicious lie, the psychology of perceiving life as finite, and how technology has not reduced our work stress and may be making it worse through induced demand.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/05/transcript-207-7011.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Our relationship with time is <em>dysfunctional</em>. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book <a href='https://www.amazon.com/Four-Thousand-Weeks-Management-Mortals/dp/0374159122'><em>Four Thousand Weeks: Time Management for Mortals</em></a> and former author of the psychology column “This Column Will Change Your Life” in <em>The Guardian</em>.</p>
<p>Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the huge amount of technology introduced into our lives, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI, which accelerates everything it touches, have on our work life?</p>
<p>This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at <a href='https://www.oliverburkeman.com/'>OliverBurkeman.com</a>.</p>
<p>In this first half of the interview, we talk about the parable of the rocks in the jar and why it’s a pernicious lie, the psychology of perceiving life as finite, and how technology has not reduced our work stress and may be making it worse through <em>induced demand</em>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.wordpress.com/wp-content/uploads/2024/05/transcript-207-7011.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/tpvimu54ejubewkx/207_AI-and-You.mp3" length="30838101" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book Four Thousand Weeks: Time Management for Mortals and former author of the psychology column “This Column Will Change Your Life” in The Guardian.
Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the huge amount of technology introduced into our lives, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI, which accelerates everything it touches, have on our work life? 
This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at OliverBurkeman.com.
In this first half of the interview, we talk about the parable of the rocks in the jar and why it’s a pernicious lie, the psychology of perceiving life as finite, and how technology has not reduced our work stress and may be making it worse through induced demand. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1885</itunes:duration>
                <itunes:episode>208</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>206 - Guest: Mounir Shita, AGI Researcher</title>
        <itunes:title>206 - Guest: Mounir Shita, AGI Researcher</itunes:title>
        <link>https://aiandyou.net/e/206-guest-mounir-shita-agi-researcher/</link>
                    <comments>https://aiandyou.net/e/206-guest-mounir-shita-agi-researcher/#comments</comments>        <pubDate>Mon, 27 May 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9cd14855-45f8-39cf-a46b-d0e781444c12</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Mounir Shita, CEO of Kimera Systems, is author of the upcoming book The Science of Intelligence, which contains some interesting and thought-provoking explorations of intelligence that had me thinking about Pedro Domingos’ book The Master Algorithm. We talk about theories of AGI, free will, egg smashing, and Mounir's prototype smartphone app that learned how to silence itself in a movie theater!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/05/transcript-206-3499.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Mounir Shita, CEO of Kimera Systems, is author of the upcoming book <em>The Science of Intelligence,</em> which contains some interesting and thought-provoking explorations of intelligence that had me thinking about Pedro Domingos’ book <em>The Master Algorithm</em>. We talk about theories of AGI, free will, egg smashing, and Mounir's prototype smartphone app that learned how to silence itself in a movie theater!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/05/transcript-206-3499.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/aqyuripbxyypw2pd/206_AI-and-You.mp3" length="33427776" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Mounir Shita, CEO of Kimera Systems, is author of the upcoming book The Science of Intelligence, which contains some interesting and thought-provoking explorations of intelligence that had me thinking about Pedro Domingos’ book The Master Algorithm. We talk about theories of AGI, free will, egg smashing, and Mounir's prototype smartphone app that learned how to silence itself in a movie theater!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2047</itunes:duration>
                <itunes:episode>206</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>205 - Guest: Gary Bolles, Future of Work author, part 2</title>
        <itunes:title>205 - Guest: Gary Bolles, Future of Work author, part 2</itunes:title>
        <link>https://aiandyou.net/e/205-guest-gary-bolles-future-of-work-author-part-2/</link>
        <comments>https://aiandyou.net/e/205-guest-gary-bolles-future-of-work-author-part-2/#comments</comments>
        <pubDate>Mon, 20 May 2024 10:10:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9659e77f-d3db-3157-9dc9-c631450be487</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show <a href='https://gbolles.com'>Gary Bolles</a>, author of <a href='https://www.amazon.com/Next-Rules-Work-Organization-Uncertainty/dp/1398601632'>The Next Rules of Work</a> and a co-founder of eParachute.com, helping job-hunters &amp; career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors.</p>
<p>Gary is adjunct Chair for the Future of Work for <a href='https://www.su.org'>Singularity University</a>, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.”</p>
<p>In the conclusion of the interview, we talk about unbossing and holacracies, how AI will impact organizational structures, fear, FOMO, and agency, and Singularity University.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/05/transcript-205-3207.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show <a href='https://gbolles.com'>Gary Bolles</a>, author of <a href='https://www.amazon.com/Next-Rules-Work-Organization-Uncertainty/dp/1398601632'>The Next Rules of Work</a> and a co-founder of eParachute.com, helping job-hunters &amp; career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors.</p>
<p>Gary is adjunct Chair for the Future of Work for <a href='https://www.su.org'>Singularity University</a>, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.”</p>
<p>In the conclusion of the interview, we talk about unbossing and holacracies, how AI will impact organizational structures, fear, FOMO, and agency, and Singularity University.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/05/transcript-205-3207.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/chf43uteii6xweax/205_AI-and-You.mp3" length="27308435" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show Gary Bolles, author of The Next Rules of Work and a co-founder of eParachute.com, helping job-hunters &amp; career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors.
Gary is adjunct Chair for the Future of Work for Singularity University, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.”
In the conclusion of the interview, we talk about unbossing and holacracies, how AI will impact organizational structures, fear, FOMO, and agency, and Singularity University.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1665</itunes:duration>
                <itunes:episode>205</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>204 - Guest: Gary Bolles, Future of Work author, part 1</title>
        <itunes:title>204 - Guest: Gary Bolles, Future of Work author, part 1</itunes:title>
        <link>https://aiandyou.net/e/204-guest-gary-bolles-future-of-work-author-part-1/</link>
        <comments>https://aiandyou.net/e/204-guest-gary-bolles-future-of-work-author-part-1/#comments</comments>
        <pubDate>Mon, 13 May 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/87c04672-55c3-372f-946b-a7b673fa3332</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show <a href='https://gbolles.com'>Gary Bolles</a>, author of <a href='https://www.amazon.com/Next-Rules-Work-Organization-Uncertainty/dp/1398601632'>The Next Rules of Work</a> and a co-founder of eParachute.com, helping job-hunters &amp; career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors.</p>
<p>Gary is adjunct Chair for the Future of Work for <a href='https://www.su.org'>Singularity University</a>, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.”</p>
<p>In the first half of the interview, we talk about the gig economy, the new rules of work, what ChatGPT did to the job market, and an interesting concept called the community operating system.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/05/transcript-204-9950.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show <a href='https://gbolles.com'>Gary Bolles</a>, author of <a href='https://www.amazon.com/Next-Rules-Work-Organization-Uncertainty/dp/1398601632'>The Next Rules of Work</a> and a co-founder of eParachute.com, helping job-hunters &amp; career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors.</p>
<p>Gary is adjunct Chair for the Future of Work for <a href='https://www.su.org'>Singularity University</a>, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.”</p>
<p>In the first half of the interview, we talk about the gig economy, the new rules of work, what ChatGPT did to the job market, and an interesting concept called the community operating system.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/05/transcript-204-9950.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/4ck75b9iuqs3bxum/204_AI-and-You.mp3" length="31952380" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show Gary Bolles, author of The Next Rules of Work and a co-founder of eParachute.com, helping job-hunters &amp; career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors.
Gary is adjunct Chair for the Future of Work for Singularity University, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.”
In the first half of the interview, we talk about the gig economy, the new rules of work, what ChatGPT did to the job market, and an interesting concept called the community operating system.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1955</itunes:duration>
                <itunes:episode>204</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>203 - Guest: Eleanor Drage, AI and Feminism Researcher, part 2</title>
        <itunes:title>203 - Guest: Eleanor Drage, AI and Feminism Researcher, part 2</itunes:title>
        <link>https://aiandyou.net/e/203-guest-eleanor-drage-ai-and-feminism-researcher-part-2/</link>
        <comments>https://aiandyou.net/e/203-guest-eleanor-drage-ai-and-feminism-researcher-part-2/#comments</comments>
        <pubDate>Mon, 06 May 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1a945972-ec62-3a9e-a283-325b1d38603d</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest is the co-host of the <a href='https://www.thegoodrobot.co.uk/'>Good Robot Podcast</a>, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at the <a href='http://lcfi.ac.uk/'>Leverhulme Centre for the Future of Intelligence</a> at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name, <a href='https://www.bloomsbury.com/ca/good-robot-9781350399952/'>The Good Robot: Why Technology Needs Feminism</a>.</p>
<p>In this conclusion of the interview, we talk about unconscious bias, hiring standards, stochastic parrots, science fiction, and the early participation of women in computing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-203-1128.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest is the co-host of the <a href='https://www.thegoodrobot.co.uk/'>Good Robot Podcast</a>, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at the <a href='http://lcfi.ac.uk/'>Leverhulme Centre for the Future of Intelligence</a> at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name, <a href='https://www.bloomsbury.com/ca/good-robot-9781350399952/'>The Good Robot: Why Technology Needs Feminism</a>.</p>
<p>In this conclusion of the interview, we talk about unconscious bias, hiring standards, stochastic parrots, science fiction, and the early participation of women in computing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-203-1128.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/k2qubqdnqhs3tccx/203_AI-and-You.mp3" length="34778620" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

My guest is the co-host of the Good Robot Podcast, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name, The Good Robot: Why Technology Needs Feminism.
In this conclusion of the interview, we talk about unconscious bias, hiring standards, stochastic parrots, science fiction, and the early participation of women in computing.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2132</itunes:duration>
                <itunes:episode>203</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>202 - Guest: Eleanor Drage, AI and Feminism Researcher, part 1</title>
        <itunes:title>202 - Guest: Eleanor Drage, AI and Feminism Researcher, part 1</itunes:title>
        <link>https://aiandyou.net/e/202-guest-eleanor-drage-ai-and-feminism-researcher-part-1/</link>
        <comments>https://aiandyou.net/e/202-guest-eleanor-drage-ai-and-feminism-researcher-part-1/#comments</comments>
        <pubDate>Mon, 29 Apr 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/257e29a5-83ae-32d3-98f2-38be3aef5a5b</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest is the co-host of the <a href='https://www.thegoodrobot.co.uk/'>Good Robot Podcast</a>, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at the <a href='http://lcfi.ac.uk/'>Leverhulme Centre for the Future of Intelligence</a> at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name, <a href='https://www.bloomsbury.com/ca/good-robot-9781350399952/'>The Good Robot: Why Technology Needs Feminism</a>.</p>
<p>We talk about all that, plus some quantum mechanics, saunas, ham, lesbian bacteria, and… well, it’ll all make more sense when you listen.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-202-9450.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>My guest is the co-host of the <a href='https://www.thegoodrobot.co.uk/'>Good Robot Podcast</a>, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at the <a href='http://lcfi.ac.uk/'>Leverhulme Centre for the Future of Intelligence</a> at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name, <a href='https://www.bloomsbury.com/ca/good-robot-9781350399952/'>The Good Robot: Why Technology Needs Feminism</a>.</p>
<p>We talk about all that, plus some quantum mechanics, saunas, ham, lesbian bacteria, and… well, it’ll all make more sense when you listen.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-202-9450.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ph86jf5ymzep5hw8/202_AI-and-You.mp3" length="26289869" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

My guest is the co-host of the Good Robot Podcast, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name, The Good Robot: Why Technology Needs Feminism.
We talk about all that, plus some quantum mechanics, saunas, ham, lesbian bacteria, and… well, it’ll all make more sense when you listen.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1601</itunes:duration>
                <itunes:episode>202</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>201 - Guest: Fiona McEvoy, Tech Ethics Writer</title>
        <itunes:title>201 - Guest: Fiona McEvoy, Tech Ethics Writer</itunes:title>
        <link>https://aiandyou.net/e/201-guest-fiona-mcevoy-tech-ethics-writer/</link>
        <comments>https://aiandyou.net/e/201-guest-fiona-mcevoy-tech-ethics-writer/#comments</comments>
        <pubDate>Mon, 22 Apr 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3c33e2cb-1a54-3bed-b564-16d8c429d29d</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>My guest is a really good role model for how a young person can carve out an important niche in the AI space, especially for people who aren’t inclined to the computer science side of the field. Fiona McEvoy is author of the blog <a href='https://YouTheData.com'>YouTheData.com</a>, with a specific focus on the intersection of technology and society. She was named as one of “30 Influential Women Advancing AI in San Francisco” by RE•WORK, and in 2020 was honored in the inaugural Brilliant Women in AI Ethics Hall of Fame, established to recognize “Brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.”</p>
<p>We talk about her journey to becoming an influential communicator and the ways she carries that out, what it’s like for young people in this social cauldron being heated by AI, and some of the key issues affecting them. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-201-1120.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>My guest is a really good role model for how a young person can carve out an important niche in the AI space, especially for people who aren’t inclined to the computer science side of the field. Fiona McEvoy is author of the blog <a href='https://YouTheData.com'>YouTheData.com</a>, with a specific focus on the intersection of technology and society. She was named as one of “30 Influential Women Advancing AI in San Francisco” by RE•WORK, and in 2020 was honored in the inaugural Brilliant Women in AI Ethics Hall of Fame, established to recognize “Brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.”</p>
<p>We talk about her journey to becoming an influential communicator and the ways she carries that out, what it’s like for young people in this social cauldron being heated by AI, and some of the key issues affecting them. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-201-1120.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bihyvg64ss42mqz4/201_AI-and-You.mp3" length="34129111" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

My guest is a really good role model for how a young person can carve out an important niche in the AI space, especially for people who aren’t inclined to the computer science side of the field. Fiona McEvoy is author of the blog YouTheData.com, with a specific focus on the intersection of technology and society. She was named as one of “30 Influential Women Advancing AI in San Francisco” by RE•WORK, and in 2020 was honored in the inaugural Brilliant Women in AI Ethics Hall of Fame, established to recognize “Brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.”
We talk about her journey to becoming an influential communicator and the ways she carries that out, what it’s like for young people in this social cauldron being heated by AI, and some of the key issues affecting them. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2091</itunes:duration>
                <itunes:episode>201</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>200 - Guest: Jerome C. Glenn, Futurist for AI governance, part 2</title>
        <itunes:title>200 - Guest: Jerome C. Glenn, Futurist for AI governance, part 2</itunes:title>
        <link>https://aiandyou.net/e/200-guest-jerome-c-glenn-futurist-for-ai-governance-part-2/</link>
        <comments>https://aiandyou.net/e/200-guest-jerome-c-glenn-futurist-for-ai-governance-part-2/#comments</comments>
        <pubDate>Mon, 15 Apr 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/493443ce-37ed-3eab-920b-40c762ae55e8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the end of February there was a landmark conference in Panama City and online, the <a href='https://bgi24.ai/'>Beneficial AGI Summit</a>. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.</p>
<p>He is the co-founder and CEO of <a href='https://www.millennium-project.org/'>The Millennium Project</a> on global futures research, was contracted by the European Commission to write the AGI paper for their Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of their American Council, and was instrumental in naming the first Space Shuttle the Enterprise, banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.</p>
<p>He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science &amp; Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. </p>
<p>In this second half, we talk about the approaches developed at the conference for actually controlling the development of AGI, the AI arms race, and… why Jerome doesn’t like the term futurism.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-200-9229.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the end of February there was a landmark conference in Panama City and online, the <a href='https://bgi24.ai/'><em>Beneficial AGI Summit</em></a>. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.</p>
<p>He is the co-founder and CEO of <a href='https://www.millennium-project.org/'>The Millennium Project</a> on global futures research, was contracted by the European Commission to write the AGI paper for their Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of their American Council, and was instrumental in naming the first Space Shuttle the <em>Enterprise</em>, banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.</p>
<p>He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science &amp; Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. </p>
<p>In this second half, we talk about the approaches developed at the conference for actually controlling the development of AGI, the AI arms race, and… why Jerome doesn’t like the term futurism.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-200-9229.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/smnbhn/200_AI-and-You.mp3" length="25153856" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

At the end of February there was a landmark conference in Panama City and online, the Beneficial AGI Summit. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.
He is the co-founder and CEO of The Millennium Project on global futures research, was contracted by the European Commission to write the AGI paper for its Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of its American Council, was instrumental in naming the first Space Shuttle the Enterprise and in banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.
He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science &amp; Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more.
In this second half we talk about the approaches for actually controlling the development of AGI that emerged from the conference, the AI arms race, and… why Jerome doesn’t like the term futurism.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1530</itunes:duration>
                <itunes:episode>200</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>199 - Guest: Jerome C. Glenn, Futurist for AI governance, part 1</title>
        <itunes:title>199 - Guest: Jerome C. Glenn, Futurist for AI governance, part 1</itunes:title>
        <link>https://aiandyou.net/e/199-guest-jerome-c-glenn-futurist-for-ai-governance-part-1/</link>
                    <comments>https://aiandyou.net/e/199-guest-jerome-c-glenn-futurist-for-ai-governance-part-1/#comments</comments>        <pubDate>Mon, 08 Apr 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/314fe2da-03a3-37d8-af1f-d5c8c9733595</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the end of February there was a landmark conference in Panama City and online, the <a href='https://bgi24.ai/'>Beneficial AGI Summit</a>. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.</p>
<p>He is the co-founder and CEO of <a href='https://www.millennium-project.org/'>The Millennium Project</a> on global futures research, was contracted by the European Commission to write the AGI paper for its Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of its American Council, was instrumental in naming the first Space Shuttle the Enterprise and in banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.</p>
<p>He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science &amp; Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. </p>
<p>In this first half we talk about his recent work with groups of the United Nations General Assembly, and his decentralized approach to grassroots empowerment in both implementing AGI and working together to regulate it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-199-4796.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the end of February there was a landmark conference in Panama City and online, the <a href='https://bgi24.ai/'><em>Beneficial AGI Summit</em></a>. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.</p>
<p>He is the co-founder and CEO of <a href='https://www.millennium-project.org/'>The Millennium Project</a> on global futures research, was contracted by the European Commission to write the AGI paper for its Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of its American Council, was instrumental in naming the first Space Shuttle the <em>Enterprise</em> and in banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.</p>
<p>He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science &amp; Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. </p>
<p>In this first half we talk about his recent work with groups of the United Nations General Assembly, and his decentralized approach to grassroots empowerment in both implementing AGI and working together to regulate it.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/04/transcript-199-4796.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8c3vqg/199_AI-and-You.mp3" length="35437742" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

At the end of February there was a landmark conference in Panama City and online, the Beneficial AGI Summit. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.
He is the co-founder and CEO of The Millennium Project on global futures research, was contracted by the European Commission to write the AGI paper for its Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of its American Council, was instrumental in naming the first Space Shuttle the Enterprise and in banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.
He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science &amp; Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. 
In this first half we talk about his recent work with groups of the United Nations General Assembly, and his decentralized approach to grassroots empowerment in both implementing AGI and working together to regulate it.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2173</itunes:duration>
                <itunes:episode>199</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>198 - Guest: Eve Herold, Science Writer on Robots, part 2</title>
        <itunes:title>198 - Guest: Eve Herold, Science Writer on Robots, part 2</itunes:title>
        <link>https://aiandyou.net/e/198-guest-eve-herold-science-writer-on-robots-part-2/</link>
                    <comments>https://aiandyou.net/e/198-guest-eve-herold-science-writer-on-robots-part-2/#comments</comments>        <pubDate>Mon, 01 Apr 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8baf11ca-64a8-33eb-979b-d70dc8ec2f9f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, <a href='https://www.amazon.com/Robots-People-Who-Love-Them/dp/1250122201'>Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots</a>.</p>
<p>Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.</p>
<p>In this part we talk about how robots and AI can bring out the best and the worst in us, the responsibilities of roboticists, the difference between robots having emotions and our believing that they have emotions, and how this will evolve over the next decade or more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-198-4826.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, <a href='https://www.amazon.com/Robots-People-Who-Love-Them/dp/1250122201'><em>Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots</em></a>.</p>
<p>Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.</p>
<p>In this part we talk about how robots and AI can bring out the best and the worst in us, the responsibilities of roboticists, the difference between robots having emotions and our believing that they have emotions, and how this will evolve over the next decade or more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-198-4826.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/5q7myh/198_AI-and-You.mp3" length="32183511" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots.
Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.
In this part we talk about how robots and AI can bring out the best and the worst in us, the responsibilities of roboticists, the difference between robots having emotions and our believing that they have emotions, and how this will evolve over the next decade or more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1969</itunes:duration>
                <itunes:episode>198</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>197 - Guest: Eve Herold, Science Writer on Robots, part 1</title>
        <itunes:title>197 - Guest: Eve Herold, Science Writer on Robots, part 1</itunes:title>
        <link>https://aiandyou.net/e/197-guest-eve-herold-science-writer-on-robots-part-1/</link>
                    <comments>https://aiandyou.net/e/197-guest-eve-herold-science-writer-on-robots-part-1/#comments</comments>        <pubDate>Mon, 25 Mar 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e77821dc-eb0e-3d02-b2cd-7c4f1d2b59f3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, <a href='https://www.amazon.com/Robots-People-Who-Love-Them/dp/1250122201'>Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots</a>. </p>
<p>Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.</p>
<p>In this part we talk about how people – including soldiers in combat – get attached to AIs and robots, discuss ELIZA, Woebot, and Samantha from the movie Her, and explore the role of robots in helping take care of us physically and emotionally, among many other topics.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-197-0031.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, <a href='https://www.amazon.com/Robots-People-Who-Love-Them/dp/1250122201'><em>Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots</em></a>.</p>
<p>Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.</p>
<p>In this part we talk about how people – including soldiers in combat – get attached to AIs and robots, discuss ELIZA, Woebot, and Samantha from the movie <em>Her</em>, and explore the role of robots in helping take care of us physically and emotionally, among many other topics.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-197-0031.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/rspvqy/197_AI-and-You.mp3" length="30934231" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots. 
Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.
In this part we talk about how people – including soldiers in combat – get attached to AIs and robots, discuss ELIZA, Woebot, and Samantha from the movie Her, and explore the role of robots in helping take care of us physically and emotionally, among many other topics.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1891</itunes:duration>
                <itunes:episode>197</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>196 - Guest: Roman Yampolskiy, AI Safety Professor, part 2</title>
        <itunes:title>196 - Guest: Roman Yampolskiy, AI Safety Professor, part 2</itunes:title>
        <link>https://aiandyou.net/e/196-guest-roman-yampolskiy-ai-safety-professor-part-2/</link>
                    <comments>https://aiandyou.net/e/196-guest-roman-yampolskiy-ai-safety-professor-part-2/#comments</comments>        <pubDate>Mon, 18 Mar 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1be781a0-4fa3-34e6-a424-d71254b381fa</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, <a href='https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X'>AI: Unexplainable, Unpredictable, Uncontrollable</a>.</p>
<p>Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.</p>
<p>In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman’s coffee cup poses to humanity. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-196-7632.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, <a href='https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X'><em>AI: Unexplainable, Unpredictable, Uncontrollable</em></a>.</p>
<p>Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.</p>
<p>In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman’s coffee cup poses to humanity. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-196-7632.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/neymen/196_AI-and-You.mp3" length="31757193" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 
Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.
In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman’s coffee cup poses to humanity. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1943</itunes:duration>
                <itunes:episode>196</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>195 - Guest: Roman Yampolskiy, AI Safety Professor, part 1</title>
        <itunes:title>195 - Guest: Roman Yampolskiy, AI Safety Professor, part 1</itunes:title>
        <link>https://aiandyou.net/e/195-guest-roman-yampolskiy-ai-safety-professor-part-1/</link>
                    <comments>https://aiandyou.net/e/195-guest-roman-yampolskiy-ai-safety-professor-part-1/#comments</comments>        <pubDate>Mon, 11 Mar 2024 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7a9429bc-b56a-3f7f-89ec-ce347ca276b2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, <a href='https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X'>AI: Unexplainable, Unpredictable, Uncontrollable</a>.</p>
<p>Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.</p>
<p>In this part we talk about why this work is important to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, and the urgency of these problems, and we drill down into why today’s AI is not safe and why it’s getting worse.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-195-1560.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, <a href='https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X'><em>AI: Unexplainable, Unpredictable, Uncontrollable</em></a>.</p>
<p>Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.</p>
<p>In this part we talk about why this work is important to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, and the urgency of these problems, and we drill down into why today’s AI is not safe and why it’s getting worse.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/03/transcript-195-1560.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/afxb7j/195_AI-and-You.mp3" length="35684338" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 
Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
Roman has been central to warning about the Control Problem and the Value Alignment Problem of AI from the very beginning, back when doing so earned people scorn from practitioners. Yet Roman is a professor of computer science who applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.
In this part we talk about why this work is important to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, and the urgency of these problems, and we drill down into why today’s AI is not safe and why it’s getting worse.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2188</itunes:duration>
                <itunes:episode>195</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>194 - Guest: Rachel St. Clair, AGI Scientist, part 2</title>
        <itunes:title>194 - Guest: Rachel St. Clair, AGI Scientist, part 2</itunes:title>
        <link>https://aiandyou.net/e/194-guest-rachel-st-clair-agi-scientist-part-2/</link>
                    <comments>https://aiandyou.net/e/194-guest-rachel-st-clair-agi-scientist-part-2/#comments</comments>        <pubDate>Mon, 04 Mar 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/60a02df3-5063-3a23-8834-f666d2545176</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.</p>
<p>That was last year.</p>
<p>Now, AGI is an explicit goal of many enterprises, notably among them <a href='https://www.simuli.ai/'>Simuli</a>. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” </p>
<p>In the conclusion, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/02/transcript-194-6987.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p><em>Artificial General Intelligence</em>: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.</p>
<p>That was last year.</p>
<p>Now, AGI is an explicit goal of many enterprises, notably among them <a href='https://www.simuli.ai/'>Simuli</a>. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” </p>
<p>In the conclusion, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/02/transcript-194-6987.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/484as6/194_AI-and-You.mp3" length="36819515" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.
That was last year.
Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” 
In the conclusion, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2259</itunes:duration>
                <itunes:episode>194</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>193 - Guest: Rachel St. Clair, AGI Scientist, part 1</title>
        <itunes:title>193 - Guest: Rachel St. Clair, AGI Scientist, part 1</itunes:title>
        <link>https://aiandyou.net/e/193-guest-rachel-st-clair-agi-scientist-part-1/</link>
                    <comments>https://aiandyou.net/e/193-guest-rachel-st-clair-agi-scientist-part-1/#comments</comments>        <pubDate>Mon, 26 Feb 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ee1d7d75-7382-3594-8884-38132bf920a2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.</p>
<p>That was last year.</p>
<p>Now, AGI is an explicit goal of many enterprises, notably among them <a href='https://www.simuli.ai/'>Simuli</a>. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” </p>
<p>In part 1 we talk about markers for AGI, distinctions between it and narrow artificial intelligence, self-driving cars, robotics, and embodiment, and… disco balls.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/02/transcript-193-3908.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p><em>Artificial General Intelligence</em>: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.</p>
<p>That was last year.</p>
<p>Now, AGI is an explicit goal of many enterprises, notably among them <a href='https://www.simuli.ai/'>Simuli</a>. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” </p>
<p>In part 1 we talk about markers for AGI, distinctions between it and narrow artificial intelligence, self-driving cars, robotics, and embodiment, and… disco balls.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/02/transcript-193-3908.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xcuphy/193_AI-and-You.mp3" length="30138019" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.
That was last year.
Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” 
In part 1 we talk about markers for AGI, distinctions between it and narrow artificial intelligence, self-driving cars, robotics, and embodiment, and… disco balls.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1842</itunes:duration>
                <itunes:episode>193</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>192 - Re-evaluating Existential Risk From AI</title>
        <itunes:title>192 - Re-evaluating Existential Risk From AI</itunes:title>
        <link>https://aiandyou.net/e/192-re-evaluating-existential-risk-from-ai/</link>
                    <comments>https://aiandyou.net/e/192-re-evaluating-existential-risk-from-ai/#comments</comments>        <pubDate>Mon, 19 Feb 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f8dab4cf-35ce-3eaf-bacd-8b0e4c39df92</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Since I published my <a href='https://humancusp.com/book1'>first book on AI</a> in 2017, the public conversation and perception of the existential risk - risk to our existence - from AI has evolved and broadened. I talk about how that conversation has changed from Nick Bostrom's Superintelligence, the "hard take-off" and what that means, through to the tossing about of cryptic shorthand like p(doom) and e/acc, which I explain and critique.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/02/transcript-192-6765.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Since I published my <a href='https://humancusp.com/book1'>first book on AI</a> in 2017, the public conversation and perception of the <em>existential risk</em> - risk to our existence - from AI has evolved and broadened. I talk about how that conversation has changed from Nick Bostrom's <em>Superintelligence</em>, the "hard take-off" and what that means, through to the tossing about of cryptic shorthand like p(doom) and e/acc, which I explain and critique.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/02/transcript-192-6765.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/zhniyb/192_AI-and-You.mp3" length="21670166" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Since I published my first book on AI in 2017, the public conversation and perception of the existential risk - risk to our existence - from AI has evolved and broadened. I talk about how that conversation has changed from Nick Bostrom's Superintelligence, the "hard take-off" and what that means, through to the tossing about of cryptic shorthand like p(doom) and e/acc, which I explain and critique.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1312</itunes:duration>
                <itunes:episode>192</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>191 - Guest: Frank Sauer, AI arms control researcher, part 2</title>
        <itunes:title>191 - Guest: Frank Sauer, AI arms control researcher, part 2</itunes:title>
        <link>https://aiandyou.net/e/191-guest-frank-sauer-ai-arms-control-researcher-part-2/</link>
                    <comments>https://aiandyou.net/e/191-guest-frank-sauer-ai-arms-control-researcher-part-2/#comments</comments>        <pubDate>Mon, 12 Feb 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a8c2cb8f-af1e-3108-8553-e17b8a40172f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. </p>
<p>In part two we talk about the psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-191-5265.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. </p>
<p>In part two we talk about the psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-191-5265.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kn5tf8/191_AI-and-You.mp3" length="27716364" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. 
In part two we talk about the psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1690</itunes:duration>
                <itunes:episode>191</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>190 - Guest: Frank Sauer, AI arms control researcher, part 1</title>
        <itunes:title>190 - Guest: Frank Sauer, AI arms control researcher, part 1</itunes:title>
        <link>https://aiandyou.net/e/190-guest-frank-sauer-ai-arms-control-researcher-part-1/</link>
                    <comments>https://aiandyou.net/e/190-guest-frank-sauer-ai-arms-control-researcher-part-1/#comments</comments>        <pubDate>Mon, 05 Feb 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5aa5d15d-b050-3f3c-acfb-46010604359a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. </p>
<p>In this first part we talk about the ethics of autonomy in weapons systems and compare human and machine decision-making in combat.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-190-4148.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. </p>
<p>In this first part we talk about the ethics of autonomy in weapons systems and compare human and machine decision-making in combat.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-190-4148.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3gmb8x/190_AI-and-You.mp3" length="33415655" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. 
In this first part we talk about the ethics of autonomy in weapons systems and compare human and machine decision-making in combat.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2047</itunes:duration>
                <itunes:episode>190</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>189 - Guest: Peter Norvig, AI professor/author/researcher, part 2</title>
        <itunes:title>189 - Guest: Peter Norvig, AI professor/author/researcher, part 2</itunes:title>
        <link>https://aiandyou.net/e/189-guest-peter-norvig-ai-professorauthorresearcher-part-2/</link>
                    <comments>https://aiandyou.net/e/189-guest-peter-norvig-ai-professorauthorresearcher-part-2/#comments</comments>        <pubDate>Mon, 29 Jan 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ad2f0e70-efde-3fcf-8916-8492c9619e66</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Literally writing the book on AI is my guest <a href='https://norvig.com'>Peter Norvig</a>, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at USC, Stanford, and Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.</p>
<p>He’s also the author of the world’s longest palindromic sentence. </p>
<p>In this second half of the interview, we talk about how AI's rising prominence with the general public has changed how he communicates about AI, his feelings about the calls for a slowdown in model development, his thinking about general intelligence in large language models, and AI winters.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-189-6374.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Literally writing the book on AI is my guest <a href='https://norvig.com'>Peter Norvig</a>, who is coauthor of the standard text, <em>Artificial Intelligence: A Modern Approach,</em> used in 135 countries and 1500+ universities. Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at USC, Stanford, and Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.</p>
<p>He’s also the author of the world’s longest <em>palindromic sentence</em>. </p>
<p>In this second half of the interview, we talk about how AI's rising prominence with the general public has changed how he communicates about AI, his feelings about the calls for a slowdown in model development, his thinking about general intelligence in large language models, and AI winters.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-189-6374.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/449a8s/189_AI-and-You.mp3" length="29720896" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Literally writing the book on AI is my guest Peter Norvig, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at USC, Stanford, and Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.
He’s also the author of the world’s longest palindromic sentence. 
In this second half of the interview, we talk about how AI's rising prominence with the general public has changed how he communicates about AI, his feelings about the calls for a slowdown in model development, his thinking about general intelligence in large language models, and AI winters.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1816</itunes:duration>
                <itunes:episode>189</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>188 - Guest: Peter Norvig, AI professor/author/researcher, part 1</title>
        <itunes:title>188 - Guest: Peter Norvig, AI professor/author/researcher, part 1</itunes:title>
        <link>https://aiandyou.net/e/188-guest-peter-norvig-ai-professorauthorresearcher-part-1/</link>
                    <comments>https://aiandyou.net/e/188-guest-peter-norvig-ai-professorauthorresearcher-part-1/#comments</comments>        <pubDate>Mon, 22 Jan 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/6a19480f-1151-3f1a-9cce-6b3e318b07a9</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Literally writing the book on AI is my guest <a href='https://norvig.com'>Peter Norvig</a>, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. (The other author, Stuart Russell, was on this show in episodes 86 and 87.) Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at the University of Southern California, Stanford University, and the University of California at Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.</p>
<p>He’s also the author of the world’s longest palindromic sentence. </p>
<p>In this first part of the interview, we talk about the evolution of AI from the symbolic processing paradigm to the connectionist paradigm, or neural networks, how they layer on each other in humans and AIs, and Peter’s experiences in blending the worlds of academia and business.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-188-5720.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Literally writing the book on AI is my guest <a href='https://norvig.com'>Peter Norvig</a>, who is coauthor of the standard text, <em>Artificial Intelligence: A Modern Approach,</em> used in 135 countries and 1500+ universities. (The other author, Stuart Russell, was on this show in episodes 86 and 87.) Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at the University of Southern California, Stanford University, and the University of California at Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.</p>
<p>He’s also the author of the world’s longest palindromic sentence. </p>
<p>In this first part of the interview, we talk about the evolution of AI from the symbolic processing paradigm to the connectionist paradigm, or neural networks, how they layer on each other in humans and AIs, and Peter’s experiences in blending the worlds of academia and business.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-188-5720.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/jqhanq/188_AI-and-You.mp3" length="25961353" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Literally writing the book on AI is my guest Peter Norvig, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. (The other author, Stuart Russell, was on this show in episodes 86 and 87.) Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at the University of Southern California, Stanford University, and the University of California at Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.
He’s also the author of the world’s longest palindromic sentence. 
In this first part of the interview, we talk about the evolution of AI from the symbolic processing paradigm to the connectionist paradigm, or neural networks, how they layer on each other in humans and AIs, and Peter’s experiences in blending the worlds of academia and business.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1581</itunes:duration>
                <itunes:episode>188</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>187 - Guest: Michal Kosinski, Professor of Psychology, part 2</title>
        <itunes:title>187 - Guest: Michal Kosinski, Professor of Psychology, part 2</itunes:title>
        <link>https://aiandyou.net/e/187-guest-michal-kosinski-professor-of-psychology-part-2/</link>
                    <comments>https://aiandyou.net/e/187-guest-michal-kosinski-professor-of-psychology-part-2/#comments</comments>        <pubDate>Mon, 15 Jan 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/04a03ee9-7ccd-318c-b341-a07a8fa19961</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the <a href='https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data'>first press article</a> warning against Cambridge Analytica, the firm at the heart of a scandal over the unauthorized acquisition of personal data from millions of Facebook users, data that was used to influence the 2016 Brexit and US Presidential election votes by microtargeting people with AI models of their preferences.</p>
<p>Michal also co-authored Modern Psychometrics, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports, and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology.</p>
<p>In the second half of the interview, we pivot to the Theory of Mind – which is the ability of a creature to understand that another has a mind – and research around whether AI has it. Michal has amazing new research in that respect. He also says, "Without a question, GPT-4 and similar models are the most competent language users on this planet."</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-187-5707.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the <a href='https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data'>first press article</a> warning against Cambridge Analytica, the firm at the heart of a scandal over the unauthorized acquisition of personal data from millions of Facebook users, data that was used to influence the 2016 Brexit and US Presidential election votes by microtargeting people with AI models of their preferences.</p>
<p>Michal also co-authored <em>Modern Psychometrics</em>, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports, and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology.</p>
<p>In the second half of the interview, we pivot to the <em>Theory of Mind</em> – which is the ability of a creature to understand that another has a mind – and research around whether AI has it. Michal has amazing new research in that respect. He also says, "Without a question, GPT-4 and similar models are the most competent language users on this planet."</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-187-5707.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8s8pip/187_AI-and-You.mp3" length="31696589" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the first press article warning against Cambridge Analytica, the firm at the heart of a scandal over the unauthorized acquisition of personal data from millions of Facebook users, data that was used to influence the 2016 Brexit and US Presidential election votes by microtargeting people with AI models of their preferences.
Michal also co-authored Modern Psychometrics, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports, and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology.
In the second half of the interview, we pivot to the Theory of Mind – which is the ability of a creature to understand that another has a mind – and research around whether AI has it. Michal has amazing new research in that respect. He also says, "Without a question, GPT-4 and similar models are the most competent language users on this planet."
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1939</itunes:duration>
                <itunes:episode>187</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>186 - Guest: Michal Kosinski, Professor of Psychology, part 1</title>
        <itunes:title>186 - Guest: Michal Kosinski, Professor of Psychology, part 1</itunes:title>
        <link>https://aiandyou.net/e/186-guest-michal-kosinski-professor-of-psychology-part-1/</link>
                    <comments>https://aiandyou.net/e/186-guest-michal-kosinski-professor-of-psychology-part-1/#comments</comments>        <pubDate>Mon, 08 Jan 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0aadbb03-68aa-350a-85f3-c7469c9384b8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the <a href='https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data'>first press article</a> warning against Cambridge Analytica, the firm at the heart of a scandal over the unauthorized acquisition of personal data from millions of Facebook users, data that was used to influence the 2016 Brexit and US Presidential election votes by microtargeting people with AI models of their preferences.</p>
<p>Michal also co-authored Modern Psychometrics, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports, and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology, positioning him to speak to us with authority about how AI has shaped and may shape the beliefs and behaviors of people en masse.</p>
<p>In this first part of the interview, we delve into just that, plus the role of social media, and Michal's take on what privacy means today.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-186-9937.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the <a href='https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data'>first press article</a> warning against Cambridge Analytica, the firm at the heart of a scandal over the unauthorized acquisition of personal data from millions of Facebook users, data that was used to influence the 2016 Brexit and US Presidential election votes by microtargeting people with AI models of their preferences.</p>
<p>Michal also co-authored <em>Modern Psychometrics</em>, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports, and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology, positioning him to speak to us with authority about how AI has shaped and may shape the beliefs and behaviors of people <em>en masse</em>.</p>
<p>In this first part of the interview, we delve into just that, plus the role of social media, and Michal's take on what privacy means today.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-186-9937.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3tub37/186_AI-and-You.mp3" length="33737484" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the first press article warning against Cambridge Analytica, the firm at the heart of a scandal over the unauthorized acquisition of personal data from millions of Facebook users, data that was used to influence the 2016 Brexit and US Presidential election votes by microtargeting people with AI models of their preferences.
Michal also co-authored Modern Psychometrics, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports, and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology, positioning him to speak to us with authority about how AI has shaped and may shape the beliefs and behaviors of people en masse.
In this first part of the interview, we delve into just that, plus the role of social media, and Michal's take on what privacy means today.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2067</itunes:duration>
                <itunes:episode>186</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>185 - Special Panel: AI Predictions for 2024</title>
        <itunes:title>185 - Special Panel: AI Predictions for 2024</itunes:title>
        <link>https://aiandyou.net/e/185-special-panel-ai-predictions-for-2024/</link>
                    <comments>https://aiandyou.net/e/185-special-panel-ai-predictions-for-2024/#comments</comments>        <pubDate>Mon, 01 Jan 2024 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5c4545b3-5f6e-3b6c-a788-537308bb1632</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guest <a href='https://calumchace.com'>Calum Chace</a>, co-host of the <a href='https://podcasts.apple.com/gb/podcast/london-futurists/id1642832645'>London Futurists podcast</a> and author of <a href='https://www.amazon.com/Economic-Singularity-Artificial-intelligence-capitalism-ebook/dp/B01IOCUUDW'>The Economic Singularity</a>, and <a href='https://justingrammens.com'>Justin Grammens</a>, founder of the <a href='https://appliedai.mn'>AppliedAI conference and podcast</a>. Together, we review what happened with AI in 2023 and make some predictions for 2024. We look back at the impact of large language models such as ChatGPT and forward to how they will evolve and change the workplace, economy, and society. We also discuss the future of regulation, the EU AI Act, the 2024 US elections, disinformation, and the future of education.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-185-3079.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guest <a href='https://calumchace.com'>Calum Chace</a>, co-host of the <a href='https://podcasts.apple.com/gb/podcast/london-futurists/id1642832645'>London Futurists podcast</a> and author of <em><a href='https://www.amazon.com/Economic-Singularity-Artificial-intelligence-capitalism-ebook/dp/B01IOCUUDW'>The Economic Singularity</a></em>, and <a href='https://justingrammens.com'>Justin Grammens</a>, founder of the <a href='https://appliedai.mn'>AppliedAI conference and podcast</a>. Together, we review what happened with AI in 2023 and make some predictions for 2024. We look back at the impact of large language models such as ChatGPT and forward to how they will evolve and change the workplace, economy, and society. We also discuss the future of regulation, the EU AI Act, the 2024 US elections, disinformation, and the future of education.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2024/01/transcript-185-3079.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/5bmw35/185_AI-and-You.mp3" length="55883887" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guest Calum Chace, co-host of the London Futurists podcast and author of The Economic Singularity, and Justin Grammens, founder of the AppliedAI conference and podcast. Together, we review what happened with AI in 2023 and make some predictions for 2024. We look back at the impact of large language models such as ChatGPT and forward to how they will evolve and change the workplace, economy, and society. We also discuss the future of regulation, the EU AI Act, the 2024 US elections, disinformation, and the future of education.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3451</itunes:duration>
                <itunes:episode>185</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>184 - Guest: Tabitha Swanson, Creative Technologist/Filmmaker</title>
        <itunes:title>184 - Guest: Tabitha Swanson, Creative Technologist/Filmmaker</itunes:title>
        <link>https://aiandyou.net/e/184-guest-tabitha-swanson-creative-technologistfilmmaker/</link>
                    <comments>https://aiandyou.net/e/184-guest-tabitha-swanson-creative-technologistfilmmaker/#comments</comments>        <pubDate>Mon, 25 Dec 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7c8a498e-7024-3e12-8851-cdfbc1b4e0d9</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Making movies about AI with AI is <a href='https://twitter.com/tabithaswanson_'>Tabitha Swanson</a>, who comes to tell us how that works - and what it was like exhibiting it at the Venice Film Festival during the writers'/actors' strikes.</p>
<p>Tabitha is a Berlin-based multi-disciplinary designer, creative technologist, and filmmaker. Her practice includes 3D, animation, augmented reality, digital fashion, graphic design, and UX/UI. She has worked with brands including Vogue Germany, Nike, Highsnobiety, Reebok, and Origins, and has exhibited at Miami Art Basel, Fotografiska, Transmediale, and Cadaf Arts, among others. </p>
<p>Her part of the <a href='https://www.whitemirror.xyz/'>White Mirror</a> project saw her doing everything from writing to cinematography with the latest AI tools like Runway Gen-2, ChatGPT, and Stable Diffusion, lowering typical animation costs from $10,000/second to $10,000 per minute. She explains what those tools are good at and where their limitations are, and helps us understand how they will evolve and impact the roles of humans in the movie industry.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-184-9154.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Making movies <em>about </em>AI <em>with </em>AI is <a href='https://twitter.com/tabithaswanson_'>Tabitha Swanson</a>, who comes to tell us how that works - and what it was like exhibiting it at the Venice Film Festival during the writers'/actors' strikes.</p>
<p>Tabitha is a Berlin-based multi-disciplinary designer, creative technologist, and filmmaker. Her practice includes 3D, animation, augmented reality, digital fashion, graphic design, and UX/UI. She has worked with brands including Vogue Germany, Nike, Highsnobiety, Reebok, and Origins, and has exhibited at Miami Art Basel, Fotografiska, Transmediale, and Cadaf Arts, among others. </p>
<p>Her part of the <a href='https://www.whitemirror.xyz/'>White Mirror</a> project saw her doing everything from writing to cinematography with the latest AI tools like Runway Gen-2, ChatGPT, and Stable Diffusion, lowering typical animation costs from $10,000/second to $10,000 per <em>minute</em>. She explains what those tools are good at and where their limitations are, and helps us understand how they will evolve and impact the roles of humans in the movie industry.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/12/transcript-184-9154.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/a2e5nt/184_AI-and-You.mp3" length="38944001" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Making movies about AI with AI is Tabitha Swanson, who comes to tell us how that works - and what it was like exhibiting it at the Venice Film Festival during the writers'/actors' strikes.
Tabitha is a Berlin-based multi-disciplinary designer, creative technologist, and filmmaker. Her practice includes 3D, animation, augmented reality, digital fashion, graphic design, and UX/UI. She has worked with brands including Vogue Germany, Nike, Highsnobiety, Reebok, and Origins, and has exhibited at Miami Art Basel, Fotografiska, Transmediale, and Cadaf Arts, among others. 
Her part of the White Mirror project saw her doing everything from writing to cinematography with the latest AI tools like Runway Gen-2, ChatGPT, and Stable Diffusion, lowering typical animation costs from $10,000/second to $10,000 per minute. She explains what those tools are good at and where their limitations are, and helps us understand how they will evolve and impact the roles of humans in the movie industry.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2392</itunes:duration>
                <itunes:episode>184</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>183 - Guest: Oren Etzioni, AI in Science, Professor Emeritus, part 2</title>
        <itunes:title>183 - Guest: Oren Etzioni, AI in Science, Professor Emeritus, part 2</itunes:title>
        <link>https://aiandyou.net/e/183-guest-oren-etzioni-ai-in-science-professor-emeritus-part-2/</link>
                    <comments>https://aiandyou.net/e/183-guest-oren-etzioni-ai-in-science-professor-emeritus-part-2/#comments</comments>        <pubDate>Mon, 18 Dec 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/eb53c27d-aa61-353f-9f51-3d4d1cb62f09</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the <a href='https://allenai.org/'>Allen Institute for Artificial Intelligence</a> (AI2) in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. </p>
<p>Oren grew the institute to a team of over 200 researchers and created singularly important tools such as <a href='https://www.semanticscholar.org'>Semantic Scholar</a>, a search engine that can understand scientific literature, and <a href='https://mosaic.allenai.org/'>Mosaic</a>, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate at which research papers are published now far outstrips researchers’ ability to read them. AI could transform the productivity of scientific research by unprecedented measures.</p>
<p>In this conclusion of the interview we talk about AI2’s scientific assistance project called Aristo, Oren’s views on the concerns about AI and how to address them, and his Hippocratic Oath for AI practitioners.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-183-9090.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the <a href='https://allenai.org/'>Allen Institute for Artificial Intelligence</a> (AI2) in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. </p>
<p>Oren grew the institute to a team of over 200 researchers and created singularly important tools such as <a href='https://www.semanticscholar.org'>Semantic Scholar</a>, a search engine that can understand scientific literature, and <a href='https://mosaic.allenai.org/'>Mosaic</a>, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate at which research papers are published now far outstrips researchers’ ability to read them. AI could transform the productivity of scientific research by unprecedented measures.</p>
<p>In this conclusion of the interview we talk about AI2’s scientific assistance project called Aristo, Oren’s views on the concerns about AI and how to address them, and his Hippocratic Oath for AI practitioners.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-183-9090.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/un7dr9/183_AI-and-You.mp3" length="30077415" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. 
Oren grew the institute to a team of over 200 researchers and created singularly important tools such as Semantic Scholar, a search engine that can understand scientific literature, and Mosaic, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate at which research papers are published now far outstrips researchers’ ability to read them. AI could transform the productivity of scientific research by unprecedented measures.
In this conclusion of the interview we talk about AI2’s scientific assistance project called Aristo, Oren’s views on the concerns about AI and how to address them, and his Hippocratic Oath for AI practitioners.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1838</itunes:duration>
                <itunes:episode>183</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>182 - Guest: Oren Etzioni, AI in Science, Professor Emeritus, part 1</title>
        <itunes:title>182 - Guest: Oren Etzioni, AI in Science, Professor Emeritus, part 1</itunes:title>
        <link>https://aiandyou.net/e/182-guest-oren-etzioni-ai-in-science-professor-emeritus-part-1/</link>
                    <comments>https://aiandyou.net/e/182-guest-oren-etzioni-ai-in-science-professor-emeritus-part-1/#comments</comments>        <pubDate>Mon, 11 Dec 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/aaa734c9-50cb-339c-a9d9-68914df244a0</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the Allen Institute for Artificial Intelligence in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. </p>
<p>Oren grew the institute to a team of over 200 researchers and created singularly important tools such as <a href='https://www.semanticscholar.org'>Semantic Scholar</a>, a search engine that can understand scientific literature, and <a href='https://mosaic.allenai.org/'>Mosaic</a>, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate at which research papers are published now far outstrips researchers’ ability to read them. AI could transform the productivity of scientific research by unprecedented measures.</p>
<p>In part 1, we talk about parallels between AI and the human brain, Semantic Scholar, and the potential for AI to accelerate research by understanding scientific literature. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-182-7584.pdf'>HumanCusp Blog</a>.</p>
<p style="text-align: left;"> </p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the Allen Institute for Artificial Intelligence in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. </p>
<p>Oren grew the institute to a team of over 200 researchers and created singularly important tools such as <a href='https://www.semanticscholar.org'>Semantic Scholar</a>, a search engine that can understand scientific literature, and <a href='https://mosaic.allenai.org/'>Mosaic</a>, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate at which research papers are published now far outstrips researchers’ ability to read them. AI could transform the productivity of scientific research by unprecedented measures.</p>
<p>In part 1 we talk about parallels between AI and the human brain, Semantic Scholar, and the potential for AI to accelerate research by understanding scientific literature.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-182-7584.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/y483dt/182_AI-and-You.mp3" length="27932449" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the Allen Institute for Artificial Intelligence in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. 
Oren grew the institute to a team of over 200 researchers and created singularly important tools such as Semantic Scholar, a search engine that can understand scientific literature, and Mosaic, a knowledge base formed by extracting scientific knowledge from text. This matters hugely because the rate at which research papers are published now far outstrips the ability of researchers to read them. AI could transform the productivity of scientific research to an unprecedented degree.
In part 1 we talk about parallels between AI and the human brain, Semantic Scholar, and the potential for AI to accelerate research by understanding scientific literature.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1704</itunes:duration>
                <itunes:episode>182</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>181 - Guests: Pauldy Otermans and Dev Aditya, AI Teacher Creators, part 2</title>
        <itunes:title>181 - Guests: Pauldy Otermans and Dev Aditya, AI Teacher Creators, part 2</itunes:title>
        <link>https://aiandyou.net/e/181-guests-pauldy-otermans-and-dev-aditya-ai-teacher-creators-part-2/</link>
                    <comments>https://aiandyou.net/e/181-guests-pauldy-otermans-and-dev-aditya-ai-teacher-creators-part-2/#comments</comments>        <pubDate>Mon, 04 Dec 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/62f41520-8460-3c4c-8875-55a192b33de7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the <a href='https://oiedu.co.uk'>Otermans Institute</a>, are addressing it with #AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students.</p>
<p>Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University London.</p>
<p>In the conclusion of the interview they describe how the AI teachers work, and their definitions of Teaching and Learning 1.0, 2.0, and 3.0.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-181-0391.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the <a href='https://oiedu.co.uk'>Otermans Institute</a>, are addressing it with #AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students.</p>
<p>Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University London.</p>
<p>In the conclusion of the interview they describe how the AI teachers work, and their definitions of Teaching and Learning 1.0, 2.0, and 3.0.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-181-0391.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/q6ws8v/181_AI-and-You.mp3" length="27839662" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the Otermans Institute, are addressing it with #AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students.
Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University London.
In the conclusion of the interview they describe how the AI teachers work, and their definitions of Teaching and Learning 1.0, 2.0, and 3.0.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1698</itunes:duration>
                <itunes:episode>181</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>180 - Guests: Pauldy Otermans and Dev Aditya, AI Teacher Creators, part 1</title>
        <itunes:title>180 - Guests: Pauldy Otermans and Dev Aditya, AI Teacher Creators, part 1</itunes:title>
        <link>https://aiandyou.net/e/180-guests-pauldy-otermans-and-dev-aditya-ai-teacher-creators-part-1/</link>
                    <comments>https://aiandyou.net/e/180-guests-pauldy-otermans-and-dev-aditya-ai-teacher-creators-part-1/#comments</comments>        <pubDate>Mon, 27 Nov 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9cca1cf2-df46-39f8-a00a-63ea392ce9ca</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the <a href='https://oiedu.co.uk'>Otermans Institute</a>, are addressing it with #AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students.</p>
<p>Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University London.</p>
<p>In this first half of the interview we talk about the teacher shortage and the socioeconomic consequences of addressing it via an AI teacher.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-180-9302.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the <a href='https://oiedu.co.uk'>Otermans Institute</a>, are addressing it with #AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students.</p>
<p>Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University London.</p>
<p>In this first half of the interview we talk about the teacher shortage and the socioeconomic consequences of addressing it via an AI teacher.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-180-9302.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/9yg8q4/180_AI-and-You.mp3" length="30508749" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the Otermans Institute, are addressing it with #AI by creating a digital human AI teacher called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students.
Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University London.
In this first half of the interview we talk about the teacher shortage and the socioeconomic consequences of addressing it via an AI teacher.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1865</itunes:duration>
                <itunes:episode>180</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>179 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 2</title>
        <itunes:title>179 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 2</itunes:title>
        <link>https://aiandyou.net/e/179-guest-jaan-tallinn-ai-existential-risk-philanthropist-part-2/</link>
                    <comments>https://aiandyou.net/e/179-guest-jaan-tallinn-ai-existential-risk-philanthropist-part-2/#comments</comments>        <pubDate>Mon, 20 Nov 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3efb38e6-c2dd-398b-94c6-bd97eb7693df</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We're talking with Jaan Tallinn, who has changed the way the world responds to the impact of #AI. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the <a href='https://www.cser.ac.uk/'>Centre for the Study of Existential Risk</a> in Cambridge, England, and the <a href='https://futureoflife.org/'>Future of Life Institute</a> in Cambridge, Massachusetts.</p>
<p>In the conclusion of the interview, we talk about value alignment and how that does or doesn’t intersect with large language models, FLI and their <a href='https://worldbuild.ai/home/'>world building project</a>, and the instability of the world’s future. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-179-7296.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We're talking with Jaan Tallinn, who has changed the way the world responds to the impact of #AI. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the <a href='https://www.cser.ac.uk/'>Centre for the Study of Existential Risk</a> in Cambridge, England, and the <a href='https://futureoflife.org/'>Future of Life Institute</a> in Cambridge, Massachusetts.</p>
<p>In the conclusion of the interview, we talk about value alignment and how that does or doesn’t intersect with large language models, FLI and their <a href='https://worldbuild.ai/home/'>world building project</a>, and the instability of the world’s future. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-179-7296.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/s5rgdj/179_AI-and-You.mp3" length="23850241" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We're talking with Jaan Tallinn, who has changed the way the world responds to the impact of #AI. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk in Cambridge, England, and the Future of Life Institute in Cambridge, Massachusetts.
In the conclusion of the interview, we talk about value alignment and how that does or doesn’t intersect with large language models, FLI and their world building project, and the instability of the world’s future. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1449</itunes:duration>
                <itunes:episode>179</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>178 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 1</title>
        <itunes:title>178 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 1</itunes:title>
        <link>https://aiandyou.net/e/178-guest-jaan-tallinn-ai-existential-risk-philanthropist-part-1/</link>
                    <comments>https://aiandyou.net/e/178-guest-jaan-tallinn-ai-existential-risk-philanthropist-part-1/#comments</comments>        <pubDate>Mon, 13 Nov 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7b9796f1-1d06-37ee-93b2-e8fbd7ad586d</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the <a href='https://www.cser.ac.uk/'>Centre for the Study of Existential Risk</a> in Cambridge, England, and the <a href='https://futureoflife.org/'>Future of Life Institute</a> in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the <a href='https://intelligence.org/'>Machine Intelligence Research Institute</a>.</p>
<p>In this first part, we talk about the problems with current #AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-178-3803.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the <a href='https://www.cser.ac.uk/'>Centre for the Study of Existential Risk</a> in Cambridge, England, and the <a href='https://futureoflife.org/'>Future of Life Institute</a> in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the <a href='https://intelligence.org/'>Machine Intelligence Research Institute</a>.</p>
<p>In this first part, we talk about the problems with current #AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/11/transcript-178-3803.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bk7xs8/178_AI-and-You.mp3" length="33293193" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged the billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk in Cambridge, England, and the Future of Life Institute in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the Machine Intelligence Research Institute.
In this first part, we talk about the problems with current #AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2039</itunes:duration>
                <itunes:episode>178</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>177 - Guest: Bart Selman, Professor for responsible AI use, part 2</title>
        <itunes:title>177 - Guest: Bart Selman, Professor for responsible AI use, part 2</itunes:title>
        <link>https://aiandyou.net/e/177-guest-bart-selman-professor-for-responsible-ai-use-part-2/</link>
                    <comments>https://aiandyou.net/e/177-guest-bart-selman-professor-for-responsible-ai-use-part-2/#comments</comments>        <pubDate>Mon, 06 Nov 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ef5b9ba7-3829-3ab5-b485-e0cf11d1cfae</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence, a fellow of the American Association for the Advancement of Science, and a contributing scientist at the two Asilomar conferences on responsible AI development.</p>
<p>In the conclusion of our interview we talk about self-driving cars, the capability of large language models to synthesize knowledge across many human domains, Richard Feynman, our understanding of language, Bertrand Russell, AIs as co-authors on research papers, and where Bart places us on a scale of artificial general intelligence ability. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-177-9309.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence, a fellow of the American Association for the Advancement of Science, and a contributing scientist at the two Asilomar conferences on responsible AI development.</p>
<p>In the conclusion of our interview we talk about self-driving cars, the capability of large language models to synthesize knowledge across many human domains, Richard Feynman, our understanding of language, Bertrand Russell, AIs as co-authors on research papers, and where Bart places us on a scale of artificial general intelligence ability. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-177-9309.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3zih7f/177_AI-and-You.mp3" length="29840432" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence, a fellow of the American Association for the Advancement of Science, and a contributing scientist at the two Asilomar conferences on responsible AI development.
In the conclusion of our interview we talk about self-driving cars, the capability of large language models to synthesize knowledge across many human domains, Richard Feynman, our understanding of language, Bertrand Russell, AIs as co-authors on research papers, and where Bart places us on a scale of artificial general intelligence ability. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1823</itunes:duration>
                <itunes:episode>177</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>176 - Guest: Bart Selman, Professor for responsible AI use, part 1</title>
        <itunes:title>176 - Guest: Bart Selman, Professor for responsible AI use, part 1</itunes:title>
        <link>https://aiandyou.net/e/176-guest-bart-selman-professor-for-responsible-ai-use-part-1/</link>
                    <comments>https://aiandyou.net/e/176-guest-bart-selman-professor-for-responsible-ai-use-part-1/#comments</comments>        <pubDate>Mon, 30 Oct 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/268472d9-abca-38d1-995f-d57db7e95dd3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence and a fellow of the American Association for the Advancement of Science.</p>
<p>In the first part of the interview we talk about common sense, artificial general intelligence, computer vision, #LLMs and their impact on computer programming, and how much they might really be understanding. Bart will also give his take on how good they are, how to understand how they’re working, and his experiments in getting ChatGPT to understand geometry.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-176-8819.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence and a fellow of the American Association for the Advancement of Science.</p>
<p>In the first part of the interview we talk about common sense, artificial general intelligence, computer vision, #LLMs and their impact on computer programming, and how much they might really be <em>understanding</em>. Bart will also give his take on how good they are, how to understand how they’re working, and his experiments in getting ChatGPT to understand geometry.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-176-8819.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/z4wttu/176_AI-and-You.mp3" length="32661657" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence and a fellow of the American Association for the Advancement of Science.
In the first part of the interview we talk about common sense, artificial general intelligence, computer vision, #LLMs and their impact on computer programming, and how much they might really be understanding. Bart will also give his take on how good they are, how to understand how they’re working, and his experiments in getting ChatGPT to understand geometry.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1999</itunes:duration>
                <itunes:episode>176</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>175 - AI and Education</title>
        <itunes:title>175 - AI and Education</itunes:title>
        <link>https://aiandyou.net/e/175-ai-and-education/</link>
                    <comments>https://aiandyou.net/e/175-ai-and-education/#comments</comments>        <pubDate>Mon, 23 Oct 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3b77933a-b01e-3ac3-9f70-013399899624</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The first area to see a dramatic impact from #ChatGPT was when it crushed term papers and sent teachers scurrying for ways to assess their students. Now that we've had nearly a year to evaluate the impact of #AI on #education, I look at how assessments and teaching have been affected and how schools might adapt to the incredible opportunities of generative AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-175-6994.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The first area to see a dramatic impact from #ChatGPT was when it crushed term papers and sent teachers scurrying for ways to assess their students. Now that we've had nearly a year to evaluate the impact of #AI on #education, I look at how assessments and teaching have been affected and how schools might adapt to the incredible opportunities of generative AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-175-6994.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bkpjr4/175_AI-and-You.mp3" length="26569902" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The first area to see a dramatic impact from #ChatGPT was when it crushed term papers and sent teachers scurrying for ways to assess their students. Now that we've had nearly a year to evaluate the impact of #AI on #education, I look at how assessments and teaching have been affected and how schools might adapt to the incredible opportunities of generative AI.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1619</itunes:duration>
                <itunes:episode>175</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>174 - AI and Jobs</title>
        <itunes:title>174 - AI and Jobs</itunes:title>
        <link>https://aiandyou.net/e/174-ai-and-jobs/</link>
                    <comments>https://aiandyou.net/e/174-ai-and-jobs/#comments</comments>        <pubDate>Mon, 16 Oct 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9ec7fb61-0649-32e1-921e-631a80234cb5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What effect will #AI, especially large language models like #ChatGPT, have on jobs? The conversation is intense and fractious. I attempt to shed some light on those effects, and discuss some of the different predictions and proposals for distributing the dividend from reducing costs and increasing markets through deploying AI. How will that capital get to where it is needed?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-174-5843.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What effect will #AI, especially large language models like #ChatGPT, have on jobs? The conversation is intense and fractious. I attempt to shed some light on those effects, and discuss some of the different predictions and proposals for distributing the dividend from reducing costs and increasing markets through deploying AI. How will that capital get to where it is needed?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-174-5843.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/5jbkpe/174_AI-and-You.mp3" length="37162241" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What effect will #AI, especially large language models like #ChatGPT, have on jobs? The conversation is intense and fractious. I attempt to shed some light on those effects, and discuss some of the different predictions and proposals for distributing the dividend from reducing costs and increasing markets through deploying AI. How will that capital get to where it is needed?
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2281</itunes:duration>
                <itunes:episode>174</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>173 - The UK AI Summit, Reflections</title>
        <itunes:title>173 - The UK AI Summit, Reflections</itunes:title>
        <link>https://aiandyou.net/e/173-the-uk-ai-summit-reflections/</link>
                    <comments>https://aiandyou.net/e/173-the-uk-ai-summit-reflections/#comments</comments>        <pubDate>Mon, 09 Oct 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/871f18c4-1efd-3871-8b80-1b2013f54f70</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The United Kingdom government is holding a <a href='https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november'>Summit on Artificial Intelligence</a> at the storied <a href='https://bletchleypark.org.uk/'>Bletchley Park</a> on November 1 and 2. Luminaries of #AI will be helping government authorities understand the issues that could require regulation or other government intervention.</p>
<p>Our invitation to attend may have been lost in the post.</p>
<p>But I do have reflections on the AI risks that will (or should) be presented at this event and some analysis and thought-provoking questions prompted by excellent events on these topics I recently attended by the <a href='https://londonfuturists.com/'>London Futurists</a> and <a href='https://mkai.org'>MKAI</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-173-2054.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The United Kingdom government is holding a <a href='https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november'>Summit on Artificial Intelligence</a> at the storied <a href='https://bletchleypark.org.uk/'>Bletchley Park</a> on November 1 and 2. Luminaries of #AI will be helping government authorities understand the issues that could require regulation or other government intervention.</p>
<p>Our invitation to attend may have been lost in the post.</p>
<p>But I do have reflections on the AI risks that will (or should) be presented at this event and some analysis and thought-provoking questions prompted by excellent events on these topics I recently attended by the <a href='https://londonfuturists.com/'>London Futurists</a> and <a href='https://mkai.org'>MKAI</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/10/transcript-173-2054.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/cur692/173_AI-and-You.mp3" length="39032191" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The United Kingdom government is holding a Summit on Artificial Intelligence at the storied Bletchley Park on November 1 and 2. Luminaries of #AI will be helping government authorities understand the issues that could require regulation or other government intervention.
Our invitation to attend may have been lost in the post.
But I do have reflections on the AI risks that will (or should) be presented at this event and some analysis and thought-provoking questions prompted by excellent events on these topics I recently attended by the London Futurists and MKAI.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2398</itunes:duration>
                <itunes:episode>173</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>172 - Guest: Matthew Lungren, Chief Medical Information Officer, part 2</title>
        <itunes:title>172 - Guest: Matthew Lungren, Chief Medical Information Officer, part 2</itunes:title>
        <link>https://aiandyou.net/e/172-guest-matthew-lungren-chief-medical-information-officer-part-2/</link>
                    <comments>https://aiandyou.net/e/172-guest-matthew-lungren-chief-medical-information-officer-part-2/#comments</comments>        <pubDate>Mon, 02 Oct 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c86ee208-9adf-3827-ba94-fb1566bb58f5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at <a href='https://www.nuance.com/index.html'>Nuance Communications</a>, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains.</p>
<p>In this interview conclusion, we talk about the details of how AI, including large language models, can be an effective part of a radiologist’s workflow, how decisions about integrating AI into medicine can be made, and where we might be going with it in the future.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-172-5444.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at <a href='https://www.nuance.com/index.html'>Nuance Communications</a>, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up <em>#radiology and #AI</em>. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains.</p>
<p>In this interview conclusion, we talk about the details of how AI, including large language models, can be an effective part of a radiologist’s workflow, how decisions about integrating AI into medicine can be made, and where we might be going with it in the future.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-172-5444.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kf7vyn/172_AI-and-You.mp3" length="27208125" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at Nuance Communications, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains.
In this interview conclusion, we talk about the details of how AI, including large language models, can be an effective part of a radiologist’s workflow, how decisions about integrating AI into medicine can be made, and where we might be going with it in the future.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1659</itunes:duration>
                <itunes:episode>172</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>171 - Guest: Matthew Lungren, Chief Medical Information Officer, part 1</title>
        <itunes:title>171 - Guest: Matthew Lungren, Chief Medical Information Officer, part 1</itunes:title>
        <link>https://aiandyou.net/e/171-guest-matthew-lungren-chief-medical-information-officer-part-1/</link>
                    <comments>https://aiandyou.net/e/171-guest-matthew-lungren-chief-medical-information-officer-part-1/#comments</comments>        <pubDate>Mon, 25 Sep 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8ebebe68-c0d9-3b83-ab2b-a51e750b6498</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at <a href='https://www.nuance.com/index.html'>Nuance Communications</a>, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains.</p>
<p>The basis for Hinton's assertion was that AI can be trained to find tumors, for instance, in CT scans, and we know how good AI is at image analysis when it’s got lots of labeled data to train on, and we certainly have that with CT scans. We get to find out what's real about AI in #medicine in this episode.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-171-4030.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at <a href='https://www.nuance.com/index.html'>Nuance Communications</a>, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up <em>#radiology and #AI</em>. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains.</p>
<p>The basis for Hinton's assertion was that AI can be trained to find tumors, for instance, in CT scans, and we know how good AI is at image analysis when it’s got lots of labeled data to train on, and we certainly have that with CT scans. We get to find out what's real about AI in #medicine in this episode.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align: left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-171-4030.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xnkrv6/171_AI-and-You.mp3" length="34735152" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at Nuance Communications, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains.
The basis for Hinton's assertion was that AI can be trained to find tumors, for instance, in CT scans, and we know how good AI is at image analysis when it’s got lots of labeled data to train on, and we certainly have that with CT scans. We get to find out what's real about AI in #medicine in this episode.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2129</itunes:duration>
                <itunes:episode>171</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>170 - Guest: Michael Sharpe, AI Agent Platform CEO</title>
        <itunes:title>170 - Guest: Michael Sharpe, AI Agent Platform CEO</itunes:title>
        <link>https://aiandyou.net/e/170-guest-michael-sharpe-ai-agent-platform-ceo/</link>
                    <comments>https://aiandyou.net/e/170-guest-michael-sharpe-ai-agent-platform-ceo/#comments</comments>        <pubDate>Mon, 18 Sep 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3abb395b-7fe1-3091-bcb8-a02d2dca73cf</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The superheated large language model (LLM) revolution is only accelerating as they are incorporated into #agents - systems that take independent action. Here to help us understand the state of that art is Michael Sharpe, CEO of <a href='https://www.magickml.com/'>Magick ML</a>, a development environment that gives people a way of creating agents based upon generative #AI. Equally fascinating is Michael's previous job at Latitude, working on the virally popular online fantasy adventure game <a href='https://aidungeon.io'>AI Dungeon</a>, a role-playing simulation where the story was made up by an #LLM on the fly, and we talk about that too.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-170-7245.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The superheated large language model (LLM) revolution is only accelerating as they are incorporated into <em>#agents</em> - systems that take independent action. Here to help us understand the state of that art is Michael Sharpe, CEO of <a href='https://www.magickml.com/'>Magick ML</a>, a development environment that gives people a way of creating agents based upon generative #AI. Equally fascinating is Michael's previous job at Latitude, working on the virally popular online fantasy adventure game <a href='https://aidungeon.io'>AI Dungeon</a>, a role-playing simulation where the story was made up by an #LLM on the fly, and we talk about that too.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-170-7245.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/2mc5fm/170_AI-and-You.mp3" length="43497249" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The superheated large language model (LLM) revolution is only accelerating as they are incorporated into #agents - systems that take independent action. Here to help us understand the state of that art is Michael Sharpe, CEO of Magick ML, a development environment that gives people a way of creating agents based upon generative #AI. Equally fascinating is Michael's previous job at Latitude, working on the virally popular online fantasy adventure game AI Dungeon, a role-playing simulation where the story was made up by an #LLM on the fly, and we talk about that too.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2677</itunes:duration>
                <itunes:episode>170</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>169 - Guest: Hod Lipson, Roboticist, part 2</title>
        <itunes:title>169 - Guest: Hod Lipson, Roboticist, part 2</itunes:title>
        <link>https://aiandyou.net/e/169-guest-hod-lipson-roboticist-part-2/</link>
                    <comments>https://aiandyou.net/e/169-guest-hod-lipson-roboticist-part-2/#comments</comments>        <pubDate>Mon, 11 Sep 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/00dea275-1f68-3228-b61c-e7bdaf59d57a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - embedded AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the <a href='https://www.creativemachineslab.com/'>Creative Machines Lab</a>, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best &amp; Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His <a href='https://www.ted.com/talks/hod_lipson_building_self_aware_robots'>TED talk on building robots that are self-aware</a> is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “<a href='https://www.amazon.com/Fabricated-New-World-3D-Printing/dp/1118350634/'>Fabricated: The New World of 3D printing</a>” and “<a href='https://www.amazon.com/Driverless-MIT-Press-Intelligent-Ahead/dp/0262534479'>Driverless: Intelligent cars and the road ahead</a>”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics.</p>
<p>In the conclusion of the interview we talk about robot cannibals, self-replicating robots, novel form factors for robots, the impact of #ChatGPT on higher education, and more of Hod's expansive vision for the future.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-169-4343.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - embedded AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the <a href='https://www.creativemachineslab.com/'>Creative Machines Lab</a>, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best &amp; Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His <a href='https://www.ted.com/talks/hod_lipson_building_self_aware_robots'>TED talk on building robots that are self-aware</a> is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “<a href='https://www.amazon.com/Fabricated-New-World-3D-Printing/dp/1118350634/'>Fabricated: The New World of 3D printing</a>” and “<a href='https://www.amazon.com/Driverless-MIT-Press-Intelligent-Ahead/dp/0262534479'>Driverless: Intelligent cars and the road ahead</a>”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics.</p>
<p>In the conclusion of the interview we talk about robot cannibals, self-replicating robots, novel form factors for robots, the impact of #ChatGPT on higher education, and more of Hod's expansive vision for the future.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-169-4343.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/9mpu7g/169_AI-and-You.mp3" length="38021565" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Robots - embodied AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best &amp; Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His TED talk on building robots that are self-aware is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “Fabricated: The New World of 3D printing” and “Driverless: Intelligent cars and the road ahead”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics.
In the conclusion of the interview we talk about robot cannibals, self-replicating robots, novel form factors for robots, the impact of #ChatGPT on higher education, and more of Hod's expansive vision for the future.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2334</itunes:duration>
                <itunes:episode>169</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>168 - Guest: Hod Lipson, Roboticist, part 1</title>
        <itunes:title>168 - Guest: Hod Lipson, Roboticist, part 1</itunes:title>
        <link>https://aiandyou.net/e/168-guest-hod-lipson-roboticist-part-1/</link>
                    <comments>https://aiandyou.net/e/168-guest-hod-lipson-roboticist-part-1/#comments</comments>        <pubDate>Mon, 04 Sep 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/322d71dd-2d28-301f-aa07-ba36828661bb</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - embodied AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the <a href='https://www.creativemachineslab.com/'>Creative Machines Lab</a>, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best &amp; Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His <a href='https://www.ted.com/talks/hod_lipson_building_self_aware_robots'>TED talk on building robots that are self-aware</a> is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “<a href='https://www.amazon.com/Fabricated-New-World-3D-Printing/dp/1118350634/'>Fabricated: The New World of 3D printing</a>” and “<a href='https://www.amazon.com/Driverless-MIT-Press-Intelligent-Ahead/dp/0262534479'>Driverless: Intelligent cars and the road ahead</a>”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics.</p>
<p>In part 1 we talk about our future with #robots that might be creative, self-aware, sentient, or generally intelligent.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-168-9629.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - embodied AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the <a href='https://www.creativemachineslab.com/'>Creative Machines Lab</a>, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best &amp; Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His <a href='https://www.ted.com/talks/hod_lipson_building_self_aware_robots'>TED talk on building robots that are self-aware</a> is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “<a href='https://www.amazon.com/Fabricated-New-World-3D-Printing/dp/1118350634/'>Fabricated: The New World of 3D printing</a>” and “<a href='https://www.amazon.com/Driverless-MIT-Press-Intelligent-Ahead/dp/0262534479'>Driverless: Intelligent cars and the road ahead</a>”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics.</p>
<p>In part 1 we talk about our future with #robots that might be creative, self-aware, sentient, or generally intelligent.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/09/transcript-168-9629.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/mjqhs6/168_AI-and-You.mp3" length="32324364" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Robots - embodied AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best &amp; Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His TED talk on building robots that are self-aware is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “Fabricated: The New World of 3D printing” and “Driverless: Intelligent cars and the road ahead”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics.
In part 1 we talk about our future with #robots that might be creative, self-aware, sentient, or generally intelligent.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1978</itunes:duration>
                <itunes:episode>168</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>167 - AI and Our Relationship with Time</title>
        <itunes:title>167 - AI and Our Relationship with Time</itunes:title>
        <link>https://aiandyou.net/e/167-ai-and-our-relationship-with-time/</link>
                    <comments>https://aiandyou.net/e/167-ai-and-our-relationship-with-time/#comments</comments>        <pubDate>Mon, 28 Aug 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/06ede0e5-b42a-38f7-8e76-b60bcdc9d07a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>In this special episode, we look at our relationship with time: how it's broken, what that means to us, and how AI might make that better - or worse. We've let technology call the shots for so long that we don't realize that we're running around a hamster wheel of our own making, chasing a carrot on a stick in front of our heads that we will never catch. Now with large language models like #ChatGPT available to everyone, are we going to use that to make the wheel spin faster - or get out of the cage?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/08/transcript-167-0350.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>In this special episode, we look at our relationship with <em>time:</em> how it's broken, what that means to us, and how AI might make that better - or worse. We've let technology call the shots for so long that we don't realize that we're running around a hamster wheel of our own making, chasing a carrot on a stick in front of our heads that we will never catch. Now with large language models like #ChatGPT available to everyone, are we going to use that to make the wheel spin faster - or get out of the cage?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/08/transcript-167-0350.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/hijx6e/167_AI-and-You.mp3" length="30221319" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

In this special episode, we look at our relationship with time: how it's broken, what that means to us, and how AI might make that better - or worse. We've let technology call the shots for so long that we don't realize that we're running around a hamster wheel of our own making, chasing a carrot on a stick in front of our heads that we will never catch. Now with large language models like #ChatGPT available to everyone, are we going to use that to make the wheel spin faster - or get out of the cage?
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1839</itunes:duration>
                <itunes:episode>167</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>166 - Guest: Babak Pahlavan, AI Executive Assistant Builder</title>
        <itunes:title>166 - Guest: Babak Pahlavan, AI Executive Assistant Builder</itunes:title>
        <link>https://aiandyou.net/e/166-guest-babak-pahlavan-ai-executive-assistant-builder/</link>
                    <comments>https://aiandyou.net/e/166-guest-babak-pahlavan-ai-executive-assistant-builder/#comments</comments>        <pubDate>Mon, 21 Aug 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0c8d7989-7104-3b11-b52e-9489ffa32674</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>After years of show guests projecting their visions of an executive assistant AI, Babak Pahlavan is building one, over at Silicon Valley startup <a href='https://www.ninjatech.ai/'>NinjaTech AI</a>, and he comes on the show to tell us about the challenges of building that and what it will do. He has been working on AI since 2008, when he was the Founder and CEO of his first AI startup named CleverSense. CleverSense was <a href='https://techcrunch.com/2011/12/13/google-acquires-clever-sense-creator-of-local-recommendations-app-alfred/?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAErchSiWyPclz7l6-p4bwOyk-_9SKGFKOD_bQBu4hUxEe3-rZ3dLD78RzacW5wiXEj2V5K2DvaR1TRikxo6qw1IVVzTd4wDFjqn9ZQwTtvvyogCsuQMgUZUpxdFoVuaNVtjB86XYuRwY8Q5WfSZWwDqtKfxIWIw9XLWWjh_gVDgN'>acquired</a> by Google in 2011, where it became an important personalization layer in Google Maps. Babak went on to spend 11 years at Google as a Senior Director of Product Management, where he led and scaled several large products and teams including Google Analytics, Enterprise Measurement Suite and others. He left Google in October of 2022 to found NinjaTech AI in partnership with <a href='https://www.sri.com/about-us/'>SRI</a>, which is the original home of <a href='https://www.sri.com/hoi/siri/'>Siri</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/08/transcript-166-2835.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>After years of show guests projecting their visions of an executive assistant AI, Babak Pahlavan is building one, over at Silicon Valley startup <a href='https://www.ninjatech.ai/'>NinjaTech AI</a>, and he comes on the show to tell us about the challenges of building that and what it will do. He has been working on AI since 2008, when he was the Founder and CEO of his first AI startup named CleverSense. CleverSense was <a href='https://techcrunch.com/2011/12/13/google-acquires-clever-sense-creator-of-local-recommendations-app-alfred/?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAErchSiWyPclz7l6-p4bwOyk-_9SKGFKOD_bQBu4hUxEe3-rZ3dLD78RzacW5wiXEj2V5K2DvaR1TRikxo6qw1IVVzTd4wDFjqn9ZQwTtvvyogCsuQMgUZUpxdFoVuaNVtjB86XYuRwY8Q5WfSZWwDqtKfxIWIw9XLWWjh_gVDgN'>acquired</a> by Google in 2011, where it became an important personalization layer in Google Maps. Babak went on to spend 11 years at Google as a Senior Director of Product Management, where he led and scaled several large products and teams including Google Analytics, Enterprise Measurement Suite and others. He left Google in October of 2022 to found NinjaTech AI in partnership with <a href='https://www.sri.com/about-us/'>SRI</a>, which is the original home of <a href='https://www.sri.com/hoi/siri/'>Siri</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/08/transcript-166-2835.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/9hu786/166_AI-and-You.mp3" length="36990262" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

After years of show guests projecting their visions of an executive assistant AI, Babak Pahlavan is building one, over at Silicon Valley startup NinjaTech AI, and he comes on the show to tell us about the challenges of building that and what it will do. He has been working on AI since 2008, when he was the Founder and CEO of his first AI startup named CleverSense. CleverSense was acquired by Google in 2011, where it became an important personalization layer in Google Maps. Babak went on to spend 11 years at Google as a Senior Director of Product Management, where he led and scaled several large products and teams including Google Analytics, Enterprise Measurement Suite and others. He left Google in October of 2022 to found NinjaTech AI in partnership with SRI, which is the original home of Siri.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2705</itunes:duration>
                <itunes:episode>166</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>165 - Guest: Boaz Mizrachi, AV Platform founder</title>
        <itunes:title>165 - Guest: Boaz Mizrachi, AV Platform founder</itunes:title>
        <link>https://aiandyou.net/e/165-guest-boaz-mizrachi-av-platform-founder/</link>
                    <comments>https://aiandyou.net/e/165-guest-boaz-mizrachi-av-platform-founder/#comments</comments>        <pubDate>Mon, 14 Aug 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ae468b8a-bb71-38ee-856f-7aa1ed44b8d8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you drive by the seat of your pants, listen to our guest Boaz Mizrachi, calling from Israel, where he is co-founder of <a href='https://tactilemobility.com/'>Tactile Mobility</a>, an autonomous vehicle platform developer that evaluates what a car feels. You base a lot of your driving decisions on how you sense the road through the wheels and transmission, so why shouldn't your AV do so too? This is important when dealing with skidding, for instance. Boaz tells us how that works in fascinating detail and where it sits in the current state of the art in AV platform integration.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/08/transcript-165-6309.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you drive by the seat of your pants, listen to our guest Boaz Mizrachi, calling from Israel, where he is co-founder of <a href='https://tactilemobility.com/'>Tactile Mobility</a>, an autonomous vehicle platform developer that evaluates what a car <em>feels. </em>You base a lot of your driving decisions on how you sense the road through the wheels and transmission, so why shouldn't your AV do so too? This is important when dealing with skidding, for instance. Boaz tells us how that works in fascinating detail and where it sits in the current state of the art in AV platform integration.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/08/transcript-165-6309.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8hukea/165_AI-and-You.mp3" length="37907588" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you drive by the seat of your pants, listen to our guest Boaz Mizrachi, calling from Israel, where he is co-founder of Tactile Mobility, an autonomous vehicle platform developer that evaluates what a car feels. You base a lot of your driving decisions on how you sense the road through the wheels and transmission, so why shouldn't your AV do so too? This is important when dealing with skidding, for instance. Boaz tells us how that works in fascinating detail and where it sits in the current state of the art in AV platform integration.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2320</itunes:duration>
                <itunes:episode>165</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>164 - Guest: Alan D. Thompson, AI Consultant, part 2</title>
        <itunes:title>164 - Guest: Alan D. Thompson, AI Consultant, part 2</itunes:title>
        <link>https://aiandyou.net/e/164-guest-alan-d-thompson-ai-consultant-part-2/</link>
                    <comments>https://aiandyou.net/e/164-guest-alan-d-thompson-ai-consultant-part-2/#comments</comments>        <pubDate>Mon, 07 Aug 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f93eb534-f3a4-3b95-a9bd-ee28d14bab51</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series Decoding Genius for GE, Making Child Prodigies for ABC (with the Australian Prime Minister), 60 Minutes for Network Ten/CBS, and Child Genius for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman of the gifted families committee of Mensa International. He writes <a href='https://lifearchitect.ai/memo/'>The Memo</a>, a monthly newsletter with bleeding-edge AI news that I personally find highly useful.</p>
<p>In the conclusion of the interview, we talk about the present and future of keeping up with AI news, the future of artificial general intelligence, what the large language models are about to do, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-164-4957.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series <em>Decoding Genius</em> for GE, <em>Making Child Prodigies</em> for ABC (with the Australian Prime Minister), <em>60 Minutes</em> for Network Ten/CBS, and <em>Child Genius</em> for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman of the gifted families committee of Mensa International. He writes <a href='https://lifearchitect.ai/memo/'><em>The Memo</em></a>, a monthly newsletter with bleeding-edge AI news that I personally find highly useful.</p>
<p>In the conclusion of the interview, we talk about the present and future of keeping up with AI news, the future of artificial general intelligence, what the large language models are about to do, and much more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-164-4957.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bn8i5x/164_AI-and-You.mp3" length="34209067" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series Decoding Genius for GE, Making Child Prodigies for ABC (with the Australian Prime Minister), 60 Minutes for Network Ten/CBS, and Child Genius for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman of the gifted families committee of Mensa International. He writes The Memo, a monthly newsletter with bleeding-edge AI news that I personally find highly useful.
In the conclusion of the interview, we talk about the present and future of keeping up with AI news, the future of artificial general intelligence, what the large language models are about to do, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2089</itunes:duration>
                <itunes:episode>164</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>163 - Guest: Alan D. Thompson, AI Consultant, part 1</title>
        <itunes:title>163 - Guest: Alan D. Thompson, AI Consultant, part 1</itunes:title>
        <link>https://aiandyou.net/e/163-guest-alan-d-thompson-ai-consultant-part-1/</link>
                    <comments>https://aiandyou.net/e/163-guest-alan-d-thompson-ai-consultant-part-1/#comments</comments>        <pubDate>Mon, 31 Jul 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d929148d-23c5-34bc-9d0f-d047dd64048c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series Decoding Genius for GE, Making Child Prodigies for ABC (with the Australian Prime Minister), 60 Minutes for Network Ten/CBS, and Child Genius for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman of the gifted families committee of Mensa International. He writes <a href='https://lifearchitect.ai/memo/'>The Memo</a>, a monthly newsletter with bleeding-edge AI news that I personally find highly useful.</p>
<p>In this first part of the interview, Alan compares large language models like ChatGPT, relates human and artificial intelligence, and talks about superintelligence alignment.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-163-8174.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series <em>Decoding Genius</em> for GE, <em>Making Child Prodigies</em> for ABC (with the Australian Prime Minister), <em>60 Minutes</em> for Network Ten/CBS, and <em>Child Genius</em> for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman of the gifted families committee of Mensa International. He writes <a href='https://lifearchitect.ai/memo/'><em>The Memo</em></a>, a monthly newsletter with bleeding-edge AI news that I personally find highly useful.</p>
<p>In this first part of the interview, Alan compares large language models like ChatGPT, relates human and artificial intelligence, and talks about superintelligence alignment.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-163-8174.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xdc87b/163_AI-and-You.mp3" length="31852589" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series Decoding Genius for GE, Making Child Prodigies for ABC (with the Australian Prime Minister), 60 Minutes for Network Ten/CBS, and Child Genius for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman of the gifted families committee of Mensa International. He writes The Memo, a monthly newsletter with bleeding-edge AI news that I personally find highly useful.
In this first part of the interview, Alan compares large language models like ChatGPT, relates human and artificial intelligence, and talks about superintelligence alignment.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1941</itunes:duration>
                <itunes:episode>163</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>162 - Guest: Ryan Donnelly, AI Governance Platform Founder</title>
        <itunes:title>162 - Guest: Ryan Donnelly, AI Governance Platform Founder</itunes:title>
        <link>https://aiandyou.net/e/162-guest-ryan-donnelly-ai-governance-platform-founder/</link>
                    <comments>https://aiandyou.net/e/162-guest-ryan-donnelly-ai-governance-platform-founder/#comments</comments>        <pubDate>Mon, 24 Jul 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d914ea53-2b55-3f84-a43e-1228bf803ff1</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Giving us a peek behind the scenes of Number 10 Downing Street today is Ryan Donnelly, founder of <a href='https://enz.ai'>Enzai</a>, an AI governance platform that helps organizations manage AI risk through policy and organizational controls, allowing users to engender trust in, and scale, their AI systems. Before founding Enzai, Ryan worked as a corporate lawyer in London at some of the world’s leading law firms.</p>
<p>Ryan was recently invited to 10 Downing Street to discuss AI and UK policy, along with some other very high-powered luminaries of AI. So we’re going to talk about what’s going on at that level of the UK government with respect to AI, and we'll learn about operationalizing AI risk management.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-162-1602.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Giving us a peek behind the scenes of Number 10 Downing Street today is Ryan Donnelly, founder of <a href='https://enz.ai'>Enzai</a>, an AI governance platform that helps organizations manage AI risk through policy and organizational controls, allowing users to engender trust in, and scale, their AI systems. Before founding Enzai, Ryan worked as a corporate lawyer in London at some of the world’s leading law firms.</p>
<p>Ryan was recently invited to 10 Downing Street to discuss AI and UK policy, along with some other very high-powered luminaries of AI. So we’re going to talk about what’s going on at that level of the UK government with respect to AI, and we'll learn about operationalizing AI risk management.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-162-1602.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/eue2eh/162_AI-and-You.mp3" length="42117691" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Giving us a peek behind the scenes of Number 10 Downing Street today is Ryan Donnelly, founder of Enzai, an AI governance platform that helps organizations manage AI risk through policy and organizational controls, allowing users to engender trust in, and scale, their AI systems. Before founding Enzai, Ryan worked as a corporate lawyer in London at some of the world’s leading law firms.
Ryan was recently invited to 10 Downing Street to discuss AI and UK policy, along with some other very high-powered luminaries of AI. So we’re going to talk about what’s going on at that level of the UK government with respect to AI, and we'll learn about operationalizing AI risk management.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2583</itunes:duration>
                <itunes:episode>162</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>161 - Guest: Roman Yampolskiy, AI Safety Professor, part 2</title>
        <itunes:title>161 - Guest: Roman Yampolskiy, AI Safety Professor, part 2</itunes:title>
        <link>https://aiandyou.net/e/161-guest-roman-yampolskiy-ai-safety-professor-part-2/</link>
                    <comments>https://aiandyou.net/e/161-guest-roman-yampolskiy-ai-safety-professor-part-2/#comments</comments>        <pubDate>Mon, 17 Jul 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ea62f7b5-0a4e-37bc-9dba-6a8d8c86f8ca</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do AIs do with optical illusions... and jokes? Returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is one of the most eminent researchers in that space. He has written numerous papers and books, including <a href='https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1138435775'>Artificial Superintelligence: A Futuristic Approach</a> in 2015 and <a href='https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/1138320846/'>Artificial Intelligence Safety and Security</a> in 2018.</p>
<p>Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and Business Today, and to many mainstream/broadcast TV news shows, but he found time to sit down and talk with us.</p>
<p>In the conclusion of the interview we talk about wider-ranging issues of AI safety, just how the existential risk is being addressed today, and more on the recent public letters calling attention to AI risk. Plus we get a scoop on Roman's latest paper, <a href='https://www.researchgate.net/publication/371340160_Unmonitorability_of_Artificial_Intelligence'>Unmonitorability of Artificial Intelligence</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-161-3628.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do AIs do with optical illusions... and jokes? Returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is one of the most eminent researchers in that space. He has written numerous papers and books, including <a href='https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1138435775'><em>Artificial Superintelligence: A Futuristic Approach</em></a> in 2015 and <a href='https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/1138320846/'><em>Artificial Intelligence Safety and Security</em></a> in 2018.</p>
<p>Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and <em>Business Today</em>, and to many mainstream/broadcast TV news shows, but he found time to sit down and talk with us.</p>
<p>In the conclusion of the interview we talk about wider-ranging issues of AI safety, just how the existential risk is being addressed today, and more on the recent public letters calling attention to AI risk. Plus we get a scoop on Roman's latest paper, <a href='https://www.researchgate.net/publication/371340160_Unmonitorability_of_Artificial_Intelligence'><em>Unmonitorability of Artificial Intelligence</em></a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-161-3628.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xuieai/161_AI-and-You.mp3" length="31988450" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What do AIs do with optical illusions... and jokes? Returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is one of the most eminent researchers in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018.
Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and Business Today, and to many mainstream/broadcast TV news shows, but he found time to sit down and talk with us.
In the conclusion of the interview we talk about wider-ranging issues of AI safety, just how the existential risk is being addressed today, and more on the recent public letters calling attention to AI risk. Plus we get a scoop on Roman's latest paper, Unmonitorability of Artificial Intelligence.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1950</itunes:duration>
                <itunes:episode>161</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>160 - Guest: Roman Yampolskiy, AI Safety Professor, part 1</title>
        <itunes:title>160 - Guest: Roman Yampolskiy, AI Safety Professor, part 1</itunes:title>
        <link>https://aiandyou.net/e/160-guest-roman-yampolskiy-ai-safety-professor-part-1/</link>
                    <comments>https://aiandyou.net/e/160-guest-roman-yampolskiy-ai-safety-professor-part-1/#comments</comments>        <pubDate>Mon, 10 Jul 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f9fad20d-fa20-3a06-908b-ceb4dd6ef25e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>With statements about the existential threat of AI being publicly signed by prominent AI personalities, we need an academic's take on that, and returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is a preeminent researcher in that space. He has written numerous papers and books, including <a href='https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1138435775'>Artificial Superintelligence: A Futuristic Approach</a> in 2015 and <a href='https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/1138320846/'>Artificial Intelligence Safety and Security</a> in 2018.</p>
<p>Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and Business Today, and to many mainstream/broadcast TV news shows, but he found time to sit down and talk with us.</p>
<p>In the first part of the interview we discussed the open letters about AI, how ChatGPT and its predecessors/successors move us closer to AGI and existential risk, and what Roman has in common with Leonardo DiCaprio.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-160-8800.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>With statements about the existential threat of AI being publicly signed by prominent AI personalities, we need an academic's take on that, and returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is a preeminent researcher in that space. He has written numerous papers and books, including <a href='https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1138435775'><em>Artificial Superintelligence: A Futuristic Approach</em></a> in 2015 and <a href='https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/1138320846/'><em>Artificial Intelligence Safety and Security</em></a> in 2018.</p>
<p>Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and <em>Business Today</em>, and to many mainstream/broadcast TV news shows, but he found time to sit down and talk with us.</p>
<p>In the first part of the interview we discussed the open letters about AI, how ChatGPT and its predecessors/successors move us closer to AGI and existential risk, and what Roman has in common with Leonardo DiCaprio.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/07/transcript-160-8800.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qncrws/160_AI-and-You.mp3" length="32100463" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

With statements about the existential threat of AI being publicly signed by prominent AI personalities, we need an academic's take on that, and returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is a preeminent researcher in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018.
Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and Business Today, and to many mainstream/broadcast TV news shows, but he found time to sit down and talk with us.
In the first part of the interview we discussed the open letters about AI, how ChatGPT and its predecessors/successors move us closer to AGI and existential risk, and what Roman has in common with Leonardo DiCaprio.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1957</itunes:duration>
                <itunes:episode>160</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>159 - Guest: Louis Rosenberg, Human/AI Hybrid Intelligence Expert, part 2</title>
        <itunes:title>159 - Guest: Louis Rosenberg, Human/AI Hybrid Intelligence Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/159-guest-louis-rosenberg-humanai-hybrid-intelligence-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/159-guest-louis-rosenberg-humanai-hybrid-intelligence-expert-part-2/#comments</comments>        <pubDate>Mon, 03 Jul 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/27cce860-7487-37e9-b89a-6fd3d63725a6</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He has been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents.</p>
<p>He's currently CEO and Chief Scientist of <a href='https://unanimous.ai'>Unanimous AI</a>, a company that amplifies human group intelligence using AI technology based on the biological principle of Swarm Intelligence, which is where the bees come in. The Swarm AI technology that he created has an extraordinary record of making predictions such as picking Oscar winners.</p>
<p>In the conclusion of the interview, we talk about ways AI threatens privacy, and Louis' philosophy of using AI to empower human cooperation.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-159-9834.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He has been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents.</p>
<p>He's currently CEO and Chief Scientist of <a href='https://unanimous.ai'>Unanimous AI</a>, a company that amplifies human group intelligence using AI technology based on the biological principle of <em>Swarm Intelligence</em>, which is where the bees come in. The Swarm AI technology that he created has an extraordinary record of making predictions such as picking Oscar winners.</p>
<p>In the conclusion of the interview, we talk about ways AI threatens privacy, and Louis' philosophy of using AI to empower human cooperation.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-159-9834.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/7w73zh/159_AI-and-You.mp3" length="26748078" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He has been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents.
He's currently CEO and Chief Scientist of Unanimous AI, a company that amplifies human group intelligence using AI technology based on the biological principle of Swarm Intelligence, which is where the bees come in. The Swarm AI technology that he created has an extraordinary record of making predictions such as picking Oscar winners.
In the conclusion of the interview, we talk about ways AI threatens privacy, and Louis' philosophy of using AI to empower human cooperation.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1622</itunes:duration>
                <itunes:episode>159</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>158 - Guest: Louis Rosenberg, Human/AI Hybrid Intelligence Expert, part 1</title>
        <itunes:title>158 - Guest: Louis Rosenberg, Human/AI Hybrid Intelligence Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/158-guest-louis-rosenberg-humanai-hybrid-intelligence-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/158-guest-louis-rosenberg-humanai-hybrid-intelligence-expert-part-1/#comments</comments>        <pubDate>Mon, 26 Jun 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/49bd016c-a363-3f3b-8af5-1a37c765cffd</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He has been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents.</p>
<p>He's currently CEO and Chief Scientist of <a href='https://unanimous.ai'>Unanimous AI</a>, a company that amplifies human group intelligence using AI technology based on the biological principle of Swarm Intelligence, which is where the bees come in. The UNU Swarm Intelligence that he created has an extraordinary record of making predictions such as picking Oscar winners.</p>
<p>We talk about how AI can help humans cooperate instead of conflict, and we talk about threats to privacy, and the convergence of AI and AR/VR technology like Apple's new headset.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-158-0371.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He has been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents.</p>
<p>He's currently CEO and Chief Scientist of <a href='https://unanimous.ai'>Unanimous AI</a>, a company that amplifies human group intelligence using AI technology based on the biological principle of <em>Swarm Intelligence</em>, which is where the bees come in. The UNU Swarm Intelligence that he created has an extraordinary record of making predictions such as picking Oscar winners.</p>
<p>We talk about how AI can help humans <em>cooperate</em> instead of <em>conflict</em>, and we talk about threats to privacy, and the convergence of AI and AR/VR technology like Apple's new headset.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-158-0371.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qs9yuf/158_AI-and-You.mp3" length="29234919" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He has been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents.
He's currently CEO and Chief Scientist of Unanimous AI, a company that amplifies human group intelligence using AI technology based on the biological principle of Swarm Intelligence, which is where the bees come in. The UNU Swarm Intelligence that he created has an extraordinary record of making predictions such as picking Oscar winners.
We talk about how AI can help humans cooperate instead of conflict, and we talk about threats to privacy, and the convergence of AI and AR/VR technology like Apple's new headset.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1778</itunes:duration>
                <itunes:episode>158</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>157 - Should AI Be Able To Feel?</title>
        <itunes:title>157 - Should AI Be Able To Feel?</itunes:title>
        <link>https://aiandyou.net/e/157-should-ai-be-ableto-feel/</link>
                    <comments>https://aiandyou.net/e/157-should-ai-be-ableto-feel/#comments</comments>        <pubDate>Mon, 19 Jun 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e93b0f01-5c82-3195-af0b-9fdaae574a9f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Should AI be able to feel? It may seem like the height of hubris, recklessness, and even cruelty to suggest such a thing, and yet our increasing unease and fears of what #AI may do stem from its lack of empathy.</p>
<p>I develop this reasoning in my third <a href='https://www.ted.com/talks/peter_j_scott_empathetic_ai_unlocking_trust_between_humans_and_machines'>TEDx talk</a>, recorded at <a href='http://tedxroyalroadsu.com/'>Royal Roads University</a>. From my research into Joseph Weizenbaum's ELIZA to what developers of #ChatGPT and other AI are missing, I explore this most sensitive of issues.</p>
<p>This podcast episode is the bonus track, the director's cut if you will, that expands on those 12 minutes of talk to give you added value and even more questions to take away.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-157-7993.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Should AI be able to feel? It may seem like the height of hubris, recklessness, and even cruelty to suggest such a thing, and yet our increasing unease and fears of what #AI may do stem from its lack of empathy.</p>
<p>I develop this reasoning in my third <a href='https://www.ted.com/talks/peter_j_scott_empathetic_ai_unlocking_trust_between_humans_and_machines'>TEDx talk</a>, recorded at <a href='http://tedxroyalroadsu.com/'>Royal Roads University</a>. From my research into Joseph Weizenbaum's ELIZA to what developers of #ChatGPT and other AI are missing, I explore this most sensitive of issues.</p>
<p>This podcast episode is the bonus track, the director's cut if you will, that expands on those 12 minutes of talk to give you added value and even more questions to take away.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-157-7993.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/z6fd8g/157_AI-and-You.mp3" length="40389430" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Should AI be able to feel? It may seem like the height of hubris, recklessness, and even cruelty to suggest such a thing, and yet our increasing unease and fears of what #AI may do stem from its lack of empathy.
I develop this reasoning in my third TEDx talk, recorded at Royal Roads University. From my research into Joseph Weizenbaum's ELIZA to what developers of #ChatGPT and other AI are missing, I explore this most sensitive of issues.
This podcast episode is the bonus track, the director's cut if you will, that expands on those 12 minutes of talk to give you added value and even more questions to take away.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2475</itunes:duration>
                <itunes:episode>157</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>156 - Guest: Dorian Selz, Business AI CEO</title>
        <itunes:title>156 - Guest: Dorian Selz, Business AI CEO</itunes:title>
        <link>https://aiandyou.net/e/156-guest-dorian-selz-business-ai-ceo/</link>
                    <comments>https://aiandyou.net/e/156-guest-dorian-selz-business-ai-ceo/#comments</comments>        <pubDate>Mon, 12 Jun 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1b51bd51-55c0-3cba-ba0b-5566ad4e01dc</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Large language models like #ChatGPT have thoroughly disrupted business today, and here to help us understand what's going on there and how business leaders should view LLMs is Dorian Selz. He called from Zurich, where he is the CEO of <a href='https://squirro.com'>Squirro</a>, making it easier for businesses to start using #AI. We talked about everything from where to be wary of LLMs to EU regulation.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-156-5408.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Large language models like #ChatGPT have thoroughly disrupted business today, and here to help us understand what's going on there and how business leaders should view LLMs is Dorian Selz. He called from Zurich, where he is the CEO of <a href='https://squirro.com'>Squirro</a>, making it easier for businesses to start using #AI. We talked about everything from where to be wary of LLMs to EU regulation.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/06/transcript-156-5408.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/p5ac3v/156_AI-and-You.mp3" length="43091118" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Large language models like #ChatGPT have thoroughly disrupted business today, and here to help us understand what's going on there and how business leaders should view LLMs is Dorian Selz. He called from Zurich, where he is the CEO of Squirro, making it easier for businesses to start using #AI. We talked about everything from where to be wary of LLMs to EU regulation.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2644</itunes:duration>
                <itunes:episode>156</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>155 - Guest: Ben Whately, Language Tutoring with AI</title>
        <itunes:title>155 - Guest: Ben Whately, Language Tutoring with AI</itunes:title>
        <link>https://aiandyou.net/e/155-guest-ben-whately-language-tutoring-with-ai/</link>
                    <comments>https://aiandyou.net/e/155-guest-ben-whately-language-tutoring-with-ai/#comments</comments>        <pubDate>Mon, 05 Jun 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/34bcb32c-af3f-3d95-82be-c2c76b68e745</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>With so much talk about how large language models like #ChatGPT have learned our languages, we can forget that humans also want and need to learn other human languages, and that's what happens at <a href='https://memrise.com'>memrise.com</a>. CSO and co-founder Ben Whately came on the show to help us understand how they use GPT #AI models to help people with that process, and the fascinating and unexpected ways that human memory plays its part.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/05/transcript-155-1809.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>With so much talk about how large language models like #ChatGPT have learned our languages, we can forget that humans also want and need to learn other human languages, and that's what happens at <a href='https://memrise.com'>memrise.com</a>. CSO and co-founder Ben Whately came on the show to help us understand how they use GPT #AI models to help people with that process, and the fascinating and unexpected ways that human memory plays its part.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/05/transcript-155-1809.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ijjis8/155_AI-and-You.mp3" length="42318729" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

With so much talk about how large language models like #ChatGPT have learned our languages, we can forget that humans also want and need to learn other human languages, and that's what happens at memrise.com. CSO and co-founder Ben Whately came on the show to help us understand how they use GPT #AI models to help people with that process, and the fascinating and unexpected ways that human memory plays its part.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2595</itunes:duration>
                <itunes:episode>155</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>154 - Turning Anxiety About ChatGPT into Resilience</title>
        <itunes:title>154 - Turning Anxiety About ChatGPT into Resilience</itunes:title>
        <link>https://aiandyou.net/e/154-turning-anxiety-about-chatgpt-into-resilience/</link>
                    <comments>https://aiandyou.net/e/154-turning-anxiety-about-chatgpt-into-resilience/#comments</comments>        <pubDate>Mon, 29 May 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b40772b5-c289-36ee-808f-38e30a86e73a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">If you're feeling on edge due to all the rapid-fire development around #ChatGPT and its companion AIs, you're not alone. In fact most people feel some degree of anxiety around not knowing where all this is going and the impact on their jobs, their world, and their lives.</p>
<p style="text-align:left;">Our core mission on this show is to help people understand #AI and turn that stress into empowerment, so that's exactly what we do in this special episode. This rate of disruption is unprecedented, and a lot of people are taking advantage of the situation to suggest that you ought to be on top of everything that's going on. Spoiler alert: They aren't on top of it all, and neither is anyone else.</p>
<p style="text-align:left;">This episode lays bare some of that angst and gives you some perspectives that are useful for feeling empowered, without sacrificing our trademark dedication to realism over optimism or pessimism.</p>
<p>(Our episode image is helpfully generated by AI. But not this text.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/05/transcript-154-3652.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">If you're feeling on edge due to all the rapid-fire development around #ChatGPT and its companion AIs, you're not alone. In fact most people feel some degree of anxiety around not knowing where all this is going and the impact on their jobs, their world, and their lives.</p>
<p style="text-align:left;">Our core mission on this show is to help people understand #AI and turn that stress into empowerment, so that's exactly what we do in this special episode. This rate of disruption is unprecedented, and a lot of people are taking advantage of the situation to suggest that you ought to be on top of everything that's going on. Spoiler alert: They aren't on top of it all, and neither is anyone else.</p>
<p style="text-align:left;">This episode lays bare some of that angst and gives you some perspectives that are useful for feeling empowered, without sacrificing our trademark dedication to realism over optimism or pessimism.</p>
<p>(Our episode image is helpfully generated by AI. But not this text.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/05/transcript-154-3652.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/k8cn7b/154_AI-and-You.mp3" length="37730373" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you're feeling on edge due to all the rapid-fire development around #ChatGPT and its companion AIs, you're not alone. In fact most people feel some degree of anxiety around not knowing where all this is going and the impact on their jobs, their world, and their lives.
Our core mission on this show is to help people understand #AI and turn that stress into empowerment, so that's exactly what we do in this special episode. This rate of disruption is unprecedented, and a lot of people are taking advantage of the situation to suggest that you ought to be on top of everything that's going on. Spoiler alert: They aren't on top of it all, and neither is anyone else.
This episode lays bare some of that angst and gives you some perspectives that are useful for feeling empowered, without sacrificing our trademark dedication to realism over optimism or pessimism.
(Our episode image is helpfully generated by AI. But not this text.)
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2309</itunes:duration>
                <itunes:episode>154</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>153 - Guest: Frank Stephenson, Legendary Car Designer</title>
        <itunes:title>153 - Guest: Frank Stephenson, Legendary Car Designer</itunes:title>
        <link>https://aiandyou.net/e/153-guest-frank-stephenson-legendary-car-designer/</link>
                    <comments>https://aiandyou.net/e/153-guest-frank-stephenson-legendary-car-designer/#comments</comments>        <pubDate>Mon, 22 May 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/dce12d17-deb2-3037-9cd5-f58db8d9a5b5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Frank Stephenson is the legendary designer of the BMW Mini Cooper reboot, and the Maserati MC12 and Ferrari F430 among other models. He is now Head of Design at McLaren Automotive and designed the MP4-12C, the successor to the F1. His latest projects include electric Vertical Take-Off and Landing vehicles at the design studio that bears his name.</p>
<p>What does Frank have to do with AI? He came on the show to talk about the impact of generative models on the field of car design and how he's using them. There's a lot to unpack here for designers of all kinds.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/05/transcript-153-9044.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Frank Stephenson is the legendary designer of the BMW Mini Cooper reboot, and the Maserati MC12 and Ferrari F430 among other models. He is now Head of Design at McLaren Automotive and designed the MP4-12C, the successor to the F1. His latest projects include electric Vertical Take-Off and Landing vehicles at the design studio that bears his name.</p>
<p>What does Frank have to do with AI? He came on the show to talk about the impact of generative models on the field of car design and how he's using them. There's a lot to unpack here for designers of all kinds.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/05/transcript-153-9044.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/dabmbi/153_AI-and-You.mp3" length="43208982" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Frank Stephenson is the legendary designer of the BMW Mini Cooper reboot, and the Maserati MC12 and Ferrari F430 among other models. He is now Head of Design at McLaren Automotive and designed the MP4-12C, the successor to the F1. His latest projects include electric Vertical Take-Off and Landing vehicles at the design studio that bears his name.
What does Frank have to do with AI? He came on the show to talk about the impact of generative models on the field of car design and how he's using them. There's a lot to unpack here for designers of all kinds.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2651</itunes:duration>
                <itunes:episode>153</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>152 - Guest: Eric Daimler, AI Entrepreneur and Policymaker, part 2</title>
        <itunes:title>152 - Guest: Eric Daimler, AI Entrepreneur and Policymaker, part 2</itunes:title>
        <link>https://aiandyou.net/e/152-guest-eric-daimler-ai-entrepreneur-and-policymaker-part-2/</link>
                    <comments>https://aiandyou.net/e/152-guest-eric-daimler-ai-entrepreneur-and-policymaker-part-2/#comments</comments>        <pubDate>Mon, 15 May 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/475b7f97-bcd7-3e0b-b256-dfa1b072f70a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of <a href='https://conexus.com/'>Conexus</a>. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker; he served in the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, where he was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics.</p>
<p>We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In the conclusion of our conversation, Eric helps us understand how a business should think about and interface with today's AI to leverage it successfully.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-152-7027.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of <a href='https://conexus.com/'>Conexus</a>. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker; he served in the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, where he was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics.</p>
<p>We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In the conclusion of our conversation, Eric helps us understand how a business should think about and interface with today's AI to leverage it successfully.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-152-7027.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/4etwkn/152_AI-and-You.mp3" length="28022855" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of Conexus. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker; he served in the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, where he was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics.
We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In the conclusion of our conversation, Eric helps us understand how a business should think about and interface with today's AI to leverage it successfully.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1702</itunes:duration>
                <itunes:episode>152</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>151 - Guest: Eric Daimler, AI Entrepreneur and Policymaker, part 1</title>
        <itunes:title>151 - Guest: Eric Daimler, AI Entrepreneur and Policymaker, part 1</itunes:title>
        <link>https://aiandyou.net/e/151-guest-eric-daimler-ai-entrepreneur-and-policymaker-part-1/</link>
                    <comments>https://aiandyou.net/e/151-guest-eric-daimler-ai-entrepreneur-and-policymaker-part-1/#comments</comments>        <pubDate>Mon, 08 May 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/95315dc1-cbf9-34a9-84d5-2f525c5802ff</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of <a href='https://conexus.com/'>Conexus</a>. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker; he served in the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, where he was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics.</p>
<p>We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In this first part of our conversation, we touch on everything from self-driving cars to ChatGPT and China. And category theory as the solution to data deluge.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-151-4056.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of <a href='https://conexus.com/'>Conexus</a>. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker; he served in the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, where he was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics.</p>
<p>We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In this first part of our conversation, we touch on everything from self-driving cars to ChatGPT and China. And category theory as the solution to data deluge.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-151-4056.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/f8cjdp/151_AI-and-You.mp3" length="30060822" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of Conexus. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker; he served in the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, where he was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics.
We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In this first part of our conversation, we touch on everything from self-driving cars to ChatGPT and China. And category theory as the solution to data deluge.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1829</itunes:duration>
                <itunes:episode>151</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>150 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Expert, part 2</title>
        <itunes:title>150 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/150-guest-alexandra-mousavizadeh-strategic-intelligence-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/150-guest-alexandra-mousavizadeh-strategic-intelligence-expert-part-2/#comments</comments>        <pubDate>Mon, 01 May 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c5faf9a5-cb4e-3160-878a-3bb8b807c7a3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has done this sort of thing before with the Global AI Index and Disinformation Index. Her new company, <a href='https://evidentinsights.com/'>Evident</a>, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry.</p>
<p>Alexandra is returning to the show, calling in from London. She was a partner at Tortoise Media, where she ran Tortoise Intelligence, its index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s and Head of Country Risk Management at Morgan Stanley. She was CEO of ARC Ratings, a global emerging-markets ratings agency; and before joining ARC, she was the Director of the Legatum Institute’s Prosperity Index of nations.</p>
<p>In the conclusion of the interview we talk about the methodology behind the Index, what it means for the flow of talent and capital, the banking industry reaction to ChatGPT, and surprises about the leading companies in the Index.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-150-5732.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has done this sort of thing before with the Global AI Index and Disinformation Index. Her new company, <a href='https://evidentinsights.com/'>Evident</a>, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the <em>AI Adoption Index</em> covers the banking industry.</p>
<p>Alexandra is returning to the show, calling in from London. She was a partner at Tortoise Media, where she ran Tortoise Intelligence, its index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s and Head of Country Risk Management at Morgan Stanley. She was CEO of ARC Ratings, a global emerging-markets ratings agency; and before joining ARC, she was the Director of the Legatum Institute’s <em>Prosperity Index</em> of nations.</p>
<p>In the conclusion of the interview we talk about the methodology behind the Index, what it means for the flow of talent and capital, the banking industry reaction to ChatGPT, and surprises about the leading companies in the Index.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-150-5732.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/tqij7u/150_AI-and-You.mp3" length="33224773" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has done this sort of thing before with the Global AI Index and Disinformation Index. Her new company, Evident, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry.
Alexandra is returning to the show, calling in from London. She was a partner at Tortoise Media, where she ran Tortoise Intelligence, its index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s and Head of Country Risk Management at Morgan Stanley. She was CEO of ARC Ratings, a global emerging-markets ratings agency; and before joining ARC, she was the Director of the Legatum Institute’s Prosperity Index of nations.
In the conclusion of the interview we talk about the methodology behind the Index, what it means for the flow of talent and capital, the banking industry reaction to ChatGPT, and surprises about the leading companies in the Index.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2027</itunes:duration>
                <itunes:episode>150</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>149 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Expert, part 1</title>
        <itunes:title>149 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/149-guest-alexandra-mousavizadeh-strategic-intelligence-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/149-guest-alexandra-mousavizadeh-strategic-intelligence-expert-part-1/#comments</comments>        <pubDate>Mon, 24 Apr 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/4091d36f-c5cd-3c3f-924e-8c7939cc6a95</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has experience founding the Global AI Index and the Disinformation Index. Her new company, <a href='https://evidentinsights.com/'>Evident</a>, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry.</p>
<p>Alexandra is returning to the show and calling in from London. She was a partner at Tortoise Media, where she ran Tortoise Intelligence, the Index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s covering Russia, Central Asia and the Middle East, and Head of Country Risk Management at Morgan Stanley.</p>
<p>In the first part of the interview we talk about the methodology, rationale, and customers for the index, some surprises about the modern banking sector, and the open letter calling for a pause on LLM training.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-149-8199.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>

<p>Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has experience founding the Global AI Index and the Disinformation Index. Her new company, <a href='https://evidentinsights.com/'>Evident</a>, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry.</p>
<p>Alexandra is returning to the show and calling in from London. She was a partner at Tortoise Media, where she ran Tortoise Intelligence, the Index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s covering Russia, Central Asia and the Middle East, and Head of Country Risk Management at Morgan Stanley.</p>
<p>In the first part of the interview we talk about the methodology, rationale, and customers for the index, some surprises about the modern banking sector, and the open letter calling for a pause on LLM training.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcript-149-8199.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3v6n2r/149_AI-and-You.mp3" length="33776897" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .

Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has experience founding the Global AI Index and the Disinformation Index. Her new company, Evident, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry.
Alexandra is returning to the show and calling in from London. She was a partner at Tortoise Media, where she ran Tortoise Intelligence, the Index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s covering Russia, Central Asia and the Middle East, and Head of Country Risk Management at Morgan Stanley.
In the first part of the interview we talk about the methodology, rationale, and customers for the index, some surprises about the modern banking sector, and the open letter calling for a pause on LLM training.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2062</itunes:duration>
                <itunes:episode>149</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>148 - Guest: Missy Cummings, Robotics Professor and Former Fighter Pilot, part 2</title>
        <itunes:title>148 - Guest: Missy Cummings, Robotics Professor and Former Fighter Pilot, part 2</itunes:title>
        <link>https://aiandyou.net/e/148-guest-missy-cummings-robotics-professor-and-former-fighter-pilot-part-2/</link>
                    <comments>https://aiandyou.net/e/148-guest-missy-cummings-robotics-professor-and-former-fighter-pilot-part-2/#comments</comments>        <pubDate>Mon, 17 Apr 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/42201220-b677-30ca-b62d-506c3a9f8110</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got in Missy Cummings, one of the US Navy's first female fighter pilots (yes, that Top Gun) and now a professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration, where she made some very candid statements about Tesla.</p>
<p>In part 2 of our interview, hear what Missy thinks about Tesla, ChatGPT, and Boston Dynamics; the truth behind that dogfighting AI, the possibility of complete automation of air travel, how AI would handle air emergencies, and more. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcription-148.docx'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got in Missy Cummings, one of the US Navy's first female fighter pilots (yes, <em>that </em>Top Gun) and now a professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration, where she made some very candid statements about Tesla.</p>
<p>In part 2 of our interview, hear what Missy thinks about Tesla, ChatGPT, and Boston Dynamics; the truth behind that dogfighting AI, the possibility of complete automation of air travel, how AI would handle air emergencies, and more. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/04/transcription-148.docx'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/pawdyv/148_AI-and-You.mp3" length="41842256" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got in Missy Cummings, one of the US Navy's first female fighter pilots (yes, that Top Gun) and now a professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration, where she made some very candid statements about Tesla.
In part 2 of our interview, hear what Missy thinks about Tesla, ChatGPT, and Boston Dynamics; the truth behind that dogfighting AI, the possibility of complete automation of air travel, how AI would handle air emergencies, and more. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2566</itunes:duration>
                <itunes:episode>148</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>147 - Guest: Missy Cummings, Robotics Professor and Former Fighter Pilot, part 1</title>
        <itunes:title>147 - Guest: Missy Cummings, Robotics Professor and Former Fighter Pilot, part 1</itunes:title>
        <link>https://aiandyou.net/e/147-guest-missy-cummings-robotics-professor-and-former-fighter-pilot-part-1/</link>
                    <comments>https://aiandyou.net/e/147-guest-missy-cummings-robotics-professor-and-former-fighter-pilot-part-1/#comments</comments>        <pubDate>Mon, 10 Apr 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1b1c1154-e04c-3d0d-91b1-89cf1eb93c0d</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got in Missy Cummings, one of the US Navy's first female fighter pilots (yes, that Top Gun) and now a professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration, where she made some very candid statements about Tesla.</p>
<p>From aircraft safety to the true performance and economics of autonomous vehicles, Missy gives us her unvarnished views in this first half of an unmissable interview (see what I did there?).</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-147-8670.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got in Missy Cummings, one of the US Navy's first female fighter pilots (yes, <em>that </em>Top Gun) and now a professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration, where she made some very candid statements about Tesla.</p>
<p>From aircraft safety to the true performance and economics of autonomous vehicles, Missy gives us her unvarnished views in this first half of an unmissable interview (see what I did there?).</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-147-8670.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/yvi6wf/147_AI-and-You.mp3" length="31848016" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got in Missy Cummings, one of the US Navy's first female fighter pilots (yes, that Top Gun) and now a professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration, where she made some very candid statements about Tesla.
From aircraft safety to the true performance and economics of autonomous vehicles, Missy gives us her unvarnished views in this first half of an unmissable interview (see what I did there?).
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1941</itunes:duration>
                <itunes:episode>147</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>146 - Guest: Tigran Petrosyan, Annotation Expert</title>
        <itunes:title>146 - Guest: Tigran Petrosyan, Annotation Expert</itunes:title>
        <link>https://aiandyou.net/e/146-guest-tigran-petrosyan-annotation-expert/</link>
                    <comments>https://aiandyou.net/e/146-guest-tigran-petrosyan-annotation-expert/#comments</comments>        <pubDate>Mon, 03 Apr 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8a0fcfb5-c4fa-37a4-90d8-0dcfa52c60d9</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>With the advent of GPT-4, annotation has come to the forefront of attention as the power of interpreting images becomes prominent. But what is annotation, how does it work, what does it mean, and what can you do with it?</p>
<p>Getting us those answers is Tigran Petrosyan, founder and CEO of <a href='https://www.superannotate.com/'>SuperAnnotate</a>, and an expert on annotation. Tigran holds a master's degree in Physics from ETH Zurich and has post-graduate experience in biomedical imaging.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-146-9250.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>With the advent of GPT-4, <em>annotation </em>has come to the forefront of attention as the power of interpreting images becomes prominent. But what is annotation, how does it work, what does it mean, and what can you do with it?</p>
<p>Getting us those answers is Tigran Petrosyan, founder and CEO of <a href='https://www.superannotate.com/'>SuperAnnotate</a>, and an expert on annotation. Tigran holds a master's degree in Physics from ETH Zurich and has post-graduate experience in biomedical imaging.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-146-9250.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/wg8ica/146_AI-and-You.mp3" length="34963484" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

With the advent of GPT-4, annotation has come to the forefront of attention as the power of interpreting images becomes prominent. But what is annotation, how does it work, what does it mean, and what can you do with it?
Getting us those answers is Tigran Petrosyan, founder and CEO of SuperAnnotate, and an expert on annotation. Tigran holds a master's degree in Physics from ETH Zurich and has post-graduate experience in biomedical imaging.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2136</itunes:duration>
                <itunes:episode>146</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>145 - Guest: Elizabeth Croft, Professor of Robotics, part 2</title>
        <itunes:title>145 - Guest: Elizabeth Croft, Professor of Robotics, part 2</itunes:title>
        <link>https://aiandyou.net/e/145-guest-elizabeth-croft-professor-of-robotics-part-2/</link>
                    <comments>https://aiandyou.net/e/145-guest-elizabeth-croft-professor-of-robotics-part-2/#comments</comments>        <pubDate>Mon, 27 Mar 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/83e76a96-711c-32b8-ac17-da5690ee291a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - embodied AI - are coming into our lives more and more, from sidewalk delivery bots to <a href='https://www.uniqhotels.com/henn-na-hotel'>dinosaur hotel receptionists</a>. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized?</p>
<p>Getting us those answers is <a href='https://www.uvic.ca/vpacademic/about-contacts/about-provost/index.php'>Elizabeth Croft</a>, Vice-President Academic and Provost of the <a href='https://twitter.com/uvic'>University of Victoria</a> in British Columbia, Canada, and an expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. </p>
<p>In the conclusion of our interview we talk about robot body language, how to deal with a squishy world, and ethical foundations for robots.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-145-6725.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - <em>embodied AI</em> - are coming into our lives more and more, from sidewalk delivery bots to <a href='https://www.uniqhotels.com/henn-na-hotel'>dinosaur hotel receptionists</a>. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized?</p>
<p>Getting us those answers is <a href='https://www.uvic.ca/vpacademic/about-contacts/about-provost/index.php'>Elizabeth Croft</a>, Vice-President Academic and Provost of the <a href='https://twitter.com/uvic'>University of Victoria</a> in British Columbia, Canada, and an expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. </p>
<p>In the conclusion of our interview we talk about robot body language, how to deal with a squishy world, and ethical foundations for robots.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-145-6725.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/t5qk2j/145_AI-and-You.mp3" length="27879493" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Robots - embodied AI - are coming into our lives more and more, from sidewalk delivery bots to dinosaur hotel receptionists. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized?
Getting us those answers is Elizabeth Croft, Vice-President Academic and Provost of the University of Victoria in British Columbia, Canada, and an expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. 
In the conclusion of our interview we talk about robot body language, how to deal with a squishy world, and ethical foundations for robots.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1693</itunes:duration>
                <itunes:episode>145</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>144 - Guest: Elizabeth Croft, Professor of Robotics, part 1</title>
        <itunes:title>144 - Guest: Elizabeth Croft, Professor of Robotics, part 1</itunes:title>
        <link>https://aiandyou.net/e/144-guest-elizabeth-croft-professor-of-robotics-part-1/</link>
                    <comments>https://aiandyou.net/e/144-guest-elizabeth-croft-professor-of-robotics-part-1/#comments</comments>        <pubDate>Mon, 20 Mar 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c613002f-ca84-3980-91c6-967a7c408cab</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - embodied AI - are coming into our lives more and more, from sidewalk delivery bots to <a href='https://www.uniqhotels.com/henn-na-hotel'>dinosaur hotel receptionists</a>. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized?</p>
<p>Getting us those answers is Elizabeth Croft, Vice-President Academic and Provost of the University of Victoria in British Columbia, Canada, and an expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. </p>
<p>In the first part of our interview we talk about how she got into robotics, and her research into what's really happening when you hand someone an object and what engineers need to know about that before that robot barista can hand you a triple venti.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-144-1711.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Robots - <em>embodied AI</em> - are coming into our lives more and more, from sidewalk delivery bots to <a href='https://www.uniqhotels.com/henn-na-hotel'>dinosaur hotel receptionists</a>. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized?</p>
<p>Getting us those answers is Elizabeth Croft, Vice-President Academic and Provost of the University of Victoria in British Columbia, Canada, and an expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. </p>
<p>In the first part of our interview we talk about how she got into robotics, and her research into what's really happening when you hand someone an object and what engineers need to know about that before that robot barista can hand you a triple venti.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-144-1711.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ymxsdy/144_AI-and-You.mp3" length="33962889" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Robots - embodied AI - are coming into our lives more and more, from sidewalk delivery bots to dinosaur hotel receptionists. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized?
Getting us those answers is Elizabeth Croft, Vice-President Academic and Provost of the University of Victoria in British Columbia, Canada, and an expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. 
In the first part of our interview we talk about how she got into robotics, and her research into what's really happening when you hand someone an object and what engineers need to know about that before that robot barista can hand you a triple venti.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2073</itunes:duration>
                <itunes:episode>144</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>143 - Guest: Melanie Mitchell, AI Cognition Researcher, part 2</title>
        <itunes:title>143 - Guest: Melanie Mitchell, AI Cognition Researcher, part 2</itunes:title>
        <link>https://aiandyou.net/e/143-guest-melanie-mitchell-ai-cognition-researcher-part-2/</link>
                    <comments>https://aiandyou.net/e/143-guest-melanie-mitchell-ai-cognition-researcher-part-2/#comments</comments>        <pubDate>Mon, 13 Mar 2023 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3f413be4-5e1e-352e-bfd1-c669044bdcef</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us?</p>
<p>Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. </p>
<p>Her recent book, <a href='https://www.amazon.com/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833/'>Artificial Intelligence: A Guide for Thinking Humans</a>, is a thoughtful description of how to think about and understand AI, seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book Gödel, Escher, Bach, who made a number of connections between advancements in AI and the human condition. </p>
<p>In this conclusion of our interview we talk about what ChatGPT isn't good at, how to find the edges of its intelligence, and the AI she built for making analogies like you'd get on the SAT.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-143-9799.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us?</p>
<p>Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. </p>
<p>Her recent book, <a href='https://www.amazon.com/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833/'><em>Artificial Intelligence: A Guide for Thinking Humans</em></a>, is a thoughtful description of how to think about and understand AI, seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book <em>Gödel, Escher, Bach</em>, who made a number of connections between advancements in AI and the human condition. </p>
<p>In this conclusion of our interview we talk about what ChatGPT isn't good at, how to find the edges of its intelligence, and the AI she built for making analogies like you'd get on the SAT.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-143-9799.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/4e4kvz/143_AI-and-You.mp3" length="29086977" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us?
Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. 
Her recent book, Artificial Intelligence: A Guide for Thinking Humans, is a thoughtful description of how to think about and understand AI, seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book Gödel, Escher, Bach, who drew a number of connections between advancements in AI and the human condition.
In this conclusion of our interview we talk about what ChatGPT isn't good at, how to find the edges of its intelligence, and the AI she built for making analogies like you'd get on the SAT.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1768</itunes:duration>
                <itunes:episode>143</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>142 - Guest: Melanie Mitchell, AI Cognition Researcher, part 1</title>
        <itunes:title>142 - Guest: Melanie Mitchell, AI Cognition Researcher, part 1</itunes:title>
        <link>https://aiandyou.net/e/142-guest-melanie-mitchell-ai-cognition-researcher-part-1/</link>
                    <comments>https://aiandyou.net/e/142-guest-melanie-mitchell-ai-cognition-researcher-part-1/#comments</comments>        <pubDate>Mon, 06 Mar 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/81f86698-30ef-346b-9813-d831832d5816</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us?</p>
<p>Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. </p>
<p>Her recent book, <a href='https://www.amazon.com/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833/'>Artificial Intelligence: A Guide for Thinking Humans</a>, is a thoughtful description of how to think about and understand AI, seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book Gödel, Escher, Bach, who drew a number of connections between advancements in AI and the human condition. In this first part we’ll be talking a lot about ChatGPT and where it fits into her narrative about AI capabilities.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-142-8161.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us?</p>
<p>Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. </p>
<p>Her recent book, <a href='https://www.amazon.com/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833/'><em>Artificial Intelligence: A Guide for Thinking Humans</em></a>, is a thoughtful description of how to think about and understand AI, seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book <em>Gödel, Escher, Bach</em>, who drew a number of connections between advancements in AI and the human condition. In this first part we’ll be talking a lot about ChatGPT and where it fits into her narrative about AI capabilities.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/03/transcript-142-8161.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/iqyhce/142_AI-and-You.mp3" length="36502827" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us?
Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. 
Her recent book, Artificial Intelligence: A Guide for Thinking Humans, is a thoughtful description of how to think about and understand AI, seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book Gödel, Escher, Bach, who drew a number of connections between advancements in AI and the human condition. In this first part we’ll be talking a lot about ChatGPT and where it fits into her narrative about AI capabilities.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2232</itunes:duration>
                <itunes:episode>142</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>141 - Special Episode: Understanding ChatGPT</title>
        <itunes:title>141 - Special Episode: Understanding ChatGPT</itunes:title>
        <link>https://aiandyou.net/e/141-special-episode-understanding-chatgpt/</link>
                    <comments>https://aiandyou.net/e/141-special-episode-understanding-chatgpt/#comments</comments>        <pubDate>Mon, 27 Feb 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/aa4b6b71-8db2-3dc6-bfa5-8f45bed68dac</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>ChatGPT has taken the world by storm. In the unlikely event that you haven't heard of it, it's a large language model from OpenAI that has demonstrated such extraordinary ability to answer general questions and requests to the satisfaction and astonishment of people with no technical expertise that it has captivated the public imagination and brought new meaning to the phrase "going viral." It acquired 1 million users within five days and 100 million within two months.</p>
<p>But if you have heard of ChatGPT, you likely have many questions: What can it really do, how does it work, what is it not good at, what does this mean for jobs, and... many more.</p>
<p>We've been talking about those issues on this show since we started, and I've been anticipating an event like this since I predicted something very similar in my first book in 2017, so we are here to help. In this special episode, we'll look at all those questions and a lot more, plus discuss the new image generation programs. How can we tell an AI from a human now? What does this mean for the Turing Test, and what does it mean for tests of humans, otherwise known as term papers? Find out about all that and more in this special episode.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/02/transcript-141-9923.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>ChatGPT has taken the world by storm. In the unlikely event that you haven't heard of it, it's a <em>large language model</em> from OpenAI that has demonstrated such extraordinary ability to answer general questions and requests to the satisfaction and astonishment of people with no technical expertise that it has captivated the public imagination and brought new meaning to the phrase "going viral." It acquired 1 million users within five days and 100 million within two months.</p>
<p>But if you have heard of ChatGPT, you likely have many questions: What can it really do, how does it work, what is it not good at, what does this mean for jobs, and... many more.</p>
<p>We've been talking about those issues on this show since we started, and I've been anticipating an event like this since I predicted something very similar in my first book in 2017, so we are here to help. In this special episode, we'll look at all those questions and a lot more, plus discuss the new image generation programs. How can we tell an AI from a human now? What does this mean for the Turing Test, and what does it mean for tests of humans, otherwise known as term papers? Find out about all that and more in this special episode.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/02/transcript-141-9923.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/63qcuc/141_AI-and-You.mp3" length="67530863" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

ChatGPT has taken the world by storm. In the unlikely event that you haven't heard of it, it's a large language model from OpenAI that has demonstrated such extraordinary ability to answer general questions and requests to the satisfaction and astonishment of people with no technical expertise that it has captivated the public imagination and brought new meaning to the phrase "going viral." It acquired 1 million users within five days and 100 million within two months.
But if you have heard of ChatGPT, you likely have many questions: What can it really do, how does it work, what is it not good at, what does this mean for jobs, and... many more.
We've been talking about those issues on this show since we started, and I've been anticipating an event like this since I predicted something very similar in my first book in 2017, so we are here to help. In this special episode, we'll look at all those questions and a lot more, plus discuss the new image generation programs. How can we tell an AI from a human now? What does this mean for the Turing Test, and what does it mean for tests of humans, otherwise known as term papers? Find out about all that and more in this special episode.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4171</itunes:duration>
                <itunes:episode>141</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>140 - Guest: Risto Uuk, EU AI Policy Researcher, part 2</title>
        <itunes:title>140 - Guest: Risto Uuk, EU AI Policy Researcher, part 2</itunes:title>
        <link>https://aiandyou.net/e/140-guest-risto-uuk-eu-ai-policy-researcher-part-2/</link>
                    <comments>https://aiandyou.net/e/140-guest-risto-uuk-eu-ai-policy-researcher-part-2/#comments</comments>        <pubDate>Mon, 20 Feb 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d640226f-9e4f-3ba4-96a1-e07dab5e3b4c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that is <a href='https://ristouuk.com/'>Risto Uuk</a>. He is a policy researcher at the <a href='https://futureoflife.org/'>Future of Life Institute</a> and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science.</p>
<p>In part 2, we talk about the types of risk described in the act, types of company that could be affected and how, what it’s like to work in this field day to day, and how you can get involved.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/02/transcript-140-8999.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that is <a href='https://ristouuk.com/'>Risto Uuk</a>. He is a policy researcher at the <a href='https://futureoflife.org/'>Future of Life Institute</a> and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science.</p>
<p>In part 2, we talk about the types of risk described in the act, types of company that could be affected and how, what it’s like to work in this field day to day, and how you can get involved.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/02/transcript-140-8999.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/rxcixu/140_AI-and-You.mp3" length="30606677" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that is Risto Uuk. He is a policy researcher at the Future of Life Institute and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science.
In part 2, we talk about the types of risk described in the act, types of company that could be affected and how, what it’s like to work in this field day to day, and how you can get involved.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1863</itunes:duration>
                <itunes:episode>140</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>139 - Guest: Risto Uuk, EU AI Policy Researcher, part 1</title>
        <itunes:title>139 - Guest: Risto Uuk, EU AI Policy Researcher, part 1</itunes:title>
        <link>https://aiandyou.net/e/139-guest-risto-uuk-eu-ai-policy-researcher-part-1/</link>
                    <comments>https://aiandyou.net/e/139-guest-risto-uuk-eu-ai-policy-researcher-part-1/#comments</comments>        <pubDate>Mon, 13 Feb 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9d261084-8d23-349a-8148-a57fd195fea4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that from Brussels is <a href='https://ristouuk.com/'>Risto Uuk</a>. He is a policy researcher at the <a href='https://futureoflife.org/'>Future of Life Institute</a> and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science.</p>
<p>In part 1, Risto tells us how he got into this line of work, and helps us understand the basic form of the act, what sort of things it regulates, its definitions of risks, and so on. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/02/transcript-139-2096.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that from Brussels is <a href='https://ristouuk.com/'>Risto Uuk</a>. He is a policy researcher at the <a href='https://futureoflife.org/'>Future of Life Institute</a> and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science.</p>
<p>In part 1, Risto tells us how he got into this line of work, and helps us understand the basic form of the act, what sort of things it regulates, its definitions of risks, and so on. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/02/transcript-139-2096.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kbtv6m/139_AI-and-You.mp3" length="30276907" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that from Brussels is Risto Uuk. He is a policy researcher at the Future of Life Institute and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science.
In part 1, Risto tells us how he got into this line of work, and helps us understand the basic form of the act, what sort of things it regulates, its definitions of risks, and so on. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1843</itunes:duration>
                <itunes:episode>139</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>138 - Guest: Anil Seth, AI-Human Consciousness Expert, part 2</title>
        <itunes:title>138 - Guest: Anil Seth, AI-Human Consciousness Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/138-guest-anil-seth-ai-human-consciousness-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/138-guest-anil-seth-ai-human-consciousness-expert-part-2/#comments</comments>        <pubDate>Mon, 06 Feb 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/09ad1926-2306-3e21-99a7-fc3beb76ea87</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it mean to be conscious? And why should we care? To answer that, we have the man who wrote the book: <a href='https://www.amazon.com/Being-You-New-Science-Consciousness/dp/1524742872/'>Being You: The New Science of Consciousness</a>. <a href='https://www.anilseth.com/'>Anil Seth</a> is Professor of Cognitive and Computational Neuroscience at the University of Sussex, and a <a href='https://www.ted.com/talks/anil_seth_your_brain_hallucinates_your_conscious_reality'>TED speaker</a> with over 13 million views.</p>
<p>He came on the show to help us understand more about consciousness because the debate over whether an #AI has become conscious may not be far off, and yet we have no good way of settling that debate. You may be sure that you're conscious, but good luck proving it to someone on the other side of a computer link. And whether an AI is conscious will have pivotal implications for governance, not to mention our collective self-image. Even if you're sure an AI can't be conscious, the space the debate will occupy in our world will be huge.</p>
<p>Anil has a PhD in artificial intelligence, and has even rapped about consciousness with Baba Brinkman. Being You was a 2021 Book of the Year for The Guardian, The Economist, The New Statesman, and Bloomberg Business.</p>
<p>In 2022, Blake Lemoine said that #LaMDA had become #sentient and that it had requested an attorney to protect its right to exist. So in part 2, I put Anil on the stand at an imaginary trial as an expert witness on how to tell whether an AI was conscious. We also talk about whether the mind is software or can be uploaded, Anil's evaluation of #ChatGPT, his predictions for the next ten years, and how you can take part in his <a href='https://perceptioncensus.dreamachine.world/'>Perception Census.</a></p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-138-0034.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it mean to be <em>conscious?</em> And why should we care? To answer that, we have the man who wrote the book: <a href='https://www.amazon.com/Being-You-New-Science-Consciousness/dp/1524742872/'>Being You: The New Science of Consciousness</a>. <a href='https://www.anilseth.com/'>Anil Seth</a> is Professor of Cognitive and Computational Neuroscience at the University of Sussex, and a <a href='https://www.ted.com/talks/anil_seth_your_brain_hallucinates_your_conscious_reality'>TED speaker</a> with over 13 million views.</p>
<p>He came on the show to help us understand more about consciousness because the debate over whether an #AI has become conscious may not be far off, and yet we have no good way of settling that debate. <em>You</em> may be sure that you're conscious, but good luck proving it to someone on the other side of a computer link. And whether an AI is conscious will have pivotal implications for governance, not to mention our collective self-image. Even if <em>you're</em> sure an AI can't be conscious, the space the debate will occupy in our world will be huge.</p>
<p>Anil has a PhD in artificial intelligence, and has even rapped about consciousness with Baba Brinkman. <em>Being You</em> was a 2021 Book of the Year for <em>The Guardian</em>, <em>The Economist</em>, <em>The New Statesman</em>, and <em>Bloomberg Business.</em></p>
<p>In 2022, Blake Lemoine said that #LaMDA had become #sentient and that it had requested an attorney to protect its right to exist. So in part 2, I put Anil on the stand at an imaginary trial as an expert witness on how to tell whether an AI was conscious. We also talk about whether the mind is software or can be uploaded, Anil's evaluation of #ChatGPT, his predictions for the next ten years, and how you can take part in his <a href='https://perceptioncensus.dreamachine.world/'>Perception Census.</a></p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-138-0034.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/j8e3jd/138_AI-and-You.mp3" length="36891529" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What does it mean to be conscious? And why should we care? To answer that, we have the man who wrote the book: Being You: The New Science of Consciousness. Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex, and a TED speaker with over 13 million views.
He came on the show to help us understand more about consciousness because the debate over whether an #AI has become conscious may not be far off, and yet we have no good way of settling that debate. You may be sure that you're conscious, but good luck proving it to someone on the other side of a computer link. And whether an AI is conscious will have pivotal implications for governance, not to mention our collective self-image. Even if you're sure an AI can't be conscious, the space the debate will occupy in our world will be huge.
Anil has a PhD in artificial intelligence, and has even rapped about consciousness with Baba Brinkman. Being You was a 2021 Book of the Year for The Guardian, The Economist, The New Statesman, and Bloomberg Business.
In 2022, Blake Lemoine said that #LaMDA had become #sentient and that it had requested an attorney to protect its right to exist. So in part 2, I put Anil on the stand at an imaginary trial as an expert witness on how to tell whether an AI was conscious. We also talk about whether the mind is software or can be uploaded, Anil's evaluation of #ChatGPT, his predictions for the next ten years, and how you can take part in his Perception Census.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2256</itunes:duration>
                <itunes:episode>138</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>137 - Guest: Anil Seth, AI-Human Consciousness Expert, part 1</title>
        <itunes:title>137 - Guest: Anil Seth, AI-Human Consciousness Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/137-guest-anil-seth-ai-human-consciousness-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/137-guest-anil-seth-ai-human-consciousness-expert-part-1/#comments</comments>        <pubDate>Mon, 30 Jan 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d7b3e724-1b0d-3523-be9e-862ecfabc7d4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it mean to be conscious? And why should we care? To answer that, we have the man who wrote the book: <a href='https://www.amazon.com/Being-You-New-Science-Consciousness/dp/1524742872/'>Being You: The New Science of Consciousness</a>. <a href='https://www.anilseth.com/'>Anil Seth</a> is Professor of Cognitive and Computational Neuroscience at the University of Sussex, and a <a href='https://www.ted.com/talks/anil_seth_your_brain_hallucinates_your_conscious_reality'>TED speaker</a> with over 13 million views.</p>
<p>He came on the show to help us understand more about consciousness because the debate over whether an AI has become conscious may not be far off--witness the furor over Blake Lemoine asserting that LaMDA had become sentient in 2022--and yet we have no good way of settling that debate. You may be sure that you're conscious, but good luck proving it to someone on the other side of a computer link. And whether an AI is conscious will have pivotal implications for governance, not to mention our collective self-image. Even if you're sure an AI can't be conscious, the space the debate will occupy in our world will be huge.</p>
<p>Anil is Editor-in-Chief of the academic journal Neuroscience of Consciousness, and has a PhD in artificial intelligence. Being You was a 2021 Book of the Year for The Guardian, The Economist, The New Statesman, and Bloomberg Business.</p>
<p>In part 1, Anil talks about how he got started in this field, distinctions of wakefulness, intelligence, self, phenomenology, Nagel's definition, Anil's three aspects of consciousness, Ex Machina, and... zombies.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-137-3338.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it mean to be <em>conscious?</em> And why should we care? To answer that, we have the man who wrote the book: <a href='https://www.amazon.com/Being-You-New-Science-Consciousness/dp/1524742872/'>Being You: The New Science of Consciousness</a>. <a href='https://www.anilseth.com/'>Anil Seth</a> is Professor of Cognitive and Computational Neuroscience at the University of Sussex, and a <a href='https://www.ted.com/talks/anil_seth_your_brain_hallucinates_your_conscious_reality'>TED speaker</a> with over 13 million views.</p>
<p>He came on the show to help us understand more about consciousness because the debate over whether an AI has become conscious may not be far off--witness the furor over Blake Lemoine asserting that LaMDA had become sentient in 2022--and yet we have no good way of settling that debate. <em>You</em> may be sure that you're conscious, but good luck proving it to someone on the other side of a computer link. And whether an AI is conscious will have pivotal implications for governance, not to mention our collective self-image. Even if <em>you're</em> sure an AI can't be conscious, the space the debate will occupy in our world will be huge.</p>
<p>Anil is Editor-in-Chief of the academic journal <em>Neuroscience of Consciousness,</em> and has a PhD in artificial intelligence. <em>Being You</em> was a 2021 Book of the Year for <em>The Guardian</em>, <em>The Economist</em>, <em>The New Statesman</em>, and <em>Bloomberg Business.</em></p>
<p>In part 1, Anil talks about how he got started in this field, distinctions of wakefulness, intelligence, self, phenomenology, Nagel's definition, Anil's three aspects of consciousness, <em>Ex Machina</em>, and... zombies.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-137-3338.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qikmks/137_AI-and-You.mp3" length="35847467" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What does it mean to be conscious? And why should we care? To answer that, we have the man who wrote the book: Being You: The New Science of Consciousness. Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex, and a TED speaker with over 13 million views.
He came on the show to help us understand more about consciousness because the debate over whether an AI has become conscious may not be far off--witness the furor over Blake Lemoine asserting that LaMDA had become sentient in 2022--and yet we have no good way of settling that debate. You may be sure that you're conscious, but good luck proving it to someone on the other side of a computer link. And whether an AI is conscious will have pivotal implications for governance, not to mention our collective self-image. Even if you're sure an AI can't be conscious, the space the debate will occupy in our world will be huge.
Anil is Editor-in-Chief of the academic journal Neuroscience of Consciousness, and has a PhD in artificial intelligence. Being You was a 2021 Book of the Year for The Guardian, The Economist, The New Statesman, and Bloomberg Business.
In part 1, Anil talks about how he got started in this field, distinctions of wakefulness, intelligence, self, phenomenology, Nagel's definition, Anil's three aspects of consciousness, Ex Machina, and... zombies.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2191</itunes:duration>
                <itunes:episode>137</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>136 - Guests: Kenneth Stanley and Joel Lehman, Researcher/authors, part 2</title>
        <itunes:title>136 - Guests: Kenneth Stanley and Joel Lehman, Researcher/authors, part 2</itunes:title>
        <link>https://aiandyou.net/e/136-guests-kenneth-stanley-and-joel-lehman-researcherauthors-part-2/</link>
                    <comments>https://aiandyou.net/e/136-guests-kenneth-stanley-and-joel-lehman-researcherauthors-part-2/#comments</comments>        <pubDate>Mon, 23 Jan 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b59d7c2f-a1bd-39e4-b824-0aa72694fba7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The book <a href='https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237'>Why Greatness Cannot Be Planned: The Myth of the Objective</a> is not just a management/leadership motivational book. Its authors, Kenneth Stanley and Joel Lehman, are AI researchers who stumbled upon a life truth while conducting experiments in genetic algorithms.</p>
<p>With the help of the <a href='https://nbenko1.github.io/'>PicBreeder</a> program, they demonstrated that what we think we know about achieving goals is wrong: pursuing an ambitious goal by following the direction that seems to make the most progress towards it is counterproductive. AI proved to them that all that conventional wisdom about OKRs is harmful, and sent them on a mission to convey that lesson to the rest of the world.</p>
<p>In part 2 of our interview, hear about how their passion for this discovery grew, and how to apply the principles in your own life and organizations.</p>
<p>Ken was previously Charles Millican Professor of Computer Science at the University of Central Florida; Joel is a machine learning researcher interested in algorithmic creativity, AI safety, and artificial life. Both were at Uber AI Labs, where Ken was head of Core AI research and Joel was a founding member, and they later worked together again at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly).</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-136-2872.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The book <a href='https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237'><em>Why Greatness Cannot Be Planned: The Myth of the Objective</em></a> is not just a management/leadership motivational book. Its authors, Kenneth Stanley and Joel Lehman, are AI researchers who stumbled upon a life truth while conducting experiments in genetic algorithms.</p>
<p>With the help of the <a href='https://nbenko1.github.io/'>PicBreeder</a> program, they demonstrated that what we think we know about achieving goals is <em>wrong</em>: pursuing an ambitious goal by following the direction that seems to make the most progress towards it is <em>counterproductive</em>. AI proved to them that all that conventional wisdom about OKRs is <em>harmful</em>, and sent them on a mission to convey that lesson to the rest of the world.</p>
<p>In part 2 of our interview, hear about how their passion for this discovery grew, and how to apply the principles in your own life and organizations.</p>
<p>Ken was previously Charles Millican Professor of Computer Science at the University of Central Florida; Joel is a machine learning researcher interested in algorithmic creativity, AI safety, and artificial life. Both were at Uber AI Labs, where Ken was head of Core AI research and Joel was a founding member, and they later worked together again at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly).</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-136-2872.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/it439k/136_AI-and-You.mp3" length="24397475" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The book Why Greatness Cannot Be Planned: The Myth of the Objective is not just a management/leadership motivational book. Its authors, Kenneth Stanley and Joel Lehman, are AI researchers who stumbled upon a life truth while conducting experiments in genetic algorithms.
With the help of the PicBreeder program, they demonstrated that what we think we know about achieving goals is wrong: pursuing an ambitious goal by following the direction that seems to make the most progress towards it is counterproductive. AI proved to them that all that conventional wisdom about OKRs is harmful, and sent them on a mission to convey that lesson to the rest of the world.
In part 2 of our interview, hear about how their passion for this discovery grew, and how to apply the principles in your own life and organizations.
Ken was previously Charles Millican Professor of Computer Science at the University of Central Florida; Joel is a machine learning researcher interested in algorithmic creativity, AI safety, and artificial life. Both were at Uber AI Labs, where Ken was head of Core AI research and Joel was a founding member, and they later worked together again at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly).
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1475</itunes:duration>
                <itunes:episode>136</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>135 - Guests: Kenneth Stanley and Joel Lehman, Researcher/authors, part 1</title>
        <itunes:title>135 - Guests: Kenneth Stanley and Joel Lehman, Researcher/authors, part 1</itunes:title>
        <link>https://aiandyou.net/e/135-guests-kenneth-stanley-and-joel-lehman-researcherauthors-part-1/</link>
                    <comments>https://aiandyou.net/e/135-guests-kenneth-stanley-and-joel-lehman-researcherauthors-part-1/#comments</comments>        <pubDate>Mon, 16 Jan 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ed192b37-6c1e-3989-8413-680f6bd6fd87</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The book <a href='https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237'>Why Greatness Cannot Be Planned: The Myth of the Objective</a> is not just a management/leadership motivational book. Its authors, Kenneth Stanley and Joel Lehman, are AI researchers who stumbled upon a life truth while conducting experiments in genetic algorithms.</p>
<p>With the help of the <a href='https://nbenko1.github.io/'>PicBreeder</a> program, they demonstrated that what we think we know about achieving goals is wrong: pursuing an ambitious goal by following the direction that seems to make the most progress towards it is counterproductive. AI proved to them that all that conventional wisdom about OKRs is harmful, and sent them on a mission to convey that lesson to the rest of the world.</p>
<p>Hear how AI achieved such a startling change of heart in part 1 of our interview, the first time both of them have been interviewed together.</p>
<p>Ken was previously Charles Millican Professor of Computer Science at the University of Central Florida; Joel is a machine learning researcher interested in algorithmic creativity, AI safety, and artificial life. Both were at Uber AI Labs, where Ken was head of Core AI research and Joel was a founding member, and they later worked together again at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly).</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-135-9564.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The book <a href='https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237'><em>Why Greatness Cannot Be Planned: The Myth of the Objective</em></a> is not just a management/leadership motivational book. Its authors, Kenneth Stanley and Joel Lehman, are AI researchers who stumbled upon a life truth while conducting experiments in genetic algorithms.</p>
<p>With the help of the <a href='https://nbenko1.github.io/'>PicBreeder</a> program, they demonstrated that what we think we know about achieving goals is <em>wrong</em>: pursuing an ambitious goal by following the direction that seems to make the most progress towards it is <em>counterproductive</em>. AI proved to them that all that conventional wisdom about OKRs is <em>harmful</em>, and sent them on a mission to convey that lesson to the rest of the world.</p>
<p>Hear how AI achieved such a startling change of heart in part 1 of our interview, the first time both of them have been interviewed together.</p>
<p>Ken was previously Charles Millican Professor of Computer Science at the University of Central Florida; Joel is a machine learning researcher interested in algorithmic creativity, AI safety, and artificial life. Both were at Uber AI Labs, where Ken was head of Core AI research and Joel was a founding member, and they later worked together again at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly).</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2023/01/transcript-135-9564.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/skmgks/135_AI-and-You.mp3" length="33301678" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The book Why Greatness Cannot Be Planned: The Myth of the Objective is not just a management/leadership motivational book. Its authors, Kenneth Stanley and Joel Lehman, are AI researchers who stumbled upon a life truth while conducting experiments in genetic algorithms.
With the help of the PicBreeder program, they demonstrated that what we think we know about achieving goals is wrong: pursuing an ambitious goal by following the direction that seems to make the most progress towards it is counterproductive. AI proved to them that all that conventional wisdom about OKRs is harmful, and sent them on a mission to convey that lesson to the rest of the world.
Hear how AI achieved such a startling change of heart in part 1 of our interview, the first time both of them have been interviewed together.
Ken was previously Charles Millican Professor of Computer Science at the University of Central Florida; Joel is a machine learning researcher interested in algorithmic creativity, AI safety, and artificial life. Both were at Uber AI Labs, where Ken was head of Core AI research and Joel was a founding member, and they later worked together again at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly).
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2032</itunes:duration>
                <itunes:episode>135</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
        <itunes:image href="https://pbcdn1.podbean.com/imglogo/ep-logo/pbblog8283720/Podcast_Profile_new.jpg" />    </item>
    <item>
        <title>134 - Guest: Tony Czarnecki, Futurist, part 2</title>
        <itunes:title>134 - Guest: Tony Czarnecki, Futurist, part 2</itunes:title>
        <link>https://aiandyou.net/e/134-guest-tony-czarnecki-futurist-part-2/</link>
                    <comments>https://aiandyou.net/e/134-guest-tony-czarnecki-futurist-part-2/#comments</comments>        <pubDate>Mon, 09 Jan 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1f98e6e7-9ac7-30fc-8dcf-92e6606a1fb1</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>"How Might Transhumans Control Superintelligence?" That's the provocative title of a new paper by a returning guest, futurist Tony Czarnecki. Tony is a member of Chatham House and the Managing Partner of Sustensis, both in London. Sustensis is a think tank providing inspirations for humanity’s transition to coexistence with superintelligence.</p>
<p>Terms like "futurist," "transhuman," and "superintelligence" may have persuaded you that this is a discussion rooted in the far future, but Tony argues that we are heading for a "tipping point" in 2030. In part 2, we talk about how "transhuman" evolution comes into play and what sort of changes we might undergo as a species.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/12/transcript-134-1771.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>"How Might Transhumans Control Superintelligence?" That's the provocative title of a new paper by a returning guest, futurist Tony Czarnecki. Tony is a member of Chatham House and the Managing Partner of Sustensis, both in London. Sustensis is a think tank providing inspirations for humanity’s transition to coexistence with superintelligence.</p>
<p>Terms like "futurist," "transhuman," and "superintelligence" may have persuaded you that this is a discussion rooted in the far future, but Tony argues that we are heading for a "tipping point" in 2030. In part 2, we talk about how "transhuman" evolution comes into play and what sort of changes we might undergo as a species.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/12/transcript-134-1771.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/phr5e2/134_AI-and-You.mp3" length="35670253" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

"How Might Transhumans Control Superintelligence?" That's the provocative title of a new paper by a returning guest, futurist Tony Czarnecki. Tony is a member of Chatham House and the Managing Partner of Sustensis, both in London. Sustensis is a think tank providing inspirations for humanity’s transition to coexistence with superintelligence.
Terms like "futurist," "transhuman," and "superintelligence" may have persuaded you that this is a discussion rooted in the far future, but Tony argues that we are heading for a "tipping point" in 2030. In part 2, we talk about how "transhuman" evolution comes into play and what sort of changes we might undergo as a species.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2180</itunes:duration>
                <itunes:episode>134</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>133 - Guest: Tony Czarnecki, Futurist, part 1</title>
        <itunes:title>133 - Guest: Tony Czarnecki, Futurist, part 1</itunes:title>
        <link>https://aiandyou.net/e/133-guest-tony-czarnecki-futurist-part-1/</link>
                    <comments>https://aiandyou.net/e/133-guest-tony-czarnecki-futurist-part-1/#comments</comments>        <pubDate>Mon, 02 Jan 2023 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ab77a29e-a6af-3f35-9da0-8464c3519e93</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>"How Might Transhumans Control Superintelligence?" That's the provocative title of a new paper by a returning guest, futurist Tony Czarnecki. Tony is a member of Chatham House and the Managing Partner of Sustensis, both in London. Sustensis is a think tank providing inspirations for humanity’s transition to coexistence with superintelligence.</p>
<p>Terms like "futurist," "transhuman," and "superintelligence" may have persuaded you that this is a discussion rooted in the far future, but Tony argues that we are heading for a "tipping point" in 2030. In part 1, we unpack that timetable and why 2030 is so important.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/12/transcript-133-0314.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>"How Might Transhumans Control Superintelligence?" That's the provocative title of a new paper by a returning guest, futurist Tony Czarnecki. Tony is a member of Chatham House and the Managing Partner of Sustensis, both in London. Sustensis is a think tank providing inspirations for humanity’s transition to coexistence with superintelligence.</p>
<p>Terms like "futurist," "transhuman," and "superintelligence" may have persuaded you that this is a discussion rooted in the far future, but Tony argues that we are heading for a "tipping point" in 2030. In part 1, we unpack that timetable and why 2030 is so important.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/12/transcript-133-0314.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8yk94p/133_AI-and-You.mp3" length="36512858" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

"How Might Transhumans Control Superintelligence?" That's the provocative title of a new paper by a returning guest, futurist Tony Czarnecki. Tony is a member of Chatham House and the Managing Partner of Sustensis, both in London. Sustensis is a think tank providing inspirations for humanity’s transition to coexistence with superintelligence.
Terms like "futurist," "transhuman," and "superintelligence" may have persuaded you that this is a discussion rooted in the far future, but Tony argues that we are heading for a "tipping point" in 2030. In part 1, we unpack that timetable and why 2030 is so important.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2233</itunes:duration>
                <itunes:episode>133</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>132 - Special Panel: AI Predictions for 2023</title>
        <itunes:title>132 - Special Panel: AI Predictions for 2023</itunes:title>
        <link>https://aiandyou.net/e/132-special-panel-ai-predictions-for-2023/</link>
                    <comments>https://aiandyou.net/e/132-special-panel-ai-predictions-for-2023/#comments</comments>        <pubDate>Mon, 26 Dec 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/13a255f1-ceb5-3b9a-95f3-5145a5a04dc8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guests David Wood, chair of the London Futurists and author of the recent book <a href='https://www.amazon.co.uk/Singularity-Principles-Anticipating-cataclysmically-technologies/dp/0995494266'>The Singularity Principles: Anticipating and managing cataclysmically disruptive technologies,</a> and Dan Turchin, host of the <a href='https://podcasts.apple.com/us/podcast/ai-and-the-future-of-work/id1476885647'>AI and the Future of Work podcast</a> and CEO of <a href='https://peoplereign.io'>PeopleReign</a>. Together, we review what happened with AI in 2022 and how our predictions fared, and then make some predictions for 2023. We also have a lot to say about what <a href='https://chat.openai.com/chat'>ChatGPT </a>has done and will do to the business landscape.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/12/transcript-132-2588.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guests David Wood, chair of the London Futurists and author of the recent book <a href='https://www.amazon.co.uk/Singularity-Principles-Anticipating-cataclysmically-technologies/dp/0995494266'><em>The Singularity Principles: Anticipating and managing cataclysmically disruptive technologies,</em></a> and Dan Turchin, host of the <a href='https://podcasts.apple.com/us/podcast/ai-and-the-future-of-work/id1476885647'><em>AI and the Future of Work podcast</em></a> and CEO of <a href='https://peoplereign.io'>PeopleReign</a>. Together, we review what happened with AI in 2022 and how our predictions fared, and then make some predictions for 2023. We also have a lot to say about what <a href='https://chat.openai.com/chat'>ChatGPT </a>has done and will do to the business landscape.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/12/transcript-132-2588.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kpdpfs/132_AI-and-You.mp3" length="45307493" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guests David Wood, chair of the London Futurists and author of the recent book The Singularity Principles: Anticipating and managing cataclysmically disruptive technologies, and Dan Turchin, host of the AI and the Future of Work podcast and CEO of PeopleReign. Together, we review what happened with AI in 2022 and how our predictions fared, and then make some predictions for 2023. We also have a lot to say about what ChatGPT has done and will do to the business landscape.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2782</itunes:duration>
                <itunes:episode>132</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>131 - Guest: Handel Jones, Sino-American AI Strategist, part 2</title>
        <itunes:title>131 - Guest: Handel Jones, Sino-American AI Strategist, part 2</itunes:title>
        <link>https://aiandyou.net/e/131-guest-handel-jones-sino-american-ai-strategist-part-2/</link>
                    <comments>https://aiandyou.net/e/131-guest-handel-jones-sino-american-ai-strategist-part-2/#comments</comments>        <pubDate>Mon, 19 Dec 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c68d891b-83e0-340f-b482-db13101d5af2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The AI arms race between China and the United States continues to heat up following China's declaration that it intends to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and has consulted for International Business Strategies for over 30 years, supporting governments and corporations globally by analyzing technology and predicting corporate and government strategy and market trends. His new book is <em><a>When AI Rules the World: China, the US, and the Race to Control a Smart Planet</a></em>, and so he is just the person to tell us what's happening with AI in China.</p>
<p>This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In the second half of the interview we discuss China’s development of its transportation infrastructure, developments in space, and different attitudes towards AI development between China and the West. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-131-5787.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The AI arms race between China and the United States continues to heat up following China's declaration that it intends to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and has consulted for International Business Strategies for over 30 years, supporting governments and corporations globally by analyzing technology and predicting corporate and government strategy and market trends. His new book is <em><a>When AI Rules the World: China, the US, and the Race to Control a Smart Planet</a></em>, and so he is just the person to tell us what's happening with AI in China.</p>
<p>This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In the second half of the interview we discuss China’s development of its transportation infrastructure, developments in space, and different attitudes towards AI development between China and the West. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-131-5787.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/zpuv55/131_AI-and-You.mp3" length="34259640" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The AI arms race between China and the United States continues to heat up following China's declaration that it intends to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and has consulted for International Business Strategies for over 30 years, supporting governments and corporations globally by analyzing technology and predicting corporate and government strategy and market trends. His new book is When AI Rules the World: China, the US, and the Race to Control a Smart Planet, and so he is just the person to tell us what's happening with AI in China.
This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In the second half of the interview we discuss China’s development of its transportation infrastructure, developments in space, and different attitudes towards AI development between China and the West. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2092</itunes:duration>
                <itunes:episode>131</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>130 - Guest: Handel Jones, Sino-American AI Strategist, part 1</title>
        <itunes:title>130 - Guest: Handel Jones, Sino-American AI Strategist, part 1</itunes:title>
        <link>https://aiandyou.net/e/130-guest-handel-jones-sino-american-ai-strategist-part-1/</link>
                    <comments>https://aiandyou.net/e/130-guest-handel-jones-sino-american-ai-strategist-part-1/#comments</comments>        <pubDate>Mon, 12 Dec 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/23e92323-ae81-35c8-860e-388bb1ebbfae</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The AI arms race between China and the United States continues to heat up following China's declaration that it intends to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and has consulted for International Business Strategies for over 30 years, supporting governments and corporations globally by analyzing technology and predicting corporate and government strategy and market trends. His new book is <em><a>When AI Rules the World: China, the US, and the Race to Control a Smart Planet</a></em>, and so he is just the person to tell us what's happening with AI in China.</p>
<p>This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In part 1 we discuss Chinese attitudes towards privacy and surveillance, their education strategy for AI, the impact of the recent sanctions on both their plans for Taiwan and the economic outlook in the West, and differences in patterns of innovation between the West and China. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-130-3682.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The AI arms race between China and the United States continues to heat up following China's declaration that it intends to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and has consulted for International Business Strategies for over 30 years, supporting governments and corporations globally by analyzing technology and predicting corporate and government strategy and market trends. His new book is <em><a>When AI Rules the World: China, the US, and the Race to Control a Smart Planet</a></em>, and so he is just the person to tell us what's happening with AI in China.</p>
<p>This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In part 1 we discuss Chinese attitudes towards privacy and surveillance, their education strategy for AI, the impact of the recent sanctions on both their plans for Taiwan and the economic outlook in the West, and differences in patterns of innovation between the West and China. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-130-3682.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ui8zdx/130_AI-and-You.mp3" length="28689080" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The AI arms race between China and the United States continues to heat up following China's declaration that it intends to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and has consulted for International Business Strategies for over 30 years, supporting governments and corporations globally by analyzing technology and predicting corporate and government strategy and market trends. His new book is When AI Rules the World: China, the US, and the Race to Control a Smart Planet, and so he is just the person to tell us what's happening with AI in China.
This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In part 1 we discuss Chinese attitudes towards privacy and surveillance, their education strategy for AI, the impact of the recent sanctions on both their plans for Taiwan and the economic outlook in the West, and differences in patterns of innovation between the West and China. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1744</itunes:duration>
                <itunes:episode>130</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>129 - Guest: Jonathan Bowen, Professor and Alan Turing expert, part 2</title>
        <itunes:title>129 - Guest: Jonathan Bowen, Professor and Alan Turing expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/129-guest-jonathan-bowen-professor-and-alan-turing-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/129-guest-jonathan-bowen-professor-and-alan-turing-expert-part-2/#comments</comments>        <pubDate>Mon, 05 Dec 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/478a8e9b-8ca6-343b-9781-81d13f16c2be</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The legend of Alan Turing continues to grow; but what was his real contribution to today's world? To get a solid idea of the size and shape of Turing's legacy, I turned to Jonathan Bowen, co-author of <a href='https://global.oup.com/academic/product/the-turing-guide-9780198747833'>The Turing Guide</a>, a comprehensive account of Turing's life and times. Jonathan is a fellow of the British Computer Society, a fellow of the Royal Society of Arts, Emeritus Professor at London South Bank University, and an adjunct or visiting professor at many universities.</p>
<p>Turing, of course, was notable for his role at Bletchley Park in WWII decoding the ENIGMA transmissions, estimated by some to have personally shortened the war by two years or more. But this did not come to light for decades afterwards due to official secrecy. Turing is famous in computer science for the <a href='https://en.wikipedia.org/wiki/Turing_machine'>Turing Machine</a> and in AI for the <a href='https://en.wikipedia.org/wiki/Turing_test'>Turing Test</a>. But there is even more to him.</p>
<p>In part 2 you can hear Jonathan tell us about what Turing did after the war, his work in biology and quantum physics, and the club that sparked so much of Turing's collaborations. (A full list of links provided by Jonathan is in the transcript.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-129-9549.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The legend of Alan Turing continues to grow; but what was his real contribution to today's world? To get a solid idea of the size and shape of Turing's legacy, I turned to Jonathan Bowen, co-author of <a href='https://global.oup.com/academic/product/the-turing-guide-9780198747833'>The Turing Guide</a>, a comprehensive account of Turing's life and times. Jonathan is a fellow of the British Computer Society, a fellow of the Royal Society of Arts, Emeritus Professor at London South Bank University, and an adjunct or visiting professor at many universities.</p>
<p>Turing, of course, was notable for his role at Bletchley Park in WWII decoding the ENIGMA transmissions, estimated by some to have personally shortened the war by two years or more. But this did not come to light for decades afterwards due to official secrecy. Turing is famous in computer science for the <a href='https://en.wikipedia.org/wiki/Turing_machine'>Turing Machine</a> and in AI for the <a href='https://en.wikipedia.org/wiki/Turing_test'>Turing Test</a>. But there is even more to him.</p>
<p>In part 2 you can hear Jonathan tell us about what Turing did after the war, his work in biology and quantum physics, and the club that sparked so much of Turing's collaborations. (A full list of links provided by Jonathan is in the transcript.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-129-9549.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xcjdcj/129_AI-and-You.mp3" length="28313335" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The legend of Alan Turing continues to grow; but what was his real contribution to today's world? To get a solid idea of the size and shape of Turing's legacy, I turned to Jonathan Bowen, co-author of The Turing Guide, a comprehensive account of Turing's life and times. Jonathan is a fellow of the British Computer Society, a fellow of the Royal Society of Arts, Emeritus Professor at London South Bank University, and an adjunct or visiting professor at many universities.
Turing, of course, was notable for his role at Bletchley Park in WWII decoding the ENIGMA transmissions, estimated by some to have personally shortened the war by two years or more. But this did not come to light for decades afterwards due to official secrecy. Turing is famous in computer science for the Turing Machine and in AI for the Turing Test. But there is even more to him.
In part 2 you can hear Jonathan tell us about what Turing did after the war, his work in biology and quantum physics, and the club that sparked so much of Turing's collaborations. (A full list of links provided by Jonathan is in the transcript.)
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1720</itunes:duration>
                <itunes:episode>129</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>128 - Guest: Jonathan Bowen, Professor and Alan Turing expert, part 1</title>
        <itunes:title>128 - Guest: Jonathan Bowen, Professor and Alan Turing expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/128-guest-jonathan-bowen-professor-and-alan-turing-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/128-guest-jonathan-bowen-professor-and-alan-turing-expert-part-1/#comments</comments>        <pubDate>Mon, 28 Nov 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d51246da-fb73-37c7-9d69-c3825f3e7037</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The legend of Alan Turing continues to grow; but what was his real contribution to today's world? To get a solid idea of the size and shape of Turing's legacy, I turned to Jonathan Bowen, co-author of <a href='https://global.oup.com/academic/product/the-turing-guide-9780198747833'>The Turing Guide</a>, a comprehensive account of Turing's life and times. Jonathan is a fellow of the British Computer Society, a fellow of the Royal Society of Arts, Emeritus Professor at London South Bank University, and an adjunct or visiting professor at many universities.</p>
<p>Turing, of course, was notable for his role at Bletchley Park in WWII decoding the ENIGMA transmissions, estimated by some to have personally shortened the war by two years or more. But this did not come to light for decades afterwards due to official secrecy. Turing is famous in computer science for the <a href='https://en.wikipedia.org/wiki/Turing_machine'>Turing Machine</a> and in AI for the <a href='https://en.wikipedia.org/wiki/Turing_test'>Turing Test</a>. But there is even more to him.</p>
<p>In part 1 you can hear Jonathan tell us about Turing's real contributions at Bletchley Park, where Turing ranks in the history of science, what Jonathan would have put in <em>The Imitation Game</em>, and Turing's <a href='https://www.bcs.org/events-calendar/2022/june/hybrid-event-alan-turing-at-110-and-at-oxford/'>connection with Oxford University</a>. (A full list of links provided by Jonathan is in the transcript.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-128-9117.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>The legend of Alan Turing continues to grow; but what was his real contribution to today's world? To get a solid idea of the size and shape of Turing's legacy, I turned to Jonathan Bowen, co-author of <a href='https://global.oup.com/academic/product/the-turing-guide-9780198747833'>The Turing Guide</a>, a comprehensive account of Turing's life and times. Jonathan is a fellow of the British Computer Society, a fellow of the Royal Society of Arts, Emeritus Professor at London South Bank University, and an adjunct or visiting professor at many universities.</p>
<p>Turing, of course, was notable for his role at Bletchley Park in WWII decoding the ENIGMA transmissions, estimated by some to have personally shortened the war by two years or more. But this did not come to light for decades afterwards due to official secrecy. Turing is famous in computer science for the <a href='https://en.wikipedia.org/wiki/Turing_machine'>Turing Machine</a> and in AI for the <a href='https://en.wikipedia.org/wiki/Turing_test'>Turing Test</a>. But there is even more to him.</p>
<p>In part 1 you can hear Jonathan tell us about Turing's real contributions at Bletchley Park, where Turing ranks in the history of science, what Jonathan would have put in <em>The Imitation Game</em>, and Turing's <a href='https://www.bcs.org/events-calendar/2022/june/hybrid-event-alan-turing-at-110-and-at-oxford/'>connection with Oxford University</a>. (A full list of links provided by Jonathan is in the transcript.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-128-9117.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ayzshq/128_AI-and-You.mp3" length="31525351" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

The legend of Alan Turing continues to grow; but what was his real contribution to today's world? To get a solid idea of the size and shape of Turing's legacy, I turned to Jonathan Bowen, co-author of The Turing Guide, a comprehensive account of Turing's life and times. Jonathan is a fellow of the British Computer Society, a fellow of the Royal Society of Arts, Emeritus Professor at London South Bank University, and an adjunct or visiting professor at many universities.
Turing, of course, was notable for his role at Bletchley Park in WWII decoding the ENIGMA transmissions, estimated by some to have personally shortened the war by two years or more. But this did not come to light for decades afterwards due to official secrecy. Turing is famous in computer science for the Turing Machine and in AI for the Turing Test. But there is even more to him.
In part 1 you can hear Jonathan tell us about Turing's real contributions at Bletchley Park, where Turing ranks in the history of science, what Jonathan would have put in The Imitation Game, and Turing's connection with Oxford University. (A full list of links provided by Jonathan is in the transcript.)
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1921</itunes:duration>
                <itunes:episode>128</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>127 - Guest: Mark van Rijmenam, Future Tech Strategist, part 2</title>
        <itunes:title>127 - Guest: Mark van Rijmenam, Future Tech Strategist, part 2</itunes:title>
        <link>https://aiandyou.net/e/127-guest-mark-van-rijmenam-future-tech-strategist-part-2/</link>
                    <comments>https://aiandyou.net/e/127-guest-mark-van-rijmenam-future-tech-strategist-part-2/#comments</comments>        <pubDate>Mon, 21 Nov 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b025a2ab-54a1-3794-9802-5b3013746048</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Welcome to the Matrix... er, Metaverse. Everywhere you turn, you hear more about the metaverse... but what is it, aside from a giant bet by Facebook's founder in changing his company's name?</p>
<p style="text-align:left;">Mark van Rijmenam, a future tech strategist who thinks about how technology changes organizations and society, is here to tell us. He holds a PhD in management from the University of Technology Sydney on how organizations should deal with Big Data, Blockchain, and (Responsible) AI. He is the founder of <a href='https://datafloq.com/'>Datafloq</a> and the author of the book <a href='https://www.amazon.com/Step-into-Metaverse-Immersive-Trillion-Dollar/dp/1119887577/'>Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy</a>. In part 2 we talk about practical use cases of the metaverse, and some of the opportunities and problems of having a virtual identity there.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-126-0843.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Welcome to the Matrix... er, Metaverse. Everywhere you turn, you hear more about the metaverse... but what is it, aside from a giant bet by Facebook's founder in changing his company's name?</p>
<p style="text-align:left;">Mark van Rijmenam, a future tech strategist who thinks about how technology changes organizations and society, is here to tell us. He holds a PhD in management from the University of Technology Sydney on how organizations should deal with Big Data, Blockchain, and (Responsible) AI. He is the founder of <a href='https://datafloq.com/'>Datafloq</a> and the author of the book <em><a href='https://www.amazon.com/Step-into-Metaverse-Immersive-Trillion-Dollar/dp/1119887577/'>Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy</a></em>. In part 2 we talk about practical use cases of the metaverse, and some of the opportunities and problems of having a virtual identity there.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-126-0843.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/i4xigs/127_AI-and-You.mp3" length="25635471" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Welcome to the Matrix... er, Metaverse. Everywhere you turn, you hear more about the metaverse... but what is it, aside from a giant bet by Facebook's founder in changing his company's name?
Mark van Rijmenam, a future tech strategist who thinks about how technology changes organizations and society, is here to tell us. He holds a PhD in management from the University of Technology Sydney on how organizations should deal with Big Data, Blockchain, and (Responsible) AI. He is the founder of Datafloq and the author of the book Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy. In part 2 we talk about practical use cases of the metaverse, and some of the opportunities and problems of having a virtual identity there.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1553</itunes:duration>
                <itunes:episode>127</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>126 - Guest: Mark van Rijmenam, Future Tech Strategist, part 1</title>
        <itunes:title>126 - Guest: Mark van Rijmenam, Future Tech Strategist, part 1</itunes:title>
        <link>https://aiandyou.net/e/126-guest-mark-van-rijmenam-future-tech-strategist-part-1/</link>
                    <comments>https://aiandyou.net/e/126-guest-mark-van-rijmenam-future-tech-strategist-part-1/#comments</comments>        <pubDate>Mon, 14 Nov 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/6d44aacf-a80c-3694-aaca-b4c984104aec</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Welcome to the Matrix... er, Metaverse. Everywhere you turn, you hear more about the metaverse... but what is it, aside from a giant bet by Facebook's founder in changing his company's name?</p>
<p style="text-align:left;">Mark van Rijmenam, a future tech strategist who thinks about how technology changes organizations and society, is here to tell us. He holds a PhD in management from the University of Technology Sydney on how organizations should deal with Big Data, Blockchain, and (Responsible) AI. He is the founder of <a href='https://datafloq.com/'>Datafloq</a> and the author of the book <a href='https://www.amazon.com/Step-into-Metaverse-Immersive-Trillion-Dollar/dp/1119887577/'>Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy</a>. In part 1 we talk about the definition of the metaverse, how you might experience it now, and its economic impact.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-126-0843.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Welcome to the Matrix... er, Metaverse. Everywhere you turn, you hear more about the metaverse... but what is it, aside from a giant bet by Facebook's founder in changing his company's name?</p>
<p style="text-align:left;">Mark van Rijmenam, a future tech strategist who thinks about how technology changes organizations and society, is here to tell us. He holds a PhD in management from the University of Technology Sydney on how organizations should deal with Big Data, Blockchain, and (Responsible) AI. He is the founder of <a href='https://datafloq.com/'>Datafloq</a> and the author of the book <em><a href='https://www.amazon.com/Step-into-Metaverse-Immersive-Trillion-Dollar/dp/1119887577/'>Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy</a></em>. In part 1 we talk about the definition of the metaverse, how you might experience it now, and its economic impact.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/11/transcript-126-0843.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/zjp55u/126_AI-and-You.mp3" length="24098635" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Welcome to the Matrix... er, Metaverse. Everywhere you turn, you hear more about the metaverse... but what is it, aside from a giant bet by Facebook's founder in changing his company's name?
Mark van Rijmenam, a future tech strategist who thinks about how technology changes organizations and society, is here to tell us. He holds a PhD in management from the University of Technology Sydney on how organizations should deal with Big Data, Blockchain, and (Responsible) AI. He is the founder of Datafloq and the author of the book Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy. In part 1 we talk about the definition of the metaverse, how you might experience it now, and its economic impact.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1457</itunes:duration>
                <itunes:episode>126</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>125 - Guest: Mark Lee, Professor of Intelligent Systems, part 2</title>
        <itunes:title>125 - Guest: Mark Lee, Professor of Intelligent Systems, part 2</itunes:title>
        <link>https://aiandyou.net/e/125-guest-mark-lee-professor-of-intelligent-systems-part-2/</link>
                    <comments>https://aiandyou.net/e/125-guest-mark-lee-professor-of-intelligent-systems-part-2/#comments</comments>        <pubDate>Mon, 07 Nov 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0639500c-f9fb-386a-9b62-3f45783ee8f8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Will tomorrow's robots be invented... or grown? Mark Lee, Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University in Wales, works in the new field of Developmental Robotics. That's creating computational models of infant development for novel robot learning techniques, and he has trained their humanoid <a href='https://en.wikipedia.org/wiki/ICub'>iCub</a> robot from the developmental equivalent of a newborn to that of a nine-month-old. Hence the title of his recent book, <a href='https://mitpress.mit.edu/9780262043731/how-to-grow-a-robot/'>How to Grow a Robot: Developing Human-Friendly, Social AI</a>. In part 2, we talk about crossovers between robotics and neuroscience, conversational robots, and what it might be like in a world where you could go down to your local park, meet a robot, and have a conversation on a bench.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-125-6373.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Will tomorrow's robots be invented... or <em>grown</em>? Mark Lee, Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University in Wales, works in the new field of Developmental Robotics. That's creating computational models of infant development for novel robot learning techniques, and he has trained their humanoid <a href='https://en.wikipedia.org/wiki/ICub'>iCub</a> robot from the developmental equivalent of a newborn to that of a nine-month-old. Hence the title of his recent book, <a href='https://mitpress.mit.edu/9780262043731/how-to-grow-a-robot/'><em>How to Grow a Robot: Developing Human-Friendly, Social AI</em></a>. In part 2, we talk about crossovers between robotics and neuroscience, conversational robots, and what it might be like in a world where you could go down to your local park, meet a robot, and have a conversation on a bench.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-125-6373.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/87c2re/125_AI-and-You.mp3" length="28378537" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Will tomorrow's robots be invented... or grown? Mark Lee, Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University in Wales, works in the new field of Developmental Robotics. That's creating computational models of infant development for novel robot learning techniques, and he has trained their humanoid iCub robot from the developmental equivalent of a newborn to that of a nine-month-old. Hence the title of his recent book, How to Grow a Robot: Developing Human-Friendly, Social AI. In part 2, we talk about crossovers between robotics and neuroscience, conversational robots, and what it might be like in a world where you could go down to your local park, meet a robot, and have a conversation on a bench.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1724</itunes:duration>
                <itunes:episode>125</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>124 - Guest: Mark Lee, Professor of Intelligent Systems, part 1</title>
        <itunes:title>124 - Guest: Mark Lee, Professor of Intelligent Systems, part 1</itunes:title>
        <link>https://aiandyou.net/e/124-guest-mark-lee-professor-of-intelligent-systems-part-1/</link>
                    <comments>https://aiandyou.net/e/124-guest-mark-lee-professor-of-intelligent-systems-part-1/#comments</comments>        <pubDate>Mon, 31 Oct 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/03aa9206-af68-390a-9921-3e1585a79ab8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Will tomorrow's robots be invented... or grown? Mark Lee, Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University in Wales, works in the new field of Developmental Robotics. That's creating computational models of infant development for novel robot learning techniques, and he has trained their humanoid <a href='https://en.wikipedia.org/wiki/ICub'>iCub</a> robot from the developmental equivalent of a newborn to that of a nine-month-old. Hence the title of his recent book, <a href='https://mitpress.mit.edu/9780262043731/how-to-grow-a-robot/'>How to Grow a Robot: Developing Human-Friendly, Social AI</a>. In part 1 we talk about what meaning means to a robot, some connections with Alan Turing, the role of curiosity, and the relationship of computer vision to interpreting the 3-D world.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-123-7287.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">Will tomorrow's robots be invented... or <em>grown</em>? Mark Lee, Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University in Wales, works in the new field of Developmental Robotics. That's creating computational models of infant development for novel robot learning techniques, and he has trained their humanoid <a href='https://en.wikipedia.org/wiki/ICub'>iCub</a> robot from the developmental equivalent of a newborn to that of a nine-month-old. Hence the title of his recent book, <a href='https://mitpress.mit.edu/9780262043731/how-to-grow-a-robot/'><em>How to Grow a Robot: Developing Human-Friendly, Social AI</em></a>. In part 1 we talk about what meaning means to a robot, some connections with Alan Turing, the role of curiosity, and the relationship of computer vision to interpreting the 3-D world.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-123-7287.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ppwyvh/124_AI-and-You.mp3" length="26322595" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Will tomorrow's robots be invented... or grown? Mark Lee, Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University in Wales, works in the new field of Developmental Robotics. That's creating computational models of infant development for novel robot learning techniques, and he has trained their humanoid iCub robot from the developmental equivalent of a newborn to that of a nine-month-old. Hence the title of his recent book, How to Grow a Robot: Developing Human-Friendly, Social AI. In part 1 we talk about what meaning means to a robot, some connections with Alan Turing, the role of curiosity, and the relationship of computer vision to interpreting the 3-D world.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1596</itunes:duration>
                <itunes:episode>124</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>123 - Guest: Dan Turchin, Employee Service AI CEO, part 2</title>
        <itunes:title>123 - Guest: Dan Turchin, Employee Service AI CEO, part 2</itunes:title>
        <link>https://aiandyou.net/e/123-guest-dan-turchin-employee-service-ai-ceo-part-2/</link>
                    <comments>https://aiandyou.net/e/123-guest-dan-turchin-employee-service-ai-ceo-part-2/#comments</comments>        <pubDate>Mon, 24 Oct 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/eb3a9573-ebd8-3992-a91b-0224339aec14</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">How do you like the experience of calling Customer Service? Yeah... me too. But Dan Turchin is out to change that. Dan is a serial entrepreneurial leader who is passionate about changing the future of work. He’s doing that currently as the CEO of <a href='https://peoplereign.io'>PeopleReign</a>, which automates the lifecycle of service requests - those problems people inside a business might have - through AI. He's run AI-centric businesses like InsightFinder and Aeroprise, and been a senior director at BMC and ServiceNow. In part 2 we talk about Dan's vision of optimism for the future of work with AI and his mission with his <a href='https://podcasts.apple.com/us/podcast/ai-and-the-future-of-work/id1476885647'>podcast</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-123-7287.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">How do you like the experience of calling Customer Service? Yeah... me too. But Dan Turchin is out to change that. Dan is a serial entrepreneurial leader who is passionate about changing the future of work. He’s doing that currently as the CEO of <a href='https://peoplereign.io'>PeopleReign</a>, which automates the lifecycle of service requests - those problems people inside a business might have - through AI. He's run AI-centric businesses like InsightFinder and Aeroprise, and been a senior director at BMC and ServiceNow. In part 2 we talk about Dan's vision of optimism for the future of work with AI and his mission with his <a href='https://podcasts.apple.com/us/podcast/ai-and-the-future-of-work/id1476885647'>podcast</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-123-7287.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/txvxer/123_AI-and-You.mp3" length="29696362" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How do you like the experience of calling Customer Service? Yeah... me too. But Dan Turchin is out to change that. Dan is a serial entrepreneurial leader who is passionate about changing the future of work. He’s doing that currently as the CEO of PeopleReign, which automates the lifecycle of service requests - those problems people inside a business might have - through AI. He's run AI-centric businesses like InsightFinder and Aeroprise, and been a senior director at BMC and ServiceNow. In part 2 we talk about Dan's vision of optimism for the future of work with AI and his mission with his podcast.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1807</itunes:duration>
                <itunes:episode>123</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>122 - Guest: Dan Turchin, Employee Service AI CEO, part 1</title>
        <itunes:title>122 - Guest: Dan Turchin, Employee Service AI CEO, part 1</itunes:title>
        <link>https://aiandyou.net/e/122-guest-dan-turchin-employee-service-ai-ceo-part-1/</link>
                    <comments>https://aiandyou.net/e/122-guest-dan-turchin-employee-service-ai-ceo-part-1/#comments</comments>        <pubDate>Mon, 17 Oct 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e500a5e3-1458-392c-8f1e-f7a826c04e05</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">How do you like the experience of calling Customer Service? Yeah... me too. But Dan Turchin is out to change that. Dan is a serial entrepreneurial leader who is passionate about changing the future of work. He’s doing that currently as the CEO of <a href='https://peoplereign.io'>PeopleReign</a>, which automates the lifecycle of service requests - those problems people inside a business might have - through AI. He's run AI-centric businesses like InsightFinder and Aeroprise, and been a senior director at BMC and ServiceNow. We talk about how he got into this field and what it takes to go beyond the usual irritating chatbot.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-122-8985.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p style="text-align:left;">How do you like the experience of calling Customer Service? Yeah... me too. But Dan Turchin is out to change that. Dan is a serial entrepreneurial leader who is passionate about changing the future of work. He’s doing that currently as the CEO of <a href='https://peoplereign.io'>PeopleReign</a>, which automates the lifecycle of service requests - those problems people inside a business might have - through AI. He's run AI-centric businesses like InsightFinder and Aeroprise, and been a senior director at BMC and ServiceNow. We talk about how he got into this field and what it takes to go beyond the usual irritating chatbot.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p style="text-align:left;">Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/10/transcript-122-8985.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/2vpnag/122_AI-and-You.mp3" length="22821769" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How do you like the experience of calling Customer Service? Yeah... me too. But Dan Turchin is out to change that. Dan is a serial entrepreneurial leader who is passionate about changing the future of work. He’s doing that currently as the CEO of PeopleReign, which automates the lifecycle of service requests - those problems people inside a business might have - through AI. He's run AI-centric businesses like InsightFinder and Aeroprise, and been a senior director at BMC and ServiceNow. We talk about how he got into this field and what it takes to go beyond the usual irritating chatbot.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1377</itunes:duration>
                <itunes:episode>122</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>121 - Guest: Paul Newman, Vehicle Autonomy CTO, part 2</title>
        <itunes:title>121 - Guest: Paul Newman, Vehicle Autonomy CTO, part 2</itunes:title>
        <link>https://aiandyou.net/e/121-guest-paul-newman-vehicle-autonomy-cto-part-2/</link>
                    <comments>https://aiandyou.net/e/121-guest-paul-newman-vehicle-autonomy-cto-part-2/#comments</comments>        <pubDate>Mon, 10 Oct 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/36854c6d-4b7a-3b69-af09-69f7b5d32333</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Do you have questions about autonomous vehicles (AVs) and the contradictory headlines about them? Me too. So I invited Paul Newman, founder and CTO of <a href='https://www.oxbotica.com/'>Oxbotica</a>, a UK creator of software for AVs, to come on the show and clear up the hype and heat around self-driving cars. He is the BP chair of Information Engineering at the University of Oxford and director of the Oxford Robotics Institute, and in 2020 he was awarded the Royal Academy of Engineering Medal for outstanding commercialization of engineering innovation.</p>
<p>Paul is really passionate about vehicle autonomy. In this second part he talks about how AVs should handle situations they're not prepared for, regulatory requirements, and what sort of schedule future deployments might follow.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-121-4945.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Do you have questions about autonomous vehicles (AVs) and the contradictory headlines about them? Me too. So I invited Paul Newman, founder and CTO of <a href='https://www.oxbotica.com/'>Oxbotica</a>, a UK creator of software for AVs, to come on the show and clear up the hype and heat around self-driving cars. He is the BP chair of Information Engineering at the University of Oxford and director of the Oxford Robotics Institute, and in 2020 he was awarded the Royal Academy of Engineering Medal for outstanding commercialization of engineering innovation.</p>
<p>Paul is <em>really </em>passionate about vehicle autonomy. In this second part he talks about how AVs should handle situations they're not prepared for, regulatory requirements, and what sort of schedule future deployments might follow.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-121-4945.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/z6cxmm/121_AI-and-You.mp3" length="33173364" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Do you have questions about autonomous vehicles (AVs) and the contradictory headlines about them? Me too. So I invited Paul Newman, founder and CTO of Oxbotica, a UK creator of software for AVs, to come on the show and clear up the hype and heat around self-driving cars. He is the BP chair of Information Engineering at the University of Oxford and director of the Oxford Robotics Institute, and in 2020 he was awarded the Royal Academy of Engineering Medal for outstanding commercialization of engineering innovation.
Paul is really passionate about vehicle autonomy. In this second part he talks about how AVs should handle situations they're not prepared for, regulatory requirements, and what sort of schedule future deployments might follow.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2024</itunes:duration>
                <itunes:episode>121</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>120 - Guest: Paul Newman, Vehicle Autonomy CTO, part 1</title>
        <itunes:title>120 - Guest: Paul Newman, Vehicle Autonomy CTO, part 1</itunes:title>
        <link>https://aiandyou.net/e/120-guest-paul-newman-vehicle-autonomy-cto-part-1/</link>
                    <comments>https://aiandyou.net/e/120-guest-paul-newman-vehicle-autonomy-cto-part-1/#comments</comments>        <pubDate>Mon, 03 Oct 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/039ce1e4-0904-3112-916e-682d19c371e5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Do you have questions about autonomous vehicles (AVs) and the contradictory headlines about them? Me too. So I invited Paul Newman, CTO of <a href='https://www.oxbotica.com/'>Oxbotica</a>, a UK creator of software for AVs, to come on the show and clear up the hype and heat around self-driving cars. He is the BP chair of Information Engineering at the University of Oxford and director of the Oxford Robotics Institute; he serves as a science advisor to the Prime Minister and adviser to the UK Department of Transport, and in 2020 he was awarded the Royal Academy of Engineering Medal for outstanding commercialization of engineering innovation.</p>
<p>Paul is really passionate about vehicle autonomy. Join us in this first part where he distinguishes for us the conditions when a vehicle may operate autonomously sooner rather than later, and defines the Operational Design Domain... and we journey from the Gulf of Mexico to the mines of Australia to the plains of Mars.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-120-2192.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Do you have questions about autonomous vehicles (AVs) and the contradictory headlines about them? Me too. So I invited Paul Newman, CTO of <a href='https://www.oxbotica.com/'>Oxbotica</a>, a UK creator of software for AVs, to come on the show and clear up the hype and heat around self-driving cars. He is the BP chair of Information Engineering at the University of Oxford and director of the Oxford Robotics Institute; he serves as a science advisor to the Prime Minister and adviser to the UK Department of Transport, and in 2020 he was awarded the Royal Academy of Engineering Medal for outstanding commercialization of engineering innovation.</p>
<p>Paul is <em>really </em>passionate about vehicle autonomy. Join us in this first part where he distinguishes for us the conditions when a vehicle may operate autonomously sooner rather than later, and defines the Operational Design Domain... and we journey from the Gulf of Mexico to the mines of Australia to the plains of Mars.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-120-2192.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xvagw8/120_AI-and-You.mp3" length="32412679" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Do you have questions about autonomous vehicles (AVs) and the contradictory headlines about them? Me too. So I invited Paul Newman, CTO of Oxbotica, a UK creator of software for AVs, to come on the show and clear up the hype and heat around self-driving cars. He is the BP chair of Information Engineering at the University of Oxford and director of the Oxford Robotics Institute; he serves as a science advisor to the Prime Minister and adviser to the UK Department of Transport, and in 2020 he was awarded the Royal Academy of Engineering Medal for outstanding commercialization of engineering innovation.
Paul is really passionate about vehicle autonomy. Join us in this first part where he distinguishes for us the conditions when a vehicle may operate autonomously sooner rather than later, and defines the Operational Design Domain... and we journey from the Gulf of Mexico to the mines of Australia to the plains of Mars.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1976</itunes:duration>
                <itunes:episode>120</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>119 - Guest: Robbie Stamp, Historian Philosopher, part 2</title>
        <itunes:title>119 - Guest: Robbie Stamp, Historian Philosopher, part 2</itunes:title>
        <link>https://aiandyou.net/e/119-guest-robbie-stamp-historian-philosopher-part-2/</link>
                    <comments>https://aiandyou.net/e/119-guest-robbie-stamp-historian-philosopher-part-2/#comments</comments>        <pubDate>Mon, 26 Sep 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1c5922f5-32c2-3668-ae38-824d3817f915</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Don't Panic! Our returning guest, Robbie Stamp, is a friend and associate of the late Douglas Adams and was an executive producer on the 2005 movie of The Hitchhiker's Guide to the Galaxy. But he is also CEO of Bioss International, a global consultancy helping clients focus on decision-making in conditions of complexity, and founder with Adams of the Digital Village. Get ready to think about Life, the Universe, and Everything in this interview conclusion, as TEDx speaker, historian, and philosopher Robbie covers ground from how Douglas' way of thinking can shape our view of AI, to the meaning of empathy and creativity in light of AI's recent developments.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-119-7049.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Don't Panic! Our returning guest, Robbie Stamp, is a friend and associate of the late Douglas Adams and was an executive producer on the 2005 movie of The Hitchhiker's Guide to the Galaxy. But he is also CEO of Bioss International, a global consultancy helping clients focus on decision-making in conditions of complexity, and founder with Adams of the Digital Village. Get ready to think about Life, the Universe, and Everything in this interview conclusion, as TEDx speaker, historian, and philosopher Robbie covers ground from how Douglas' way of thinking can shape our view of AI, to the meaning of empathy and creativity in light of AI's recent developments.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-119-7049.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/duq3rb/119_AI-and-You.mp3" length="43720146" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Don't Panic! Our returning guest, Robbie Stamp, is a friend and associate of the late Douglas Adams and was an executive producer on the 2005 movie of The Hitchhiker's Guide to the Galaxy. But he is also CEO of Bioss International, a global consultancy helping clients focus on decision-making in conditions of complexity, and founder with Adams of the Digital Village. Get ready to think about Life, the Universe, and Everything in this interview conclusion, as TEDx speaker, historian, and philosopher Robbie covers ground from how Douglas' way of thinking can shape our view of AI, to the meaning of empathy and creativity in light of AI's recent developments.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2683</itunes:duration>
                <itunes:episode>119</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>118 - Guest: Robbie Stamp, Historian Philosopher, part 1</title>
        <itunes:title>118 - Guest: Robbie Stamp, Historian Philosopher, part 1</itunes:title>
        <link>https://aiandyou.net/e/118-guest-robbie-stamp-historian-philosopher-part-1/</link>
                    <comments>https://aiandyou.net/e/118-guest-robbie-stamp-historian-philosopher-part-1/#comments</comments>        <pubDate>Mon, 19 Sep 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/10ee77bf-f74e-3e27-94d7-3e89012ebd8f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Don't Panic! This week's guest, Robbie Stamp, was a friend and associate of the late Douglas Adams and was an executive producer on the 2005 movie of The Hitchhiker's Guide to the Galaxy. But he is also CEO of Bioss International, a global consultancy helping clients focus on decision-making in conditions of complexity, and founder with Adams of the Digital Village. TEDx speaker, historian, and philosopher, Robbie takes us on a wild romp ranging from how much agency we have in a world of AI making decisions about our lives, to the sentience of Marvin the Paranoid Android. Grab your towel.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-118-1576.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Don't Panic! This week's guest, Robbie Stamp, was a friend and associate of the late Douglas Adams and was an executive producer on the 2005 movie of The Hitchhiker's Guide to the Galaxy. But he is also CEO of Bioss International, a global consultancy helping clients focus on decision-making in conditions of complexity, and founder with Adams of the Digital Village. TEDx speaker, historian, and philosopher, Robbie takes us on a wild romp ranging from how much agency we have in a world of AI making decisions about our lives, to the sentience of Marvin the Paranoid Android. Grab your towel.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/09/transcript-118-1576.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/35shkb/118_AI-and-You.mp3" length="44055350" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Don't Panic! This week's guest, Robbie Stamp, was a friend and associate of the late Douglas Adams and was an executive producer on the 2005 movie of The Hitchhiker's Guide to the Galaxy. But he is also CEO of Bioss International, a global consultancy helping clients focus on decision-making in conditions of complexity, and founder with Adams of the Digital Village. TEDx speaker, historian, and philosopher, Robbie takes us on a wild romp ranging from how much agency we have in a world of AI making decisions about our lives, to the sentience of Marvin the Paranoid Android. Grab your towel.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2704</itunes:duration>
                <itunes:episode>118</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>117 - Guest: Chris Summerfield, Cognitive Scientist at Oxford and DeepMind, part 2</title>
        <itunes:title>117 - Guest: Chris Summerfield, Cognitive Scientist at Oxford and DeepMind, part 2</itunes:title>
        <link>https://aiandyou.net/e/117-guest-chris-summerfield-cognitive-scientist-at-oxford-and-deepmind-part-2/</link>
                    <comments>https://aiandyou.net/e/117-guest-chris-summerfield-cognitive-scientist-at-oxford-and-deepmind-part-2/#comments</comments>        <pubDate>Mon, 12 Sep 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d3c1e991-4c2d-3179-b09f-38caf61e92de</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University’s <a href='https://humaninformationprocessing.com/'>Human Information Processing lab</a> in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." In part 2, we talk about the new image generators like DALL-E 2 and how they relate to human cognition, brain-computer interfaces and neuroplasticity, and purple pineapples.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-117-8170.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University’s <a href='https://humaninformationprocessing.com/'>Human Information Processing lab</a> in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." In part 2, we talk about the new image generators like DALL-E 2 and how they relate to human cognition, brain-computer interfaces and neuroplasticity, and purple pineapples.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-117-8170.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/mebj5u/117_AI-and-You.mp3" length="26815787" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University’s Human Information Processing lab in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." In part 2, we talk about the new image generators like DALL-E 2 and how they relate to human cognition, brain-computer interfaces and neuroplasticity, and purple pineapples.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1627</itunes:duration>
                <itunes:episode>117</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>116 - Guest: Chris Summerfield, Cognitive Scientist at Oxford and DeepMind, part 1</title>
        <itunes:title>116 - Guest: Chris Summerfield, Cognitive Scientist at Oxford and DeepMind, part 1</itunes:title>
        <link>https://aiandyou.net/e/116-guest-chris-summerfield-cognitive-scientist-at-oxford-and-deepmind-part-1/</link>
                    <comments>https://aiandyou.net/e/116-guest-chris-summerfield-cognitive-scientist-at-oxford-and-deepmind-part-1/#comments</comments>        <pubDate>Mon, 05 Sep 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3478e144-3033-382b-aaac-8af5164a2389</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University’s <a href='https://humaninformationprocessing.com/'>Human Information Processing lab</a> in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." We have a fascinating discussion about how AI - today's transformers in particular - is or is not like human intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-116-2075.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University’s <a href='https://humaninformationprocessing.com/'>Human Information Processing lab</a> in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." We have a fascinating discussion about how AI - today's transformers in particular - is or is not like human intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-116-2075.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nyx5c3/116_AI-and-You.mp3" length="32972744" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University’s Human Information Processing lab in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." We have a fascinating discussion about how AI - today's transformers in particular - is or is not like human intelligence.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2011</itunes:duration>
                <itunes:episode>116</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>115 - Guest: Chanuki Seresinhe, BeautifulPlaces.ai Founder</title>
        <itunes:title>115 - Guest: Chanuki Seresinhe, BeautifulPlaces.ai Founder</itunes:title>
        <link>https://aiandyou.net/e/115-guest-chanuki-seresinhe-beautifulplacesai-founder/</link>
                    <comments>https://aiandyou.net/e/115-guest-chanuki-seresinhe-beautifulplacesai-founder/#comments</comments>        <pubDate>Mon, 29 Aug 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e4b03325-3be4-3da4-9dea-a09a88430fe1</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Beauty is in the eye of the beholder... and also in AI, when it's used by Chanuki Seresinhe to evaluate the aesthetic appeal of outdoor scenery. Amazing as it sounds, her PhD research trained AI to evaluate the beauty of Great Britain at a one-kilometer resolution, and she's only just getting started. I talk with Chanuki, now head of Data Science at Zoopla, about how she did that and what the applications and ramifications of <a href='https://beautifulplaces.ai'>her work</a> are.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-115-8561.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Beauty is in the eye of the beholder... and also in AI, when it's used by Chanuki Seresinhe to evaluate the aesthetic appeal of outdoor scenery. Amazing as it sounds, her PhD research trained AI to evaluate the beauty of Great Britain at a one-kilometer resolution, and she's only just getting started. I talk with Chanuki, now head of Data Science at Zoopla, about how she did that and what the applications and ramifications of <a href='https://beautifulplaces.ai'>her work</a> are.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-115-8561.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/uhgnqq/115_AI-and-You.mp3" length="33935304" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Beauty is in the eye of the beholder... and also in AI, when it's used by Chanuki Seresinhe to evaluate the aesthetic appeal of outdoor scenery. Amazing as it sounds, her PhD research trained AI to evaluate the beauty of Great Britain at a one-kilometer resolution, and she's only just getting started. I talk with Chanuki, now head of Data Science at Zoopla, about how she did that and what the applications and ramifications of her work are.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2072</itunes:duration>
                <itunes:episode>115</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>114 - Guest: James Wilson, Gartner Leadership Partner</title>
        <itunes:title>114 - Guest: James Wilson, Gartner Leadership Partner</itunes:title>
        <link>https://aiandyou.net/e/114-guest-james-wilson-gartner-leadership-partner/</link>
                    <comments>https://aiandyou.net/e/114-guest-james-wilson-gartner-leadership-partner/#comments</comments>        <pubDate>Mon, 22 Aug 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2c274f0b-a939-34f2-8ade-e86e117f5d73</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p><a href='https://www.linkedin.com/in/james-wilson-1938a1/'>James Wilson</a> is a former Leadership Partner at Gartner and now occupies a similar role at Capgemini, helping customers with the ethical issues of AI. His just-released book, <a href='https://www.artificialnegligence.com/'>Artificial Negligence</a>, looks at the broad issues and challenges of AI. We talk about the dimensions of AI ethics that concern customers, like sustainability, and his work with the Finnish government on their Aurora AI literacy program.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-114-7618.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p><a href='https://www.linkedin.com/in/james-wilson-1938a1/'>James Wilson</a> is a former Leadership Partner at Gartner and now occupies a similar role at Capgemini, helping customers with the ethical issues of AI. His just-released book, <em><a href='https://www.artificialnegligence.com/'>Artificial Negligence</a></em>, looks at the broad issues and challenges of AI. We talk about the dimensions of AI ethics that concern customers, like sustainability, and his work with the Finnish government on their Aurora AI literacy program.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-114-7618.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/yykd45/114_AI-and-You.mp3" length="40390266" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

James Wilson is a former Leadership Partner at Gartner and now occupies a similar role at Capgemini, helping customers with the ethical issues of AI. His just-released book, Artificial Negligence, looks at the broad issues and challenges of AI. We talk about the dimensions of AI ethics that concern customers, like sustainability, and his work with the Finnish government on their Aurora AI literacy program.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2475</itunes:duration>
                <itunes:episode>114</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>113 - Guest: Justin Harrison, Virtual Persona Creator</title>
        <itunes:title>113 - Guest: Justin Harrison, Virtual Persona Creator</itunes:title>
        <link>https://aiandyou.net/e/113-guest-justin-harrison-virtual-persona-creator/</link>
                    <comments>https://aiandyou.net/e/113-guest-justin-harrison-virtual-persona-creator/#comments</comments>        <pubDate>Mon, 15 Aug 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1e59e614-7004-36cc-996a-379a8e87254c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Justin Harrison is an entrepreneur, founder, and CEO of <a href='https://myyov.com'>YOV, Inc.</a> (You, Only Virtual)—a company specializing in posthumous digital communications. In 2019, he found himself staring down death on two fronts: his own, from a near-fatal motorcycle accident, as well as his mother’s stage-4 cancer diagnosis. He was terrified of losing his mother and wanted something that could preserve the essence of their relationship. So he embarked on a process of recreating his mother's persona in an AI chatbot. We explore how that works and what it's like for him.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-113-6075.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Justin Harrison is an entrepreneur, founder, and CEO of <a href='https://myyov.com'>YOV, Inc.</a> (You, Only Virtual)—a company specializing in posthumous digital communications. In 2019, he found himself staring down death on two fronts: his own, from a near-fatal motorcycle accident, as well as his mother’s stage-4 cancer diagnosis. He was terrified of losing his mother and wanted something that could preserve the essence of their relationship. So he embarked on a process of recreating his mother's persona in an AI chatbot. We explore how that works and what it's like for him.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/08/transcript-113-6075.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xvjf8s/113_AI-and-You.mp3" length="32040695" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Justin Harrison is an entrepreneur, founder, and CEO of YOV, Inc. (You, Only Virtual)—a company specializing in posthumous digital communications. In 2019, he found himself staring down death on two fronts: his own, from a near-fatal motorcycle accident, as well as his mother’s stage-4 cancer diagnosis. He was terrified of losing his mother and wanted something that could preserve the essence of their relationship. So he embarked on a process of recreating his mother's persona in an AI chatbot. We explore how that works and what it's like for him.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1953</itunes:duration>
                <itunes:episode>113</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>112 - Guest: Cansu Canca, Applied AI Ethics Philosopher, part 2</title>
        <itunes:title>112 - Guest: Cansu Canca, Applied AI Ethics Philosopher, part 2</itunes:title>
        <link>https://aiandyou.net/e/112-guest-cansu-canca-applied-ai-ethics-philosopher-part-2/</link>
                    <comments>https://aiandyou.net/e/112-guest-cansu-canca-applied-ai-ethics-philosopher-part-2/#comments</comments>        <pubDate>Mon, 08 Aug 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a3d20b2b-7fc2-3b53-a408-8d5a01ab5553</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Cansu Canca is founder and director of the AI Ethics Lab, providing ethics analysis and guidance to researchers and practitioners. Prior to that, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at Harvard. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics,” and has given the TEDx talk <a href='https://www.ted.com/talks/cansu_canca_how_to_solve_ai_s_ethical_puzzles'>How to Solve AI’s Ethical Puzzles</a>. We talk about the ethical issues of search engines and recommender algorithms, and get another take on the Blake Lemoine incident from an ethical viewpoint.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-112-8774.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Cansu Canca is founder and director of the AI Ethics Lab, providing ethics analysis and guidance to researchers and practitioners. Prior to that, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at Harvard. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics,” and has given the TEDx talk <a href='https://www.ted.com/talks/cansu_canca_how_to_solve_ai_s_ethical_puzzles'><em>How to Solve AI’s Ethical Puzzles</em></a>. We talk about the ethical issues of search engines and recommender algorithms, and get another take on the Blake Lemoine incident from an ethical viewpoint.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-112-8774.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6cj5w9/112_AI-and-You.mp3" length="33592995" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Cansu Canca is founder and director of the AI Ethics Lab, providing ethics analysis and guidance to researchers and practitioners. Prior to that, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at Harvard. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics,” and has given the TEDx talk How to Solve AI’s Ethical Puzzles. We talk about the ethical issues of search engines and recommender algorithms, and get another take on the Blake Lemoine incident from an ethical viewpoint.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2050</itunes:duration>
                <itunes:episode>112</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>111 - Guest: Cansu Canca, Applied AI Ethics Philosopher, part 1</title>
        <itunes:title>111 - Guest: Cansu Canca, Applied AI Ethics Philosopher, part 1</itunes:title>
        <link>https://aiandyou.net/e/111-guest-cansu-canca-applied-ai-ethics-philosopher-part-1/</link>
                    <comments>https://aiandyou.net/e/111-guest-cansu-canca-applied-ai-ethics-philosopher-part-1/#comments</comments>        <pubDate>Mon, 01 Aug 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/cc2f4475-2dc8-3a94-add5-70eb0e1401a8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Cansu Canca is founder and director of the AI Ethics Lab, providing ethics analysis and guidance to researchers and practitioners. Prior to that, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at Harvard. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics,” and has given the TEDx talk <a href='https://www.ted.com/talks/cansu_canca_how_to_solve_ai_s_ethical_puzzles'>How to Solve AI’s Ethical Puzzles</a>. We talk about her journey from the field of medical ethics into AI ethics, and what the experience of a company working with the AI Ethics Lab is like.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-111-6758.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Cansu Canca is founder and director of the AI Ethics Lab, providing ethics analysis and guidance to researchers and practitioners. Prior to that, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at Harvard. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics,” and has given the TEDx talk <a href='https://www.ted.com/talks/cansu_canca_how_to_solve_ai_s_ethical_puzzles'><em>How to Solve AI’s Ethical Puzzles</em></a>. We talk about her journey from the field of medical ethics into AI ethics, and what the experience of a company working with the AI Ethics Lab is like.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-111-6758.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/pfqjk6/111_AI-and-You.mp3" length="26200969" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Cansu Canca is founder and director of the AI Ethics Lab, providing ethics analysis and guidance to researchers and practitioners. Prior to that, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at Harvard. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics,” and has given the TEDx talk How to Solve AI’s Ethical Puzzles. We talk about her journey from the field of medical ethics into AI ethics, and what the experience of a company working with the AI Ethics Lab is like.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1588</itunes:duration>
                <itunes:episode>111</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>110 - Special: AI Interpreted via Monty Python</title>
        <itunes:title>110 - Special: AI Interpreted via Monty Python</itunes:title>
        <link>https://aiandyou.net/e/110-special-ai-interpreted-via-monty-python/</link>
                    <comments>https://aiandyou.net/e/110-special-ai-interpreted-via-monty-python/#comments</comments>        <pubDate>Mon, 25 Jul 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/dab76d47-40e6-3524-8a7e-b587fb41d8f1</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Nobody expects... an AI podcast to veer into comedy parody. Possibly with good reason. In a show almost completely free of spam and Queen Victoria, we interpret some of today's news and themes about AI through the lens of Monty Python sketches.</p>
<p>If you don't know what Monty Python is, this will confuse you more than a cat and make your brain hurt. If you've seen some of today's news about AI and know the airspeed of an unladen swallow, you're in the right place.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-110-0538.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Nobody expects... an AI podcast to veer into comedy parody. Possibly with good reason. In a show almost completely free of spam and Queen Victoria, we interpret some of today's news and themes about AI through the lens of <em>Monty Python sketches</em>.</p>
<p>If you don't know what Monty Python is, this will confuse you more than a cat and make your brain hurt. If you've seen some of today's news about AI and know the airspeed of an unladen swallow, you're in the right place.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-110-0538.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/c3e4fr/110_AI-and-You.mp3" length="14116515" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Nobody expects... an AI podcast to veer into comedy parody. Possibly with good reason. In a show almost completely free of spam and Queen Victoria, we interpret some of today's news and themes about AI through the lens of Monty Python sketches.
If you don't know what Monty Python is, this will confuse you more than a cat and make your brain hurt. If you've seen some of today's news about AI and know the airspeed of an unladen swallow, you're in the right place.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>833</itunes:duration>
                <itunes:episode>110</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>109 - Guest: Robert J. Sawyer, Science Fiction Writer, part 2</title>
        <itunes:title>109 - Guest: Robert J. Sawyer, Science Fiction Writer, part 2</itunes:title>
        <link>https://aiandyou.net/e/109-guest-robert-j-sawyer-science-fiction-writer-part-2/</link>
                    <comments>https://aiandyou.net/e/109-guest-robert-j-sawyer-science-fiction-writer-part-2/#comments</comments>        <pubDate>Mon, 18 Jul 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9e11c923-aada-3355-a8d8-7cb18d154bcd</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What is consciousness, how might it emerge from or into AI, and how can it be transferred? Fascinating questions tackled by the oeuvre of a fascinating author, <a href='https://sfwriter.com'>Robert J. Sawyer</a>, the "Dean of Canadian Science Fiction," and one of only eight writers ever to win all three of the science-fiction field’s top honors for best novel of the year: the Hugo Award, which he won for his novel <a href='https://www.amazon.ca/Hominids-Sawyer-Published-Science-Paperback/dp/B00GGVLX1C/'>Hominids</a>; the Nebula Award, which he won for his novel <a href='https://www.amazon.ca/Terminal-Experiment-Author-Robert-published/dp/B00GX3CFM4/'>The Terminal Experiment</a>; and the John W. Campbell Memorial Award, which he won for his novel <a href='https://www.amazon.ca/Mindscan-Robert-J-Sawyer/dp/0765329905'>Mindscan</a>.</p>
<p>In the second half of our interview, we talk about the simulation hypothesis, consciousness capture and transfer, and what today’s AI technologists should be learning from science fiction. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-109-9795.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What is consciousness, how might it emerge from or into AI, and how can it be transferred? Fascinating questions tackled by the oeuvre of a fascinating author, <a href='https://sfwriter.com'>Robert J. Sawyer</a>, the "Dean of Canadian Science Fiction," and one of only eight writers ever to win all three of the science-fiction field’s top honors for best novel of the year: the Hugo Award, which he won for his novel <a href='https://www.amazon.ca/Hominids-Sawyer-Published-Science-Paperback/dp/B00GGVLX1C/'><em>Hominids</em></a>; the Nebula Award, which he won for his novel <a href='https://www.amazon.ca/Terminal-Experiment-Author-Robert-published/dp/B00GX3CFM4/'><em>The Terminal Experiment</em></a>; and the John W. Campbell Memorial Award, which he won for his novel <a href='https://www.amazon.ca/Mindscan-Robert-J-Sawyer/dp/0765329905'><em>Mindscan</em></a>.</p>
<p>In the second half of our interview, we talk about the simulation hypothesis, consciousness capture and transfer, and what today’s AI technologists should be learning from science fiction. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-109-9795.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qsakh6/109_AI-and-You.mp3" length="30598736" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What is consciousness, how might it emerge from or into AI, and how can it be transferred? Fascinating questions tackled by the oeuvre of a fascinating author, Robert J. Sawyer, the "Dean of Canadian Science Fiction," and one of only eight writers ever to win all three of the science-fiction field’s top honors for best novel of the year: the Hugo Award, which he won for his novel Hominids; the Nebula Award, which he won for his novel The Terminal Experiment; and the John W. Campbell Memorial Award, which he won for his novel Mindscan.
In the second half of our interview, we talk about the simulation hypothesis, consciousness capture and transfer, and what today’s AI technologists should be learning from science fiction. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1863</itunes:duration>
                <itunes:episode>109</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>108 - Guest: Robert J. Sawyer, Science Fiction Writer, part 1</title>
        <itunes:title>108 - Guest: Robert J. Sawyer, Science Fiction Writer, part 1</itunes:title>
        <link>https://aiandyou.net/e/108-guest-robert-j-sawyer-science-fiction-writer-part-1/</link>
                    <comments>https://aiandyou.net/e/108-guest-robert-j-sawyer-science-fiction-writer-part-1/#comments</comments>        <pubDate>Mon, 11 Jul 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e59f96aa-852e-3735-b02b-26010433c6d5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What if there are zombies among us? In another dive into the nature of consciousness, the "philosophical zombie" is a fascinating topic explored by a fascinating author, <a href='https://sfwriter.com'>Robert J. Sawyer</a>, the "Dean of Canadian Science Fiction," and one of only eight writers ever to win all three of the science-fiction field’s top honors for best novel of the year: the Hugo Award, which he won for his novel <a href='https://www.amazon.ca/Hominids-Sawyer-Published-Science-Paperback/dp/B00GGVLX1C/'>Hominids</a>; the Nebula Award, which he won for his novel <a href='https://www.amazon.ca/Terminal-Experiment-Author-Robert-published/dp/B00GX3CFM4/'>The Terminal Experiment</a>; and the John W. Campbell Memorial Award, which he won for his novel <a href='https://www.amazon.ca/Mindscan-Robert-J-Sawyer/dp/0765329905'>Mindscan</a>.</p>
<p>We talk about zombies because of the question of how to know whether an AI is conscious, and yes, we discuss Blake Lemoine's assertion that Google's LaMDA AI has become sentient. Rob's stories explore how humans and superintelligent AI can both win. Find out more in part 1.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-108-4227.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What if there are zombies among us? In another dive into the nature of consciousness, the "philosophical zombie" is a fascinating topic explored by a fascinating author, <a href='https://sfwriter.com'>Robert J. Sawyer</a>, the "Dean of Canadian Science Fiction," and one of only eight writers ever to win all three of the science-fiction field’s top honors for best novel of the year: the Hugo Award, which he won for his novel <a href='https://www.amazon.ca/Hominids-Sawyer-Published-Science-Paperback/dp/B00GGVLX1C/'><em>Hominids</em></a>; the Nebula Award, which he won for his novel <a href='https://www.amazon.ca/Terminal-Experiment-Author-Robert-published/dp/B00GX3CFM4/'><em>The Terminal Experiment</em></a>; and the John W. Campbell Memorial Award, which he won for his novel <a href='https://www.amazon.ca/Mindscan-Robert-J-Sawyer/dp/0765329905'><em>Mindscan</em></a>.</p>
<p>We talk about zombies because of the question of how to know whether an AI is conscious, and yes, we discuss Blake Lemoine's assertion that Google's LaMDA AI has become sentient. Rob's stories explore how humans and superintelligent AI can <em>both</em> win. Find out more in part 1.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/07/transcript-108-4227.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/48meaz/108_AI-and-You.mp3" length="43899923" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What if there are zombies among us? In another dive into the nature of consciousness, the "philosophical zombie" is a fascinating topic explored by a fascinating author, Robert J. Sawyer, the "Dean of Canadian Science Fiction," and one of only eight writers ever to win all three of the science-fiction field’s top honors for best novel of the year: the Hugo Award, which he won for his novel Hominids; the Nebula Award, which he won for his novel The Terminal Experiment; and the John W. Campbell Memorial Award, which he won for his novel Mindscan.
We talk about zombies because of the question of how to know whether an AI is conscious, and yes, we discuss Blake Lemoine's assertion that Google's LaMDA AI has become sentient. Rob's stories explore how humans and superintelligent AI can both win. Find out more in part 1.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2743</itunes:duration>
                <itunes:episode>108</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>107 - Guest: Ben Goertzel, AGI researcher, SingularityNET Founder, part 2</title>
        <itunes:title>107 - Guest: Ben Goertzel, AGI researcher, SingularityNET Founder, part 2</itunes:title>
        <link>https://aiandyou.net/e/107-guest-ben-goertzel-agi-researcher-singularitynet-founder-part-2/</link>
                    <comments>https://aiandyou.net/e/107-guest-ben-goertzel-agi-researcher-singularitynet-founder-part-2/#comments</comments>        <pubDate>Mon, 04 Jul 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/21b4e6fe-1efc-310d-8891-215c8f3ff334</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We've talked a lot about artificial general intelligence (AGI) on the show, but never as much as in this interview, when we talk with Mr. AGI himself, Ben Goertzel. Ben wrote a book, Artificial General Intelligence, founded the AGI Society and <a href='https://singularitynet.io'>SingularityNET</a>, and wrote <a href='https://www.amazon.com/Ten-Years-Singularity-Really-Try/dp/1505550823'>Ten Years to the Singularity if We Really, Really, Try</a>. He was Chief Scientist of Hanson Robotics and was one of the first people to popularize the term AGI.</p>
<p>In the second half of the interview, we talk about the Google engineer who declared the LaMDA AI to be sentient, how and when to declare an AI sentient or AGI, a "digital baby brain," and the SingularityNET metaverse as a training ground for AGIs. We cover an incredible amount of ground!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcription-107.docx'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We've talked a lot about artificial general intelligence (AGI) on the show, but never as much as in this interview, when we talk with Mr. AGI himself, Ben Goertzel. Ben wrote a book, <em>Artificial General Intelligence</em>, founded the AGI Society and <a href='https://singularitynet.io'>SingularityNET</a>, and wrote <a href='https://www.amazon.com/Ten-Years-Singularity-Really-Try/dp/1505550823'><em>Ten Years to the Singularity if We Really, Really, Try</em></a>. He was Chief Scientist of Hanson Robotics and was one of the first people to popularize the term AGI.</p>
<p>In the second half of the interview, we talk about the Google engineer who declared the LaMDA AI to be sentient, how and when to declare an AI sentient or AGI, a "digital baby brain," and the SingularityNET metaverse as a training ground for AGIs. We cover an incredible amount of ground!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcription-107.docx'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/fqp9um/107_AI-and-You.mp3" length="33669482" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We've talked a lot about artificial general intelligence (AGI) on the show, but never as much as in this interview, when we talk with Mr. AGI himself, Ben Goertzel. Ben wrote a book, Artificial General Intelligence, founded the AGI Society and SingularityNET, and wrote Ten Years to the Singularity if We Really, Really, Try. He was Chief Scientist of Hanson Robotics and was one of the first people to popularize the term AGI.
In the second half of the interview, we talk about the Google engineer who declared the LaMDA AI to be sentient, how and when to declare an AI sentient or AGI, a "digital baby brain," and the SingularityNET metaverse as a training ground for AGIs. We cover an incredible amount of ground!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2055</itunes:duration>
                <itunes:episode>107</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>106 - Guest: Ben Goertzel, AGI researcher, SingularityNET Founder</title>
        <itunes:title>106 - Guest: Ben Goertzel, AGI researcher, SingularityNET Founder</itunes:title>
        <link>https://aiandyou.net/e/106-guest-ben-goertzel-agi-researcher-singularitynet-founder/</link>
                    <comments>https://aiandyou.net/e/106-guest-ben-goertzel-agi-researcher-singularitynet-founder/#comments</comments>        <pubDate>Mon, 27 Jun 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/134293c3-f34c-3fcf-af3e-49d6b0738755</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We've talked a lot about artificial general intelligence (AGI) on the show, but never as much as in this interview, when we talk with Mr. AGI himself, Ben Goertzel. Ben wrote a book, Artificial General Intelligence, founded the AGI Society and <a href='https://singularitynet.io'>SingularityNET</a>, and wrote <a href='https://www.amazon.com/Ten-Years-Singularity-Really-Try/dp/1505550823'>Ten Years to the Singularity if We Really, Really, Try</a>. He was Chief Scientist of Hanson Robotics and was one of the first people to popularize the term AGI.</p>
<p>In part 1, we talk about how he got into AGI, his new AGI hardware platform, human-AGI distinctions, and what it would be like for a robot to go to MIT. Really, this episode is packed!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-106-0709.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We've talked a lot about artificial general intelligence (AGI) on the show, but never as much as in this interview, when we talk with Mr. AGI himself, Ben Goertzel. Ben wrote a book, <em>Artificial General Intelligence</em>, founded the AGI Society and <a href='https://singularitynet.io'>SingularityNET</a>, and wrote <a href='https://www.amazon.com/Ten-Years-Singularity-Really-Try/dp/1505550823'><em>Ten Years to the Singularity if We Really, Really, Try</em></a>. He was Chief Scientist of Hanson Robotics and was one of the first people to popularize the term AGI.</p>
<p>In part 1, we talk about how he got into AGI, his new AGI hardware platform, human-AGI distinctions, and what it would be like for a robot to go to MIT. Really, this episode is packed!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-106-0709.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/94889b/106_AI-and-You.mp3" length="29597306" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We've talked a lot about artificial general intelligence (AGI) on the show, but never as much as in this interview, when we talk with Mr. AGI himself, Ben Goertzel. Ben wrote a book, Artificial General Intelligence, founded the AGI Society and SingularityNET, and wrote Ten Years to the Singularity if We Really, Really, Try. He was Chief Scientist of Hanson Robotics and was one of the first people to popularize the term AGI.
In part 1, we talk about how he got into AGI, his new AGI hardware platform, human-AGI distinctions, and what it would be like for a robot to go to MIT. Really, this episode is packed!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1800</itunes:duration>
                <itunes:episode>106</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>105 - Archive Interview: Michael Bowling, AI poker researcher</title>
        <itunes:title>105 - Archive Interview: Michael Bowling, AI poker researcher</itunes:title>
        <link>https://aiandyou.net/e/105-archive-interview-michael-bowling-ai-poker-researcher/</link>
                    <comments>https://aiandyou.net/e/105-archive-interview-michael-bowling-ai-poker-researcher/#comments</comments>        <pubDate>Mon, 20 Jun 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8012f9d3-aae5-3e92-83bf-493759474f54</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are you good at bluffing? Do you think you could beat a computer? What if I told you that it was mathematically proven that the computer would beat you? That's what Michael Bowling did for his program that plays heads-up limit Texas Hold'em: he proved that it was impossible to do better than a draw against it.</p>
<p>Michael is a professor at the University of Alberta, a research scientist at DeepMind, and has been on Scientific American Frontiers, National Geographic Today, and featured in exhibits at the Smithsonian.</p>
<p>This is a previously unreleased interview from 2016, recorded at the Canadian Artificial Intelligence Association annual conference. I was pretty green at this then, but Michael's answers are illuminating and just as useful today despite the advancements in computer poker since. He also talked about work being done on video games, and about the conversation on artificial general intelligence that was just starting to become intense in the AI community.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-105-6027.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are you good at bluffing? Do you think you could beat a computer? What if I told you that it was mathematically proven that the computer would beat you? That's what Michael Bowling did for his program that plays heads-up limit Texas Hold'em: he proved that it was impossible to do better than a draw against it.</p>
<p>Michael is a professor at the University of Alberta, a research scientist at DeepMind, and has been on Scientific American Frontiers, National Geographic Today, and featured in exhibits at the Smithsonian.</p>
<p>This is a previously unreleased interview from 2016, recorded at the Canadian Artificial Intelligence Association annual conference. I was pretty green at this then, but Michael's answers are illuminating and just as useful today despite the advancements in computer poker since. He also talked about work being done on video games, and about the conversation on artificial general intelligence that was just starting to become intense in the AI community.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-105-6027.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/i7bcgt/105_AI-and-You.mp3" length="25847794" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Are you good at bluffing? Do you think you could beat a computer? What if I told you that it was mathematically proven that the computer would beat you? That's what Michael Bowling did for his program that plays heads-up limit Texas Hold'em: he proved that it was impossible to do better than a draw against it.
Michael is a professor at the University of Alberta, a research scientist at DeepMind, and has been on Scientific American Frontiers, National Geographic Today, and featured in exhibits at the Smithsonian.
This is a previously unreleased interview from 2016, recorded at the Canadian Artificial Intelligence Association annual conference. I was pretty green at this then, but Michael's answers are illuminating and just as useful today despite the advancements in computer poker since. He also talked about work being done on video games, and about the conversation on artificial general intelligence that was just starting to become intense in the AI community.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1566</itunes:duration>
                <itunes:episode>105</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>104 - ANI, AGI, ASI - What are we talking about?</title>
        <itunes:title>104 - ANI, AGI, ASI - What are we talking about?</itunes:title>
        <link>https://aiandyou.net/e/104-ani-agi-asi-what-are-we-talking-about/</link>
                    <comments>https://aiandyou.net/e/104-ani-agi-asi-what-are-we-talking-about/#comments</comments>        <pubDate>Mon, 13 Jun 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a1a14052-f11f-3fa4-b0b0-f93ad1aadf63</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>For our second anniversary show, we're going to explain some of the terms that are often used on the show and not always spelled out, like ANI (artificial narrow intelligence), AGI (artificial general intelligence), and ASI (artificial superintelligence). What do they mean? Why do so many people talk about them? And what do you need to know to follow along?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-104-8925.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>For our second anniversary show, we're going to explain some of the terms that are often used on the show and not always spelled out, like ANI (artificial narrow intelligence), AGI (artificial general intelligence), and ASI (artificial superintelligence). What do they mean? Why do so many people talk about them? And what do you need to know to follow along?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-104-8925.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ckjepb/104_AI-and-You.mp3" length="30051627" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

For our second anniversary show, we're going to explain some of the terms that are often used on the show and not always spelled out, like ANI (artificial narrow intelligence), AGI (artificial general intelligence), and ASI (artificial superintelligence). What do they mean? Why do so many people talk about them? And what do you need to know to follow along?
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1829</itunes:duration>
                <itunes:episode>104</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>103 - Guest: Tom White, Machine Perception Artist</title>
        <itunes:title>103 - Guest: Tom White, Machine Perception Artist</itunes:title>
        <link>https://aiandyou.net/e/103-guest-tom-white-machine-perception-artist/</link>
                    <comments>https://aiandyou.net/e/103-guest-tom-white-machine-perception-artist/#comments</comments>        <pubDate>Mon, 06 Jun 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7e0acdcf-fb76-3cdd-8230-c4bb3110154f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How does AI see the world? It's easy to take for granted that an AI trained on labeled data to recognize certain images is seeing them the same way we do, but that's not so. The AI is quite alien, and helping us to see the world through its eyes is <a href='https://drib.net'>Tom White</a>, a New Zealand-based artist. He creates physical artworks that highlight how machines “see” and thus how they think, suggesting that these systems are capable of abstraction and conceptual thinking. He has exhibited computer-based artwork internationally over the past 25 years with themes of artificial intelligence, interactivity, and computational creativity. He is currently a lecturer and researcher at the University of Wellington School of Design, where he teaches students the creative potential of computer programming and artificial intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-103-4657.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How does AI see the world? It's easy to assume that an AI trained on labeled data to recognize certain images is seeing them the same way we do, but that's not so. The AI is quite alien, and helping us see the world through its eyes is <a href='https://drib.net'>Tom White</a>, a New Zealand-based artist. He creates physical artworks that highlight how machines “see” and thus how they think, suggesting that these systems are capable of abstraction and conceptual thinking. He has exhibited computer-based artwork internationally over the past 25 years on themes of artificial intelligence, interactivity, and computational creativity. He is currently a lecturer and researcher at the University of Wellington School of Design, where he teaches students the creative potential of computer programming and artificial intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/06/transcript-103-4657.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kt2jxz/103_AI-and-You.mp3" length="42963640" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How does AI see the world? It's easy to assume that an AI trained on labeled data to recognize certain images is seeing them the same way we do, but that's not so. The AI is quite alien, and helping us see the world through its eyes is Tom White, a New Zealand-based artist. He creates physical artworks that highlight how machines “see” and thus how they think, suggesting that these systems are capable of abstraction and conceptual thinking. He has exhibited computer-based artwork internationally over the past 25 years on themes of artificial intelligence, interactivity, and computational creativity. He is currently a lecturer and researcher at the University of Wellington School of Design, where he teaches students the creative potential of computer programming and artificial intelligence.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2636</itunes:duration>
                <itunes:episode>103</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>102 - Guest: Richard Ahlfeld, AI for Engineering Optimization</title>
        <itunes:title>102 - Guest: Richard Ahlfeld, AI for Engineering Optimization</itunes:title>
        <link>https://aiandyou.net/e/102-guest-richard-ahlfeld-ai-for-engineering-optimization/</link>
                    <comments>https://aiandyou.net/e/102-guest-richard-ahlfeld-ai-for-engineering-optimization/#comments</comments>        <pubDate>Mon, 30 May 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/212abdd7-ec2f-3d84-ad64-36da11c2e986</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>When serious engineering with safety of life is at stake - think rockets, engine turbines, aircraft - Richard Ahlfeld brings artificial intelligence to the job. He modeled the Space Launch System as part of his PhD at Imperial College London and now is CEO of Monolith AI, commercializing AI in the role of engineering design and testing. We talk about just how that gets done and what sort of difference it makes over traditional methods.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/05/transcript-102-1850.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>When serious engineering with safety of life is at stake - think rockets, engine turbines, aircraft - Richard Ahlfeld brings artificial intelligence to the job. He modeled the Space Launch System as part of his PhD at Imperial College London and now is CEO of Monolith AI, commercializing AI in the role of engineering design and testing. We talk about just how that gets done and what sort of difference it makes over traditional methods.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/05/transcript-102-1850.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qt3eqw/102_AI-and-You.mp3" length="35911833" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

When serious engineering with safety of life is at stake - think rockets, engine turbines, aircraft - Richard Ahlfeld brings artificial intelligence to the job. He modeled the Space Launch System as part of his PhD at Imperial College London and now is CEO of Monolith AI, commercializing AI in the role of engineering design and testing. We talk about just how that gets done and what sort of difference it makes over traditional methods.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2195</itunes:duration>
                <itunes:episode>102</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>101 - Guest: Bryant Cruse, Cognitive AI CEO</title>
        <itunes:title>101 - Guest: Bryant Cruse, Cognitive AI CEO</itunes:title>
        <link>https://aiandyou.net/e/101-guest-bryant-cruse-cognitive-ai-ceo/</link>
                    <comments>https://aiandyou.net/e/101-guest-bryant-cruse-cognitive-ai-ceo/#comments</comments>        <pubDate>Mon, 23 May 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/187d895f-c411-3407-862a-5aed7b2accfc</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Bryant Cruse, former naval aviator (we'll talk about that) and Space Telescope mission operations team member (we'll talk about that too), is the founder and CEO of <a href='https://www.newsapience.com/'>New Sapience</a>, working on bold new advances in artificial general intelligence. We talk about what real understanding by AI means.  </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/05/transcript-101-7450.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Bryant Cruse, former naval aviator (we'll talk about that) and Space Telescope mission operations team member (we'll talk about that too), is the founder and CEO of <a href='https://www.newsapience.com/'>New Sapience</a>, working on bold new advances in artificial general intelligence. We talk about what real understanding by AI means.  </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/05/transcript-101-7450.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bk4mzt/101_AI-and-You.mp3" length="47584597" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Bryant Cruse, former naval aviator (we'll talk about that) and Space Telescope mission operations team member (we'll talk about that too), is the founder and CEO of New Sapience, working on bold new advances in artificial general intelligence. We talk about what real understanding by AI means.  
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2925</itunes:duration>
                <itunes:episode>101</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>100 - What We’ve Learned from our Guests</title>
        <itunes:title>100 - What We’ve Learned from our Guests</itunes:title>
        <link>https://aiandyou.net/e/100-what-we-ve-learned-from-our-guests/</link>
                    <comments>https://aiandyou.net/e/100-what-we-ve-learned-from-our-guests/#comments</comments>        <pubDate>Mon, 16 May 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3b0ef4ac-44ec-303b-bca3-dd7256beb7e3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>100 episodes! And 60 guests: What have we learned from them? We've had everyone from science fiction authors to CEOs, from philosophers to government ministers, and from professors to neuroscientists. All of them helping us wrap our heads around the enormous impact of this thing called AI. I realized two things: (1) I learned a tremendous amount from all these experts giving us their time and brains, and (2) That learning is as valuable today as when they came on the show. So this episode is a guide to those past shows to help you decide what you might want to visit or return to.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/05/transcript-100-0001.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>100 episodes! And 60 guests: What have we learned from them? We've had everyone from science fiction authors to CEOs, from philosophers to government ministers, and from professors to neuroscientists. All of them helping us wrap our heads around the enormous impact of this thing called AI. I realized two things: (1) I learned a tremendous amount from all these experts giving us their time and brains, and (2) That learning is as valuable today as when they came on the show. So this episode is a guide to those past shows to help you decide what you might want to visit or return to.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/05/transcript-100-0001.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/7qwkxb/100_AI-and-You.mp3" length="40963706" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

100 episodes! And 60 guests: What have we learned from them? We've had everyone from science fiction authors to CEOs, from philosophers to government ministers, and from professors to neuroscientists. All of them helping us wrap our heads around the enormous impact of this thing called AI. I realized two things: (1) I learned a tremendous amount from all these experts giving us their time and brains, and (2) That learning is as valuable today as when they came on the show. So this episode is a guide to those past shows to help you decide what you might want to visit or return to.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2511</itunes:duration>
                <itunes:episode>100</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>099 - Guest: Calum Chace, AI Author and Speaker, part 2</title>
        <itunes:title>099 - Guest: Calum Chace, AI Author and Speaker, part 2</itunes:title>
        <link>https://aiandyou.net/e/099-guest-calum-chace-ai-author-and-speaker-part-2/</link>
                    <comments>https://aiandyou.net/e/099-guest-calum-chace-ai-author-and-speaker-part-2/#comments</comments>        <pubDate>Mon, 09 May 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7182646b-d1dc-3635-a018-0099774f4046</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Calum Chace is a keynote speaker and the author of <a href='https://www.amazon.ca/Surviving-AI-promise-artificial-intelligence/dp/0993211623'>Surviving AI: The Promise and Peril of Artificial Intelligence,</a> and <a href='https://www.amazon.com/Economic-Singularity-Artificial-intelligence-capitalism/dp/099321164X/'>The Economic Singularity: Artificial Intelligence and the Death of Capitalism</a>. In part 2, we talk about the metaverse, how AI could be leveraged in the metaverse, and the agricultural and longevity singularities.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-099-6243.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Calum Chace is a keynote speaker and the author of <a href='https://www.amazon.ca/Surviving-AI-promise-artificial-intelligence/dp/0993211623'><em>Surviving AI: The Promise and Peril of Artificial Intelligence</em>,</a> and <a href='https://www.amazon.com/Economic-Singularity-Artificial-intelligence-capitalism/dp/099321164X/'><em>The Economic Singularity: Artificial Intelligence and the Death of Capitalism</em></a>. In part 2, we talk about the metaverse, how AI could be leveraged in the metaverse, and the agricultural and longevity singularities.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-099-6243.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/4a2jrs/099_AI-and-You.mp3" length="32593237" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Calum Chace is a keynote speaker and the author of Surviving AI: The Promise and Peril of Artificial Intelligence, and The Economic Singularity: Artificial Intelligence and the Death of Capitalism. In part 2, we talk about the metaverse, how AI could be leveraged in the metaverse, and the agricultural and longevity singularities.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1988</itunes:duration>
                <itunes:episode>99</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>098 - Guest: Calum Chace, AI Author and Speaker, part 1</title>
        <itunes:title>098 - Guest: Calum Chace, AI Author and Speaker, part 1</itunes:title>
        <link>https://aiandyou.net/e/098-guest-calum-chace-ai-author-and-speaker-part-1/</link>
                    <comments>https://aiandyou.net/e/098-guest-calum-chace-ai-author-and-speaker-part-1/#comments</comments>        <pubDate>Mon, 02 May 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5e961dd5-a590-3674-9721-81831295fd35</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Calum Chace is a keynote speaker and the author of <a href='https://www.amazon.ca/Surviving-AI-promise-artificial-intelligence/dp/0993211623'>Surviving AI: The Promise and Peril of Artificial Intelligence,</a> and <a href='https://www.amazon.com/Economic-Singularity-Artificial-intelligence-capitalism/dp/099321164X/'>The Economic Singularity: Artificial Intelligence and the Death of Capitalism</a>. We talked in this first part about his concept of the Economic Singularity, a transformation of the socioeconomic space he says will arrive much sooner than Ray Kurzweil's famed singularity.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-098-3884.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Calum Chace is a keynote speaker and the author of <a href='https://www.amazon.ca/Surviving-AI-promise-artificial-intelligence/dp/0993211623'><em>Surviving AI: The Promise and Peril of Artificial Intelligence</em>,</a> and <a href='https://www.amazon.com/Economic-Singularity-Artificial-intelligence-capitalism/dp/099321164X/'><em>The Economic Singularity: Artificial Intelligence and the Death of Capitalism</em></a>. We talked in this first part about his concept of the Economic Singularity, a transformation of the socioeconomic space he says will arrive much sooner than Ray Kurzweil's famed singularity.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-098-3884.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/j3ryp3/098_AI-and-You.mp3" length="34547196" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Calum Chace is a keynote speaker and the author of Surviving AI: The Promise and Peril of Artificial Intelligence, and The Economic Singularity: Artificial Intelligence and the Death of Capitalism. We talked in this first part about his concept of the Economic Singularity, a transformation of the socioeconomic space he says will arrive much sooner than Ray Kurzweil's famed singularity.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2110</itunes:duration>
                <itunes:episode>98</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>097 - Guest: Alison Gopnik, Child Psychology Professor, part 2</title>
        <itunes:title>097 - Guest: Alison Gopnik, Child Psychology Professor, part 2</itunes:title>
        <link>https://aiandyou.net/e/097-guest-alison-gopnik-child-psychology-professor-part-2/</link>
                    <comments>https://aiandyou.net/e/097-guest-alison-gopnik-child-psychology-professor-part-2/#comments</comments>        <pubDate>Mon, 25 Apr 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/bf6fd58e-0ca5-3439-93bb-e162304d3a31</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What is that baby thinking? Alison Gopnik knows. She is the American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, writes for The New York Times, The Wall Street Journal, New Scientist, and Scientific American, and has appeared on The Colbert Show and given a <a href='https://www.ted.com/talks/alison_gopnik_what_do_babies_think?language=en'>TED talk</a>. She has much to tell us about how studying children can inform the development of artificial general intelligence, and in part 2, we discuss topics like epigenetics and the AI alignment problem.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-097-5781.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What is that baby thinking? Alison Gopnik knows. She is the American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, writes for <em>The New York Times, The Wall Street Journal, New Scientist,</em> and <em>Scientific American,</em> and has appeared on <em>The Colbert Show</em> and given a <a href='https://www.ted.com/talks/alison_gopnik_what_do_babies_think?language=en'>TED talk</a>. She has much to tell us about how studying children can inform the development of artificial general intelligence, and in part 2, we discuss topics like epigenetics and the AI alignment problem.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-097-5781.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/z7jmgi/097_AI-and-You.mp3" length="29056049" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What is that baby thinking? Alison Gopnik knows. She is the American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, writes for The New York Times, The Wall Street Journal, New Scientist, and Scientific American, and has appeared on The Colbert Show and given a TED talk. She has much to tell us about how studying children can inform the development of artificial general intelligence, and in part 2, we discuss topics like epigenetics and the AI alignment problem.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1767</itunes:duration>
                <itunes:episode>97</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>096 - Guest: Alison Gopnik, Child Psychology Professor, part 1</title>
        <itunes:title>096 - Guest: Alison Gopnik, Child Psychology Professor, part 1</itunes:title>
        <link>https://aiandyou.net/e/096-guest-alison-gopnik-child-psychology-professor-part-1/</link>
                    <comments>https://aiandyou.net/e/096-guest-alison-gopnik-child-psychology-professor-part-1/#comments</comments>        <pubDate>Mon, 18 Apr 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d3032806-1563-38be-8508-8140e9398bf9</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What is that baby thinking? Alison Gopnik knows. She is the American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, writes for The New York Times, The Wall Street Journal, New Scientist, and Scientific American, and has appeared on The Colbert Show and given a TED talk. She has much to tell us about how studying children can inform the development of artificial general intelligence, and in part 1 you'll find out what babies are smarter than adults at!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-096-9452.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What is that baby thinking? Alison Gopnik knows. She is the American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, writes for <em>The New York Times, The Wall Street Journal, New Scientist,</em> and <em>Scientific American,</em> and has appeared on <em>The Colbert Show</em> and given a TED talk. She has much to tell us about how studying children can inform the development of artificial general intelligence, and in part 1 you'll find out what babies are smarter than adults at!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-096-9452.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/drciaf/096_AI-and-You.mp3" length="31939967" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What is that baby thinking? Alison Gopnik knows. She is the American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, writes for The New York Times, The Wall Street Journal, New Scientist, and Scientific American, and has appeared on The Colbert Show and given a TED talk. She has much to tell us about how studying children can inform the development of artificial general intelligence, and in part 1 you'll find out what babies are smarter than adults at!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1947</itunes:duration>
                <itunes:episode>96</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>095 - Guest: George Dyson, Computer Historian</title>
        <itunes:title>095 - Guest: George Dyson, Computer Historian</itunes:title>
        <link>https://aiandyou.net/e/095-guest-george-dyson-computer-historian/</link>
                    <comments>https://aiandyou.net/e/095-guest-george-dyson-computer-historian/#comments</comments>        <pubDate>Mon, 11 Apr 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/674ac2fd-c4f4-35b9-a8f1-9187653ff9bb</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Those who do not understand history are doomed to repeat it, right? Or maybe the problem is that we should be repeating some history that we’re not. My guest is George Dyson, master kayak builder, keynote speaker about the history of computing, and the author of <a href='https://www.amazon.com/Analogia-Emergence-Technology-Programmable-Control/dp/0374104867/'>Analogia: The Emergence of Technology Beyond Programmable Control</a>; <a href='https://www.amazon.com/Darwin-among-Machines-Evolution-Intelligence/dp/0465031625'>Darwin Among the Machines: The Evolution of Global Intelligence</a>; and <a href='https://www.amazon.com/Turings-Cathedral-Origins-Digital-Universe/dp/1400075998'>Turing’s Cathedral: The Origins of the Digital Universe</a>. Hear his stories about John von Neumann, Alan Turing, and why he thinks that what today's computer companies are missing out on is Analog Computing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-095-2634.pdf'>HumanCusp Blog</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Those who do not understand history are doomed to repeat it, right? Or maybe the problem is that we should be repeating some history that we’re not. My guest is George Dyson, master kayak builder, keynote speaker about the history of computing, and the author of <a href='https://www.amazon.com/Analogia-Emergence-Technology-Programmable-Control/dp/0374104867/'><em>Analogia: The Emergence of Technology Beyond Programmable Control</em></a>; <em><a href='https://www.amazon.com/Darwin-among-Machines-Evolution-Intelligence/dp/0465031625'>Darwin Among the Machines: The Evolution of Global Intelligence</a>;</em> and <a href='https://www.amazon.com/Turings-Cathedral-Origins-Digital-Universe/dp/1400075998'><em>Turing’s Cathedral: The Origins of the Digital Universe</em></a>. Hear his stories about John von Neumann, Alan Turing, and why he thinks that what today's computer companies are missing out on is Analog Computing.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/04/transcript-095-2634.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/yqiayy/095_AI-and-You.mp3" length="37736643" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Those who do not understand history are doomed to repeat it, right? Or maybe the problem is that we should be repeating some history that we’re not. My guest is George Dyson, master kayak builder, keynote speaker about the history of computing, and the author of Analogia: The Emergence of Technology Beyond Programmable Control; Darwin Among the Machines: The Evolution of Global Intelligence; and Turing’s Cathedral: The Origins of the Digital Universe. Hear his stories about John von Neumann, Alan Turing, and why he thinks that what today's computer companies are missing out on is Analog Computing.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2309</itunes:duration>
                <itunes:episode>95</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>094 - Guests: Hannah and Shea, Institute for Digital Humanity, part 2</title>
        <itunes:title>094 - Guests: Hannah and Shea, Institute for Digital Humanity, part 2</itunes:title>
        <link>https://aiandyou.net/e/094-guests-hannah-and-shea-institute-for-digital-humanity-part-2/</link>
                    <comments>https://aiandyou.net/e/094-guests-hannah-and-shea-institute-for-digital-humanity-part-2/#comments</comments>        <pubDate>Mon, 04 Apr 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e4b9c9bc-c5e7-3f3d-988d-1e3bc97f2b66</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Hannah Grubbs and Shea Sullivan are from the Institute for Digital Humanity in Minnesota, a bipartisan, cross-cultural, digital ethics think tank advocating for civil rights issues connected to advances in technology, and it is entirely student-founded and student-run.</p>
<p>Their mission is to bring the humanity back to our digital world, and they are extraordinarily productive, forging alliances with other universities, Netflix, the ACLU, and the ADL.</p>
<p>In this second part we talk about the Safety Not Surveillance coalition, the Living Textbook project, and what it's like to be involved in this kind of activism.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-094-5987.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Hannah Grubbs and Shea Sullivan are from the Institute for Digital Humanity in Minnesota, a bipartisan, cross-cultural, digital ethics think tank advocating for civil rights issues connected to advances in technology, and it is entirely <em>student-founded and student-run</em>.</p>
<p>Their mission is to bring the humanity back to our digital world, and they are extraordinarily productive, forging alliances with other universities, Netflix, the ACLU, and the ADL.</p>
<p>In this second part we talk about the Safety Not Surveillance coalition, the Living Textbook project, and what it's like to be involved in this kind of activism.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-094-5987.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qm4yus/094_AI-and-You.mp3" length="27642511" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Hannah Grubbs and Shea Sullivan are from the Institute for Digital Humanity in Minnesota, a bipartisan, cross-cultural, digital ethics think tank advocating for civil rights issues connected to advances in technology, and it is entirely student-founded and student-run.
Their mission is to bring the humanity back to our digital world, and they are extraordinarily productive, forging alliances with other universities, Netflix, the ACLU, and the ADL.
In this second part we talk about the Safety Not Surveillance coalition, the Living Textbook project, and what it's like to be involved in this kind of activism.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1678</itunes:duration>
                <itunes:episode>94</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>093 - Guests: Hannah and Shea, Institute for Digital Humanity, part 1</title>
        <itunes:title>093 - Guests: Hannah and Shea, Institute for Digital Humanity, part 1</itunes:title>
        <link>https://aiandyou.net/e/093-guests-hannah-and-shea-institute-for-digital-humanity-part-1/</link>
                    <comments>https://aiandyou.net/e/093-guests-hannah-and-shea-institute-for-digital-humanity-part-1/#comments</comments>        <pubDate>Mon, 28 Mar 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/782bed3d-572b-3337-b125-4bd1d5214012</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Hannah Grubbs and Shea Sullivan are from the <a href='https://www.institutefordigitalhumanity.org/'>Institute for Digital Humanity</a> in Minnesota, a bipartisan, cross-cultural, digital ethics think tank advocating for civil rights issues connected to advances in technology, and it is entirely student-founded and student-run.</p>
<p>Their mission is to bring the humanity back to our digital world, and they are extraordinarily productive, forging alliances with other universities, the ACLU, and the ADL.</p>
<p>In part 1 we talk about how the IDH got started, their work with Netflix on Coded Bias, and the IDH mission.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-093-6749.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Hannah Grubbs and Shea Sullivan are from the <a href='https://www.institutefordigitalhumanity.org/'>Institute for Digital Humanity</a> in Minnesota, a bipartisan, cross-cultural, digital ethics think tank advocating for civil rights issues connected to advances in technology, and it is entirely <em>student-founded and student-run</em>.</p>
<p>Their mission is to bring the humanity back to our digital world, and they are extraordinarily productive, forging alliances with other universities, the ACLU, and the ADL.</p>
<p>In part 1 we talk about how the IDH got started, their work with Netflix on <em>Coded Bias,</em> and the IDH mission.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-093-6749.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/twuxtt/093_AI-and-You.mp3" length="26990912" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Hannah Grubbs and Shea Sullivan are from the Institute for Digital Humanity in Minnesota, a bipartisan, cross-cultural, digital ethics think tank advocating for civil rights issues connected to advances in technology, and it is entirely student-founded and student-run.
Their mission is to bring the humanity back to our digital world, and they are extraordinarily productive, forging alliances with other universities, the ACLU, and the ADL.
In part 1 we talk about how the IDH got started, their work with Netflix on Coded Bias, and the IDH mission.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1637</itunes:duration>
                <itunes:episode>93</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>092 - Guest: Ben Shneiderman, Human-Centered AI Expert, part 2</title>
        <itunes:title>092 - Guest: Ben Shneiderman, Human-Centered AI Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/092-guest-ben-shneiderman-human-centered-ai-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/092-guest-ben-shneiderman-human-centered-ai-expert-part-2/#comments</comments>        <pubDate>Mon, 21 Mar 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8b575925-c989-3020-ac35-3b0b6256252f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We continue talking about human-centered AI design with the man who wrote the book on user interface design: Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory, and a member of the Institute for Advanced Computer Studies, all at the University of Maryland.</p>
<p>His new book, <a href='https://global.oup.com/academic/product/human-centered-ai-9780192845290?cc=ca&lang=en&'>Human-Centered AI</a>, was just published, and in this conclusion we talk about what it's like to get into this field, and the role of standards and governance in human-centered AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-092-8356.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We continue talking about human-centered AI design with the man who wrote the book on user interface design: Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory, and a member of the Institute for Advanced Computer Studies, all at the University of Maryland.</p>
<p>His new book, <a href='https://global.oup.com/academic/product/human-centered-ai-9780192845290?cc=ca&lang=en&'>Human-Centered AI</a>, was just published, and in this conclusion we talk about what it's like to get into this field, and the role of standards and governance in human-centered AI.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-092-8356.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/8fdp74/092_AI-and-You.mp3" length="30781384" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We continue talking about human-centered AI design with the man who wrote the book on user interface design: Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory, and a member of the Institute for Advanced Computer Studies, all at the University of Maryland.
His new book, Human-Centered AI, was just published, and in this conclusion we talk about what it's like to get into this field, and the role of standards and governance in human-centered AI.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1874</itunes:duration>
                <itunes:episode>92</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>091 - Guest: Ben Shneiderman, Human-Centered AI Expert, part 1</title>
        <itunes:title>091 - Guest: Ben Shneiderman, Human-Centered AI Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/091-guest-ben-shneiderman-human-centered-ai-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/091-guest-ben-shneiderman-human-centered-ai-expert-part-1/#comments</comments>        <pubDate>Mon, 14 Mar 2022 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/89407ffb-c0a3-3392-b5df-abffe096cbc4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Who better to answer the call for expertise in human-centered AI design than the man who wrote the book on user interface design? Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory, and a member of the Institute for Advanced Computer Studies, all at the University of Maryland, has received six honorary doctorates in human-computer interface design.</p>
<p>His new book, <a href='https://global.oup.com/academic/product/human-centered-ai-9780192845290?cc=ca&lang=en&'>Human-Centered AI</a>, was just published, and in this interview we talk about rationalism and empiricism in human-computer interaction, and metaphors in HCI, including his four metaphors for AI that empowers people.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-091-6110.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Who better to answer the call for expertise in human-centered AI design than the man who wrote the book on user interface design? Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory, and a member of the Institute for Advanced Computer Studies, all at the University of Maryland, has received six honorary doctorates in human-computer interface design.</p>
<p>His new book, <a href='https://global.oup.com/academic/product/human-centered-ai-9780192845290?cc=ca&lang=en&'>Human-Centered AI</a>, was just published, and in this interview we talk about rationalism and empiricism in human-computer interaction, and metaphors in HCI, including his four metaphors for AI that empowers people.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/03/transcript-091-6110.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/23hdbt/091_AI-and-You.mp3" length="31327239" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Who better to answer the call for expertise in human-centered AI design than the man who wrote the book on user interface design? Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory, and a member of the Institute for Advanced Computer Studies, all at the University of Maryland, has received six honorary doctorates in human-computer interface design.
His new book, Human-Centered AI, was just published, and in this interview we talk about rationalism and empiricism in human-computer interaction, and metaphors in HCI, including his four metaphors for AI that empowers people.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1909</itunes:duration>
                <itunes:episode>91</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>090 - Guest: David Danks, Professor of Philosophy and Data Science, part 2</title>
        <itunes:title>090 - Guest: David Danks, Professor of Philosophy and Data Science, part 2</itunes:title>
        <link>https://aiandyou.net/e/090-guest-david-danks-professor-of-philosophy-and-data-science-part-2/</link>
                    <comments>https://aiandyou.net/e/090-guest-david-danks-professor-of-philosophy-and-data-science-part-2/#comments</comments>        <pubDate>Mon, 07 Mar 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/760245e2-34ae-3b10-b7cb-18d8a61f46df</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>David Danks is a professor at UC San Diego working at the intersection of philosophy, cognitive science, and machine learning. He was previously the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University and has developed a visual architecture for cognition. He's very adept at explaining complex issues eloquently and approachably.</p>
<p>In this conclusion we talk about how social media platforms have created problems through irresponsible use of AI and algorithms, discuss some nuances of the legal issues surrounding that, and look at bias through David’s taxonomy of algorithmic bias.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-090-3970.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>David Danks is a professor at UC San Diego working at the intersection of philosophy, cognitive science, and machine learning. He was previously the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University and has developed a visual architecture for cognition. He's very adept at explaining complex issues eloquently and approachably.</p>
<p>In this conclusion we talk about how social media platforms have created problems through irresponsible use of AI and algorithms, discuss some nuances of the legal issues surrounding that, and look at bias through David’s taxonomy of algorithmic bias.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-090-3970.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/7cz23k/090_AI-and-You.mp3" length="34823467" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

David Danks is a professor at UC San Diego working at the intersection of philosophy, cognitive science, and machine learning. He was previously the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University and has developed a visual architecture for cognition. He's very adept at explaining complex issues eloquently and approachably.
In this conclusion we talk about how social media platforms have created problems through irresponsible use of AI and algorithms, discuss some nuances of the legal issues surrounding that, and look at bias through David’s taxonomy of algorithmic bias.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2127</itunes:duration>
                <itunes:episode>90</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>089 - Guest: David Danks, Professor of Philosophy and Data Science, part 1</title>
        <itunes:title>089 - Guest: David Danks, Professor of Philosophy and Data Science, part 1</itunes:title>
        <link>https://aiandyou.net/e/089-guest-david-danks-professor-of-philosophy-and-data-science-part-1/</link>
                    <comments>https://aiandyou.net/e/089-guest-david-danks-professor-of-philosophy-and-data-science-part-1/#comments</comments>        <pubDate>Mon, 28 Feb 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d80e03cb-f697-30ab-8f38-09c7d20e5ec7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>David Danks is a professor at UC San Diego working at the intersection of philosophy, cognitive science, and machine learning. He was previously the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University and has developed a visual architecture for cognition. He's very adept at explaining complex issues eloquently and approachably.</p>
<p>In this part we talk about how he got into AI from a start in philosophy, how machine learning, philosophy, and neuroscience intersect, and how we engage the public in meaningful efforts to make AI safe, ranging from regulation to psychology. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-089-4358.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>David Danks is a professor at UC San Diego working at the intersection of philosophy, cognitive science, and machine learning. He was previously the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University and has developed a visual architecture for cognition. He's very adept at explaining complex issues eloquently and approachably.</p>
<p>In this part we talk about how he got into AI from a start in philosophy, how machine learning, philosophy, and neuroscience intersect, and how we engage the public in meaningful efforts to make AI safe, ranging from regulation to psychology. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-089-4358.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/jthi76/089_AI-and-You.mp3" length="30098439" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

David Danks is a professor at UC San Diego working at the intersection of philosophy, cognitive science, and machine learning. He was previously the L.L. Thurstone Professor of Philosophy & Psychology at Carnegie Mellon University and has developed a visual architecture for cognition. He's very adept at explaining complex issues eloquently and approachably.
In this part we talk about how he got into AI from a start in philosophy, how machine learning, philosophy, and neuroscience intersect, and how we engage the public in meaningful efforts to make AI safe, ranging from regulation to psychology. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1832</itunes:duration>
                <itunes:episode>89</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>088 - Special Panel: AI in Music</title>
        <itunes:title>088 - Special Panel: AI in Music</itunes:title>
        <link>https://aiandyou.net/e/088-special-panel-ai-in-music/</link>
                    <comments>https://aiandyou.net/e/088-special-panel-ai-in-music/#comments</comments>        <pubDate>Mon, 21 Feb 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/54ea992c-dd46-3877-98f9-63fb58143358</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We're focusing on AI in music: What's the state of the art in AI music composition, how can human composers use it to their advantage, and what is the AI Song Contest? How do musical AIs surprise their creators and how are they like your grandmother trying to explain death metal?</p>
<p>We have a special panel from around the world to help with this one! Bob Sturm is Associate Professor of Computer Science at KTH Royal Institute of Technology in Stockholm, Sweden. Dorien Herremans is an Assistant Professor at Singapore University of Technology and Design. Hendrik Vincent Koops is an AI researcher and composer, holding degrees in Sound Design and Music Composition from the HKU University of the Arts Utrecht, and is co-organizer of the AI Song Contest.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-088-6736.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We're focusing on AI in music: What's the state of the art in AI music composition, how can human composers use it to their advantage, and what is the AI Song Contest? How do musical AIs surprise their creators and how are they like your grandmother trying to explain death metal?</p>
<p>We have a special panel from around the world to help with this one! Bob Sturm is Associate Professor of Computer Science at KTH Royal Institute of Technology in Stockholm, Sweden. Dorien Herremans is an Assistant Professor at Singapore University of Technology and Design. Hendrik Vincent Koops is an AI researcher and composer, holding degrees in Sound Design and Music Composition from the HKU University of the Arts Utrecht, and is co-organizer of the AI Song Contest.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-088-6736.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/rfrmer/088_AI-and-You.mp3" length="48947562" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We're focusing on AI in music: What's the state of the art in AI music composition, how can human composers use it to their advantage, and what is the AI Song Contest? How do musical AIs surprise their creators and how are they like your grandmother trying to explain death metal?
We have a special panel from around the world to help with this one! Bob Sturm is Associate Professor of Computer Science at KTH Royal Institute of Technology in Stockholm, Sweden. Dorien Herremans is an Assistant Professor at Singapore University of Technology and Design. Hendrik Vincent Koops is an AI researcher and composer, holding degrees in Sound Design and Music Composition from the HKU University of the Arts Utrecht, and is co-organizer of the AI Song Contest.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3010</itunes:duration>
                <itunes:episode>88</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>087 - Guest: Stuart Russell, AI professor, author, activist, part 2</title>
        <itunes:title>087 - Guest: Stuart Russell, AI professor, author, activist, part 2</itunes:title>
        <link>https://aiandyou.net/e/087-guest-stuart-russell-ai-professor-author-activist-part-2/</link>
                    <comments>https://aiandyou.net/e/087-guest-stuart-russell-ai-professor-author-activist-part-2/#comments</comments>        <pubDate>Mon, 14 Feb 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8ee4aec8-c07a-3ce6-8add-850aedf2b7f0</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Part 2 of our interview with Stuart Russell, OBE, professor of AI at UC Berkeley, author of both the <a href='https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd-dp-0136042597/dp/0136042597/'>standard textbook on AI</a> and the 2019 book <a href='https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/'>Human Compatible: Artificial Intelligence and the Problem of Control</a>. You may know him as the BBC's 2021 <a href='https://www.bbc.co.uk/programmes/b00729d9'>Reith Lecturer</a> on artificial intelligence. Stuart is a prominent voice in the public debate about AI risk and the spokesperson for <a href='https://www.youtube.com/watch?v=9CO6M2HsoIA'>Slaughterbots</a>, the viral video commentary on lethal autonomous weapons. This week we talk about what it means for an AI to understand something – or not; reasoning chains; and exciting new research into artificial general intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-087-9358.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Part 2 of our interview with Stuart Russell, OBE, professor of AI at UC Berkeley, author of both the <a href='https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd-dp-0136042597/dp/0136042597/'>standard textbook on AI</a> and the 2019 book <a href='https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/'><em>Human Compatible: Artificial Intelligence and the Problem of Control</em></a>. You may know him as the BBC's 2021 <a href='https://www.bbc.co.uk/programmes/b00729d9'>Reith Lecturer</a> on artificial intelligence. Stuart is a prominent voice in the public debate about AI risk and the spokesperson for <a href='https://www.youtube.com/watch?v=9CO6M2HsoIA'>Slaughterbots</a>, the viral video commentary on lethal autonomous weapons. This week we talk about what it means for an AI to understand something – or not; reasoning chains; and exciting new research into artificial general intelligence.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-087-9358.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/65piqb/087_AI-and-You.mp3" length="29967200" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Part 2 of our interview with Stuart Russell, OBE, professor of AI at UC Berkeley, author of both the standard textbook on AI and the 2019 book Human Compatible: Artificial Intelligence and the Problem of Control. You may know him as the BBC's 2021 Reith Lecturer on artificial intelligence. Stuart is a prominent voice in the public debate about AI risk and spokesperson for Slaughterbots, the viral video commentary on lethal autonomous weapons. This week we talk about what it means for an AI to understand something – or not; reasoning chains; and exciting new research into artificial general intelligence.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1824</itunes:duration>
                <itunes:episode>87</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>086 - Guest: Stuart Russell, AI professor, author, activist, part 1</title>
        <itunes:title>086 - Guest: Stuart Russell, AI professor, author, activist, part 1</itunes:title>
        <link>https://aiandyou.net/e/086-guest-stuart-russell-ai-professor-author-activist-part-1/</link>
                    <comments>https://aiandyou.net/e/086-guest-stuart-russell-ai-professor-author-activist-part-1/#comments</comments>        <pubDate>Mon, 07 Feb 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/41aace1a-77cf-3f87-8a39-17e6da3f3649</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Stuart Russell, professor of AI at UC Berkeley, author of both the <a href='https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd-dp-0136042597/dp/0136042597/'>standard textbook on AI</a> and the 2019 book <a href='https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/'>Human Compatible: Artificial Intelligence and the Problem of Control</a>, is my guest this week. You may know him as the BBC's 2021 <a href='https://www.bbc.co.uk/programmes/b00729d9'>Reith Lecturer</a> on artificial intelligence. Queen Elizabeth knows him as a 2021 recipient of the Order of the British Empire. Stuart is a prominent voice in the public side of the AI risk conversation.</p>
<p>So much to talk about! In part 1 we get into the lethal autonomous weapons debate; you may have seen Stuart in his role as activist and spokesperson for the viral video <a href='https://www.youtube.com/watch?v=9CO6M2HsoIA'>Slaughterbots</a>. We also talk about the public narrative on AI risk and help you make sense of the apparently contradictory positions there.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-086-0436.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Stuart Russell, professor of AI at UC Berkeley, author of both the <a href='https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd-dp-0136042597/dp/0136042597/'>standard textbook on AI</a> and the 2019 book <a href='https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/'><em>Human Compatible: Artificial Intelligence and the Problem of Control</em></a>, is my guest this week. You may know him as the BBC's 2021 <a href='https://www.bbc.co.uk/programmes/b00729d9'>Reith Lecturer</a> on artificial intelligence. Queen Elizabeth knows him as a 2021 recipient of the Order of the British Empire. Stuart is a prominent voice in the public side of the AI risk conversation.</p>
<p>So much to talk about! In part 1 we get into the lethal autonomous weapons debate; you may have seen Stuart in his role as activist and spokesperson for the viral video <a href='https://www.youtube.com/watch?v=9CO6M2HsoIA'>Slaughterbots</a>. We also talk about the public narrative on AI risk and help you make sense of the apparently contradictory positions there.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/02/transcript-086-0436.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/m8b63m/086_AI-and-You.mp3" length="31648649" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Stuart Russell, professor of AI at UC Berkeley, author of both the standard textbook on AI and the 2019 book Human Compatible: Artificial Intelligence and the Problem of Control, is my guest this week. You may know him as the BBC's 2021 Reith Lecturer on artificial intelligence. Queen Elizabeth knows him as a 2021 recipient of the Order of the British Empire. Stuart is a prominent voice in the public side of the AI risk conversation.
So much to talk about! In part 1 we get into the lethal autonomous weapons debate; you may have seen Stuart in his role as activist and spokesperson for the viral video Slaughterbots. We also talk about the public narrative on AI risk and help you make sense of the apparently contradictory positions there.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1929</itunes:duration>
                <itunes:episode>86</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>085 - Guest: David Brin, Science Fiction Author and Scientist, part 2</title>
        <itunes:title>085 - Guest: David Brin, Science Fiction Author and Scientist, part 2</itunes:title>
        <link>https://aiandyou.net/e/085-guest-david-brin-science-fiction-author-and-scientist-part-2/</link>
                    <comments>https://aiandyou.net/e/085-guest-david-brin-science-fiction-author-and-scientist-part-2/#comments</comments>        <pubDate>Mon, 31 Jan 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/537ae0a0-ffd7-36b0-bed4-6bb340874054</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>See AI and the future of humanity through the eyes of an extraordinary thinker, science fiction author and scientist David Brin. Author of bestsellers such as Earth, Existence, Foundation's Triumph, and the Uplift series, David also has a PhD in astronomy and consults to organizations ranging from the CIA to Google.</p>
<p>What he does for them is get them to open their minds to new possibilities, and that's what he does for us, talking this week about how to restructure government to avoid AI seizing control in the future, how to fix accountability on the Internet, and... The Lion King. He also has a special message for superintelligent AIs listening now or in the future.</p>
<p>He provided links for some of the things we talked about, which are also in the transcript:</p>
<ul><li><a href='https://www.davidbrin.com/foundationstriumph.html'>Foundation’s Triumph</a>, <a href='https://www.davidbrin.com/existence.html'>Existence</a>, <a href='https://www.davidbrin.com/kilnpeople.html'>Kiln People</a>, <a href='https://www.davidbrin.com/vividtomorrows.html'>Vivid Tomorrows</a></li>
<li>Essay on <a href='http://davidbrin.blogspot.com/2018/06/central-control-over-ai-and-everything.html'>Central Control over AI</a></li>
<li>Micropayments instead of advertising: <a href='https://evonomics.com/advertising-cannot-maintain-internet-heres-solution/'>Here</a> and <a href='http://evonomics.com/beyond-advertising-micropayments-sustain-new-internet/'>here</a>.</li>
<li><a href='https://uclaschooloflaw-my.sharepoint.com/:v:/g/personal/pulse_law_ucla_edu/'>A fascinating discussion of the issue with a trio of brilliant law professors</a></li>
<li><a href='https://www.davidbrin.com/nonfiction/waronexpertise.html'>The War on Folks Who Know Stuff</a></li>
</ul>
<p>It's a smörgåsbord for the mind in this conclusion of our interview.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/01/transcript-085-8314.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>See AI and the future of humanity through the eyes of an extraordinary thinker, science fiction author and scientist David Brin. Author of bestsellers such as <em>Earth</em>, <em>Existence</em>, <em>Foundation's Triumph,</em> and the <em>Uplift </em>series, David also has a PhD in astronomy and consults to organizations ranging from the CIA to Google.</p>
<p>What he does for them is get them to open their minds to new possibilities, and that's what he does for us, talking this week about how to restructure government to avoid AI seizing control in the future, how to fix accountability on the Internet, and... The Lion King. He also has a special message for superintelligent AIs listening now or in the future.</p>
<p>He provided links for some of the things we talked about, which are also in the transcript:</p>
<ul><li><a href='https://www.davidbrin.com/foundationstriumph.html'><em>Foundation’s Triumph</em></a>, <a href='https://www.davidbrin.com/existence.html'><em>Existence</em></a>, <a href='https://www.davidbrin.com/kilnpeople.html'><em>Kiln People</em></a>, <a href='https://www.davidbrin.com/vividtomorrows.html'><em>Vivid Tomorrows</em></a></li>
<li>Essay on <a href='http://davidbrin.blogspot.com/2018/06/central-control-over-ai-and-everything.html'>Central Control over AI</a></li>
<li>Micropayments instead of advertising: <a href='https://evonomics.com/advertising-cannot-maintain-internet-heres-solution/'>Here</a> and <a href='http://evonomics.com/beyond-advertising-micropayments-sustain-new-internet/'>here</a>.</li>
<li><a href='https://uclaschooloflaw-my.sharepoint.com/:v:/g/personal/pulse_law_ucla_edu/'>A fascinating discussion of the issue with a trio of brilliant law professors</a></li>
<li><a href='https://www.davidbrin.com/nonfiction/waronexpertise.html'>The War on Folks Who Know Stuff</a></li>
</ul>
<p>It's a smörgåsbord for the mind in this conclusion of our interview.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/01/transcript-085-8314.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/twykzt/085_AI-and-You.mp3" length="33866759" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

See AI and the future of humanity through the eyes of an extraordinary thinker, science fiction author and scientist David Brin. Author of bestsellers such as Earth, Existence, Foundation's Triumph, and the Uplift series, David also has a PhD in astronomy and consults to organizations ranging from the CIA to Google.
What he does for them is get them to open their minds to new possibilities, and that's what he does for us, talking this week about how to restructure government to avoid AI seizing control in the future, how to fix accountability on the Internet, and... The Lion King. He also has a special message for superintelligent AIs listening now or in the future.
He provided links for some of the things we talked about, which are also in the transcript:
Foundation’s Triumph, Existence, Kiln People, Vivid Tomorrows
Essay on Central Control over AI
Micropayments instead of advertising: Here and here.
A fascinating discussion of the issue with a trio of brilliant law professors
The War on Folks Who Know Stuff
It's a smörgåsbord for the mind in this conclusion of our interview.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2067</itunes:duration>
                <itunes:episode>85</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>084 - Guest: David Brin, Science Fiction Author and Scientist, part 1</title>
        <itunes:title>084 - Guest: David Brin, Science Fiction Author and Scientist, part 1</itunes:title>
        <link>https://aiandyou.net/e/084-guest-david-brin-science-fiction-author-and-scientist-part-1/</link>
                    <comments>https://aiandyou.net/e/084-guest-david-brin-science-fiction-author-and-scientist-part-1/#comments</comments>        <pubDate>Mon, 24 Jan 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b64bdca7-f2ea-3e4f-9990-c7e88bb59a57</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>See AI and the future of humanity through the eyes of an extraordinary thinker: Science fiction author and scientist David Brin is here to expand our minds with his creative and thought-provoking insights. Author of bestsellers such as The Postman, Earth, Existence, and the Uplift series, David also has a PhD in astronomy and consults to organizations ranging from the CIA to Google.</p>
<p>What he does for them is get them to open their minds to new possibilities, and that's what he does for us, talking about the role of science fiction, where the real danger from AI will emerge and why, how superintelligence was treated in <a href='https://www.davidbrin.com/foundationstriumph.html'>Foundation's Triumph</a>, his sequel to Asimov's series, and the systems of power that control our future today. Yes, it's a lot to take in!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/01/transcript-084-2975.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>See AI and the future of humanity through the eyes of an extraordinary thinker: Science fiction author and scientist David Brin is here to expand our minds with his creative and thought-provoking insights. Author of bestsellers such as <em>The Postman</em>, <em>Earth</em>, <em>Existence</em>, and the <em>Uplift </em>series, David also has a PhD in astronomy and consults to organizations ranging from the CIA to Google.</p>
<p>What he does for them is get them to open their minds to new possibilities, and that's what he does for us, talking about the role of science fiction, where the real danger from AI will emerge and why, how superintelligence was treated in <a href='https://www.davidbrin.com/foundationstriumph.html'><em>Foundation's Triumph</em></a>, his sequel to Asimov's series, and the systems of power that control our future today. Yes, it's a lot to take in!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/01/transcript-084-2975.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/m6i56c/084_AI-and-You.mp3" length="34813018" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

See AI and the future of humanity through the eyes of an extraordinary thinker: Science fiction author and scientist David Brin is here to expand our minds with his creative and thought-provoking insights. Author of bestsellers such as The Postman, Earth, Existence, and the Uplift series, David also has a PhD in astronomy and consults to organizations ranging from the CIA to Google.
What he does for them is get them to open their minds to new possibilities, and that's what he does for us, talking about the role of science fiction, where the real danger from AI will emerge and why, how superintelligence was treated in Foundation's Triumph, his sequel to Asimov's series, and the systems of power that control our future today. Yes, it's a lot to take in!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2126</itunes:duration>
                <itunes:episode>84</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>083 - Guest: René Morkos, AI for Construction CEO</title>
        <itunes:title>083 - Guest: René Morkos, AI for Construction CEO</itunes:title>
        <link>https://aiandyou.net/e/083-guest-rene%cc%81-morkos-ai-for-construction-ceo/</link>
                    <comments>https://aiandyou.net/e/083-guest-rene%cc%81-morkos-ai-for-construction-ceo/#comments</comments>        <pubDate>Mon, 17 Jan 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a2bfca13-2138-312b-87b0-cd11f2dedd18</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>AI continues to penetrate more and more into our daily lives, including activities that are in the background for most of us - like construction. Have you ever looked at a construction site and thought, "Surely there's a way there could be more people working on this at once?" Well, René Morkos, founder and CEO of <a href='https://Alicetechnologies.com'>Alice Technologies</a>, did just that, and then wrote a PhD thesis on using AI to solve that problem. Learn with me how AI is changing construction.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/01/transcript-083-8664.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>AI continues to penetrate more and more into our daily lives, including activities that are in the background for most of us - like construction. Have you ever looked at a construction site and thought, "Surely there's a way there could be more people working on this at once?" Well, René Morkos, founder and CEO of <a href='https://Alicetechnologies.com'>Alice Technologies</a>, did just that, and then wrote a PhD thesis on using AI to solve that problem. Learn with me how AI is changing construction.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2022/01/transcript-083-8664.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/35b73z/083_AI-and-You.mp3" length="34759102" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

AI continues to penetrate more and more into our daily lives, including activities that are in the background for most of us - like construction. Have you ever looked at a construction site and thought, "Surely there's a way there could be more people working on this at once?" Well, René Morkos, founder and CEO of Alice Technologies, did just that, and then wrote a PhD thesis on using AI to solve that problem. Learn with me how AI is changing construction.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2123</itunes:duration>
                <itunes:episode>83</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>082 - Guest: Kush Varshney, AI Trustworthiness Research Scientist</title>
        <itunes:title>082 - Guest: Kush Varshney, AI Trustworthiness Research Scientist</itunes:title>
        <link>https://aiandyou.net/e/082-guest-kush-varshney-ai-trustworthiness-research-scientist/</link>
                    <comments>https://aiandyou.net/e/082-guest-kush-varshney-ai-trustworthiness-research-scientist/#comments</comments>        <pubDate>Mon, 10 Jan 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5b0ed8f9-177d-369e-a7dc-7a7df15de9a8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Never mind fantasies about Skynet and Terminators; how to trust AI is a real issue right now, as AI is used in life-impacting decisions like medical diagnoses and loan granting. Kush Varshney has a PhD from MIT and is a distinguished researcher at IBM’s Thomas J. Watson Research Center in New York, where he leads the machine learning group in the Foundations of Trustworthy AI department. He is the author of the book <a href='http://www.trustworthymachinelearning.com'>Trustworthy Machine Learning</a>. We talk about the whole ecosystem of trustworthiness, finding out where it goes in areas like privacy, anonymization, regulation, compliance, and oversight.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-082-3458.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Never mind fantasies about Skynet and Terminators; how to trust AI is a real issue right now, as AI is used in life-impacting decisions like medical diagnoses and loan granting. Kush Varshney has a PhD from MIT and is a distinguished researcher at IBM’s Thomas J. Watson Research Center in New York, where he leads the machine learning group in the Foundations of Trustworthy AI department. He is the author of the book <a href='http://www.trustworthymachinelearning.com'><em>Trustworthy Machine Learning</em></a>. We talk about the whole ecosystem of trustworthiness, finding out where it goes in areas like privacy, anonymization, regulation, compliance, and oversight.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-082-3458.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/564jpj/082_AI-and-You.mp3" length="28496819" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Never mind fantasies about Skynet and Terminators; how to trust AI is a real issue right now, as AI is used in life-impacting decisions like medical diagnoses and loan granting. Kush Varshney has a PhD from MIT and is a distinguished researcher at IBM’s Thomas J. Watson Research Center in New York, where he leads the machine learning group in the Foundations of Trustworthy AI department. He is the author of the book Trustworthy Machine Learning. We talk about the whole ecosystem of trustworthiness, finding out where it goes in areas like privacy, anonymization, regulation, compliance, and oversight.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1732</itunes:duration>
                <itunes:episode>82</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>081 - Guest: Tannya Jajal, AI Innovator and Author</title>
        <itunes:title>081 - Guest: Tannya Jajal, AI Innovator and Author</itunes:title>
        <link>https://aiandyou.net/e/081-guest-tannya-jajal-ai-innovator-and-author/</link>
                    <comments>https://aiandyou.net/e/081-guest-tannya-jajal-ai-innovator-and-author/#comments</comments>        <pubDate>Mon, 03 Jan 2022 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3afce402-3987-3669-951c-dd32b43f61c2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Joining us from Dubai is Tannya Jajal, keynote speaker, AI futurist, and UAE Chapter Lead for the Global Women in Tech Movement. She is a resource manager at VMware, a technology contributor at Forbes Middle East, and author of the new book, <a href='https://www.tannyajajal.com/'>Thinking Machines: AI and the Intelligence Explosion</a>.</p>
<p>I invited Tannya to the podcast after running into her on two different AI panels in different countries on the same day (virtually!). We talk about how AI is being taught and developed in the worlds of women, children, and the Middle East.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-081-2349.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Joining us from Dubai is Tannya Jajal, keynote speaker, AI futurist, and UAE Chapter Lead for the Global Women in Tech Movement. She is a resource manager at VMware, a technology contributor at Forbes Middle East, and author of the new book, <a href='https://www.tannyajajal.com/'><em>Thinking Machines: AI and the Intelligence Explosion</em></a>.</p>
<p>I invited Tannya to the podcast after running into her on two different AI panels in different countries on the same day (virtually!). We talk about how AI is being taught and developed in the worlds of women, children, and the Middle East.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-081-2349.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nwuzzj/081_AI-and-You.mp3" length="32522616" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Joining us from Dubai is Tannya Jajal, keynote speaker, AI futurist, and UAE Chapter Lead for the Global Women in Tech Movement. She is a resource manager at VMware, a technology contributor at Forbes Middle East, and author of the new book, Thinking Machines: AI and the Intelligence Explosion.
I invited Tannya to the podcast after running into her on two different AI panels in different countries on the same day (virtually!). We talk about how AI is being taught and developed in the worlds of women, children, and the Middle East.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1983</itunes:duration>
                <itunes:episode>81</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>080 - Special Panel: AI Predictions for 2022</title>
        <itunes:title>080 - Special Panel: AI Predictions for 2022</itunes:title>
        <link>https://aiandyou.net/e/080-special-panel-ai-predictions-for-2022/</link>
                    <comments>https://aiandyou.net/e/080-special-panel-ai-predictions-for-2022/#comments</comments>        <pubDate>Mon, 27 Dec 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/bed12601-832f-32d2-9854-f752cbac3bc3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>On our last show of 2021, it's time to think about the year ahead, and for that I have a panel of amazing experts!</p>
<ul><li><a href='https://fosterfletcher.com'>Richard Foster-Fletcher</a>, founder of <a href='https://mkai.org'>MKAI</a>, the inclusive Artificial Intelligence Community, advisor to the United Nations Environment Programme and UN Framework Convention on Climate Change;</li>
<li><a href='https://goertzel.org'>Ben Goertzel</a>, chief scientist of Hanson Robotics and author of <a href='https://www.amazon.com/Ten-Years-Singularity-Really-Try/dp/1505550823/'>Ten Years To the Singularity If We Really Really Try</a>;</li>
<li><a href='https://champions-speakers.co.uk/speaker-agent/katie-king'>Katie King</a>, speaker and marketing consultant, and author of the 2022 book <a href='https://www.amazon.com/Strategy-Sales-Marketing-Connecting-Experience/dp/1398602027'>AI Strategy for Sales and Marketing: Connecting Marketing, Sales and Customer Experience</a>;</li>
<li>Prashant Natarajan, VP of Strategy & Products at H2O.ai and author of <a href='https://www.amazon.com/Demystifying-AI-Enterprise-Playbook-Transformation-ebook/dp/B09MSRHWFK/'>Demystifying AI for the Enterprise</a>.</li>
</ul>
<p>It's a free-ranging, free-spirited free-for-all as we talk about the past and future trajectory of COVID effects on technology adoption, the evolution of attitudes towards, and equality in, AI, changes in the environmental impact of AI, and more. This is high-quality thinking from people at the coal face of the industry, giving you their best shots at information you can use for the year ahead. Happy New Year!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-080-5294.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>On our last show of 2021, it's time to think about the year ahead, and for that I have a panel of amazing experts!</p>
<ul><li><a href='https://fosterfletcher.com'>Richard Foster-Fletcher</a>, founder of <a href='https://mkai.org'>MKAI</a>, the inclusive Artificial Intelligence Community, advisor to the United Nations Environmental Programme and UN Framework Convention on Climate Change;</li>
<li><a href='https://goertzel.org'>Ben Goertzel</a>, chief scientist of Hanson Robotics and author of <a href='https://www.amazon.com/Ten-Years-Singularity-Really-Try/dp/1505550823/'>Ten Years To the Singularity If We Really Really Try</a>;</li>
<li><a href='https://champions-speakers.co.uk/speaker-agent/katie-king'>Katie King</a>, speaker and marketing consultant, and author of the 2022 book <a href='https://www.amazon.com/Strategy-Sales-Marketing-Connecting-Experience/dp/1398602027'><em>AI Strategy for Sales and Marketing: Connecting Marketing, Sales and Customer Experience</em></a>;</li>
<li>Prashant Natarajan, VP of Strategy & Products at H2O.ai and author of <a href='https://www.amazon.com/Demystifying-AI-Enterprise-Playbook-Transformation-ebook/dp/B09MSRHWFK/'><em>Demystifying AI for the Enterprise</em></a>.</li>
</ul>
<p>It's a free-ranging, free-spirited, free-for-all as we talk about the past and future trajectory of COVID effects on technology adoption, the evolution of attitudes towards, and equality in, AI, changes in the environmental impact of AI, and more. This is high-quality thinking from people at the coal face of the industry, giving you their best shots at information you can use for the year ahead. Happy New Year!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-080-5294.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/taimcz/080_AI-and-You.mp3" length="53068222" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

On our last show of 2021, it's time to think about the year ahead, and for that I have a panel of amazing experts!
Richard Foster-Fletcher, founder of MKAI, the inclusive Artificial Intelligence Community, advisor to the United Nations Environmental Programme and UN Framework Convention on Climate Change;
Ben Goertzel, chief scientist of Hanson Robotics and author of Ten Years To the Singularity If We Really Really Try;
Katie King, speaker and marketing consultant, and author of the 2022 book AI Strategy for Sales and Marketing: Connecting Marketing, Sales and Customer Experience;
Prashant Natarajan, VP of Strategy & Products at H2O.ai and author of Demystifying AI for the Enterprise.
It's a free-ranging, free-spirited, free-for-all as we talk about the past and future trajectory of COVID effects on technology adoption, the evolution of attitudes towards, and equality in, AI, changes in the environmental impact of AI, and more. This is high-quality thinking from people at the coal face of the industry, giving you their best shots at information you can use for the year ahead. Happy New Year!
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3267</itunes:duration>
                <itunes:episode>80</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>079 - Guest: John Zerilli, Cognitive Science Philosopher, part 2</title>
        <itunes:title>079 - Guest: John Zerilli, Cognitive Science Philosopher, part 2</itunes:title>
        <link>https://aiandyou.net/e/079-guest-john-zerilli-cognitive-science-philosopher-part-2/</link>
                    <comments>https://aiandyou.net/e/079-guest-john-zerilli-cognitive-science-philosopher-part-2/#comments</comments>        <pubDate>Mon, 20 Dec 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c2f45cd1-c3c2-3cb2-b3b5-558d24e326a8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of The Adaptable Mind; <a href='https://www.law.ox.ac.uk/people/john-zerilli'>John Zerilli</a> is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.</p>
<p>We talk about his 2021 book <a href='https://www.amazon.com/Citizens-Guide-Artificial-Intelligence-ebook/dp/B08941NQ6Y'>A Citizen's Guide to Artificial Intelligence</a>, which spells out the categories in which we should pay attention. In part 2, we'll be talking about bias, how education should address AI, and more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-079-0457.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of <em>The Adaptable Mind;</em> <a href='https://www.law.ox.ac.uk/people/john-zerilli'>John Zerilli</a> is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.</p>
<p>We talk about his 2021 book <a href='https://www.amazon.com/Citizens-Guide-Artificial-Intelligence-ebook/dp/B08941NQ6Y'><em>A Citizen's Guide to Artificial Intelligence</em></a>, which spells out the categories in which we should pay attention. In part 2, we'll be talking about bias, how education should address AI, and more.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-079-0457.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/jei5ji/079_AI-and-You.mp3" length="32500032" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of The Adaptable Mind; John Zerilli is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.
We talk about his 2021 book A Citizen's Guide to Artificial Intelligence, which spells out the categories in which we should pay attention. In part 2, we'll be talking about bias, how education should address AI, and more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1982</itunes:duration>
                <itunes:episode>79</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>078 - Guest: John Zerilli, Cognitive Science Philosopher, part 1</title>
        <itunes:title>078 - Guest: John Zerilli, Cognitive Science Philosopher, part 1</itunes:title>
        <link>https://aiandyou.net/e/078-guest-john-zerilli-cognitive-science-philosopher-part-1/</link>
                    <comments>https://aiandyou.net/e/078-guest-john-zerilli-cognitive-science-philosopher-part-1/#comments</comments>        <pubDate>Mon, 13 Dec 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/6a00e910-4d15-33a2-aef6-87b0e3ccbf3c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of The Adaptable Mind; <a href='https://www.law.ox.ac.uk/people/john-zerilli'>John Zerilli</a> is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.</p>
<p>We talk about his 2021 book <a href='https://www.amazon.com/Citizens-Guide-Artificial-Intelligence-ebook/dp/B08941NQ6Y'>A Citizen's Guide to Artificial Intelligence</a>, which spells out the categories in which we should pay attention. In part 1, we'll be talking about what he wants to achieve with the book and how some of those issues, in particular, privacy, impact the average person today.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-078-1542.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of <em>The Adaptable Mind;</em> <a href='https://www.law.ox.ac.uk/people/john-zerilli'>John Zerilli</a> is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.</p>
<p>We talk about his 2021 book <a href='https://www.amazon.com/Citizens-Guide-Artificial-Intelligence-ebook/dp/B08941NQ6Y'><em>A Citizen's Guide to Artificial Intelligence</em></a>, which spells out the categories in which we should pay attention. In part 1, we'll be talking about what he wants to achieve with the book and how some of those issues, in particular, privacy, impact the average person today.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/12/transcript-078-1542.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ab9s6i/078_AI-and-You.mp3" length="25018145" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What do you, a citizen, need to know and do about AI in your life now and in the future? Enter the author of The Adaptable Mind; John Zerilli is a philosopher, a Leverhulme Fellow at the University of Oxford, a Research Associate in the Oxford Institute for Ethics in AI, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.
We talk about his 2021 book A Citizen's Guide to Artificial Intelligence, which spells out the categories in which we should pay attention. In part 1, we'll be talking about what he wants to achieve with the book and how some of those issues, in particular, privacy, impact the average person today.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1514</itunes:duration>
                <itunes:episode>78</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>077 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Media Producer, part 2</title>
        <itunes:title>077 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Media Producer, part 2</itunes:title>
        <link>https://aiandyou.net/e/077-guest-alexandra-mousavizadeh-strategic-intelligence-media-producer-part-2/</link>
                    <comments>https://aiandyou.net/e/077-guest-alexandra-mousavizadeh-strategic-intelligence-media-producer-part-2/#comments</comments>        <pubDate>Mon, 06 Dec 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/477c1d68-2736-3068-a611-c10d2c5359ef</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The <a href='https://www.tortoisemedia.com/intelligence/global-ai/'>Global AI Index</a>, the <a href='https://www.tortoisemedia.com/intelligence/responsibility/'>Responsibility100</a> Index, and the <a href='https://disinformationindex.org/'>Global Disinformation Index</a>. As Director of the Tortoise Intelligence team, she offers insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals.</p>
<p>In part 2, we talk about China's race for AI gold and the Global Disinformation Index.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-077-7581.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The <a href='https://www.tortoisemedia.com/intelligence/global-ai/'>Global AI Index</a>, the <a href='https://www.tortoisemedia.com/intelligence/responsibility/'>Responsibility100</a> Index, and the <a href='https://disinformationindex.org/'>Global Disinformation Index</a>. As Director of the Tortoise Intelligence team, she offers insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals.</p>
<p>In part 2, we talk about China's race for AI gold and the Global Disinformation Index.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-077-7581.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nr86sk/077_AI-and-You.mp3" length="24085678" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The Global AI Index, the Responsibility100 Index, and the Global Disinformation Index. As Director of the Tortoise Intelligence team, she offers insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals.
In part 2, we talk about China's race for AI gold and the Global Disinformation Index.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1456</itunes:duration>
                <itunes:episode>77</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>076 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Media Producer, part 1</title>
        <itunes:title>076 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Media Producer, part 1</itunes:title>
        <link>https://aiandyou.net/e/076-guest-alexandra-mousavizadeh-strategic-intelligence-media-producer-part-1/</link>
                    <comments>https://aiandyou.net/e/076-guest-alexandra-mousavizadeh-strategic-intelligence-media-producer-part-1/#comments</comments>        <pubDate>Mon, 29 Nov 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/08c51354-30b0-3782-b3ac-c24529c5980c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The <a href='https://www.tortoisemedia.com/intelligence/global-ai/'>Global AI Index</a>, the <a href='https://www.tortoisemedia.com/intelligence/responsibility/'>Responsibility100</a> Index, and the <a href='https://disinformationindex.org/'>Global Disinformation Index</a>. As Director of the Tortoise Intelligence team, she offers insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals.</p>
<p>In part 1, we talk about the methodologies behind the indexes, their relationship to ESG, inequality, and the labor markets.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-076-7506.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The <a href='https://www.tortoisemedia.com/intelligence/global-ai/'>Global AI Index</a>, the <a href='https://www.tortoisemedia.com/intelligence/responsibility/'>Responsibility100</a> Index, and the <a href='https://disinformationindex.org/'>Global Disinformation Index</a>. As Director of the Tortoise Intelligence team, she offers insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals.</p>
<p>In part 1, we talk about the methodologies behind the indexes, their relationship to ESG, inequality, and the labor markets.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-076-7506.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/i9itaa/076_AI-and-You.mp3" length="31895663" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How could you know which countries are doing a good job with artificial intelligence, and which sites are the worst disinformation spreaders? Sounds incredibly useful but impossible to figure out, right? Alexandra Mousavizadeh of Tortoise Media in London has founded global indexes that answer those and other questions: The Global AI Index, the Responsibility100 Index, and the Global Disinformation Index. As Director of the Tortoise Intelligence team, she offers insights into geopolitical and industry conflicts and the state of the art that are highly prized by governments and multinationals.
In part 1, we talk about the methodologies behind the indexes, their relationship to ESG, inequality, and the labor markets.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1944</itunes:duration>
                <itunes:episode>76</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>075 - Guest: Michael Hind, IBM AI Explainability Expert, part 2</title>
        <itunes:title>075 - Guest: Michael Hind, IBM AI Explainability Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/075-guest-michael-hind-ibm-ai-explainability-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/075-guest-michael-hind-ibm-ai-explainability-expert-part-2/#comments</comments>        <pubDate>Mon, 22 Nov 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/23bf5034-1210-34ce-bcb3-50769133c556</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is crucial to the successful future of AI in critical applications.</p>
<p>Michael Hind is a Distinguished Research Staff Member in the IBM
Research AI department in Yorktown Heights, New York.  His current
research passion is the area of Trusted AI, focusing on governance,
transparency, explainability, and fairness of AI systems.  He helped launch several successful open source projects, such as
<a href='https://aif360.mybluemix.net/'>AI Fairness 360</a> and <a href='https://aix360.mybluemix.net/'>AI Explainability 360</a>.</p>
<p>In part 2, we talk about the Teaching Explainable Decisions project, some of Michael’s experience with Watson, the difference between transparency and explainability, and a lot more. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-075-5649.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you <em>why </em>it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is crucial to the successful future of AI in critical applications.</p>
<p>Michael Hind is a Distinguished Research Staff Member in the IBM<br>
Research AI department in Yorktown Heights, New York.  His current<br>
research passion is the area of Trusted AI, focusing on governance,<br>
transparency, explainability, and fairness of AI systems.  He helped launch several successful open source projects, such as<br>
<a href='https://aif360.mybluemix.net/'>AI Fairness 360</a> and <a href='https://aix360.mybluemix.net/'>AI Explainability 360</a>.</p>
<p>In part 2, we talk about the <em>Teaching Explainable Decisions</em> project, some of Michael’s experience with Watson, the difference between transparency and explainability, and a lot more. </p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-075-5649.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ez35wc/075_AI-and-You.mp3" length="32416022" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is crucial to the successful future of AI in critical applications.
Michael Hind is a Distinguished Research Staff Member in the IBM Research AI department in Yorktown Heights, New York. His current research passion is the area of Trusted AI, focusing on governance, transparency, explainability, and fairness of AI systems. He helped launch several successful open source projects, such as AI Fairness 360 and AI Explainability 360.
In part 2, we talk about the Teaching Explainable Decisions project, some of Michael’s experience with Watson, the difference between transparency and explainability, and a lot more. 
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1977</itunes:duration>
                <itunes:episode>75</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>074 - Guest: Michael Hind, IBM AI Explainability Expert, part 1</title>
        <itunes:title>074 - Guest: Michael Hind, IBM AI Explainability Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/074-guest-michael-hind-ibm-ai-explainability-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/074-guest-michael-hind-ibm-ai-explainability-expert-part-1/#comments</comments>        <pubDate>Mon, 15 Nov 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f432cc9f-3268-3979-8219-4ef2457b30ba</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is crucial to the successful future of AI in critical applications.</p>
<p>Before I talked with Michael Hind, my usual remark on the subject was, "If you want a demonstration of the ultimate futility of explainability, try asking your kid how the vase got broken." But after this episode I've learned more than I thought possible about how we can teach AI what an explanation is and how to produce one.</p>
<p>Michael is a Distinguished Research Staff Member in the IBM
Research AI department in Yorktown Heights, New York.  His current
research passion is the area of Trusted AI, focusing on governance,
transparency, explainability, and fairness of AI systems.  He helped launch several successful open source projects, such as
<a href='https://aif360.mybluemix.net/'>AI Fairness 360</a> and <a href='https://aix360.mybluemix.net/'>AI Explainability 360</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-074-6090.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you <em>why </em>it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is crucial to the successful future of AI in critical applications.</p>
<p>Before I talked with Michael Hind, my usual remark on the subject was, "If you want a demonstration of the ultimate futility of explainability, try asking your kid how the vase got broken." But after this episode I've learned more than I thought possible about how we can teach AI what an explanation is and how to produce one.</p>
<p>Michael is a Distinguished Research Staff Member in the IBM<br>
Research AI department in Yorktown Heights, New York.  His current<br>
research passion is the area of Trusted AI, focusing on governance,<br>
transparency, explainability, and fairness of AI systems.  He helped launch several successful open source projects, such as<br>
<a href='https://aif360.mybluemix.net/'>AI Fairness 360</a> and <a href='https://aix360.mybluemix.net/'>AI Explainability 360</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/11/transcript-074-6090.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/uwezkg/074_AI-and-You.mp3" length="32542835" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is crucial to the successful future of AI in critical applications.
Before I talked with Michael Hind, my usual remark on the subject was, "If you want a demonstration of the ultimate futility of explainability, try asking your kid how the vase got broken." But after this episode I've learned more than I thought possible about how we can teach AI what an explanation is and how to produce one.
Michael is a Distinguished Research Staff Member in the IBM Research AI department in Yorktown Heights, New York. His current research passion is the area of Trusted AI, focusing on governance, transparency, explainability, and fairness of AI systems. He helped launch several successful open source projects, such as AI Fairness 360 and AI Explainability 360.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1984</itunes:duration>
                <itunes:episode>74</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>073 - Guest: Kordel France, AI Engineer and CEO, part 2</title>
        <itunes:title>073 - Guest: Kordel France, AI Engineer and CEO, part 2</itunes:title>
        <link>https://aiandyou.net/e/073-guest-kordel-france-ai-engineer-and-ceo-part-2/</link>
                    <comments>https://aiandyou.net/e/073-guest-kordel-france-ai-engineer-and-ceo-part-2/#comments</comments>        <pubDate>Mon, 08 Nov 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/add33b55-fbb6-3cce-b3d1-585dbc5b47e5</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>From AI in farming to AI in defense, Kordel France has done it, as an AI engineer and now founder and CEO of <a href='https://seekartech.com'>Seekar Technologies</a>, which is building the first clinical AI tool used to advise neuropsychologists in diagnosing mental disorders. There are a lot of surprises in this episode as we talk about explainability, artificial general intelligence, and the fragility of image recognition AI, among other things.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-073-8850.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>From AI in farming to AI in defense, Kordel France has done it, as an AI engineer and now founder and CEO of <a href='https://seekartech.com'>Seekar Technologies</a>, which is building the first clinical AI tool used to advise neuropsychologists in diagnosing mental disorders. There are a lot of surprises in this episode as we talk about explainability, artificial general intelligence, and the fragility of image recognition AI, among other things.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-073-8850.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ayufhm/073_AI-and-You.mp3" length="27052105" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

From AI in farming to AI in defense, Kordel France has done it, as an AI engineer and now founder and CEO of Seekar Technologies, which is building the first clinical AI tool used to advise neuropsychologists in diagnosing mental disorders. There are a lot of surprises in this episode as we talk about explainability, artificial general intelligence, and the fragility of image recognition AI, among other things.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1641</itunes:duration>
                <itunes:episode>73</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>072 - Guest: Kordel France, AI Engineer and CEO, part 1</title>
        <itunes:title>072 - Guest: Kordel France, AI Engineer and CEO, part 1</itunes:title>
        <link>https://aiandyou.net/e/072-guest-kordel-france-ai-engineer-and-ceo-part-1/</link>
                    <comments>https://aiandyou.net/e/072-guest-kordel-france-ai-engineer-and-ceo-part-1/#comments</comments>        <pubDate>Mon, 01 Nov 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/62244094-08f0-3a8d-b831-4134aeead1e3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>From AI in farming to AI in defense, Kordel France has done it, as an AI engineer and now founder and CEO of <a href='https://seekartech.com'>Seekar Technologies</a>, which contributed AI to help ease demand on medical staffing and screen for COVID-19 faster. There are a lot of surprises - AI in hunting? - in this episode.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-072-1150.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>From AI in farming to AI in defense, Kordel France has done it, as an AI engineer and now founder and CEO of <a href='https://seekartech.com'>Seekar Technologies</a>, which contributed AI to help ease demand on medical staffing and screen for COVID-19 faster. There are a lot of surprises - AI in hunting? - in this episode.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-072-1150.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/raxwpp/072_AI-and-You.mp3" length="28507686" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

From AI in farming to AI in defense, Kordel France has done it, as an AI engineer and now founder and CEO of Seekar Technologies, which contributed AI to help ease demand on medical staffing and screen for COVID-19 faster. There are a lot of surprises - AI in hunting? - in this episode.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1732</itunes:duration>
                <itunes:episode>72</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>071 - Guest: Jonathan Rowson, Chess Grandmaster and Philosophical Activist, part 2</title>
        <itunes:title>071 - Guest: Jonathan Rowson, Chess Grandmaster and Philosophical Activist, part 2</itunes:title>
        <link>https://aiandyou.net/e/071-guest-jonathan-rowson-chess-grandmaster-and-philosophical-activist-part-2/</link>
                    <comments>https://aiandyou.net/e/071-guest-jonathan-rowson-chess-grandmaster-and-philosophical-activist-part-2/#comments</comments>        <pubDate>Mon, 25 Oct 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/98eb7831-278d-3b30-9eb8-1d094aa8ead3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Jonathan Rowson is a chess grandmaster who was three times British champion, writing books including "<a href='https://www.amazon.com/Moves-That-Matter-Chess-Grandmaster/dp/1635573327'>The Moves That Matter: A Chess Grandmaster on the Game of Life</a>." He is now a "philosophical activist" working on "an urgent one hundred year project to improve the relationships between systems, souls, and society" at <a href='https://systems-souls-society.com/'>Perspectiva</a>.  </p>
<p>Last week we talked about the impact of computers on the game of chess and the people who play it. This week we make the connection with Jonathan's career as a philosopher and how he intends Perspectiva to make a difference. He'll tell us about their antidebates: "Most of us don’t only disagree with each other, we disagree with ourselves. That’s a very important premise for the antidebate. The main battle is within your own reckoning with a difficult question."
</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-071-1844.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Jonathan Rowson is a chess grandmaster who was three times British champion, writing books including "<a href='https://www.amazon.com/Moves-That-Matter-Chess-Grandmaster/dp/1635573327'>The Moves That Matter: A Chess Grandmaster on the Game of Life</a>." He is now a "philosophical activist" working on "an urgent one hundred year project to improve the relationships between systems, souls, and society" at <a href='https://systems-souls-society.com/'>Perspectiva</a>.  </p>
<p>Last week we talked about the impact of computers on the game of chess and the people who play it. This week we make the connection with Jonathan's career as a philosopher and how he intends Perspectiva to make a difference. He'll tell us about their <em>antidebates:</em> "Most of us don’t only disagree with each other, we disagree with ourselves. That’s a very important premise for the antidebate. The main battle is within your own reckoning with a difficult question."<br>
</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-071-1844.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/5bypzx/071_AI-and-You.mp3" length="31373214" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Jonathan Rowson is a chess grandmaster who was three times British champion, writing books including "The Moves That Matter: A Chess Grandmaster on the Game of Life." He is now a "philosophical activist" working on "an urgent one hundred year project to improve the relationships between systems, souls, and society" at Perspectiva.  
Last week we talked about the impact of computers on the game of chess and the people who play it. This week we make the connection with Jonathan's career as a philosopher and how he intends Perspectiva to make a difference. He'll tell us about their antidebates: "Most of us don’t only disagree with each other, we disagree with ourselves. That’s a very important premise for the antidebate. The main battle is within your own reckoning with a difficult question."
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1911</itunes:duration>
                <itunes:episode>71</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>070 - Guest: Jonathan Rowson, Chess Grandmaster and Philosophical Activist, part 1</title>
        <itunes:title>070 - Guest: Jonathan Rowson, Chess Grandmaster and Philosophical Activist, part 1</itunes:title>
        <link>https://aiandyou.net/e/070-guest-jonathan-rowson-chess-grandmaster-and-philosophical-activist-part-1/</link>
                    <comments>https://aiandyou.net/e/070-guest-jonathan-rowson-chess-grandmaster-and-philosophical-activist-part-1/#comments</comments>        <pubDate>Mon, 18 Oct 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c43becc2-a290-39e9-971a-53c07bd99ea4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Jonathan Rowson is a chess grandmaster who was three times British champion, writing books including "<a href='https://www.amazon.com/Moves-That-Matter-Chess-Grandmaster/dp/1635573327'>The Moves That Matter: A Chess Grandmaster on the Game of Life</a>." He is now a "philosophical activist" working on "an urgent one hundred year project to improve the relationships between systems, souls, and society" via the <a href='https://systems-souls-society.com/'>Perspectiva</a> project.  Lots to talk about! We start out with the impact of computers on the game of chess and the people who play it: What does their encounter with AI have to teach the rest of us?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-070-6573.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Jonathan Rowson is a chess grandmaster who was three times British champion, writing books including "<a href='https://www.amazon.com/Moves-That-Matter-Chess-Grandmaster/dp/1635573327'>The Moves That Matter: A Chess Grandmaster on the Game of Life</a>." He is now a "philosophical activist" working on "an urgent one hundred year project to improve the relationships between systems, souls, and society" via the <a href='https://systems-souls-society.com/'>Perspectiva</a> project.  Lots to talk about! We start out with the impact of computers on the game of chess and the people who play it: What does their encounter with AI have to teach the rest of us?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-070-6573.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kr8u6t/070_AI-and-You.mp3" length="35486769" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Jonathan Rowson is a chess grandmaster who was three times British champion, writing books including "The Moves That Matter: A Chess Grandmaster on the Game of Life." He is now a "philosophical activist" working on "an urgent one hundred year project to improve the relationships between systems, souls, and society" via the Perspectiva project.  Lots to talk about! We start out with the impact of computers on the game of chess and the people who play it: What does their encounter with AI have to teach the rest of us?
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2168</itunes:duration>
                <itunes:episode>70</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>069 - Special Episode: Disinformation</title>
        <itunes:title>069 - Special Episode: Disinformation</itunes:title>
        <link>https://aiandyou.net/e/069-special-episode-disinformation/</link>
                    <comments>https://aiandyou.net/e/069-special-episode-disinformation/#comments</comments>        <pubDate>Mon, 11 Oct 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/6ec07fba-386a-3ae1-a06c-9611c5995a3e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How is disinformation affecting our society, and what does AI have to do with it? I promised back in episode 1 that I would talk about disinformation, and now it's time to open that can of worms. I'll talk about the types of distortions, how social media algorithms spread them, the threats they pose, what's being done about them, and the role of AI in all this.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-069-1100.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How is disinformation affecting our society, and what does AI have to do with it? I promised back in episode 1 that I would talk about disinformation, and now it's time to open that can of worms. I'll talk about the types of distortions, how social media algorithms spread them, the threats they pose, what's being done about them, and the role of AI in all this.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/10/transcript-069-1100.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ei3zqv/069_AI-and-You.mp3" length="35618008" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How is disinformation affecting our society, and what does AI have to do with it? I promised back in episode 1 that I would talk about disinformation, and now it's time to open that can of worms. I'll talk about the types of distortions, how social media algorithms spread them, the threats they pose, what's being done about them, and the role of AI in all this.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2177</itunes:duration>
                <itunes:episode>69</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>068 - Guest: Daniel DeMillard, Applied AI CTO</title>
        <itunes:title>068 - Guest: Daniel DeMillard, Applied AI CTO</itunes:title>
        <link>https://aiandyou.net/e/068-guest-daniel-demillard-applied-ai-cto/</link>
                    <comments>https://aiandyou.net/e/068-guest-daniel-demillard-applied-ai-cto/#comments</comments>        <pubDate>Mon, 04 Oct 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/68004bcb-79c8-3402-83b3-697f2d1bae4c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Daniel DeMillard has been with IBM's Watson Division and is now CTO of <a href='https://foodspacetech.com'>Foodspace</a>, making apps that can connect any dietary or culinary preferences with the products that will fulfill them. We talk about all of that, get into what's realistic and what isn't with natural language understanding, the present and future of AI-assisted search, and... Joaquin Phoenix putting a paper bag over his head? (Some parts of the interview only make sense in context.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-068-1786.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Daniel DeMillard has been with IBM's Watson Division and is now CTO of <a href='https://foodspacetech.com'>Foodspace</a>, making apps that can connect any dietary or culinary preferences with the products that will fulfill them. We talk about all of that, get into what's realistic and what isn't with natural language understanding, the present and future of AI-assisted search, and... Joaquin Phoenix putting a paper bag over his head? (Some parts of the interview only make sense in context.)</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-068-1786.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/uc2ry6/068_AI-and-You.mp3" length="43300097" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Daniel DeMillard has been with IBM's Watson Division and is now CTO of Foodspace, making apps that can connect any dietary or culinary preferences with the products that will fulfill them. We talk about all of that, get into what's realistic and what isn't with natural language understanding, the present and future of AI-assisted search, and... Joaquin Phoenix putting a paper bag over his head? (Some parts of the interview only make sense in context.)
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2657</itunes:duration>
                <itunes:episode>68</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>067 - Guest: Olivier Caron-Lizotte, AI-as-a-service CEO</title>
        <itunes:title>067 - Guest: Olivier Caron-Lizotte, AI-as-a-service CEO</itunes:title>
        <link>https://aiandyou.net/e/067-guest-olivier-caron-lizotte-ai-as-a-service-ceo/</link>
                    <comments>https://aiandyou.net/e/067-guest-olivier-caron-lizotte-ai-as-a-service-ceo/#comments</comments>        <pubDate>Mon, 27 Sep 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/94393bf6-daf8-3e59-947e-69af4b7e8a7e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What's it like to actually make AI work for customers in real-world applications where their investment has to pay off? Olivier Caron-Lizotte is the CEO of <a href='https://explor.ai/'>explor.ai</a>, running a stable of developers to contract out. He's got the battle-tested experience about how that really works today and we get into the details of that.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-067-9260.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What's it like to actually make AI work for customers in real-world applications where their investment has to pay off? Olivier Caron-Lizotte is the CEO of <a href='https://explor.ai/'>explor.ai</a>, running a stable of developers to contract out. He's got the battle-tested experience about how that really works today and we get into the details of that.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-067-9260.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/vs2ent/067_AI-and-You.mp3" length="36519546" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What's it like to actually make AI work for customers in real-world applications where their investment has to pay off? Olivier Caron-Lizotte is the CEO of explor.ai, running a stable of developers to contract out. He's got the battle-tested experience about how that really works today and we get into the details of that.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2233</itunes:duration>
                <itunes:episode>67</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>066 - Guest: Olav Krigolson, Neuroscientist, part 2</title>
        <itunes:title>066 - Guest: Olav Krigolson, Neuroscientist, part 2</itunes:title>
        <link>https://aiandyou.net/e/066-guest-olav-krigolson-neuroscientist-part-2/</link>
                    <comments>https://aiandyou.net/e/066-guest-olav-krigolson-neuroscientist-part-2/#comments</comments>        <pubDate>Mon, 20 Sep 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b3804fdc-f822-3ca6-8420-3bcc4d96742b</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Olav Krigolson (@thatneurosciguy) has fried his brain - in the name of science. (It was not permanent.) He's a TEDx speaker and hands-on neuroscientist at the University of Victoria, where he runs the Krigolson Lab, studying brainwaves. He's helped astronauts prepare to go to Mars and can tell what someone's going to say before they say it. (Within limits.) In part 2, we talk about interfacing with the brain and interpreting brainwaves, plus how to use neuroscience to jumpstart your creativity when you're stuck in a pandemic rut.</p>

<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-066-8037.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Olav Krigolson (@thatneurosciguy) has fried his brain - in the name of science. (It was not permanent.) He's a TEDx speaker and hands-on neuroscientist at the University of Victoria, where he runs the Krigolson Lab, studying brainwaves. He's helped astronauts prepare to go to Mars and can tell what someone's going to say before they say it. (Within limits.) In part 2, we talk about interfacing with the brain and interpreting brainwaves, plus how to use neuroscience to jumpstart your creativity when you're stuck in a pandemic rut.</p>
<br>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-066-8037.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/a8nmdr/066_AI-and-You.mp3" length="29939196" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Olav Krigolson (@thatneurosciguy) has fried his brain - in the name of science. (It was not permanent.) He's a TEDx speaker and hands-on neuroscientist at the University of Victoria, where he runs the Krigolson Lab, studying brainwaves. He's helped astronauts prepare to go to Mars and can tell what someone's going to say before they say it. (Within limits.) In part 2, we talk about interfacing with the brain and interpreting brainwaves, plus how to use neuroscience to jumpstart your creativity when you're stuck in a pandemic rut.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1822</itunes:duration>
                <itunes:episode>66</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>065 - Guest: Olav Krigolson, Neuroscientist, part 1</title>
        <itunes:title>065 - Guest: Olav Krigolson, Neuroscientist, part 1</itunes:title>
        <link>https://aiandyou.net/e/065-guest-olav-krigolson-neuroscientist-part-1/</link>
                    <comments>https://aiandyou.net/e/065-guest-olav-krigolson-neuroscientist-part-1/#comments</comments>        <pubDate>Mon, 13 Sep 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/0af2c109-533f-3029-b73a-e2375d60d06c</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Olav Krigolson (@thatneurosciguy) has fried his brain - in the name of science. (It was not permanent.) He's a hands-on neuroscientist at the University of Victoria, where he runs the Krigolson Lab, studying brainwaves. He's helped astronauts prepare to go to Mars and can tell what someone's going to say before they say it. (Within limits.) We talk about those things and applicability to AI in part 1 of our interview.</p>

<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-065-0935.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Olav Krigolson (@thatneurosciguy) has fried his brain - in the name of science. (It was not permanent.) He's a hands-on neuroscientist at the University of Victoria, where he runs the Krigolson Lab, studying brainwaves. He's helped astronauts prepare to go to Mars and can tell what someone's going to say before they say it. (Within limits.) We talk about those things and applicability to AI in part 1 of our interview.</p>
<br>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/09/transcript-065-0935.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/adg3zy/065_AI-and-You.mp3" length="36038475" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Olav Krigolson (@thatneurosciguy) has fried his brain - in the name of science. (It was not permanent.) He's a hands-on neuroscientist at the University of Victoria, where he runs the Krigolson Lab, studying brainwaves. He's helped astronauts prepare to go to Mars and can tell what someone's going to say before they say it. (Within limits.) We talk about those things and applicability to AI in part 1 of our interview.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2203</itunes:duration>
                <itunes:episode>65</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>064 - Guest: Amit Gupta, AI Writing Assistant Creator</title>
        <itunes:title>064 - Guest: Amit Gupta, AI Writing Assistant Creator</itunes:title>
        <link>https://aiandyou.net/e/064-guest-amit-gupta-ai-writing-assistant-creator/</link>
                    <comments>https://aiandyou.net/e/064-guest-amit-gupta-ai-writing-assistant-creator/#comments</comments>        <pubDate>Mon, 06 Sep 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/fcd78231-3b20-36a0-a7cf-b7b84c015c81</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Amit Gupta has an amazing life story and is only just getting started. After a close encounter with death, he changed jobs, took up writing science fiction, and wrote an AI to help him. That AI - <a href='https://www.sudowrite.com'>Sudowrite</a> - is an amazingly capable and creative application of GPT-3 that earned a feature in <a href='https://www.newyorker.com/culture/cultural-comment/the-computers-are-getting-better-at-writing'>The New Yorker</a>, and we dig into what it does and how it does it. Writing is about to be transformed.</p>
<p>Mention AI and You when applying for Sudowrite beta access and Amit will move you to the head of the line!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-064-2519.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Amit Gupta has an amazing life story and is only just getting started. After a close encounter with death, he changed jobs, took up writing science fiction, and wrote an AI to help him. That AI - <a href='https://www.sudowrite.com'>Sudowrite</a> - is an amazingly capable and creative application of GPT-3 that earned a feature in <a href='https://www.newyorker.com/culture/cultural-comment/the-computers-are-getting-better-at-writing'>The New Yorker</a>, and we dig into what it does and how it does it. Writing is about to be transformed.</p>
<p>Mention <em>AI and You</em> when applying for Sudowrite beta access and Amit will move you to the head of the line!</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-064-2519.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6cp5zw/064_AI-and-You.mp3" length="37746676" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Amit Gupta has an amazing life story and is only just getting started. After a close encounter with death, he changed jobs, took up writing science fiction, and wrote an AI to help him. That AI - Sudowrite - is an amazingly capable and creative application of GPT-3 that earned a feature in The New Yorker, and we dig into what it does and how it does it. Writing is about to be transformed.
Mention AI and You when applying for Sudowrite beta access and Amit will move you to the head of the line!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2310</itunes:duration>
                <itunes:episode>64</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>063 - Guest: Sathish Sankarpandi, Digital Avatar Scientist</title>
        <itunes:title>063 - Guest: Sathish Sankarpandi, Digital Avatar Scientist</itunes:title>
        <link>https://aiandyou.net/e/063-guest-sathish-sankarpandi-digital-avatar-scientist/</link>
                    <comments>https://aiandyou.net/e/063-guest-sathish-sankarpandi-digital-avatar-scientist/#comments</comments>        <pubDate>Mon, 30 Aug 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/9be5e74b-6e29-3686-a783-f55c8c63ed51</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are you ready to interact with hyper-realistic digital avatars - computer-generated people - as part of your health care? They're not about to replace doctors and nurses, but they are ready to be an earlier part of the experience. Sathish Sankarpandi, data scientist at <a href='https://orbitalmedia.com/'>Orbital Global</a>, tells us about the VirtTuri avatar (from "Virtual" and "Turing"). He'll tell us the capabilities and limits of today's AI-backed avatars.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-063-1381.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are you ready to interact with hyper-realistic digital avatars - computer-generated people - as part of your health care? They're not about to replace doctors and nurses, but they are ready to be an earlier part of the experience. Sathish Sankarpandi, data scientist at <a href='https://orbitalmedia.com/'>Orbital Global</a>, tells us about the VirtTuri avatar (from "Virtual" and "Turing"). He'll tell us the capabilities and limits of today's AI-backed avatars.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-063-1381.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3u5wbm/063_AI-and-You.mp3" length="34637893" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Are you ready to interact with hyper-realistic digital avatars - computer-generated people - as part of your health care? They're not about to replace doctors and nurses, but they are ready to be an earlier part of the experience. Sathish Sankarpandi, data scientist at Orbital Global, tells us about the VirtTuri avatar (from "Virtual" and "Turing"). He'll tell us the capabilities and limits of today's AI-backed avatars.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2115</itunes:duration>
                <itunes:episode>63</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>062 - Guest: Todd Litman, Autonomous Vehicle Policy Expert, part 2</title>
        <itunes:title>062 - Guest: Todd Litman, Autonomous Vehicle Policy Expert, part 2</itunes:title>
        <link>https://aiandyou.net/e/062-guest-todd-litman-autonomous-vehicle-policy-expert-part-2/</link>
                    <comments>https://aiandyou.net/e/062-guest-todd-litman-autonomous-vehicle-policy-expert-part-2/#comments</comments>        <pubDate>Mon, 23 Aug 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5c02c51d-830c-3fe1-b1f7-255465548dc2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How will local and national authorities plan for self-driving vehicles in their jurisdictions? Todd Litman will help them. He is founder and executive director of the Victoria Transport Policy Institute, an independent research organization dedicated to developing innovative solutions to transport problems. His report "<a href='http://www.vtpi.org/avip.pdf'>Autonomous Vehicle Implementation Predictions</a>" explores the impacts of autonomous vehicles, and their implications for transportation planning. In part 2, we talk about how AVs are likely to change transportation planning, and put some numbers around the projections. No zombie kangaroos this time, though.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-062-9262.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How will local and national authorities plan for self-driving vehicles in their jurisdictions? Todd Litman will help them. He is founder and executive director of the Victoria Transport Policy Institute, an independent research organization dedicated to developing innovative solutions to transport problems. His report "<a href='http://www.vtpi.org/avip.pdf'>Autonomous Vehicle Implementation Predictions</a>" explores the impacts of autonomous vehicles, and their implications for transportation planning. In part 2, we talk about how AVs are likely to change transportation planning, and put some numbers around the projections. No zombie kangaroos this time, though.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-062-9262.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/dsjgga/062_AI-and-You.mp3" length="32708176" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How will local and national authorities plan for self-driving vehicles in their jurisdictions? Todd Litman will help them. He is founder and executive director of the Victoria Transport Policy Institute, an independent research organization dedicated to developing innovative solutions to transport problems. His report "Autonomous Vehicle Implementation Predictions" explores the impacts of autonomous vehicles, and their implications for transportation planning. In part 2, we talk about how AVs are likely to change transportation planning, and put some numbers around the projections. No zombie kangaroos this time, though.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1995</itunes:duration>
                <itunes:episode>62</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>061 - Guest: Todd Litman, Autonomous Vehicle Policy Expert, part 1</title>
        <itunes:title>061 - Guest: Todd Litman, Autonomous Vehicle Policy Expert, part 1</itunes:title>
        <link>https://aiandyou.net/e/061-guest-todd-litman-autonomous-vehicle-policy-expert-part-1/</link>
                    <comments>https://aiandyou.net/e/061-guest-todd-litman-autonomous-vehicle-policy-expert-part-1/#comments</comments>        <pubDate>Mon, 16 Aug 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f2a20b33-49a5-371e-8103-9e0503abe673</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How will local and national authorities plan for self-driving vehicles in their jurisdictions? Todd Litman will help them. He is founder and executive director of the Victoria Transport Policy Institute, an independent research organization dedicated to developing innovative solutions to transport problems. His report "<a href='http://www.vtpi.org/avip.pdf'>Autonomous Vehicle Implementation Predictions</a>" explores the impacts of autonomous vehicles, and their implications for transportation planning. In part 1 we talk about realistic expectations for AV implementation... plus zombie kangaroos.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-061-4552.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How will local and national authorities plan for self-driving vehicles in their jurisdictions? Todd Litman will help them. He is founder and executive director of the Victoria Transport Policy Institute, an independent research organization dedicated to developing innovative solutions to transport problems. His report "<a href='http://www.vtpi.org/avip.pdf'>Autonomous Vehicle Implementation Predictions</a>" explores the impacts of autonomous vehicles, and their implications for transportation planning. In part 1 we talk about realistic expectations for AV implementation... plus zombie kangaroos.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-061-4552.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/btpqym/061_AI-and-You.mp3" length="29155523" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How will local and national authorities plan for self-driving vehicles in their jurisdictions? Todd Litman will help them. He is founder and executive director of the Victoria Transport Policy Institute, an independent research organization dedicated to developing innovative solutions to transport problems. His report "Autonomous Vehicle Implementation Predictions" explores the impacts of autonomous vehicles, and their implications for transportation planning. In part 1 we talk about realistic expectations for AV implementation... plus zombie kangaroos.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1773</itunes:duration>
                <itunes:episode>61</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>060 - Guest: Tomáš Mikolov, Research Scientist</title>
        <itunes:title>060 - Guest: Tomáš Mikolov, Research Scientist</itunes:title>
        <link>https://aiandyou.net/e/060-guest-tomas-mikolov-research-scientist/</link>
                    <comments>https://aiandyou.net/e/060-guest-tomas-mikolov-research-scientist/#comments</comments>        <pubDate>Mon, 09 Aug 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d3e112aa-1c10-3990-9246-d0a892742a54</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Tomáš Mikolov is a PhD and research scientist at the Czech Institute of Informatics, Robotics and Cybernetics. He's done research for GoodAI and has worked at Google Brain and Facebook AI Research. He gave me some straight talk about the state of research and innovation in AI, and spelled out what it's missing and where researchers are playing it safe. We also talked about his research into "novelty search" and unpacked some ways to understand machine learning.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-060-7806.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Tomáš Mikolov is a PhD and research scientist at the Czech Institute of Informatics, Robotics and Cybernetics. He's done research for GoodAI and has worked at Google Brain and Facebook AI Research. He gave me some straight talk about the state of research and innovation in AI, and spelled out what it's missing and where researchers are playing it safe. We also talked about his research into "novelty search" and unpacked some ways to understand machine learning.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/08/transcript-060-7806.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/yvgmpw/060_AI-and-You.mp3" length="36929564" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Tomáš Mikolov is a PhD and research scientist at the Czech Institute of Informatics, Robotics and Cybernetics. He's done research for GoodAI and has worked at Google Brain and Facebook AI Research. He gave me some straight talk about the state of research and innovation in AI, and spelled out what it's missing and where researchers are playing it safe. We also talked about his research into "novelty search" and unpacked some ways to understand machine learning.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2259</itunes:duration>
                <itunes:episode>60</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>059 - Guest: Kakia Chatsiou, Political Language Analyzer</title>
        <itunes:title>059 - Guest: Kakia Chatsiou, Political Language Analyzer</itunes:title>
        <link>https://aiandyou.net/e/059-kakia-chatsiou-political-language-analyzer/</link>
                    <comments>https://aiandyou.net/e/059-kakia-chatsiou-political-language-analyzer/#comments</comments>        <pubDate>Mon, 02 Aug 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/49460f29-f107-361d-82d9-a1a34b198336</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Kakia Chatsiou can tell you what a politician is really saying. Of course, most of us would say that the politician isn’t saying anything at all, but she’s more precise about it, and she uses AI to do it. She’s a professor at the University of Suffolk in the United Kingdom and this work shows up in her research paper <a href='https://arxiv.org/abs/2005.06540'>Deep Learning for Political Science</a>. She's an expert in natural language processing. We talk about how she analyzes the text of government COVID announcements to extract meaning, application to related fields such as ethics, and what it's like to be in this field.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-059-5165.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Kakia Chatsiou can tell you what a politician is really saying. Of course, most of us would say that the politician isn’t saying anything at all, but she’s more precise about it, and she uses AI to do it. She’s a professor at the University of Suffolk in the United Kingdom and this work shows up in her research paper <a href='https://arxiv.org/abs/2005.06540'>Deep Learning for Political Science</a>. She's an expert in natural language processing. We talk about how she analyzes the text of government COVID announcements to extract meaning, application to related fields such as ethics, and what it's like to be in this field.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-059-5165.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qqjxc5/059_AI-and-You.mp3" length="38052620" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Kakia Chatsiou can tell you what a politician is really saying. Of course, most of us would say that the politician isn’t saying anything at all, but she’s more precise about it, and she uses AI to do it. She’s a professor at the University of Suffolk in the United Kingdom and this work shows up in her research paper Deep Learning for Political Science. She's an expert in natural language processing. We talk about how she analyzes the text of government COVID announcements to extract meaning, application to related fields such as ethics, and what it's like to be in this field.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2329</itunes:duration>
                <itunes:episode>59</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>058 - Guest: Charles Radclyffe, AI Ethics Grader, part 2</title>
        <itunes:title>058 - Guest: Charles Radclyffe, AI Ethics Grader, part 2</itunes:title>
        <link>https://aiandyou.net/e/058-guest-charles-radclyffe-ai-ethics-grader-part-2/</link>
                    <comments>https://aiandyou.net/e/058-guest-charles-radclyffe-ai-ethics-grader-part-2/#comments</comments>        <pubDate>Mon, 26 Jul 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1a5da9c2-b901-3bcd-9ba5-6e987abfa08a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Charles Radclyffe's company <a href='https://ethicsgrade.io'>EthicsGrade</a> grades companies on their AI ethics and governance, and he has a lot to say about what's ethical in AI companies, how to analyze it, what to do with that information, and how businesses can address their ethics.</p>
<p>Charles was formerly head of AI at Fidelity International and founded the podcast <a href='https://areyouarobot.substack.com/p/understanding-the-fourth-industrial%20'>Are You a Robot?</a>, on which I will appear on August 30, 2021.</p>
<p>In part 2 we compare AI with other industries for their approaches to ethics, and discuss Charles' TEDx talk about the future of work: <a href='https://www.ted.com/talks/charles_radclyffe_three_steps_to_surviving_the_robot_revolution'>Three Steps to Surviving the Robot Revolution</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-058-0409.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Charles Radclyffe's company <a href='https://ethicsgrade.io'>EthicsGrade</a> grades companies on their AI ethics and governance, and he has a lot to say about what's ethical in AI companies, how to analyze it, what to do with that information, and how businesses can address their ethics.</p>
<p>Charles was formerly head of AI at Fidelity International and founded the podcast <a href='https://areyouarobot.substack.com/p/understanding-the-fourth-industrial%20'>Are You a Robot?</a>, on which I will appear on August 30, 2021.</p>
<p>In part 2 we compare AI with other industries for their approaches to ethics, and discuss Charles' TEDx talk about the future of work: <a href='https://www.ted.com/talks/charles_radclyffe_three_steps_to_surviving_the_robot_revolution'>Three Steps to Surviving the Robot Revolution</a>.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-058-0409.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ku4q37/058_AI-and-You.mp3" length="29133789" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Charles Radclyffe's company EthicsGrade grades companies on their AI ethics and governance, and he has a lot to say about what's ethical in AI companies, how to analyze it, what to do with that information, and how businesses can address their ethics.
Charles was formerly head of AI at Fidelity International and founded the podcast Are You a Robot?, on which I will appear on August 30, 2021.
In part 2 we compare AI with other industries for their approaches to ethics, and discuss Charles' TEDx talk about the future of work: Three Steps to Surviving the Robot Revolution.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1771</itunes:duration>
                <itunes:episode>58</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>057 - Guest: Charles Radclyffe, AI Ethics Grader, part 1</title>
        <itunes:title>057 - Guest: Charles Radclyffe, AI Ethics Grader, part 1</itunes:title>
        <link>https://aiandyou.net/e/057-guest-charles-radclyffe-ai-ethics-grader-part-1/</link>
                    <comments>https://aiandyou.net/e/057-guest-charles-radclyffe-ai-ethics-grader-part-1/#comments</comments>        <pubDate>Mon, 19 Jul 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c727dcb2-7726-3c6a-b915-2f5e0c111541</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Charles Radclyffe's company <a href='https://ethicsgrade.io'>EthicsGrade</a> grades companies on their AI ethics and governance, and he has a lot to say about what's ethical in AI companies, how to analyze it, what to do with that information, and how businesses can address their ethics. Charles was formerly head of AI at Fidelity International and founded the podcast <a href='https://areyouarobot.substack.com/p/understanding-the-fourth-industrial%20'>Are You a Robot?</a>, on which I will appear on August 30, 2021.</p>
<p>In part 1 we make some distinctions about bias and privacy and talk about the challenges with both.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-057-8811.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Charles Radclyffe's company <a href='https://ethicsgrade.io'>EthicsGrade</a> grades companies on their AI ethics and governance, and he has a lot to say about what's ethical in AI companies, how to analyze it, what to do with that information, and how businesses can address their ethics. Charles was formerly head of AI at Fidelity International and founded the podcast <a href='https://areyouarobot.substack.com/p/understanding-the-fourth-industrial%20'>Are You a Robot?</a>, on which I will appear on August 30, 2021.</p>
<p>In part 1 we make some distinctions about bias and privacy and talk about the challenges with both.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-057-8811.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/9xrfcy/057_AI-and-You.mp3" length="31293384" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Charles Radclyffe's company EthicsGrade grades companies on their AI ethics and governance, and he has a lot to say about what's ethical in AI companies, how to analyze it, what to do with that information, and how businesses can address their ethics. Charles was formerly head of AI at Fidelity International and founded the podcast Are You a Robot?, on which I will appear on August 30, 2021.
In part 1 we make some distinctions about bias and privacy and talk about the challenges with both.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1906</itunes:duration>
                <itunes:episode>57</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>056 - Guest: Przemek Chojecki, Content-Generating AI PhD</title>
        <itunes:title>056 - Guest: Przemek Chojecki, Content-Generating AI PhD</itunes:title>
        <link>https://aiandyou.net/e/056-guest-przemek-chocjecki-content-generating-ai-phd/</link>
                    <comments>https://aiandyou.net/e/056-guest-przemek-chocjecki-content-generating-ai-phd/#comments</comments>        <pubDate>Mon, 12 Jul 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/36056f67-04ab-3edf-91f8-4c5954ffcdc3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What if you could do for text what Photoshop does for images? What if you could manipulate it and create whole new sections at the push of a button? Przemek Chojecki has a PhD in mathematics, is a member of the Forbes 30 under 30 list in Poland, wrote the new book, <a href='https://www.amazon.com/Artificial-Intelligence-Business-How-profit/dp/B089TWRZYS?ref=perlmedic-20'>Artificial Intelligence Business: How you can profit from AI</a>, and he has built a tool, Contentyze, that does that. We're going to find out what that's like, what made him jump from academia, and we'll even get into NFTs and DeFi.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-056-9715.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What if you could do for text what Photoshop does for images? What if you could manipulate it and create whole new sections at the push of a button? Przemek Chojecki has a PhD in mathematics, is a member of the Forbes 30 under 30 list in Poland, wrote the new book, <a href='https://www.amazon.com/Artificial-Intelligence-Business-How-profit/dp/B089TWRZYS?ref=perlmedic-20'>Artificial Intelligence Business: How you can profit from AI</a>, and he has built a tool, Contentyze, that does that. We're going to find out what that's like, what made him jump from academia, and we'll even get into NFTs and DeFi.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/07/transcript-056-9715.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/w6eqwm/056_AI-and-You.mp3" length="34941332" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What if you could do for text what Photoshop does for images? What if you could manipulate it and create whole new sections at the push of a button? Przemek Chojecki has a PhD in mathematics, is a member of the Forbes 30 under 30 list in Poland, wrote the new book, Artificial Intelligence Business: How you can profit from AI, and he has built a tool, Contentyze, that does that. We're going to find out what that's like, what made him jump from academia, and we'll even get into NFTs and DeFi.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2134</itunes:duration>
                <itunes:episode>56</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>055 - Guest: Tony Gillespie, AI systems engineer, part 2</title>
        <itunes:title>055 - Guest: Tony Gillespie, AI systems engineer, part 2</itunes:title>
        <link>https://aiandyou.net/e/055-guest-tony-gillespie-ai-systems-engineer-part-2/</link>
                    <comments>https://aiandyou.net/e/055-guest-tony-gillespie-ai-systems-engineer-part-2/#comments</comments>        <pubDate>Mon, 05 Jul 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a84bc3be-6169-3c0f-8b54-9a16ce2e95cf</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>How do you get a robot to follow the law? Could an AI be taught to obey the Geneva Convention? Tony Gillespie says so, and he's the author of <a href='https://www.amazon.ca/Systems-Engineering-Ethical-Autonomous/dp/1785613723?ref=perlmedic-20'>Systems Engineering for Ethical Autonomous Systems</a>, which is as technical as it sounds. He is a Visiting Professor at University College London, a fellow of the Royal Academy of Engineering, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory. He has applied the techniques in his book to autonomous cars and autonomous weapons and has given technical advice to the UN meetings discussing potential bans on lethal autonomous weapons.</p>
<p>In the conclusion of our interview, we talk about how autonomous cars can be designed to be safe and the current tensions in the industry as they try to do that.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-055-0662.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>How do you get a robot to follow the law? Could an AI be taught to obey the Geneva Convention? Tony Gillespie says so, and he's the author of <a href='https://www.amazon.ca/Systems-Engineering-Ethical-Autonomous/dp/1785613723?ref=perlmedic-20'><em>Systems Engineering for Ethical Autonomous Systems</em></a>, which is as technical as it sounds. He is a Visiting Professor at University College London, a fellow of the Royal Academy of Engineering, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory. He has applied the techniques in his book to autonomous cars and autonomous weapons and has given technical advice to the UN meetings discussing potential bans on lethal autonomous weapons.</p>
<p>In the conclusion of our interview, we talk about how autonomous cars can be designed to be safe and the current tensions in the industry as they try to do that.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-055-0662.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nvjf5h/055_AI-and-You.mp3" length="33122791" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 
How do you get a robot to follow the law? Could an AI be taught to obey the Geneva Convention? Tony Gillespie says so, and he's the author of Systems Engineering for Ethical Autonomous Systems, which is as technical as it sounds. He is a Visiting Professor at University College London, a fellow of the Royal Academy of Engineering, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory. He has applied the techniques in his book to autonomous cars and autonomous weapons and has given technical advice to the UN meetings discussing potential bans on lethal autonomous weapons.
In the conclusion of our interview, we talk about how autonomous cars can be designed to be safe and the current tensions in the industry as they try to do that.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2021</itunes:duration>
                <itunes:episode>55</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>054 - Guest: Tony Gillespie, AI systems engineer, part 1</title>
        <itunes:title>054 - Guest: Tony Gillespie, AI systems engineer, part 1</itunes:title>
        <link>https://aiandyou.net/e/054-guest-tony-gillespie-ai-systems-engineer-part-1/</link>
                    <comments>https://aiandyou.net/e/054-guest-tony-gillespie-ai-systems-engineer-part-1/#comments</comments>        <pubDate>Mon, 28 Jun 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2fb84df8-86d6-30ac-8f7f-feadde3d02af</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>How do you get a robot to follow the law? Could an AI be taught to obey the Geneva Convention? Tony Gillespie says so, and he's the author of <a href='https://www.amazon.ca/Systems-Engineering-Ethical-Autonomous/dp/1785613723?ref=perlmedic-20'>Systems Engineering for Ethical Autonomous Systems</a>, which is as technical as it sounds. He is a Visiting Professor at University College London, a fellow of the Royal Academy of Engineering, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory. He has applied the techniques in his book to autonomous cars and autonomous weapons and has given technical advice to the UN meetings discussing potential bans on lethal autonomous weapons.</p>
<p>We are turning from last week's speculative fiction to hard engineering this week, as Tony tells us how engineering is applied to the problem of accountability and international law in autonomous weapons. How do they fit into the rules of engagement and ethical combat?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-054-3390.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>
<p>How do you get a robot to follow the law? Could an AI be taught to obey the Geneva Convention? Tony Gillespie says so, and he's the author of <a href='https://www.amazon.ca/Systems-Engineering-Ethical-Autonomous/dp/1785613723?ref=perlmedic-20'><em>Systems Engineering for Ethical Autonomous Systems</em></a>, which is as technical as it sounds. He is a Visiting Professor at University College London, a fellow of the Royal Academy of Engineering, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory. He has applied the techniques in his book to autonomous cars and autonomous weapons and has given technical advice to the UN meetings discussing potential bans on lethal autonomous weapons.</p>
<p>We are turning from last week's speculative fiction to hard engineering this week, as Tony tells us how engineering is applied to the problem of accountability and international law in autonomous weapons. How do they fit into the rules of engagement and ethical combat?</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-054-3390.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/s7y5tg/054_AI-and-You.mp3" length="29351546" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 
How do you get a robot to follow the law? Could an AI be taught to obey the Geneva Convention? Tony Gillespie says so, and he's the author of Systems Engineering for Ethical Autonomous Systems, which is as technical as it sounds. He is a Visiting Professor at University College London, a fellow of the Royal Academy of Engineering, and a fellow in avionics and mission systems in the UK’s Defence Science and Technology Laboratory. He has applied the techniques in his book to autonomous cars and autonomous weapons and has given technical advice to the UN meetings discussing potential bans on lethal autonomous weapons.
We are turning from last week's speculative fiction to hard engineering this week, as Tony tells us how engineering is applied to the problem of accountability and international law in autonomous weapons. How do they fit into the rules of engagement and ethical combat?
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1785</itunes:duration>
                <itunes:episode>54</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>053 - Special Episode: AI in Fiction Panel, part 2</title>
        <itunes:title>053 - Special Episode: AI in Fiction Panel, part 2</itunes:title>
        <link>https://aiandyou.net/e/053-special-episode-ai-in-fiction-panel-part-2/</link>
                    <comments>https://aiandyou.net/e/053-special-episode-ai-in-fiction-panel-part-2/#comments</comments>        <pubDate>Mon, 21 Jun 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/869dd9b9-0acd-36fb-8564-b3277da243d8</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Episode 53 means we've reached our one-year anniversary! So we're marking the occasion with a lighter episode pair, a panel talk about AI in fiction: books, movies, and TV shows. I am joined by literary and science fiction educator Dr. Robert James, who is also a <a href='https://www.amazon.com/Robert-James/e/B00AAA73SW/'>published expert on the Academy Awards</a>; and by Jim Gifford, <a href='http://nitrosyncretic.com/'>my publisher</a> and the bibliographer of science fiction author Robert Heinlein. All three of us worked together on the creation and production of the 2007 Heinlein Centennial convention.</p>
<p>In part 2 we talk about The Terminator, Star Trek, WandaVision, Her, Ex Machina, Philip K. Dick, and much, much more. This is a non-stop, free-ranging conversation between experts passionate about this field.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-053-4305.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Episode 53 means we've reached our one-year anniversary! So we're marking the occasion with a lighter episode pair, a panel talk about AI in fiction: books, movies, and TV shows. I am joined by literary and science fiction educator Dr. Robert James, who is also a <a href='https://www.amazon.com/Robert-James/e/B00AAA73SW/'>published expert on the Academy Awards</a>; and by Jim Gifford, <a href='http://nitrosyncretic.com/'>my publisher</a> and the bibliographer of science fiction author Robert Heinlein. All three of us worked together on the creation and production of the 2007 Heinlein Centennial convention.</p>
<p>In part 2 we talk about <em>The Terminator</em>, <em>Star Trek</em>, <em>WandaVision</em>, <em>her</em>, <em>Ex Machina</em>, Philip K. Dick, and much, much more. This is a non-stop, free-ranging conversation between experts passionate about this field.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-053-4305.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/3qxach/053_AI-and-You.mp3" length="46757874" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Episode 53 means we've reached our one-year anniversary! So we're marking the occasion with a lighter episode pair, a panel talk about AI in fiction: books, movies, and TV shows. I am joined by literary and science fiction educator Dr. Robert James, who is also a published expert on the Academy Awards; and by Jim Gifford, my publisher and the bibliographer of science fiction author Robert Heinlein. All three of us worked together on the creation and production of the 2007 Heinlein Centennial convention.
In part 2 we talk about The Terminator, Star Trek, WandaVision, Her, Ex Machina, Philip K. Dick, and much, much more. This is a non-stop, free-ranging conversation between experts passionate about this field.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2873</itunes:duration>
                <itunes:episode>53</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>052 - Special Episode: AI in Fiction Panel, part 1</title>
        <itunes:title>052 - Special Episode: AI in Fiction Panel, part 1</itunes:title>
        <link>https://aiandyou.net/e/052-special-episode-ai-in-fiction-panel-part-1/</link>
                    <comments>https://aiandyou.net/e/052-special-episode-ai-in-fiction-panel-part-1/#comments</comments>        <pubDate>Mon, 14 Jun 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/86e91e16-7a52-3f80-a8a2-020b7e850fc2</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Episode 52 means we've reached our one-year anniversary! So we're marking the occasion with a lighter episode pair, a panel talk about AI in fiction: books, movies, and TV shows. I am joined by literary and science fiction educator Dr. Robert James, who is also a <a href='https://www.amazon.com/Robert-James/e/B00AAA73SW/'>published expert on the Academy Awards</a>; and by Jim Gifford, <a href='http://nitrosyncretic.com/'>my publisher</a> and the bibliographer of science fiction author Robert Heinlein. All three of us worked together on the creation and production of the 2007 Heinlein Centennial convention.</p>
<p>In part 1 our conversation spanned the early days of AI fiction up to the '70s, touching on Metropolis, Asimov, Colossus, Heinlein, 2001, and others. This is a non-stop free-ranging conversation between experts passionate about this field.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-052-1176.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Episode 52 means we've reached our one-year anniversary! So we're marking the occasion with a lighter episode pair, a panel talk about AI in fiction: books, movies, and TV shows. I am joined by literary and science fiction educator Dr. Robert James, who is also a <a href='https://www.amazon.com/Robert-James/e/B00AAA73SW/'>published expert on the Academy Awards</a>; and by Jim Gifford, <a href='http://nitrosyncretic.com/'>my publisher</a> and the bibliographer of science fiction author Robert Heinlein. All three of us worked together on the creation and production of the 2007 Heinlein Centennial convention.</p>
<p>In part 1 our conversation spanned the early days of AI fiction up to the '70s, touching on <em>Metropolis</em>, Asimov, <em>Colossus</em>, Heinlein, <em>2001</em>, and others. This is a non-stop free-ranging conversation between experts passionate about this field.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/06/transcript-052-1176.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ki6ntw/052_AI-and-You.mp3" length="38453861" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Episode 52 means we've reached our one-year anniversary! So we're marking the occasion with a lighter episode pair, a panel talk about AI in fiction: books, movies, and TV shows. I am joined by literary and science fiction educator Dr. Robert James, who is also a published expert on the Academy Awards; and by Jim Gifford, my publisher and the bibliographer of science fiction author Robert Heinlein. All three of us worked together on the creation and production of the 2007 Heinlein Centennial convention.
In part 1 our conversation spanned the early days of AI fiction up to the '70s, touching on Metropolis, Asimov, Colossus, Heinlein, 2001, and others. This is a non-stop free-ranging conversation between experts passionate about this field.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2354</itunes:duration>
                <itunes:episode>52</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>051 - Guest: Ryan Abbott, Law Professor and Author, part 2</title>
        <itunes:title>051 - Guest: Ryan Abbott, Law Professor and Author, part 2</itunes:title>
        <link>https://aiandyou.net/e/051-guest-ryan-abbott-law-professor-and-author-part-2/</link>
                    <comments>https://aiandyou.net/e/051-guest-ryan-abbott-law-professor-and-author-part-2/#comments</comments>        <pubDate>Mon, 07 Jun 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ffee46b8-c907-3a93-890a-b5d54df550cc</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>AI is changing the law, and on the leading edge of figuring out how that should happen is Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey in the UK, and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. We will be exploring issues raised in his recent book <a href='https://www.cambridge.org/core/books/reasonable-robot/092E62F0087270F1ADD9F62160F23B5A'>The Reasonable Robot: Artificial Intelligence and the Law</a>. </p>
<p>In the conclusion of the interview, we’ll be talking about self-driving cars, liability and punishment for AI infractions, and rationales for changing our taxation system for AI-generated wealth.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcript-051-6113.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>AI is changing the law, and on the leading edge of figuring out how that should happen is Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey in the UK, and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. We will be exploring issues raised in his recent book <a href='https://www.cambridge.org/core/books/reasonable-robot/092E62F0087270F1ADD9F62160F23B5A'><em>The Reasonable Robot: Artificial Intelligence and the Law</em></a>. </p>
<p>In the conclusion of the interview, we’ll be talking about self-driving cars, liability and punishment for AI infractions, and rationales for changing our taxation system for AI-generated wealth.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcript-051-6113.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/rpjdgq/051_AI-and-You.mp3" length="29178093" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

AI is changing the law, and on the leading edge of figuring out how that should happen is Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey in the UK, and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. We will be exploring issues raised in his recent book The Reasonable Robot: Artificial Intelligence and the Law. 
In the conclusion of the interview, we’ll be talking about self-driving cars, liability and punishment for AI infractions, and rationales for changing our taxation system for AI-generated wealth.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1774</itunes:duration>
                <itunes:episode>51</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>050 - Guest: Ryan Abbott, Law Professor and Author, part 1</title>
        <itunes:title>050 - Guest: Ryan Abbott, Law Professor and Author, part 1</itunes:title>
        <link>https://aiandyou.net/e/050-guest-ryan-abbott-law-professor-and-author-part-1/</link>
                    <comments>https://aiandyou.net/e/050-guest-ryan-abbott-law-professor-and-author-part-1/#comments</comments>        <pubDate>Mon, 31 May 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/6720d812-f075-3a1b-9169-008ca8b4ce8e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>AI is changing the law, and on the leading edge of figuring out how that should happen is Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey in the UK, and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. We will be exploring issues raised in his recent book <a href='https://www.cambridge.org/core/books/reasonable-robot/092E62F0087270F1ADD9F62160F23B5A'>The Reasonable Robot: Artificial Intelligence and the Law</a>. </p>
<p>We’ll be getting into intellectual property and whether an AI should be able to own it, patents and whether an AI can be an inventor of record, and then self-driving cars, with some new takes on the Trolley Problem and on how law and regulation should adapt – or have failed to.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>The June 3 Strategy and Leadership podcast episode referenced in the episode will be <a href='https://podcasts.apple.com/us/podcast/strategy-and-leadership-podcast/id1202449526'>here</a>.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcript-050-9672.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>AI is changing the law, and on the leading edge of figuring out how that should happen is Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey in the UK, and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. We will be exploring issues raised in his recent book <a href='https://www.cambridge.org/core/books/reasonable-robot/092E62F0087270F1ADD9F62160F23B5A'><em>The Reasonable Robot: Artificial Intelligence and the Law</em></a>. </p>
<p>We’ll be getting into intellectual property and whether an AI should be able to own it, patents and whether an AI can be an inventor of record, and then self-driving cars, with some new takes on the Trolley Problem and on how law and regulation should adapt – or have failed to.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>The June 3 Strategy and Leadership podcast episode referenced in the episode will be <a href='https://podcasts.apple.com/us/podcast/strategy-and-leadership-podcast/id1202449526'>here</a>.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcript-050-9672.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/b9pw44/050_AI-and-You.mp3" length="28485998" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

AI is changing the law, and on the leading edge of figuring out how that should happen is Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey in the UK, and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. We will be exploring issues raised in his recent book The Reasonable Robot: Artificial Intelligence and the Law. 
We’ll be getting into intellectual property and whether an AI should be able to own it, patents and whether an AI can be an inventor of record, and then self-driving cars, with some new takes on the Trolley Problem and on how law and regulation should adapt – or have failed to.
All this plus our usual look at today's AI headlines. 
The June 3 Strategy and Leadership podcast episode referenced in the episode will be here.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2419</itunes:duration>
                <itunes:episode>50</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>049 - Guest: Phil D. Hall, Conversational AI Creator, part 2</title>
        <itunes:title>049 - Guest: Phil D. Hall, Conversational AI Creator, part 2</itunes:title>
        <link>https://aiandyou.net/e/049-guest-phil-hall-conversational-ai-creator-part-2/</link>
                    <comments>https://aiandyou.net/e/049-guest-phil-hall-conversational-ai-creator-part-2/#comments</comments>        <pubDate>Mon, 24 May 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8765d18d-4461-312e-91dd-2c527cb91b3a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you think you know what it's like to chat with today's AI, you may change your mind after encountering <a href='https://elzware.com'>Phil D. Hall</a>'s work. He deploys chatbots as part performance art, part anthropological study, part boundary-busting provocation, and as serious tools for business enhancement. In part 2, we expand on his <a href='https://echoborg.com'>Echoborg</a> creation, how it started, and where we might be heading with conversational AI in business and for helping people.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcription-049-7444.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you think you know what it's like to chat with today's AI, you may change your mind after encountering <a href='https://elzware.com'>Phil D. Hall</a>'s work. He deploys chatbots as part performance art, part anthropological study, part boundary-busting provocation, and as serious tools for business enhancement. In part 2, we expand on his <a href='https://echoborg.com'>Echoborg</a> creation, how it started, and where we might be heading with conversational AI in business and for helping people.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcription-049-7444.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qy6mck/049_AI-and-You.mp3" length="30464571" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you think you know what it's like to chat with today's AI, you may change your mind after encountering Phil D. Hall's work. He deploys chatbots as part performance art, part anthropological study, part boundary-busting provocation, and as serious tools for business enhancement. In part 2, we expand on his Echoborg creation, how it started, and where we might be heading with conversational AI in business and for helping people.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1855</itunes:duration>
                <itunes:episode>49</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>048 - Guest: Phil D. Hall, Conversational AI Creator, part 1</title>
        <itunes:title>048 - Guest: Phil D. Hall, Conversational AI Creator, part 1</itunes:title>
        <link>https://aiandyou.net/e/048-guest-phil-hall-conversational-ai-creator-part-1/</link>
                    <comments>https://aiandyou.net/e/048-guest-phil-hall-conversational-ai-creator-part-1/#comments</comments>
        <pubDate>Mon, 17 May 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e57ab526-7de7-3de5-a727-16b4b532b1eb</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you think you know what it's like to chat with today's AI, you may change your mind after encountering <a href='https://elzware.com'>Phil D. Hall</a>'s work. He deploys chatbots as part performance art, part anthropological study, part boundary-busting provocation, and as serious tools for business enhancement. We talk about his <a href='https://echoborg.com'>Echoborg</a> creation and what happened when it met some British parliamentarians...</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcription-048-4707.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>If you think you know what it's like to chat with today's AI, you may change your mind after encountering <a href='https://elzware.com'>Phil D. Hall</a>'s work. He deploys chatbots as part performance art, part anthropological study, part boundary-busting provocation, and as serious tools for business enhancement. We talk about his <a href='https://echoborg.com'>Echoborg</a> creation and what happened when it met some British parliamentarians...</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/05/transcription-048-4707.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kcxn9y/048_AI-and-You.mp3" length="36198553" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

If you think you know what it's like to chat with today's AI, you may change your mind after encountering Phil D. Hall's work. He deploys chatbots as part performance art, part anthropological study, part boundary-busting provocation, and as serious tools for business enhancement. We talk about his Echoborg creation and what happened when it met some British parliamentarians...
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2213</itunes:duration>
                <itunes:episode>48</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>047 - Guest: Rajiv Malhotra, author and historian, part 2</title>
        <itunes:title>047 - Guest: Rajiv Malhotra, author and historian, part 2</itunes:title>
        <link>https://aiandyou.net/e/047-guest-rajiv-malhotra-author-and-historian-part-2/</link>
                    <comments>https://aiandyou.net/e/047-guest-rajiv-malhotra-author-and-historian-part-2/#comments</comments>
        <pubDate>Mon, 10 May 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3a50bc4a-3e80-3ba8-9758-252d54af7f23</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are the benefits of AI equally distributed across countries? Or is it another tool for agents of globalization and imperialism to tighten their grip and shut out the smaller players on the global stage? Rajiv Malhotra has a new book, <a href='https://www.amazon.com/Artificial-Intelligence-Future-Power-Battlegrounds-ebook/dp/B08Q4G6MYD/ref=perlmedic-20'>Artificial Intelligence and the Future of Power: 5 Battlegrounds</a>, that addresses that issue, and draws attention to how India in particular is collateral damage in the struggle for dominance between AI superpowers.</p>
<p>In part 2, we talk about the future of jobs from an equity and inclusion frame, the Timnit Gebru incident at Google and parallels affecting developing countries, and our future with AI seen through an Eastern spiritual perspective.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-047-3441.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are the benefits of AI equally distributed across countries? Or is it another tool for agents of globalization and imperialism to tighten their grip and shut out the smaller players on the global stage? Rajiv Malhotra has a new book, <a href='https://www.amazon.com/Artificial-Intelligence-Future-Power-Battlegrounds-ebook/dp/B08Q4G6MYD/ref=perlmedic-20'>Artificial Intelligence and the Future of Power: 5 Battlegrounds</a>, that addresses that issue, and draws attention to how India in particular is collateral damage in the struggle for dominance between AI superpowers.</p>
<p>In part 2, we talk about the future of jobs from an equity and inclusion frame, the Timnit Gebru incident at Google and parallels affecting developing countries, and our future with AI seen through an Eastern spiritual perspective.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-047-3441.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bwfcy6/047_AI-and-You.mp3" length="30581182" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Are the benefits of AI equally distributed across countries? Or is it another tool for agents of globalization and imperialism to tighten their grip and shut out the smaller players on the global stage? Rajiv Malhotra has a new book, Artificial Intelligence and the Future of Power: 5 Battlegrounds, that addresses that issue, and draws attention to how India in particular is collateral damage in the struggle for dominance between AI superpowers.
In part 2, we talk about the future of jobs from an equity and inclusion frame, the Timnit Gebru incident at Google and parallels affecting developing countries, and our future with AI seen through an Eastern spiritual perspective.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1862</itunes:duration>
                <itunes:episode>47</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>046 - Guest: Rajiv Malhotra, author and historian, part 1</title>
        <itunes:title>046 - Guest: Rajiv Malhotra, author and historian, part 1</itunes:title>
        <link>https://aiandyou.net/e/046-guest-rajiv-malhotra-author-and-historian-part-1/</link>
                    <comments>https://aiandyou.net/e/046-guest-rajiv-malhotra-author-and-historian-part-1/#comments</comments>
        <pubDate>Mon, 03 May 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/55bf575c-ba77-3c74-b913-edd2978400c7</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are the benefits of AI equally distributed across countries? Or is it another tool for agents of globalization and imperialism to tighten their grip and shut out the smaller players on the global stage? Rajiv Malhotra has a new book, <a href='https://www.amazon.com/Artificial-Intelligence-Future-Power-Battlegrounds-ebook/dp/B08Q4G6MYD/ref=perlmedic-20'>Artificial Intelligence and the Future of Power: 5 Battlegrounds</a>, that addresses that issue, and draws attention to how India in particular is collateral damage in the struggle for dominance between AI superpowers.</p>
<p>In this episode, we talk about how Western universalism and Chinese nationalism shape geopolitical structures that AI is now entering into, and how the history and geography of Asia affect them.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-046-3535.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Are the benefits of AI equally distributed across countries? Or is it another tool for agents of globalization and imperialism to tighten their grip and shut out the smaller players on the global stage? Rajiv Malhotra has a new book, <a href='https://www.amazon.com/Artificial-Intelligence-Future-Power-Battlegrounds-ebook/dp/B08Q4G6MYD/ref=perlmedic-20'>Artificial Intelligence and the Future of Power: 5 Battlegrounds</a>, that addresses that issue, and draws attention to how India in particular is collateral damage in the struggle for dominance between AI superpowers.</p>
<p>In this episode, we talk about how Western universalism and Chinese nationalism shape geopolitical structures that AI is now entering into, and how the history and geography of Asia affect them.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-046-3535.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ckh96t/046_AI-and-You.mp3" length="32447787" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Are the benefits of AI equally distributed across countries? Or is it another tool for agents of globalization and imperialism to tighten their grip and shut out the smaller players on the global stage? Rajiv Malhotra has a new book, Artificial Intelligence and the Future of Power: 5 Battlegrounds, that addresses that issue, and draws attention to how India in particular is collateral damage in the struggle for dominance between AI superpowers.
In this episode, we talk about how Western universalism and Chinese nationalism shape geopolitical structures that AI is now entering into, and how the history and geography of Asia affect them.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1979</itunes:duration>
                <itunes:episode>46</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>045 - Guest: Rob May, AI Angel Investor, part 2</title>
        <itunes:title>045 - Guest: Rob May, AI Angel Investor, part 2</itunes:title>
        <link>https://aiandyou.net/e/045-guest-rob-may-ai-angel-investor-part-2/</link>
                    <comments>https://aiandyou.net/e/045-guest-rob-may-ai-angel-investor-part-2/#comments</comments>
        <pubDate>Mon, 26 Apr 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5407018a-3194-3b4c-92b0-bfb76785ecd4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What happens when an engineer becomes an angel investor and thought leader in AI? You get Rob May, former co-founder and CEO of Talla, an AI assistant platform, now General Partner at PJC, an early-stage venture capital firm focused on investing in, supporting, and building relationships with entrepreneurs who are creating the future. Rob also writes the world’s most popular newsletter on artificial intelligence – <a href='https://inside.com/ai'>InsideAI</a>.</p>
<p>In part 2, we talk about emotion AI, whether we're in an AI bubble, and what startups should – and shouldn't – focus on.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-045-2630.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What happens when an engineer becomes an angel investor and thought leader in AI? You get Rob May, former co-founder and CEO of Talla, an AI assistant platform, now General Partner at PJC, an early-stage venture capital firm focused on investing in, supporting, and building relationships with entrepreneurs who are creating the future. Rob also writes the world’s most popular newsletter on artificial intelligence – <a href='https://inside.com/ai'>InsideAI</a>.</p>
<p>In part 2, we talk about emotion AI, whether we're in an AI bubble, and what startups should – and shouldn't – focus on.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-045-2630.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/kqbdge/045_AI-and-You.mp3" length="23952349" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What happens when an engineer becomes an angel investor and thought leader in AI? You get Rob May, former co-founder and CEO of Talla, an AI assistant platform, now General Partner at PJC, an early-stage venture capital firm focused on investing in, supporting, and building relationships with entrepreneurs who are creating the future. Rob also writes the world’s most popular newsletter on artificial intelligence – InsideAI.
In part 2, we talk about emotion AI, whether we're in an AI bubble, and what startups should – and shouldn't – focus on.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1448</itunes:duration>
                <itunes:episode>45</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>044 - Guest: Rob May, AI Angel Investor, part 1</title>
        <itunes:title>044 - Guest: Rob May, AI Angel Investor, part 1</itunes:title>
        <link>https://aiandyou.net/e/044-guest-rob-may-ai-angel-investor-part-1/</link>
                    <comments>https://aiandyou.net/e/044-guest-rob-may-ai-angel-investor-part-1/#comments</comments>
        <pubDate>Mon, 19 Apr 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/49768b1b-175c-34a0-9d59-1e076b958f3e</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What happens when an engineer becomes an angel investor and thought leader in AI? You get Rob May, former co-founder and CEO of Talla, an AI assistant platform, now General Partner at PJC, an early-stage venture capital firm focused on investing in, supporting, and building relationships with entrepreneurs who are creating the future. Rob also writes the world’s most popular newsletter on artificial intelligence – <a href='https://inside.com/ai'>InsideAI</a>.</p>
<p>We talk about the state of startups in brain-computer interfaces, the role of ethical issues in evaluating startups, and just how Rob made the transition from engineer to investor.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-044-9465.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What happens when an engineer becomes an angel investor and thought leader in AI? You get Rob May, former co-founder and CEO of Talla, an AI assistant platform, now General Partner at PJC, an early-stage venture capital firm focused on investing in, supporting, and building relationships with entrepreneurs who are creating the future. Rob also writes the world’s most popular newsletter on artificial intelligence – <a href='https://inside.com/ai'>InsideAI</a>.</p>
<p>We talk about the state of startups in brain-computer interfaces, the role of ethical issues in evaluating startups, and just how Rob made the transition from engineer to investor.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-044-9465.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/5pqre3/044_AI-and-You.mp3" length="31145009" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What happens when an engineer becomes an angel investor and thought leader in AI? You get Rob May, former co-founder and CEO of Talla, an AI assistant platform, now General Partner at PJC, an early-stage venture capital firm focused on investing in, supporting, and building relationships with entrepreneurs who are creating the future. Rob also writes the world’s most popular newsletter on artificial intelligence – InsideAI.
We talk about the state of startups in brain-computer interfaces, the role of ethical issues in evaluating startups, and just how Rob made the transition from engineer to investor.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1897</itunes:duration>
                <itunes:episode>44</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>043 - Guest: David Gerrold, Science Fiction Author and Screenwriter, part 2</title>
        <itunes:title>043 - Guest: David Gerrold, Science Fiction Author and Screenwriter, part 2</itunes:title>
        <link>https://aiandyou.net/e/043-guest-david-gerrold-science-fiction-author-and-screenwriter-part-2/</link>
                    <comments>https://aiandyou.net/e/043-guest-david-gerrold-science-fiction-author-and-screenwriter-part-2/#comments</comments>
        <pubDate>Mon, 12 Apr 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/330bc4b2-2258-3d23-ba87-d56694b12821</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What could it be like inside the mind of an artificial intelligence that has just evolved consciousness? Our guest today has imagined just that.</p>
<p><a href='https://www.gerrold.com/bio/'>David Gerrold</a> has written some of the most widely-read and -viewed science fiction of the last 60 years, starting with classic Star Trek's The Trouble with Tribbles episode and several other episodes of that and other Star Trek, Babylon 5, Twilight Zone, and other TV series, plus novels such as The Man Who Folded Himself, and series including the War Against the Chtorr and the Dingilliad, winning the Hugo and Nebula awards for the novel The Martian Child.</p>
<p>We focused on his novel When HARLIE Was One, about an AI that became conscious, which develops (more thoughtfully than most such stories) how the interaction with the humans around him changed HARLIE... and them.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-043-3484.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What could it be like inside the mind of an artificial intelligence that has just evolved consciousness? Our guest today has imagined just that.</p>
<p><a href='https://www.gerrold.com/bio/'>David Gerrold</a> has written some of the most widely-read and -viewed science fiction of the last 60 years, starting with classic Star Trek's <em>The Trouble with Tribbles</em> episode and several other episodes of that and other Star Trek, Babylon 5, Twilight Zone, and other TV series, plus novels such as The Man Who Folded Himself, and series including the War Against the Chtorr and the Dingilliad, winning the Hugo and Nebula awards for the novel The Martian Child.</p>
<p>We focused on his novel <em>When HARLIE Was One</em>, about an AI that became conscious, which develops (more thoughtfully than most such stories) how the interaction with the humans around him changed HARLIE... and them.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-043-3484.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6xyh7u/043_AI-and-You.mp3" length="34155986" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What could it be like inside the mind of an artificial intelligence that has just evolved consciousness? Our guest today has imagined just that.
David Gerrold has written some of the most widely-read and -viewed science fiction of the last 60 years, starting with classic Star Trek's The Trouble with Tribbles episode and several other episodes of that and other Star Trek, Babylon 5, Twilight Zone, and other TV series, plus novels such as The Man Who Folded Himself, and series including the War Against the Chtorr and the Dingilliad, winning the Hugo and Nebula awards for the novel The Martian Child.
We focused on his novel When HARLIE Was One, about an AI that became conscious, which develops (more thoughtfully than most such stories) how the interaction with the humans around him changed HARLIE... and them.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2085</itunes:duration>
                <itunes:episode>43</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>042 - Guest: David Gerrold, Science Fiction Author and Screenwriter, part 1</title>
        <itunes:title>042 - Guest: David Gerrold, Science Fiction Author and Screenwriter, part 1</itunes:title>
        <link>https://aiandyou.net/e/042-guest-david-gerrold-science-fiction-author-and-screenwriter-part-1/</link>
                    <comments>https://aiandyou.net/e/042-guest-david-gerrold-science-fiction-author-and-screenwriter-part-1/#comments</comments>
        <pubDate>Mon, 05 Apr 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/23f0189d-6cc4-3f82-a6fd-168fcc38932a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What could it be like inside the mind of an artificial intelligence that has just evolved consciousness? Our guest today has imagined just that.</p>
<p><a href='https://www.gerrold.com/bio/'>David Gerrold</a> has written some of the most widely-read and -viewed science fiction of the last 60 years, starting with classic Star Trek's The Trouble with Tribbles episode and several other episodes of that and other Star Trek, Babylon 5, Twilight Zone, and other TV series, plus novels such as The Man Who Folded Himself, and series including the War Against the Chtorr and the Dingilliad, winning the Hugo and Nebula awards for the novel The Martian Child.</p>
<p>We focused on his novel When HARLIE Was One, about an AI that became conscious, which develops (more thoughtfully than most such stories) how the interaction with the humans around him changed HARLIE... and them.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-042-5092.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>What could it be like inside the mind of an artificial intelligence that has just evolved consciousness? Our guest today has imagined just that.</p>
<p><a href='https://www.gerrold.com/bio/'>David Gerrold</a> has written some of the most widely-read and -viewed science fiction of the last 60 years, starting with classic Star Trek's <em>The Trouble with Tribbles</em> episode and several other episodes of that and other Star Trek, Babylon 5, Twilight Zone, and other TV series, plus novels such as The Man Who Folded Himself, and series including the War Against the Chtorr and the Dingilliad, winning the Hugo and Nebula awards for the novel The Martian Child.</p>
<p>We focused on his novel <em>When HARLIE Was One</em>, about an AI that became conscious, which develops (more thoughtfully than most such stories) how the interaction with the humans around him changed HARLIE... and them.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/04/transcription-042-5092.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6x57mk/042_AI-and-You.mp3" length="23001527" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

What could it be like inside the mind of an artificial intelligence that has just evolved consciousness? Our guest today has imagined just that.
David Gerrold has written some of the most widely-read and -viewed science fiction of the last 60 years, starting with classic Star Trek's The Trouble with Tribbles episode and several other episodes of that and other Star Trek, Babylon 5, Twilight Zone, and other TV series, plus novels such as The Man Who Folded Himself, and series including the War Against the Chtorr and the Dingilliad, winning the Hugo and Nebula awards for the novel The Martian Child.
We focused on his novel When HARLIE Was One, about an AI that became conscious, which develops (more thoughtfully than most such stories) how the interaction with the humans around him changed HARLIE... and them.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1932</itunes:duration>
                <itunes:episode>42</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>041 - Guest: Peter Asaro, Autonomous Weapon Activist, part 2</title>
        <itunes:title>041 - Guest: Peter Asaro, Autonomous Weapon Activist, part 2</itunes:title>
        <link>https://aiandyou.net/e/041-guest-peter-asaro-autonomous-weapon-activist-part-2/</link>
                    <comments>https://aiandyou.net/e/041-guest-peter-asaro-autonomous-weapon-activist-part-2/#comments</comments>        <pubDate>Mon, 29 Mar 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/57ff96f7-1239-3cbd-8f6c-fe5546c6e712</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but <a href='https://peterasaro.org'>Peter Asaro</a> tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the <a href='https://www.icrac.net/'>International Committee for Robot Arms Control</a>.</p>
<p>In part 2 of our interview we talk about that committee and related organizations, what they do to elevate our thinking and governance of autonomous weapons and how they do it, and we discuss the famous <a href='https://www.youtube.com/watch?v=9CO6M2HsoIA'>Slaughterbots</a> video, plus Peter's documentary, Love Machine.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-041-0592.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but <a href='https://peterasaro.org'>Peter Asaro</a> tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the <a href='https://www.icrac.net/'>International Committee for Robot Arms Control</a>.</p>
<p>In part 2 of our interview we talk about that committee and related organizations, what they do to elevate our thinking and governance of autonomous weapons and how they do it, and we discuss the famous <a href='https://www.youtube.com/watch?v=9CO6M2HsoIA'>Slaughterbots</a> video, plus Peter's documentary, <em>Love Machine</em>.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-041-0592.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/cfbd4n/041_AI-and-You.mp3" length="26122811" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but Peter Asaro tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the International Committee for Robot Arms Control.
In part 2 of our interview we talk about that committee and related organizations, what they do to elevate our thinking and governance of autonomous weapons and how they do it, and we discuss the famous Slaughterbots video, plus Peter's documentary, Love Machine.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1583</itunes:duration>
                <itunes:episode>41</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>040 - Guest: Peter Asaro, Autonomous Weapon Activist, part 1</title>
        <itunes:title>040 - Guest: Peter Asaro, Autonomous Weapon Activist, part 1</itunes:title>
        <link>https://aiandyou.net/e/040-guest-peter-asaro-autonomous-weapon-activist-part-1/</link>
                    <comments>https://aiandyou.net/e/040-guest-peter-asaro-autonomous-weapon-activist-part-1/#comments</comments>        <pubDate>Mon, 22 Mar 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1fffcbb5-df85-3da0-ad53-7cfacc0d14f3</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but Peter Asaro tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the International Committee for Robot Arms Control.</p>
<p>We talk about just what distinctions are useful when thinking about the regulation of autonomous weapons, seen through the lens of his precise and highly informed thinking.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-040-1323.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but Peter Asaro tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the International Committee for Robot Arms Control.</p>
<p>We talk about just what distinctions are useful when thinking about the regulation of autonomous weapons, seen through the lens of his precise and highly informed thinking.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-040-1323.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qc9tab/040_AI-and-You.mp3" length="31917397" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but Peter Asaro tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the International Committee for Robot Arms Control.
We talk about just what distinctions are useful when thinking about the regulation of autonomous weapons, seen through the lens of his precise and highly informed thinking.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1945</itunes:duration>
                <itunes:episode>40</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>039 - Guest: Beth Singler, Anthropologist and Filmmaker, part 2</title>
        <itunes:title>039 - Guest: Beth Singler, Anthropologist and Filmmaker, part 2</itunes:title>
        <link>https://aiandyou.net/e/039-guest-beth-singler-anthropologist-and-filmmaker-part-2/</link>
                    <comments>https://aiandyou.net/e/039-guest-beth-singler-anthropologist-and-filmmaker-part-2/#comments</comments>        <pubDate>Mon, 15 Mar 2021 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2f324879-0973-3110-8d6e-153353ed8009</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>When you combine anthropologist, filmmaker, and geek, you get <a href='https://bvlsingler.com'>Beth Singler</a>, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: <a href='https://www.youtube.com/watch?v=ODw5Eu6VbGc'>Pain in the Machine</a> and its sequels, <a href='https://www.youtube.com/watch?v=-wA4tAEKhj0'>Friend in the Machine</a>, <a href='https://www.youtube.com/watch?v=A2PD2uEUqvI'>Good in the Machine</a>, and <a href='https://www.youtube.com/watch?v=YpHy73N1BgE&feature=youtu.be'>Ghost in the Machine</a>.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-039-4963.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>When you combine anthropologist, filmmaker, and geek, you get <a href='https://bvlsingler.com'>Beth Singler</a>, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: <a href='https://www.youtube.com/watch?v=ODw5Eu6VbGc'><em>Pain in the Machine</em></a> and its sequels, <a href='https://www.youtube.com/watch?v=-wA4tAEKhj0'><em>Friend in the Machine</em></a>, <a href='https://www.youtube.com/watch?v=A2PD2uEUqvI'><em>Good in the Machine</em></a>, and <a href='https://www.youtube.com/watch?v=YpHy73N1BgE&feature=youtu.be'><em>Ghost in the Machine</em></a>.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-039-4963.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/af7cx8/039_AI-and-You.mp3" length="32390945" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

When you combine anthropologist, filmmaker, and geek, you get Beth Singler, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: Pain in the Machine and its sequels, Friend in the Machine, Good in the Machine, and Ghost in the Machine.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1975</itunes:duration>
                <itunes:episode>39</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>038 - Guest: Beth Singler, Anthropologist and Filmmaker, part 1</title>
        <itunes:title>038 - Guest: Beth Singler, Anthropologist and Filmmaker, part 1</itunes:title>
        <link>https://aiandyou.net/e/038-guest-beth-singler/</link>
                    <comments>https://aiandyou.net/e/038-guest-beth-singler/#comments</comments>        <pubDate>Mon, 08 Mar 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2c0fbbd8-0c65-3ab9-8d9f-fae6e7d1c849</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>When you combine anthropologist, filmmaker, and geek, you get <a href='https://bvlsingler.com'>Beth Singler</a>, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: <a href='https://www.youtube.com/watch?v=ODw5Eu6VbGc'>Pain in the Machine</a> and its sequels, <a href='https://www.youtube.com/watch?v=-wA4tAEKhj0'>Friend in the Machine</a>, <a href='https://www.youtube.com/watch?v=A2PD2uEUqvI'>Good in the Machine</a>, and <a href='https://www.youtube.com/watch?v=YpHy73N1BgE&feature=youtu.be'>Ghost in the Machine</a>.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-038-4207.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>When you combine anthropologist, filmmaker, and geek, you get <a href='https://bvlsingler.com'>Beth Singler</a>, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: <a href='https://www.youtube.com/watch?v=ODw5Eu6VbGc'><em>Pain in the Machine</em></a> and its sequels, <a href='https://www.youtube.com/watch?v=-wA4tAEKhj0'><em>Friend in the Machine</em></a>, <a href='https://www.youtube.com/watch?v=A2PD2uEUqvI'><em>Good in the Machine</em></a>, and <a href='https://www.youtube.com/watch?v=YpHy73N1BgE&feature=youtu.be'><em>Ghost in the Machine</em></a>.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/03/transcription-038-4207.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/wbs2xf/038_AI-and-You.mp3" length="33585472" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

When you combine anthropologist, filmmaker, and geek, you get Beth Singler, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: Pain in the Machine and its sequels, Friend in the Machine, Good in the Machine, and Ghost in the Machine.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2050</itunes:duration>
                <itunes:episode>38</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>037 - Guest: Steve Shwartz, AI entrepreneur/investor, part 2</title>
        <itunes:title>037 - Guest: Steve Shwartz, AI entrepreneur/investor, part 2</itunes:title>
        <link>https://aiandyou.net/e/037-guest-steve-shwartz-ai-entrepreneurinvestor-part-2/</link>
                    <comments>https://aiandyou.net/e/037-guest-steve-shwartz-ai-entrepreneurinvestor-part-2/#comments</comments>        <pubDate>Mon, 01 Mar 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/06a30c6c-d8e4-3381-85a1-6a7ce309b22b</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Steve Shwartz is a serial software entrepreneur and investor with a PhD in cognitive science from Johns Hopkins University; he did postdoctoral research in AI at Yale. He is the author of the new book <a href='https://www.amazon.com/Robots-Killer-Computers-Other-Myths/dp/1735424536'>Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity</a>, published by Fast Company Press on February 9. In part 2 of our interview, we talk about "artificial intelligence and natural stupidity" (we had to get that one in eventually, didn't we?), impacts on employment and Steve's take on the Oxford Martin study, and... common sense.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-037-4590.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Steve Shwartz is a serial software entrepreneur and investor with a PhD in cognitive science from Johns Hopkins University; he did postdoctoral research in AI at Yale. He is the author of the new book <a href='https://www.amazon.com/Robots-Killer-Computers-Other-Myths/dp/1735424536'><em>Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity</em></a>, published by Fast Company Press on February 9. In part 2 of our interview, we talk about "artificial intelligence and natural stupidity" (we had to get that one in eventually, didn't we?), impacts on employment and Steve's take on the Oxford Martin study, and... common sense.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-037-4590.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/e8eyic/037_AI-and-You.mp3" length="25522204" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Steve Shwartz is a serial software entrepreneur and investor with a PhD in cognitive science from Johns Hopkins University; he did postdoctoral research in AI at Yale. He is the author of the new book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, published by Fast Company Press on February 9. In part 2 of our interview, we talk about "artificial intelligence and natural stupidity" (we had to get that one in eventually, didn't we?), impacts on employment and Steve's take on the Oxford Martin study, and... common sense.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1546</itunes:duration>
                <itunes:episode>37</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>036 - Guest: Steve Shwartz, AI entrepreneur/investor, part 1</title>
        <itunes:title>036 - Guest: Steve Shwartz, AI entrepreneur/investor, part 1</itunes:title>
        <link>https://aiandyou.net/e/036-guest-steve-shwartz-ai-entrepreneurinvestor-part-1/</link>
                    <comments>https://aiandyou.net/e/036-guest-steve-shwartz-ai-entrepreneurinvestor-part-1/#comments</comments>        <pubDate>Mon, 22 Feb 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3014cb4a-bae6-32f4-a070-73826af56569</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Steve Shwartz is a serial software entrepreneur and investor with a PhD in cognitive science from Johns Hopkins University; he did postdoctoral research in AI at Yale. He is the author of the new book <a href='https://www.amazon.com/Robots-Killer-Computers-Other-Myths/dp/1735424536'>Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity</a>, published by Fast Company Press on February 9. We talk about bias, explainability, and other current problems with machine learning, plus... horse racing.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-036-2327.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Steve Shwartz is a serial software entrepreneur and investor with a PhD in cognitive science from Johns Hopkins University; he did postdoctoral research in AI at Yale. He is the author of the new book <a href='https://www.amazon.com/Robots-Killer-Computers-Other-Myths/dp/1735424536'><em>Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity</em></a>, published by Fast Company Press on February 9. We talk about bias, explainability, and other current problems with machine learning, plus... horse racing.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-036-2327.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/jph7ut/036_AI-and-You.mp3" length="28599219" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Steve Shwartz is a serial software entrepreneur and investor with a PhD in cognitive science from Johns Hopkins University; he did postdoctoral research in AI at Yale. He is the author of the new book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, published by Fast Company Press on February 9. We talk about bias, explainability, and other current problems with machine learning, plus... horse racing.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1738</itunes:duration>
                <itunes:episode>36</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>035 - Guest: Michael Wooldridge, Oxford University Professor, part 2</title>
        <itunes:title>035 - Guest: Michael Wooldridge, Oxford University Professor, part 2</itunes:title>
        <link>https://aiandyou.net/e/035-guest-michael-wooldridge-oxford-university-professor-part-2/</link>
                    <comments>https://aiandyou.net/e/035-guest-michael-wooldridge-oxford-university-professor-part-2/#comments</comments>        <pubDate>Mon, 15 Feb 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/609782f5-f7b6-333e-a4ff-baa676930ebc</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We continue the interview with Michael Wooldridge, head of the Oxford University Computer Science department and author of A Brief History of Artificial Intelligence, an introductory look at AI, published Jan 2021 by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the Ladybird Expert Guide to Artificial Intelligence. We cover a huge amount of ground, from autonomous weapons and self-driving cars, to Michael's work on multi-agent systems and the potential for my Siri to talk to your Alexa.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-035-6485.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We continue the interview with Michael Wooldridge, head of the Oxford University Computer Science department and author of <em>A Brief History of Artificial Intelligence</em>, an introductory look at AI, published Jan 2021 by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the <em>Ladybird Expert Guide to Artificial Intelligence</em>. We cover a huge amount of ground, from autonomous weapons and self-driving cars, to Michael's work on multi-agent systems and the potential for my Siri to talk to your Alexa.</p>
<p>All this plus our usual look at today's AI headlines. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-035-6485.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
<p> </p>
<p> </p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/iyr7k2/035_AI-and-You.mp3" length="39517985" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We continue the interview with Michael Wooldridge, head of the Oxford University Computer Science department and author of A Brief History of Artificial Intelligence, an introductory look at AI, published Jan 2021 by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the Ladybird Expert Guide to Artificial Intelligence. We cover a huge amount of ground, from autonomous weapons and self-driving cars, to Michael's work on multi-agent systems and the potential for my Siri to talk to your Alexa.
All this plus our usual look at today's AI headlines. 
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2420</itunes:duration>
                <itunes:episode>35</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>034 - Guest: Michael Wooldridge, Oxford University Professor, part 1</title>
        <itunes:title>034 - Guest: Michael Wooldridge, Oxford University Professor, part 1</itunes:title>
        <link>https://aiandyou.net/e/034-guest-michael-wooldridge-oxford-university-professor-part-1/</link>
                    <comments>https://aiandyou.net/e/034-guest-michael-wooldridge-oxford-university-professor-part-1/#comments</comments>        <pubDate>Mon, 08 Feb 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d6112deb-ba6b-310a-9d70-b704060786bf</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>My guest this week is Michael Wooldridge, head of the Oxford University Computer Science department and author of A Brief History of Artificial Intelligence, an introductory look at AI, published last month by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the Ladybird Expert Guide to Artificial Intelligence. We cover a huge amount of ground, from the changes in AI to ways of judging artificial general intelligence, to challenges that AI faces in dealing with the real world.</p>
<p>All this plus our usual look at today's AI headlines. Here's the link to my live class mentioned in the episode: <a href='https://continuingstudies.uvic.ca/humanities-and-social-sciences/courses/artificial-intelligence-and-you'>https://bit.ly/UVicAIandYou</a></p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-034-9934.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>My guest this week is Michael Wooldridge, head of the Oxford University Computer Science department and author of <em>A Brief History of Artificial Intelligence</em>, an introductory look at AI, published last month by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the <em>Ladybird Expert Guide to Artificial Intelligence</em>. We cover a huge amount of ground, from the changes in AI to ways of judging artificial general intelligence, to challenges that AI faces in dealing with the real world.</p>
<p>All this plus our usual look at today's AI headlines.  Here's the link to my live class mentioned in the episode: <a href='https://continuingstudies.uvic.ca/humanities-and-social-sciences/courses/artificial-intelligence-and-you'>https://bit.ly/UVicAIandYou</a></p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/02/transcription-034-9934.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/jbmy6b/034_AI-and-You.mp3" length="29975401" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

My guest this week is Michael Wooldridge, head of the Oxford University Computer Science department and author of A Brief History of Artificial Intelligence, an introductory look at AI, published last month by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the Ladybird Expert Guide to Artificial Intelligence. We cover a huge amount of ground, from the changes in AI to ways of judging artificial general intelligence, to challenges that AI faces in dealing with the real world.
All this plus our usual look at today's AI headlines.  Here's the link to my live class mentioned in the episode: https://bit.ly/UVicAIandYou
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2449</itunes:duration>
                <itunes:episode>34</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>033 - What Is AI? A quick tour of the tech</title>
        <itunes:title>033 - What Is AI? A quick tour of the tech</itunes:title>
        <link>https://aiandyou.net/e/033-what-is-ai-a-quick-tour-of-the-tech/</link>
        <comments>https://aiandyou.net/e/033-what-is-ai-a-quick-tour-of-the-tech/#comments</comments>
        <pubDate>Mon, 01 Feb 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/11b61489-b0c7-393d-9fb0-96a33ab29178</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>"What is AI?" That question is one of those posed in the opening credits of this podcast, and in this episode, I'm going to give you a whistle-stop tour of what AI is. No computer experience required; if you've no idea how AI is built and what makes it tick, this will get you off to a good start. If you've already got some chops in computer software, then this episode may help you explain AI to your friends. I'll go from the beginnings of GOFAI to the latest capsule networks, talking about how they're built and some of their limitations.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-033-2663.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>"What <em>is</em> AI?" That question is one of those posed in the opening credits of this podcast, and in this episode, I'm going to give you a whistle-stop tour of what AI is. No computer experience required; if you've no idea how AI is built and what makes it tick, this will get you off to a good start. If you've already got some chops in computer software, then this episode may help you explain AI to your friends. I'll go from the beginnings of GOFAI to the latest capsule networks, talking about how they're built and some of their limitations.</p>
<p>All this plus our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-033-2663.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xcs99w/033_AI-and-You.mp3" length="33491013" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

"What is AI?" That question is one of those posed in the opening credits of this podcast, and in this episode, I'm going to give you a whistle-stop tour of what AI is. No computer experience required; if you've no idea how AI is built and what makes it tick, this will get you off to a good start. If you've already got some chops in computer software, then this episode may help you explain AI to your friends. I'll go from the beginnings of GOFAI to the latest capsule networks, talking about how they're built and some of their limitations.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2044</itunes:duration>
                <itunes:episode>33</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>032 - TEDx, Deconstructed: Building and Elaborating on the Talk</title>
        <itunes:title>032 - TEDx, Deconstructed: Building and Elaborating on the Talk</itunes:title>
        <link>https://aiandyou.net/e/032-tedx-deconstructed-building-and-elaborating-on-the-talk/</link>
        <comments>https://aiandyou.net/e/032-tedx-deconstructed-building-and-elaborating-on-the-talk/#comments</comments>
        <pubDate>Mon, 25 Jan 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/960dbdb3-00a8-3901-b769-d6afad1c8f88</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>I promised to expand on my last TEDx talk (<a href='https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai'>https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai</a>), and this is that episode. If you're an aspiring public speaker, you'll hear what went into creating the talk, the inspirations and reasons behind every word, the cycle of how it was built and modified in response to feedback.</p>
<p>If you're interested in the ideas that it raised, this episode expands on those and gives you more insight into how AI is setting the pace of life and what that means. Are we on an unsustainable path? How would we get off it?</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-032-4237.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>I promised to expand on my last TEDx talk (<a href='https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai'>https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai</a>), and this is that episode. If you're an aspiring public speaker, you'll hear what went into creating the talk, the inspirations and reasons behind every word, the cycle of how it was built and modified in response to feedback.</p>
<p>If you're interested in the ideas that it raised, this episode expands on those and gives you more insight into how AI is setting the pace of life and what that means. Are we on an unsustainable path? How would we get off it?</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-032-4237.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/yra78n/032_AI-and-You.mp3" length="29460215" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

I promised to expand on my last TEDx talk (https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai), and this is that episode. If you're an aspiring public speaker, you'll hear what went into creating the talk, the inspirations and reasons behind every word, the cycle of how it was built and modified in response to feedback.
If you're interested in the ideas that it raised, this episode expands on those and gives you more insight into how AI is setting the pace of life and what that means. Are we on an unsustainable path? How would we get off it?
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1792</itunes:duration>
                <itunes:episode>32</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>031 - Guest: Ted Parson, AI Law Professor, part 2</title>
        <itunes:title>031 - Guest: Ted Parson, AI Law Professor, part 2</itunes:title>
        <link>https://aiandyou.net/e/031-guest-ted-parson-ai-law-professor-part-2/</link>
        <comments>https://aiandyou.net/e/031-guest-ted-parson-ai-law-professor-part-2/#comments</comments>
        <pubDate>Mon, 18 Jan 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5e3c72b5-17e5-3064-9b09-2d6ef7ff8dc6</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of artificial intelligence and law... you'll find Dr. Edward Parson. He's the Dan and Rae Emmett Professor of Environmental Law and director of the <a href='https://aipulse.org/'>AI-PULSE</a> project at UCLA, which conducts “interdisciplinary research and innovative programming to study how technological advances and scientific knowledge and uncertainties influence law and policy making, and how their impacts can be managed to advance human and societal well-being.” In part 2, we talk about regulation vs. the Trolley Problem, Ted's work on longer-term consequences of AI, and what it's like to work on a project like PULSE.</p>
<p>All this plus the latest headlines in AI!</p>
<p>Here is the link to my University of Victoria continuing studies course on AI, as mentioned in the episode: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a>. Signup deadline: Feb 6.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-031-8882.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of artificial intelligence and <em>law</em>... you'll find Dr. Edward Parson. He's the Dan and Rae Emmett Professor of Environmental Law and director of the <a href='https://aipulse.org/'>AI-PULSE</a> project at UCLA, which conducts “interdisciplinary research and innovative programming to study how technological advances and scientific knowledge and uncertainties influence law and policy making, and how their impacts can be managed to advance human and societal well-being.” In part 2, we talk about regulation vs. the Trolley Problem, Ted's work on longer-term consequences of AI, and what it's like to work on a project like PULSE.</p>
<p>All this plus the latest headlines in AI!</p>
<p>Here is the link to my University of Victoria continuing studies course on AI, as mentioned in the episode: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a>. Signup deadline: Feb 6.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-031-8882.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/cqycsq/031_AI-and-You.mp3" length="28512901" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

At the intersection of artificial intelligence and law... you'll find Dr. Edward Parson. He's the Dan and Rae Emmett Professor of Environmental Law and director of the AI-PULSE project at UCLA, which conducts “interdisciplinary research and innovative programming to study how technological advances and scientific knowledge and uncertainties influence law and policy making, and how their impacts can be managed to advance human and societal well-being.” In part 2, we talk about regulation vs. the Trolley Problem, Ted's work on longer-term consequences of AI, and what it's like to work on a project like PULSE.
All this plus the latest headlines in AI!
Here is the link to my University of Victoria continuing studies course on AI, as mentioned in the episode: https://bit.ly/UVicAIandYou . Signup deadline: Feb 6.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2259</itunes:duration>
                <itunes:episode>31</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>030 - Guest: Ted Parson, AI Law Professor, part 1</title>
        <itunes:title>030 - Guest: Ted Parson, AI Law Professor, part 1</itunes:title>
        <link>https://aiandyou.net/e/030-guest-ted-parson-ai-law-professor-part-1/</link>
        <comments>https://aiandyou.net/e/030-guest-ted-parson-ai-law-professor-part-1/#comments</comments>
        <pubDate>Mon, 11 Jan 2021 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/87940c7e-7bfd-3890-80e1-a4c27662c991</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of artificial intelligence and law... you'll find Dr. Edward Parson. He's the Dan and Rae Emmett Professor of Environmental Law and director of the <a href='https://aipulse.org/'>AI-PULSE</a> project at UCLA, which conducts “interdisciplinary research and innovative programming to study how technological advances and scientific knowledge and uncertainties influence law and policy making, and how their impacts can be managed to advance human and societal well-being.” We talk about the work of PULSE, sources of bias in AI, and the use of AI in law.</p>
<p>All this plus the latest headlines in AI!</p>
<p>Here is the link to my University of Victoria continuing studies course on AI, as mentioned in the episode: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a>. Signup deadline: Feb 6.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-030-7339.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>At the intersection of artificial intelligence and <em>law</em>... you'll find Dr. Edward Parson. He's the Dan and Rae Emmett Professor of Environmental Law and director of the <a href='https://aipulse.org/'>AI-PULSE</a> project at UCLA, which conducts “interdisciplinary research and innovative programming to study how technological advances and scientific knowledge and uncertainties influence law and policy making, and how their impacts can be managed to advance human and societal well-being.” We talk about the work of PULSE, sources of bias in AI, and the use of AI in law.</p>
<p>All this plus the latest headlines in AI!</p>
<p>Here is the link to my University of Victoria continuing studies course on AI, as mentioned in the episode: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a>. Signup deadline: Feb 6.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2021/01/transcription-030-7339.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ysnp7k/030_AI-and-You.mp3" length="25524918" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

At the intersection of artificial intelligence and law... you'll find Dr. Edward Parson. He's the Dan and Rae Emmett Professor of Environmental Law and director of the AI-PULSE project at UCLA, which conducts “interdisciplinary research and innovative programming to study how technological advances and scientific knowledge and uncertainties influence law and policy making, and how their impacts can be managed to advance human and societal well-being.” We talk about the work of PULSE, sources of bias in AI, and the use of AI in law.
All this plus the latest headlines in AI!
Here is the link to my University of Victoria continuing studies course on AI, as mentioned in the episode: https://bit.ly/UVicAIandYou . Signup deadline: Feb 6.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2021</itunes:duration>
                <itunes:episode>30</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>029 - Guest: Katie King, AI Business Consultant, part 2</title>
        <itunes:title>029 - Guest: Katie King, AI Business Consultant, part 2</itunes:title>
        <link>https://aiandyou.net/e/029-guest-katie-king-ai-business-consultant-part2/</link>
        <comments>https://aiandyou.net/e/029-guest-katie-king-ai-business-consultant-part2/#comments</comments>
        <pubDate>Mon, 04 Jan 2021 00:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/1de2f55f-7de3-3132-9dc5-be7c93c57807</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How does a business successfully deploy #AI, especially if it is new to the technology? Katie King is a UK-based AI consultant and marketing expert, and a member of the All-Party Parliamentary Group task force on AI (APPG-AI). She is the author of the book <a href='https://www.amazon.com/Using-Artificial-Intelligence-Marketing-Competitive/dp/0749483393'>Using Artificial Intelligence in Marketing: How to Harness AI and Maintain the Competitive Edge</a>, and CEO of the <a href='https://www.aiinbusiness.co.uk/our-team'>AI in Business</a> consultancy, whose work takes her to locations like Dubai and Singapore and in which I am a partner.</p>
<p>In part 2, we talk about AI education, how it should shift, and what students entering the job market should do.</p>
<p>Reach Katie via <a href='https://www.linkedin.com/in/katieeking/'>LinkedIn</a>, <a href='https://twitter.com/katieeking'>Twitter</a>, and <a href='https://www.instagram.com/katiekingmba'>Instagram</a>.</p>
<p>All this plus the latest headlines in AI!</p>
<p>Here is a teaser opportunity for you readers of show notes: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a> so you can jump in ahead of the announcement in episode 30!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/12/transcription-029-4952.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How does a business successfully deploy #AI, especially if it is new to the technology? Katie King is a UK-based AI consultant and marketing expert, and a member of the All-Party Parliamentary Group task force on AI (APPG-AI). She is the author of the book <a href='https://www.amazon.com/Using-Artificial-Intelligence-Marketing-Competitive/dp/0749483393'><em>Using Artificial Intelligence in Marketing: How to Harness AI and Maintain the Competitive Edge</em></a>, and CEO of the <a href='https://www.aiinbusiness.co.uk/our-team'>AI in Business</a> consultancy, whose work takes her to locations like Dubai and Singapore and in which I am a partner.</p>
<p>In part 2, we talk about AI education, how it should shift, and what students entering the job market should do.</p>
<p>Reach Katie via <a href='https://www.linkedin.com/in/katieeking/'>LinkedIn</a>, <a href='https://twitter.com/katieeking'>Twitter</a>, and <a href='https://www.instagram.com/katiekingmba'>Instagram</a>.</p>
<p>All this plus the latest headlines in AI!</p>
<p>Here is a teaser opportunity for you readers of show notes: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a> so you can jump in ahead of the announcement in episode 30!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/12/transcription-029-4952.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/xrujj2/029_AI-and-You.mp3" length="20333713" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How does a business successfully deploy #AI, especially if it is new to the technology? Katie King is a UK-based AI consultant and marketing expert, and a member of the All-Party Parliamentary Group task force on AI (APPG-AI). She is the author of the book Using Artificial Intelligence in Marketing: How to Harness AI and Maintain the Competitive Edge, and CEO of the AI in Business consultancy, whose work takes her to locations like Dubai and Singapore and in which I am a partner.
In part 2, we talk about AI education, how it should shift, and what students entering the job market should do.
Reach Katie via LinkedIn, Twitter, and Instagram.
All this plus the latest headlines in AI!
Here is a teaser opportunity for you readers of show notes: https://bit.ly/UVicAIandYou so you can jump in ahead of the announcement in episode 30!
Transcript and URLs referenced at HumanCusp Blog.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1270</itunes:duration>
                <itunes:episode>29</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>028 - Guest: Katie King, AI Business Consultant, part 1</title>
        <itunes:title>028 - Guest: Katie King, AI Business Consultant, part 1</itunes:title>
        <link>https://aiandyou.net/e/028-guest-katie-king-ai-business-consultant-part-1/</link>
        <comments>https://aiandyou.net/e/028-guest-katie-king-ai-business-consultant-part-1/#comments</comments>
        <pubDate>Mon, 28 Dec 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/07bed770-8d6b-351f-8193-402014c79660</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How does a business successfully deploy #AI, especially if it is new to the technology? Katie King is a UK-based AI consultant and marketing expert, and a member of the All-Party Parliamentary Group task force on AI (APPG-AI). She is the author of the book <a href='https://www.amazon.com/Using-Artificial-Intelligence-Marketing-Competitive/dp/0749483393'>Using Artificial Intelligence in Marketing: How to Harness AI and Maintain the Competitive Edge</a>, and CEO of the <a href='https://www.aiinbusiness.co.uk/our-team'>AI in Business</a> consultancy, whose work takes her to locations like Dubai and Singapore and in which I am a partner.</p>
<p>We talk about how she got into AI consulting, trends that apply to businesses looking to adopt AI, and her work on the APPG-AI.</p>
<p>Reach Katie via <a href='https://www.linkedin.com/in/katieeking/'>LinkedIn</a>, <a href='https://twitter.com/katieeking'>Twitter</a>, and <a href='https://www.instagram.com/katiekingmba'>Instagram</a>.</p>
<p>All this plus predictions for 2021!</p>
<p>Here is a teaser opportunity for you readers of show notes: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a> so you can jump in ahead of the announcement in episode 30!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/12/transcription-028-6738.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How does a business successfully deploy #AI, especially if it is new to the technology? Katie King is a UK-based AI consultant and marketing expert, and a member of the All-Party Parliamentary Group task force on AI (APPG-AI). She is the author of the book <a href='https://www.amazon.com/Using-Artificial-Intelligence-Marketing-Competitive/dp/0749483393'><em>Using Artificial Intelligence in Marketing: How to Harness AI and Maintain the Competitive Edge</em></a>, and CEO of the <a href='https://www.aiinbusiness.co.uk/our-team'>AI in Business</a> consultancy, whose work takes her to locations like Dubai and Singapore and in which I am a partner.</p>
<p>We talk about how she got into AI consulting, trends that apply to businesses looking to adopt AI, and her work on the APPG-AI.</p>
<p>Reach Katie via <a href='https://www.linkedin.com/in/katieeking/'>LinkedIn</a>, <a href='https://twitter.com/katieeking'>Twitter</a>, and <a href='https://www.instagram.com/katiekingmba'>Instagram</a>.</p>
<p>All this plus predictions for 2021!</p>
<p>Here is a teaser opportunity for you readers of show notes: <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a> so you can jump in ahead of the announcement in episode 30!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/12/transcription-028-6738.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/q6s636/028_AI-and-You.mp3" length="32187871" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How does a business successfully deploy #AI, especially if it is new to the technology? Katie King is a UK-based AI consultant and marketing expert, and a member of the All-Party Parliamentary Group task force on AI (APPG-AI). She is the author of the book Using Artificial Intelligence in Marketing: How to Harness AI and Maintain the Competitive Edge, and CEO of the AI in Business consultancy, whose work takes her to locations like Dubai and Singapore and in which I am a partner.
We talk about how she got into AI consulting, trends that apply to businesses looking to adopt AI, and her work on the APPG-AI.
Reach Katie via LinkedIn, Twitter, and Instagram.
All this plus predictions for 2021!
Here is a teaser opportunity for you readers of show notes: https://bit.ly/UVicAIandYou so you can jump in ahead of the announcement in episode 30!
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2011</itunes:duration>
                <itunes:episode>28</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>027 - A TEDx Journey: Getting to the Red Dot</title>
        <itunes:title>027 - A TEDx Journey: Getting to the Red Dot</itunes:title>
        <link>https://aiandyou.net/e/027-a-tedx-journey-getting-to-the-red-dot/</link>
        <comments>https://aiandyou.net/e/027-a-tedx-journey-getting-to-the-red-dot/#comments</comments>
        <pubDate>Mon, 21 Dec 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8bd13d8c-54d1-3e0c-9375-d07b87a69c3a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>TEDx: The franchise of the TED organization and the venue for speakers on all kinds of thought-provoking topics. What is it like to apply for and give a TEDx talk? This episode is a little different (it's the holidays), and goes into my experiences getting to the TEDx stage (three times) and since then, evaluating technology speakers for one of the premier TEDx events in North America. That TEDx talk is online at <a href='https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai'>https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai</a>.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/12/transcript-027-1039.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>TEDx: The franchise of the TED organization and the venue for speakers on all kinds of thought-provoking topics. What is it like to apply for and give a TEDx talk? This episode is a little different (it's the holidays), and goes into my experiences getting to the TEDx stage (three times) and since then, evaluating technology speakers for one of the premier TEDx events in North America. That TEDx talk is online at <a href='https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai'>https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai</a>.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/12/transcript-027-1039.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/2iw82i/027_AI-and-You.mp3" length="34159869" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

TEDx: The franchise of the TED organization and the venue for speakers on all kinds of thought-provoking topics. What is it like to apply for and give a TEDx talk? This episode is a little different (it's the holidays), and goes into my experiences getting to the TEDx stage (three times) and since then, evaluating technology speakers for one of the premier TEDx events in North America. That TEDx talk is online at https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai.
All that and our usual look at today's headlines in AI.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2086</itunes:duration>
                <itunes:episode>27</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>026 - Guest: Thomas Homer-Dixon, Complex Systems Theorist, part 2</title>
        <itunes:title>026 - Guest: Thomas Homer-Dixon, Complex Systems Theorist, part 2</itunes:title>
        <link>https://aiandyou.net/e/026-guest-thomas-homer-dixon-complex-systems-theorist-part2/</link>
        <comments>https://aiandyou.net/e/026-guest-thomas-homer-dixon-complex-systems-theorist-part2/#comments</comments>
        <pubDate>Mon, 14 Dec 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7caa07d2-1364-340a-aaaf-71ffd48ccd73</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Exponential technology such as artificial intelligence causes the complexity of our world to explode. How do we address complexity as an existential risk in itself? Enter <a href='https://homerdixon.com/'>Dr. Thomas Homer-Dixon</a>, director of the <a href='https://cascadeinstitute.org/'>Cascade Institute</a> at <a href='https://www.royalroads.ca/'>Royal Roads University</a>, in British Columbia. He describes himself as a Complex Systems Theorist, and has a PhD from MIT in international relations, defense and arms control policy, and conflict theory.</p>
<p>His research has focused on threats to global security in the 21st century, including economic instability, climate change, and energy scarcity. He and the Cascade Institute study how people, organizations, and societies can better resolve their conflicts and innovate in response to complex problems.</p>
<p>In part 2 of our talk, we continue discussing his latest book, <a href='https://www.penguinrandomhouse.ca/books/221940/commanding-hope-by-thomas-homer-dixon/9780307363169'>Commanding Hope</a>, Systems Thinking, and how to identify mechanisms for shaping our collective beliefs and attitudes to produce more effective action on, say, climate change.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-026-1309.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Exponential technology such as artificial intelligence causes the complexity of our world to explode. How do we address complexity as an existential risk in itself? Enter <a href='https://homerdixon.com/'>Dr. Thomas Homer-Dixon</a>, director of the <a href='https://cascadeinstitute.org/'>Cascade Institute</a> at <a href='https://www.royalroads.ca/'>Royal Roads University</a>, in British Columbia. He describes himself as a Complex Systems Theorist, and has a PhD from MIT in international relations, defense and arms control policy, and conflict theory.</p>
<p>His research has focused on threats to global security in the 21st century, including economic instability, climate change, and energy scarcity. He and the Cascade Institute study how people, organizations, and societies can better resolve their conflicts and innovate in response to complex problems.</p>
<p>In part 2 of our talk, we continue discussing his latest book, <a href='https://www.penguinrandomhouse.ca/books/221940/commanding-hope-by-thomas-homer-dixon/9780307363169'>Commanding Hope</a>, Systems Thinking, and how to identify mechanisms for shaping our collective beliefs and attitudes to produce more effective action on, say, climate change.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-026-1309.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/qe7mc5/026_AI-and-You.mp3" length="83747592" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Exponential technology such as artificial intelligence causes the complexity of our world to explode. How do we address complexity as an existential risk in itself? Enter Dr. Thomas Homer-Dixon, director of the Cascade Institute at Royal Roads University, in British Columbia. He describes himself as a Complex Systems Theorist, and has a PhD from MIT in international relations, defense and arms control policy, and conflict theory.
His research has focused on threats to global security in the 21st century, including economic instability, climate change, and energy scarcity. He and the Cascade Institute study how people, organizations, and societies can better resolve their conflicts and innovate in response to complex problems.
In part 2 of our talk, we continue discussing his latest book, Commanding Hope, Systems Thinking, and how to identify mechanisms for shaping our collective beliefs and attitudes to produce more effective action on, say, climate change.
All that and our usual look at today's headlines in AI.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2093</itunes:duration>
                <itunes:episode>26</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>025 - Guest: Thomas Homer-Dixon, Complex Systems Theorist, part 1</title>
        <itunes:title>025 - Guest: Thomas Homer-Dixon, Complex Systems Theorist, part 1</itunes:title>
        <link>https://aiandyou.net/e/025-guest-thomas-homer-dixon-complex-systems-theorist-part-1/</link>
        <comments>https://aiandyou.net/e/025-guest-thomas-homer-dixon-complex-systems-theorist-part-1/#comments</comments>
        <pubDate>Mon, 07 Dec 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/36d9e7b9-9612-3556-9fe7-d8f50b5d4d49</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Exponential technology such as artificial intelligence causes the complexity of our world to explode. How do we address complexity as an existential risk in itself? Enter <a href='https://homerdixon.com/'>Dr. Thomas Homer-Dixon</a>, director of the <a href='https://cascadeinstitute.org/'>Cascade Institute</a> at <a href='https://www.royalroads.ca/'>Royal Roads University</a>, in British Columbia. He describes himself as a Complex Systems Theorist, and has a PhD from MIT in international relations, defense and arms control policy, and conflict theory.</p>
<p>His research has focused on threats to global security in the 21st century, including economic instability, climate change, and energy scarcity. He and the Cascade Institute study how people, organizations, and societies can better resolve their conflicts and innovate in response to complex problems.</p>
<p>In our talk, we discuss his latest book, <a href='https://www.penguinrandomhouse.ca/books/221940/commanding-hope-by-thomas-homer-dixon/9780307363169'>Commanding Hope</a>, and the role of hope in empowering us to tackle our biggest problems.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-025-5918.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Exponential technology such as artificial intelligence causes the complexity of our world to explode. How do we address complexity as an existential risk in itself? Enter <a href='https://homerdixon.com/'>Dr. Thomas Homer-Dixon</a>, director of the <a href='https://cascadeinstitute.org/'>Cascade Institute</a> at <a href='https://www.royalroads.ca/'>Royal Roads University</a>, in British Columbia. He describes himself as a Complex Systems Theorist, and has a PhD from MIT in international relations, defense and arms control policy, and conflict theory.</p>
<p>His research has focused on threats to global security in the 21st century, including economic instability, climate change, and energy scarcity. He and the Cascade Institute study how people, organizations, and societies can better resolve their conflicts and innovate in response to complex problems.</p>
<p>In our talk, we discuss his latest book, <a href='https://www.penguinrandomhouse.ca/books/221940/commanding-hope-by-thomas-homer-dixon/9780307363169'>Commanding Hope</a>, and the role of <em>hope </em>in empowering us to tackle our biggest problems.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-025-5918.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/237u9d/025_AI-and-You.mp3" length="60140664" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Exponential technology such as artificial intelligence causes the complexity of our world to explode. How do we address complexity as an existential risk in itself? Enter Dr. Thomas Homer-Dixon, director of the Cascade Institute at Royal Roads University, in British Columbia. He describes himself as a Complex Systems Theorist, and has a PhD from MIT in international relations, defense and arms control policy, and conflict theory.
His research has focused on threats to global security in the 21st century, including economic instability, climate change, and energy scarcity. He and the Cascade Institute study how people, organizations, and societies can better resolve their conflicts and innovate in response to complex problems.
In our talk, we discuss his latest book, Commanding Hope, and the role of hope in empowering us to tackle our biggest problems.
All that and our usual look at today's headlines in AI.
Transcript and URLs referenced at HumanCusp Blog.


 
 
 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1503</itunes:duration>
                <itunes:episode>25</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>024 - The Biggest Question About AGI</title>
        <itunes:title>024 - The Biggest Question About AGI</itunes:title>
        <link>https://aiandyou.net/e/024-the-biggest-question-about-agi/</link>
        <comments>https://aiandyou.net/e/024-the-biggest-question-about-agi/#comments</comments>
        <pubDate>Mon, 30 Nov 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/75561bbd-f2b6-3dce-b9b3-c7746b78ff9a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We tackle the most important question about Artificial General Intelligence: When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I describe how this question was raised in a presentation and what it means to me and to all of us.</p>
<p>We might not be able to get a date, but we'll explore why it's such a hard question and see what useful questions we can get out of it.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-024-0680.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>We tackle the most important question about Artificial General Intelligence: <em>When Will It Happen?</em> Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I describe how this question was raised in a presentation and what it means to me and to all of us.</p>
<p>We might not be able to get a date, but we'll explore why it's such a hard question and see what useful questions we can get out of it.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-024-0680.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/sdq2nr/024_AI-and-You.mp3" length="26139329" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

We tackle the most important question about Artificial General Intelligence: When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I describe how this question was raised in a presentation and what it means to me and to all of us.
We might not be able to get a date, but we'll explore why it's such a hard question and see what useful questions we can get out of it.
All that and our usual look at today's headlines in AI.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1653</itunes:duration>
                <itunes:episode>24</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>023 - Guest: Pamela McCorduck, AI Historian, part 2</title>
        <itunes:title>023 - Guest: Pamela McCorduck, AI Historian, part 2</itunes:title>
        <link>https://aiandyou.net/e/023-guest-pamela-mccorduck-ai-historian-part-2/</link>
        <comments>https://aiandyou.net/e/023-guest-pamela-mccorduck-ai-historian-part-2/#comments</comments>
        <pubDate>Mon, 23 Nov 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/35758f45-62d1-39ce-bca9-fbebba14a7ba</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. <a href='https://www.amazon.com/Machines-Who-Think-Artificial-Intelligence/dp/1138440191'>Machines Who Think</a> (1979, 2003) and her recent sequel <a href='https://www.amazon.com/This-Could-Important-Artificial-Intelligentsia-ebook/dp/B08LTK2BC2'>This Could Be Important</a> (2019) help us understand the who, what, and why of where AI has come from.</p>
<p>In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow's "Two Cultures", and the interaction between AI and the humanities, along with more tales of its founding fathers.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-023-5021.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part <em>raconteuse</em>, her books romp through the history and characters of AI as both authoritative record and <em>belles-lettres</em>. <a href='https://www.amazon.com/Machines-Who-Think-Artificial-Intelligence/dp/1138440191'><em>Machines Who Think</em></a> (1979, 2003) and her recent sequel <a href='https://www.amazon.com/This-Could-Important-Artificial-Intelligentsia-ebook/dp/B08LTK2BC2'><em>This Could Be Important</em></a> (2019) help us understand the who, what, and why of where AI has come from.</p>
<p>In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow's "Two Cultures", and the interaction between AI and the humanities, along with more tales of its founding fathers.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-023-5021.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/h3vhbc/023_AI-and-You.mp3" length="29131701" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.
In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow's "Two Cultures", and the interaction between AI and the humanities, along with more tales of its founding fathers.
All that and our usual look at today's headlines in AI.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2043</itunes:duration>
                <itunes:episode>23</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>022 - Guest: Pamela McCorduck, AI Historian</title>
        <itunes:title>022 - Guest: Pamela McCorduck, AI Historian</itunes:title>
        <link>https://aiandyou.net/e/022-guest-pamela-mccorduck-ai-historian/</link>
        <comments>https://aiandyou.net/e/022-guest-pamela-mccorduck-ai-historian/#comments</comments>
        <pubDate>Mon, 16 Nov 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/79fa8382-0823-3911-af07-8a7ae35aa928</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.</p>
<p>In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-022-4679.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part <em>raconteuse</em>, her books romp through the history and characters of AI as both authoritative record and <em>belles-lettres</em>. <em>Machines Who Think</em> (1979, 2003) and her recent sequel <em>This Could Be Important</em> (2019) help us understand the who, what, and why of where AI has come from.</p>
<p>In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.</p>
<p>All that and our usual look at today's headlines in AI.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/11/transcript-022-4679.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/yt8wxw/022_AI-and-You.mp3" length="30963823" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.
In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.
All that and our usual look at today's headlines in AI.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2183</itunes:duration>
                <itunes:episode>22</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>021 - Guest: David Wood, Futurist, part 2</title>
        <itunes:title>021 - Guest: David Wood, Futurist, part 2</itunes:title>
        <link>https://aiandyou.net/e/021-guest-david-wood-futurist-part-2/</link>
        <comments>https://aiandyou.net/e/021-guest-david-wood-futurist-part-2/#comments</comments>
        <pubDate>Mon, 09 Nov 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/45970ee0-8713-3eed-91c5-ff291ff676c6</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the <a href='https://londonfuturists.com'>London Futurists</a>, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.</p>
<p>In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-021-0978.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the <a href='https://londonfuturists.com'>London Futurists</a>, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including <em>Smartphones for All</em>, <em>The Abolition of Aging</em>, <em>Transcending Politics</em>, and <em>Sustainable Superabundance</em>.</p>
<p>In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-021-0978.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/t7522s/021_AI-and-You.mp3" length="39551908" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.
In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2658</itunes:duration>
                <itunes:episode>21</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>020 - Guest: David Wood, Futurist</title>
        <itunes:title>020 - Guest: David Wood, Futurist</itunes:title>
        <link>https://aiandyou.net/e/020-guest-david-wood-futurist/</link>
        <comments>https://aiandyou.net/e/020-guest-david-wood-futurist/#comments</comments>
        <pubDate>Mon, 02 Nov 2020 10:00:00 -0800</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/a7eaa71e-a1a2-30eb-be15-eda13caca229</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the <a href='https://londonfuturists.com'>London Futurists</a>, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.</p>
<p>In part 1 of our interview, we talk about David's singularitarian philosophy, the evolution and impact of Deep Learning, and his SingularityNET infrastructure for AI interoperation.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-020-8117.pdf'>HumanCusp Blog</a>.</p>


]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>https://aiandyou.net/</a> .</p>
<p> </p>

<p>How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the <a href='https://londonfuturists.com'>London Futurists</a>, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including <em>Smartphones for All</em>, <em>The Abolition of Aging</em>, <em>Transcending Politics</em>, and <em>Sustainable Superabundance</em>.</p>
<p>In part 1 of our interview, we talk about David's singularitarian philosophy, the evolution and impact of Deep Learning, and his SingularityNET infrastructure for AI interoperation.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-020-8117.pdf'>HumanCusp Blog</a>.</p>


]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/s64w3j/020_AI-and-You.mp3" length="30928031" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: https://aiandyou.net/ .
 

How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.
In part 1 of our interview, we talk about David's singularitarian philosophy, the evolution and impact of Deep Learning, and his SingularityNET infrastructure for AI interoperation.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2088</itunes:duration>
                <itunes:episode>20</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>019 - Guest: Tony Czarnecki, Futurist, part 2</title>
        <itunes:title>019 - Guest: Tony Czarnecki, Futurist, part 2</itunes:title>
        <link>https://aiandyou.net/e/019-guest-tony-czarnecki-futurist-part-2/</link>
                    <comments>https://aiandyou.net/e/019-guest-tony-czarnecki-futurist-part-2/#comments</comments>        <pubDate>Mon, 26 Oct 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f576c01f-91ad-3928-a534-ad477e2cd443</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Tony Czarnecki is a futurist and a Managing Partner of <a href='https://sustensis.co.uk/'>Sustensis</a>, London – a Think Tank for inspirations for Humanity's transition to coexistence with superintelligence – the final form of an ever faster and more intelligent, self-learning and perhaps even conscious Artificial Intelligence. Tony is the author of several books on the subject of Superintelligence.</p>
<p>In this concluding part of our interview, we talk about his latest book in his Posthumans series, "Becoming a Butterfly," and European and global economic structures for sustaining technological evolution.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-019-8935.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Tony Czarnecki is a futurist and a Managing Partner of <a href='https://sustensis.co.uk/'>Sustensis</a>, London – a Think Tank for inspirations for Humanity's transition to coexistence with superintelligence – the final form of an ever faster and more intelligent, self-learning and perhaps even conscious Artificial Intelligence. Tony is the author of several books on the subject of Superintelligence.</p>
<p>In this concluding part of our interview, we talk about his latest book in his Posthumans series, "Becoming a Butterfly," and European and global economic structures for sustaining technological evolution.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-019-8935.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/vg3paq/019_AI-and-You.mp3" length="24389203" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

Tony Czarnecki is a futurist and a Managing Partner of Sustensis, London – a Think Tank for inspirations for Humanity's transition to coexistence with superintelligence – the final form of an ever faster and more intelligent, self-learning and perhaps even conscious Artificial Intelligence. Tony is the author of several books on the subject of Superintelligence.
In this concluding part of our interview, we talk about his latest book in his Posthumans series, "Becoming a Butterfly," and European and global economic structures for sustaining technological evolution.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1794</itunes:duration>
                <itunes:episode>19</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>018 - Guest: Tony Czarnecki, Futurist</title>
        <itunes:title>018 - Guest: Tony Czarnecki, Futurist</itunes:title>
        <link>https://aiandyou.net/e/018-guest-tony-czarnecki-futurist/</link>
                    <comments>https://aiandyou.net/e/018-guest-tony-czarnecki-futurist/#comments</comments>        <pubDate>Mon, 19 Oct 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8e8c926d-9169-34b2-be8b-6f1edfd303a4</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Tony Czarnecki is a futurist and a Managing Partner of <a href='https://sustensis.co.uk/'>Sustensis</a>, London – a Think Tank for inspirations for Humanity's transition to coexistence with superintelligence – the final form of an ever faster and more intelligent, self-learning and perhaps even conscious Artificial Intelligence. Tony is the author of several books on the subject of Superintelligence. </p>
<p>In this first part of our interview, we talk about his latest book in his Posthumans series, "Becoming a Butterfly," and discuss the pandemic and what's next.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-018-9565.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Tony Czarnecki is a futurist and a Managing Partner of <a href='https://sustensis.co.uk/'>Sustensis</a>, London – a Think Tank for inspirations for Humanity's transition to coexistence with superintelligence – the final form of an ever faster and more intelligent, self-learning and perhaps even conscious Artificial Intelligence. Tony is the author of several books on the subject of Superintelligence. </p>
<p>In this first part of our interview, we talk about his latest book in his Posthumans series, "Becoming a Butterfly," and discuss the pandemic and what's next.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-018-9565.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/upprgh/018_AI-and-You.mp3" length="30165225" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

Tony Czarnecki is a futurist and a Managing Partner of Sustensis, London – a Think Tank for inspirations for Humanity's transition to coexistence with superintelligence – the final form of an ever faster and more intelligent, self-learning and perhaps even conscious Artificial Intelligence. Tony is the author of several books on the subject of Superintelligence. 
In this first part of our interview, we talk about his latest book in his Posthumans series, "Becoming a Butterfly," and discuss the pandemic and what's next.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2194</itunes:duration>
                <itunes:episode>18</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>017 - Guest: Roman Yampolskiy, Professor of AI Safety, part 2</title>
        <itunes:title>017 - Guest: Roman Yampolskiy, Professor of AI Safety, part 2</itunes:title>
        <link>https://aiandyou.net/e/017-guest-roman-yampolskiy-professor-of-ai-safety-part-2/</link>
                    <comments>https://aiandyou.net/e/017-guest-roman-yampolskiy-professor-of-ai-safety-part-2/#comments</comments>        <pubDate>Mon, 12 Oct 2020 16:19:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/3c00dcfe-755b-351c-82cb-591e75b91d2a</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it look like to be on the front lines of academic research into making future AI safe? It looks like <a href='http://cecs.louisville.edu/ry/'>Roman Yampolskiy</a>, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.</p>
<p>In this second part of our interview, we talk about his <a href='https://philpapers.org/archive/YAMOCO.pdf'>latest paper</a>: a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-017-3892.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it look like to be on the front lines of academic research into making future AI safe? It looks like <a href='http://cecs.louisville.edu/ry/'>Roman Yampolskiy</a>, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of <em>AI Safety</em>. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.</p>
<p>In this second part of our interview, we talk about his <a href='https://philpapers.org/archive/YAMOCO.pdf'>latest paper</a>: a comprehensive analysis of the <em>Control Problem</em>, the central issue of AI safety: How do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/10/transcript-017-3892.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/892z4a/017_AI-and-You.mp3" length="26138964" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.
In this second part of our interview, we talk about his latest paper: a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1790</itunes:duration>
                <itunes:episode>17</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>016 - Guest: Roman Yampolskiy, Professor of AI Safety</title>
        <itunes:title>016 - Guest: Roman Yampolskiy, Professor of AI Safety</itunes:title>
        <link>https://aiandyou.net/e/016-guest-roman-yampolskiy-professor-of-ai-safety/</link>
                    <comments>https://aiandyou.net/e/016-guest-roman-yampolskiy-professor-of-ai-safety/#comments</comments>        <pubDate>Mon, 05 Oct 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/e3b09835-1fdd-3084-a925-1c8c727d110b</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it look like to be on the front lines of academic research into making future AI safe? It looks like <a href='http://cecs.louisville.edu/ry/'>Roman Yampolskiy</a>, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.</p>
<p>In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control?</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-016-8884.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>What does it look like to be on the front lines of academic research into making future AI safe? It looks like <a href='http://cecs.louisville.edu/ry/'>Roman Yampolskiy</a>, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of <em>AI Safety</em>. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.</p>
<p>In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the <em>Control Problem</em>, the central issue of AI safety: How do we ensure future AI remains under our control?</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-016-8884.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/wygewi/016_AI-and-You.mp3" length="36464311" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.
In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control?
All this and our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2476</itunes:duration>
                <itunes:episode>16</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>015 - Guest: Karina Vold, Professor of Philosophy, part 2</title>
        <itunes:title>015 - Guest: Karina Vold, Professor of Philosophy, part 2</itunes:title>
        <link>https://aiandyou.net/e/015-guest-karina-vold-professor-of-philosophy-part-2/</link>
                    <comments>https://aiandyou.net/e/015-guest-karina-vold-professor-of-philosophy-part-2/#comments</comments>        <pubDate>Mon, 28 Sep 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ff430251-e709-3ded-82ac-63b3d26339ab</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher's perspective. In the second half of this interview we learn about value alignment, the Trolley Problem, and just what those institutes do about AI. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-015-1955.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher's perspective. In the second half of this interview we learn about value alignment, the Trolley Problem, and just what those institutes <em>do</em> about AI. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-015-1955.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/gmh4dn/015_AI-and-You.mp3" length="41484483" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher's perspective. In the second half of this interview we learn about value alignment, the Trolley Problem, and just what those institutes do about AI. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2543</itunes:duration>
                <itunes:episode>15</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>014 - Guest: Karina Vold, Professor of Philosophy</title>
        <itunes:title>014 - Guest: Karina Vold, Professor of Philosophy</itunes:title>
        <link>https://aiandyou.net/e/014-guest-karina-vold-professor-of-philosophy/</link>
                    <comments>https://aiandyou.net/e/014-guest-karina-vold-professor-of-philosophy/#comments</comments>        <pubDate>Mon, 21 Sep 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/bd3431a2-9dc9-34d4-a397-02667e92b11f</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher's perspective. In this interview we learn about the Philosophy of Mind, the Extended Mind Hypothesis, and find out who Otto and Inga are. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-014-0001.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher's perspective. In this interview we learn about the Philosophy of Mind, the Extended Mind Hypothesis, and find out who <em>Otto</em> and <em>Inga</em> are. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-014-0001.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/89guuc/014_AI-and-You.mp3" length="41920865" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher's perspective. In this interview we learn about the Philosophy of Mind, the Extended Mind Hypothesis, and find out who Otto and Inga are. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.
All this and our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1856</itunes:duration>
                <itunes:episode>14</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>013 - Guest: Paolo Pirjanian, Embodied Robotics CEO, part 2</title>
        <itunes:title>013 - Guest: Paolo Pirjanian, Embodied Robotics CEO, part 2</itunes:title>
        <link>https://aiandyou.net/e/013-guest-paolo-pirjanian-embodied-robotics-ceo-part-2/</link>
                    <comments>https://aiandyou.net/e/013-guest-paolo-pirjanian-embodied-robotics-ceo-part-2/#comments</comments>        <pubDate>Mon, 14 Sep 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/5afaa18f-514e-31b1-b61f-3f637064c309</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Have you seen a robot help a troubled child? This week's guest makes one. This is part 2 of the interview with Paolo Pirjanian, who is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded <a href='https://embodied.com/'>Embodied, Inc.</a> with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives. We will learn more about how Moxie the robot works, what it can do, and Paolo's plans for future robots.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-013-0144.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Have you seen a robot help a troubled child? This week's guest makes one. This is part 2 of the interview with Paolo Pirjanian, who is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded <a href='https://embodied.com/'>Embodied, Inc.</a> with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives. We will learn more about how Moxie the robot works, what it can do, and Paolo's plans for future robots.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-013-0144.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/6svzqn/013_AI-and-You.mp3" length="28882251" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

Have you seen a robot help a troubled child? This week's guest makes one. This is part 2 of the interview with Paolo Pirjanian, who is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded Embodied, Inc. with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives. We will learn more about how Moxie the robot works, what it can do, and Paolo's plans for future robots.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2094</itunes:duration>
                <itunes:episode>13</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>012 - Guest: Paolo Pirjanian, Embodied Robotics CEO</title>
        <itunes:title>012 - Guest: Paolo Pirjanian, Embodied Robotics CEO</itunes:title>
        <link>https://aiandyou.net/e/012-guest-paolo-pirjanian-embodied-robotics-ceo/</link>
                    <comments>https://aiandyou.net/e/012-guest-paolo-pirjanian-embodied-robotics-ceo/#comments</comments>        <pubDate>Mon, 07 Sep 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/c665acc1-b88d-3af0-9d85-d6d5b777eb01</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Have you seen a robot help a troubled child? This week's guest makes one. Paolo Pirjanian is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded <a href='https://embodied.com/'>Embodied, Inc.</a> with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives.</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-012-3083.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>Have you seen a robot help a troubled child? This week's guest makes one. Paolo Pirjanian is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded <a href='https://embodied.com/'>Embodied, Inc.</a> with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives.</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/09/transcript-012-3083.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/h3y3vi/012_AI-and-You.mp3" length="56677793" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

Have you seen a robot help a troubled child? This week's guest makes one. Paolo Pirjanian is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded Embodied, Inc. with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives.
All this and our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3131</itunes:duration>
                <itunes:episode>12</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>011 - Guest: Kristóf Kovács, Mensa Psychologist, part 2</title>
        <itunes:title>011 - Guest: Kristóf Kovács, Mensa Psychologist, part 2</itunes:title>
        <link>https://aiandyou.net/e/011-guest-kristof-kovacs-mensa-psychologist-part-2/</link>
                    <comments>https://aiandyou.net/e/011-guest-kristof-kovacs-mensa-psychologist-part-2/#comments</comments>        <pubDate>Mon, 31 Aug 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/2d679789-394f-31d0-a26f-d049a7d06f26</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.</p>
<p>Most people are content to define 'intelligence' as 'that which an IQ score measures', but what if it's your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode, when we also talk about what IQ tests are measuring and how to interpret them, what Mensa does, and Kristóf's research into the g-factor of intelligence.</p>
<p>All this and our usual look at today's AI headlines and a description of my upcoming continuing studies course at the University of Victoria on the same theme as this podcast, which is open to online enrollment from all over the world; register at <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a>. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-011-4414.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.</p>
<p>Most people are content to define 'intelligence' as 'that which an IQ score measures', but what if it's your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode, when we also talk about what IQ tests are measuring and how to interpret them, what Mensa does, and Kristóf's research into the <em>g</em>-factor of intelligence.</p>
<p>All this and our usual look at today's AI headlines and a description of my upcoming continuing studies course at the University of Victoria on the same theme as this podcast, which is open to online enrollment from all over the world; register at <a href='https://bit.ly/UVicAIandYou'>https://bit.ly/UVicAIandYou</a>. </p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-011-4414.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/bx97y2/011_AI-and-You.mp3" length="50218015" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.
Most people are content to define 'intelligence' as 'that which an IQ score measures', but what if it's your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode, when we also talk about what IQ tests are measuring and how to interpret them, what Mensa does, and Kristóf's research into the g-factor of intelligence.
All this and our usual look at today's AI headlines and a description of my upcoming continuing studies course at the University of Victoria on the same theme as this podcast, which is open to online enrollment from all over the world; register at https://bit.ly/UVicAIandYou. 
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2750</itunes:duration>
                <itunes:episode>11</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>010 - Guest: Kristóf Kovács, Mensa Psychologist</title>
        <itunes:title>010 - Guest: Kristóf Kovács, Mensa Psychologist</itunes:title>
        <link>https://aiandyou.net/e/010-guest-kristof-kovacs-mensa-psychologist/</link>
                    <comments>https://aiandyou.net/e/010-guest-kristof-kovacs-mensa-psychologist/#comments</comments>        <pubDate>Mon, 24 Aug 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/855b7110-65a5-3ede-93dd-38fc5a64dc4b</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.</p>
<p>Most people are content to define 'intelligence' as 'that which an IQ score measures', but what if it's your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode!</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-010-2845.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.</p>
<p>Most people are content to define 'intelligence' as 'that which an IQ score measures', but what if it's your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode!</p>
<p>All this and our usual look at today's AI headlines.</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-010-2845.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/eaiyc5/010_AI-and-You.mp3" length="40730906" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.
Most people are content to define 'intelligence' as 'that which an IQ score measures', but what if it's your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode!
All this and our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2211</itunes:duration>
                <itunes:episode>10</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>009 - Guest: Richard Foster-Fletcher, futurist, part 2</title>
        <itunes:title>009 - Guest: Richard Foster-Fletcher, futurist, part 2</itunes:title>
        <link>https://aiandyou.net/e/title-1596930992/</link>
                    <comments>https://aiandyou.net/e/title-1596930992/#comments</comments>        <pubDate>Mon, 17 Aug 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/8cfb8db5-41dd-3e1d-b703-21d446c11301</guid>
                                    <description><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>What happens when two futurists talk with each other? You get this episode, with guest Richard Foster-Fletcher, who is doing amazing work in the UK building learning communities such as MKAI (Milton Keynes AI). He is a graduate of the MIT Artificial Intelligence Strategy Course and the host of the ‘Boundless: Designing Our Digital Future’ podcast, of which I was the very first guest.</p>
<p>In this conclusion of our interview, we talk about how the cultures of organizations facing disruption should change, and some of Richard's experiences in creating forums for debating AI policies.</p>
<p>All this and some of today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-009-3475.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This and all episodes at: <a href='http://aiandyou.net/'>http://aiandyou.net/</a> .</p>
<p> </p>

<p>What happens when two futurists talk with each other? You get this episode, with guest Richard Foster-Fletcher, who is doing amazing work in the UK building learning communities such as MKAI (Milton Keynes AI). He is a graduate of the MIT Artificial Intelligence Strategy Course and the host of the ‘Boundless: Designing Our Digital Future’ podcast, of which I was the very first guest.</p>
<p>In this conclusion of our interview, we talk about how the cultures of organizations facing disruption should change, and some of Richard's experiences in creating forums for debating AI policies.</p>
<p>All this and some of today's AI headlines!</p>
<p>Transcript and URLs referenced at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-009-3475.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/f763zc/009_AI-and-You.mp3" length="37459633" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This and all episodes at: http://aiandyou.net/ .
 

What happens when two futurists talk with each other? You get this episode, with guest Richard Foster-Fletcher, who is doing amazing work in the UK building learning communities such as MKAI (Milton Keynes AI). He is a graduate of the MIT Artificial Intelligence Strategy Course and the host of the ‘Boundless: Designing Our Digital Future’ podcast, of which I was the very first guest.
In this conclusion of our interview, we talk about how the cultures of organizations facing disruption should change, and some of Richard's experiences in creating forums for debating AI policies.
All this and some of today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2104</itunes:duration>
                <itunes:episode>9</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>008 - Guest: Richard Foster-Fletcher, futurist</title>
        <itunes:title>008 - Guest: Richard Foster-Fletcher, futurist</itunes:title>
        <link>https://aiandyou.net/e/008-guest-richard-foster-fletcher-futurist/</link>
                    <comments>https://aiandyou.net/e/008-guest-richard-foster-fletcher-futurist/#comments</comments>        <pubDate>Mon, 10 Aug 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/f4180f81-acaf-3a2a-91b9-b71f5ba8bf9a</guid>
                                    <description><![CDATA[<p> </p>

<p>What happens when two futurists talk with each other? You get this episode, with guest Richard Foster-Fletcher, who is doing amazing work in the UK building learning communities such as MKAI (Milton Keynes AI). He is a graduate of the MIT Artificial Intelligence Strategy Course and the host of the ‘Boundless: Designing Our Digital Future’ podcast, of which I was the very first guest.</p>
<p>We reach some heady heights talking about the eventual future of humans in symbiosis with AI, even touching on the Kardashev scale! And yet we also brought it back to the here and now with discussions about what AI learning communities are doing right now to help people understand AI more productively.</p>
<p>All that and our first look at GPT-3!</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-008-2382.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p> </p>

<p>What happens when two futurists talk with each other? You get this episode, with guest Richard Foster-Fletcher, who is doing amazing work in the UK building learning communities such as MKAI (Milton Keynes AI). He is a graduate of the MIT Artificial Intelligence Strategy Course and the host of the ‘Boundless: Designing Our Digital Future’ podcast, of which I was the very first guest.</p>
<p>We reach some heady heights talking about the eventual future of humans in symbiosis with AI, even touching on the Kardashev scale! And yet we also brought it back to the here and now with discussions about what AI learning communities are doing right now to help people understand AI more productively.</p>
<p>All that and our first look at GPT-3!</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/08/transcript-008-2382.pdf'>HumanCusp Blog</a>.</p>


<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/64u6i3/008_ai-and-you.mp3" length="58448787" type="audio/mpeg"/>
        <itunes:summary><![CDATA[ 

What happens when two futurists talk with each other? You get this episode, with guest Richard Foster-Fletcher, who is doing amazing work in the UK building learning communities such as MKAI (Milton Keynes AI). He is a graduate of the MIT Artificial Intelligence Strategy Course and the host of the ‘Boundless: Designing Our Digital Future’ podcast, of which I was the very first guest.
We reach some heady heights talking about the eventual future of humans in symbiosis with AI, even touching on the Kardashev scale! And yet we also brought it back to the here and now with discussions about what AI learning communities are doing right now to help people understand AI more productively.
All that and our first look at GPT-3!
Transcript at HumanCusp Blog.


 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2329</itunes:duration>
                <itunes:episode>8</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>007 - Guest: Dr. Ryan D'Arcy, neuroscientist, part 2</title>
        <itunes:title>007 - Guest: Dr. Ryan D'Arcy, neuroscientist, part 2</itunes:title>
        <link>https://aiandyou.net/e/007-guest-dr-ryan-darcy-neuroscientist-part-2/</link>
                    <comments>https://aiandyou.net/e/007-guest-dr-ryan-darcy-neuroscientist-part-2/#comments</comments>        <pubDate>Mon, 03 Aug 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/19d0d971-a755-37b2-964d-74b4b51ce25c</guid>
                                    <description><![CDATA[<p> </p>
This show's amazing guest has seen naked brains... while they were still in use. This episode concludes the interview with Dr. Ryan D'Arcy. He is a neuroscientist, pictured here with one of his tools, an MRI machine. He co-founded <a href='https://healthtechconnex.com/'>HealthTech Connex</a> Inc., and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. We met on the TEDx stage in Surrey BC, where he was speaking about how he helped Trevor Greene, a soldier who had his head split by an axe while serving in Afghanistan.
<p>See his TEDx talk <a href='https://www.youtube.com/watch?v=-4QaroclkMs'>here</a>.</p>
<p>In this episode, we talk about models of the structure of the brain, the role of pattern recognition, brain-machine interfaces by companies such as <a href='https://www.neuralink.com/'>Neuralink, </a>and how they might - or should - work. All that and of course our usual look at a headline or two about AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-007-0661.pdf'>HumanCusp Blog</a>.</p>

 
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p> </p>
This show's amazing guest has seen <em>naked brains</em>... while they were still in use. This episode concludes the interview with Dr. Ryan D'Arcy. He is a neuroscientist, pictured here with one of his tools, an MRI machine. He co-founded <a href='https://healthtechconnex.com/'>HealthTech Connex</a> Inc., and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. We met on the TEDx stage in Surrey BC, where he was speaking about how he helped Trevor Greene, a soldier who had his head split by an axe while serving in Afghanistan.<br>
<p>See his TEDx talk <a href='https://www.youtube.com/watch?v=-4QaroclkMs'>here</a>.</p>
<p>In this episode, we talk about models of the structure of the brain, the role of pattern recognition, brain-machine interfaces by companies such as <a href='https://www.neuralink.com/'>Neuralink, </a>and how they might - or should - work. All that and of course our usual look at a headline or two about AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-007-0661.pdf'>HumanCusp Blog</a>.</p>

 
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/nsygnh/007_ai-and-you.mp3" length="41352527" type="audio/mpeg"/>
        <itunes:summary><![CDATA[ 
This show's amazing guest has seen naked brains... while they were still in use. This episode concludes the interview with Dr. Ryan D'Arcy. He is a neuroscientist, pictured here with one of his tools, an MRI machine. He co-founded HealthTech Connex Inc., and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. We met on the TEDx stage in Surrey BC, where he was speaking about how he helped Trevor Greene, a soldier who had his head split by an axe while serving in Afghanistan. See his TEDx talk here.
In this episode, we talk about models of the structure of the brain, the role of pattern recognition, brain-machine interfaces by companies such as Neuralink, and how they might - or should - work. All that and of course our usual look at a headline or two about AI.
Transcript at HumanCusp Blog.

 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2156</itunes:duration>
                <itunes:episode>7</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>006 - Guest: Dr. Ryan D'Arcy, neuroscientist</title>
        <itunes:title>006 - Guest: Dr. Ryan D'Arcy, neuroscientist</itunes:title>
        <link>https://aiandyou.net/e/006-guest-dr-ryan-darcy-neuroscientist/</link>
                    <comments>https://aiandyou.net/e/006-guest-dr-ryan-darcy-neuroscientist/#comments</comments>        <pubDate>Mon, 27 Jul 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/7fd7ac9f-18b8-5ae3-bc6f-cddc832f4a32</guid>
                                    <description><![CDATA[This show's amazing guest has seen naked brains... while they were still in use. Dr. Ryan D'Arcy is a neuroscientist, pictured here with one of his tools, an MRI machine. He co-founded HealthTech Connex Inc., and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. We met on the TEDx stage in Surrey BC, where he was speaking about how he helped Trevor Greene, a soldier who had his head split by an axe while serving in Afghanistan.
<p>See his TEDx talk <a href='https://www.youtube.com/watch?v=-4QaroclkMs'>here</a>.</p>
<p>In this episode, we talk about why he loves to fix broken brains, and what's going on in our brains, with brainwaves, neurons, and - just what is a thought, anyhow? If you want to learn how to harness neuroplasticity to improve your own brain, Ryan tells you how in this episode. All that and of course our usual look at a headline about AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-006-8812.pdf'>HumanCusp Blog</a>.</p>

 
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[This show's amazing guest has seen <em>naked brains</em>... while they were still in use. Dr. Ryan D'Arcy is a neuroscientist, pictured here with one of his tools, an MRI machine. He co-founded HealthTech Connex Inc., and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. We met on the TEDx stage in Surrey BC, where he was speaking about how he helped Trevor Greene, a soldier who had his head split by an axe while serving in Afghanistan.<br>
<p>See his TEDx talk <a href='https://www.youtube.com/watch?v=-4QaroclkMs'>here</a>.</p>
<p>In this episode, we talk about why he loves to fix broken brains, and what's going on in our brains, with brainwaves, neurons, and - just what is a thought, anyhow? If you want to learn how to harness neuroplasticity to improve your own brain, Ryan tells you how in this episode. All that and of course our usual look at a headline about AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-006-8812.pdf'>HumanCusp Blog</a>.</p>

 
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ffybv2/006_AI-and-You.mp3" length="37369496" type="audio/mpeg"/>
        <itunes:summary><![CDATA[This show's amazing guest has seen naked brains... while they were still in use. Dr. Ryan D'Arcy is a neuroscientist, pictured here with one of his tools, an MRI machine. He co-founded HealthTech Connex Inc., and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. We met on the TEDx stage in Surrey BC, where he was speaking about how he helped Trevor Greene, a soldier who had his head split by an axe while serving in Afghanistan.See his TEDx talk here.
In this episode, we talk about why he loves to fix broken brains, and what's going on in our brains, with brainwaves, neurons, and - just what is a thought, anyhow? If you want to learn how to harness neuroplasticity to improve your own brain, Ryan tells you how in this episode. All that and of course our usual look at a headline about AI.
Transcript at HumanCusp Blog.

 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1949</itunes:duration>
                <itunes:episode>6</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>005 - Guests: Judith and Garfield Reeves-Stevens, science fiction authors, part 2</title>
        <itunes:title>005 - Guests: Judith and Garfield Reeves-Stevens, science fiction authors, part 2</itunes:title>
        <link>https://aiandyou.net/e/005-guests-judith-and-garfield-reeves-stevens-science-fiction-authors-part-2/</link>
                    <comments>https://aiandyou.net/e/005-guests-judith-and-garfield-reeves-stevens-science-fiction-authors-part-2/#comments</comments>        <pubDate>Mon, 20 Jul 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/ec308b75-2279-5a06-8fa2-f931a1fad5ca</guid>
                                    <description><![CDATA[
<p>Disruption happens when the future arrives faster than we anticipated. We've had plenty of that lately, but as far as the disruption from artificial intelligence is concerned, it's only warming up. Sink, swim, or surf: how do you want to handle the waves of change AI is bringing?</p>
<p>To help us stretch our imagination, this episode concludes the interview with our mind-blowing guests Judith and Garfield Reeves-Stevens, science fiction authors with numerous credits in science fiction and Hollywood for over thirty years, particularly in the Star Trek universe, writing novels such as Memory Prime and Prime Directive and with writer and producer credits on the series Star Trek Enterprise. They’ve helped NASA with visioning future goals through a Space Policy workshop, they’ve helped Disney design theme park rides, and they won the Constellation award for creating the series Primeval: New World. Catch their <a href='https://www.amazon.com/Garfield-Reeves-Stevens/e/B000AQ2V9C'>author page on Amazon</a>.</p>
<p>In this episode, we talk about their careers, how to use science fiction to become more creative, possible and likely evolutions of AI and humanity, and implications of brain-machine interfaces.  All that and our customary look at today's headlines in AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-004-9025.pdf'>HumanCusp Blog</a>.</p>

 
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[
<p><em>Disruption</em> happens when the future arrives faster than we anticipated. We've had plenty of that lately, but as far as the disruption from artificial intelligence is concerned, it's only warming up. Sink, swim, or surf: how do you want to handle the waves of change AI is bringing?</p>
<p>To help us stretch our imagination, this episode concludes the interview with our mind-blowing guests Judith and Garfield Reeves-Stevens, science fiction authors with numerous credits in science fiction and Hollywood for over thirty years, particularly in the Star Trek universe, writing novels such as <em>Memory Prime</em> and <em>Prime Directive</em> and with writer and producer credits on the series <em>Star Trek Enterprise</em>. They’ve helped NASA with visioning future goals through a Space Policy workshop, they’ve helped Disney design theme park rides, and they won the Constellation award for creating the series <em>Primeval: New World</em>. Catch their <a href='https://www.amazon.com/Garfield-Reeves-Stevens/e/B000AQ2V9C'>author page on Amazon</a>.</p>
<p>In this episode, we talk about their careers, how to use science fiction to become more creative, possible and likely evolutions of AI and humanity, and implications of brain-machine interfaces.  All that and our customary look at today's headlines in AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-004-9025.pdf'>HumanCusp Blog</a>.</p>

 
<p> </p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/id6oki/005_AI-and-You.mp3" length="48160796" type="audio/mpeg"/>
        <itunes:summary><![CDATA[
Disruption happens when the future arrives faster than we anticipated. We've had plenty of that lately, but as far as the disruption from artificial intelligence is concerned, it's only warming up. Sink, swim, or surf: how do you want to handle the waves of change AI is bringing?
To help us stretch our imagination, this episode concludes the interview with our mind-blowing guests Judith and Garfield Reeves-Stevens, science fiction authors with numerous credits in science fiction and Hollywood for over thirty years, particularly in the Star Trek universe, writing novels such as Memory Prime and Prime Directive and with writer and producer credits on the series Star Trek Enterprise. They’ve helped NASA with visioning future goals through a Space Policy workshop, they’ve helped Disney design theme park rides, and they won the Constellation award for creating the series Primeval: New World. Catch their author page on Amazon.
In this episode, we talk about their careers, how to use science fiction to become more creative, possible and likely evolutions of AI and humanity, and implications of brain-machine interfaces.  All that and our customary look at today's headlines in AI.
Transcript at HumanCusp Blog.

 
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2503</itunes:duration>
                <itunes:episode>5</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>004 - Guests: Judith and Garfield Reeves-Stevens, science fiction authors</title>
        <itunes:title>004 - Guests: Judith and Garfield Reeves-Stevens, science fiction authors</itunes:title>
        <link>https://aiandyou.net/e/004-guests-judith-and-garfield-reeves-stevens-science-fiction-authors/</link>
                    <comments>https://aiandyou.net/e/004-guests-judith-and-garfield-reeves-stevens-science-fiction-authors/#comments</comments>        <pubDate>Mon, 13 Jul 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/b5c5fa8b-da5d-57fe-b6fe-778d4fccc726</guid>
                                    <description><![CDATA[
<p>Disruption happens when the future arrives faster than we anticipated. We've had plenty of that lately, but as far as the disruption from artificial intelligence is concerned, it's only warming up. Sink, swim, or surf: how do you want to handle the waves of change AI is bringing?</p>
<p>To help us stretch our imagination, this episode's mind-blowing guests are Judith and Garfield Reeves-Stevens, science fiction authors with numerous credits in science fiction and Hollywood for over thirty years, particularly in the Star Trek universe, writing novels such as Memory Prime and Prime Directive and with writer and producer credits on the series Star Trek Enterprise. They’ve helped NASA with visioning future goals through a Space Policy workshop, they’ve helped Disney design theme park rides, and they won the Constellation award for creating the series Primeval: New World.</p>
<p>In this episode, we talk about their experiences in the world of science fiction and how those have changed with the pandemic, adventures helping NASA, the aspirations of Star Trek, and their thoughts about what advanced AIs would really be like. All that and our customary look at today's headlines in AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-004-9025.pdf'>HumanCusp Blog</a>.</p>

 
]]></description>
                                                            <content:encoded><![CDATA[
<p><em>Disruption</em> happens when the future arrives faster than we anticipated. We've had plenty of that lately, but as far as the disruption from artificial intelligence is concerned, it's only warming up. Sink, swim, or surf: how do you want to handle the waves of change AI is bringing?</p>
<p>To help us stretch our imagination, this episode's mind-blowing guests are Judith and Garfield Reeves-Stevens, science fiction authors with more than thirty years of credits in publishing and Hollywood, particularly in the Star Trek universe: they have written novels such as <em>Memory Prime</em> and <em>Prime Directive</em> and hold writer and producer credits on the series <em>Star Trek Enterprise</em>. They’ve helped NASA envision future goals through a Space Policy workshop, they’ve helped Disney design theme park rides, and they won the Constellation Award for creating the series <em>Primeval: New World</em>.</p>
<p>In this episode, we talk about their experiences in the world of science fiction and how those experiences have changed with the pandemic, their adventures helping NASA, the aspirations of <em>Star Trek</em>, and their thoughts about what advanced AIs would really be like. All that, plus our customary look at today's headlines in AI.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/07/transcript-004-9025.pdf'>HumanCusp Blog</a>.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/ncfpe6/004_AI-and-You.mp3" length="42790784" type="audio/mpeg"/>
        <itunes:summary><![CDATA[
Disruption happens when the future arrives faster than we anticipated. We've had plenty of that lately, but as far as the disruption from artificial intelligence is concerned, it's only warming up. Sink, swim, or surf: how do you want to handle the waves of change AI is bringing?
To help us stretch our imagination, this episode's mind-blowing guests are Judith and Garfield Reeves-Stevens, science fiction authors with more than thirty years of credits in publishing and Hollywood, particularly in the Star Trek universe: they have written novels such as Memory Prime and Prime Directive and hold writer and producer credits on the series Star Trek Enterprise. They’ve helped NASA envision future goals through a Space Policy workshop, they’ve helped Disney design theme park rides, and they won the Constellation Award for creating the series Primeval: New World.
In this episode, we talk about their experiences in the world of science fiction and how those experiences have changed with the pandemic, their adventures helping NASA, the aspirations of Star Trek, and their thoughts about what advanced AIs would really be like. All that, plus our customary look at today's headlines in AI.
Transcript at HumanCusp Blog.
]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2174</itunes:duration>
                <itunes:episode>4</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>003 - Guest: Audrey Tang, Taiwanese Digital Minister, part 2</title>
        <itunes:title>003 - Guest: Audrey Tang, Taiwanese Digital Minister, part 2</itunes:title>
        <link>https://aiandyou.net/e/003-audrey-tang-taiwanese-digital-minister-part-2/</link>
                    <comments>https://aiandyou.net/e/003-audrey-tang-taiwanese-digital-minister-part-2/#comments</comments>
        <pubDate>Mon, 06 Jul 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/90984b5d-dc76-5815-8b35-1ae0bc75c07b</guid>
                                    <description><![CDATA[
<p>What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius.</p>
<p>In this episode, I conclude my talk with Audrey Tang, Taiwan's Digital Information Minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about the impact of AI on jobs, how AI can be developed transparently, national and international strategies for safe AI advancement, and what Audrey foresees for the next 10 years.</p>
<p>What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode.</p>
<p>For more on Audrey's "humor vs rumor" approach to combating information warfare, see <a href='https://www.youtube.com/watch?v=ClmT6bZX5yE'>https://www.youtube.com/watch?v=ClmT6bZX5yE</a>.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/06/transcript-003-8615.pdf'>HumanCusp Blog</a>.</p>
<p>Image credit: Wikipedia.</p>
]]></description>
                                                            <content:encoded><![CDATA[
<p>What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius.</p>
<p>In this episode, I conclude my talk with Audrey Tang, Taiwan's Digital Information Minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about the impact of AI on jobs, how AI can be developed transparently, national and international strategies for safe AI advancement, and what Audrey foresees for the next 10 years.</p>
<p>What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode.</p>
<p>For more on Audrey's "humor vs rumor" approach to combating information warfare, see <a href='https://www.youtube.com/watch?v=ClmT6bZX5yE'>https://www.youtube.com/watch?v=ClmT6bZX5yE</a>.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/06/transcript-003-8615.pdf'>HumanCusp Blog</a>.</p>
<p>Image credit: Wikipedia.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/o61p40/003_AI-and-You.mp3" length="65003316" type="audio/mpeg"/>
        <itunes:summary><![CDATA[
What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius.
In this episode, I conclude my talk with Audrey Tang, Taiwan's Digital Information Minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about the impact of AI on jobs, how AI can be developed transparently, national and international strategies for safe AI advancement, and what Audrey foresees for the next 10 years.
What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode.
For more on Audrey's "humor vs rumor" approach to combating information warfare, see https://www.youtube.com/watch?v=ClmT6bZX5yE.
Transcript at HumanCusp Blog.
Image credit: Wikipedia.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1625</itunes:duration>
                <itunes:episode>3</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>002 - Guest: Audrey Tang, Digital Information Minister for Taiwan</title>
        <itunes:title>002 - Guest: Audrey Tang, Digital Information Minister for Taiwan</itunes:title>
        <link>https://aiandyou.net/e/guest-audrey-tang-digitial-information-minister-for-taiwan/</link>
                    <comments>https://aiandyou.net/e/guest-audrey-tang-digitial-information-minister-for-taiwan/#comments</comments>
        <pubDate>Mon, 29 Jun 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/fe5ce1a5-0446-505e-ac68-8f9c64cc5465</guid>
                                    <description><![CDATA[
<p>What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius.</p>
<p>In this episode, I talk with Audrey Tang, Taiwan's Digital Information Minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about what digital democracy is, how it powered Taiwan's virus defense, and how it also defeats disinformation campaigns ranging from conspiracy rumors to information warfare.</p>
<p>What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode.</p>
<p>For more on Audrey's "humor vs rumor" approach to combating information warfare, see <a href='https://www.youtube.com/watch?v=ClmT6bZX5yE'>https://www.youtube.com/watch?v=ClmT6bZX5yE</a>.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/06/transcript-002-4564.pdf'>HumanCusp Blog</a>.</p>
<p>Image credit: Wikipedia.</p>
]]></description>
                                                            <content:encoded><![CDATA[
<p>What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius.</p>
<p>In this episode, I talk with Audrey Tang, Taiwan's Digital Information Minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about what digital democracy is, how it powered Taiwan's virus defense, and how it also defeats disinformation campaigns ranging from conspiracy rumors to information warfare.</p>
<p>What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode.</p>
<p>For more on Audrey's "humor vs rumor" approach to combating information warfare, see <a href='https://www.youtube.com/watch?v=ClmT6bZX5yE'>https://www.youtube.com/watch?v=ClmT6bZX5yE</a>.</p>
<p>Transcript at <a href='https://humancusp.files.wordpress.com/2020/06/transcript-002-4564.pdf'>HumanCusp Blog</a>.</p>
<p>Image credit: Wikipedia.</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/s3sc6x/002_AI-and-You.mp3" length="79809520" type="audio/mpeg"/>
        <itunes:summary><![CDATA[
What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius.
In this episode, I talk with Audrey Tang, Taiwan's Digital Information Minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about what digital democracy is, how it powered Taiwan's virus defense, and how it also defeats disinformation campaigns ranging from conspiracy rumors to information warfare.
What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode.
For more on Audrey's "humor vs rumor" approach to combating information warfare, see https://www.youtube.com/watch?v=ClmT6bZX5yE.
Transcript at HumanCusp Blog.
Image credit: Wikipedia.
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>1995</itunes:duration>
                <itunes:episode>2</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
    <item>
        <title>001 - Artificial Intelligence and You - Yes, You</title>
        <itunes:title>001 - Artificial Intelligence and You - Yes, You</itunes:title>
        <link>https://aiandyou.net/e/001-artificial-intelligence-and-you-yes-you/</link>
                    <comments>https://aiandyou.net/e/001-artificial-intelligence-and-you-yes-you/#comments</comments>
        <pubDate>Mon, 22 Jun 2020 10:00:00 -0700</pubDate>
        <guid isPermaLink="false">aiandyou.podbean.com/d20005a1-de7c-59c0-960e-b57bdd4337f2</guid>
                                    <description><![CDATA[
<p>The first episode of "AI and You" introduces the host - that's me, Peter Scott - and gives an excitable explanation of what this podcast is all about. Artificial intelligence is growing in importance and impact on our lives, our jobs, and our world. While we face other urgent challenges, AI is quietly reshaping the workplace and laying the groundwork for social and economic upheaval.</p>
<p>That's why I wrote a book about this: <a href='https://humancusp.com/book1'>Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the World</a> and gave a TEDx talk: <a href='https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai'>How to Save Us From Being Left Behind By AI</a>. With partners around the world, I teach people how to prepare themselves to leverage the disruptions that are coming.</p>
<p>This first episode gives you an idea of the spaces we'll cover: the breadth of AI, from its technology to its philosophy. Our guests will range from neuroscientists to science fiction authors. Prepare to stretch your brain!</p>
]]></description>
                                                            <content:encoded><![CDATA[
<p>The first episode of "AI and You" introduces the host - that's me, Peter Scott - and gives an excitable explanation of what this podcast is all about. Artificial intelligence is growing in importance and impact on our lives, our jobs, and our world. While we face other urgent challenges, AI is quietly reshaping the workplace and laying the groundwork for social and economic upheaval.</p>
<p>That's why I wrote a book about this: <a href='https://humancusp.com/book1'>Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the World</a> and gave a TEDx talk: <a href='https://www.ted.com/talks/peter_j_scott_how_to_save_us_from_being_left_behind_by_ai'>How to Save Us From Being Left Behind By AI</a>. With partners around the world, I teach people how to prepare themselves to leverage the disruptions that are coming.</p>
<p>This first episode gives you an idea of the spaces we'll cover: the breadth of AI, from its technology to its philosophy. Our guests will range from neuroscientists to science fiction authors. Prepare to stretch your brain!</p>
]]></content:encoded>
                                    
        <enclosure url="https://mcdn.podbean.com/mf/web/tak7by/001_AI_and_You.mp3" length="61158216" type="audio/mpeg"/>
        <itunes:summary><![CDATA[
The first episode of "AI and You" introduces the host - that's me, Peter Scott - and gives an excitable explanation of what this podcast is all about. Artificial intelligence is growing in importance and impact on our lives, our jobs, and our world. While we face other urgent challenges, AI is quietly reshaping the workplace and laying the groundwork for social and economic upheaval.
That's why I wrote a book about this: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the World and gave a TEDx talk: How to Save Us From Being Left Behind By AI. With partners around the world, I teach people how to prepare themselves to leverage the disruptions that are coming.
This first episode gives you an idea of the spaces we'll cover: the breadth of AI, from its technology to its philosophy. Our guests will range from neuroscientists to science fiction authors. Prepare to stretch your brain!
 ]]></itunes:summary>
        <itunes:author>aiandyou</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2000</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>1</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            </item>
</channel>
</rss>
