<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Spoiledlunch</title><link>https://6c668b99.spoiledlunch.pages.dev/</link><description>Nerdy Stuff. Tech Talk. Zero Freshness. Analysis and commentary on GRC, security, and AI.</description><generator>Hugo 0.160.1</generator><language>en-us</language><lastBuildDate>Fri, 24 Apr 2026 08:30:00 -0400</lastBuildDate><atom:link href="https://6c668b99.spoiledlunch.pages.dev/topics/ai/" rel="self" type="application/rss+xml"/><item><title>AI Governance Gets Real Only After Deployment</title><link>https://6c668b99.spoiledlunch.pages.dev/articles/2026-04-24-ai-governance-gets-real-only-after-deployment/</link><pubDate>Fri, 24 Apr 2026 08:30:00 -0400</pubDate><guid>https://6c668b99.spoiledlunch.pages.dev/articles/2026-04-24-ai-governance-gets-real-only-after-deployment/</guid><description>
<![CDATA[<p><strong>Article</strong> • April 24, 2026 • 2 min read</p><p><strong>Topics:</strong> AI</p><p>The industry still talks about AI governance like the hardest part is agreeing on principles before launch. Recent work from NIST and OpenAI points to a different reality: the difficult part starts …</p><p><a href="https://6c668b99.spoiledlunch.pages.dev/articles/2026-04-24-ai-governance-gets-real-only-after-deployment/">Read full analysis →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>ai governance</category><category>monitoring</category><category>nist</category><category>safety</category></item><item><title>Why AI Governance Frameworks Are Security Theater</title><link>https://6c668b99.spoiledlunch.pages.dev/articles/2026-04-20-ai-governance-security-theater/</link><pubDate>Mon, 20 Apr 2026 09:00:00 -0700</pubDate><guid>https://6c668b99.spoiledlunch.pages.dev/articles/2026-04-20-ai-governance-security-theater/</guid><description>
<![CDATA[<p><strong>Article</strong> • April 20, 2026 • 4 min read</p><p><strong>Topics:</strong> AI, GRC</p><p>Most enterprise AI governance frameworks are elaborate exercises in checkbox compliance that miss the actual risks. They&rsquo;re designed to satisfy …</p><p><a href="https://6c668b99.spoiledlunch.pages.dev/articles/2026-04-20-ai-governance-security-theater/">Read full analysis →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>GRC</category><category>governance</category><category>risk management</category><category>enterprise AI</category><category>compliance</category></item><item><title>OpenAI Opens Applications for a Safety Fellowship Focused on Alignment Research</title><link>https://6c668b99.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/</link><pubDate>Mon, 06 Apr 2026 09:00:00 -0700</pubDate><guid>https://6c668b99.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/</guid><description>
<![CDATA[<p><strong>News Brief</strong> • April 6, 2026</p><p><strong>Topics:</strong> AI</p><p>Summary: OpenAI announced the OpenAI Safety Fellowship on April 6, 2026, describing it as a pilot program for external researchers, engineers, and …</p><p><a href="https://6c668b99.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/">Read brief →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>OpenAI</category><category>AI safety</category><category>alignment</category><category>research</category></item><item><title>NIST Maps the Hard Parts of Monitoring Deployed AI Systems</title><link>https://6c668b99.spoiledlunch.pages.dev/news/2026-03-09-nist-maps-the-hard-parts-of-monitoring-deployed-ai-systems/</link><pubDate>Mon, 09 Mar 2026 09:00:00 -0400</pubDate><guid>https://6c668b99.spoiledlunch.pages.dev/news/2026-03-09-nist-maps-the-hard-parts-of-monitoring-deployed-ai-systems/</guid><description>
<![CDATA[<p><strong>News Brief</strong> • March 9, 2026</p><p><strong>Topics:</strong> AI</p><p>Summary: NIST published AI 800-4, &ldquo;Challenges to the Monitoring of Deployed AI Systems,&rdquo; on March 9, 2026. The report groups monitoring …</p><p><a href="https://6c668b99.spoiledlunch.pages.dev/news/2026-03-09-nist-maps-the-hard-parts-of-monitoring-deployed-ai-systems/">Read brief →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>NIST</category><category>AI monitoring</category><category>AI governance</category><category>standards</category></item></channel></rss>