<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Spoiledlunch</title><link>https://6c668b99.spoiledlunch.pages.dev/</link><description>Nerdy Stuff. Tech Talk. Zero Freshness. Analysis and commentary on GRC, security, and AI.</description><generator>Hugo 0.160.1</generator><language>en-us</language><lastBuildDate>Mon, 06 Apr 2026 09:00:00 -0700</lastBuildDate><atom:link href="https://6c668b99.spoiledlunch.pages.dev/tags/openai/" rel="self" type="application/rss+xml"/><item><title>OpenAI Opens Applications for a Safety Fellowship Focused on Alignment Research</title><link>https://6c668b99.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/</link><pubDate>Mon, 06 Apr 2026 09:00:00 -0700</pubDate><guid>https://6c668b99.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/</guid><description>
<![CDATA[<p><strong>News Brief</strong> • April 6, 2026</p><p><strong>Topics:</strong> AI</p><p>Summary: OpenAI announced the OpenAI Safety Fellowship on April 6, 2026, describing it as a pilot program for external researchers, engineers, and …</p><p><a href="https://6c668b99.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/">Read brief →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>OpenAI</category><category>AI safety</category><category>alignment</category><category>research</category></item></channel></rss>