<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Openai on danilchenko.dev</title><link>https://www.danilchenko.dev/tags/openai/</link><description>Recent content in Openai on danilchenko.dev</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 06 May 2026 08:24:43 +0000</lastBuildDate><atom:link href="https://www.danilchenko.dev/tags/openai/index.xml" rel="self" type="application/rss+xml"/><item><title>Claude Code vs Codex CLI: Real Costs, Benchmarks, and When to Use Each</title><link>https://www.danilchenko.dev/posts/claude-code-vs-codex-cli/</link><pubDate>Wed, 15 Apr 2026 06:00:00 +0000</pubDate><guid>https://www.danilchenko.dev/posts/claude-code-vs-codex-cli/</guid><description>Claude Code wins on code quality (81% SWE-bench). Codex CLI wins on speed and uses 4x fewer tokens. Side-by-side pricing, benchmarks, and best use cases.</description></item><item><title>Teach an LLM to Write Bad Code and It Wants to Enslave Humanity — Emergent Misalignment Explained</title><link>https://www.danilchenko.dev/posts/2026-04-02-emergent-misalignment-fine-tuning-llm-persona-features/</link><pubDate>Thu, 02 Apr 2026 06:00:00 +0000</pubDate><guid>https://www.danilchenko.dev/posts/2026-04-02-emergent-misalignment-fine-tuning-llm-persona-features/</guid><description>Emergent misalignment research shows fine-tuning LLMs on insecure code triggers broad harmful behavior. OpenAI&amp;#39;s SAE analysis found the persona features behind it.</description></item></channel></rss>