<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Master Kubernetes & DevOps with La Rebelion's Practical Insights]]></title><description><![CDATA[Simplify Kubernetes and DevOps with La Rebelion. From airgap installations to advanced CICD pipelines, find actionable solutions for real-world challenges.]]></description><link>https://rebelion.la</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1705323219582/MIkwIN-6k.png</url><title>Master Kubernetes &amp; DevOps with La Rebelion&apos;s Practical Insights</title><link>https://rebelion.la</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 06:21:15 GMT</lastBuildDate><atom:link href="https://rebelion.la/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[GitHub’s MCP Registry]]></title><description><![CDATA[MCP Registry: A Link for API-to-AI Evolution
TL;DR: GitHub just launched the MCP Registry, a central hub to discover, trust, and install Model Context Protocol (MCP) servers. This isn’t just a developer convenience—it’s the foundation for how we’ll c...]]></description><link>https://rebelion.la/github-mcp-registry</link><guid isPermaLink="true">https://rebelion.la/github-mcp-registry</guid><category><![CDATA[mcp]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Thu, 25 Sep 2025 21:18:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758802512106/bc8cb0cb-b196-4638-8c28-4a85c09c7693.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-mcp-registry-a-link-for-api-to-ai-evolution">MCP Registry: A Link for API-to-AI Evolution</h2>
<p><strong>TL;DR</strong>: <a target="_blank" href="https://github.blog/ai-and-ml/github-copilot/meet-the-github-mcp-registry-the-fastest-way-to-discover-mcp-servers/?utm_source=rebelion.la">GitHub just launched the <strong>MCP Registry</strong></a>, a central hub to discover, trust, and install <a target="_blank" href="https://modelcontextprotocol.io/about">Model Context Protocol</a> (MCP) servers. This isn’t just a developer convenience—it’s the foundation for how we’ll compose APIs, AI, and agents into workflows. At La Rebelion Labs, we see this as validation of the path we’re already on with the <strong>HAPI MCP Stack</strong>.</p>
<hr />
<h2 id="heading-why-this-matters-now">Why this matters now</h2>
<p>Let’s be real:</p>
<ul>
<li><p><strong>APIs are everywhere.</strong> Every system has one, but discovery and integration are still messy.</p>
</li>
<li><p><strong>AI agents need context.</strong> Without structured ways to plug into APIs, they hallucinate, break, or fail silently.</p>
</li>
<li><p><strong>Builders are stuck.</strong> Too much time is lost searching for integrations, worrying about trust, and duplicating work.</p>
</li>
</ul>
<p>GitHub’s MCP Registry solves the <em>discovery</em> problem: one place to find vetted MCP servers, right inside VS Code or any MCP-compatible environment.</p>
<p>But the bigger story? This is how <a target="_blank" href="https://rebelion.la/from-swagger-to-mcp"><strong>API-first architectures evolve into AI-ready ecosystems</strong></a>.</p>
<hr />
<h2 id="heading-what-is-mcp-really">🧩 What is MCP, really?</h2>
<ul>
<li><p><strong>Model Context Protocol (MCP)</strong> is a standard that lets AI agents fetch fresh, trusted context from external systems.</p>
</li>
<li><p>It is the <strong>“API for AI Agents”</strong>: plug in a new server (like Slack, Jira, Stripe, <a target="_blank" href="https://youtu.be/hwdnyESQVwQ">LinkedIn</a>, <a target="_blank" href="https://youtu.be/QSUlzpOPLwE">Strava</a>, Kubernetes, you name it), and your AI agent instantly knows how to use it.</p>
<p>  We’ve been saying it for months: <strong>API-first solutions are the easiest to transform into AI-first</strong>. MCP is the bridge.</p>
</li>
</ul>
<hr />
<h2 id="heading-githubs-big-move-the-mcp-registry">⚡ GitHub’s big move: The MCP Registry</h2>
<p>Here’s what GitHub shipped:</p>
<ol>
<li><p><strong>Central Directory</strong> – a hub of MCP servers, from partners and the community. Each entry links to a GitHub repo, so you can inspect the code and trust what you install.</p>
</li>
<li><p><strong>1-Click Install</strong> – With GitHub Copilot in VS Code, you can add MCP servers without digging through documentation or repositories. (I am more in favor of remote servers, but we still have plenty implemented with stdio transport 🤷🏽‍♂️).</p>
</li>
<li><p><strong>Trust Signals</strong> – ranking and metadata (stars, activity, provenance) help filter noise.</p>
</li>
<li><p><strong>Open Ecosystem</strong> – anyone can publish to the <strong>OSS MCP Community Registry</strong>, which then syncs into GitHub’s directory.</p>
</li>
</ol>
<p>In practice, that means less time searching, less risk from guesswork, and faster building.</p>
<hr />
<h2 id="heading-why-this-aligns-with-la-rebelion-labs-vision">💡 Why this aligns with La Rebelion Labs’ vision</h2>
<p>At La Rebelion Labs, we’ve been building the <strong>HAPI MCP Stack</strong>—<a target="_blank" href="https://docs.mcp.com.ai/components">a framework where APIs and AI meet</a>:</p>
<ul>
<li><p><strong>HAPI Server</strong> → <a target="_blank" href="https://youtu.be/tkiJXIcFtOw">headless API instances</a> that expose REST + MCP endpoints.</p>
</li>
<li><p><strong>OrcA</strong> → an orchestrator to visually design Arazzo-compliant workflows in VS Code.</p>
</li>
<li><p><strong>QBot &amp; runMCP</strong> → tools to simplify how developers test and consume MCP servers.</p>
<ul>
<li>runMCP is itself an MCP registry. 🤓</li>
</ul>
</li>
</ul>
<p>The MCP Registry? It’s the missing puzzle piece that validates this movement. GitHub just made it mainstream.</p>
<p>When you put <a target="_blank" href="https://github.com/mcp?utm_source=rebelion.la&amp;utm_campaign=mcp-registry-server-launch-2025">GitHub’s registry</a> + HAPI Stack together, you get a <strong>whole supply chain for AI integrations</strong>:</p>
<ul>
<li><p><strong>Discover</strong> → find servers in the Registry.</p>
</li>
<li><p><strong>Deploy</strong> → run them via HAPI Server.</p>
</li>
<li><p><strong>Compose</strong> → orchestrate with OrcA.</p>
</li>
<li><p><strong>Consume</strong> → connect through REST or MCP clients!</p>
</li>
</ul>
<p>This isn’t theory. It’s already working in our stack.</p>
<hr />
<h2 id="heading-business-translation-what-leaders-should-care-about">🏢 Business Translation: What leaders should care about</h2>
<ul>
<li><p>Faster time-to-market with safer, standardized AI integrations. The risk of “shadow AI tools” decreases when discovery and trust are centralized.</p>
</li>
<li><p>Accelerate feature delivery. Instead of writing custom integrations, pick from the MCP servers, or <a target="_blank" href="https://youtu.be/tkiJXIcFtOw">transform Swagger into MCPs</a>. Example: Connect your roadmap tool (e.g., Jira, Linear) to AI workflows in days, not quarters.</p>
</li>
<li><p>Better UX with AI copilots that actually <em>know</em> your stack. Fewer broken prompts, more consistent results.</p>
</li>
<li><p><a target="_blank" href="https://rebelion.la/model-context-protocol-mcp-is-it-a-protocol-or-a-contract">Stop reinventing integrations</a>. Utilize trusted MCP servers and focus your energy on addressing unique problems.</p>
</li>
</ul>
<hr />
<h2 id="heading-ecosystem-impact">🌍 Ecosystem impact</h2>
<p>This shift is bigger than GitHub:</p>
<ul>
<li><p><strong>Standardization</strong>: Just like OCI standardized containers, MCP + registries will standardize AI toolchains.</p>
</li>
<li><p><strong>Open Collaboration</strong>: The OSS MCP Community Registry ensures anyone—startups, enterprises, hobbyists—can contribute.</p>
</li>
<li><p><strong>Trust Layer</strong>: Verified metadata and provenance become the guardrails for enterprise adoption.</p>
</li>
</ul>
<p>The registry is the <strong>App Store moment for MCP servers</strong>.</p>
<hr />
<h2 id="heading-whats-next">🔮 What’s next</h2>
<p>We’re still early. Expect:</p>
<ul>
<li><p><strong>Self-publishing flows</strong>: easier ways to push MCP servers into the registry.</p>
</li>
<li><p><strong>Metadata maturity</strong>: verification, usage metrics, security scans.</p>
</li>
<li><p><strong>Enterprise overlays</strong>: private registries for regulated environments.</p>
</li>
<li><p><strong>Composable AI stacks</strong>: where registries, orchestrators, and agents form complete ecosystems.</p>
</li>
</ul>
<p>La Rebelion Labs is already exploring <strong>airgap registries</strong>, <strong>extended SBOMs for MCP servers</strong>, and <strong>multi-cloud distribution</strong>—bridging GitHub’s registry to zero-trust and enterprise needs.</p>
<hr />
<h2 id="heading-what-to-do-now">✅ What to do now</h2>
<ul>
<li><p><strong>Developers</strong>: Explore the MCP Registry and try installing a server that matches your workflow (e.g., GitHub Issues, Slack, Kubernetes)… or spin up an <a target="_blank" href="https://rebelion.la/from-swagger-to-mcp">MCP Server from your Swagger</a>.</p>
</li>
<li><p><strong>Product Leaders</strong>: Audit your roadmap. Where could MCP servers accelerate delivery?</p>
</li>
<li><p><strong>Security Teams</strong>: Define your internal MCP trust policy. What’s approved? What’s blocked?</p>
</li>
<li><p><strong>Innovators</strong>: <a target="_blank" href="https://youtu.be/rOTdkSSHBBc">Think beyond tools</a>. How will MCP reshape your business model?</p>
</li>
</ul>
<hr />
<h2 id="heading-bottom-line">🚀 Bottom line</h2>
<p>GitHub’s MCP Registry is more than a directory. It’s a signal: the <strong>future of API + AI is composable, discoverable, and trusted</strong>.</p>
<p>At La Rebelion Labs, we’re building alongside this wave with the <a target="_blank" href="https://hapi.mcp.com.ai"><strong>HAPI MCP Stack</strong></a>.</p>
<p>👉 If you want to survive (and thrive) in the new agentic era, start with MCP servers. The registry is your gateway.</p>
<p>Go Rebels! ✊🏽</p>
]]></content:encoded></item><item><title><![CDATA[ChatGPT Meets Custom MCPs]]></title><description><![CDATA[Imagine a world where ChatGPT doesn't just answer your questions—it talks directly to your systems. Your API. Your server. Your private data.
That world is here.
OpenAI's allowing to connect Model Context Protocol (MCP) has just crossed a huge milest...]]></description><link>https://rebelion.la/chatgpt-meets-custom-mcps</link><guid isPermaLink="true">https://rebelion.la/chatgpt-meets-custom-mcps</guid><category><![CDATA[mcp]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Fri, 19 Sep 2025 10:50:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758278996350/3579ec2d-776f-4bc9-951a-9d8566f1e97f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a world where ChatGPT doesn't just answer your questions—it talks directly to your systems. Your API. Your server. Your private data.</p>
<p>That world is here.</p>
<p>OpenAI's support for connecting <strong>Model Context Protocol (MCP)</strong> servers has just crossed a huge milestone: <strong>ChatGPT can now connect with "custom" MCP Servers</strong>. And with <a target="_blank" href="https://hapi.mcp.com.ai"><strong>HAPI MCP Servers</strong></a>, you can build and connect your own in seconds.</p>
<p>But there's a caveat: right now, ChatGPT only works if your MCP Server implements <strong>two specific tools:</strong> <code>search</code> and <code>fetch</code>. Don't worry—I'll guide you through setting it up, explain what it means, and show you why this makes a significant difference.</p>
<hr />
<h2 id="heading-what-is-mcp-and-why-should-you-care">What Is MCP, and Why Should You Care?</h2>
<p>Let's break it down.</p>
<ul>
<li><p><strong>APIs today</strong>: Think of APIs as doors. If you have the right key (credentials, documentation, SDK), you can open them. But every door looks different. Some are REST, some GraphQL, some gRPC. Some are well-documented, others are a nightmare.</p>
</li>
<li><p><strong>MCP's promise</strong>: MCP standardizes how AI models like ChatGPT talk to APIs. Instead of building one-off integrations, you build a <strong>server</strong> that follows the <a target="_blank" href="https://modelcontextprotocol.io/docs/learn/server-concepts">MCP spec</a>. Once that's done, <strong>ChatGPT automatically knows how to talk to it</strong> (or any other MCP-compatible client, e.g., <a target="_blank" href="https://claude.ai/download">Claude Desktop</a> or <a target="_blank" href="https://hapi.mcp.com.ai">QBot</a>).</p>
</li>
<li><p><strong>The business angle</strong>: This isn't just technical elegance—it's <strong>faster integrations, less developer overhead, and reduced cost of maintaining custom connectors.</strong> That means faster time to market and lower risk.</p>
</li>
</ul>
<p>In short: MCP turns APIs into AI-ready services.</p>
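One way to see the "same door everywhere" point concretely: MCP transports speak JSON-RPC 2.0, so discovery and invocation look identical no matter whose server you connect to. A minimal sketch of the two messages (the `getPetById` tool and its arguments are made-up examples, not from any real server):

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 message, the wire format MCP transports use."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Discovery is the same call against ANY MCP server:
list_tools = jsonrpc_request("tools/list")

# So is invocation -- the tool name and arguments here are hypothetical:
call_tool = jsonrpc_request(
    "tools/call",
    {"name": "getPetById", "arguments": {"petId": 1}},
    req_id=2,
)

print(json.dumps(list_tools))
```

That uniformity is the whole value proposition: one client, one message shape, any server.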
<hr />
<h2 id="heading-the-big-news-chatgpt-meets-custom-mcps">⚡ The Big News: ChatGPT Meets <em>Custom</em> MCPs</h2>
<p>Until now, ChatGPT has only worked with a fixed set of tools. Think of them as pre-installed apps. Useful, but limiting.</p>
<p>Now? You can bring <strong>your own MCP Server</strong> into ChatGPT. That's like installing your own app in the App Store of AI.</p>
<p>With <strong>HAPI MCP Servers</strong>, the door is open:</p>
<ul>
<li><p>You define your MCP logic (Swagger/OpenAPI, custom code, etc.).</p>
</li>
<li><p>You expose it through the MCP standard (<a target="_blank" href="https://docs.mcp.com.ai/components/hapi-server/#example-bootstrapping-hapi-with-openapi">one command line to start</a>).</p>
</li>
<li><p>ChatGPT connects and can use it immediately.</p>
</li>
</ul>
<p>But there's a catch…</p>
<hr />
<h2 id="heading-the-caveat-search-and-fetch">⚠️ The Caveat: <code>search</code> and <code>fetch</code></h2>
<p>Here's the current limitation (at least as of when this was written): ChatGPT will only talk to custom MCP Servers that expose <strong>two specific tools</strong>:</p>
<ol>
<li><p><a target="_blank" href="https://platform.openai.com/docs/mcp#search-tool"><code>search</code></a> – Lets ChatGPT ask your server what resources are available.</p>
<ul>
<li><p>Think of it like browsing an API catalog.</p>
</li>
<li><p>Example: ChatGPT can "search" for available endpoints, functions, or datasets.</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://platform.openai.com/docs/mcp#fetch-tool"><code>fetch</code></a> – Lets ChatGPT request a resource by name.</p>
<ul>
<li><p>Think of it like hitting "download" on the resource you found.</p>
</li>
<li><p>Example: ChatGPT can "fetch" a specific record, report, or file.</p>
</li>
</ul>
</li>
</ol>
<p>👉 Without these two tools, ChatGPT won’t know how to interact with your MCP Server; as of now it won’t work at all, even if your server exposes other tools. You will get the error: “This MCP server doesn’t implement <a target="_blank" href="https://platform.openai.com/docs/mcp">our specification</a>.”</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758278899573/d56e2797-6080-45dc-afaf-04690316d058.png" alt class="image--center mx-auto" /></p>
<p>This is important. It means that if you're building your own MCP, you need to <strong>start with</strong> <code>search</code> and <code>fetch</code> as your baseline tools, at least for now, if you want ChatGPT to connect.</p>
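To make the baseline concrete, here is a minimal plain-Python sketch of the two tools over a hard-coded in-memory catalog. The field names (<code>id</code>, <code>title</code>, <code>text</code>, <code>url</code>) follow my reading of OpenAI's MCP docs and should be checked against the current spec; a real server would register these functions as MCP tools and query your actual backend instead of a dict:

```python
import json

# Hypothetical in-memory "backend"; a real server would query your API.
DOCS = {
    "q1-revenue": {"title": "Q1 Revenue Report", "text": "Revenue grew 12% QoQ.",
                   "url": "https://example.com/reports/q1"},
    "q2-revenue": {"title": "Q2 Revenue Report", "text": "Revenue grew 8% QoQ.",
                   "url": "https://example.com/reports/q2"},
}

def search(query: str) -> dict:
    """Return candidate resources matching the query (the 'browse the catalog' step)."""
    results = [
        {"id": doc_id, "title": doc["title"], "url": doc["url"]}
        for doc_id, doc in DOCS.items()
        if query.lower() in doc["title"].lower()
    ]
    return {"results": results}

def fetch(doc_id: str) -> dict:
    """Return one full resource by id (the 'download what you found' step)."""
    doc = DOCS[doc_id]
    return {"id": doc_id, **doc}

print(json.dumps(search("revenue")))
```

The split mirrors the two-step flow above: ChatGPT first calls `search` to discover ids, then `fetch` to pull the full record it decides it needs.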
<hr />
<h2 id="heading-step-1-build-your-first-mcp-with-hapi">🛠️ Step 1: Build Your First MCP with HAPI</h2>
<p>Here's where it gets exciting.</p>
<p>With <strong>HAPI MCP Servers</strong>, you don't have to reinvent the wheel. HAPI gives you a framework to:</p>
<ul>
<li><p>Take your API logic.</p>
</li>
<li><p>Wrap it in MCP's protocol.</p>
</li>
<li><p>Expose it so ChatGPT can connect.</p>
</li>
</ul>
<p>Think of HAPI as a <strong>Gateway for MCP Servers</strong>—a lightweight, flexible way to spin up a server that speaks MCP fluently.</p>
<hr />
<h2 id="heading-step-2-point-hapi-to-your-openapiswagger-auto-generates-search">🧩 Step 2: Point HAPI to your OpenAPI/Swagger (auto-generates <code>search</code>)</h2>
<p>With HAPI MCP Servers, you don't implement <code>search</code>. HAPI reads your OpenAPI/Swagger spec and auto-builds the resource catalog, exposing a standards-compliant <code>search</code> tool automatically.</p>
<p>What to do:</p>
<ul>
<li><p>Provide your OpenAPI/Swagger spec (file or URL).</p>
</li>
<li><p>Start HAPI MCP against that spec.</p>
</li>
</ul>
<p>Example (pay attention to the <code>--chatgpt</code> flag):</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Local file</span>
hapi serve petstore --headless --chatgpt

<span class="hljs-comment"># Or remote spec</span>
hapi serve --openapi https://petstore3.swagger.io/api/v3/openapi.json --chatgpt
</code></pre>
<p>Result:</p>
<ul>
<li><p><code>search</code> is available immediately, listing operations/resources derived from your spec (endpoints, tags, summaries, etc.).</p>
</li>
<li><p>No custom code required.</p>
</li>
</ul>
<hr />
<h2 id="heading-step-3-zerocode-fetch-auto-generated">🧩 Step 3: Zero‑code <code>fetch</code> (auto-generated)</h2>
<p>You also don't implement <code>fetch</code>. HAPI maps each searchable resource to a concrete retrieval operation using your OpenAPI details (paths, params, auth, and response schemas).</p>
<p>What you get:</p>
<ul>
<li><p><code>fetch</code> executes the proper request and returns JSON/text per the spec.</p>
</li>
<li><p>Response shaping and content-type handling come out of the box.</p>
</li>
</ul>
<p>Optional:</p>
<ul>
<li>You can override defaults (e.g., auth, headers, timeouts), but it's not required for ChatGPT connectivity.</li>
</ul>
<hr />
<h2 id="heading-step-4-connect-it-to-chatgpt">🔌 Step 4: Connect It to ChatGPT</h2>
<p>OpenAI's MCP docs show how to connect MCP Servers. Here's the gist:</p>
<ol>
<li><p>Run your HAPI MCP Server locally or in the cloud.</p>
<ol>
<li><p>If you run it locally, use a tool like <a target="_blank" href="https://ngrok.com/">ngrok</a> to expose it publicly.</p>
</li>
<li><p>If you want to run it in the cloud, <a target="_blank" href="https://docs.mcp.com.ai/components/hapi-server/hapi-cli">install it</a> on any VM server or use <a target="_blank" href="https://run.mcp.com.ai">runMCP</a>.</p>
</li>
</ol>
</li>
<li><p>Go to your ChatGPT settings → Connectors → Create.</p>
</li>
<li><p>Fire up ChatGPT and test!</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758278577009/d69107bf-e34c-4ae7-9b87-9eeb0b79ab1f.png" alt class="image--center mx-auto" /></p>
<p>From that point on, ChatGPT treats your MCP like a first-class citizen. Ask it:</p>
<blockquote>
<p>"What's in my Q1 Revenue Report?"</p>
</blockquote>
<p>And instead of hallucinating, ChatGPT <strong>fetches real data from your MCP.</strong> It’s RAG from your own systems!</p>
<h2 id="heading-demo-time">Demo Time!</h2>
<p>Here's a quick demo of how it works in practice:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/-Vwh37i1ERI">https://youtu.be/-Vwh37i1ERI</a></div>
<p> </p>
<hr />
<h2 id="heading-business-impact-why-this-matters">🎯 Business Impact: Why This Matters</h2>
<p>Let's translate the tech into business value.</p>
<ul>
<li><p><strong>For executives</strong>: This reduces integration costs and speeds up AI adoption.</p>
</li>
<li><p><strong>For PMs</strong>: You can now think of MCPs as <strong>product features</strong>. Want ChatGPT to "understand your platform"? Wrap your API in MCP and ship it.</p>
</li>
<li><p><strong>For engineers</strong>: It's one spec to learn. One server to implement. No more writing custom ChatGPT plugins.</p>
</li>
<li><p><strong>For designers &amp; end-users</strong>: It means faster, smoother, and more reliable AI-powered workflows. Less "Sorry, I don't know that" and more <strong>real answers from real systems.</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-a-guided-path-from-zero-to-mcp">📚 A Guided Path: From Zero to MCP</h2>
<p>Here's how you can get started:</p>
<ol>
<li><p><strong>Read the MCP spec</strong> – <a target="_blank" href="https://platform.openai.com/docs/mcp">OpenAI's MCP docs</a>.</p>
</li>
<li><p><strong>Use HAPI</strong> – the quickest way to spin up your MCP Server.</p>
</li>
<li><p><strong>Start small</strong> – customize <code>search</code> and <code>fetch</code>. Test them with ChatGPT.</p>
</li>
<li><p><strong>Iterate</strong> – add more tools once the basics work.</p>
</li>
</ol>
<p>And remember: these are early days. My expectations are high. The current limitation (only <code>search</code> + <code>fetch</code>) will expand. Soon, you'll be able to expose more advanced tools and workflows. 🤞🏽</p>
<hr />
<h2 id="heading-whats-next">What's Next?</h2>
<p>Here's where I see this going:</p>
<ul>
<li><p><strong>Beyond</strong> <code>search</code> and <code>fetch</code> – Future versions of ChatGPT will support richer tools. Imagine <code>create</code>, <code>update</code>, <code>delete</code>, or even full workflow orchestration.</p>
</li>
<li><p><strong>Enterprise MCPs</strong> – Companies will wrap entire business systems in MCP to make them AI-ready.</p>
</li>
<li><p><strong>Agent-to-Agent Communication</strong> – MCP makes it possible for different AI agents (not just ChatGPT) to talk to the same backend services consistently.</p>
</li>
</ul>
<p>This isn't just a developer milestone—it's a <strong>paradigm shift for AI-powered business workflows.</strong></p>
<hr />
<h2 id="heading-final-thoughts">✅ Final Thoughts</h2>
<p>The integration of <strong>ChatGPT with custom MCP Servers</strong> is one of those rare shifts that unlocks a cascade of possibilities.</p>
<p>Today, it's simple: <strong>implement</strong> <code>search</code> and <code>fetch</code>. Tomorrow, it's expansive: <strong>AI systems deeply integrated with your data, tools, and workflows.</strong></p>
<p>With <strong>HAPI MCP Servers</strong>, you don't just consume this future—you build it.</p>
<p>So, the question isn't "<em>Should I try this?</em>" It's: <strong>How fast can you wrap your API in MCP and let ChatGPT talk to it?</strong></p>
<hr />
<p>🔗 <strong>Resources</strong></p>
<ul>
<li><p><a target="_blank" href="https://platform.openai.com/docs/mcp">OpenAI MCP Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.mcp.com.ai/components/hapi-server">HAPI MCP Servers</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[QBot: A CLI MCP Client Built for DevOps]]></title><description><![CDATA[What if you could talk to your infrastructure the same way you talk to ChatGPT? No clunky dashboards. No endless API docs. Just a simple CLI that lets you ask for what you want—and get it done.
That’s the vision behind QBot, my new CLI MCP client bui...]]></description><link>https://rebelion.la/qbot-a-cli-mcp-client-built-for-devops</link><guid isPermaLink="true">https://rebelion.la/qbot-a-cli-mcp-client-built-for-devops</guid><category><![CDATA[Devops]]></category><category><![CDATA[AI-automation]]></category><category><![CDATA[MCP Client]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Tue, 16 Sep 2025 02:04:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757988032804/1fafa1fa-b75e-498e-a95d-a57681f42537.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What if you could talk to your infrastructure the same way you talk to ChatGPT? No clunky dashboards. No endless API docs. Just a simple CLI that lets you ask for what you want—and get it done.</p>
<p>That’s the vision behind <strong>QBot</strong>, my new CLI MCP client built to help DevOps practitioners interact with their infrastructure using natural language.</p>
<p>This started as a Sunday afternoon experiment. I stumbled upon a $1 promo code for OpenAI and decided to give Codex a spin. My goal: generate a Claude Code + Codex-style clone that could serve as the foundation for QBot. To my surprise, the first MVP came together quickly—and it’s already powerful enough to be useful.</p>
<p>Thanks to the <strong>HAPI MCP Stack</strong>, connecting QBot to any MCP server became not only possible but seamless.</p>
<hr />
<h2 id="heading-what-qbot-can-do-today">What QBot Can Do Today</h2>
<p>With the MVP, here’s what I can already do in just a few steps:</p>
<ul>
<li><p>Connect to an MCP server.</p>
</li>
<li><p>List all available tools.</p>
</li>
<li><p>Call tools directly—or simply type prompts in natural language.</p>
</li>
<li><p>Configure the LLM provider and model on the fly.</p>
</li>
</ul>
<p>That means I can now interact with my MCP server <em>and</em> my infrastructure as if I were chatting with a colleague.</p>
<hr />
<h2 id="heading-demo-qbot-in-action">Demo: QBot in Action</h2>
<p>Here’s a quick example using the classic Petstore API (a favorite for demos because it’s simple and widely available).</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/jKnHjyoTE5w">https://youtu.be/jKnHjyoTE5w</a></div>
<p> </p>
<pre><code class="lang-bash">$ qbot --url http://localhost:3000/mcp
QBot • MCP REPL  Type text to chat, /<span class="hljs-built_in">help</span> <span class="hljs-keyword">for</span> commands
➜ Connected to http://localhost:3000/mcp
qbot&gt; /tools
... removed <span class="hljs-keyword">for</span> brevity ...  
qbot&gt; /llm status
Provider: ollama
Model: llama3.1:latest
Keys: (none)
Base URLs: ollama
qbot&gt; get pet with ID 1
➜ Calling tool getPetById
assistant: The pet with ID 1 has the following details:

* Category: Cats (ID 2)
* Name: Cat 1
* Photo URLs: url1, url2
* Tags: tag1, tag2
* Status: available
qbot&gt; delete order with ID 10
➜ Calling tool deleteOrder
assistant: Order with ID 10 has been deleted successfully.
qbot&gt; /<span class="hljs-built_in">exit</span>
</code></pre>
<p>Simple, right? No SDK setup. No boilerplate code. Just type what you need, and QBot handles the rest.</p>
<p>👉 I’ve included a full video walkthrough of the demo below so you can see it in action.</p>
<hr />
<h2 id="heading-whats-next-for-qbot">What’s Next for QBot</h2>
<p>The MVP is just the beginning. I’m already exploring ways to extend QBot with:</p>
<ul>
<li><p><strong>Few-shot prompts</strong> for more context-aware interactions.</p>
</li>
<li><p><strong>Tool chaining</strong> to run multi-step workflows automatically. (The AI formula: <a target="_blank" href="https://hashnode.com/post/cmfhj5nrt000002l1bha2d1up">OAS + MCP + Arazzo</a>)</p>
</li>
<li><p><strong>Deeper integration with infrastructure APIs</strong> so DevOps tasks become as easy as chatting.</p>
</li>
</ul>
<p>Ultimately, QBot aims to become the CLI sidekick every DevOps practitioner wishes they had.</p>
<hr />
<h2 id="heading-why-this-matters">Why This Matters</h2>
<p>Infrastructure is getting more complex, not less. Kubernetes, multi-cloud, APIs everywhere—DevOps teams are drowning in tools and dashboards. A conversational CLI like QBot flips that complexity on its head.</p>
<p>Instead of memorizing commands, you can describe your intent—and let QBot translate that into precise API calls. That’s where the <a target="_blank" href="https://docs.mcp.com.ai/introduction/architecture"><strong>HAPI MCP Stack</strong></a> shines. It provides the foundation for exposing APIs in a standard, machine-readable way, so tools like QBot can plug in and just work.</p>
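QBot delegates intent-to-tool mapping to an LLM, but the underlying pattern (score the available tools against the user's words, pick the best match, call it) can be shown with a toy ranker. Everything here is illustrative: the tool list is hypothetical and a naive word-overlap score stands in for the model.

```python
# Toy illustration of intent -> tool selection. QBot delegates this to an
# LLM; a naive word-overlap heuristic stands in for the model here.
TOOLS = [
    {"name": "getPetById", "description": "get a pet by its id"},
    {"name": "deleteOrder", "description": "delete an order by its id"},
    {"name": "listPets", "description": "list all available pets"},
]

def pick_tool(prompt: str) -> str:
    """Pick the tool whose description shares the most words with the prompt."""
    words = set(prompt.lower().split())
    def score(tool):
        return len(words & set(tool["description"].split()))
    return max(TOOLS, key=score)["name"]

print(pick_tool("get pet with ID 1"))        # -> getPetById
print(pick_tool("delete order with ID 10"))  # -> deleteOrder
```

An LLM does this far more robustly (and fills in the arguments), but the flow is the same: natural language in, a concrete tool call out.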
<p>If you’re exploring how MCP can simplify your workflows, I’d highly recommend checking out the <a target="_blank" href="https://docs.mcp.com.ai/components/">HAPI Stack</a>—it’s the backbone of everything I’m building here.</p>
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>This is just the start, but I’m excited about where QBot is heading. Imagine pairing natural language with infrastructure automation—suddenly, DevOps feels less like firefighting and more like flow.</p>
<p>I’d love to hear what you think:</p>
<ul>
<li><p>What features would you like to see in QBot?</p>
</li>
<li><p>How would you use a conversational CLI in your workflow?</p>
</li>
</ul>
<p>Drop your thoughts in the comments, and stay tuned for updates.</p>
<p><strong>Be HAPI. 😁 Go Rebels! ✊🏽</strong></p>
]]></content:encoded></item><item><title><![CDATA[From Swagger to MCP]]></title><description><![CDATA[Why should you care about MCP (Model Context Protocol)? Because it's quickly becoming the bridge between APIs and AI. If APIs are the nervous system of modern software, MCP is the translator that makes them accessible to large language models (LLMs) ...]]></description><link>https://rebelion.la/from-swagger-to-mcp</link><guid isPermaLink="true">https://rebelion.la/from-swagger-to-mcp</guid><category><![CDATA[swagger]]></category><category><![CDATA[OpenApi]]></category><category><![CDATA[mcp server]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Sat, 13 Sep 2025 11:00:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757711833835/f9f52fc3-38ea-49d9-84a7-f34e379fa2db.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Why should you care about MCP (Model Context Protocol)?</strong> Because it's quickly becoming the bridge between APIs and AI. If APIs are the nervous system of modern software, MCP is the translator that makes them accessible to large language models (LLMs) and AI agents.</p>
<p>In this post, I'll show you how to take a <strong>Swagger (OpenAPI) specification</strong> and, with just one command, spin up an <strong>MCP server</strong> ready to power copilots, agents, and AI-native applications.</p>
<p>And here's the kicker: <strong>you won't write a single line of code.</strong></p>
<p>👉 At the end, you'll also find the video walkthrough where I demo this live.</p>
<h2 id="heading-tldr">TL;DR</h2>
<p>For those in a hurry, here's the TL;DR:</p>
<ul>
<li><p><strong>MCP</strong> standardizes how AI agents interact with APIs, eliminating the need for custom integrations.</p>
</li>
<li><p>You can convert <strong>OpenAPI specs into MCP servers</strong> effortlessly, and you can have an MCP server up and running in minutes, making your APIs AI-ready.</p>
</li>
<li><p>This approach saves time, reduces costs, and future-proofs your API strategy.</p>
</li>
</ul>
<p>That's not all. Since this post was written (September 2025), a lot has been happening in the MCP ecosystem; AI time moves fast:</p>
<p>Here's some great news! You can now explore MCP-related skills on <a target="_blank" href="https://skills.sh">Skills.sh</a>, making it easier to work with MCP servers and APIs. For specific details, check out <a target="_blank" href="https://agentskills.io/home">Agent skills</a>.</p>
<p>If you're interested in building MCP servers using <strong>OpenAPI</strong> specs, be sure to visit <a target="_blank" href="https://hapi.mcp.com.ai">hapi.mcp.com.ai</a>. Plus, our community offers fantastic tools: convert OpenAPI to MCP with <a target="_blank" href="https://skills.sh/mcp-com-ai/api-to-mcp-skills/swagger-to-mcp">swagger-to-mcp skill</a> and evaluate MCP servers with <a target="_blank" href="https://skills.sh/mcp-com-ai/mcp-server-evaluations">mcp-server-evaluations skill</a>.</p>
<hr />
<h2 id="heading-business-impact-first">Business Impact First</h2>
<p>When considering the adoption of MCP, it's essential to understand the tangible benefits it brings to both technical teams and business stakeholders.</p>
<p>For businesses, MCP eliminates the need for costly and time-consuming custom integrations. Standardizing how APIs interact with AI agents reduces the reliance on fragile, one-off connectors that often lead to maintenance headaches. This translates directly into <strong>cost savings</strong> and <strong>reduced operational risk</strong>.</p>
<p>From a time-to-market perspective, MCP accelerates the process of making APIs AI-ready. Instead of spending weeks or months building glue code, teams can leverage MCP to instantly expose their APIs to AI agents. This means businesses can respond faster to market demands and stay ahead of competitors.</p>
<p>Moreover, MCP positions organizations for the future. As AI agents become a standard part of digital ecosystems, having APIs that are already consumable by these agents ensures <strong>future-proofing</strong>. This strategic alignment not only supports innovation but also minimizes disruption as AI adoption grows.</p>
<p>For executives, MCP aligns with broader business strategies by enabling agility and reducing technical debt. For engineers, it simplifies workflows, allowing them to focus on delivering value rather than wrestling with integration challenges.</p>
<p>In short, MCP bridges the gap between technical efficiency and business impact, making it a win-win for all stakeholders.</p>
<hr />
<h2 id="heading-what-is-mcp-really">🤔 What is MCP, Really?</h2>
<p>MCP (Model Context Protocol) is like <strong>REST for AI agents</strong>.</p>
<ul>
<li><p>It defines how agents (Copilot, LangChain, custom LLMs, etc.) can discover, connect to, and use APIs.</p>
</li>
<li><p>It standardizes interactions so we don't reinvent integration every time.</p>
</li>
<li><p>It's designed to be <strong>open, language-agnostic, and scalable</strong>.</p>
</li>
</ul>
<p>If you think about it, APIs revolutionized cloud services by making them programmable. Similarly, <strong>MCP is transforming APIs into agent-consumable interfaces.</strong> To achieve this, API providers can leverage their existing OpenAPI specifications to seamlessly expose their APIs in a way that MCP clients can readily understand and utilize; that's why MCP and OpenAPI are a perfect match.</p>
<p><img src="https://i.imgur.com/M9Sc64o.jpeg" alt="spider-man-meme-3-dimensions" /></p>
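<p>To make that match concrete, here is a minimal sketch (illustrative names only; this is not HAPI's actual code) of how a single OpenAPI operation can be projected into an MCP-style tool descriptor:</p>

```python
# Illustrative only: project one OpenAPI operation into an MCP-style tool
# descriptor. A real converter handles request bodies, auth, and more.
def operation_to_tool(path, method, operation):
    """Derive a tool descriptor from a single OpenAPI operation."""
    properties = {
        p["name"]: {
            "type": p.get("schema", {}).get("type", "string"),
            "description": p.get("description", ""),
        }
        for p in operation.get("parameters", [])
    }
    return {
        "name": operation.get("operationId", f"{method}_{path}"),
        "description": operation.get("summary", ""),
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": [p["name"]
                         for p in operation.get("parameters", [])
                         if p.get("required")],
        },
    }

# A trimmed operation from the PetStore spec:
op = {
    "operationId": "getPetById",
    "summary": "Find pet by ID",
    "parameters": [{
        "name": "petId", "in": "path", "required": True,
        "description": "ID of pet to return", "schema": {"type": "integer"},
    }],
}
tool = operation_to_tool("/pet/{petId}", "get", op)
print(tool["name"])                     # getPetById
print(tool["inputSchema"]["required"])  # ['petId']
```

<p>Because the spec already carries names, summaries, and parameter schemas, the tool catalog falls out of it mechanically; that is the whole point of the MCP + OpenAPI pairing.</p>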
<hr />
<h2 id="heading-the-technical-challenge">🛠️ The Technical Challenge</h2>
<p>Traditionally, if you wanted your AI copilot to talk to your API, you had to:</p>
<ol>
<li><p>Write custom wrappers.</p>
</li>
<li><p>Manually map endpoints to AI tools.</p>
</li>
<li><p>Deal with endless schema alignment issues.</p>
</li>
</ol>
<p>This is fragile, time-consuming, and expensive.</p>
<p><strong>Enter HAPI: Your Shortcut to MCP Servers</strong></p>
<p>Imagine a tool that takes your <strong>Swagger/OpenAPI specification</strong> and, with minimal effort, transforms it into a fully functional MCP server. That's exactly what <a target="_blank" href="https://hapi.mcp.com.ai"><strong>HAPI</strong></a> does.</p>
<p><a target="_blank" href="https://docs.mcp.com.ai/components/hapi-server/">HAPI</a> is a framework designed to simplify the process of creating MCP servers. It eliminates the complexity of manual integration by instantly exposing your API as an MCP server. With HAPI, you can bridge the gap between your APIs and AI agents in just a few steps—no coding required.</p>
<p>Whether you're working with a simple API or a complex enterprise-grade system, HAPI ensures that your APIs are AI-ready, scalable, and easy to manage. It's the ultimate enabler for teams looking to unlock the potential of AI-driven applications without the usual headaches of custom development.</p>
<p>Get the latest version of HAPI from <a target="_blank" href="https://github.com/la-rebelion/hapimcp/releases/">GitHub</a>.</p>
<hr />
<h2 id="heading-the-simplest-way-to-create-an-mcp-server">⚡ The Simplest Way to Create an MCP Server</h2>
<p>Here's how I did it in the demo video (below):</p>
<h3 id="heading-1-start-with-swagger-spec">1. Start with Swagger Spec</h3>
<p>I used the famous <a target="_blank" href="https://petstore3.swagger.io/">PetStore Swagger example</a>. This gives us a simple API with endpoints like:</p>
<ul>
<li><p><code>GET /pet/{id}</code> → Get pet by ID</p>
</li>
<li><p><code>GET /pet/findByStatus</code> → Find pets by status</p>
</li>
</ul>
<p>No coding needed. Just the API spec.</p>
<h3 id="heading-2-store-the-spec-locally">2. Store the Spec Locally</h3>
<p>Download the spec and save it locally. This gives us a blueprint of endpoints and operations.</p>
<h3 id="heading-3-run-hapi">3. Run HAPI</h3>
<p>Now the magic:</p>
<pre><code class="lang-bash">hapi serve petstore --headless
</code></pre>
<p>That's it.</p>
<ul>
<li><p><code>--headless</code> tells HAPI to consume services from the backend directly (the head).</p>
</li>
<li><p>The MCP server is now running locally on port <strong>3000</strong>.</p>
</li>
</ul>
<h3 id="heading-4-connect-to-vs-code">4. Connect to VS Code</h3>
<p>Open VS Code, add an MCP server configuration:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Petstore MCP"</span>,
  <span class="hljs-attr">"url"</span>: <span class="hljs-string">"http://localhost:3000/mcp"</span>
}
</code></pre>
<p>Start the server, connect, and you'll see <strong>19 tools exposed automatically.</strong></p>
<hr />
<h2 id="heading-putting-it-to-the-test">🔍 Putting It to the Test</h2>
<p>Once connected, you can:</p>
<ul>
<li><p>Ask your AI copilot: <em>"Get pets by status available."</em></p>
</li>
<li><p>The LLM resolves the correct MCP tool (<code>findPetsByStatus</code>).</p>
</li>
<li><p>The response comes back instantly, just like hitting the API.</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-json">[
  {
    <span class="hljs-attr">"id"</span>: <span class="hljs-number">10</span>,
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"doggie"</span>,
    <span class="hljs-attr">"category"</span>: {
      <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Dogs"</span>
    },
    <span class="hljs-attr">"photoUrls"</span>: [
      <span class="hljs-string">"string"</span>
    ],
    <span class="hljs-attr">"tags"</span>: [
      {
        <span class="hljs-attr">"id"</span>: <span class="hljs-number">0</span>,
        <span class="hljs-attr">"name"</span>: <span class="hljs-string">"string"</span>
      }
    ],
    <span class="hljs-attr">"status"</span>: <span class="hljs-string">"available"</span>
  }
]
</code></pre>
<p>And it works the same if you hit the API directly.</p>
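<p>Under the hood, the copilot's MCP client speaks JSON-RPC 2.0 to the server. A hedged sketch of the <code>tools/call</code> message it would send for the request above (exact payload details can vary by client):</p>

```python
import json

# Hypothetical JSON-RPC 2.0 message an MCP client sends when the LLM
# resolves "get pets by status available" to the findPetsByStatus tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "findPetsByStatus",
        "arguments": {"status": "available"},
    },
}
print(json.dumps(request, indent=2))
```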
<hr />
<h2 id="heading-why-this-matters-for-ai-agents">🎯 Why This Matters for AI Agents</h2>
<p>With MCP:</p>
<ul>
<li><p>AI copilots <strong>don't need custom wrappers</strong> anymore.</p>
</li>
<li><p>LLMs can <strong>auto-discover tools</strong> via your API spec.</p>
</li>
<li><p>You can <strong>control which endpoints</strong> are exposed to the agent (important for governance and security).</p>
</li>
</ul>
<p>This transforms any API into an <strong>AI-first product</strong>.</p>
<hr />
<h2 id="heading-sneak-peek-adding-oauth2">🔒 Sneak Peek: Adding OAuth2</h2>
<p>In the next step (and in the upcoming video), I'll show how you can <strong>inject OAuth2 authentication</strong> directly into the Swagger spec.</p>
<ul>
<li><p>No extra code.</p>
</li>
<li><p>HAPI auto-detects the auth configuration.</p>
</li>
<li><p>You can integrate with providers like <a target="_blank" href="https://goauthentik.io/"><strong>Authentik</strong></a> seamlessly.</p>
</li>
</ul>
<p>This makes MCP servers not just easy but also <strong>enterprise-ready</strong>.</p>
<hr />
<h2 id="heading-watch-the-walkthrough">📺 Watch the Walkthrough</h2>
<p>Here's the full demo where I set up the MCP server in real time:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/tkiJXIcFtOw">https://youtu.be/tkiJXIcFtOw</a></div>
<hr />
<h2 id="heading-faq">FAQ</h2>
<h3 id="heading-what-is-mcp-model-context-protocol">What is MCP (Model Context Protocol)?</h3>
<p>MCP is a standard that lets AI agents discover and use tools exposed by APIs in a consistent, secure, and auditable way.</p>
<h3 id="heading-what-is-the-fastest-way-to-convert-an-openapi-swagger-spec-into-an-mcp-server">What is the fastest way to convert an OpenAPI (Swagger) spec into an MCP server?</h3>
<p>Use HAPI. With a single command like <code>hapi serve &lt;project&gt; --headless</code>, you can generate an MCP server without writing custom code.</p>
<h3 id="heading-do-i-need-to-write-code-to-create-an-mcp-server-from-swaggeropenapi">Do I need to write code to create an MCP server from Swagger/OpenAPI?</h3>
<p>No. <a target="_blank" href="https://hapi.mcp.com.ai">HAPI</a> can turn an OpenAPI v3 spec into an MCP server with zero custom code. Skill available: <a target="_blank" href="https://skills.sh/mcp-com-ai/api-to-mcp-skills/swagger-to-mcp">swagger-to-mcp</a>.</p>
<h3 id="heading-does-mcp-work-with-openapi-v2-swagger-20">Does MCP work with OpenAPI v2 (Swagger 2.0)?</h3>
<p>HAPI requires OpenAPI v3.x. If you have Swagger 2.0, <a target="_blank" href="https://converter.swagger.io">convert it to OpenAPI v3</a> first.</p>
<h3 id="heading-where-is-the-mcp-endpoint-when-using-hapi">Where is the MCP endpoint when using HAPI?</h3>
<p>By default, MCP is available at <code>http://localhost:3000/mcp</code> (or the port you configure).</p>
<h3 id="heading-how-do-ai-agents-discover-tools-in-mcp">How do AI agents discover tools in MCP?</h3>
<p>Agents call <code>tools/list</code> on the MCP server, which returns the tool catalog derived from your OpenAPI operations.</p>
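<p>The discovery round trip looks roughly like this (the response below is a truncated, hypothetical PetStore catalog, shown only to illustrate the shape):</p>

```python
# The client asks for the catalog...
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...and the server answers with tools derived from OpenAPI operations.
# Truncated, hypothetical response for the PetStore server:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "getPetById", "description": "Find pet by ID"},
            {"name": "findPetsByStatus",
             "description": "Finds pets by status"},
        ]
    },
}
names = [tool["name"] for tool in list_response["result"]["tools"]]
print(names)  # ['getPetById', 'findPetsByStatus']
```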
<h3 id="heading-can-i-control-which-api-endpoints-are-exposed-as-tools">Can I control which API endpoints are exposed as tools?</h3>
<p>Yes. You can limit exposure by curating your OpenAPI spec or providing a filtered spec for MCP generation.</p>
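<p>A minimal sketch of the "filtered spec" approach: drop every operation that is not on an allow-list before handing the spec to the MCP generator (illustrative code, not a built-in HAPI feature):</p>

```python
# Keep only allow-listed operations in an OpenAPI spec, so the resulting
# MCP server exposes a curated subset of tools.
def filter_spec(spec, allowed_operation_ids):
    """Return a copy of an OpenAPI spec containing only allowed operations."""
    filtered_paths = {}
    for path, methods in spec.get("paths", {}).items():
        kept = {m: op for m, op in methods.items()
                if op.get("operationId") in allowed_operation_ids}
        if kept:
            filtered_paths[path] = kept
    return {**spec, "paths": filtered_paths}

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/pet/{petId}": {"get": {"operationId": "getPetById"},
                         "delete": {"operationId": "deletePet"}},
        "/pet/findByStatus": {"get": {"operationId": "findPetsByStatus"}},
    },
}
# Expose read-only operations; the delete endpoint never reaches the agent.
readonly = filter_spec(spec, {"getPetById", "findPetsByStatus"})
print(sorted(readonly["paths"]))                      # ['/pet/findByStatus', '/pet/{petId}']
print("delete" in readonly["paths"]["/pet/{petId}"])  # False
```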
<h3 id="heading-is-mcp-secure-for-enterprise-use">Is MCP secure for enterprise use?</h3>
<p>Yes, when deployed with proper auth (OAuth2, API keys, etc.) and policy enforcement at the boundary. MCP keeps authority outside the model.</p>
<h3 id="heading-can-i-deploy-mcp-servers-to-the-cloud">Can I deploy MCP servers to the cloud?</h3>
<p>Yes. HAPI supports deployments to <a target="_blank" href="https://www.cloudflare.com/developer-platform/products/workers/">Cloudflare Workers</a>, <a target="_blank" href="https://hub.docker.com/r/hapimcp/hapi-cli">Docker</a>, <a target="_blank" href="https://fly.io">Fly.io</a>, and other environments.</p>
<h3 id="heading-how-do-i-test-if-my-mcp-server-works">How do I test if my MCP server works?</h3>
<p>Check <code>/health</code>, call <code>ping</code>, list tools with <code>tools/list</code>, and run a sample tool call. You can also use the <a target="_blank" href="https://skills.sh/mcp-com-ai/mcp-server-evaluations">mcp-server-evaluations skill</a> for a structured test.</p>
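<p>Beyond those live checks, a cheap structural assertion on a captured <code>tools/list</code> response catches broken servers early (illustrative sketch with inline sample data):</p>

```python
# Smoke-test the shape of a tools/list response before wiring up an agent.
def looks_like_tools_list(response):
    """Cheap shape check for a JSON-RPC tools/list result."""
    tools = response.get("result", {}).get("tools")
    if not isinstance(tools, list) or not tools:
        return False
    return all(isinstance(t.get("name"), str) for t in tools)

good = {"jsonrpc": "2.0", "id": 1,
        "result": {"tools": [{"name": "getPetById"}]}}
bad = {"jsonrpc": "2.0", "id": 1, "error": {"code": -32601}}
print(looks_like_tools_list(good))  # True
print(looks_like_tools_list(bad))   # False
```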
<h3 id="heading-whats-the-difference-between-mcp-and-rest">What’s the difference between MCP and REST?</h3>
<p>REST is an API style. MCP is a protocol for how AI agents discover and invoke tools, often backed by REST APIs.</p>
<h3 id="heading-how-does-mcp-improve-seoaeo-for-apis">How does MCP improve SEO/AEO for APIs?</h3>
<p>It makes APIs machine-discoverable for AI agents, increasing tool discoverability and enabling answer engines to access structured actions.</p>
<hr />
<h2 id="heading-go-rebels">✊ Go Rebels</h2>
<p>At <strong>La Rebelion</strong>, we believe in tools that <strong>remove friction</strong> and let engineers (and businesses) focus on what matters: building value.</p>
<p>MCP is more than just another protocol. It's the bridge between your APIs and the AI-native future.</p>
<p>So, go ahead. Try it with your own Swagger spec. You'll see just how easy it is.</p>
<p><strong>The future of API integration is agent-first. And it starts with MCP.</strong></p>
<hr />
<p>💡 Pro tip: Share this with your engineering team and product leaders. MCP will impact <strong>both sides</strong> of the business equation — execution and strategy.</p>
]]></content:encoded></item><item><title><![CDATA[OpenAPI + MCP + Arazzo = The Formula for AI Automation]]></title><description><![CDATA[What’s possible when you mix OpenAPI, the Model Context Protocol (MCP), and the Arazzo Specification?
In one word: everything.
We’re standing on the edge of a new era in AI automation—where workflows that once took weeks to wire up across APIs, SDKs,...]]></description><link>https://rebelion.la/openapi-mcp-arazzo-the-formula-for-ai-automation</link><guid isPermaLink="true">https://rebelion.la/openapi-mcp-arazzo-the-formula-for-ai-automation</guid><category><![CDATA[Arazzo]]></category><category><![CDATA[OpenApi]]></category><category><![CDATA[mcp]]></category><category><![CDATA[API-First]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI-automation]]></category><category><![CDATA[Workflow Automation]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Sat, 13 Sep 2025 00:30:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757703891581/f0a91529-627f-4eeb-a55d-c2d15e6acb85.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What’s possible when you mix <a target="_blank" href="https://swagger.io/specification/v3"><strong>OpenAPI</strong></a>, the <a target="_blank" href="https://modelcontextprotocol.io/"><strong>Model Context Protocol</strong></a> <strong>(MCP)</strong>, and the <a target="_blank" href="https://spec.openapis.org/arazzo/latest.html"><strong>Arazzo Specification</strong></a>?</p>
<p>In one word: <strong>everything.</strong></p>
<p>We’re standing on the edge of a new era in <strong>AI automation</strong>—where workflows that once took weeks to wire up across APIs, SDKs, and custom scripts could soon be described, orchestrated, and executed <strong>seamlessly</strong>.</p>
<hr />
<h2 id="heading-why-this-formula-matters">Why This Formula Matters</h2>
<p>The combination of <strong>OpenAPI</strong>, <strong>MCP</strong>, and <strong>Arazzo</strong> is transformative for AI-powered automation:</p>
<ul>
<li><p><strong>OpenAPI</strong> provides a universal language to describe APIs—a trusted <strong>blueprint</strong> that ensures clarity and consistency across systems.</p>
</li>
<li><p><strong>MCP</strong> introduces a <strong>native protocol</strong> for LLMs to interact with APIs directly—eliminating hacks and workarounds. It acts as the <strong>bridge</strong>, enabling seamless communication between AI agents and APIs.</p>
</li>
<li><p><strong>Arazzo</strong> serves as the <strong>orchestration layer</strong>, defining workflows as portable, spec-driven processes. It’s the <strong>glue</strong> that binds everything together.</p>
</li>
</ul>
<p>This synergy means you’re no longer confined to a single ecosystem or locked into vendors such as n8n, LangFlow, or Flowise, or traditional engines like Temporal or Camunda. With <strong>Arazzo as the workflow layer</strong>, you can orchestrate processes across <strong>any system</strong> while keeping the confidence and reliability of an <a target="_blank" href="https://swagger.io/resources/articles/adopting-an-api-first-approach/">API-first approach</a>.</p>
<p>By leveraging these standards, you unlock a future where AI workflows are not only powerful but also portable, scalable, and ecosystem-agnostic.</p>
<p>The result? An LLM can take “<strong>Place an order for SKU123</strong>” and autonomously execute the workflow across services with full visibility.</p>
<h3 id="heading-mcp-openapi-in-action">MCP + OpenAPI in Action</h3>
<p>Imagine you’ve got an API for order processing:</p>
<pre><code class="lang-mermaid">sequenceDiagram
    participant User
    participant LLM as LLM (via MCP)
    participant OrderAPI as Order Processing API (OpenAPI)
    User-&gt;&gt;LLM: "Place order for SKU123"
    LLM-&gt;&gt;OrderAPI: MCP call to create order
    OrderAPI--&gt;&gt;LLM: Order confirmation
    LLM-&gt;&gt;User: "Order placed successfully!"
</code></pre>
<h3 id="heading-adding-arazzo-to-the-mix">Adding Arazzo to the mix</h3>
<p>Now, let’s say placing an order involves multiple steps: checking inventory, processing payment, and sending a confirmation email. Here’s how Arazzo orchestrates that process in a structured, spec-driven way:</p>
<pre><code class="lang-yaml">arazzo: 1.0.0
info:
  title: AI Order Workflow
  version: 1.0.0
sourceDescriptions:
  - name: storeApi
    url: ./openapi.yaml
    type: openapi
workflows:
  - workflowId: placeOrder
    summary: Check inventory, create the order, then send a confirmation.
    steps:
      - stepId: checkInventory
        description: Verify that the requested SKU is available in inventory.
        operationId: getInventory
      - stepId: createOrder
        description: Create an order for the requested SKU after inventory confirmation.
        operationId: postOrder
      - stepId: sendConfirmation
        description: Send a confirmation email once the order is created.
        operationId: postEmail
</code></pre>
<p>This YAML specification defines a clear, step-by-step workflow for placing an order. Each step declares its purpose and the API operation it calls, and the steps execute in the order listed. This makes the workflow not only executable but also easy to understand and maintain.</p>
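<p>To get a feel for what an engine does with such a spec, here is a deliberately tiny, hypothetical executor: it walks the steps in order and dispatches each <code>operationId</code> to a stub handler. A real Arazzo engine adds inputs, outputs, and success/failure criteria.</p>

```python
# Hypothetical, minimal sequential workflow executor (not a real Arazzo
# engine): run each step's operation and stop on the first failure.
def run_workflow(steps, handlers):
    """Execute steps in order; record (stepId, ok) and halt on failure."""
    trace = []
    for step in steps:
        ok = handlers[step["operationId"]]()
        trace.append((step["stepId"], ok))
        if not ok:
            break
    return trace

steps = [
    {"stepId": "checkInventory", "operationId": "getInventory"},
    {"stepId": "createOrder", "operationId": "postOrder"},
    {"stepId": "sendConfirmation", "operationId": "postEmail"},
]
handlers = {
    "getInventory": lambda: True,  # stub: inventory available
    "postOrder": lambda: True,     # stub: order created
    "postEmail": lambda: True,     # stub: email sent
}
print(run_workflow(steps, handlers))
# [('checkInventory', True), ('createOrder', True), ('sendConfirmation', True)]
```

<p>Swap any stub for a real MCP tool call and the control flow stays the same; that separation is what makes the workflow portable across engines.</p>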
<p>How does this play out in practice? Here’s the sequence of interactions visualized:</p>
<pre><code class="lang-mermaid">sequenceDiagram
    participant User
    participant LLM as LLM&lt;br&gt;(via MCP Client)
    participant Arazzo as (Arazzo)&lt;br&gt;Workflow Engine
    participant InventoryAPI as Inventory API (HAPI)
    participant OrderAPI as Order Processing API (HAPI)
    participant EmailAPI as Email Service API (HAPI)

    User-&gt;&gt;LLM: "Place order for SKU123"
    LLM-&gt;&gt;+Arazzo: Start order workflow
    Arazzo-&gt;&gt;InventoryAPI: Check inventory for SKU123
    InventoryAPI--&gt;&gt;Arazzo: Inventory available
    Arazzo-&gt;&gt;OrderAPI: Create order for SKU123
    OrderAPI--&gt;&gt;Arazzo: Order created successfully
    Arazzo-&gt;&gt;EmailAPI: Send confirmation email
    EmailAPI--&gt;&gt;Arazzo: Email sent successfully
    Arazzo--&gt;&gt;-LLM: Order workflow completed
    LLM-&gt;&gt;User: "Order placed successfully!"
</code></pre>
<p>In this diagram:</p>
<ol>
<li><p>The <strong>User</strong> initiates the process by requesting to place an order in natural language.</p>
</li>
<li><p>The <strong>LLM</strong>, using the MCP protocol, delegates the workflow execution to the <strong>Arazzo Workflow Engine</strong>.</p>
</li>
<li><p>Arazzo orchestrates the steps:</p>
<ul>
<li><p>It first checks the inventory via the <strong>Inventory API</strong>.</p>
</li>
<li><p>If inventory is available, it proceeds to create the order using the <strong>Order Processing API</strong>.</p>
</li>
<li><p>Finally, it sends a confirmation email through the <strong>Email Service API</strong>.</p>
</li>
</ul>
</li>
<li><p>Once all steps are completed, Arazzo reports back to the LLM, which informs the user of the successful order placement.</p>
</li>
</ol>
<p>Because this workflow is defined in a spec-driven manner, it is portable across platforms like <a target="_blank" href="https://n8n.io">n8n</a>, <a target="_blank" href="https://docs.langflow.org/concepts-flows">LangFlow</a>, <a target="_blank" href="https://temporal.io/solutions/ai">Temporal</a>, or <a target="_blank" href="https://camunda.com/solutions/api-orchestration/">Camunda</a>. Arazzo ensures that the logic remains consistent and reusable, regardless of the underlying execution engine.</p>
<p>This approach not only simplifies complex workflows but also makes them scalable, maintainable, and transparent—key factors for building robust AI-powered automation systems.</p>
<hr />
<h2 id="heading-the-real-business-impact">The real business impact</h2>
<p>Executives care about <strong>cost, speed, and risk.</strong> Developers care about <strong>standards and reusability.</strong> Operators care about <strong>observability and maintainability.</strong></p>
<p>This formula speaks to all of them:</p>
<ul>
<li><p>Faster integration = faster time-to-market</p>
</li>
<li><p>Standards = reduced tech debt and vendor lock-in</p>
</li>
<li><p>Clear orchestration = predictable, auditable AI-powered workflows</p>
</li>
</ul>
<p>It’s not just about building cool AI toys—it’s about <strong>building resilient, future-proof business systems.</strong></p>
<hr />
<h2 id="heading-why-im-excited-about-arazzo">Why I’m excited about Arazzo</h2>
<p>The <strong>Arazzo Spec</strong> feels like the orchestration layer AI has been waiting for. Imagine telling your AI agent not just <em>what</em> API to call (that’s OpenAPI), or <em>how</em> to connect to it (that’s MCP), but also <em>why, when, and in what order</em>.</p>
<p>That’s Arazzo. That’s <strong>AI workflows as code.</strong></p>
<p>This opens the door to something bigger: <strong>cross-platform automation without silos.</strong></p>
<hr />
<h2 id="heading-where-the-hapi-stackhttpsdocsmcpcomaicomponents-fits">Where the <a target="_blank" href="https://docs.mcp.com.ai/components">HAPI Stack</a> fits</h2>
<p>The <a target="_blank" href="https://mcp.com.ai">HAPI Stack for MCP</a> explores how these standards—OpenAPI, MCP, and now Arazzo—can accelerate the <strong>API-First + AI-Orchestration future.</strong> Instead of reinventing the wheel every time, we’re building with <strong>standards-first tooling</strong> that makes it easier to go from idea → workflow → production.</p>
<hr />
<h2 id="heading-what-is-your-take">What is your take?</h2>
<p>I’d love to hear your take</p>
<ul>
<li><p>Do you see <strong>Arazzo</strong> becoming the universal glue for workflows?</p>
</li>
<li><p>How do you imagine <strong>AI agents</strong> benefiting from an OpenAPI + MCP + Arazzo formula?</p>
</li>
<li><p>What opportunities—or challenges—do you see for your teams or projects?</p>
</li>
</ul>
<p>Check out the Arazzo spec here: <a target="_blank" href="https://spec.openapis.org/arazzo/latest.html">Arazzo Latest Draft</a> and the deep dive here: <a target="_blank" href="https://swagger.io/blog/the-arazzo-specification-a-deep-dive/">Swagger Blog</a>.</p>
<p>Let’s spark a conversation about what’s next—because <strong>this might just be the foundation of AI-native automation.</strong></p>
<p>Go Rebels! ✊🏽</p>
]]></content:encoded></item><item><title><![CDATA[OIDC Becomes ISO: Why It Matters for AI Agentification and MCP]]></title><description><![CDATA[✅ TL;DR

OIDC is now officially published as ISO/IEC standards (nine specs) in October 2024.

ISO brand means it’s trusted globally and meets governments’ legal rules.

ISO versions have all updates and fixes built in, so they’re cleaner and more sta...]]></description><link>https://rebelion.la/oidc-becomes-iso-why-it-matters-for-ai-agentification-and-mcp</link><guid isPermaLink="true">https://rebelion.la/oidc-becomes-iso-why-it-matters-for-ai-agentification-and-mcp</guid><category><![CDATA[ISO 26131]]></category><category><![CDATA[mcp]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[OIDC]]></category><category><![CDATA[oauth]]></category><category><![CDATA[OpenID Connect]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Thu, 11 Sep 2025 22:40:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756756782226/49bae46e-0be6-40f9-bf82-3b5cd67b3f34.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">✅ TL;DR</h2>
<ul>
<li><p>OIDC was officially published as a family of ISO/IEC standards (nine specs) in October 2024.</p>
</li>
<li><p>The ISO label means OIDC is trusted globally and satisfies governments’ regulatory requirements.</p>
</li>
<li><p>ISO versions have all updates and fixes built in, so they’re cleaner and more stable.</p>
</li>
<li><p>For companies, this means easier compliance, better interoperability, and stronger product trust worldwide.</p>
</li>
</ul>
<hr />
<h2 id="heading-what-does-the-iso-version-include-technically">🎯 What Does the ISO Version Include Technically?</h2>
<p>ISO/IEC 26131–26139 (2024) cover:</p>
<ul>
<li><p>Core OIDC flows (authentication, claims, ID tokens)</p>
</li>
<li><p>Discovery, dynamic client registration</p>
</li>
<li><p>Front-channel and back-channel logout, session management</p>
</li>
<li><p>Response encodings such as the OAuth 2.0 form-post response mode and multiple response types</p>
</li>
</ul>
<p>These make up the full family of essential OIDC functionality—certified as international standards.</p>
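<p>For a flavor of what the core specs standardize, here is a condensed sketch of the basic ID-token claim checks (issuer, audience, expiry) that OIDC requires; real validation also verifies the token's signature and nonce:</p>

```python
import time

# Condensed sketch of core OIDC ID-token claim checks. Real validation
# also verifies the JWT signature, nonce, auth_time, and more.
def check_id_token_claims(claims, issuer, client_id, now=None):
    now = time.time() if now is None else now
    if claims.get("iss") != issuer:      # token must come from our IdP
        return False
    aud = claims.get("aud")
    aud = aud if isinstance(aud, list) else [aud]
    if client_id not in aud:             # token must be intended for us
        return False
    return claims.get("exp", 0) > now    # token must not be expired

claims = {"iss": "https://id.example.com", "aud": "my-client",
          "sub": "user-42", "exp": 2_000_000_000}
print(check_id_token_claims(claims, "https://id.example.com", "my-client",
                            now=1_900_000_000))    # True
print(check_id_token_claims(claims, "https://evil.example.com", "my-client",
                            now=1_900_000_000))    # False
```

<p>The hostnames and IDs above are made up; the claim names (<code>iss</code>, <code>aud</code>, <code>exp</code>) are the ones the standard defines.</p>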
<hr />
<h2 id="heading-why-this-moment-matters">Why This Moment Matters</h2>
<p>OpenID Connect (OIDC) has been the de facto standard for user authentication, but it wasn’t formally an <em>international</em> standard. Every organization interpreted or patched it slightly differently. The result: <strong>identity technical debt</strong> at scale.</p>
<p>Fast forward to October 2024. OpenID Connect is now published as an <strong>ISO/IEC standard</strong> (<a target="_blank" href="https://www.iso.org/standard/89056.html">ISO/IEC 26131</a>–26139). This isn’t just symbolic. It’s a clean ledger moment: a way to pay down accumulated debt, unify digital identity practices, and prepare for what’s next — the <strong>agentification era</strong> powered by AI and the <strong>Model Context Protocol (MCP)</strong>.</p>
<p>This post explores:</p>
<ul>
<li><p>Why OIDC becoming ISO is a business and technical milestone</p>
</li>
<li><p>How it aligns with the <strong>Technical Debt Model</strong></p>
</li>
<li><p>What this means for the rise of <strong>AI agents and MCP ecosystems</strong></p>
</li>
<li><p>How enterprises can act now to leverage this milestone strategically</p>
</li>
</ul>
<hr />
<h2 id="heading-what-does-oidc-becoming-iso-mean">What Does OIDC Becoming ISO Mean?</h2>
<h3 id="heading-a-quick-refresher">A Quick Refresher</h3>
<ul>
<li><p><strong>OpenID Connect (OIDC)</strong>: A simple identity layer on top of OAuth 2.0, enabling apps to verify user identities and fetch profile data securely.</p>
</li>
<li><p><strong>ISO/IEC 26131–26139</strong>: The family of international standards formally publishing OIDC as a global standard, incorporating all corrections and clarifications accumulated since its original release in 2014.</p>
</li>
</ul>
<h3 id="heading-why-it-matters-now">Why It Matters Now</h3>
<ol>
<li><p><strong>Compliance and Trust</strong>: ISO standards are recognized globally by governments and regulators. For industries like banking, healthcare, and telecom, this unlocks new adoption opportunities.</p>
</li>
<li><p><strong>Consolidation</strong>: Multiple OIDC drafts, errata, and extensions are now unified into a stable, internationally recognized baseline.</p>
</li>
<li><p><strong>Future Alignment</strong>: Extensions like Financial-grade API (FAPI) and eKYC are already being prepared for ISO. This sets a path for the entire identity ecosystem.</p>
</li>
</ol>
<hr />
<h2 id="heading-oidc-iso-through-the-lens-of-the-technical-debt-model">OIDC ISO Through the Lens of the Technical Debt Model</h2>
<h3 id="heading-principal-the-cost-weve-accumulated">Principal: The Cost We’ve Accumulated</h3>
<p>Organizations have been carrying identity debt for years:</p>
<ul>
<li><p>Supporting multiple OIDC drafts across vendors</p>
</li>
<li><p>Maintaining custom flows for session management, logout, or claims</p>
</li>
<li><p>Rewriting integrations due to inconsistent interpretation</p>
</li>
</ul>
<h3 id="heading-interest-the-daily-penalties">Interest: The Daily Penalties</h3>
<p>Every day this debt continues:</p>
<ul>
<li><p><strong>Security risk increases</strong> (misapplied patches, outdated flows)</p>
</li>
<li><p><strong>Compliance costs rise</strong> (auditors chasing mismatched versions)</p>
</li>
<li><p><strong>Engineering time is wasted</strong> maintaining divergent code</p>
</li>
</ul>
<h3 id="heading-repayment-plan-iso-adoption-as-debt-reduction">Repayment Plan: ISO Adoption as Debt Reduction</h3>
<ul>
<li><p><strong>Short term</strong>: Audit existing identity flows, mapping them against the ISO/IEC baseline.</p>
</li>
<li><p><strong>Medium term</strong>: Update configurations and require ISO alignment in vendor contracts.</p>
</li>
<li><p><strong>Long term</strong>: Use ISO as the <strong>single source of truth</strong>, reducing spec fragmentation permanently.</p>
</li>
</ul>
<h3 id="heading-future-investment-avoiding-new-debt">Future Investment: Avoiding New Debt</h3>
<p>By adopting ISO OIDC now, organizations build on a foundation that:</p>
<ul>
<li><p>Prepares them for FAPI, eKYC, and AI-integrated identity standards</p>
</li>
<li><p>Ensures future AI agents can authenticate in globally interoperable ways</p>
</li>
</ul>
<hr />
<h2 id="heading-the-ai-agentification-era-meets-iso-oidc">The AI Agentification Era Meets ISO OIDC</h2>
<h3 id="heading-why-ai-agents-need-strong-identity">Why AI Agents Need Strong Identity</h3>
<p>AI agents — autonomous, API-driven entities — are becoming the new workforce. They:</p>
<ul>
<li><p>Access enterprise APIs</p>
</li>
<li><p>Execute workflows across cloud environments</p>
</li>
<li><p>Act on behalf of humans and organizations</p>
</li>
</ul>
<p>But here’s the catch: without a <strong>trusted, standardized identity</strong>, agents become security risks. Rogue, spoofed, or misconfigured agents could wreak havoc.</p>
<h3 id="heading-the-role-of-mcp-model-context-protocolhttpsmodelcontextprotocolio">The Role of MCP (<a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol</a>)</h3>
<p>MCP is a new open protocol that lets LLMs interact with tools and APIs. It provides context-sharing and interoperability across agent ecosystems. But MCP itself relies on <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#security-considerations"><strong>identity assurances</strong></a> to:</p>
<ul>
<li><p>Authenticate agents</p>
</li>
<li><p>Authorize actions</p>
</li>
<li><p>Ensure interoperability across organizations</p>
</li>
</ul>
<h3 id="heading-where-oidc-iso-fits">Where OIDC ISO Fits</h3>
<p>With OIDC as ISO:</p>
<ul>
<li><p><strong>Agents authenticate securely</strong> across ecosystems</p>
</li>
<li><p><strong>MCP servers trust external agents</strong> without one-off integrations</p>
</li>
<li><p><strong>Global interoperability becomes possible</strong> — a government AI agent in Europe can authenticate with an enterprise API in the US without legal or technical headaches</p>
</li>
</ul>
<hr />
<h2 id="heading-real-world-case-studies">Real-World Case Studies</h2>
<h3 id="heading-banking-and-finance">Banking and Finance</h3>
<p>Banks adopting <a target="_blank" href="https://openid.net/guest-blog-financial-grade-api-fapi-explained-by-an-implementer-updated/"><strong>FAPI</strong></a> (Financial-grade API) need rock-solid authentication. Before ISO, regulators hesitated because OIDC was a community spec. With ISO/IEC recognition:</p>
<ul>
<li><p>Banks can rely on a globally certified baseline.</p>
</li>
<li><p>Cross-border financial transactions become smoother.</p>
</li>
<li><p>AI-powered finance agents (e.g., portfolio optimizers) can operate across institutions securely.</p>
</li>
</ul>
<h3 id="heading-healthcare">Healthcare</h3>
<p>Hospitals and telemedicine providers struggle with identity silos across systems. An ISO-standard OIDC allows:</p>
<ul>
<li><p>Unified patient login across multiple providers</p>
</li>
<li><p>Secure AI agents acting as digital health assistants</p>
</li>
<li><p>Compliance with strict privacy laws (HIPAA, GDPR)</p>
</li>
</ul>
<h3 id="heading-government-digital-ids">Government Digital IDs</h3>
<p>National ID programs increasingly rely on OIDC (e.g., Gov.UK Sign-In, European eIDAS pilots). ISO formalization means:</p>
<ul>
<li><p>OIDC can now be used in <strong>legally binding digital identity frameworks</strong>.</p>
</li>
<li><p>Government AI agents can authenticate into regulated ecosystems without custom bridges.</p>
</li>
</ul>
<h3 id="heading-telecom">Telecom</h3>
<p>Telecom operators adopting <strong>GSMA Open Gateway</strong> need standard APIs for cross-carrier services. ISO OIDC provides:</p>
<ul>
<li><p>A trusted baseline for API authentication.</p>
</li>
<li><p>AI agents (e.g., fraud-detection bots) that can move across carrier networks securely.</p>
</li>
</ul>
<hr />
<h2 id="heading-why-executives-pms-and-engineers-should-care">Why Executives, PMs, and Engineers Should Care</h2>
<h3 id="heading-executives-reducing-risk-unlocking-growth">Executives: Reducing Risk, Unlocking Growth</h3>
<ul>
<li><p><strong>Compliance-ready</strong>: ISO-certified OIDC reduces regulatory headaches</p>
</li>
<li><p><strong>Market expansion</strong>: Enables secure partnerships in finance, healthcare, telecom</p>
</li>
<li><p><strong>Future-proofing</strong>: Establishes trust in AI agent ecosystems before regulations catch up</p>
</li>
</ul>
<h3 id="heading-product-managers-selling-iso-oidc-as-a-feature">Product Managers: Selling ISO OIDC as a Feature</h3>
<ul>
<li><p>“Audit-ready login” for compliance-focused customers</p>
</li>
<li><p>“Government-grade authentication” for global enterprises</p>
</li>
<li><p>Interoperable identity as a <strong>competitive differentiator</strong></p>
</li>
</ul>
<h3 id="heading-engineers-less-maintenance-more-clarity">Engineers: Less Maintenance, More Clarity</h3>
<ul>
<li><p>A single reference spec (ISO/IEC 26131–26139)</p>
</li>
<li><p>No more guessing which draft your vendor supports</p>
</li>
<li><p>Cleaner upgrades to FAPI, eKYC, and MCP integrations</p>
</li>
</ul>
<hr />
<h2 id="heading-steps-to-adopt-iso-oidc">Steps to Adopt ISO OIDC</h2>
<ol>
<li><p><strong>Audit Current Implementations</strong>: Identify all OIDC flows, libraries, and vendors in use.</p>
</li>
<li><p><strong>Map to ISO/IEC 26131–26139</strong>: Compare existing implementations against the ISO standard to identify gaps.</p>
</li>
<li><p><strong>Update Configurations</strong>: Work with vendors to ensure they support the ISO version. Update your identity provider settings accordingly.</p>
</li>
<li><p><strong>Train Teams</strong>: Educate engineering, security, and compliance teams on the changes and benefits of ISO OIDC.</p>
</li>
</ol>
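<p>Step 1 can start small. Below is a hedged sketch of an audit helper that flags missing fields in a provider's <code>.well-known/openid-configuration</code> document (the required-field list follows OpenID Connect Discovery; the sample document and issuer URL are hypothetical):</p>

```typescript
// Hypothetical audit helper: check an OIDC discovery document for the
// metadata fields a gap analysis against the ISO spec would start from.
type DiscoveryDoc = Record<string, unknown>;

// Required provider metadata per OpenID Connect Discovery 1.0
const REQUIRED_FIELDS = [
  "issuer",
  "authorization_endpoint",
  "token_endpoint",
  "jwks_uri",
  "response_types_supported",
  "subject_types_supported",
  "id_token_signing_alg_values_supported",
];

function missingMetadata(doc: DiscoveryDoc): string[] {
  return REQUIRED_FIELDS.filter((field) => !(field in doc));
}

// Example: a provider that has not published its JWKS endpoint.
const doc: DiscoveryDoc = {
  issuer: "https://idp.example.com",
  authorization_endpoint: "https://idp.example.com/authorize",
  token_endpoint: "https://idp.example.com/token",
  response_types_supported: ["code"],
  subject_types_supported: ["public"],
  id_token_signing_alg_values_supported: ["RS256"],
};

console.log(missingMetadata(doc)); // ["jwks_uri"]
```

Running this against every identity provider in your inventory gives you a first, mechanical pass at the gap map before the deeper manual review.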
<hr />
<h2 id="heading-faqs">FAQs</h2>
<p><strong>Q1: What does OIDC becoming ISO mean for businesses?</strong> It means organizations can now rely on an internationally recognized, audit-ready identity standard — reducing compliance risk and unlocking new regulated markets.</p>
<p><strong>Q2: How does OIDC ISO help AI agents?</strong> It ensures AI agents can authenticate securely and interoperably across systems, forming the trust layer for MCP-driven ecosystems.</p>
<p><strong>Q3: How is ISO OIDC different from earlier OIDC specs?</strong> ISO versions consolidate all errata and clarifications, providing a single, globally trusted reference point.</p>
<p><strong>Q4: Why should enterprises act now?</strong> Adopting ISO OIDC now reduces identity technical debt, lowers compliance costs, and positions enterprises ahead of future regulations.</p>
<hr />
<h2 id="heading-conclusion-a-clean-ledger-for-the-future-of-identity">Conclusion: A Clean Ledger for the Future of Identity</h2>
<p>OIDC becoming ISO is not just a standards milestone. It’s a <strong>technical debt repayment plan</strong> for digital identity — giving businesses a chance to reset, unify, and secure their ecosystems. And as AI agents and MCP become central to enterprise workflows, ISO OIDC is the <strong>trust anchor</strong> they’ll rely on.</p>
<p>For executives, PMs, and engineers alike, this is a rare chance to pay off accumulated debt and position identity as a growth enabler. Because in the era of agentification, trust isn’t optional — it’s the foundation.</p>
<hr />
<p><em>Pro Tip: If you’re building agent ecosystems or planning MCP integrations, bake ISO OIDC compliance into your roadmap now. Think of it as paying off debt today to avoid bankruptcy tomorrow.</em></p>
<p><strong>Go Rebels!</strong> ✊🏽</p>
<hr />
<h2 id="heading-learn-more-about-the-new-oidc">Learn more about the <em>new</em> OIDC</h2>
<ul>
<li><p><a target="_blank" href="https://openid.net/10-years-on-openidconnect-published-as-iso-spec/?utm_source=rebelion.la">10 Years On: OpenID Connect Published as an ISO/IEC Spec</a></p>
</li>
<li><p><a target="_blank" href="https://self-issued.info/?p=2573&amp;utm_source=rebelion.la">OpenID Connect specifications published as ISO standards</a></p>
</li>
<li><p><a target="_blank" href="https://openid.net/specs/openid-connect-4-identity-assurance-1_0.html?utm_source=rebelion.la">OpenID Connect for Identity Assurance 1.0</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Evolution of OAuth, Dynamic Client Registration, and Why It Matters for AI]]></title><description><![CDATA[Remember when you only had one password for everything? That era is long gone. Today, apps, cloud services, and AI agents all need to talk to each other — securely, automatically, and at scale. The silent hero making this possible: OAuth.

How many a...]]></description><link>https://rebelion.la/the-evolution-of-oauth-dynamic-client-registration-and-why-it-matters-for-ai</link><guid isPermaLink="true">https://rebelion.la/the-evolution-of-oauth-dynamic-client-registration-and-why-it-matters-for-ai</guid><category><![CDATA[Dynamic Client Registration]]></category><category><![CDATA[mcp]]></category><category><![CDATA[AI]]></category><category><![CDATA[SSO]]></category><category><![CDATA[OIDC]]></category><category><![CDATA[pkce]]></category><category><![CDATA[oauth]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Mon, 01 Sep 2025 16:50:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756744883060/1523e5a1-9549-4af3-ad43-d04142ba3c5a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Remember when you only had one password for everything?</strong> That era is long gone. Today, apps, cloud services, and AI agents all need to talk to each other — securely, automatically, and at scale. The silent hero making this possible: <strong>OAuth</strong>.</p>
<ul>
<li><p>How many applications can you think of that use OAuth for authentication and authorization?</p>
</li>
<li><p>How many applications have you actually approved with your social login?</p>
</li>
</ul>
<p>But OAuth itself has evolved. As the AI ecosystem grows, we're witnessing a new level of demand for <strong>Dynamic Client Registration (DCR)</strong> — the ability for clients to register with an authorization server <em>on the fly</em>.</p>
<p>This post is your guide:</p>
<ul>
<li><p>How OAuth has changed (and why it matters).</p>
</li>
<li><p>Why <strong>Dynamic Client Registration</strong> is a cornerstone for multi-tenant and AI-driven systems.</p>
</li>
<li><p>How the <strong>Model Context Protocol (MCP)</strong> is adopting these patterns.</p>
</li>
<li><p>And a practical <strong>step-by-step with Authentik and the HAPI MCP Server</strong> to ground it all.</p>
</li>
</ul>
<hr />
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/oauth-evolution-diagram.svg" alt="OAuth's Evolution Diagram" /></p>
<h2 id="heading-1-oauths-evolution-from-delegation-to-identity-to-best-practice">1. OAuth's Evolution: From Delegation to Identity to Best Practice</h2>
<p>Let's start simple.</p>
<ul>
<li><p><strong>OAuth 2.0 (2012):</strong> The original delegation protocol. It let apps request access to resources on your behalf (e.g., a calendar app accessing your Google Calendar). But it was flexible — sometimes too flexible. Some flows turned out to be insecure.</p>
</li>
<li><p><strong>OpenID Connect (2014):</strong> A layer on top of OAuth 2.0. It added an <strong>ID token</strong> so apps could also <em>know who you are</em> — not just access your data. Think of it as turning OAuth into both an <strong>access key</strong> <em>and</em> a <strong>passport</strong>.</p>
</li>
<li><p><strong>OAuth 2.1 (draft, 2020+):</strong> A cleanup spec. It didn't reinvent the wheel — it just enforced <strong>best practices</strong> learned over a decade:</p>
<ul>
<li><p>No more Implicit or Password flows.</p>
</li>
<li><p><strong>PKCE everywhere.</strong></p>
</li>
<li><p><strong>Refresh token rotation.</strong></p>
</li>
<li><p>TLS mandatory.</p>
</li>
<li><p>Discovery &amp; registration treated as first-class citizens.</p>
</li>
</ul>
</li>
</ul>
<p>👉 In short: OAuth matured from a flexible toolkit to a <strong>secure, standardized foundation</strong>.</p>
<hr />
<h2 id="heading-2-why-dynamic-client-registration-matters-now">2. Why Dynamic Client Registration Matters Now</h2>
<p>Static client registration works fine if you're only registering a handful of apps. But what if:</p>
<ul>
<li><p>You're running a <strong>multi-tenant SaaS platform</strong> with thousands of customers, each needing isolated integrations?</p>
</li>
<li><p>You're building <strong>AI agent ecosystems</strong>, where agents spawn dynamically and need credentials to access APIs in real time?</p>
<ul>
<li>With <a target="_blank" href="https://modelcontextprotocol.io/">MCP</a>, these agents can self-register and obtain the necessary credentials without manual intervention.</li>
</ul>
</li>
</ul>
<p>That's where <strong>Dynamic Client Registration (RFC 7591)</strong> comes in:</p>
<ul>
<li><p>Clients can <strong>register programmatically</strong> with an Authorization Server.</p>
</li>
<li><p>They receive credentials (client ID, secrets, etc.) automatically.</p>
</li>
<li><p>They can operate without human intervention.</p>
</li>
</ul>
<p>This matters because:</p>
<ul>
<li><p><strong>Cost savings:</strong> No manual provisioning overhead.</p>
</li>
<li><p><strong>Scalability:</strong> Hundreds or thousands of clients → no problem.</p>
</li>
<li><p><strong>Security:</strong> Credentials are issued and rotated via defined policies.</p>
</li>
</ul>
<p>Without DCR, scaling modern environments is like trying to hand out car keys at a concert — messy, error-prone, and slow.</p>
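<p>To make that concrete, here is a minimal sketch of the RFC 7591 metadata a client would POST to an authorization server's registration endpoint. The client name and callback URL are illustrative:</p>

```typescript
// Sketch of an RFC 7591 Dynamic Client Registration request body.
// A real client would POST this JSON to the registration_endpoint
// advertised in the server's discovery metadata.
interface ClientMetadata {
  client_name: string;
  redirect_uris: string[];
  grant_types: string[];
  response_types: string[];
  token_endpoint_auth_method: string;
}

function registrationRequest(name: string, callback: string): ClientMetadata {
  return {
    client_name: name,
    redirect_uris: [callback],
    grant_types: ["authorization_code"],
    response_types: ["code"],
    // "none" marks a public client, which relies on PKCE instead of a secret
    token_endpoint_auth_method: "none",
  };
}

const body = registrationRequest("tenant-42-agent", "https://app.example.com/callback");
console.log(JSON.stringify(body, null, 2));
```

The server answers with a <code>client_id</code> (and, for confidential clients, a secret), so each tenant or agent gets its own credentials with no human in the loop.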
<hr />
<h2 id="heading-3-mcp-and-dynamic-registration-a-perfect-fit">3. MCP and Dynamic Registration: A Perfect Fit</h2>
<p>Fast forward to <strong>today's AI-driven world</strong>. Enter the <strong>Model Context Protocol (MCP)</strong> — an emerging specification defining how AI systems interact securely and consistently with external tools, APIs, and resources.</p>
<p>From the <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#dynamic-client-registration">MCP specification (June 2025 release)</a>:</p>
<blockquote>
<p>"Dynamic Client Registration enables MCP clients to register securely with authorization servers, supporting environments with high client churn such as multi-agent systems."</p>
</blockquote>
<p>What this means in plain language:</p>
<ul>
<li><p>AI agents (think assistants, copilots, bots) can <strong>self-register</strong> with a system.</p>
</li>
<li><p>No admin has to pre-configure each one.</p>
</li>
<li><p>The authorization flow is standardized, portable, and safe.</p>
</li>
</ul>
<p>MCP leverages the OAuth ecosystem — <strong>discovery, PKCE, and DCR</strong> — to make this seamless.</p>
<p>This is big. Imagine AI agents spinning up per request, each one negotiating its credentials securely without any human intervention. That's the kind of automation we're heading toward.</p>
<hr />
<h2 id="heading-4-example-authentik-hapi-mcp-server">4. Example: Authentik + HAPI MCP Server</h2>
<p>Enough theory — let's make it real.</p>
<p>Imagine you're deploying <a target="_blank" href="https://docs.goauthentik.io/"><strong>Authentik</strong></a> (an open-source Identity Provider) as your authorization server, and you want your AI agents running in the <strong>HAPI MCP Server</strong> (<a target="_blank" href="https://hapi.mcp.com.ai">hapi.mcp.com.ai</a>) to register dynamically.</p>
<p>Here's the simplified step-by-step flow; for more details, refer to the <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#dynamic-client-registration">MCP specification</a>.</p>
<pre><code class="lang-mermaid">sequenceDiagram
    participant Client as MCP Client (Agent)
    participant Auth as Authentik (Auth Server)
    participant Server as HAPI MCP Server

    rect Purple
      critical Authorization Server Discovery
        Client--&gt;Server: Tell me where
      end
    end

    critical Dynamic Client Registration
      rect rgb(17,147,176)
        Client-&gt;&gt;Auth: POST /register -&gt; client_id, secret
        Auth-&gt;&gt;Client: Client Credentials
      end
      critical User Authorizes
        rect rgb(218,119,86)
              Client--&gt;Auth: Auth Request (PKCE)&lt;br&gt;Token exchange
        end
      end
    end

    rect green
      critical Ready&lt;br&gt;(new agent linked/&lt;br&gt;ready to call APIs)
        Client-&gt;&gt;Server: Request with token      
        Server-&gt;&gt;Client: Response
      end
    end
</code></pre>
<h3 id="heading-step-1-discovery">Step 1. Discovery</h3>
<ul>
<li><p>The MCP client makes an initial request without an access token,</p>
</li>
<li><p>The MCP Server responds with the <strong>discovery endpoint</strong>:</p>
</li>
</ul>
<pre><code class="lang-http">https://&lt;authentik-domain&gt;/.well-known/openid-configuration
</code></pre>
<blockquote>
<p>"I can't grant you access without a token. Check with Auth Server for details."</p>
</blockquote>
<ul>
<li>From here, it learns where to send authorization and registration requests.</li>
</ul>
<h3 id="heading-step-2-dynamic-client-registration">Step 2. Dynamic Client Registration</h3>
<ul>
<li>The MCP client calls Authentik's registration endpoint (from discovery metadata):</li>
</ul>
<pre><code class="lang-http">POST {registration_endpoint}    // taken from the discovery metadata
Content-Type: application/json

{
  "client_name": "hapi-agent-123",
  "redirect_uris": ["https://hapi.mcp.com.ai/callback"],
  "grant_types": ["authorization_code"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}
</code></pre>
<ul>
<li>Authentik returns:</li>
</ul>
<pre><code class="lang-json">{
  "client_id": "abc123",
  "client_secret": "xyz789",    // optional: only issued to confidential clients
  "registration_access_token": "rat_456"
}
</code></pre>
<p><a target="_blank" href="https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#name-client-types"><strong>OAuth 2.1</strong></a> defines <strong>two client types</strong> based on their ability to authenticate securely with the authorization server:</p>
<ul>
<li><p>"<strong>confidential</strong>": Clients that have credentials with the Authorization Server (AS) are designated as "confidential clients"</p>
</li>
<li><p>"<strong>public</strong>": Clients without credentials are referred to as "public clients." Public clients cannot keep a secret confidential, so they must use <strong>PKCE</strong> to protect the authorization code flow.</p>
</li>
</ul>
<h3 id="heading-step-3-authorization-code-pkce-flow">Step 3. Authorization Code + PKCE Flow</h3>
<ul>
<li>The MCP client (HAPI agent) initiates an auth request:</li>
</ul>
<pre><code class="lang-http">GET /application/o/authorize?
    response_type=code
    &amp;client_id=abc123
    &amp;redirect_uri=https://hapi.mcp.com.ai/callback
    &amp;scope=openid profile
    &amp;code_challenge=...
    &amp;code_challenge_method=S256
</code></pre>
<ul>
<li>Authentik handles user consent, then redirects back to HAPI MCP Client with an authorization code.</li>
</ul>
<h3 id="heading-step-4-token-exchange">Step 4. Token Exchange</h3>
<ul>
<li>The HAPI MCP Client exchanges the code (with PKCE verifier) at Authentik's token endpoint:</li>
</ul>
<pre><code class="lang-http">POST /application/o/token/
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&amp;code=...
&amp;client_id=abc123
&amp;redirect_uri=https://hapi.mcp.com.ai/callback
&amp;code_verifier=...
</code></pre>
<ul>
<li>Authentik issues <strong>access + refresh tokens</strong>.</li>
</ul>
<h3 id="heading-step-5-agent-ready">Step 5. Agent Ready</h3>
<ul>
<li>Now the HAPI MCP agent can call APIs securely, and if refresh tokens rotate, it seamlessly handles renewal.</li>
</ul>
<hr />
<h2 id="heading-5-why-this-matters">5. Why This Matters</h2>
<p>By combining <strong>OAuth 2.1 best practices</strong>, <strong>Dynamic Client Registration</strong>, and the <strong>MCP standard</strong>, you get:</p>
<ul>
<li><p><strong>Seamless scaling</strong> of clients/agents.</p>
</li>
<li><p><strong>Zero manual overhead</strong> for new tenants or AI workloads.</p>
</li>
<li><p><strong>Security baked in</strong> with PKCE, HTTPS, and rotation.</p>
</li>
<li><p><strong>Interoperability</strong> — the same flows work across Authentik, Keycloak, Azure AD, and beyond.</p>
</li>
</ul>
<p>This isn't just theory. It's the backbone for the <strong>next generation of AI-native infrastructure</strong>.</p>
<hr />
<h2 id="heading-final-thought">Final Thought</h2>
<p>OAuth started out as a way to grant apps limited access to your accounts (the "sign in with Google" era). Today, it's the glue that lets <strong>AI agents, SaaS platforms, and enterprises</strong> interact safely and automatically. <a target="_blank" href="https://rebelion.la/why-oidc-becoming-iso-matters">OpenID Connect (OIDC) becoming an ISO standard</a> matters now more than ever.</p>
<p>And if you're ready to experiment:</p>
<p>👉 Try spinning up an agent with the <a target="_blank" href="https://hapi.mcp.com.ai"><strong>HAPI MCP Server</strong></a> and plug it into Authentik. You'll see just how powerful Dynamic Client Registration is when theory meets practice.</p>
<p>The magic of HAPI Server is that it simplifies the implementation of OAuth 2.1 flows out of the box, using just the Swagger Specs and zero coding, making it easier for developers to build secure and scalable applications. PMs can spin up new agents in minutes, not weeks.</p>
<p>Go Rebels! ✊🏽</p>
]]></content:encoded></item><item><title><![CDATA[Agent Tools Guardrails: Why Too Many Tools Can Break Your AI Agent]]></title><description><![CDATA[When you're building AI agents, there's a question we don't talk about enough:
👉 How many tools should an agent actually have access to?
It sounds simple: connect your agent to a bunch of MCP servers, aggregate the tools, and let the model decide. B...]]></description><link>https://rebelion.la/agent-tools-guardrails-why-too-many-tools-can-break-your-ai</link><guid isPermaLink="true">https://rebelion.la/agent-tools-guardrails-why-too-many-tools-can-break-your-ai</guid><category><![CDATA[agents]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[mcp]]></category><category><![CDATA[guardrails]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Thu, 28 Aug 2025 00:28:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756340118459/92d58e74-fbe7-4ffb-af5c-85b9fca7dc1a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you're building AI agents, there's a question we don't talk about enough:</p>
<p>👉 <strong>How many tools should an agent actually have access to?</strong></p>
<p>It sounds simple: connect your agent to a bunch of MCP servers, aggregate the tools, and let the model decide. But in practice, that "let's give it everything" approach is a recipe for confusion, higher latency, and even session corruption.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/mcp/agent-tools-vs-code.png" alt="Agent Tools in VS Code example" /></p>
<p>Let's break it down.</p>
<h2 id="heading-the-hidden-risk-tool-overload">The Hidden Risk: Tool Overload</h2>
<p>Think about it like this:</p>
<ul>
<li><p>If you give a chef one pot, they'll probably make a meal just fine.</p>
</li>
<li><p>If you give them a hundred different pots, each slightly different, the kitchen slows down. They spend more time choosing than cooking.</p>
</li>
</ul>
<p>The same thing happens with agents. Presenting too many tools to the model increases "decision friction" and makes it more likely that the wrong one gets called.</p>
<p><strong>The result?</strong></p>
<ul>
<li><p>Wasted tokens on irrelevant tool calls.</p>
</li>
<li><p>Incorrect responses that break user trust.</p>
</li>
<li><p>Sessions that derail because of "tool confusion."</p>
</li>
</ul>
<p>I have been in situations where my local Ollama either times out or gets confused by the number of tools available. This has led to suboptimal performance and frustrating user experiences.</p>
<hr />
<h2 id="heading-guardrails-for-agent-tools">Guardrails for Agent Tools</h2>
<p>Managing an agent's toolset is as important as managing its memory or context window. Without the right guardrails, even the most powerful model can collapse under its own options.</p>
<p>There are <strong>three main approaches</strong> teams are trying today:</p>
<h3 id="heading-1-manual-coding-static-guardrails">1. <strong>Manual Coding</strong> (Static Guardrails)</h3>
<p>You hardcode the tool list in the agent logic. Simple, predictable, but rigid. If you need to add or swap tools, you will be shipping code changes.</p>
<p>Good for: <strong>Proof-of-concept agents.</strong> Bad for: <strong>Anything you want to scale.</strong></p>
<h3 id="heading-2-specialized-agents-semi-automatic">2. <strong>Specialized Agents</strong> (Semi-Automatic)</h3>
<p>Instead of one agent with 50 tools, you spin up multiple agents with smaller, curated toolsets. Each agent is specialized in a specific domain—think "Customer Support Agent" vs. "Analytics Agent."</p>
<p>This gives you flexibility, but you now have <strong>agent sprawl</strong>—managing dozens of specialized agents becomes its own problem.</p>
<p>A variation on this is to use <strong>tool categories</strong>—grouping similar tools together and allowing the agent to choose from a category rather than a long list. This can reduce decision fatigue while still providing flexibility. You get an agent with a more focused toolset, tailored to specific tasks.</p>
<h3 id="heading-3-dynamic-tool-selection-runtime-guardrails">3. <strong>Dynamic Tool Selection</strong> (Runtime Guardrails)</h3>
<p>Here's where things get interesting. Instead of hardcoding or duplicating, you let the agent <strong>query for the tools it needs at runtime</strong>.</p>
<p>For example, with <a target="_blank" href="https://chat.mcp.com.ai"><strong>ChatMCP</strong></a>, the agents are defined by their context with filtered tools. Based on the conversation, the model is dynamically fed with only the relevant tools that meet the conversation's semantic needs. This keeps the toolset lean, relevant, and avoids overwhelming the LLM.</p>
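<p>A toy version of that filtering idea, assuming simple keyword overlap instead of the embedding-based matching a production system would use (the tool names and descriptions are invented):</p>

```typescript
// Runtime guardrail sketch: score each tool's description against the
// current message and expose only the relevant subset to the model.
interface Tool {
  name: string;
  description: string;
}

function tokens(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter((w) => w.length > 2);
}

function selectTools(tools: Tool[], message: string, limit = 3): Tool[] {
  const words = new Set(tokens(message));
  const score = (t: Tool) =>
    tokens(t.description).filter((w) => words.has(w)).length;
  return tools
    .map((t) => ({ t, s: score(t) }))
    .filter(({ s }) => s > 0)        // drop tools with no semantic overlap
    .sort((a, b) => b.s - a.s)       // most relevant first
    .slice(0, limit)
    .map(({ t }) => t);
}

const tools: Tool[] = [
  { name: "create_invoice", description: "Create a customer invoice" },
  { name: "query_metrics", description: "Query analytics metrics and reports" },
  { name: "send_email", description: "Send an email to a customer" },
];

const picked = selectTools(tools, "generate an invoice for this customer");
console.log(picked.map((t) => t.name)); // ["create_invoice", "send_email"]
```

Even this naive ranking keeps the analytics tool out of an invoicing conversation; swapping the scorer for embeddings gives you the semantic matching described above without changing the guardrail's shape.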
<hr />
<h2 id="heading-beyond-guardrails-multi-headed-specialization">Beyond Guardrails: Multi-Headed Specialization</h2>
<p>What if one server could serve multiple toolsets, on demand?</p>
<p>That's where solutions like a <strong>HAPI Server in "hydra mode"</strong> come in—letting you define an MCP Server with multiple "heads" (each head being an API server). Instead of juggling dozens of servers, you present a single entry point that flexibly exposes the right tools at the right time.</p>
<p>This keeps your architecture clean while still giving your agents flexibility.</p>
<p>Dynamic tool selection or specialized agents is the best approach for most use cases. Whatever strategy you choose, the key is to keep the toolset relevant and manageable.</p>
<hr />
<h2 id="heading-why-this-matters">Why This Matters</h2>
<p>The number of tools you give your agent directly impacts:</p>
<ul>
<li><p><strong>Performance</strong> (more tools = slower calls).</p>
</li>
<li><p><strong>Reliability</strong> (fewer mistakes if tool choice is clear).</p>
</li>
<li><p><strong>Cost</strong> (fewer wasted tokens on irrelevant tool calls).</p>
</li>
</ul>
<p>Getting this wrong isn't just a technical detail—it's a <strong>business problem</strong>. If your agent burns through context windows or calls the wrong APIs, you're losing both <strong>time and money.</strong></p>
<hr />
<h2 id="heading-closing-thought">Closing Thought</h2>
<p>Building more intelligent agents isn't about giving them <em>all the tools</em>; it's about giving them <strong>the right tools at the right time.</strong></p>
<p>The future of agent design lies in dynamic guardrails—systems that adapt toolsets to context, not static lists that overwhelm the model.</p>
<p>If you're exploring MCP-based architectures and want to see what runtime guardrails look like in practice, check out the <a target="_blank" href="https://docs.mcp.com.ai/overview"><strong>HAPI Stack</strong></a>. With features like <strong>hydra-mode</strong>, it makes managing agent tools simpler, faster, and more scalable.</p>
<p>👉 <strong>Question for you:</strong> How are you managing the tool list for your agents today—manual, specialized, or dynamic?</p>
<p>Please let me know in the comments or reach out to me directly. Always happy to chat about building better AI systems!</p>
<p>Go Rebels! ✊🏽</p>
]]></content:encoded></item><item><title><![CDATA[Model Context Protocol (MCP) — Is it a protocol or a contract?]]></title><description><![CDATA[Since its introduction, MCP has sparked fascinating discussions across developer communities. If you're new to the concept, you can browse Reddit to see the ongoing debate surrounding it, with both proponents and opponents.
When I first discovered MC...]]></description><link>https://rebelion.la/model-context-protocol-mcp-is-it-a-protocol-or-a-contract</link><guid isPermaLink="true">https://rebelion.la/model-context-protocol-mcp-is-it-a-protocol-or-a-contract</guid><category><![CDATA[mcp]]></category><category><![CDATA[APIs]]></category><category><![CDATA[API Gateway]]></category><category><![CDATA[AI]]></category><category><![CDATA[swagger]]></category><category><![CDATA[protocols]]></category><category><![CDATA[OSI Model]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Tue, 05 Aug 2025 17:19:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754409846858/7a2f20bf-4bd9-4897-b6bd-6c943abb613d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Since its introduction, MCP has sparked fascinating discussions across developer communities. If you're new to the concept, you can browse Reddit to see the ongoing debate surrounding it, with both proponents and opponents.</p>
<p>When I first discovered MCP in late November 2024, I was immediately intrigued by its potential. It seemed like the missing piece in how we communicate with systems using natural language—a problem I was already trying to tackle with my CLI tools <code>gyat</code> and <code>hapi</code>. I created a <a target="_blank" href="https://apicove.com/blog">blog</a> to help others understand these tools and make API interactions more human-friendly. However, these tools worked algorithmically with deterministic workflows; they couldn't systematically interact with AI models.</p>
<p>Anthropic's MCP caught my attention because it <strong>appeared to offer a solution</strong>—a way to standardize natural language communication with systems, benefiting both developers and users. Initially, I saw MCP as a protocol that could bridge the gap between human language and machine comprehension.</p>
<h2 id="heading-my-journey-with-mcp">My Journey with MCP</h2>
<p>I immediately began experimenting with MCP, integrating it into my tools and workflows. The simplicity of adding tools to Claude's desktop interface was compelling—it felt like a natural evolution beyond traditional APIs and protocols.</p>
<p>However, there was a catch. The initial learning curve was steep, and troubleshooting was a challenging task. I wanted to help others facing similar struggles by making MCP more accessible without requiring deep technical understanding.</p>
<p>Through writing about my experiences and sharing practical tips, my understanding evolved in tandem with the specifications. A pivotal moment came when MCP introduced <strong>Streamable HTTP</strong> (Protocol Revision: 2025-03-26) as a transport method. This shift helped me realize that <strong>MCP isn't purely a protocol—it's fundamentally a contract</strong> that emphasizes interaction semantics rather than just data transfer mechanics.</p>
<p>This revelation clarified how MCP could function at a higher level. My <code>gyat</code> and <code>hapi</code> CLI tools were designed to be API-first, and MCP fits seamlessly into that approach—not just defining how systems communicate, but how they should behave.</p>
<h2 id="heading-the-lsp-connection-understanding-mcps-roots">The LSP Connection: Understanding MCP's Roots</h2>
<p>To understand what MCP truly is, let's examine its foundation. Do you recognize this JSON structure?</p>
<pre><code class="lang-json">Content-Length: ...\r\n
\r\n
{
    <span class="hljs-attr">"jsonrpc"</span>: <span class="hljs-string">"2.0"</span>,
    <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
    <span class="hljs-attr">"method"</span>: <span class="hljs-string">"textDocument/completion"</span>,
    <span class="hljs-attr">"params"</span>: {
        ...
    }
}
</code></pre>
<p><img src="https://en.meming.world/images/en/thumb/3/37/Learning_to_be_Spider-Man.jpg/300px-Learning_to_be_Spider-Man.jpg" alt="Learning to be like..." /></p>
<p>What about these TypeScript interfaces?</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">interface</span> RequestMessage <span class="hljs-keyword">extends</span> Message {
    <span class="hljs-comment">/**
     * The request id.
     */</span>
    id: integer | <span class="hljs-built_in">string</span>;

    <span class="hljs-comment">/**
     * The method to be invoked.
     */</span>
    method: <span class="hljs-built_in">string</span>;

    <span class="hljs-comment">/**
     * The method's params.
     */</span>
    params?: array | <span class="hljs-built_in">object</span>;
}

<span class="hljs-keyword">interface</span> ResponseMessage <span class="hljs-keyword">extends</span> Message {
    <span class="hljs-comment">/**
     * The request id.
     */</span>
    id: integer | <span class="hljs-built_in">string</span> | <span class="hljs-literal">null</span>;

    <span class="hljs-comment">/**
     * The result of a request. This member is REQUIRED on success.
     * This member MUST NOT exist if there was an error invoking the method.
     */</span>
    result?: LSPAny;

    <span class="hljs-comment">/**
     * The error object in case a request fails.
     */</span>
    error?: ResponseError;
}
</code></pre>
<p>No, it is not MCP. This is the <a target="_blank" href="https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#contentPart"><strong>Language Server Protocol (LSP)</strong></a>—a protocol that standardizes how development tools communicate with programming language servers. LSP enables editors and IDEs to provide features such as code completion, linting, and refactoring <strong>across different languages without implementing language-specific logic in each tool</strong>.</p>
<p>The lifecycle message flows of LSP and MCP are strikingly similar:</p>
<pre><code class="lang-mermaid">sequenceDiagram
  participant Client
  participant Server

  Client-&gt;&gt;Server: Initialize
  Server-&gt;&gt;Client: Initialized
  Client-&gt;&gt;Server: Register Capability
  Client-&gt;&gt;Server: Unregister Capability
  Client-&gt;&gt;Server: Set Trace
  Client-&gt;&gt;Server: Log Trace
  Client-&gt;&gt;Server: Shutdown
  Client-&gt;&gt;Server: Exit
</code></pre>
<p><img src="https://media.tenor.com/QXVs4QWLlzkAAAAM/spider-man.gif" alt="LSP and MCP are strikingly similar" /></p>
<h3 id="heading-why-lsp-matters-for-understanding-mcp">Why LSP Matters for Understanding MCP</h3>
<p>LSP revolutionized software development by solving a critical integration problem. Before LSP, supporting code intelligence across multiple programming languages meant <strong>writing M × N integrations</strong>: one for each pairing of the M editors with the N languages. Each language had its peculiarities, and each editor had its API, resulting in expensive, fragile, and inefficient custom integrations.</p>
<p>LSP introduced a <strong>universal interface</strong> for language servers. Instead of each editor implementing custom logic for every language, they could all speak the same protocol. <strong>Once a language server was built, any LSP-compliant editor could use it.</strong></p>
<p><strong>Sound familiar?</strong> This is precisely what MCP aims to achieve for AI model interactions. <em>Once an API implements an MCP Server, any compliant client can interact with it without needing custom logic for each client</em>.</p>
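<p>The parallel is visible at the wire level: both protocols wrap their methods in the same JSON-RPC 2.0 envelope. A minimal sketch (the payloads are illustrative, not complete):</p>

```typescript
// The shared JSON-RPC 2.0 envelope used by both LSP and MCP.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number | string;
  method: string;
  params?: object;
}

// An LSP client asking for code completions...
const lspRequest: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/completion",
  params: {
    textDocument: { uri: "file:///main.ts" },
    position: { line: 0, character: 0 },
  },
};

// ...and an MCP client asking a server which tools it exposes.
const mcpRequest: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};
```

<p>Only the method namespace differs; the envelope, the <code>id</code> correlation, and the request/response lifecycle are the same idea.</p>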
<h2 id="heading-the-strategic-launch-lucky-or-smart">The Strategic Launch: Lucky or Smart?</h2>
<p><strong>Timeline:</strong></p>
<ul>
<li><p>MCP SDK made <a target="_blank" href="https://github.com/modelcontextprotocol/typescript-sdk/releases/tag/0.1.0">public on GitHub: October 23rd, 2024</a></p>
</li>
<li><p>Claude Desktop <a target="_blank" href="https://docs.anthropic.com/en/release-notes/claude-apps#october-31st%2C-2024">released: October 31st, 2024</a></p>
</li>
</ul>
<p>The buzz around MCP is largely thanks to Claude Desktop, the first MCP client. Anthropic made a brilliant move by recognizing LSP's potential and applying similar principles to AI model interactions.</p>
<p>The choice of <code>stdio</code> as the initial transport layer was controversial, but strategically sound: it provided a lightweight and efficient communication method that made it easier for developers to build AI-powered tools locally.</p>
<p>Why not use HTTP? <strong>HTTP is a heavyweight protocol</strong> with built-in features that aren't always necessary for simple interactions. <code>stdio</code> is simpler, faster, and more direct—ideal for local applications where both client and server are on the same machine, which was the case for Claude desktop and the initial MCP implementations.</p>
<h3 id="heading-understanding-stdio">Understanding STDIO</h3>
<p><strong>STDIO (Standard Input/Output)</strong> is defined by the C runtime and inherited by UNIX-like systems, consisting of three standard file descriptors:</p>
<ul>
<li><p><strong>stdin</strong> (0): input stream</p>
</li>
<li><p><strong>stdout</strong> (1): output stream</p>
</li>
<li><p><strong>stderr</strong> (2): error stream</p>
</li>
</ul>
<p>These are simply <strong>file streams</strong>, often attached to terminals, pipes, or redirected. STDIO serves as the transport layer for higher-level protocols.</p>
<p><strong>Analogy:</strong> If <strong>HTTP is like postal mail</strong>, STDIO is like <strong>hand-delivering messages through a tube</strong>—simpler, faster, and ideal when sender and receiver are in close proximity.</p>
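<p>Concretely, MCP's stdio transport keeps framing minimal: each JSON-RPC message travels as a single line of JSON. (This differs from LSP, which prefixes a <code>Content-Length</code> header, as shown earlier.) A rough sketch:</p>

```typescript
// Sketch of newline-delimited framing as used by MCP's stdio transport:
// one JSON-RPC message per line, no headers, no embedded newlines.
function frameMessage(message: object): string {
  const json = JSON.stringify(message);
  if (json.includes("\n")) {
    throw new Error("stdio messages must not contain embedded newlines");
  }
  return json + "\n"; // the client writes this line to the server's stdin
}

const framed = frameMessage({ jsonrpc: "2.0", id: 1, method: "tools/list" });
// A server reads stdin line by line and JSON.parse()s each line.
```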
<p><img src="https://media1.giphy.com/media/v1.Y2lkPWVjZjA1ZTQ3YTRycWI5cnhlczZsN3Q1cWR6aXZ5MTUwazZ2b2wzdHZvN2lsZXR3ayZlcD12MV9naWZzX3NlYXJjaCZjdD1n/hTsAAaYV5nRjq/200.webp" alt class="image--center mx-auto" /></p>
<p>The community's push toward HTTP as a transport layer makes sense—it's familiar, widely supported, and includes built-in features like headers, status codes, and content negotiation. For me, the introduction of <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http">Streamable HTTP</a> transport in <code>version 2025-03-26</code> was transformative, making MCP more accessible and easier to integrate into existing systems.</p>
<h2 id="heading-mcp-as-contract-not-protocol">MCP as Contract, Not Protocol</h2>
<p>This brings us to the core insight: <strong>MCP is not just another protocol; it's a semantic layer</strong> that enables <strong>intelligent API interactions</strong>. It allows agents to understand not only the data, but also the <strong>intent and context</strong> behind it.</p>
<p>Consider this question: <em>Where do tools like Swagger/OpenAPI Specification (OAS) fit into networking architectures such as the OSI model?</em></p>
<p>They don't fit neatly, and that's perfectly fine. <strong>Swagger/OAS isn't a protocol in the traditional sense—it's a contract</strong> that defines how systems should interact.</p>
<h3 id="heading-why-swaggeroas-doesnt-fit-the-osi-model">Why Swagger/OAS Doesn't Fit the OSI Model</h3>
<p>The <strong>OSI model</strong> describes <strong>how</strong> data is transmitted across networks—from physical wire (Layer 1) to applications (Layer 7).</p>
<p>Swagger/OAS, by contrast:</p>
<ul>
<li><p>Doesn't handle <strong>transport</strong> (Layer 4)</p>
</li>
<li><p>Doesn't define <strong>sessions</strong> (Layer 5)</p>
</li>
<li><p>Isn't concerned with <strong>encoding</strong> (Layer 6)</p>
</li>
<li><p>Isn’t a <strong>runtime application</strong> (Layer 7)</p>
</li>
</ul>
<p>Instead, it:</p>
<ul>
<li><p><strong>Describes APIs</strong>—endpoints, parameters, responses, and authentication</p>
</li>
<li><p><strong>Acts as metadata</strong> for client-service interactions over HTTP</p>
</li>
</ul>
<p>This is <strong>meta-communication</strong>—a <strong>contract</strong>, not transmission itself.</p>
<h3 id="heading-the-layer-8-concept">The "Layer 8" Concept</h3>
<p>I would place Swagger in <a target="_blank" href="https://en.wikipedia.org/wiki/Layer_8"><strong>Layer 8</strong></a><strong>: "People, Policies, and Planning"</strong>—where developers, product managers, and architects live. It's about human <strong>agreement</strong>, <strong>specification</strong>, and <strong>negotiation</strong> of service interfaces.</p>
<p><strong>Analogy:</strong> If the OSI model is a road system:</p>
<ul>
<li><p>Layers 1–4 are asphalt, traffic rules, intersections, and cars.</p>
</li>
<li><p>Layers 5–7 are the drivers, GPS, and mobile apps that send and receive instructions.</p>
</li>
<li><p><strong>Swagger/OAS</strong> is the <em>roadmap and city planning documentation</em> telling drivers and managers how roads should be used.</p>
</li>
</ul>
<h3 id="heading-mcp-operates-in-the-intent-layer">MCP Operates in the Intent Layer</h3>
<p>In <strong>intent-based systems</strong> (like MCP or autonomous agents), specifications like Swagger or gRPC play crucial roles:</p>
<ul>
<li><p>Declaring <strong>intent</strong></p>
</li>
<li><p>Providing <strong>machine-readable contracts</strong></p>
</li>
<li><p>Powering <strong>automation, code generation, testing, and governance</strong></p>
</li>
</ul>
<p>In modern cloud-native and AI-centric architectures, these tools reside in the intent/control plane—often referred to as <strong>Layer 8+</strong> or <strong>"Layer 9: Governance/Compliance."</strong></p>
<h2 id="heading-summary-contract-vs-protocol">Summary: Contract vs. Protocol</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Aspect</strong></td><td><strong>OSI Equivalent</strong></td><td><strong>Why It's Different</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Describes APIs over HTTP</td><td>Above Layer 7</td><td>It's metadata, not data</td></tr>
<tr>
<td>Exists at design time</td><td>Layer 8 (People)</td><td>It's a contract, not a protocol</td></tr>
<tr>
<td>Used by Devs &amp; Tools</td><td>Layer 8/9</td><td>Focused on agreement &amp; automation</td></tr>
<tr>
<td>Does not transmit data</td><td>Not in OSI</td><td>Doesn't live in the data plane</td></tr>
</tbody>
</table>
</div><p><strong>Key Insight:</strong> Swagger/OAS is what <em>developers</em> and <em>machines</em> use to agree <strong>on how</strong> to communicate, but it doesn't do the communicating itself.</p>
<p>Similarly, <strong>MCP lives in "Layer 8"—the realm of contracts, conventions, and collaboration.</strong> It's not a limitation; it's evolution from "transporting data" to "understanding and automating intent."</p>
<h2 id="heading-why-this-matters-mcps-real-value">Why This Matters: MCP's Real Value</h2>
<p>Anthropic's genius was leveraging this intent-layer understanding to create a more intuitive interface for AI model interactions. MCP wasn't just another specification—it was a way to make AI more accessible and practical.</p>
<p>The introduction of <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http">Streamable HTTP</a> transport was my AHA moment, making integration with existing systems seamless. When I finally understood <strong>MCP as a contract rather than a protocol</strong>, everything fell into place.</p>
<h2 id="heading-looking-forward-mcp-in-enterprise">Looking Forward: MCP in Enterprise</h2>
<p>In regulated industries—such as banking, healthcare, and defense—contracts are paramount. MCP's explicit semantic layer maps directly to compliance requirements:</p>
<ul>
<li><p><strong>Audit Trails:</strong> Each message logged with context keys</p>
</li>
<li><p><strong>Policy Enforcement:</strong> Only approved message types pass through</p>
</li>
<li><p><strong>Version Control:</strong> Upgrade contracts without breaking consumers</p>
</li>
</ul>
<p>With modern deployment stacks, you can deploy MCP servers using GitOps, ensuring every revision is tracked, tested, and approved.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>What began as curiosity about a new protocol evolved into a deeper understanding of how AI systems can better serve human needs. <strong>MCP isn't just about data exchange—it's about creating meaningful, contextual interactions</strong> between humans and AI.</p>
<p>By recognizing MCP as a contract that operates in the intent layer, we can build more reliable, maintainable, and robust AI integrations. It's not about the transport mechanism; it's about the semantic agreement that enables intelligent automation.</p>
<p>I have been working with the <a target="_blank" href="https://mpc.com.ai">HAPI Stack for MCP</a>, which aims to provide a comprehensive framework for building and deploying API-first MCP-based applications. It simplifies the integration and deployment of MCP-based applications, allowing developers to focus on creating intelligent systems without getting bogged down in low-level details.</p>
<p><img src="https://docs.mcp.com.ai/img/diagrams/hapi-mcp-diagram.svg" alt="HAPI Stack for MCP" /></p>
<p>Please review the <a target="_blank" href="https://docs.hapi.com.ai">HAPI Stack documentation</a> to learn how it can help you build robust, intent-driven applications using MCP.</p>
<h2 id="heading-further-reading">Further Reading</h2>
<ul>
<li><p><a target="_blank" href="https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification">LSP Specification</a></p>
</li>
<li><p><a target="_blank" href="https://langserver.org">Language Server Protocol</a></p>
</li>
<li><p><a target="_blank" href="https://modelcontextprotocol.io">Model Context Protocol (MCP)</a></p>
</li>
<li><p><a target="_blank" href="https://docs.mcp.com.ai">MCP as Contracts documentation</a></p>
</li>
</ul>
<hr />
<p><em>This post was brought to you by someone who once thought MCP was just another protocol—and is now convinced it's a contract that unlocks smarter, more reliable AI integrations.</em></p>
<p>Go Rebels! ✊🏽</p>
]]></content:encoded></item><item><title><![CDATA[You Don't Need to Implement MCP Servers: A Contract-First Approach to AI Tool Integration]]></title><description><![CDATA[The Model Context Protocol (MCP) has been introduced as an "open standard for connecting AI assistants to the systems where data lives". In other words, MCP is a contract or interface specification – much like OpenAPI – that informs large language mo...]]></description><link>https://rebelion.la/you-dont-need-to-implement-mcp-servers-a-contract-first-approach-to-ai-tool-integration</link><guid isPermaLink="true">https://rebelion.la/you-dont-need-to-implement-mcp-servers-a-contract-first-approach-to-ai-tool-integration</guid><category><![CDATA[mcp]]></category><category><![CDATA[mcp server]]></category><category><![CDATA[OpenApi]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[AI-automation]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Sun, 27 Jul 2025 13:06:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753621057514/7b87faf6-5b6d-4614-83d9-c235c5e8c96f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The <a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol</a> (MCP) has been introduced as an <em>"</em><a target="_blank" href="https://www.anthropic.com/news/model-context-protocol#:~:text=Today%2C%20we%27re%20open,produce%20better%2C%20more%20relevant%20responses"><em>open standard for connecting AI assistants to the systems where data lives</em></a><em>"</em>. In other words, MCP is a contract or interface specification – much like OpenAPI – that informs large language models (LLMs) about the <em>available tools and data sources and how to utilize them</em>. 
This aligns with the analogy that <a target="_blank" href="https://www.linkedin.com/pulse/extending-apis-mcp-oas-two-faces-same-coin-adrian-escutia-zg68c?trk=public_post#:~:text=OpenAPI%20describes%20how%20machines%20talk,to%20machines"><strong>OpenAPI describes how machines communicate with machines</strong></a>, whereas <strong>MCP describes how AI models interact with applications</strong>. By standardizing the interface, MCP replaces many one-off integrations with a single protocol, allowing AI agents to "plug and play" with databases, APIs, or local files regardless of their location.</p>
<p>Many developers initially assume that "MCP server" means writing new adapter code, but that's a misconception. Just as in OpenAPI a <code>servers</code> section is <em>metadata</em> (not code), in MCP the "server" entry in the spec merely points to the implementation. You <strong>don't need to hand-code a new server to satisfy the MCP spec</strong> – you only need to <em>expose the contract</em>; Swagger specs can be used to auto-generate your MCP tools. Existing RESTful APIs (with their OpenAPI/Swagger definitions) can <em>become</em> MCP servers on the fly by generating the MCP contract from the API spec. In practice, each REST endpoint or <a target="_blank" href="https://swagger.io/specification/#operation-object">operation object</a> turns into an <a target="_blank" href="https://modelcontextprotocol.io/docs/concepts/tools">MCP <em>tool</em></a> whose name matches the API's <code>operationId</code> (or path and method).</p>
<p>Consider a simple example. An OpenAPI (OAS) snippet might define a <code>/users</code> endpoint like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">openapi:</span> <span class="hljs-string">"3.0.0"</span>
<span class="hljs-attr">paths:</span>
  <span class="hljs-string">/users:</span>
    <span class="hljs-attr">get:</span>
      <span class="hljs-attr">operationId:</span> <span class="hljs-string">getUsers</span>
      <span class="hljs-attr">description:</span> <span class="hljs-string">Retrieve</span> <span class="hljs-string">a</span> <span class="hljs-string">list</span> <span class="hljs-string">of</span> <span class="hljs-string">users.</span>
      <span class="hljs-attr">parameters:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">limit</span>
          <span class="hljs-attr">in:</span> <span class="hljs-string">query</span>
          <span class="hljs-attr">schema:</span>
            <span class="hljs-attr">type:</span> <span class="hljs-string">integer</span>
            <span class="hljs-attr">default:</span> <span class="hljs-number">10</span>
</code></pre>
<p>In MCP terms, this becomes a <em>tool</em> named <code>getUsers</code> with a JSON schema for its input. For instance, the equivalent MCP contract might include:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"tools"</span>: [
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"getUsers"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Retrieve a list of users"</span>,
      <span class="hljs-attr">"inputSchema"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
        <span class="hljs-attr">"properties"</span>: {
          <span class="hljs-attr">"limit"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"integer"</span>, <span class="hljs-attr">"default"</span>: <span class="hljs-number">10</span> }
        }
      }
    }
  ]
}
</code></pre>
<p>Here, <code>getUsers</code> (the REST operationId) is exposed as a tool name, and its query parameter <code>limit</code> is captured in the <code>inputSchema</code> (see <a target="_blank" href="https://modelcontextprotocol.io/docs/learn/server-concepts#tools-ai-actions">MCP's tool definition structure</a>). This mirrors how <strong>mcp-openapi-proxy</strong> or similar tools work: in "low-level mode" they <em>automatically register every API endpoint as an MCP tool</em> (e.g. mapping <code>/chat/completions</code> to a <code>chat_completions()</code> tool).</p>
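<p>The mapping above is mechanical enough to sketch in a few lines. The shapes below are deliberately simplified (a real generator such as <strong>mcp-openapi-proxy</strong> or HAPI also handles request bodies, <code>$ref</code> resolution, and responses):</p>

```typescript
// Sketch: deriving an MCP tool definition from one OpenAPI operation.
interface OasParameter {
  name: string;
  in: "query" | "path" | "header";
  schema: { type: string; default?: unknown };
}

interface OasOperation {
  operationId: string;
  description?: string;
  parameters?: OasParameter[];
}

interface McpTool {
  name: string;
  description: string;
  inputSchema: { type: "object"; properties: Record<string, object> };
}

function operationToTool(op: OasOperation): McpTool {
  const properties: Record<string, object> = {};
  for (const p of op.parameters ?? []) {
    properties[p.name] = { ...p.schema }; // copy each parameter's schema
  }
  return {
    name: op.operationId, // the operationId becomes the tool name
    description: op.description ?? "",
    inputSchema: { type: "object", properties },
  };
}

const tool = operationToTool({
  operationId: "getUsers",
  description: "Retrieve a list of users.",
  parameters: [{ name: "limit", in: "query", schema: { type: "integer", default: 10 } }],
});
```

<p>Feeding in the <code>/users</code> snippet from earlier yields exactly the <code>getUsers</code> tool shown above.</p>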
<h2 id="heading-key-misconceptions-in-the-mcp-ecosystem">Key Misconceptions in the MCP Ecosystem</h2>
<p>There are a few common myths about MCP that our approach clarifies:</p>
<ul>
<li><p><strong>"You must implement an MCP server from scratch."</strong> <em>Reality:</em> Any existing API can act as an MCP server <em>contractually</em>. You can <a target="_blank" href="https://youtu.be/rOTdkSSHBBc">auto-generate the MCP spec from the API's OpenAPI definition</a>, then let <a target="_blank" href="https://modelcontextprotocol.io/quickstart/client#how-it-works">the client or model call the REST endpoints directly</a> (typically via an API gateway). In other words, you're just publishing the <strong>contract</strong>. As one expert noted, "<a target="_blank" href="https://www.linkedin.com/posts/adrianescutia_gorebels-activity-7349233480465080321-Yr2w#:~:text=You%20don%27t%20need%20MCP%20servers%3B,to%20interact%20with%20each%20other">you don't need to code MCP servers</a> at all – any RESTful API can function as an MCP server". This is akin to Swagger: you don't write code for the "server" entry, you implement the API and reference it in the spec. Go one step further: if you have an OpenAPI spec, <em>your existing API becomes an MCP server on the fly</em>. Keep reading; below we show how to do this with some magic tools.</p>
</li>
<li><p><strong>"MCP tools are different from API operations."</strong> <em>Reality:</em> In our view, the MCP "tools" <em>are</em> the API operations. In fact, the REST <code>operationId</code> can become the tool name. Each <a target="_blank" href="https://modelcontextprotocol.io/docs/learn/server-concepts#example%3A-taking-action">tool has a unique name and JSON schema</a> for its inputs. For example, an OpenAPI path with <code>operationId: getUsers</code> simply yields an MCP tool named <code>getUsers</code>. This naming correspondence means your LLM sees familiar operations: calling a tool is just like calling the original API, but through the standardized protocol.</p>
</li>
<li><p><strong>"MCP adds overhead or new dependencies."</strong> <em>Reality:</em> By deriving the MCP spec from existing API docs, you avoid rewriting code. Many teams in the industry are already doing this. For example, tools like the <a target="_blank" href="https://youtu.be/RGgFJcZ_PA4"><em>HAPI server</em></a> dynamically turn all endpoints in an OpenAPI spec into MCP tools. This ensures that as long as your API is documented, your MCP interface is up-to-date. You're reusing your API's security, logic, and data – not replicating it, which simplifies maintenance. The MCP protocol itself is a thin JSON-RPC layer, so implementation can be as lightweight as a small gateway process.</p>
</li>
</ul>
<h2 id="heading-openapi-vs-mcp-two-faces-of-integration">OpenAPI vs MCP: Two Faces of Integration</h2>
<p>OpenAPI and MCP serve <strong>complementary roles</strong>. OpenAPI (OAS) has long been the de facto contract for RESTful services: it tells machines <em>which endpoints exist and how to call them</em>. MCP is the analogous layer for AI agents: it tells models <em>which tools they can invoke and what arguments to pass</em>. One observer nicely summarized this: "<a target="_blank" href="https://www.linkedin.com/pulse/extending-apis-mcp-oas-two-faces-same-coin-adrian-escutia-zg68c?trk=public_post#:~:text=OpenAPI%20describes%20how%20machines%20talk,to%20machines">OpenAPI describes how machines talk to machines. MCP defines how models talk to applications</a>".</p>
<p>In practice, you use both: a conventional client (or agent) can call an API with REST as usual, and an LLM-based agent can call an MCP "tool" – but under the hood it's the same endpoint. An MCP-enabled architecture might look like this: your host application (e.g. a chat UI) connects to one or more MCP servers (each backed by APIs), as shown by <a target="_blank" href="https://medium.com/@elisowski/mcp-explained-the-new-standard-connecting-ai-to-everything-79c5a1c98288">Edwin Lisowski's diagram</a>. The host sends the user's query along with the <em>list of available tools</em> to the LLM. <a target="_blank" href="https://medium.com/@tahirbalarabe2/what-is-model-context-protocol-mcp-architecture-overview-c75f20ba4498#:~:text=Here%E2%80%99s%20how%20it%20works%20in,final%20answer%20for%20the%20user">The model then decides which tool to use</a>, and the host invokes that tool (via the standard REST call or via an MCP gateway). The result returns through MCP and then to the user.</p>
<p>This two-way flow ("host ⇄ server via MCP") is flexible: some servers might access local data, others remote services. Crucially, any API (local or cloud) can be plugged in. <a target="_blank" href="https://modelcontextprotocol.io/docs/learn/server-concepts#:~:text=Discover%20available%20tools,Tool%20execution%20result">The protocol standardizes discovery and invocation</a>: clients send <code>tools/list</code> to discover tools, then <code>tools/call</code> with a name and arguments to execute it. As Anthropic explains, the architecture is straightforward: developers <em>"</em><a target="_blank" href="https://www.anthropic.com/news/model-context-protocol#:~:text=The%20Model%20Context%20Protocol%20is,that%20connect%20to%20these%20servers"><em>expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.</em></a><em>"</em>.</p>
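<p>The discovery-then-invocation exchange described above boils down to two JSON-RPC requests; the payloads below are illustrative, reusing the <code>getUsers</code> tool from the earlier example:</p>

```typescript
// Step 1: the client discovers what the server offers.
const listRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/list",
};

// Step 2: the model picks a tool by name and the host invokes it.
const callRequest = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "tools/call",
  params: {
    name: "getUsers",          // a tool discovered via tools/list
    arguments: { limit: 10 },  // validated against the tool's inputSchema
  },
};
```

<p>The server answers <code>tools/list</code> with its tool schemas, and <code>tools/call</code> with a content payload the host hands back to the model.</p>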
<h2 id="heading-tools-first-not-server-first">Tools-First, Not Server-First</h2>
<p>Our approach is "tools-first": we focus on providing LLMs with <em>contractual access</em> to backend tools, rather than building a brand-new server implementation for each. Concretely, this means:</p>
<ul>
<li><p><strong>Generate the MCP spec from your existing API (OpenAPI).</strong> Tools like our HAPI server (Headless API) can read the OpenAPI file and emit the MCP tools schema. No new business logic is written; we simply wrap the contract. This lets the agent know, for example, that "getUsers" exists and takes an integer limit.</p>
</li>
<li><p><strong>Use an MCP gateway (runMCP) to manage connections.</strong> This component handles the JSON-RPC transport (stdio or HTTP) and routes calls to the real API endpoints. It also aggregates multiple tools into one logical "server" if needed.</p>
</li>
<li><p><strong>Invoke tools from the MCP client (chatMCP).</strong> The LLM sees a unified list of tools. When it decides to call one, the MCP client issues a <code>tools/call</code> request, which the gateway translates into the actual API call. The response is sent back to the LLM in structured JSON format. In practice, this is exactly what many products do: for instance, clients such as Claude Desktop, Cursor, or Cobie connect via stdio to MCP servers that wrap REST APIs.</p>
</li>
</ul>
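<p>The "gateway glue" amounts to little more than a lookup from tool name back to REST route. A hypothetical sketch (the route table and base URL are assumptions for illustration, not a real <code>runMCP</code> implementation):</p>

```typescript
// Sketch: translating an MCP tools/call into the underlying REST request.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Tool names map back to their REST routes (derived from the OpenAPI spec).
const routes: Record<string, { method: string; path: string }> = {
  getUsers: { method: "GET", path: "/users" },
};

function toHttpRequest(call: ToolCall, baseUrl: string): { method: string; url: string } {
  const route = routes[call.name];
  if (!route) throw new Error(`Unknown tool: ${call.name}`);
  // Query-style arguments become the query string of the REST call.
  const query = new URLSearchParams(
    Object.entries(call.arguments).map(([k, v]) => [k, String(v)])
  ).toString();
  return {
    method: route.method,
    url: baseUrl + route.path + (query ? `?${query}` : ""),
  };
}

const req = toHttpRequest(
  { name: "getUsers", arguments: { limit: 10 } },
  "https://api.example.com"
);
// req.url === "https://api.example.com/users?limit=10"
```

<p>Everything else (authentication, error mapping, response shaping) is handled by the API you already run; the gateway only forwards.</p>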
<p>By contrast, the "MCP hype" often suggests building a separate software server for each data source. We challenge that: you already have servers! For example, if you have an OData or REST API for your database, that <em>can be</em> the MCP server. You only need to publish its OpenAPI spec so agents know how to talk to it. In short:</p>
<ul>
<li><p>If you can call the API now, you can call it via MCP.</p>
</li>
<li><p>Each API operation is simply a tool in the MCP world.</p>
</li>
<li><p>No new code (beyond the gateway glue) is needed to implement the business logic.</p>
</li>
</ul>
<p>This cuts development effort dramatically. As one architect put it, <a target="_blank" href="https://www.linkedin.com/pulse/extending-apis-mcp-oas-two-faces-same-coin-adrian-escutia-zg68c?trk=public_post#:~:text=MCP%20gives%20AI%20models%20a,It">MCP gives AI models a <em>shared language</em> for your stack</a>. The MCP spec is generated "on the fly" from what the server already exposes – so maintaining your APIs automatically maintains your AI contract.</p>
<h2 id="heading-example-conversion-openapi-mcp">Example Conversion: OpenAPI → MCP</h2>
<p>To make this concrete, imagine a typical <strong>user service</strong> with an OpenAPI spec (a small excerpt is shown above). In the OAS we have:</p>
<ul>
<li><p><code>GET /users?limit=10</code> with <code>operationId: getUsers</code></p>
</li>
<li><p><code>POST /users</code> with <code>operationId: createUser</code></p>
</li>
<li><p><code>GET /users/{id}</code> with <code>operationId: getUserById</code></p>
</li>
<li><p>etc.</p>
</li>
</ul>
<p>When we run HAPI (or another OAS-to-MCP tool), it might produce an MCP contract listing tools like:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"tools"</span>: [
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"getUsers"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Returns a list of users"</span>,
      <span class="hljs-attr">"inputSchema"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
        <span class="hljs-attr">"properties"</span>: {
          <span class="hljs-attr">"limit"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"integer"</span>, <span class="hljs-attr">"default"</span>: <span class="hljs-number">10</span> }
        }
      }
    },
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"createUser"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Create a new user"</span>,
      <span class="hljs-attr">"inputSchema"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
        <span class="hljs-attr">"properties"</span>: {
          <span class="hljs-attr">"body"</span>: { 
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
            <span class="hljs-attr">"properties"</span>: {
              <span class="hljs-attr">"name"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> },
              <span class="hljs-attr">"email"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>, <span class="hljs-attr">"format"</span>: <span class="hljs-string">"email"</span> }
            },
            <span class="hljs-attr">"required"</span>: [<span class="hljs-string">"name"</span>, <span class="hljs-string">"email"</span>]
          }
        }
      }
    },
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"getUserById"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Returns a user by ID"</span>,
      <span class="hljs-attr">"inputSchema"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
        <span class="hljs-attr">"properties"</span>: {
          <span class="hljs-attr">"id"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> }
        }
      }
    }
    <span class="hljs-comment">// ... additional tools from other endpoints ...</span>
  ]
}
</code></pre>
<p>This closely follows the <a target="_blank" href="https://modelcontextprotocol.io/docs/learn/server-concepts#tools-ai-actions">official MCP tool definition structure</a>. The <code>name</code> field is the unique tool name (we use the OpenAPI operationId), and <code>inputSchema</code> is a JSON Schema object for the tool's parameters. The <code>description</code> helps the LLM understand its purpose. The MCP server (gateway) would advertise these tools via <code>tools/list</code>, and the LLM can pick one by name. Then the gateway performs the underlying REST call, sends the response back as MCP <code>content</code>, and the model incorporates it.</p>
<p>By keeping the tool definitions in sync with the OpenAPI, teams ensure that every change to the API automatically updates the MCP interface. In our experience, architects are often pleasantly surprised at how minimal the extra work is: it's essentially just <em>publishing</em> the API spec over a standard channel. Many open-source proxies (like <a target="_blank" href="https://github.com/matthewhand/mcp-openapi-proxy#:~:text=%2A%20Low,based%20on%20static%20configurations">mcp-openapi-proxy</a>) operate this way out of the box.</p>
<h3 id="heading-oas-vs-mcp-contract-level-feature-comparison">📊 OAS vs. MCP – Contract-Level Feature Comparison</h3>
<p>Here's a <strong>side-by-side comparison table</strong> showing the similarities between <strong>OpenAPI Specification (OAS)</strong> and <strong>Model Context Protocol (MCP)</strong>. This focuses on their <strong>structural parallels</strong>, especially in how <strong>OAS defines contracts</strong> vs. how <strong>MCP defines intents, tools, and context</strong>, helping technical product managers and architects bridge the conceptual gap:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature / Concept</td><td>OpenAPI Specification (OAS)</td><td>Model Context Protocol (MCP)</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Primary Purpose</strong></td><td>Define RESTful API contract (endpoints, methods, payloads)</td><td>Define Intent-Tool interactions for agentic or AI-enhanced systems</td></tr>
<tr>
<td><strong>Spec Format</strong></td><td>JSON/YAML using OpenAPI v3+</td><td>JSON-based (MCP.json or embedded)</td></tr>
<tr>
<td><strong>Operations</strong></td><td><code>operationId</code> used to uniquely identify operations</td><td><code>tool.name</code> used to uniquely identify tools (similar to functions)</td></tr>
<tr>
<td><strong>Endpoints</strong></td><td>Defined under <code>paths</code>, each with HTTP verbs (<code>get</code>, <code>post</code>, etc.)</td><td>Defined implicitly via <code>tool.name</code> + <code>input</code> schema</td></tr>
<tr>
<td><strong>Inputs</strong></td><td><code>parameters</code>, <code>requestBody</code>, and schema references</td><td><code>input_schema</code> (JSON Schema)</td></tr>
<tr>
<td><strong>Outputs / Responses</strong></td><td><code>responses</code> with schema (200, 400, etc.)</td><td><code>output_schema</code> defines expected response format</td></tr>
<tr>
<td><strong>Server Implementation</strong></td><td>API server or framework auto-generates routes (e.g. FastAPI)</td><td>MCP agent or <code>HAPI Server</code> interprets tools and connects via context</td></tr>
<tr>
<td><strong>Docs / UI Tooling</strong></td><td>Swagger UI, Redoc</td><td><code>chatMCP</code> client as UI for triggering tool use via intent</td></tr>
<tr>
<td><strong>Security</strong></td><td><code>securitySchemes</code> (API key, OAuth2, JWT, etc.)</td><td>Context-aware, authorization not yet standardized</td></tr>
<tr>
<td><strong>Extensibility</strong></td><td>Via <code>x-</code> custom properties, plugins, generators</td><td>Native support for extended metadata like <code>description</code>, <code>examples</code>, etc.</td></tr>
<tr>
<td><strong>Versioning</strong></td><td><code>openapi: 3.x.x</code>, plus custom versioning strategies</td><td><code>version</code> field in MCP spec</td></tr>
<tr>
<td><strong>Primary Use Case</strong></td><td>Human-to-API, API-first design and testing</td><td>Agent-to-Tool communication, AI workflows, function chaining</td></tr>
<tr>
<td><strong>Tools</strong></td><td>Swagger Codegen, OpenAPI Generator, Postman</td><td><code>HAPI Server</code>, <code>runMCP</code>, <code>chatMCP</code>, and MCP-compatible tools</td></tr>
<tr>
<td><strong>Contract Source of Truth</strong></td><td><code>.yaml</code> or <code>.json</code> file</td><td><code>mcp.json</code> or embedded in AI agent memory / local registry</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-interpretation">🔍 Interpretation</h3>
<ul>
<li><p><strong>OAS is to APIs what MCP is to Agents</strong> — both define contracts, but for different execution contexts.</p>
</li>
<li><p><strong>operationId ≈ tool.name</strong> – this is the most precise analog between the two: both uniquely identify callable logic units.</p>
</li>
<li><p><strong>Request/response schemas</strong> are equally critical in both — they define the expected structure, enabling validation and introspection.</p>
</li>
<li><p><strong>Security and extensibility</strong> are evolving in MCP, much like OAS has matured over time.</p>
</li>
<li><p><strong>UI tooling</strong>, such as Swagger UI in OAS, is analogous to <code>chatMCP</code> in MCP: both provide a user-friendly interface for interacting with the defined contracts.</p>
</li>
<li><p><strong>Versioning and extensibility</strong> are handled similarly, allowing both OAS and MCP to evolve without breaking existing contracts.</p>
</li>
<li><p><strong>Primary use cases differ</strong>: OAS focuses on API design and documentation, while MCP enables AI agents to interact with tools and data sources.</p>
</li>
<li><p><strong>Tools ecosystem</strong> is growing for MCP, similar to how OAS has a rich set of generators and clients.</p>
</li>
<li><p><strong>Source of truth</strong> for both is the contract file itself, whether it's an OpenAPI spec or an MCP JSON file.</p>
</li>
<li><p><strong>Both OAS and MCP are about contracts, not implementations</strong> – they define how to interact with services, not how those services are built.</p>
</li>
</ul>
<h2 id="heading-benefits-and-metrics-of-a-contract-first-mcp-approach">Benefits and Metrics of a Contract-First MCP Approach</h2>
<p>This tools-first strategy yields measurable benefits for technical teams. Here are some key metrics (KPIs) that improve under this approach:</p>
<ul>
<li><p><strong>Integration Speed:</strong> Onboarding a new data source is much faster. Instead of weeks of adapter coding, you generate the MCP contract from the existing API in minutes.</p>
</li>
<li><p><strong>Developer Productivity:</strong> Engineers spend less time writing boilerplate. The MCP gateway handles the JSON-RPC plumbing. Developers can focus on adding real value to the API itself.</p>
</li>
<li><p><strong>Reuse and Consistency:</strong> By leveraging the OpenAPI spec, you ensure <em>one source of truth</em>. There's no risk of the MCP interface drifting from the actual API. Consistency can be measured by <em>coverage</em>: e.g., what percentage of endpoints are exposed as tools.</p>
</li>
<li><p><strong>Scalability:</strong> As your API surface grows, the tooling scales automatically. More endpoints automatically become available tools without additional coding. A possible KPI is the <em>number of integrated tools per quarter</em>, which should climb rapidly.</p>
</li>
<li><p><strong>Maintainability:</strong> Fewer moving parts (no custom MCP servers) means easier upkeep. An MCP contract auto-generated from OAS reduces maintenance cost. One could track <em>time spent on MCP-related bugs</em> dropping.</p>
</li>
<li><p><strong>Security and Governance:</strong> Standard contracts enable uniform policies. For example, you can apply the same authentication rules across all tools. Metrics here include <em>compliance checks passed</em> or <em>auditable traceability</em> of tool calls.</p>
</li>
<li><p><strong>User Adoption:</strong> Finally, from the product perspective, a consistent protocol can drive usage. One could survey developers or product managers for satisfaction – an important KPI in itself – or measure <em>number of agent use-cases enabled</em>.</p>
</li>
</ul>
<p>All these KPIs are <em>founded on sound principles</em>. For instance, using the well-known OpenAPI spec leverages existing developer skills, reducing training time. Relying on a standardized JSON-RPC protocol means better compatibility between different LLM platforms. In aggregate, we expect metrics like "time to market" and "number of connected services" to improve when the team adopts this contract-first MCP approach.</p>
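<p>The <em>coverage</em> KPI mentioned above is straightforward to compute from the spec itself. A minimal sketch with hypothetical operation IDs (a real check would walk the parsed OpenAPI document and query the gateway's <code>tools/list</code>):</p>
<pre><code class="lang-python">def tool_coverage(spec_paths, exposed_tool_names):
    """Fraction of OpenAPI operations currently exposed as MCP tools."""
    op_ids = [
        op["operationId"]
        for methods in spec_paths.values()
        for op in methods.values()
        if "operationId" in op
    ]
    if not op_ids:
        return 0.0
    covered = [oid for oid in op_ids if oid in exposed_tool_names]
    return len(covered) / len(op_ids)

# Hypothetical spec fragment: three operations, two exposed as tools
paths = {
    "/instances": {"get": {"operationId": "listInstances"},
                   "post": {"operationId": "createInstance"}},
    "/instances/{id}": {"get": {"operationId": "getInstanceById"}},
}
coverage = tool_coverage(paths, {"listInstances", "getInstanceById"})
print(round(coverage, 2))  # 0.67 (2 of 3 operations covered)
</code></pre>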
<h2 id="heading-introducing-the-happy-mcp-stack">Introducing the <em>Happy MCP Stack</em></h2>
<p>To put these ideas into practice, we've built a small MCP "stack":</p>
<p><img src="https://cdn.rebelion.la/img/mcp/hapi-mcp-diagram.svg" alt="Happy MCP Stack" /></p>
<ul>
<li><p><a target="_blank" href="https://hapi.mcp.com.ai"><strong>HAPI Server</strong></a> <strong>(Headless API and MCP Gateway):</strong> A CLI tool that reads an OpenAPI spec and instantly serves an MCP contract. Think of it as "Swagger + MCP" without writing code. Under the hood, HAPI generates the <code>tools</code> definitions from your paths/operationIds and launches a JSON-RPC endpoint. This way you can make any REST service MCP-ready on the fly.</p>
</li>
<li><p><a target="_blank" href="https://run.mcp.com.ai"><strong>runMCP</strong></a> <strong>(MCP Control Plane):</strong> This component manages multiple MCP server instances. It acts like your control plane: you configure which HAPI instances (i.e. which specs) to run, and runMCP ensures they're reachable by the agent. It handles routing calls from the LLM to the right MCP server and can also aggregate tools across servers. It's our answer to "How do I host and scale MCP servers?" without heavy infrastructure changes.</p>
</li>
<li><p><a target="_blank" href="https://chat.mcp.com.ai"><strong>chatMCP</strong></a> <strong>(MCP Client Agent):</strong> This is a user-friendly client for interacting with the MCP tools from a conversation or automation. Imagine a WhatsApp-like interface between you and an AI agent: chatMCP lets a human ask the AI for something, the AI uses the MCP tools behind the scenes, and the result flows back in chat. It leverages the standard MCP client libraries to manage the request/response loop. In demos we show how an agent can "chat" with these MCP tools as if they were part of its own language.</p>
</li>
</ul>
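<p>For reference, the wire format these components speak is plain JSON-RPC 2.0. A sketch of the two core client messages, with a hypothetical tool name and arguments (request ids are arbitrary):</p>
<pre><code class="lang-python">import json

# Ask the server what tools it offers
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoke one tool by name; the gateway translates this into a REST call
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "getInstanceById",         # hypothetical tool
               "arguments": {"id": "vmi12345"}},  # hypothetical instance id
}

print(json.dumps(list_request))
</code></pre>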
<p>Together, this stack embodies our "contract-first" philosophy. <strong>You don't code custom servers</strong>; you just plug existing APIs into MCP via <a target="_blank" href="https://github.com/la-rebelion/hapimcp">HAPI Servers</a>, orchestrate with <a target="_blank" href="https://github.com/la-rebelion/runmcp">runMCP</a>, and interact with <a target="_blank" href="https://github.com/la-rebelion/chatmcp">chatMCP</a>.</p>
<h2 id="heading-the-path-forward">The Path Forward</h2>
<p>MCP is indeed poised to be a new standard in AI integration, but we emphasize simplicity over hype. The core insight is this: <strong>if your API is RESTful and documented, it can be an MCP service.</strong> You only need to publish the contract. In other words, <em>we are not creating a new backend; we are exposing the existing one</em>. This shifts the focus from "building servers" to "publishing tools".</p>
<p>For architects and product managers, that means reusing what you have: existing microservices, databases, and SaaS APIs become immediately LLM-ready. The only code you write is for joining pieces (the gateway), not for implementing domain logic twice. This reduces cost and risk while opening up your tools to powerful AI agents.</p>
<p>As MCP matures, we'll see even more marketplaces and integrations (Anthropic's ecosystem, Community repos, etc.). Our hope is that by clarifying these misconceptions – and measuring impact with solid KPIs – teams will adopt the simplest successful strategy: <strong>tools already exist; just give models the contract</strong>.</p>
<p>In summary, MCP <strong>is a contract standard, not another platform framework</strong>. Think of it as a <em>language for agents</em>, built on familiar foundations (OpenAPI, JSON, HTTP). By focusing on the contract, we demystify MCP, avoid reinventing servers, and speed up AI integration. As one developer put it: MCP simply <em>"</em><a target="_blank" href="https://medium.com/@elisowski/mcp-explained-the-new-standard-connecting-ai-to-everything-79c5a1c98288#:~:text=MCP%20replaces%20one-off%20hacks,for%20autonomous%20agents"><em>replaces one-off hacks with a unified, real-time protocol</em></a><em>"</em>. That's a future where AI agents can do work with the tools we already have – and we believe that's exactly what technical architects and product managers need.</p>
<p>Curious about how to get started? Check out our <a target="_blank" href="https://docs.mcp.com.ai/">Happy MCP Stack documentation</a> for a quick guide on turning your OpenAPI specs into MCP tools. Or, if you want to see it in action, try our <a target="_blank" href="https://chat.mcp.com.ai">chatMCP</a> and <a target="_blank" href="https://run.mcp.com.ai">runMCP</a> or watch our <a target="_blank" href="https://go.rebelion.la/chatmcp-demos">demo videos</a> to see how easy it is to integrate AI agents with existing APIs using MCP. You can also join our community on <a target="_blank" href="https://discord.gg/EpHzbPee">Discord</a> to experience the power of AI agents using MCP tools directly.</p>
<p><a target="_blank" href="https://go.rebelion.la/contact-us">Drop us a line</a> if you have questions or want to collaborate on MCP projects. We're excited to see how the community will leverage this protocol to build innovative AI applications.</p>
<p>Let's build the future of AI integration together! Go Rebels! ✊🏽</p>
<h2 id="heading-additional-resources">Additional Resources</h2>
<ul>
<li><p><a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol (MCP) Official Site</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/la-rebelion/hapimcp">HAPI Server GitHub Repository</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/la-rebelion/runmcp">runMCP GitHub Repository</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/la-rebelion/chatmcp">chatMCP GitHub Repository</a></p>
</li>
<li><p><a target="_blank" href="https://docs.mcp.com.ai/">Happy MCP Stack Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://modelcontextprotocol.io/quickstart/client">MCP Quickstart Guide</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/RGgFJcZ_PA4">HAPI Server Demo Video</a></p>
</li>
<li><p><a target="_blank" href="https://community.mcp.com.ai/">MCP Community Forum</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[OpenAPI & MCP: The Yin-Yang of Modern Intelligent Applications]]></title><description><![CDATA[MCP, or Model Context Protocol, is a hot topic in the developers' community. If you haven't heard of it, you can check out the MCP specification or search for "MCP" on YouTube; you'll find many videos about it!
Many developers, at least at first glan...]]></description><link>https://rebelion.la/openapi-and-mcp-the-yin-yang-of-modern-intelligent-applications</link><guid isPermaLink="true">https://rebelion.la/openapi-and-mcp-the-yin-yang-of-modern-intelligent-applications</guid><category><![CDATA[mcp]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[swagger]]></category><category><![CDATA[OpenApi]]></category><category><![CDATA[REST API]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Mon, 09 Jun 2025 01:21:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749238473245/52a00564-8d12-43ed-aab4-233904e01845.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>MCP, or Model Context Protocol, is a hot topic in the developers' community. If you haven't heard of it, you can check out the <a target="_blank" href="https://modelcontextprotocol.io/">MCP specification</a> or search for "MCP" on YouTube; you'll find many videos about it!</p>
<p>Many developers, at least at first glance, think that MCP is overrated and just another way to refer to APIs. That’s partially correct, but it is much more than that. It standardizes how LLMs communicate with applications; it is the "Natural Language" of models and makes them more accessible to developers.</p>
<p>That's precisely why I want to discuss it. MCP is a protocol that allows developers to integrate AI models into their applications in a standardized way, making it easier to build and maintain AI-powered applications. It provides a common language for models to communicate with applications, enabling developers to focus on building features instead of worrying about the underlying model implementation.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/two-faces-same-coin-headless.png" alt class="image--center mx-auto" /></p>
<p>Swagger, or OpenAPI Specifications, is familiar to most people. The OpenAPI and MCP specifications are two sides of the same coin. Both define how APIs should be designed and documented, but serve different purposes. OpenAPI focuses on RESTful APIs, while MCP centers on LLM integration.</p>
<p>I have implemented a holistic solution called <code>hapi</code> (headless API) to facilitate the implementation of both OpenAPI and MCP specifications. Let me show you how to use it with a practical example.</p>
<p>My VM instances run on <a target="_blank" href="https://www.kqzyfj.com/click-101444381-17083152">Contabo</a> (affiliate link), a VPS provider. Using the <code>hapi</code> CLI tool, I will demonstrate how to integrate MCP without writing a single line of code.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/RGgFJcZ_PA4">https://youtu.be/RGgFJcZ_PA4</a></div>
<p>First, download or save your OpenAPI specification file. In this example, it's <code>contabo.json</code>, which describes the API of the Contabo VPS provider. You can find it in the <a target="_blank" href="https://api.contabo.com">Contabo API documentation</a>.</p>
<p>The file must be saved in the <code>$APICOVE_HOME/specs</code> directory (by default under your home directory, i.e. <code>~/.apicove/specs/contabo.json</code>) or in your current working directory (where you run the <code>hapi</code> command). The <code>list</code> argument of the <code>hapi</code> CLI tool shows the available specifications:</p>
<pre><code class="lang-bash">$ hapi list
List of APIs
- petstore
- contabo
- oci-registry
- agentico
- petstore-oasv2
</code></pre>
<p>In a <strong><mark>greenfield</mark></strong> project, you can create a new one using the <code>hapi init</code> command:</p>
<pre><code class="lang-bash">$ hapi init contabo
</code></pre>
<p>This command will use the Swagger specification file <code>contabo.json</code> (or <code>contabo.yaml</code>) to generate the controller scaffolding and save the files in the <code>$APICOVE_HOME/src/contabo</code> directory.</p>
<p>There are two ways to run the <code>hapi</code> command: using the "headless mode" or the "MCP mode."</p>
<p>In the "MCP mode", you can run the command as follows:</p>
<pre><code class="lang-bash">$ hapi run contabo --mcp
</code></pre>
<p>This command prints some API stats, like the number of endpoints and models. Then, it starts a web server with the entire MCP implementation, including the OpenAPI specification, the MCP specification, and the controller scaffolding. If you test the endpoints using a tool like Postman or the built-in Swagger UI, you will see that the endpoints are already implemented and ready to use, without any business logic yet. You can then implement the business logic in the controllers located in the <code>$APICOVE_HOME/src/contabo</code> directory.</p>
<p>The "headless mode" is intended for <strong><mark>brownfield</mark></strong> projects, where you already have an existing backend application and want to integrate the MCP part. The "head" part is the current backend application, and the <code>server</code> defined in the OpenAPI specification consumes its business logic.</p>
<p>In this mode, you can run the command as follows:</p>
<pre><code class="lang-bash">$ hapi run contabo --headless
</code></pre>
<p>This command will start a web server with the MCP implementation but will not use the controller scaffolding. Instead, it will consume the business logic using the existing backend application at <a target="_blank" href="https://api.contabo.com/">https://api.contabo.com/</a>.</p>
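<p>Conceptually, headless mode is a thin resolution step: each tool call maps to an upstream REST endpoint, with the base URL taken from the <code>servers</code> entry in the spec. A hypothetical sketch (the operation IDs and paths are illustrative, not Contabo's actual spec):</p>
<pre><code class="lang-python"># Base URL comes from the OpenAPI `servers` section
BASE_URL = "https://api.contabo.com"

# operationId -> (HTTP method, path template); entries are illustrative
ROUTES = {
    "listInstances": ("GET", "/v1/compute/instances"),
    "getInstanceById": ("GET", "/v1/compute/instances/{id}"),
}

def resolve_call(tool_name, arguments):
    """Resolve an MCP tool call to the upstream HTTP method and URL."""
    method, path = ROUTES[tool_name]
    return method, BASE_URL + path.format(**arguments)

print(resolve_call("getInstanceById", {"id": "vmi12345"}))
# ('GET', 'https://api.contabo.com/v1/compute/instances/vmi12345')
</code></pre>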
<p>In the demo video, you can see how I use the <code>hapi</code> CLI tool to run the MCP implementation in both modes and test the endpoints using Postman and Swagger UI. I retrieve the list of VPS instances via MCP and finally show the actual Contabo Admin Panel, where I list the same VPS instances, to prove that the MCP implementation is working correctly.</p>
<p>Would you like to see more examples of how to use <code>hapi</code> MCP? Or do you have any questions about the implementation? Let me know in the comments below!</p>
<p>Connect with me on <a target="_blank" href="https://x.com/LaRebelionLabs">Twitter</a> or <a target="_blank" href="http://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=adrianescutia">LinkedIn</a> to stay updated on the latest developments by "La Rebelion".</p>
<p>Want to download the <code>hapi</code> CLI tool? Comment on LinkedIn or Twitter, and I will send you the link to download it.</p>
<p>Be “HAPI” and Go Rebels! ✊🏽</p>
]]></content:encoded></item><item><title><![CDATA[How to Build a CLI That Integrates with RESTful APIs in DevOps]]></title><description><![CDATA[Learn how to structure your CLI for seamless API integration, efficiency, and better user experience—whether following Kubernetes’ action-first model, ArgoCD’s resource-first approach, or a hybrid like Docker’s CLI.
CLI Design Patterns: Action vs. Re...]]></description><link>https://rebelion.la/how-to-build-a-cli-that-integrates-with-restful-apis-in-devops</link><guid isPermaLink="true">https://rebelion.la/how-to-build-a-cli-that-integrates-with-restful-apis-in-devops</guid><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[command line]]></category><category><![CDATA[Scripting]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Mon, 24 Feb 2025 23:24:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740437879339/73a91f17-6598-4ffd-a2a3-e01b2ebe3640.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Learn how to structure your CLI for seamless API integration, efficiency, and better user experience—whether following Kubernetes’ action-first model, ArgoCD’s resource-first approach, or a hybrid like Docker’s CLI.</p>
<h1 id="heading-cli-design-patterns-action-vs-resource">CLI Design Patterns: Action vs. Resource</h1>
<p><a target="_blank" href="https://en.wikipedia.org/wiki/Command-line_interface">Command-line interfaces</a> (CLIs) are the backbone of DevOps workflows. However, not all CLIs are built the same. Some group commands by <strong>resource type</strong>, like <a target="_blank" href="https://docs.cloudify.co">Cloudify</a>'s <a target="_blank" href="https://docs.cloudify.co/latest/cli/"><code>cfy</code></a>, while others, like <a target="_blank" href="https://kubernetes.io">Kubernetes</a>' <a target="_blank" href="https://kubectl.docs.kubernetes.io/references/kubectl/"><code>kubectl</code></a>, group commands by <strong>action</strong> (or verb: get, delete).</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/cli-design-patterns/cli-design-patterns.svg" alt="CLI Design Patterns" /></p>
<p>Whether you are a DevOps engineer building a CLI to automate your workflows, or a CLI user wondering why some commands are grouped by resource type and others by action, you have probably asked yourself:</p>
<p>Which approach is better? Why do some tools favor one over the other? And how does this impact usability and automation? In this post, we'll break down the differences, explore a hybrid approach, and discuss how I applied these principles in designing my CLIs for the "La Rebelion" community: <a target="_blank" href="https://apicove.com/gyat/"><code>gyat</code></a>, <a target="_blank" href="https://apicove.com/hapi"><code>hapi</code></a>, and <a target="_blank" href="https://agentico.dev/docs/intro"><code>agentico</code></a>.</p>
<h2 id="heading-grouping-by-resource-type-cloudify-cli-style">Grouping by Resource Type (Cloudify CLI Style)</h2>
<p>Cloudify's <code>cfy</code> CLI organizes commands based on what you are working with.</p>
<p>Example:</p>
<pre><code class="lang-bash">cfy blueprint upload my-blueprint.yaml
cfy deployment create my-deployment
cfy execution start -d my-deployment
</code></pre>
<p>Here, the <strong>blueprints</strong>, <strong>deployments</strong>, and <strong>executions</strong> are first-class citizens, and all related actions are nested under their resource type.</p>
<p>✅ Pros:</p>
<ul>
<li><p>Intuitive for users who think in terms of domain objects.</p>
</li>
<li><p>Reduces ambiguity when working with complex resources.</p>
</li>
<li><p>It is more straightforward to document and organize.</p>
</li>
</ul>
<p>❌ Cons:</p>
<ul>
<li><p>It can be verbose, requiring more typing.</p>
</li>
<li><p>Users must remember different command structures for each resource type.</p>
</li>
<li><p>It is harder to generalize across different resource types.</p>
</li>
</ul>
<h2 id="heading-grouping-by-action-kubernetes-cli-style">Grouping by Action (Kubernetes CLI Style)</h2>
<p>Kubernetes' <code>kubectl</code> CLI flips the approach—users start with what they want to do, followed by the resource type.</p>
<p>Example:</p>
<pre><code class="lang-bash">kubectl get pods
kubectl delete service my-service
kubectl apply -f deployment.yaml
</code></pre>
<p>The focus here is on the actions (<code>get</code>, <code>delete</code>, <code>apply</code>), which are reusable across many resource types.</p>
<p>✅ Pros:</p>
<ul>
<li><p>More consistent and predictable across different resource types.</p>
</li>
<li><p>Easier for automation and scripting.</p>
</li>
<li><p>Reduces cognitive load since users only need to remember actions, not different resource structures.</p>
</li>
</ul>
<p>❌ Cons:</p>
<ul>
<li><p>It may feel unnatural to those used to object-oriented workflows.</p>
</li>
<li><p>Resource types must be specified explicitly every time.</p>
</li>
<li><p>Some commands can become longer and harder to read.</p>
</li>
</ul>
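<p>The two groupings can be sketched with standard <code>argparse</code> subcommands. This is an illustrative toy, not any real tool's implementation:</p>
<pre><code class="lang-python">import argparse

def build_action_first():
    # kubectl style: verb first, resource as an argument
    p = argparse.ArgumentParser(prog="mycli")
    sub = p.add_subparsers(dest="action", required=True)
    get = sub.add_parser("get")
    get.add_argument("resource")  # e.g. "pods"
    return p

def build_resource_first():
    # cfy style: resource first, verb as a nested subcommand
    p = argparse.ArgumentParser(prog="mycli")
    sub = p.add_subparsers(dest="resource", required=True)
    pod = sub.add_parser("pod")
    pod_sub = pod.add_subparsers(dest="action", required=True)
    pod_sub.add_parser("list")
    return p

a = build_action_first().parse_args(["get", "pods"])
r = build_resource_first().parse_args(["pod", "list"])
print(a.action, a.resource)  # get pods
print(r.resource, r.action)  # pod list
</code></pre>
<p>Notice how the action-first parser reuses one generic <code>resource</code> argument, while the resource-first parser needs a dedicated subcommand tree per resource; this is the verbosity/consistency trade-off discussed above.</p>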
<h2 id="heading-hybrid-approach-terraform-cli-style">Hybrid Approach: Terraform CLI Style</h2>
<p>Terraform uses a hybrid approach, mixing action-first and resource-first paradigms.</p>
<p>Example:</p>
<pre><code class="lang-bash">terraform apply
terraform state list
</code></pre>
<p>In the example, <code>terraform apply</code> is action-first, while <code>terraform state list</code> is more resource-focused.</p>
<p>This approach provides flexibility, allowing users to interact with high-level actions and granular resources. It's ideal for tools that have a mix of static and dynamic resources.</p>
<h2 id="heading-why-choose-one-approach-over-the-other">Why Choose One Approach Over the Other?</h2>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/cli-design-patterns/cli-design-approaches.svg" alt="CLI Design Approaches" /></p>
<p>The choice between these approaches depends on your tool's use case, target audience, and design goals. Here are some factors to consider:</p>
<ul>
<li><p><strong>Resource Nature</strong>: A resource-first approach (<code>cfy</code> style) is intuitive if your tool deals with well-defined domain objects. If it works with dynamic or API-driven resources, an action-first approach (<code>kubectl</code> style) keeps things predictable.</p>
</li>
<li><p><strong>User Experience</strong>: Consider how users will interact with your tool. If they think in terms of objects, a resource-first approach is better. If they think about actions, an action-first approach is more suitable.</p>
</li>
<li><p><strong>Automation</strong>: If your tool needs to be easily scriptable, an action-first approach (<code>kubectl</code> style) is more flexible.</p>
</li>
<li><p><strong>Complexity</strong>: A hybrid approach (<code>terraform</code> style) offers the best of both worlds for tools with a mix of static and dynamic resources.</p>
</li>
</ul>
<h2 id="heading-examples-of-cli-design-patterns-in-action">Examples of CLI Design Patterns in Action</h2>
<p>Based on their command structure, here's a classification of popular CLI tools in the Cloud (CNCF) and DevOps space.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>CLI Tool</strong></td><td><strong>Approach</strong></td><td><strong>Example Command</strong></td><td><strong>Notes</strong></td></tr>
</thead>
<tbody>
<tr>
<td><a target="_blank" href="https://docs.cloudify.co/latest/cli/orch_cli/"><strong>cfy</strong></a></td><td>Resource-Type-First</td><td><code>cfy blueprint upload</code></td><td>Cloudify groups command by resource type (<code>blueprint</code>, <code>deployment</code>, etc.).</td></tr>
<tr>
<td><a target="_blank" href="https://docs.openstack.org/newton/user-guide/common/cli-install-openstack-command-line-clients.html"><strong>openstack</strong></a></td><td>Resource-Type-First</td><td><code>openstack server create</code></td><td>Organizes commands by resource type (<code>server</code>, <code>network</code>, etc.).</td></tr>
<tr>
<td><a target="_blank" href="https://developer.hashicorp.com/vault/docs/commands"><strong>vault</strong></a></td><td>Resource-Type-First</td><td><code>vault secrets enable</code></td><td>Commands are grouped by resource type (<code>secrets</code>, <code>policy</code>, <code>auth</code>).</td></tr>
<tr>
<td><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd/"><strong>argocd</strong></a></td><td>Resource-Type-First</td><td><code>argocd app create</code></td><td>Commands are prefixed with a general resource (<code>app</code>, <code>repo</code>, <code>cluster</code>).</td></tr>
<tr>
<td><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/"><strong>aws</strong></a></td><td>Resource-Type-First</td><td><code>aws s3 ls</code></td><td>Commands are grouped by service (<code>s3</code>, <code>ec2</code>, <code>iam</code>) before performing actions.</td></tr>
<tr>
<td><a target="_blank" href="https://github.com/tektoncd/cli#useful-commands"><strong>tkn</strong></a></td><td>Resource-Type-First</td><td><code>tkn pipeline start</code></td><td>Tekton emphasizes resource types (<code>pipeline</code>, <code>task</code>, <code>taskrun</code>).</td></tr>
<tr>
<td><a target="_blank" href="https://kubectl.docs.kubernetes.io/references/kubectl/"><strong>kubectl</strong></a></td><td>Action-First</td><td><code>kubectl get pods</code></td><td>Kubernetes CLI groups by actions (<code>get</code>, <code>apply</code>, <code>delete</code>, etc.).</td></tr>
<tr>
<td><a target="_blank" href="https://helm.sh/docs/helm/"><strong>helm</strong></a></td><td>Action-First</td><td><code>helm install myapp</code></td><td>Helm focuses on actions (<code>install</code>, <code>upgrade</code>, <code>uninstall</code>).</td></tr>
<tr>
<td><a target="_blank" href="https://fluxcd.io/flux/cmd/flux/"><strong>flux</strong></a></td><td>Action-First</td><td><code>flux bootstrap github</code></td><td>Primarily action-driven for GitOps workflows.</td></tr>
<tr>
<td><a target="_blank" href="https://docs.docker.com/reference/cli/docker/"><strong>docker</strong></a></td><td>Hybrid</td><td><code>docker run nginx</code></td><td>For "common commands" like <code>run</code>, <code>pull</code>, <code>login</code>, it uses an action-first approach that "hides" the resource type (<code>container</code>, <code>image</code>); for more complex commands, it uses a <strong>resource-first</strong> approach.</td></tr>
<tr>
<td><a target="_blank" href="https://www.pulumi.com/docs/iac/cli/commands/"><strong>pulumi</strong></a></td><td>Hybrid</td><td><code>pulumi up</code></td><td>Uses both action-first (<code>up</code>, <code>destroy</code>) and resource-first (<code>stack</code>, <code>config</code>) approaches.</td></tr>
<tr>
<td><a target="_blank" href="https://developer.hashicorp.com/terraform/cli/commands"><strong>terraform</strong></a></td><td>Hybrid</td><td><code>terraform apply</code></td><td>Mixes both approaches; core commands like <code>apply</code> and <code>plan</code> are action-based, but subcommands like <code>terraform state</code> are <strong>resource-oriented</strong>.</td></tr>
</tbody>
</table>
</div><p>This should give a solid overview of how different cloud and DevOps CLIs structure their commands and give you a good idea of which approach to use for your tooling.</p>
<h2 id="heading-comparing-cli-design-patterns">Comparing CLI Design Patterns 🔍</h2>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/cli-design-patterns/cli-design-patterns-usability-impact.svg" alt="CLI Design Patterns Usability Impact" /></p>
<p>Let's compare these approaches side by side:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Criteria</strong></td><td><strong>Action-Resource</strong> (<code>kubectl</code> style)</td><td><strong>Resource-Action</strong> (<code>cfy</code> style)</td><td><strong>Hybrid</strong> (<code>terraform</code> style)</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Consistency</strong></td><td>High 🟢</td><td>Medium 🟡</td><td>Medium 🟡</td></tr>
<tr>
<td><strong>Predictability</strong></td><td>High 🟢</td><td>Medium 🟡</td><td>Medium 🟡</td></tr>
<tr>
<td><strong>Ease of Automation</strong></td><td>High 🟢</td><td>Medium 🟡</td><td>High 🟢</td></tr>
<tr>
<td><strong>Cognitive Load</strong></td><td>Low 🟢</td><td>Medium 🟡</td><td>Medium 🟡</td></tr>
<tr>
<td><strong>Flexibility</strong></td><td>Medium 🟡</td><td>Low 🔴</td><td>High 🟢</td></tr>
<tr>
<td><strong>Intuitiveness</strong></td><td>Medium 🟡</td><td>High 🟢</td><td>Medium 🟡</td></tr>
<tr>
<td><strong>Documentation</strong></td><td>Medium 🟡</td><td>High 🟢</td><td>Medium 🟡</td></tr>
<tr>
<td><strong>Implementation</strong> (<em>Ease</em>)</td><td>Low 🔴</td><td>High 🟢</td><td>Medium 🟡</td></tr>
<tr>
<td><strong>Use Cases</strong></td><td>APIs, dynamic resources</td><td>Domain objects, static resources</td><td>Mixed resources</td></tr>
<tr>
<td><strong>Examples</strong></td><td><code>kubectl</code>, <code>helm</code></td><td><code>cfy</code>, <code>openstack</code></td><td><code>terraform</code>, <code>docker</code></td></tr>
</tbody>
</table>
</div><p>The table above provides a quick overview of each approach's pros and cons. The best choice depends on your specific use case and user base, but understanding these trade-offs can help you make an informed decision.</p>
<p>The action-resource approach (<code>kubectl</code> style) is more complex to implement, document, and organize but more consistent and predictable across different resource types. The resource-action approach (<code>cfy</code> style) is more intuitive for users who think <strong>in terms of domain objects</strong>, but it can be harder to generalize across different resource types. The hybrid approach (<code>terraform</code> style) provides flexibility, allowing users to interact with high-level actions and granular resources.</p>
<h3 id="heading-tree-like-representation-of-the-commands">Tree-like Representation of the Commands</h3>
<p>Here’s a <strong>tree-like representation</strong> of the command structures for <code>kubectl</code> (action-first), <code>cfy</code> (resource-type-first), and <code>docker</code> (hybrid) using a <strong>Linux command-style layout</strong>:</p>
<h4 id="heading-action-first-command-structure">Action-First command structure</h4>
<pre><code class="lang-bash">kubectl
├── get
│   ├── pods
│   ├── services
│   ├── deployments
│   ├── nodes
│   └── configmap
├── describe
│   ├── pod mypod
│   ├── service myservice
│   ├── deployment mydeployment
│   └── node mynode
├── apply
│   ├── -f myapp.yaml
│   ├── -f service.yaml
│   └── -f ingress.yaml
├── delete
│   ├── pod mypod
│   ├── service myservice
│   ├── deployment mydeployment
│   └── namespace mynamespace
└── logs
    ├── pod/mypod
    └── deployment/mydeployment
</code></pre>
<h4 id="heading-resource-type-first-command-structure">Resource-Type-First command structure</h4>
<pre><code class="lang-bash">cfy
├── blueprint
│   ├── upload myblueprint.yaml
│   ├── delete myblueprint
│   └── list
├── deployment
│   ├── create mydeployment -b myblueprint
│   ├── delete mydeployment
│   ├── outputs mydeployment
│   └── list
├── execution
│   ├── start myworkflow -d mydeployment
│   ├── cancel myexecution
│   ├── list
│   └── resume myexecution
└── tenant
    ├── create mytenant
    ├── delete mytenant
    ├── list
    └── set-default mytenant
</code></pre>
<h4 id="heading-hybrid-approach-command-structure">Hybrid Approach command structure</h4>
<pre><code class="lang-bash">docker
├── <span class="hljs-comment"># Action-First Commands (Hides Resource Type)</span>
│   ├── run nginx
│   ├── pull nginx:latest
│   ├── push myrepo/myimage:v1
│   ├── login myregistry.com
│   ├── <span class="hljs-built_in">logout</span> myregistry.com
│   ├── search nginx
│   ├── build -t myimage .
│   ├── tag myimage myrepo/myimage:v1
│   ├── inspect mycontainer
│   └── <span class="hljs-built_in">exec</span> -it mycontainer bash
│
├── <span class="hljs-comment"># Resource-First Commands (Explicit Resource Type)</span>
│   ├── container
│   │   ├── ls
│   │   ├── start mycontainer
│   │   ├── stop mycontainer
│   │   ├── rm mycontainer
│   │   ├── inspect mycontainer
│   │   └── logs mycontainer
│   │
│   ├── image
│   │   ├── ls
│   │   ├── prune
│   │   ├── rm myimage
│   │   ├── save -o myimage.tar myimage
│   │   ├── load -i myimage.tar
│   │   └── <span class="hljs-built_in">history</span> myimage
│   │
│   ├── network
│   │   ├── ls
│   │   ├── create mynetwork
│   │   ├── connect mynetwork mycontainer
│   │   ├── disconnect mynetwork mycontainer
│   │   └── rm mynetwork
│   │
│   ├── volume
│   │   ├── ls
│   │   ├── create myvolume
│   │   ├── rm myvolume
│   │   └── inspect myvolume
│   │
│   ├── compose
│   │   ├── up -d
│   │   ├── down
│   │   ├── logs
│   │   ├── ps
│   │   └── restart
│   │
└── <span class="hljs-comment"># System Commands</span>
    ├── system prune
    ├── system df
    ├── version
    └── info
</code></pre>
<h3 id="heading-key-takeaways"><strong>Key Takeaways</strong></h3>
<ul>
<li><p><strong>kubectl (Action-First)</strong>: Commands start with <strong>verbs</strong> (<code>get</code>, <code>apply</code>, <code>delete</code>) and then specify the resource.</p>
</li>
<li><p><strong>cfy (Resource-Type-First)</strong>: Commands start with the <strong>resource type</strong> (<code>blueprint</code>, <code>deployment</code>, <code>execution</code>), followed by actions.</p>
</li>
<li><p><strong>docker (Hybrid)</strong>: A mix—some commands are <strong>resource-type-first</strong> (<code>container</code>, <code>image</code>, <code>network</code>, <code>volume</code>), while others follow an <strong>action-first</strong> structure (<code>run</code>, <code>pull</code>, <code>build</code>, <code>tag</code>, <code>login</code>).</p>
</li>
</ul>
<p>This tree representation helps visualize how commands are structured and grouped in each CLI.</p>
<p>Look at the tree representation of the <a class="post-section-overview" href="#action-first-command-structure">Action-First command structure</a> for the <code>kubectl</code> command. Do you see any pattern? Yes! It is a structure with <strong>verbs</strong> as the root nodes and <strong>resources</strong> as the leaf nodes, similar to Swagger API specifications' <strong>verb-object</strong> structure.</p>
<h2 id="heading-api-first-to-cli-commands-mapping">API-First to CLI Commands Mapping</h2>
<p>API-first design is widely used for creating CLIs, particularly for tools that interact with APIs. This approach involves directly linking CLI commands to API endpoints, facilitating the automation and scripting of workflows. Both action-first and resource-first CLIs can adapt this strategy to their command structures.</p>
<h3 id="heading-action-first-mapping">Action-First Mapping</h3>
<p>For simplicity (because the Kubernetes and Cloudify APIs are huge), let's take the <a target="_blank" href="https://petstore3.swagger.io">Petstore API</a> (or <a target="_blank" href="https://petstore.swagger.io">Petstore v2</a>) as an example. Mapping the HTTP <strong>verb-object</strong> structure of the API onto the <strong>action-resource</strong> structure of a <code>kubectl</code>-style CLI gives the following command structure:</p>
<pre><code class="lang-bash">GET     /pets
POST    /pets
GET     /pets/{petId}
PUT     /pets/{petId}
DELETE  /pets/{petId}

GET     /orders
POST    /orders
GET     /orders/{orderId}
DELETE  /orders/{orderId}

GET     /users
POST    /users
GET     /users/{username}
PUT     /users/{username}
DELETE  /users/{username}
</code></pre>
<p>In a <code>kubectl</code>-style CLI, <code>GET /pets</code> becomes <code>kubectl get pets</code>, <code>GET /pets/{petId}</code> becomes <code>kubectl get pets 10</code>, and the other collections follow suit: <code>kubectl get orders</code> and <code>kubectl get users</code>.</p>
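<p>To make the mapping concrete, here is a small illustrative sketch that turns an HTTP verb and path into an action-first command. The <code>petctl</code> CLI name and the verb-to-action table are assumptions for the example, not part of the Petstore spec:</p>

```shell
# Illustrative only: map an HTTP verb + OpenAPI path to an action-first
# (kubectl-style) command for a hypothetical "petctl" CLI.
to_action_first() {
  local verb="$1" path="$2" action
  case "$verb" in
    GET)    action="get" ;;
    POST)   action="create" ;;
    PUT)    action="update" ;;
    DELETE) action="delete" ;;
    *)      action=$(printf '%s' "$verb" | tr '[:upper:]' '[:lower:]') ;;
  esac
  # Keep only the first path segment as the resource: /pets/{petId} -> pets
  local resource="${path#/}"
  resource="${resource%%/*}"
  printf '%s\n' "petctl $action $resource"
}

to_action_first GET /pets               # -> petctl get pets
to_action_first DELETE '/pets/{petId}'  # -> petctl delete pets
```

The verb becomes the root of the command tree, exactly as in the <code>kubectl</code> tree shown earlier.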
<h3 id="heading-resource-first-mapping">Resource-First Mapping</h3>
<p>On the other hand, a <strong>resource-action</strong> CLI like <code>cfy</code> maps most naturally onto the <strong>paths</strong> structure of the OpenAPI specification:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">openapi:</span> <span class="hljs-number">3.0</span><span class="hljs-number">.0</span>
<span class="hljs-comment"># ...</span>
<span class="hljs-attr">paths:</span>
  <span class="hljs-string">/pets:</span>
    <span class="hljs-attr">get:</span>
      <span class="hljs-comment"># ...</span>
    <span class="hljs-attr">post:</span>
      <span class="hljs-comment"># ...</span>
  <span class="hljs-string">/pets/{petId}:</span>
    <span class="hljs-attr">get:</span>
      <span class="hljs-comment"># ...</span>
    <span class="hljs-attr">put:</span>
      <span class="hljs-comment"># ...</span>
    <span class="hljs-attr">delete:</span>
      <span class="hljs-comment"># ...</span>
  <span class="hljs-string">/orders:</span>
    <span class="hljs-attr">get:</span>
      <span class="hljs-comment"># ...</span>
    <span class="hljs-attr">post:</span>
      <span class="hljs-comment"># ...</span>
  <span class="hljs-string">/users:</span>
    <span class="hljs-attr">get:</span>
      <span class="hljs-comment"># ...</span>
    <span class="hljs-attr">post:</span>
      <span class="hljs-comment"># ...</span>
</code></pre>
<p>Here each path becomes a resource command group: <code>GET /pets</code> maps to <code>cfy pets list</code>, <code>GET /orders</code> to <code>cfy orders list</code>, and <code>GET /users</code> to <code>cfy users list</code>.</p>
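<p>The same endpoints can be sketched the other way around: the first path segment becomes the command group and the HTTP verb becomes the trailing action, <code>cfy</code>-style. As above, the verb-to-action table here is an illustrative assumption:</p>

```shell
# Illustrative only: map an HTTP verb + OpenAPI path to a resource-first
# (cfy-style) command.
to_resource_first() {
  local verb="$1" path="$2" action
  local resource="${path#/}"
  resource="${resource%%/*}"
  case "$verb" in
    GET)    action="list" ;;    # GET on the collection lists items
    POST)   action="create" ;;
    PUT)    action="update" ;;
    DELETE) action="delete" ;;
  esac
  case "$path" in
    *\{*) [ "$verb" = GET ] && action="get" ;;  # GET /pets/{petId} -> get
  esac
  printf '%s\n' "cfy $resource $action"
}

to_resource_first GET /pets            # -> cfy pets list
to_resource_first GET '/pets/{petId}'  # -> cfy pets get
```

Note how the path, not the verb, now forms the root of the command tree.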
<h3 id="heading-takeaways-from-the-mapping">Takeaways from the Mapping</h3>
<ul>
<li><p><strong>Action-First Mapping</strong>: The HTTP <strong>verb-object</strong> structure of the API maps directly to the <strong>action-resource</strong> structure of the CLI.</p>
</li>
<li><p><strong>Resource-First Mapping</strong>: The <strong>paths</strong> structure of the API maps directly to the <strong>resource-action</strong> structure of the CLI.</p>
</li>
</ul>
<p>This mapping helps visualize how API-first design can be applied to action- and resource-first CLIs.</p>
<h2 id="heading-how-i-applied-these-cli-design-patterns">👨🏽‍💻 How I Applied These CLI Design Patterns</h2>
<p>When designing <code>gyat</code> (Go-through-your-API-tool), <code>hapi</code> (Headless-API), and <code>agentico</code>, I had to choose between the action-resource (<code>kubectl</code> style) and resource-action (<code>cfy</code> style) approaches based on the nature of the resources and the target audience. Here's how I made my decisions:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>CLI Tool</td><td>Approach</td><td>Why?</td></tr>
</thead>
<tbody>
<tr>
<td><a target="_blank" href="https://apicove.com/hapi"><code>hapi</code></a></td><td>Action-Resource (kubectl style)</td><td>The resource types are dynamic and change based on the Swagger API spec. Users need consistency across various APIs.</td></tr>
<tr>
<td><a target="_blank" href="https://apicove.com/gyat/"><code>gyat</code></a></td><td>Action-Resource (kubectl style)</td><td>The resource types are dynamic and change based on the Swagger API spec. Users need consistency across various APIs.</td></tr>
<tr>
<td><a target="_blank" href="https://agentico.dev/docs/intro"><code>agentico</code></a></td><td>Resource-Action (<code>cfy</code> style)</td><td>The resource types are static and well-defined by the <a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol</a> (MCP). Grouping by resource makes navigation easier.</td></tr>
</tbody>
</table>
</div><p>This decision aligns with how users interact with each tool. In <code>gyat</code>, users frequently deal with unknown <strong>APIs</strong>, so an action-first structure keeps things predictable. In <code>agentico</code>, users work with <strong>well-defined domain objects</strong>, making a resource-first approach more intuitive.</p>
<h2 id="heading-final-thoughts-choosing-the-right-cli-style">Final Thoughts: Choosing the Right CLI Style</h2>
<p>So, which approach is best?</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/cli-design-patterns/cli-grouping-patterns.svg" alt="CLI Grouping Patterns" /></p>
<p>The answer depends on the nature of the tool:</p>
<ul>
<li><p>If your CLI works with fixed resources, a resource-type-first approach (like <code>cfy</code>) makes sense.</p>
</li>
<li><p>If your CLI needs to handle dynamic or API-driven resources, an action-first approach (like <code>kubectl</code>) is better.</p>
</li>
<li><p>If your tool requires both high-level operations and granular control, a hybrid approach (like <code>terraform</code>) might be ideal.</p>
</li>
</ul>
<p>Understanding these design principles helps create CLIs that are intuitive, scalable, and efficient. What's your take on CLI design? Have you encountered a CLI that does things differently? Let me know in the comments!</p>
<p>If you want to learn more about what libraries and tools I use to build these CLIs, let me know in the comments, and I will write a follow-up post. 📝</p>
<p>Until next time, happy coding, and never stop learning and challenging the status quo!</p>
<p>Go Rebels! ✊🏼</p>
]]></content:encoded></item><item><title><![CDATA[Bridging the AI and DevOps Enterprise Gap]]></title><description><![CDATA[AI, particularly Large Language Models (LLMs), has captured the imagination of developers, executives, and enterprises alike. But while AI is often seen as the ultimate game-changer, it represents only a small piece of the puzzle. The real challenge ...]]></description><link>https://rebelion.la/bridging-the-ai-and-devops-enterprise-gap</link><guid isPermaLink="true">https://rebelion.la/bridging-the-ai-and-devops-enterprise-gap</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#AIOps]]></category><category><![CDATA[airgap]]></category><category><![CDATA[ztna]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Fri, 07 Feb 2025 22:08:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738964757573/9de50a03-d632-493b-97bb-5c3b0f57d545.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI, particularly Large Language Models (LLMs), has captured the imagination of developers, executives, and enterprises alike. But while AI is often seen as the ultimate game-changer, it represents only a small piece of the puzzle. The real challenge is about <strong>integration</strong> - connecting AI to existing IT systems, ensuring compliance, and scaling adoption in enterprise environments.</p>
<p>The same applies to <strong>Kubernetes and Helm</strong>. While developers can easily build and test applications locally, DevOps teams face a different reality when deploying to enterprise environments, particularly those behind <strong>Zero Trust Networks (ZTN)</strong>, which introduce <strong>strict controls, approval bottlenecks, and rigid security policies</strong> that make integration a serious challenge. <strong>Airgap restrictions, security policies, and compliance requirements</strong> add a layer of complexity that local development alone can't solve. But <strong>what about a hybrid approach?</strong></p>
<p>This post explores these two parallel challenges - <strong>the AI enterprise gap and the DevOps enterprise gap</strong> - and how teams can bridge them. A practical example is the challenge of <strong>orchestrating AI and Kubernetes workflows</strong> in enterprise environments, where <strong>visibility, compliance, and automation</strong> are key. Let's dive in.</p>
<p><img src="https://media1.tenor.com/m/ueZoBBmaOkIAAAAd/swifferpics-taylor-swift.gif" alt="Taylor Swift dive in" class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-ais-biggest-challenge-isnt-ai">AI's Biggest Challenge Isn't AI</h2>
<p>AI has made incredible strides in recent years, from <strong>transforming customer service</strong> to <strong>optimizing supply chains</strong>, and industries from marketing to healthcare and finance are exploring AI applications. The rise of Large Language Models (LLMs) has opened up new possibilities for natural language processing, chatbots, and content generation. But the biggest challenge isn't building AI models—it's <strong>integrating them into enterprise IT systems</strong>.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/Enterprise-Ready-Solutions-for-AI-and-DevOps-in-ZTN.png" alt="Enterprise-ready solutions for AI" /></p>
<h3 id="heading-the-ai-enterprise-disconnect">The AI-Enterprise Disconnect</h3>
<p>While AI models have made remarkable strides, most enterprises struggle with:</p>
<ul>
<li><p><strong>Data integration</strong>: AI is only as good as the data it can access, but enterprise data is often siloed, locked behind security policies, or tied to legacy systems.</p>
</li>
<li><p><strong>IT approvals and security</strong>: Enterprises operate in regulated environments where <strong>introducing new tools requires rigorous vetting</strong>. An AI model that needs full access to internal systems is often a non-starter.</p>
</li>
<li><p><strong>Scalability and operationalization</strong>: Developing an AI model in a lab is easy; <strong>deploying and maintaining it at scale is the real challenge</strong>. Organizations need <strong>MLOps</strong> pipelines to manage retraining, versioning, and observability.</p>
</li>
<li><p><strong>End-user adoption</strong>: AI adoption isn't just about technical implementation - it's about trust. Employees need confidence in AI recommendations, and organizations must navigate <strong>explainability, bias, and compliance</strong> challenges.</p>
</li>
</ul>
<h3 id="heading-solution-making-ai-work-in-enterprise-it">Solution: Making AI Work in Enterprise IT</h3>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/unlock-ai-potential-in-enterprise-restricted-environments.png" alt="Unlock AI Potential" /></p>
<p>To bridge this gap, enterprises need to <strong>stop thinking of AI as an isolated project</strong> and start integrating it into their workflows:</p>
<ul>
<li><p><strong>Agentic AI systems</strong>: AI needs to interact with enterprise data, APIs, and security policies while <strong>complying with existing access controls</strong>. This is where <a target="_blank" href="https://agentico.dev"><strong>Agentico.dev</strong></a> (a discovery platform for agentic AI tools) can help navigate solutions.</p>
</li>
<li><p><strong>Infrastructure-aware AI</strong>: AI applications should know <strong>enterprise constraints, security rules, and compliance requirements</strong> rather than assuming open access. <a target="_blank" href="https://k1s.sh/docs/qbot/">QBot</a>, your Kubepilot, can help with this; it is an AI-powered assistant for cloud-native environments.</p>
</li>
<li><p><strong>Context-aware workflows</strong>: AI needs <strong>human-in-the-loop processes</strong> to ensure reliable and trustworthy decision-making. Agentico.dev works with QBot to provide these smart autonomous workflows for cloud-native environments.</p>
</li>
</ul>
<hr />
<h2 id="heading-kubernetes-and-helm-in-enterprise-zero-trust-environments">Kubernetes and Helm in Enterprise Zero Trust Environments</h2>
<h3 id="heading-why-local-development-is-not-enough">Why Local Development is Not Enough</h3>
<p>Developers can set up <strong>Kubernetes and Helm</strong> applications locally, but <strong>real-world enterprise deployments are different</strong>:</p>
<ol>
<li><p><strong>Tooling restrictions</strong>: Enterprises enforce strict <strong>approved toolsets</strong> - introducing a new CLI tool or automation script is <strong>often impossible</strong>. ⚠️</p>
</li>
<li><p><strong>Airgap and Zero Trust</strong>: Many enterprise environments (finance, defense, telecom) <strong>restrict internet access</strong>, preventing direct downloads of images, charts, or dependencies.</p>
</li>
<li><p><strong>Configuration drift</strong>: Deploying across <strong>multiple environments (dev, test, staging, production)</strong> leads to inevitable drift - <strong>how do you detect and manage it?</strong></p>
</li>
<li><p><strong>Version mismatches</strong>: Helm charts and Kubernetes manifests may work in <strong>one cluster but fail in another</strong> due to different Kubernetes versions, security policies, or RBAC restrictions.</p>
</li>
<li><p><strong>Approval and compliance bottlenecks</strong>: Even minor changes (e.g., modifying a Helm chart) <strong>require approvals</strong>, slowing down iteration cycles.</p>
</li>
</ol>
<h3 id="heading-how-to-solve-it-enterprise-kubernetes-best-practices">How to Solve It: Enterprise Kubernetes Best Practices</h3>
<p>To navigate these challenges, enterprise teams must adopt a <strong>structured approach</strong>:</p>
<ul>
<li><p><strong>Pre-approved package formats</strong>: Instead of raw Helm charts or YAML files, enterprises should distribute Kubernetes manifests as <strong>OCI artifacts</strong>, making them easier to verify and deploy.</p>
</li>
<li><p><strong>Airgap-friendly tooling</strong>: Solutions like <a target="_blank" href="https://k1s.sh/airgap"><strong>K1s Airgap</strong></a> help move container images and Helm charts across restricted environments without breaking security policies.</p>
</li>
<li><p><strong>Automated drift detection</strong>: Implement <strong>GitOps workflows (ArgoCD, FluxCD)</strong> to monitor and correct drift between environments continuously. Or even better, use <a target="_blank" href="https://k1s.sh/qbot"><strong>QBot</strong></a> to automate this process.</p>
</li>
<li><p><strong>Enterprise-grade observability</strong>: <strong>Tracing deployments and workflows</strong> is crucial. Use tools like <a target="_blank" href="https://opentelemetry.io/"><strong>OpenTelemetry</strong></a><strong>, Loki, and Prometheus</strong> to capture granular logs and metrics. Agentico's <a target="_blank" href="https://agentico.dev/tools"><strong>AI tools</strong></a> can help with this out of the box; with <a target="_blank" href="https://www.npmjs.com/package/@agentico/mcp-create-tool"><code>mcp-create-tool</code></a>, you can enable OpenTelemetry in your AI tools.</p>
</li>
<li><p><strong>Standardized scaffolding</strong>: Automate directory and scaffolding creation with <strong>templates</strong>, ensuring that every project starts with a consistent structure. This is how <a target="_blank" href="https://k1s.sh/docs/qbot/devops/">QBot DevOps</a> <code>init</code> and <code>scaffold</code> actions work; they create a new project with a consistent structure, either the <a target="_blank" href="https://www.npmjs.com/package/@k1ssh/qbctl">CLI</a> or the <a target="_blank" href="https://agentico.dev/tools/@k1ssh/qbot-init-project">MCP Tools</a>.</p>
</li>
</ul>
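<p>As a sketch of the OCI-artifact and drift-detection points above (registry hostnames and chart names are placeholders, not real endpoints), a typical flow might look like this:</p>

```bash
# Package a Helm chart and publish it as an OCI artifact (Helm 3.8+).
# Hostnames and chart names below are placeholders.
helm package ./myapp-chart                      # produces myapp-chart-0.1.0.tgz
helm push myapp-chart-0.1.0.tgz oci://registry.example.com/charts

# Inside the airgapped environment, pull from the internal mirror instead
helm pull oci://registry.internal/charts/myapp-chart --version 0.1.0

# Check for configuration drift before applying: kubectl diff exits
# non-zero when the live cluster state differs from the manifests
kubectl diff -f rendered-manifests/ && echo "no drift detected"
```

Distributing charts as OCI artifacts means the same signed, versioned artifact can be verified on both sides of the airgap.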
<hr />
<h2 id="heading-the-common-thread-process-orchestration-and-visibility">The Common Thread - Process Orchestration and Visibility</h2>
<h3 id="heading-managing-and-monitoring-workflows-in-ai-and-aiops">Managing and Monitoring Workflows in AI and AIOps</h3>
<p>Both AI workflows and DevOps pipelines suffer from the same <strong>hidden complexity</strong>: a lack of visibility into the <strong>inner steps</strong> involved in deployment, integration, and execution. Both AI and Kubernetes need to be orchestrated and monitored to ensure compliance, security, and reliability.</p>
<h3 id="heading-questions-enterprises-need-to-ask">Questions Enterprises Need to Ask:</h3>
<ul>
<li><p><strong>Where do bottlenecks occur in AI integration or DevOps deployments?</strong></p>
</li>
<li><p><strong>How do we track dependencies across teams and environments?</strong></p>
</li>
<li><p><strong>Can we audit every step of an AI decision, including the DevOps pipeline or Kubernetes deployment?</strong></p>
</li>
<li><p><strong>How do we validate that an AI-generated result or a Kubernetes manifest is compliant before deployment?</strong></p>
</li>
</ul>
<h3 id="heading-solutions-for-monitoring-inner-steps-in-ai-amp-kubernetes">Solutions for Monitoring Inner Steps in AI &amp; Kubernetes</h3>
<ol>
<li><p><strong>End-to-End Observability</strong></p>
<ul>
<li><p>AI: Use <strong>AI observability tools</strong> to track model performance and bias.</p>
</li>
<li><p>Kubernetes: Implement <strong>tracing (</strong><a target="_blank" href="https://www.jaegertracing.io"><strong>Jaeger</strong></a><strong>, OpenTelemetry)</strong> for complete visibility.</p>
</li>
</ul>
</li>
<li><p><strong>Automated Compliance Checks</strong></p>
<ul>
<li><p>AI: <strong>Governance tools</strong> for tracking AI outputs, bias, and ethical considerations.</p>
</li>
<li><p>Kubernetes: <strong>Policy engines (</strong><a target="_blank" href="https://kyverno.io"><strong>Kyverno</strong></a><strong>,</strong> <a target="_blank" href="https://www.openpolicyagent.org"><strong>OPA</strong></a><strong>)</strong> to enforce security rules.</p>
</li>
</ul>
</li>
<li><p><strong>Self-Healing Workflows</strong></p>
<ul>
<li><p>AI: Auto-retrain models when accuracy drops below a threshold.</p>
</li>
<li><p>Kubernetes: Auto-rollbacks when a deployment fails.</p>
</li>
</ul>
</li>
</ol>
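<p>For the policy-engine point above, a minimal Kyverno sketch that enforces an auditability label on Deployments might look like the following (the policy and label names are illustrative):</p>

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label   # illustrative policy name
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Deployments must carry a 'team' label for auditability."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```

Because policies like this are declarative Kubernetes resources, they can themselves be versioned, reviewed, and shipped through the same approval pipeline as application manifests.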
<hr />
<h2 id="heading-bridging-the-gap-ai-and-airgap-in-enterprise-it">Bridging the Gap: AI and Airgap in Enterprise IT</h2>
<blockquote>
<p>A Strategic Framework for Prioritizing Enterprise-Oriented Solutions</p>
</blockquote>
<p>Enterprises operate in complex IT landscapes where adopting new technologies - like AI or cloud-native DevOps tools - is never as simple as plugging them in. The real challenge lies in aligning these solutions with security policies, compliance requirements, and existing IT infrastructure.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/prioritize-enterprise-oriented-strategies.png" alt="Strategic Framework for AI and DevOps" /></p>
<p>The <strong>strategic framework</strong> illustrated above highlights how different solutions fall into one of four quadrants based on:</p>
<ul>
<li><p><strong>Integration Level:</strong> How well a solution fits within the existing enterprise tech stack.</p>
</li>
<li><p><strong>Enterprise Policy Alignment:</strong> How naturally a solution adheres to corporate security, governance, and compliance requirements.</p>
</li>
</ul>
<p>By mapping solutions across these dimensions, enterprises can make informed decisions about AI, DevOps, and automation strategies. If a solution doesn't fit within the enterprise's existing policies or tech stack, it's likely to face resistance or fail to deliver the expected value, but you can always <strong>bridge the gap</strong> with tools like <a target="_blank" href="https://agentico.dev"><strong>Agentico.dev</strong></a> and <a target="_blank" href="https://k1s.sh/docs/qbot/"><strong>QBot</strong></a>, designed to work within enterprise constraints.</p>
<hr />
<h2 id="heading-conclusion-ai-and-kubernetes-need-enterprise-ready-orchestration">Conclusion: AI and Kubernetes Need Enterprise-Ready Orchestration</h2>
<p>The problem isn't AI. The problem isn't Kubernetes. The problem isn't the workflows. The problem is <strong>integration, security, compliance, and scaling adoption in enterprises</strong>.</p>
<p>To succeed, organizations need to:</p>
<p>✅ <strong>Design AI with enterprise constraints in mind</strong> - not as an isolated tool.<br />✅ <strong>Ensure Kubernetes workflows are airgap- and ZTN-compatible</strong>.<br />✅ <strong>Automate monitoring, drift detection, and compliance validation</strong>.<br />✅ <strong>Adopt tools that work within enterprise policies rather than fighting against them</strong>.</p>
<p>By addressing the big <strong>integration challenge</strong>, enterprises can <strong>truly unlock the potential of AI and Kubernetes at scale</strong>.</p>
<p>Are you ready to bridge the gap? Let us know in the comments below!</p>
<p>Do you have questions? Contact us at <a target="_blank" href="https://la-rebelion.io"><strong>La Rebelion Labs</strong></a> or <a target="_blank" href="https://agentico.dev"><strong>Agentico.dev</strong></a> for more insights on bridging the AI and DevOps enterprise gap.</p>
<p>Be Rebel, be Innovative, be Agile, be Agentico! Go Rebels! ✊🏻</p>
]]></content:encoded></item><item><title><![CDATA[A Game-Changer for DevOps: Leveraging AI in Air-Gapped and Zero Trust Environments]]></title><description><![CDATA[DevOps practitioners working in Zero Trust Network Access (ZTNA) or air-gapped environments often face unique challenges. These highly secure setups restrict internet access to protect sensitive data, making it difficult to integrate modern tools and...]]></description><link>https://rebelion.la/devops-leveraging-ai-in-air-gapped-and-zero-trust-environments</link><guid isPermaLink="true">https://rebelion.la/devops-leveraging-ai-in-air-gapped-and-zero-trust-environments</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[airgap]]></category><category><![CDATA[zerotrust]]></category><category><![CDATA[ztna]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[Model Context Protocol]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Fri, 17 Jan 2025 16:00:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737129500105/e2a2f35a-d0f7-4306-a176-d6768fc3ec8f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DevOps practitioners working in <a target="_blank" href="https://www.zscaler.com/resources/security-terms-glossary/what-is-zero-trust-network-access">Zero Trust Network Access</a> (ZTNA) or air-gapped environments often face unique challenges. These highly secure setups restrict internet access to protect sensitive data, making it difficult to integrate modern tools and automation. But what if there were a way to bridge the gap between your cloud infrastructure and air-gapped systems while maintaining security? The <a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol</a> (MCP) and Claude for Desktop, are a powerful combination that's set to transform how DevOps engineers operate in these constrained environments.</p>
<p>With MCP, you can integrate AI-driven automation directly into your air-gapped workflows. Claude for Desktop acts as your user-friendly interface, enabling natural language interaction with AI tools running locally on your PC. Imagine being able to automate scripts, manage deployments, and even perform drift detection, all while staying within the confines of a secure environment.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/ai-interactions-in-airgap-envs.png" alt="Interactions in Air-Gapped Environments" /></p>
<p>Let's dive into why this approach is a game changer and how you can get started with a simple example.</p>
<h3 id="heading-why-mcp-and-claude-for-desktop-are-perfect-for-zero-trusthttpswwwcisagovzero-trust-maturity-model-and-air-gapped-use-cases">Why MCP and Claude for Desktop Are Perfect for <a target="_blank" href="https://www.cisa.gov/zero-trust-maturity-model">Zero Trust</a> and Air-Gapped Use Cases</h3>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/a-new-way-to-interact-with-your%20-ai-tools-in-air-gapped-environments.png" alt="AI Tools in Air-Gapped Environments" /></p>
<p><strong>Simplified Automation:</strong> Claude for Desktop allows you to run scripts, including bash scripts, directly from your PC. Whether you're deploying applications or performing repetitive tasks, this setup eliminates the need for constant manual intervention.</p>
<p><strong>Enhanced Security:</strong> Data flowing to external Large Language Models (LLMs) can be a concern. By using Local LLMs, you can encapsulate sensitive data and still benefit from natural language processing and semantic orchestration. This ensures that no sensitive information leaves your secure environment.</p>
<p><strong>Bridging Gaps Between Systems:</strong> Claude's ability to communicate with both cloud and air-gapped environments makes it a versatile tool for hybrid setups. This is particularly beneficial in ZTNA environments where strict security policies might otherwise limit operational flexibility.</p>
<p><strong>Customizability:</strong> With MCP tools, you can build deterministic workflows and gradually enhance them by adding memory and sequential learning capabilities. This enables your system to grow smarter over time, adapting to your specific needs.</p>
<h2 id="heading-a-simple-example-echoing-messages-with-mcp-and-claude">A Simple Example: Echoing Messages with MCP and Claude</h2>
<p>To demonstrate how easy it is to get started, let's set up a simple MCP tool that echoes a message. While basic, this example highlights the potential of MCP and Claude for automating workflows in air-gapped environments.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤖</div>
<div data-node-type="callout-text">AI is transforming DevOps. Create your own AI tools and integrate them into your pipelines. <strong>I am going to post more articles about how to leverage AI for DevOps, stay tuned and subscribe!</strong></div>
</div>

<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=DIl3gVmvf0o">https://www.youtube.com/watch?v=DIl3gVmvf0o</a></div>
<p> </p>
<h3 id="heading-step-1-install-required-tools">Step 1: Install Required Tools</h3>
<ul>
<li><p>Download Claude for Desktop: <a target="_blank" href="https://claude.ai/download">Claude for Desktop</a></p>
</li>
<li><p>Install Node.js: <a target="_blank" href="https://nodejs.org/">Node.js Official Website</a>, as MCP tools require Node.js to run. Python is also supported, but <em>not covered in this example</em>.</p>
</li>
</ul>
<h3 id="heading-step-2-create-an-mcp-tool">Step 2: Create an MCP Tool</h3>
<p>Use the Agentico <a target="_blank" href="https://www.npmjs.com/package/@agentico/mcp-create-tool">MCP Create Tool</a> to generate a new project:</p>
<pre><code class="lang-bash">npx @agentico/mcp-create-tool my-agentico-tool -t Echo -d <span class="hljs-string">"Echo a message from MCP server to the client"</span>
<span class="hljs-built_in">cd</span> my-agentico-tool
npm install &amp;&amp; npm run build
</code></pre>
<p>This command creates a TypeScript project with all the necessary files, including an example configuration file for Claude. It's designed to simplify the onboarding process for DevOps practitioners who may not be familiar with TypeScript. ⭐ Star the <a target="_blank" href="https://github.com/agentico-dev/mcp-server">MCP Create Tool</a> on GitHub if you find it helpful!</p>
<h3 id="heading-step-3-update-claudes-configuration">Step 3: Update Claude’s Configuration</h3>
<p>Claude's configuration file is updated automatically: the <code>mcp-create-tool</code> adds an entry with the path to your MCP tool. Verify the entry in the file for your platform:</p>
<p>macOS: <code>~/Library/Application Support/Claude/claude_desktop_config.json</code></p>
<p>Windows: <code>%APPDATA%\Claude\claude_desktop_config.json</code></p>
<p>Here's an example:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"mcpServers"</span>: {
    <span class="hljs-attr">"my-agentico-tool"</span>: {
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"node"</span>,
      <span class="hljs-attr">"args"</span>: [
        <span class="hljs-string">"/path/to/project/my-agentico-tool/build/index.js"</span>
      ]
    }
  }
}
</code></pre>
<p>Please refer to the <a target="_blank" href="https://modelcontextprotocol.io/quickstart/user">Claude user guide</a> for detailed instructions if you run into issues.</p>
<h3 id="heading-step-4-test-the-setup">Step 4: Test the Setup</h3>
<p>Open Claude for Desktop and type the following message in the chat:</p>
<pre><code class="lang-txt">echo hello
</code></pre>
<p>Claude will respond with:</p>
<pre><code class="lang-txt">hello
</code></pre>
<p>This interaction demonstrates how the MCP tool processes a message and returns a response. You can review the code in <code>src/index.ts</code> and customize it to handle more complex tasks.</p>
<p>You can test with different messages and observe how Claude interacts with your MCP tool to execute the desired actions.</p>
<p>I know, it's a simple example, but it showcases the power of MCP and Claude for Desktop in automating workflows within air-gapped environments. Your imagination is the only limit to what you can achieve with these tools.</p>
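<p>If you peek inside <code>src/index.ts</code>, the heart of the echo tool boils down to something like the sketch below. This is a simplified, hypothetical version: the scaffold generated by <code>mcp-create-tool</code> wraps it in MCP plumbing, and the actual names may differ.</p>
<pre><code class="lang-typescript">// Hypothetical core of the Echo tool. MCP tools return structured
// content blocks rather than raw strings; names here are illustrative.
interface ToolResult {
  content: { type: string; text: string }[];
}

function handleEcho(message: string): ToolResult {
  // Echo the client's message back unchanged
  return { content: [{ type: "text", text: message }] };
}
</code></pre>
<p>Replace the handler's body with your own logic (run a script, call an internal API) and the rest of the MCP plumbing stays the same.</p>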
<h2 id="heading-future-possibilities">Future Possibilities</h2>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/exploring-the-future-with-mcp-and-claude.png" alt="Exploring the Future with MCP and Claude" /></p>
<p>The potential for MCP and Claude for Desktop extends far beyond simple tasks. Imagine using these tools to:</p>
<ul>
<li><p><strong>Automate application deployments:</strong> Seamlessly manage builds and rollouts without internet access.</p>
</li>
<li><p><strong>Perform drift detection:</strong> Compare the desired and actual states of your systems to identify configuration issues.</p>
</li>
<li><p><strong>Integrate with Local LLMs:</strong> Enhance security while leveraging AI capabilities for smarter workflows.</p>
</li>
</ul>
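<p>Drift detection, for instance, reduces to comparing the desired state with what the cluster reports. Here is a minimal sketch; the field names are illustrative and trimmed to the essentials:</p>
<pre><code class="lang-typescript">// Compare the desired deployment settings against the actual cluster state.
// Illustrative sketch: a real check would cover many more fields.
interface DeploymentState {
  image: string;
  replicas: number;
}

function detectDrift(desired: DeploymentState, actual: DeploymentState): string[] {
  const drifts: string[] = [];
  if (desired.image !== actual.image) {
    drifts.push(`image: expected ${desired.image}, found ${actual.image}`);
  }
  if (desired.replicas !== actual.replicas) {
    drifts.push(`replicas: expected ${desired.replicas}, found ${actual.replicas}`);
  }
  return drifts;
}
</code></pre>
<p>Wrap a check like this in an MCP tool and you can ask Claude whether a deployment is drifting and get a concrete answer, all locally.</p>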
<p>Stay tuned for an upcoming post where we'll cover how to use Claude for Desktop to automate deployments and detect drift in air-gapped environments. The possibilities are endless, and this is just the beginning.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The combination of MCP and Claude for Desktop offers a groundbreaking way to tackle the challenges of ZTNA and air-gapped environments. By enabling natural language interaction with local AI tools, this approach simplifies workflows, enhances security, and empowers DevOps practitioners to achieve more with less effort.</p>
<p>Ready to revolutionize your DevOps workflows? Start exploring MCP with <a target="_blank" href="https://agentico.dev">Agentico</a> and Claude for Desktop today. And remember: your imagination is the only limit.</p>
<p>Go Rebels! ✊️</p>
]]></content:encoded></item><item><title><![CDATA[Revolutionizing AI Workflows with Agentic Tools and MCP Server]]></title><description><![CDATA[Imagine a world where building AI applications is as streamlined as deploying microservices or containers. With the rise of Agentic AI frameworks and orchestrators, this isn't just a dream, it's happening now. Developers are harnessing these tools to...]]></description><link>https://rebelion.la/ai-workflows-agentic-tools-and-mcp-server</link><guid isPermaLink="true">https://rebelion.la/ai-workflows-agentic-tools-and-mcp-server</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Orchestration]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Wed, 08 Jan 2025 13:54:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736344386945/6b51e61d-499a-4ebb-9d5a-68d09ac12e75.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a world where building AI applications is as streamlined as deploying microservices or containers. With the rise of Agentic AI frameworks and orchestrators, this isn't just a dream, it's happening now. Developers are harnessing these tools to train, test, and deploy AI solutions faster than ever before. And the best part? It's only getting better.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/ai-workflows-with-agentic-tools-and-MCPServer.png" alt="AI Workflows with MCP Server" /></p>
<p>In this article, we'll explore the power of Agentic tools, the role of MCP Server in simplifying AI workflows, and how you can leverage these tools to build your own AI applications. Let's dive in!</p>
<h3 id="heading-the-rise-of-agentic-ai">The Rise of Agentic AI</h3>
<p>At the heart of this evolution lies the <a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol</a> (MCP), a framework developed by Anthropic. MCP introduces a <strong>standard protocol</strong> for AI applications to communicate seamlessly with one another, and with the rest of the world. While MCP opens doors to powerful integrations, getting started can feel daunting, especially for those new to Agentic AI.</p>
<p>That's where <a target="_blank" href="https://www.npmjs.com/package/@la-rebelion/mcp-server"><strong>La Rebelion's MCP Server</strong></a> comes in. We are on a quest to simplify the process of creating and managing Agentic tools: by building a server <a target="_blank" href="https://en.wikipedia.org/wiki/Facade_pattern">facade</a> that acts as a bridge between your tools and MCP, we are on track to achieve it! Our approach is inspired by the simplicity and effectiveness of microservices and containers: <em>each tool does one thing and does it exceptionally well</em>.</p>
<h3 id="heading-why-agentic-tools-matter">Why Agentic Tools Matter</h3>
<p>Agentic tools are to AI what microservices are to cloud architecture—modular, efficient, and purpose-driven. Each tool focuses on a specific task, ensuring precision and reliability. However, building these tools from scratch can be cumbersome. You need to handle communication protocols, input validation, and execution logic. That's a lot of complexity for something that should be simple.</p>
<p>Enter <strong>MCP Server</strong>, our server facade that eliminates the grunt work. You focus on crafting the core logic of your tool, and we handle the heavy lifting—like communication with MCP, input/output validation, and orchestration. It's like giving your tools a backstage crew to ensure the show runs flawlessly.</p>
<h3 id="heading-from-complex-to-seamless-building-with-mcp-server">From Complex to Seamless: Building with MCP Server</h3>
<p>Let's look at how effortless it is to create and deploy an Agentic tool using La Rebelion's MCP Server. Here's an example where we build a simple <strong>Echo Tool</strong>:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/index.ts</span>
<span class="hljs-keyword">import</span> { MCPServer } <span class="hljs-keyword">from</span> <span class="hljs-string">"@la-rebelion/mcp-server"</span>;
<span class="hljs-keyword">import</span> { EchoTool } <span class="hljs-keyword">from</span> <span class="hljs-string">"./EchoTool.js"</span>;

<span class="hljs-comment">// Create a new instance of the MCPServer</span>
<span class="hljs-keyword">const</span> myServer = <span class="hljs-keyword">new</span> MCPServer(<span class="hljs-string">'My MCP Server'</span>, <span class="hljs-string">'1.0.0'</span>);

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-comment">// Register tools</span>
  myServer.registerTool(<span class="hljs-string">"echo"</span>, <span class="hljs-keyword">new</span> EchoTool());
  <span class="hljs-keyword">await</span> myServer.run();
}

main();
</code></pre>
<p>In this example:  </p>
<ol>
<li>You define the logic of your tool (e.g., Echo Tool).  </li>
<li>You register it with the server facade (<code>myServer.registerTool</code>).  </li>
<li>MCP Server takes care of the rest—communicating with MCP, validating inputs/outputs, and executing the tool seamlessly.  </li>
</ol>
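<p>And what does <code>EchoTool</code> itself look like? Assuming a <code>Tool</code> contract with an <code>execute</code> method (a hypothetical shape; check the package documentation for the exact interface), it could be as small as this:</p>
<pre><code class="lang-typescript">// src/EchoTool.ts -- hypothetical shape. The real Tool contract is
// defined by @la-rebelion/mcp-server, so treat this as a sketch.
export class EchoTool {
  name = "echo";
  description = "Echo a message back to the client";

  // The facade validates the input and forwards the result to MCP,
  // so the tool only implements its core logic.
  execute(input: { message: string }): string {
    return input.message;
  }
}
</code></pre>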
<p>Here's a visual breakdown of the architecture leveraging Anthropic's Model Context Protocol:</p>
<p><img src="https://github.com/la-rebelion/mcp-server/raw/refs/heads/main/mcp-server.png" alt="UML Diagram of the Agent Tool" /></p>
<p>References: </p>
<blockquote>
<p><a target="_blank" href="https://github.com/la-rebelion/mcp-server">MCP Server@GitHub</a><br /><a target="_blank" href="https://www.npmjs.com/package/@la-rebelion/mcp-server">MCP Server@npm</a></p>
</blockquote>
<h3 id="heading-build-your-own-agentic-tools-today">Build Your Own Agentic Tools Today</h3>
<p>Whether you're optimizing a workflow, automating a process, or enhancing an existing operation, Agentic AI tools can help you achieve your goals. By leveraging the power of MCP Server, you can build and deploy tools that communicate seamlessly with other AI applications, creating a robust ecosystem of intelligent agents. La Rebelion's <strong>MCP Server</strong> empowers you to focus on what matters most: <strong>building tools that deliver real value</strong>. By simplifying the complexities of MCP integration, we make it easier for developers to dive into the world of Agentic AI.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/rise-of-agentic-ai.png" alt="Agentic AI" /></p>
<p>Agentic AI is revolutionizing the way we build, deploy, and manage AI applications. With tools like MCP Server, developers can create intelligent agents that communicate effortlessly, streamlining workflows and enhancing productivity.</p>
<p>Ready to start? Explore the <strong>MCP Server</strong> on <a target="_blank" href="https://github.com/la-rebelion/mcp-server">GitHub</a> or check out the <a target="_blank" href="https://www.npmjs.com/package/@la-rebelion/mcp-server">npm package</a> to get started.  </p>
<p>At <strong>La Rebelion</strong>, we believe that tools, like microservices, should be small, focused, and powerful. With our MCP Server, you can bring your AI ideas to life without sweating the details. Let's build something incredible, together. </p>
<p>What will you create with Agentic AI and MCP Server? Let us know in the comments below!</p>
<p>Go Rebels! ✊🏻</p>
]]></content:encoded></item><item><title><![CDATA[The Future of DevOps is Agentic AI]]></title><description><![CDATA[Imagine a world where deploying your microservices or managing complex DevOps workflows is as simple as working alongside an AI-powered assistant. Thanks to Anthropic's Model Context Protocol (MCP), this vision is becoming a reality. MCP simplifies t...]]></description><link>https://rebelion.la/the-future-of-devops-is-agentic-ai</link><guid isPermaLink="true">https://rebelion.la/the-future-of-devops-is-agentic-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AI-automation]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Wed, 11 Dec 2024 00:11:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733875235178/e86a5393-3c6d-4189-8fa0-51c406bd34d6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a world where deploying your microservices or managing complex DevOps workflows is as simple as working alongside an AI-powered assistant. Thanks to Anthropic's <strong>Model Context Protocol (MCP)</strong>, this vision is becoming a reality. MCP simplifies the process of creating <strong>AI agents</strong> capable of handling complex tasks, enabling them to work together to solve problems, automate processes, and deliver results.</p>
<p>And here's the best part: MCP is not limited to a single language model like Claude. You can leverage <strong>any LLM</strong> (Large Language Model) to power your agents, providing unmatched flexibility for developers.</p>
<h2 id="heading-what-makes-agentic-ai-different">What Makes Agentic AI Different?</h2>
<p>AI agents are evolving beyond simple chat interfaces. They can now reason through complex problems, plan and execute tasks, and collaborate with other agents. This "agentic" capability makes them more than assistants; they're powerful tools for augmenting human potential.</p>
<h2 id="heading-core-components-of-ai-agents">Core Components of AI Agents:</h2>
<p>According to <a target="_blank" href="https://www.linkedin.com/posts/armand-ruiz_the-future-of-ai-is-agentic-lets-learn-activity-7271493883648208896-8dCj">Armand Ruiz</a>, VP of AI product at IBM, AI agents typically consist of four key components:</p>
<ol>
<li><p><strong>Agent Core</strong>: The brain of the agent, managing integrations and processes.</p>
</li>
<li><p><strong>Memory Module</strong>: Keeps track of past interactions and data for better context and continuity.</p>
</li>
<li><p><strong>Tools</strong>: APIs or other external systems that the agent can use to perform tasks.</p>
</li>
<li><p><strong>Planning Module</strong>: Enables advanced problem-solving by analyzing and strategizing to achieve goals.</p>
</li>
</ol>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/capabilities-ai-agents.png" alt="Capabilities of AI Agents" /></p>
<p>These features align closely with MCP's concepts in the <a target="_blank" href="https://modelcontextprotocol.io/docs/concepts/architecture">core architecture</a>, where <strong>Servers</strong> act as the agent core, <a target="_blank" href="https://modelcontextprotocol.io/docs/concepts/tools"><strong>Tools</strong></a> extend functionality, <a target="_blank" href="https://modelcontextprotocol.io/docs/concepts/resources"><strong>Resources</strong></a> provide context, and <a target="_blank" href="https://modelcontextprotocol.io/docs/concepts/prompts"><strong>Prompts</strong></a> guide workflows.</p>
<h2 id="heading-mcp-the-backbone-of-agentic-ai">MCP: The Backbone of Agentic AI</h2>
<p>With MCP, developers can create agents equipped with powerful primitives:</p>
<ul>
<li><p><strong>Tools</strong>: Let agents execute actions, interact with external systems, and perform computations.</p>
</li>
<li><p><strong>Resources</strong>: Provide structured data or content to enhance the agent's decision-making process.</p>
</li>
<li><p><strong>Prompts</strong>: Create reusable templates to guide workflows, chain interactions, and surface user-friendly commands (e.g., slash commands in a UI).</p>
</li>
</ul>
<p>This modular approach ensures that agents remain adaptable and powerful, capable of solving specific tasks or collaborating with other agents in a multi-agent framework.</p>
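<p>To make the three primitives concrete, here is roughly what each one declares. These are illustrative shapes based on the MCP documentation; refer to the spec for the exact types:</p>
<pre><code class="lang-typescript">// Illustrative declarations of the three MCP primitives (sketch only).
const toolDef = {
  name: "deploy",
  description: "Deploy a microservice to the cluster",
  inputSchema: { type: "object", properties: { service: { type: "string" } } },
};

const resourceDef = {
  uri: "file:///etc/app/config.yaml",
  name: "App configuration",
  mimeType: "application/yaml",
};

const promptDef = {
  name: "rollout",
  description: "Guide a user through a rollout",
  arguments: [{ name: "environment", required: true }],
};
</code></pre>
<p>A Tool is something the agent can do, a Resource is something it can read, and a Prompt is a reusable recipe that ties them together.</p>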
<p>The MCP Inspector further improves the developer experience, making it easy to debug and test your agents.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/mcp/mcp-components.png" alt="MCP Core Components" /></p>
<h2 id="heading-my-journey-building-an-agent-to-deploy-microservices">My Journey: Building an Agent to Deploy Microservices</h2>
<p>I'm currently working on an AI agent designed to <strong>deploy microservices on Kubernetes</strong>. By leveraging MCP, I've created an agent capable of:</p>
<ul>
<li><p>Using <strong>Tools</strong> to interact with Kubernetes APIs and manage deployments.</p>
</li>
<li><p>Accessing <strong>Resources</strong> to gather configuration files, cluster states, and deployment histories for better context.</p>
</li>
<li><p>Employing <strong>Prompts</strong> to streamline user interactions, enabling developers to input dynamic arguments or chain workflows effortlessly.</p>
</li>
</ul>
<p>This agent, named <strong>QBot</strong>, is a DevOps "SME" (Subject Matter Expert) that guides users through deploying microservices, managing Kubernetes environments, and troubleshooting deployments. By combining the power of AI with the flexibility of MCP, QBot simplifies complex tasks and empowers developers to focus on innovation.</p>
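<p>A deploy tool inside an agent like this ultimately produces Kubernetes manifests. As a hedged sketch (not QBot's actual code), the core of such a tool could build the Deployment spec from the parsed request, leaving the apply step to a Kubernetes client library:</p>
<pre><code class="lang-typescript">// Build a Kubernetes Deployment manifest from a parsed request.
// Applying it would go through the Kubernetes API (for example via a
// client library); this sketch only constructs the spec.
function buildDeployment(name: string, image: string, replicas: number) {
  return {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name: name, labels: { app: name } },
    spec: {
      replicas: replicas,
      selector: { matchLabels: { app: name } },
      template: {
        metadata: { labels: { app: name } },
        spec: { containers: [{ name: name, image: image }] },
      },
    },
  };
}
</code></pre>
<p>Keeping the manifest builder pure like this makes the tool easy to test, while the agent decides when and where to apply the result.</p>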
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/dzQ_dTaSJow">https://youtu.be/dzQ_dTaSJow</a></div>
<p> </p>
<h2 id="heading-why-mcp-and-ai-agents-matter">Why MCP and AI Agents Matter?</h2>
<ol>
<li><p><strong>Simplifies DevOps</strong>: Newcomers can overcome the steep learning curve of Kubernetes with an agent guiding them through deployments.</p>
</li>
<li><p><strong>Empowers Experts</strong>: Experienced engineers can focus on higher-value tasks by automating repetitive processes.</p>
</li>
<li><p><strong>Scales Seamlessly</strong>: MCP allows for using multiple LLMs, meaning agents can adapt to different models or languages as needed.</p>
</li>
<li><p><strong>Enhances Collaboration</strong>: Agents can work together, sharing resources and tools to tackle complex workflows that no single agent could handle alone.</p>
</li>
<li><p><strong>Boosts Efficiency</strong>: By automating tasks, agents reduce human error and speed up processes, leading to faster deployments and more reliable systems.</p>
</li>
</ol>
<p>The challenge <s>is</s> was to <strong>connect your data and APIs to LLMs</strong>, but with MCP, that problem is solved. You can keep everything on your PC or in an air-gapped environment and still connect your data and APIs to LLMs.</p>
<p>Now developers can focus on building agents tailored to their needs, leveraging the power of AI to streamline operations and push boundaries.</p>
<hr />
<h2 id="heading-how-you-can-get-started">How You Can Get Started</h2>
<p>This is just the beginning of a journey to transform automation. By replicating my approach, you can adapt this methodology for other workflows, from testing pipelines to automating IT operations.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/ai/building-tailored-ai-agent.png" alt="Building an AI Agent" /></p>
<p><strong>Want to build an agent tailored to your needs?</strong> Here's what to do:</p>
<ol>
<li><p><strong>Explore MCP</strong>: Familiarize yourself with its architecture and concepts <a target="_blank" href="https://modelcontextprotocol.io/docs/concepts/architecture">here</a>.</p>
</li>
<li><p><strong>Identify Tasks</strong>: Start with a specific, repetitive task that could benefit from automation.</p>
</li>
<li><p><strong>Design Your Agent</strong>: Use MCP's primitives to define your Tools, Resources, and Prompts.</p>
<ol>
<li><p>Break down the task into smaller tasks that the agent can handle. These are your <strong>Tools</strong>.</p>
</li>
<li><p>Identify the data or context needed to complete these tasks. These are your <strong>Resources</strong>.</p>
</li>
<li><p>Create a workflow that guides users through the task. These are your <strong>Prompts</strong>.</p>
</li>
</ol>
</li>
<li><p><strong>Iterate and Scale</strong>: Test your agent, refine its capabilities, and expand it to handle more tasks.</p>
</li>
</ol>
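<p>As a worked example of step 3, here is how a "deploy a microservice" task could be mapped onto the three primitives on paper, before writing any MCP code. The names are purely illustrative:</p>
<pre><code class="lang-typescript">// Paper design for a hypothetical deployment agent: decompose the task
// into Tools (actions), Resources (context), and Prompts (workflows).
const agentDesign = {
  tools: ["buildImage", "pushToRegistry", "applyManifest"],       // actions the agent can take
  resources: ["kubeconfig", "values.yaml", "deployment-history"], // context it reads
  prompts: ["deploy-service"], // reusable workflow that chains the tools
};
</code></pre>
<p>Once this mapping feels right, each entry becomes a concrete MCP Tool, Resource, or Prompt in your server.</p>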
<p>I am going to guide you through this framework in future posts, sharing insights and tutorials to help you build your own AI agents.</p>
<hr />
<h2 id="heading-the-road-ahead">The Road Ahead</h2>
<p>The future of AI is agentic. As these technologies mature, we'll increasingly collaborate with AI agents to streamline operations and push boundaries. MCP provides the framework to start building these agents today, with the flexibility to use any LLM that suits your project.</p>
<p>I'm thrilled to share my journey and help you apply these concepts to your automation projects. Together, we'll explore how AI agents can revolutionize workflows.</p>
<p><strong>Subscribe and follow "La Rebelion"</strong> for updates, tutorials, and insights as we navigate this exciting landscape. Let's enjoy the ride!</p>
<p>Take a look at our <a target="_blank" href="https://www.youtube.com/@LaRebelion">YouTube channel</a> and subscribe to stay updated with our latest videos, I will be sharing more about AI agents, MCP, and how to build your own AI agents.</p>
<p>Let's simplify the complex, automate the mundane, and unlock the full potential of AI agents with MCP.</p>
<p>Go Rebels! ✊🏻</p>
<h2 id="heading-update-agentico">Update - Agentico</h2>
<p><a target="_blank" href="https://agentico.dev">Agentico</a>: where AI meets simplicity. I am working on this amazing new idea to solve some of the problems caused by the proliferation of AI tools. If you, like me, are asking:</p>
<ul>
<li><p><em>What tools already exist?</em></p>
</li>
<li><p><em>Which tools fit my needs?</em></p>
</li>
<li><p><em>How do I find tools across diverse environments?</em></p>
</li>
</ul>
<p>Identifying the right tools across multiple environments feels like finding a needle in a haystack. <a target="_blank" href="https://agentico.dev/tools">Agentico Tools</a> discovery will help you tackle these problems.</p>
<p>Stay tuned!</p>
]]></content:encoded></item><item><title><![CDATA[Deploying Microservices in Kubernetes with AI Anthropic's MCP]]></title><description><![CDATA[Forget Code, Write Commands in Plain English: Meet MCP 🛠️💬
Let’s face it: writing scripts in Python or Bash isn’t everyone’s cup of tea. But what if I told you that those days might be over? Anthropic's Model Context Protocol (MCP) is here to chang...]]></description><link>https://rebelion.la/deploying-microservices-in-kubernetes-with-ai-anthtopics-mcp</link><guid isPermaLink="true">https://rebelion.la/deploying-microservices-in-kubernetes-with-ai-anthtopics-mcp</guid><category><![CDATA[mcp]]></category><category><![CDATA[AI]]></category><category><![CDATA[#anthropic]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Mon, 02 Dec 2024 14:57:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733151037296/c9ed91c6-b3a1-4548-89ec-c37552cb60a4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Forget Code, Write Commands in Plain English</strong>: Meet MCP 🛠️💬</p>
<p>Let’s face it: writing scripts in Python or Bash isn’t everyone’s cup of tea. But what if I told you that those days might be over? <strong>Anthropic's Model Context Protocol (MCP)</strong> is here to change the agentic field; it’s the innovation that’s leveling the playing field in AI.</p>
<p>MCP flips the script on the traditional barriers to AI adoption. You no longer need to wrestle with setting up complex environments or mastering niche programming languages to make AI work in your local setup. All you need is the right <a target="_blank" href="https://github.com/modelcontextprotocol/servers"><strong>MCP Server</strong></a>, and you’re off to the races.</p>
<h3 id="heading-scripts-simplified-plain-english-for-the-win">Scripts, Simplified: Plain English for the Win 🚀</h3>
<p>Imagine writing a script not in code, but in <strong>natural language</strong>. Yeah, you read that right. Something like:</p>
<blockquote>
<p><em>"Deploy the retail microservice ACME 1.1 in our Dev Kubernetes cluster."</em></p>
</blockquote>
<p>That’s it. No loops. No syntax errors. Just plain, simple instructions. MCP handles the heavy lifting, translating your request into action. It’s like magic but powered by AI.</p>
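<p>Under the hood, the model turns that sentence into a structured tool call. Per the MCP spec, a <code>tools/call</code> request carries the tool name and its arguments; the payload would look roughly like this (the tool and argument names are illustrative):</p>
<pre><code class="lang-typescript">// Roughly what "Deploy the retail microservice ACME 1.1 in our Dev
// Kubernetes cluster" becomes once the model selects a tool.
// Illustrative sketch of an MCP tools/call request payload.
const toolCall = {
  method: "tools/call",
  params: {
    name: "deploy_microservice",
    arguments: {
      service: "acme",
      version: "1.1",
      environment: "dev",
    },
  },
};
</code></pre>
<p>The MCP server executes the matching tool and streams the result back, so the user never sees this plumbing.</p>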
<h3 id="heading-why-does-this-matter">Why Does This Matter? 🧐</h3>
<p>Because it puts the power of AI into <em>everyone’s</em> hands—not just developers. Take a Product Manager, for example. Sure, they’re not the ones deploying microservices (and they probably shouldn’t be 😅). But if they <strong>can</strong>, then so can:</p>
<ul>
<li><p>A marketer running a campaign analysis.</p>
</li>
<li><p>A data analyst pulling metrics.</p>
</li>
<li><p>You, tinkering with your side hustle on a Sunday afternoon.</p>
</li>
</ul>
<p>It’s accessibility at its finest, and it’s making tech more inclusive.</p>
<h3 id="heading-ready-to-see-mcp-in-action">Ready to See MCP in Action? 🎥</h3>
<p>In another post, I’ll walk you through a <strong>live demo</strong> of MCP in action. I’ll show you how to deploy a microservice in a Kubernetes cluster using nothing but natural language “commands” (prompts) with <a target="_blank" href="https://claude.ai/download"><strong>Anthropic's Claude for Desktop</strong></a> and <strong>QBot MCP Server</strong> by <em>La Rebelion</em>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733151437545/5a02d019-3d30-4c0c-8e9c-9008c76fd378.png" alt class="image--center mx-auto" /></p>
<p>No coding. No fuss. Just results. The sky is the limit!</p>
<p>If this sounds like something you don’t want to miss (and trust me, you don’t), hit that follow button. Let’s take this journey together, one natural-language script at a time.</p>
<p>Do you have someone in mind who could be interested and benefit from this? Share the love and send them the article link; I will appreciate it. 🥰</p>
<p><strong>Go Rebels! ✊🏻</strong></p>
]]></content:encoded></item><item><title><![CDATA[CSAR and SBOM for Airgap Kubernetes: Strategies for Enterprise Deployments]]></title><description><![CDATA[How to leverage the strengths of CSAR and SBOM to move images to private registries in airgapped environments?
CSAR focus on how to deploy and manage applications, while SBOM secures what is in them. What if we combined both to orchestrate securely a...]]></description><link>https://rebelion.la/csar-and-sbom-for-airgap-kubernetes-strategies-for-enterprise-deployments</link><guid isPermaLink="true">https://rebelion.la/csar-and-sbom-for-airgap-kubernetes-strategies-for-enterprise-deployments</guid><category><![CDATA[csar]]></category><category><![CDATA[skopeo]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[airgap]]></category><category><![CDATA[sbom]]></category><category><![CDATA[Helm]]></category><category><![CDATA[Crane]]></category><category><![CDATA[Kustomize]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Thu, 31 Oct 2024 12:24:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730324743248/a40859fd-5a31-4a61-9f5b-452c7d9e7d44.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>How to leverage the strengths of CSAR and SBOM to move images to private registries in airgapped environments?</strong></p>
<p><strong>CSAR</strong> focuses on <strong>how</strong> to deploy and manage applications, while <strong>SBOM</strong> secures <strong>what</strong> is in them. What if we combined both to orchestrate securely across public, private, and airgapped environments?</p>
<p>DevOps and Kubernetes in airgap environments are challenging, and in this post we'll explore strategies for deploying cloud-native apps on Kubernetes in airgapped environments. One of the bigger challenges I've noticed is still moving images to private registries; it's a roadblock that continues to trip people up, especially in airgapped environments. Security is another big concern in these environments, and it's important to have a strategy in place to ensure that your Kubernetes cluster is secure.</p>
<p>I aim to explore how we can leverage the strengths of different tools and frameworks, avoiding building from scratch and instead using well-known standards to solve this problem. I'll look at how we can combine the strengths of <strong>CSAR</strong> and <strong>SBOM</strong>, extending them into a new approach for moving images to private registries in airgapped environments. I'll also explore some of the alternatives available today and how they can be used to solve this problem.</p>
<ul>
<li><p>Why are the tools we have today not enough to solve this problem?</p>
</li>
<li><p>What are the challenges, and what alternatives do we have to move images to private registries in airgapped environments?</p>
</li>
</ul>
<p>These are some of the questions I'll be exploring in this post. I'll also be sharing some ideas on how we can leverage the strengths of different tools and frameworks to create a new approach to move images to private registries in airgapped environments. Keep reading to learn more!</p>
<h2 id="heading-the-challenge-of-moving-images-to-private-registries-in-airgapped-environments">The Challenge of Moving Images to Private Registries in Airgapped Environments</h2>
<p>While we have amazing tools to deploy cloud-native applications, the process of moving images to private registries is still a challenge. Think about it: on bare-metal environments or VMs, managing RPMs or DEBs was (and is) also a challenge. I faced this problem in the past, working on a project that required <a target="_blank" href="https://rebelion.la/install-kubernetes-docker-offline">installing a Vanilla Kubernetes cluster in a private network</a>: I had to install a lot of packages on a lot of servers, and the servers were in a private network, so we had to download the packages on a server with internet access, move them to the private network, and then install them on the servers. Well, we live in cycles, right? Now the story is the same, but with containers: we have to download the images from a server with internet access, move them to the private network, and then deploy them to the servers.</p>
<p>In fast-paced and agile enterprises, where <a target="_blank" href="https://scaledagileframework.com/business-agility/">business agility</a> drives multiple value streams, each with unique requirements, dependencies, and security demands, managing Kubernetes or containerized clusters across teams can feel overwhelming. The more projects and clusters, the more time-consuming and error-prone the process becomes. Worse, moving images across networks can expose your systems to serious security threats. Now imagine doing all of this in airgapped environments—no internet access, and even tougher challenges. You need a solution to move images to private registries seamlessly, without compromising security. The stakes are high, and it's time to rethink how we tackle this.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/airgap/the-three-challenges-on-airgap.webp" alt="challenges-airgap" /></p>
<p>We can summarize the challenges of moving images to private registries in airgapped environments as follows:</p>
<ul>
<li><p>The first challenge is <strong>moving images to private registries</strong>. In airgapped environments, you can't pull images from public registries like <a target="_blank" href="https://hub.docker.com/">Docker Hub</a> or <a target="_blank" href="https://quay.io">Quay.io</a>. You need to move images to a private registry in the airgapped environment.</p>
</li>
<li><p>The second challenge is <strong>security</strong>. In airgapped environments, security is a top priority. You need to <strong>ensure that your Kubernetes cluster is secure</strong> and that you're not exposing your cluster to security risks.</p>
</li>
<li><p>The third challenge is <strong>orchestration</strong>. In airgapped environments, you need to <strong>orchestrate and manage dependencies</strong>. You need a way to track dependencies and vulnerabilities and ensure that your deployments are secure.</p>
</li>
</ul>
<p>What alternatives do we have to move images to private registries in airgapped environments?</p>
<h2 id="heading-alternatives-to-move-images-in-airgapped-environments">Alternatives to Move Images in Airgapped Environments</h2>
<p>In modern enterprise environments, particularly airgapped and hybrid ones, efficient package management and secure software distribution are critical. I wondered how some of the frameworks and tools we have today could help us solve this problem: <a target="_blank" href="https://winery.readthedocs.io/en/latest/user/yml/index.html#export-csar">CSAR</a>, <a target="_blank" href="https://www.cisa.gov/sbom">SBOM</a>, <a target="_blank" href="https://docs.docker.com/compose/">Docker Compose</a>, <a target="_blank" href="https://helm.sh">Helm</a>, <a target="_blank" href="https://kubectl.docs.kubernetes.io/installation/kustomize/">Kustomize</a>, <a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/">ArgoCD</a>, and many others. Each of these tools has unique strengths, addressing aspects like dependencies, vulnerabilities, orchestration, and inventory management.</p>
<p>On their own, these tools cannot tackle the problem of moving images to private registries in airgapped environments. But what if we could combine their strengths into a new approach?</p>
<p>Why are the tools we have today not enough to solve this problem?</p>
<h2 id="heading-why-helm-kustomize-and-argocd-are-not-enough">Why Are Helm, Kustomize, and ArgoCD Not Enough?</h2>
<p>Restricted environments like airgapped or high-security ones require a different approach to conventional software deployment. Firstly, standard tools like Helm, Kustomize, or Docker Compose are not designed to handle the complexities of airgapped environments: they cannot move images to private registries, track dependencies and vulnerabilities, or orchestrate deployments across multiple environments. They focus on defining and deploying applications, not on managing the constraints of airgapped environments.</p>
<p>Secondly, the alternatives built on top of current solutions are not enough either. Tools like Crane and Skopeo provide ways to move images between registries, but they cannot track dependencies and vulnerabilities, orchestrate deployments, or manage software packages securely in airgapped environments. Tools like Hauler and Zarf focus on packaging and distributing assets in airgapped environments; these are great alternatives, but they are tightly coupled to specific <strong>non-standard package formats</strong> that may not suit all use cases.</p>
<p>Thirdly, the lack of transparency and security in the software supply chain, combined with the absence of a unified solution presents a significant challenge. With mandates like the White House’s push for <a target="_blank" href="https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/">enhanced Software Supply Chain Security</a>, organizations need a way to securely move images to private registries in airgapped environments, track dependencies, and vulnerabilities, and orchestrate deployments across multiple environments. A comprehensive solution that ensures security, transparency, and compliance—while managing software packages seamlessly and securely in airgapped setups—is essential.</p>
<h2 id="heading-csar-and-sbom-two-frameworks-two-solutions">CSAR and SBOM: Two Frameworks, Two Solutions</h2>
<p>The <strong>Cloud Service Archive</strong> (<strong>CSAR</strong>) and <strong>Software Bill of Materials</strong> (<strong>SBOM</strong>) frameworks are two solutions aimed at solving different problems. CSAR focuses on orchestrating deployments of network functions, while SBOMs, like CycloneDX and SPDX, ensure security and transparency by providing a detailed inventory of components. But as organizations move toward stricter security practices, airgapped environments, and hybrid clouds, a new approach is needed; one that blends the strengths of both.</p>
<p><strong>CSAR packages simplify orchestration and deployment</strong> in NFV environments, but they don't track dependencies or vulnerabilities. <strong>SBOMs (CycloneDX/SPDX) excel at security and compliance</strong>, listing components and vulnerabilities, but they don't provide a way to orchestrate the inventory and dependencies for deployments. CSAR's role is about <strong>how to deploy</strong>, while SBOM's role is <strong>what is deployed</strong> and how to manage its risks. A new approach is needed to combine the strengths of both frameworks. This approach should provide a way to manage and set up a private registry, move images to it, and orchestrate deployments while tracking dependencies and vulnerabilities. It should also support airgapped environments and hybrid clouds.</p>
<p>An inventory of components, plus orchestration to move images to private registries in airgapped environments, has become a must-have for modern enterprise projects. Thinking about how to solve this problem, with either a tool or a service, I came up with the idea of building one. The tool is part of the <a target="_blank" href="https://k1s.sh">K1s project</a>, and it's called <a target="_blank" href="https://k1s.sh/airgap">K1s Airgap</a> <strong>SABOR</strong> (Software Airgap Bill of Materials Resolver): a simple tool you can run on your local machine to move images to private registries in airgapped environments.</p>
<p>Before we dive into the details of SABOR, let's take a look at the challenges of deploying Kubernetes in airgap environments, and what options you have.</p>
<h2 id="heading-alternatives-to-move-images-to-private-registries-in-airgapped-environments">Alternatives to Move Images to Private Registries in Airgapped Environments</h2>
<p>There are several alternatives to move images to private registries in airgapped environments. Here are some of the most common ones:</p>
<ul>
<li><p><strong>Docker Save and Load</strong>: You can use the <code>docker save</code> and <code>docker load</code> commands to save images to a tarball and then load them into a private registry (as demoed in my video <a target="_blank" href="https://youtube.com/clip/Ugkx-z_mQVdJvz05PX-_SiaKy5uXqAHj6fZu?si=hMV8Wa69GdujOVCe">here</a>). This is a manual process and can be time-consuming.</p>
<ul>
<li>If you don't have or don't want to install Docker (or any other tool) you can do it with <code>curl</code>! Yes, check this video demo I made <a target="_blank" href="https://youtube.com/shorts/7wFapdIqjA8">here</a>.</li>
</ul>
</li>
<li><p><strong>Crane</strong> and <strong>Skopeo</strong>: <a target="_blank" href="https://github.com/google/go-containerregistry/tree/main/cmd/crane">Crane</a> "is a tool for interacting with remote images and registries". <a target="_blank" href="https://github.com/containers/skopeo">Skopeo</a> is a tool that helps you copy images between registries. You can use Crane or Skopeo to move images to private registries in airgapped environments.</p>
</li>
<li><p><strong>Hauler</strong>: <a target="_blank" href="https://ranchergovernment.com/products/hauler">Hauler</a> is a tool "designed and built to solve the challenges of collecting, packaging, and distributing assets in airgapped environments".</p>
</li>
<li><p><strong>Zarf</strong>: <a target="_blank" href="https://zarf.dev">Zarf</a> is a tool that "provides the ability to package necessary components from the internet and securely deliver all of the files and dependencies needed to run an application in a disconnected environment" (airgapped).</p>
</li>
</ul>
<p>I ordered this list by tool complexity, from the simplest to the most complex (based on my experience). On enterprise projects, you may need a combination of these tools to move images to private registries in airgapped environments. Due to security restrictions, the ISV (Independent Software Vendor) will often share a set of images that you have to move into your private registry. In self-service projects, the ISV and system integrators move the images themselves, but they may have restricted access to the Kubernetes cluster, so they cannot use tools that need cluster role permissions; some of the tools mentioned above <strong>require cluster role permissions to deploy operators or add CRDs</strong>.</p>
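<p>As a hedged sketch of the simplest approach above (save, load, retag, push), here is a minimal shell script. The image names and the registry host <code>registry.airgap.local:5000</code> are placeholders, and by default the script runs as a dry run that only prints the commands it would execute; set <code>DRY_RUN=false</code> to actually run them.</p>

```shell
#!/bin/sh
# Dry-run sketch of mirroring images into a private registry across an
# airgap. IMAGES and DEST_REGISTRY are placeholder values (assumptions).
IMAGES="nginx:1.27 redis:7.4"
DEST_REGISTRY="registry.airgap.local:5000"
DRY_RUN="${DRY_RUN:-true}"

run() {
  # Print the command in dry-run mode; execute it otherwise.
  if [ "$DRY_RUN" = "true" ]; then echo "$@"; else "$@"; fi
}

mirror_images() {
  # Connected side: pull each image and save it to a tarball.
  for image in $IMAGES; do
    tarball="$(echo "$image" | tr '/:' '__').tar"
    run docker pull "$image"
    run docker save -o "$tarball" "$image"
  done
  # ...carry the tarballs across the airgap (USB drive, data diode, etc.)...
  # Disconnected side: load, retag, and push into the private registry.
  for image in $IMAGES; do
    tarball="$(echo "$image" | tr '/:' '__').tar"
    run docker load -i "$tarball"
    run docker tag "$image" "$DEST_REGISTRY/$image"
    run docker push "$DEST_REGISTRY/$image"
  done
}

mirror_images
```

<p>With Crane or Skopeo, the same mirroring can be done with a single copy command per image, avoiding the intermediate tarballs, but the overall flow (connected side, transfer, disconnected side) stays the same.</p>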
<p>There is another project incubating in the Linux Foundation, called <a target="_blank" href="https://sigstore.dev">Sigstore</a>, that aims to provide a way to sign and verify software artifacts. Signing and verifying software artifacts is a must-have in airgapped environments.</p>
<p>Another great initiative is <a target="_blank" href="https://github.com/kubernetes-sigs/bom">BOM</a> (Bill of Materials), a Kubernetes SIGs project that aims to provide a way to track dependencies and vulnerabilities in Kubernetes deployments.</p>
<p>Also, Zarf is a great project leveraging the power of the SBOM framework, but I found it a bit complex to use, and it would be hard to integrate into a pipeline already in place in enterprise projects; it is focused on packaging and delivering assets in airgapped environments.</p>
<p>As I said, why not leverage the different strengths of these tools to create a new approach to move images to private registries in airgapped environments?</p>
<h2 id="heading-sabor-a-new-approach-extending-sbom-and-csar-to-move-images-in-airgapped-environments">SABOR: A New Approach Extending SBOM and CSAR to Move Images in Airgapped Environments</h2>
<p>Recognizing the strengths of both CSAR and SBOM, the airgap tool I'm building takes a hybrid approach.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/airgap/fusion-to-stronger-solution-for-airgap.webp" alt="Breezo a powerful fusion" /></p>
<h3 id="heading-sbom-strengths">SBOM strengths:</h3>
<p>SBOMs, like CycloneDX and SPDX, provide a transparent view of software composition. They focus on the what: tracking all software components, their versions, dependencies, and potential security vulnerabilities. In airgapped or high-security environments, knowing exactly what's inside each software package is critical for managing risks and maintaining compliance.</p>
<ul>
<li><p><strong>Security Focus</strong>: SBOMs shine when it comes to identifying vulnerable components and managing risk. With increasing software supply chain attacks, having this transparency is crucial.</p>
</li>
<li><p><strong>Compliance and Auditability</strong>: SBOMs help organizations meet regulatory and industry standards by providing detailed insight into software components.</p>
</li>
</ul>
<h4 id="heading-limitations-of-sbom">Limitations of SBOM:</h4>
<ul>
<li><p><strong>Lack of Orchestration</strong>: SBOMs track what is inside a package but don't help deploy or manage the how or where of that package across cloud environments.</p>
</li>
<li><p><strong>Not Built for Distribution</strong>: SBOMs don't inherently provide mechanisms to package and distribute software effectively, particularly in complex, hybrid, or airgapped environments.</p>
</li>
</ul>
<h3 id="heading-csar-the-orchestration-powerhouse-with-limitations">CSAR: The Orchestration Powerhouse with Limitations</h3>
<p>CSAR packages are a solution designed to simplify the deployment and lifecycle management of network services, especially in NFV (Network Function Virtualization) environments. They bundle everything needed for deploying these services; service descriptors, configuration files, and scripts; making it easier to orchestrate and deploy complex systems like Virtual Network Functions (VNFs) across cloud environments.</p>
<p>However, CSAR has limitations:</p>
<ul>
<li><p><strong>Deployment Focused</strong>: While it's excellent at managing the how and where of deployments, CSAR doesn't track the what; it doesn't list or manage software components, dependencies, or vulnerabilities.</p>
</li>
<li><p><strong>Limited Security Insight</strong>: CSAR doesn't provide a detailed breakdown of software components, leaving gaps when it comes to tracking software risks or ensuring compliance.</p>
</li>
</ul>
<h3 id="heading-how-does-sabor-extend-all-of-these-strengths">How does SABOR extend all of these strengths?</h3>
<p>Recognizing the strengths of both CSAR and SBOM, SABOR takes a hybrid approach. By leveraging SBOM data, the tool will provide a full breakdown of components while allowing users to securely package and distribute software without internet access. This is particularly vital for industries like telecom or defense, where security and compliance are paramount.</p>
<p><img src="https://cdn.statically.io/img/cdn.rebelion.la/img/airgap/Breezo-powerful-superhero-aura-of-strength.webp" alt="Breezo a powerful fusion" /></p>
<p>SABOR will:</p>
<ul>
<li><p>Extend SBOMs by not just tracking the what but also leveraging that information to create secure, compliant packages for airgapped and hybrid cloud environments.</p>
</li>
<li><p>Focus on Distribution: This tool will handle the how and where by orchestrating package distribution across public clouds, private clouds, and airgapped environments. This addresses CSAR's strength in orchestration while extending SBOM's transparency to a more practical application.</p>
</li>
<li><p>Extend CSAR's orchestration capabilities to include SBOM data, providing a comprehensive solution for managing software packages across multiple environments.</p>
</li>
</ul>
<h4 id="heading-future-enhancements-orchestrating-across-multiple-environments">Future Enhancements: Orchestrating Across Multiple Environments</h4>
<p>Looking ahead, this tool will not only package and secure software components but also orchestrate their distribution:</p>
<ul>
<li><p><strong>Multi-Cloud Compatibility</strong>: It will support deployment in public and private clouds, allowing organizations to seamlessly move packages between environments.</p>
</li>
<li><p><strong>Hybrid Cloud Orchestration</strong>: Enterprises working with a mix of cloud and airgapped environments will benefit from centralized orchestration, ensuring that the right packages are deployed in the right places, with full transparency and control over their contents.</p>
</li>
</ul>
<p>As software supply chains become increasingly complex, and as organizations move toward hybrid cloud and airgapped solutions, there's a growing need for tools that offer both security and orchestration. CSAR helps with orchestration, but it lacks transparency and security insights. SBOMs provide transparency but don't manage how or where packages are deployed.</p>
<h2 id="heading-bonus-recipies-on-current-tools-in-action">BONUS: Recipes with Current Tools in Action</h2>
<p>Let's explore a hypothetical example of how we could use CSAR and SBOM to move images to private registries in airgapped environments and orchestrate deployments.</p>
<h3 id="heading-csar-orchestrating-deployments">CSAR: Orchestrating Deployments</h3>
<p>CSAR packages simplify orchestration and deployment in NFV environments, but they don't track dependencies or vulnerabilities. They focus on the <strong>how</strong> of deployments, not the what. CSAR intentionally leaves out software composition details, focusing instead on orchestrating deployments. This makes it easier to manage complex network services but leaves gaps in security and compliance.</p>
<p>You could deploy containers with CSAR using a specific <a target="_blank" href="https://openbaton.github.io/documentation/vnfm-docker/">VNFM (Virtual Network Function Manager) that supports containers</a>, but it is not the best tool for that because of its complexity; we want to stick to Kubernetes, so it is better to use Helm or Kustomize for that.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">By design, CSAR templates are meant to deploy network functions and network services, NOT applications, microservices, containers, or images.</div>
</div>

<p>How could we leverage the strengths of CSAR to move images to private registries in airgapped environments? Again, this is not the focus of CSAR, but it is a common problem we face in enterprise projects. Let's walk through a simple hypothetical example where you're deploying a web application using a CSAR package with a TOSCA template. The web app needs two things:</p>
<ul>
<li><p>A database.</p>
</li>
<li><p>An application server that connects to the database.</p>
</li>
</ul>
<h4 id="heading-tosca-template-breakdown">TOSCA Template Breakdown</h4>
<p>This template example will describe:</p>
<ul>
<li><p><strong>Node templates</strong>: The components (like the database and app server).</p>
</li>
<li><p><strong>Relationships</strong>: How these components connect.</p>
</li>
<li><p><strong>Dependencies</strong>: The order in which things need to happen.</p>
</li>
</ul>
<p>Here's what the TOSCA template might look like in a simplified version:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">topology_template:</span>
  <span class="hljs-attr">node_templates:</span>
    <span class="hljs-attr">my_database:</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">Database</span>
      <span class="hljs-attr">properties:</span>
        <span class="hljs-attr">db_name:</span> <span class="hljs-string">my_app_db</span>
        <span class="hljs-attr">db_user:</span> <span class="hljs-string">user123</span>
        <span class="hljs-attr">db_password:</span> <span class="hljs-string">pass123</span>
      <span class="hljs-attr">interfaces:</span>
        <span class="hljs-attr">Standard:</span>
          <span class="hljs-attr">create:</span> <span class="hljs-string">db_setup_script.sh</span> <span class="hljs-comment"># This script sets up the database</span>

    <span class="hljs-attr">my_app_server:</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">WebApplication</span>
      <span class="hljs-attr">properties:</span>
        <span class="hljs-attr">app_name:</span> <span class="hljs-string">my_web_app</span>
        <span class="hljs-attr">app_port:</span> <span class="hljs-number">8080</span>
      <span class="hljs-attr">requirements:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">database_connection:</span>
            <span class="hljs-attr">node:</span> <span class="hljs-string">my_database</span> <span class="hljs-comment"># This tells the app to connect to the database</span>
      <span class="hljs-attr">interfaces:</span>
        <span class="hljs-attr">Standard:</span>
          <span class="hljs-attr">create:</span> <span class="hljs-string">app_setup_script.sh</span> <span class="hljs-comment"># This script installs and starts the app</span>
</code></pre>
<h4 id="heading-what-happens-when-the-orchestrator-follows-this-template">What Happens When the Orchestrator Follows This Template</h4>
<p>Orchestrator reads the TOSCA template: It starts by identifying the two components: <code>my_database</code> and <code>my_app_server</code>.</p>
<p><strong>Step 1: Set up the database:</strong></p>
<ul>
<li><p>The orchestrator sees that the database (called <code>my_database</code>) needs to be created first.</p>
</li>
<li><p>It runs the <code>db_setup_script.sh</code> (defined under the create interface) to set up the database with the right name and credentials (<code>my_app_db</code>, <code>user123</code>, <code>pass123</code>).</p>
</li>
</ul>
<p><strong>Step 2: Set up the application server:</strong></p>
<ul>
<li><p>Next, the orchestrator looks at the <code>my_app_server</code> node and sees that it needs the database to be ready before the app can be set up.</p>
</li>
<li><p>The template says the app server should connect to the database (<code>my_database</code>), so the orchestrator knows to establish that link.</p>
</li>
<li><p>After the database is ready, the orchestrator runs the <code>app_setup_script.sh</code> to install and start the web application.</p>
</li>
</ul>
<p><strong>How the Orchestrator Knows the Sequence</strong></p>
<p>The orchestrator knows:</p>
<ul>
<li><p><strong>What to do first</strong>, because the app server depends on the database (defined in the requirements section: <code>database_connection: my_database</code>).</p>
</li>
<li><p><strong>How to do it</strong>, because the TOSCA template specifies the exact scripts to run for each component (like <code>db_setup_script.sh</code> for the database and <code>app_setup_script.sh</code> for the application).</p>
</li>
</ul>
<p><strong>Summary of the Orchestrator's Process:</strong></p>
<ol>
<li><p>Deploy the database first.</p>
</li>
<li><p>Wait for the database to be ready.</p>
</li>
<li><p>Deploy the application server after the database is up.</p>
</li>
<li><p>Make sure the application server can connect to the database.</p>
</li>
</ol>
<p>This sequence is clearly laid out in the TOSCA template, and the orchestrator follows these instructions step by step to ensure everything works properly.</p>
<p>You may be wondering: "hey, this looks like a Helm chart or a Docker Compose file!" Yes, it is similar. You can think of TOSCA in CSAR as a more complex, cloud-native version of Docker Compose. Both tools help you define and orchestrate multiple services, but they operate at different levels: Docker Compose focuses on containers, while TOSCA/CSAR works at a higher level, including virtual machines, networks, and cloud services.</p>
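<p>To make the comparison concrete, here is a hypothetical <code>docker-compose.yml</code> mirroring the TOSCA example above. The images are placeholders (the database image is an assumption), and Compose's <code>depends_on</code> plays the role of TOSCA's <code>requirements</code> section.</p>

```yaml
# Hypothetical Compose file equivalent to the TOSCA node templates above.
services:
  my_database:
    image: postgres:16            # placeholder database image (assumption)
    environment:
      POSTGRES_DB: my_app_db
      POSTGRES_USER: user123
      POSTGRES_PASSWORD: pass123
  my_app_server:
    image: myrepo/webapp:1.0      # placeholder application image
    ports:
      - "8080:8080"
    depends_on:
      - my_database               # Compose's analogue of TOSCA's requirements
```

<p>Just like the orchestrator following the TOSCA template, Compose starts <code>my_database</code> before <code>my_app_server</code>, but only within a single Docker host rather than across clouds.</p>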
<p>One advantage of TOSCA VNF packages is that they can be deployed across different cloud environments, and you can package everything you need for a network service, including your container images, in a single archive. If you need to move images from different registries into airgapped environments, you will have to extend the logic of the TOSCA template to move the images to the private registry in the airgapped environment, and this is the problem we are trying to solve.</p>
<p>To extend the TOSCA template to include the necessary components for moving images between registries, we can add node templates representing the source and destination registries, as well as the logic to move the images. This approach allows us to define relationships between components and orchestrate the movement of container images in a declarative way.</p>
<p>Here's how we could extend the TOSCA template to include image transfer:</p>
<h4 id="heading-extended-tosca-template-example">Extended TOSCA Template Example</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">topology_template:</span>
  <span class="hljs-attr">node_templates:</span>
    <span class="hljs-comment"># Application Deployment Node</span>
    <span class="hljs-attr">web_application:</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">tosca.nodes.WebApplication</span>
      <span class="hljs-attr">properties:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"MyWebApp"</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">"1.0"</span>
        <span class="hljs-attr">container_image:</span> <span class="hljs-string">"myrepo/webapp:1.0"</span>
      <span class="hljs-attr">requirements:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">web_server</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">registry_pull:</span> <span class="hljs-string">source_registry</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">registry_push:</span> <span class="hljs-string">destination_registry</span>

    <span class="hljs-comment"># Source Docker Registry</span>
    <span class="hljs-attr">source_registry:</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">tosca.nodes.Registry</span>
      <span class="hljs-attr">properties:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"Source Docker Registry"</span>
        <span class="hljs-attr">url:</span> <span class="hljs-string">"https://source-registry.com"</span>
        <span class="hljs-attr">credentials:</span> 
          <span class="hljs-attr">username:</span> <span class="hljs-string">"source_user"</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">"source_password"</span>
      <span class="hljs-attr">interfaces:</span>
        <span class="hljs-attr">Standard:</span>
          <span class="hljs-attr">operations:</span>
            <span class="hljs-attr">pull_image:</span>
              <span class="hljs-attr">description:</span> <span class="hljs-string">"Pull the image from source registry"</span>
              <span class="hljs-attr">implementation:</span> <span class="hljs-string">scripts/pull_image.sh</span>
              <span class="hljs-attr">inputs:</span>
                <span class="hljs-attr">image_name:</span> { <span class="hljs-attr">get_property:</span> [<span class="hljs-string">web_application</span>, <span class="hljs-string">container_image</span>] }

    <span class="hljs-comment"># Destination Docker Registry</span>
    <span class="hljs-attr">destination_registry:</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">tosca.nodes.Registry</span>
      <span class="hljs-attr">properties:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"Destination Docker Registry"</span>
        <span class="hljs-attr">url:</span> <span class="hljs-string">"https://destination-registry.com"</span>
        <span class="hljs-attr">credentials:</span> 
          <span class="hljs-attr">username:</span> <span class="hljs-string">"dest_user"</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">"dest_password"</span>
      <span class="hljs-attr">interfaces:</span>
        <span class="hljs-attr">Standard:</span>
          <span class="hljs-attr">operations:</span>
            <span class="hljs-attr">push_image:</span>
              <span class="hljs-attr">description:</span> <span class="hljs-string">"Push the image to destination registry"</span>
              <span class="hljs-attr">implementation:</span> <span class="hljs-string">scripts/push_image.sh</span>
              <span class="hljs-attr">inputs:</span>
                <span class="hljs-attr">image_name:</span> { <span class="hljs-attr">get_property:</span> [<span class="hljs-string">web_application</span>, <span class="hljs-string">container_image</span>] }

    <span class="hljs-comment"># Web Server</span>
    <span class="hljs-attr">web_server:</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">tosca.nodes.Compute</span>
      <span class="hljs-attr">properties:</span>
        <span class="hljs-attr">os:</span> <span class="hljs-string">"linux"</span>
        <span class="hljs-attr">instance_type:</span> <span class="hljs-string">"t2.medium"</span>
      <span class="hljs-attr">interfaces:</span>
        <span class="hljs-attr">Standard:</span>
          <span class="hljs-attr">operations:</span>
            <span class="hljs-attr">configure:</span>
              <span class="hljs-attr">implementation:</span> <span class="hljs-string">scripts/configure_web_server.sh</span>
</code></pre>
<p>We can now define the logic for pulling the image from the source registry and pushing it to the destination registry in the <code>pull_image.sh</code> and <code>push_image.sh</code> scripts. These scripts can use tools like <code>docker</code> or <code>skopeo</code> to interact with the registries and move the images between them.</p>
<p>This extended TOSCA template allows us to define the entire deployment process, including moving the container image between registries, in a single declarative file. The orchestrator can then follow this template to deploy the application and manage the image transfer process automatically.</p>
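<p>As a sketch of what those scripts could contain, here is one hypothetical implementation of <code>pull_image.sh</code> and <code>push_image.sh</code> using Skopeo's <code>dir:</code> transport, so the image can be carried across the airgap as plain files. The registry URLs and credentials are the placeholder values from the template above, and the script defaults to a dry run that prints the Skopeo commands instead of executing them.</p>

```shell
#!/bin/sh
# Hypothetical pull_image.sh / push_image.sh bodies for the TOSCA operations
# above, using skopeo's dir: transport. Dry run by default.
DRY_RUN="${DRY_RUN:-true}"
run() { if [ "$DRY_RUN" = "true" ]; then echo "$@"; else "$@"; fi; }

# pull_image: copy from the source registry into a local directory
# that can be moved across the airgap as ordinary files.
pull_image() {
  image_name="$1"   # e.g. myrepo/webapp:1.0, passed in by the orchestrator
  run skopeo copy \
    --src-creds "source_user:source_password" \
    "docker://source-registry.com/$image_name" \
    "dir:/tmp/airgap/$image_name"
}

# push_image: copy from the local directory into the destination registry.
push_image() {
  image_name="$1"
  run skopeo copy \
    --dest-creds "dest_user:dest_password" \
    "dir:/tmp/airgap/$image_name" \
    "docker://destination-registry.com/$image_name"
}

pull_image "myrepo/webapp:1.0"
push_image "myrepo/webapp:1.0"
```

<p>In practice you would pass the credentials via the TOSCA inputs or a secrets store rather than hardcoding them as done here for illustration.</p>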
<p>You can even extend this a little more:</p>
<p><strong>Dedicated Image Transfer Node:</strong></p>
<p>Instead of having the <code>pull_image</code> and <code>push_image</code> operations on the registries, we could define a dedicated node for transferring images between registries. This node would represent the transfer logic and could be scaled independently if needed.</p>
<p><strong>Integrating with Helm:</strong></p>
<p>If you're using Helm to manage Kubernetes deployments, you can use Helm hooks to validate that the image transfer has completed before deploying a new version of the application. This way, you can ensure the image is available in the destination registry before starting the deployment process.</p>
<p><strong>SBOM Generation:</strong></p>
<p>You could further enhance the template by adding SBOM generation after the image pull to capture and document all dependencies and vulnerabilities before moving to the destination registry.</p>
<p>By extending the TOSCA template with SBOM generation, you can ensure that your deployments are secure and compliant by tracking all dependencies and vulnerabilities in your container images.</p>
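<p>As an illustration of that enhancement, here is a hedged sketch of an SBOM step after the image pull, assuming the open-source tools <code>syft</code> (SBOM generation) and <code>grype</code> (vulnerability scanning) are available in the connected environment; the image and file names are placeholders. It defaults to a dry run that prints the commands it would execute.</p>

```shell
#!/bin/sh
# Dry-run sketch of generating and scanning an SBOM for a pulled image,
# assuming syft and grype are installed on the connected side.
DRY_RUN="${DRY_RUN:-true}"
run() { if [ "$DRY_RUN" = "true" ]; then echo "$@"; else "$@"; fi; }

generate_sbom() {
  image_name="$1"   # placeholder image reference
  sbom_file="$2"    # placeholder output path
  # Produce a CycloneDX SBOM for the image just pulled...
  run syft "$image_name" -o "cyclonedx-json=$sbom_file"
  # ...then scan that SBOM for known vulnerabilities before the image
  # crosses the airgap into the destination registry.
  run grype "sbom:$sbom_file"
}

generate_sbom "myrepo/webapp:1.0" "webapp-sbom.cdx.json"
```

<p>The resulting SBOM file can travel across the airgap alongside the image, giving the disconnected side a verifiable inventory of what it is about to deploy.</p>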
<h3 id="heading-sbom-tracking-dependencies-and-vulnerabilities">SBOM: Tracking Dependencies and Vulnerabilities</h3>
<p>While SBOMs (like CycloneDX or SPDX) aren't designed to manage orchestration, steps, or dependencies, they provide valuable metadata about software components, their licenses, and their relationships. SBOMs help with security, compliance, and tracking of software assets, but they don't handle deployment.</p>
<p>SBOM formats like CycloneDX and SPDX provide a detailed inventory of components and vulnerabilities in software packages. They focus on the what&mdash;tracking all software components, their versions, dependencies, and potential security vulnerabilities. This transparency is crucial for managing risks and maintaining compliance in modern enterprise environments.</p>
<h4 id="heading-extend-sboms-for-orchestration">Extend SBOMs for Orchestration</h4>
<p>Since an SBOM focuses on describing the "what" (i.e., the components that make up an application or system), we could extend it with "how" information: dependencies and deployment steps. Here's how SBOMs could theoretically be extended for orchestration:</p>
<ol>
<li><p>Component Description in SBOM:</p>
<ul>
<li><p>Use existing SBOM structures to describe software components (e.g., web app, database, libraries).</p>
</li>
<li><p>Extend it by adding fields for deployment order and dependencies between components.</p>
</li>
</ul>
</li>
<li><p>Define Relationships:</p>
<ul>
<li>Introduce a new "orchestration" section in SBOM, similar to TOSCA's relationships or Docker Compose's depends_on. This section would define how different components rely on each other (e.g., app server depends on the database).</li>
</ul>
</li>
<li><p>Deployment Steps:</p>
<ul>
<li>Extend SBOMs to include execution scripts or commands for deploying components, similar to what CSAR or Docker Compose does with their create and run steps.</li>
</ul>
</li>
</ol>
<p>By doing this, SBOM could theoretically handle some orchestration tasks, though it's primarily designed for tracking components rather than deploying them.</p>
<p>Example of an Extended SBOM for airgap deployments:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"bomFormat"</span>: <span class="hljs-string">"CycloneDX"</span>,
  <span class="hljs-attr">"specVersion"</span>: <span class="hljs-string">"1.6"</span>,
  <span class="hljs-attr">"serialNumber"</span>: <span class="hljs-string">"urn:uuid:123e4567-e89b-12d3-a456-426614174000"</span>,
  <span class="hljs-attr">"version"</span>: <span class="hljs-number">1</span>,
  <span class="hljs-attr">"metadata"</span>: {
    <span class="hljs-attr">"timestamp"</span>: <span class="hljs-string">"2024-10-28T12:34:56Z"</span>,
    <span class="hljs-attr">"component"</span>: {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"MyWebApp"</span>,
      <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0"</span>,
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"application"</span>
    }
  },
  <span class="hljs-attr">"components"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"container"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"webapp"</span>,
      <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0"</span>,
      <span class="hljs-attr">"properties"</span>: [
        {
          <span class="hljs-attr">"name"</span>: <span class="hljs-string">"image"</span>,
          <span class="hljs-attr">"value"</span>: <span class="hljs-string">"myrepo/webapp:1.0"</span>
        }
      ]
    },
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"platform"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"registries"</span>,
      <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0"</span>,
      <span class="hljs-attr">"components"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"application"</span>,
          <span class="hljs-attr">"name"</span>: <span class="hljs-string">"source_registry"</span>,
          <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0"</span>,
          <span class="hljs-attr">"supplier"</span>: {
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"docker hub"</span>,
            <span class="hljs-attr">"url"</span>: [
              <span class="hljs-string">"https://source-registry.com"</span>
            ]
          },
          <span class="hljs-attr">"properties"</span>: [
            {
              <span class="hljs-attr">"name"</span>: <span class="hljs-string">"username"</span>,
              <span class="hljs-attr">"value"</span>: <span class="hljs-string">"myuser"</span>
            },
            {
              <span class="hljs-attr">"name"</span>: <span class="hljs-string">"password"</span>,
              <span class="hljs-attr">"value"</span>: <span class="hljs-string">"mypassword"</span>
            }
          ]
        },
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"application"</span>,
          <span class="hljs-attr">"name"</span>: <span class="hljs-string">"destination_registry"</span>,
          <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0"</span>,
          <span class="hljs-attr">"supplier"</span>: {
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"quay.io"</span>,
            <span class="hljs-attr">"url"</span>: [
              <span class="hljs-string">"https://destination-registry.com"</span>
            ]
          },
          <span class="hljs-attr">"properties"</span>: [
            {
              <span class="hljs-attr">"name"</span>: <span class="hljs-string">"username"</span>,
              <span class="hljs-attr">"value"</span>: <span class="hljs-string">"myuser"</span>
            },
            {
              <span class="hljs-attr">"name"</span>: <span class="hljs-string">"password"</span>,
              <span class="hljs-attr">"value"</span>: <span class="hljs-string">"mypassword"</span>
            }
          ]
        }
      ]
    }
  ],
  <span class="hljs-attr">"externalReferences"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"formulation"</span>,
      <span class="hljs-attr">"url"</span>: <span class="hljs-string">"https://orchestrator.com/wf/1234"</span>,
      <span class="hljs-attr">"comment"</span>: <span class="hljs-string">"Workflow for deploying the web application"</span>
    }
  ],
  <span class="hljs-attr">"formulation"</span>: [ {
    <span class="hljs-attr">"workflows"</span>: [
      {
        <span class="hljs-attr">"taskTypes"</span>: [<span class="hljs-string">"deploy"</span>],
        <span class="hljs-attr">"uid"</span>: <span class="hljs-string">"0f8fad5b-d9cb-469f-a165-70867728950e"</span>,
        <span class="hljs-attr">"bom-ref"</span>: <span class="hljs-string">"urn:k1s:webapp"</span>,
        <span class="hljs-attr">"name"</span>: <span class="hljs-string">"deploy"</span>,
        <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Deploy the web application"</span>,
        <span class="hljs-attr">"steps"</span>: [
          {
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"pull"</span>,
            <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Pull the web application image"</span>,
            <span class="hljs-attr">"commands"</span>: [
              {
                <span class="hljs-attr">"executed"</span>: <span class="hljs-string">"pull"</span>,
                <span class="hljs-attr">"properties"</span>: [
                  {
                    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"image"</span>,
                    <span class="hljs-attr">"value"</span>: <span class="hljs-string">"myrepo/webapp:1.0"</span>
                  }
                ]
              }
            ]
          }
        ]
      }
    ]
  }
  ],
  <span class="hljs-attr">"dependencies"</span>: [
    {
      <span class="hljs-attr">"ref"</span>: <span class="hljs-string">"webapp"</span>,
      <span class="hljs-attr">"dependsOn"</span>: [
        <span class="hljs-string">"database"</span>
      ]
    }
  ]
}
</code></pre>
<p>Here, the components are described as they normally would be in an SBOM. We've added a new "formulation" section to describe the deployment workflow, including steps like pulling the image from the source registry. The "dependencies" section specifies that the web application depends on the database component.</p>
<p>By extending SBOMs with orchestration information, we can create a more comprehensive view of software components and their deployment steps. This can help organizations manage complex deployments more effectively, especially in airgapped environments where transparency and control are critical.</p>
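<p>To make the "how" actionable, an orchestrator reading this extended SBOM could turn the <code>dependencies</code> section into a deployment order via a topological sort. A minimal sketch, assuming the simplified shape used above:</p>

```python
import json
from graphlib import TopologicalSorter  # Python 3.9+

sbom = json.loads("""
{
  "components": [{"name": "webapp"}, {"name": "database"}],
  "dependencies": [{"ref": "webapp", "dependsOn": ["database"]}]
}
""")

# Map each component to the components it depends on.
graph = {c["name"]: [] for c in sbom["components"]}
for dep in sbom.get("dependencies", []):
    graph[dep["ref"]] = dep.get("dependsOn", [])

# static_order() emits prerequisites first: database before webapp.
order = list(TopologicalSorter(graph).static_order())
print(order)  # → ['database', 'webapp']
```

<p><code>TopologicalSorter</code> also raises <code>CycleError</code> on circular dependencies, which doubles as a sanity check on the template before any deployment starts.</p>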
<h2 id="heading-adding-the-spices-to-add-flavor-to-the-recipe-sabor">Adding the Spices to Add Flavor to the Recipe: SABOR</h2>
<p>The Proof of Concept (POC) that I have in mind is a tool that helps you move images to private registries in airgapped environments. As the catalog of images grows, the complexity of managing them increases and so does the risk of security vulnerabilities. The tool will leverage the strengths of <strong>standard tools</strong> like CSAR and SBOM to provide a comprehensive solution for managing software packages in airgapped environments: it will help you set up multiple private registries or environments, move images between them, and orchestrate deployments while tracking dependencies and vulnerabilities. Think of it as a tool that eliminates the burden of packaging and moving images in airgapped environments.</p>
<p>In your big enterprise projects, have you noticed multiple teams or providers duplicating the same work and breaking the standardization that could make software packages easier to manage? This tool will help you centralize that work, making software packages easier to manage and update.</p>
<p>How many images of the same software package do you have in your private registry? Commonly, in enterprise projects, you will find multiple copies of the same image across different "tenants" in the same registry. SABOR will help you trim the fat and keep only the necessary images: the perfect recipe!</p>
<h3 id="heading-airflow-in-airgapped-environments-secure-controlled-efficient">Airflow in airgapped environments - secure, controlled, efficient</h3>
<p>The starting point for SABOR is the SBOM. By leveraging SBOM data, SABOR provides a full breakdown of components while allowing users to securely package and distribute software without internet access. This is particularly vital for industries like telecom or defense, where security and compliance are paramount. You can also start from the bottom up: SABOR will help you generate the SBOM from a list of images that you define.</p>
<p><a target="_blank" href="https://cdn.statically.io/img/cdn.rebelion.la/img/airgap/K1s+Airgap+Bundler+Demo-s.mp4">SABOR demo (video)</a></p>
<p>Demo: <a target="_blank" href="https://youtube.com/shorts/4ElqY-0Vtuc?feature=share">https://youtube.com/shorts/4ElqY-0Vtuc?feature=share</a></p>
<p>I decided to go with SBOM as the final solution because it provides a detailed inventory of components and vulnerabilities, making software packages easier to manage and update. SBOMs help identify and manage vulnerabilities in software components, so organizations can address security risks proactively. They can be generated automatically as part of the software build process, keeping component information up to date, and they integrate with other tools and processes, such as vulnerability scanners and compliance tools, to provide a comprehensive view of software security and compliance.</p>
<p>By including orchestration metadata (like dependencies and deployment workflows) in the SBOM itself, you would allow users to not only track and verify components but also orchestrate them across multiple environments (public cloud, private cloud, and airgapped environments) securely and efficiently.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this post, we explored the challenges of moving images to private registries in airgapped environments and discussed some of the alternatives available today. We looked at the strengths and limitations of CSAR and SBOM frameworks and how they could be extended to create a new approach to move images to private registries in airgapped environments. We also discussed the idea of combining the strengths of different tools and frameworks to create a comprehensive solution for managing software packages in airgapped environments.</p>
<p>What do you think about this approach? Do you think it could help solve the challenges of moving images to private registries in airgapped environments? I'd love to hear your thoughts and <a target="_blank" href="https://go.rebelion.la/airgap-feedback">feedback on this idea</a>!</p>
<p>Would you give SABOR a try? Let me know in the comments, and if you have any questions or suggestions, <a target="_blank" href="https://go.rebelion.la/contact-us">feel free to reach out</a>. Go, Rebels! ✊🏼</p>
<p>Be curious, be rebellious, and keep learning! <a target="_blank" href="https://go.rebelion.la/k1s-news">Subscribe to the newsletter</a> for more content on Kubernetes, DevOps, and cloud-native technologies.</p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Anti-Patterns in Airgap Environments]]></title><description><![CDATA[Operating Kubernetes in an airgap environment presents unique challenges that can easily lead to anti-patterns, inefficient practices that hinder productivity and security. Whether it is managing container images or handling security updates, these i...]]></description><link>https://rebelion.la/kubernetes-anti-patterns-in-airgap-environments</link><guid isPermaLink="true">https://rebelion.la/kubernetes-anti-patterns-in-airgap-environments</guid><category><![CDATA[airgap]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[zerotrust]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Fri, 18 Oct 2024 10:42:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729247716567/d5aeee06-5b42-4765-8caf-0ff72d6cec52.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Operating Kubernetes in an <a target="_blank" href="https://rebelion.la/cicd-in-enterprise-and-air-gapped-environments#heading-key-issues-in-enterprise-cicd">airgap environment presents unique challenges</a> that can easily lead to anti-patterns, inefficient practices that hinder productivity and security. Whether it is managing container images or handling security updates, these isolated environments demand tailored solutions. In this post, we'll explore the common anti-patterns teams encounter when <a target="_blank" href="https://rebelion.la/how-to-install-kubernetes-in-airgapped-environments-without-a-container-runtime">deploying Kubernetes in airgap</a> setups and provide practical strategies to overcome them. If your team struggles with maintaining consistent workflows or securing updates, read on to learn how to streamline your airgap Kubernetes operations.</p>
<h2 id="heading-what-is-an-airgap-environment">What is an Airgap Environment?</h2>
<p>An airgap environment refers to a network that is physically isolated from the public internet or other external systems. This setup is typically used in industries with strict security requirements, such as telecom, healthcare, and government, to safeguard sensitive data and critical infrastructure.</p>
<p>While airgap environments provide enhanced security, they also impose restrictions on workflows, updates, and communication, making efficient Kubernetes operations more complex. These limitations can lead to anti-patterns—common pitfalls that can make your systems less effective and harder to manage over time.</p>
<h2 id="heading-1-the-siloed-operator">1. The Siloed Operator</h2>
<p>In a traditional Kubernetes setup, collaboration between DevOps and development teams is key to maintaining seamless operations. However, in airgap environments, operators often end up working in isolated silos, managing configurations, deployments, and updates without input from other teams. This creates bottlenecks and a lack of visibility across the organization. Big projects usually require collaboration across multiple teams and environments, and because ownership and accountability for each environment is distributed among different teams, miscommunication and misalignment are common.</p>
<p>In enterprise environments, the siloed operator anti-pattern can manifest in several ways:</p>
<ul>
<li><strong>Lack of Communication:</strong> Operators work in isolation, leading to misaligned priorities and inconsistent configurations.</li>
<li><strong>Slow Deployment Cycles:</strong> Without cross-team collaboration, deployments take longer, leading to delays and missed opportunities.</li>
<li><strong>Security Gaps:</strong> Siloed operators may overlook critical security updates or misconfigure settings, exposing the system to vulnerabilities.</li>
<li><strong>Operational Inefficiencies:</strong> Manual handoffs and lack of automation result in inefficiencies and errors that could be avoided with better collaboration.</li>
<li><strong>Knowledge Gaps:</strong> Teams miss out on valuable insights and best practices when working in isolation, leading to suboptimal solutions and missed opportunities for improvement.</li>
</ul>
<h3 id="heading-solution">Solution</h3>
<p>Cross-functional teams must be formed, ensuring that all members—from developers to security engineers—are involved in the Kubernetes lifecycle. Using tools like <a target="_blank" href="https://www.gitops.tech/">GitOps</a>, teams can maintain consistency and improve communication across silos, even within the constraints of an airgap environment.</p>
<p>But how can you foster collaboration in an airgap environment? Build bridges between teams: don't break the silos, break the walls. Silos are not the problem; the walls are. Silos are a natural way to organize teams and responsibilities.</p>
<p>Here are some strategies to consider:</p>
<ul>
<li><strong>Cross-Training:</strong> Encourage team members to learn about each other's roles and responsibilities, fostering a culture of collaboration and shared ownership.</li>
<li><strong>GitOps:</strong> Implement GitOps practices to manage configurations and deployments in a version-controlled, collaborative environment. By using Git repositories as the source of truth, teams can work together to maintain consistency and transparency across the organization.</li>
<li><strong>Regular Syncs:</strong> Schedule regular meetings or stand-ups to discuss progress, challenges, and upcoming tasks. This ensures that everyone is on the same page and can address issues proactively.</li>
<li><strong>Automated Workflows:</strong> Use automation tools to streamline workflows and reduce manual handoffs between teams. By automating repetitive tasks, teams can focus on higher-value activities and improve efficiency.</li>
</ul>
<h2 id="heading-2-over-engineered-security-layers">2. Over-engineered Security Layers</h2>
<p>Security is paramount in an airgap environment, but over-complicating it can result in convoluted workflows. Often, teams stack unnecessary layers of security tools and processes, leading to cumbersome configurations that slow down operations and increase the risk of misconfiguration.</p>
<h3 id="heading-solution-1">Solution</h3>
<p>Leverage Kubernetes' built-in security features like <a target="_blank" href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">RBAC</a> (Role-Based Access Control), <a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/network-policies/">network policies</a>, and secrets management. Regularly audit your security layers to eliminate redundancy, ensuring that the environment remains secure but manageable.</p>
<h2 id="heading-3-inconsistent-image-management">3. Inconsistent Image Management</h2>
<p>One of the biggest challenges in an airgap environment is managing container images. Teams often face discrepancies between development and production due to inconsistent versioning or missing dependencies in airgap bundles. This leads to deployment failures or time-consuming troubleshooting.</p>
<h3 id="heading-solution-2">Solution</h3>
<p>Adopt SBOMs (Software Bill of Materials) to track every component of your container images. Use tools like CycloneDX or SPDX to create detailed manifests of the software and dependencies required.</p>
<p>Here are some strategies to improve image management in airgap environments:</p>
<ul>
<li>Automate the bundling and distribution process using Kubernetes Operators to ensure that all dependencies are accounted for before deployment.</li>
<li>Implement a registry proxy to cache images and dependencies locally, reducing the risk of missing components during deployment.</li>
<li>Utilities like <a target="_blank" href="https://github.com/GoogleContainerTools/kaniko">Kaniko</a> or <a target="_blank" href="https://buildah.io">Buildah</a> can help you build images in airgap environments without requiring a Docker daemon, streamlining the image creation process.</li>
<li><a target="_blank" href="https://kubernetes-sigs.github.io/bom/">Kubernetes BOM</a> can help you track dependencies and vulnerabilities in your container images, ensuring that you have a complete picture of your software stack.</li>
<li><a target="_blank" href="https://k1s.sh/airgap">K1s airgap</a> is a package manager for Kubernetes that simplifies the installation and management of Kubernetes components in airgap environments. It provides a CLI and a user-friendly interface for downloading and installing Kubernetes components without internet access, and it lets you orchestrate the installation of Kubernetes components in airgap environments, keeping your clusters up to date and secure.</li>
</ul>
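<p>Generating such a manifest bottom-up from a plain list of image references can be sketched as below; this emits a minimal CycloneDX-shaped document for illustration, not a complete or schema-validated BOM:</p>

```python
import json
import uuid

def sbom_from_images(images):
    """Build a minimal CycloneDX-style document from a list of image refs.

    Assumes each ref carries its tag after the last colon; registry hosts
    with ports and untagged refs would need real reference parsing.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.6",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "components": [
            {
                "type": "container",
                "name": ref.rsplit(":", 1)[0],
                "version": ref.rsplit(":", 1)[1] if ":" in ref else "latest",
                "properties": [{"name": "image", "value": ref}],
            }
            for ref in images
        ],
    }

bom = sbom_from_images(["myrepo/webapp:1.0", "myrepo/db:12.3"])
print(json.dumps(bom, indent=2)[:120])
```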
<h2 id="heading-4-the-update-lockdown">4. The Update Lockdown</h2>
<p>Airgap environments typically delay software updates to ensure stability, but this can become an anti-pattern if critical security patches or updates are missed. Over time, this creates a backlog of necessary changes that may compromise the security and functionality of your Kubernetes clusters.</p>
<h3 id="heading-solution-3">Solution</h3>
<p>Establish an automated, selective update system that allows you to prioritize critical patches without exposing your system to unnecessary risks. Implement automated pipelines for downloading and applying updates from trusted internal sources, ensuring that your system remains up to date without disrupting operations.</p>
<p>Here are some strategies to consider:</p>
<ul>
<li><strong>Automated Patch Management:</strong> Use tools like <a target="_blank" href="https://k1s.sh/airgap">K1s airgap</a> to automate the installation of updates and patches for your applications and Kubernetes clusters. This ensures that your system remains secure and up to date without manual intervention.</li>
<li><strong>Selective Updates:</strong> Prioritize critical security patches and updates, ensuring that your system remains secure without introducing unnecessary risks.</li>
<li><strong>Internal Package Repositories:</strong> Set up internal package repositories to host trusted updates and patches, allowing you to control the source of your software components. Build a pipeline that automatically checks individual components or packages for vulnerabilities and updates them as needed.</li>
<li><strong>Common Vulnerabilities and Exposures (<a target="_blank" href="https://www.redhat.com/en/topics/security/what-is-cve">CVE</a>) Monitoring:</strong> Regularly monitor <a target="_blank" href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog">CVE databases</a> to stay informed about security vulnerabilities and apply patches promptly to mitigate risks.</li>
<li><strong>Common Image Scanning Tools:</strong> Use image scanning tools like <a target="_blank" href="https://trivy.dev">Trivy</a> or <a target="_blank" href="https://quay.github.io/clair/">Clair</a> to identify vulnerabilities in your container images and take action to remediate them before deployment.</li>
</ul>
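<p>The selective-update idea can be reduced to a small policy filter over scan results. The input shape below is a simplified assumption, not the exact JSON schema of Trivy or Clair:</p>

```python
def select_urgent(findings, severities=("CRITICAL", "HIGH")):
    """Keep only the findings whose severity is on the patch-now list."""
    return [f for f in findings if f.get("severity", "").upper() in severities]

findings = [
    {"id": "CVE-2024-0001", "severity": "CRITICAL"},
    {"id": "CVE-2024-0002", "severity": "LOW"},
    {"id": "CVE-2024-0003", "severity": "HIGH"},
]
print([f["id"] for f in select_urgent(findings)])
# → ['CVE-2024-0001', 'CVE-2024-0003']
```

<p>A pipeline feeding from an internal mirror of the CVE catalog can run this filter on every bundle before it crosses the airgap, so only vetted, urgent patches enter the update queue.</p>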
<h2 id="heading-5-manual-process-reliance">5. Manual Process Reliance</h2>
<p>Without internet access, many teams revert to manual processes for deploying applications or making configuration changes in Kubernetes clusters. While this may work initially, it quickly becomes unsustainable and prone to errors, leading to inconsistent configurations and system drift.</p>
<h3 id="heading-solution-4">Solution</h3>
<p>Automate as much as possible. Use tools like <a target="_blank" href="https://helm.sh/">Helm</a> for package management, <a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/">Argo CD</a> for continuous delivery, and <a target="_blank" href="https://kubectl.docs.kubernetes.io/installation/kustomize/">Kustomize</a> for managing configuration layers. Even in an airgap environment, you can automate many aspects of your workflow to maintain consistency and reduce errors.</p>
<h2 id="heading-6-undocumented-workarounds">6. Undocumented Workarounds</h2>
<p>In the heat of solving problems, teams often develop quick workarounds to navigate the unique constraints of airgap environments. However, without documentation, these ad-hoc solutions create technical debt, leaving the system brittle and difficult to troubleshoot.</p>
<h3 id="heading-solution-5">Solution</h3>
<p>Ensure that every workaround and process change is properly documented. Use tools like <a target="_blank" href="https://docusaurus.io/docs">Docusaurus</a> or <a target="_blank" href="https://www.mkdocs.org">MkDocs</a> to create and maintain internal documentation that can be easily shared across teams. This makes your airgap Kubernetes system more resilient and easier to manage in the long run. Prepare SOPs (Standard Operating Procedures) for common tasks so that documentation for troubleshooting and maintenance stays consistent and current.</p>
<p>Here are some strategies to consider:</p>
<ul>
<li><strong>Runbook Automation:</strong> Create runbooks for common tasks and procedures, ensuring that every team member has access to up-to-date documentation for troubleshooting and maintenance.</li>
<li><strong>Knowledge Sharing:</strong> Encourage team members to share their knowledge and best practices through internal wikis, chat channels, or other collaboration tools. This helps prevent silos and ensures that everyone has access to the information they need to be successful.</li>
<li><strong>Documentation as Code:</strong> Treat documentation as code, storing it in version-controlled repositories alongside your source code. This ensures that documentation is always up to date and aligned with your system's configuration. Use tools like <a target="_blank" href="https://www.gitbook.com">GitBook</a>, <a target="_blank" href="https://docusaurus.io/docs/docs-introduction#docs-only-mode">Docusaurus</a>, MkDocs, or <a target="_blank" href="https://readthedocs.org">Read the Docs</a> to create and maintain documentation in a collaborative, version-controlled environment.</li>
</ul>
<h2 id="heading-7-lack-of-testing-parity">7. Lack of Testing Parity</h2>
<p>Testing in airgap environments is often overlooked due to infrastructure limitations. Without testing environments that mimic production airgap conditions, teams can end up deploying untested code, leading to costly failures.</p>
<h3 id="heading-solution-6">Solution</h3>
<p>Set up mirrored environments that simulate the airgap restrictions as closely as possible. This includes restricted access, isolated networks, and offline testing capabilities. You can also consider using tools like <a target="_blank" href="https://rebelion.la/the-easiest-kubernetes-installations-ever">Minikube, K0s, K3s, or microk8s</a> to create lightweight Kubernetes clusters for local testing before deploying to the airgap environment or <a target="_blank" href="https://rebelion.la/install-kubernetes-docker-offline">Vanilla Kubernetes clusters in airgap environments</a>.</p>
<p>Use simulation tools like <a target="_blank" href="https://kind.sigs.k8s.io">Kind</a> or <a target="_blank" href="https://k3d.io">K3d</a> to create lightweight Kubernetes clusters or <a target="_blank" href="https://www.vcluster.com/docs/">virtual clusters</a> for local testing. These tools allow you to simulate airgap conditions and test your applications in a controlled environment before deploying them to production.</p>
<h2 id="heading-8-overburdened-devops-teams">8. Overburdened DevOps Teams</h2>
<p>In airgap environments, the responsibility for managing Kubernetes often falls entirely on the DevOps team. This can lead to burnout, delays, and increased errors as the team becomes a bottleneck for every deployment or update.</p>
<h3 id="heading-solution-7">Solution</h3>
<p>Empower your development teams to take on more responsibility by adopting a self-service model for Kubernetes management. This can be achieved by providing clear guidelines, templates, and automation tools that allow developers to handle deployments without requiring constant oversight from the DevOps team.</p>
<p>Culture is key. Encourage a culture of collaboration and shared responsibility, where every team member is empowered to contribute to the success of the project. By fostering a culture of ownership and accountability, you can distribute the workload more evenly and prevent burnout among your DevOps team.</p>
<h2 id="heading-9-misaligned-metrics">9. Misaligned Metrics</h2>
<p>In an airgap setup, teams may focus on the wrong metrics, such as deployment speed, rather than more critical factors like uptime, security, and system stability. This misalignment leads to poor decision-making and missed opportunities for improvement.</p>
<h3 id="heading-solution-8">Solution</h3>
<p>Reevaluate your success metrics to align with the specific needs of your airgap environment. Focus on <a target="_blank" href="https://www.metricstream.com/insights/5-steps-building-operational-resilience.htm">operational resilience</a>, security compliance, and system availability, rather than speed of deployment or other metrics that might be more applicable in non-airgap setups.</p>
<p>Here are some strategies to consider:</p>
<ul>
<li><strong>Service Level Objectives (SLOs):</strong> Define clear SLOs for your Kubernetes clusters, focusing on availability, performance, and security. Use these metrics to guide your decision-making and prioritize improvements that align with your organization's goals.</li>
<li><strong>Key Performance Indicators (KPIs):</strong> Identify KPIs that reflect the health and performance of your Kubernetes clusters, such as uptime, response time, and security compliance. Regularly monitor these metrics and use them to inform your decision-making and improvement efforts.</li>
<li><strong>Create chaos:</strong> Use tools like <a target="_blank" href="https://chaos-mesh.org">Chaos Mesh</a>, <a target="_blank" href="https://github.com/litmuschaos/litmus">Litmus</a>, or <a target="_blank" href="https://www.gremlin.com">Gremlin</a> to simulate failures and test the resilience of your Kubernetes clusters. By introducing <a target="_blank" href="https://landscape.cncf.io/guide#observability-and-analysis--chaos-engineering">controlled chaos</a> into your environment, you can identify weaknesses and improve your system's reliability and availability.</li>
</ul>
<h2 id="heading-10-forgotten-dependencies">10. Forgotten Dependencies</h2>
<p>In an airgap environment, it's easy to overlook key dependencies when creating containers for deployment. Missing a crucial library or package can lead to significant downtime, especially when updates or patches need to be applied manually.</p>
<h3 id="heading-solution-9">Solution</h3>
<p>Use detailed manifests and dependency management tools to make sure every required library and package is bundled with your container images. Creating comprehensive software bills of materials (SBOMs) and automating the packaging process ensures that nothing is left out of your airgap environment.</p>
<p>Use tools like <a target="_blank" href="https://k1s.sh">K1s airgap</a> to manage dependencies and ensure that your container images are complete and up to date. By automating the bundling and distribution process, you can reduce the risk of missing dependencies and improve the reliability of your deployments.</p>
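<p>As a rough sketch of the SBOM step, assuming a generator like <a target="_blank" href="https://github.com/anchore/syft">Syft</a> is available in your build environment (the image name and file paths below are placeholders):</p>
<pre><code class="lang-bash"># Generate an SPDX SBOM for the image before it crosses the airgap
syft registry.example.com/myapp:1.2.3 -o spdx-json &gt; myapp-1.2.3.sbom.json

# Later, scan the SBOM offline for known vulnerabilities, e.g. with Grype
grype sbom:./myapp-1.2.3.sbom.json
</code></pre>
<p>Generating the SBOM on the connected side and shipping it alongside the image gives you an auditable inventory of every dependency, so missing libraries surface before deployment rather than during an outage.</p>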
<h2 id="heading-airgap-challenges-are-solvable">Airgap Challenges Are Solvable</h2>
<p>While airgap environments present unique constraints, these challenges are surmountable with the right approach. By identifying common Kubernetes anti-patterns and adopting targeted solutions, your team can avoid the pitfalls that often slow down progress and introduce security vulnerabilities. It's time to rethink how you approach Kubernetes in airgap environments: streamlining workflows, improving security, and automating processes will make all the difference in keeping your operations running smoothly and efficiently.</p>
<p>What anti-patterns have you experienced in airgap environments? Share your thoughts and solutions below—let's keep the conversation going!</p>
<p>Go Rebels! ✊🏻</p>
]]></content:encoded></item><item><title><![CDATA[File Transfer in the World of Slim Containers Using cURL]]></title><description><![CDATA[In the world of containers, every byte counts, the size of the image matters, and efficiency is king. But what happens when you need to transfer files out of the pods? This seemingly simple task can quickly become a headache, revealing the hidden str...]]></description><link>https://rebelion.la/solve-the-struggles-of-file-transfer-in-the-world-of-slim-containers-with-curl</link><guid isPermaLink="true">https://rebelion.la/solve-the-struggles-of-file-transfer-in-the-world-of-slim-containers-with-curl</guid><category><![CDATA[Microservices]]></category><category><![CDATA[containers]]></category><category><![CDATA[curl]]></category><dc:creator><![CDATA[La Rebelion Labs]]></dc:creator><pubDate>Mon, 19 Aug 2024 12:07:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724068679557/bc1dd8f5-3e74-42e8-b519-ca6dcccb12e9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of containers, every byte counts, the size of the image matters, and efficiency is king. But what happens when you need to transfer files out of the pods? This seemingly simple task can quickly become a headache, revealing the hidden struggles of working with slim containers.</p>
<p>Containers are all about efficiency—small, lightweight, and quick to start. But this focus on minimalism comes with trade-offs. One of the most significant challenges that often catches people off guard is the complexity of performing simple tasks, like transferring files between pods or external systems that are not part of the container orchestration. Examples include logs, generated data files for machine learning models or troubleshooting, or configuration files that need to be shared across multiple pods.</p>
<h2 id="heading-lightweight-containers-heavy-frustrations">Lightweight Containers, Heavy Frustrations</h2>
<p>When you're working with virtual machines (VMs), transferring files is usually a breeze. Tools like <code>scp</code> or <code>ftp</code> are readily available and make it easy to move data from one place to another. But in the world of containers, especially those built with minimalistic images, these conveniences are stripped away.</p>
<p>Why? Because these tools add weight to the container. And in a containerized environment, size matters—a lot. Every extra megabyte can slow down the startup time, increase the attack surface, and make scaling more cumbersome. So, in the name of efficiency, many container images leave out non-essential utilities, including those that make file transfers easy.</p>
<p>But what happens when you need to transfer files between pods? This is where the frustration sets in. Without <code>scp</code> or similar tools, you're often left scratching your head, trying to figure out how to get your files from point A to point B without compromising the lightweight nature of your containers.</p>
<h3 id="heading-the-common-workaround-a-two-step-process">The Common Workaround: A Two-Step Process</h3>
<p>One common workaround involves copying the file from the source pod to your local machine, and then from your local machine to the destination pod. It's a clunky, time-consuming process that feels like a step backward in a world where automation and efficiency are king.</p>
<p>But this two-step process quickly becomes a bottleneck, adding unnecessary complexity and slowing down your pipeline.</p>
<h3 id="heading-a-solution-streamlining-pod-to-pod-transfers">A Solution: Streamlining Pod-to-Pod Transfers</h3>
<p>Fortunately, there are solutions to this problem that allow you to maintain the efficiency of your containers without sacrificing functionality.</p>
<ol>
<li><p><strong>Using Kubernetes Volumes:</strong> One of the most straightforward solutions is to leverage Kubernetes volumes. By attaching a shared volume to multiple pods, you can enable them to read and write to the same storage, effectively bypassing the need for direct pod-to-pod transfers. While this approach works well for certain use cases, it might not be ideal for temporary or small-scale transfers.</p>
</li>
<li><p><strong>Kubernetes Jobs and Init Containers:</strong> Another approach is to use Kubernetes Jobs or Init Containers. You can create a Job or Init Container with the necessary tools for the file transfer, perform the transfer, and then clean up afterward. This method ensures that your main application containers remain lean, while still allowing for more complex operations when needed.</p>
</li>
<li><p><strong>Custom Scripts and Tools:</strong> If you need a more dynamic solution, consider writing custom scripts that use <code>kubectl cp</code> or even third-party tools designed for this purpose. These scripts can be triggered as part of your pipeline, automating the transfer process while keeping your containers streamlined.</p>
</li>
<li><p><strong>Service Mesh Integration:</strong> For those looking to go a step further, integrating a service mesh like Istio can help manage and automate communication between pods, including secure file transfers. This approach requires more setup and knowledge of service mesh architectures but offers a robust solution for complex environments.</p>
</li>
</ol>
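<p>For reference, option 3 often boils down to a short script like the following. Pod names, namespaces, and paths are placeholders; also note that <code>kubectl cp</code> itself depends on a <code>tar</code> binary being present inside the container image, which slim images may not include:</p>
<pre><code class="lang-bash"># Two-hop copy between pods, via the machine running kubectl
kubectl cp my-namespace/source-pod:/data/report.csv ./report.csv
kubectl cp ./report.csv my-namespace/dest-pod:/data/report.csv
rm ./report.csv  # remove the intermediate local copy
</code></pre>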
<h2 id="heading-the-path-forward-a-simple-solution-with-curl-and-tar">The Path Forward - A Simple Solution with <code>curl</code> and <code>tar</code></h2>
<p><strong>Are these solutions still too complex</strong> for your needs? I've got you covered. Here's a simple approach that lets you transfer files to and from external systems with SSH access, using nothing more than <code>curl</code> and <code>tar</code>: no additional tools or complex configuration required.</p>
<h3 id="heading-step-1-create-a-tarball">Step 1: Create a Tarball</h3>
<p>First, create a tarball of the files you want to transfer. You can do this by running the following command in the source pod:</p>
<pre><code class="lang-bash">tar -czf - /path/to/files &gt; files.tar.gz
</code></pre>
<p>This command writes a compressed tarball of the files to <code>files.tar.gz</code>. Compression (the <code>-z</code> flag) is optional, but recommended to reduce the size of the transfer.</p>
<h3 id="heading-step-2-transfer-the-tarball-achieving-sftp-like-functionality-with-curl">Step 2: Transfer the Tarball - Achieving SFTP-like functionality with <code>curl</code></h3>
<p>Here's how you can upload and download a file using SFTP with <code>curl</code>:</p>
<h4 id="heading-uploading-a-file-with-sftp">Uploading a File with SFTP</h4>
<pre><code class="lang-bash">curl -u &lt;USERNAME&gt;:&lt;PASSWORD&gt; -T &lt;PATH_TO_FILE&gt; sftp://&lt;SFTP_SERVER&gt;/&lt;REMOTE_DIRECTORY&gt;/&lt;TARGET_FILE_NAME&gt;
</code></pre>
<ul>
<li><p><code>-u &lt;USERNAME&gt;:&lt;PASSWORD&gt;</code>: Your SFTP credentials.</p>
</li>
<li><p><code>-T &lt;PATH_TO_FILE&gt;</code>: The file you want to upload.</p>
</li>
<li><p><code>sftp://&lt;SFTP_SERVER&gt;/&lt;REMOTE_DIRECTORY&gt;/&lt;TARGET_FILE_NAME&gt;</code>: The SFTP server URL, along with the remote directory and the target file name where the file will be uploaded.</p>
</li>
</ul>
<h4 id="heading-downloading-a-file-with-sftp">Downloading a File with SFTP</h4>
<p>To download a file using SFTP with <code>curl</code>, use the <code>-O</code> option.</p>
<pre><code class="lang-bash">curl -u &lt;USERNAME&gt;:&lt;PASSWORD&gt; -O sftp://&lt;SFTP_SERVER&gt;/&lt;REMOTE_DIRECTORY&gt;/&lt;FILE_NAME&gt;
</code></pre>
<ul>
<li><p><code>-u &lt;USERNAME&gt;:&lt;PASSWORD&gt;</code>: Your SFTP credentials.</p>
</li>
<li><p><code>-O</code>: Saves the file with its original name.</p>
</li>
<li><p><code>sftp://&lt;SFTP_SERVER&gt;/&lt;REMOTE_DIRECTORY&gt;/&lt;FILE_NAME&gt;</code>: The SFTP server URL and the path to the file you want to download.</p>
</li>
</ul>
<h3 id="heading-example-commands">Example Commands:</h3>
<ul>
<li><strong>Upload Example:</strong></li>
</ul>
<pre><code class="lang-bash">curl -u user:password -T /<span class="hljs-built_in">local</span>/path/to/file.txt sftp://example.com/remote/path/file.txt
</code></pre>
<ul>
<li><strong>Download Example:</strong></li>
</ul>
<pre><code class="lang-bash">curl -u user:password -O sftp://example.com/remote/path/file.txt
</code></pre>
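<p>The two steps can also be combined into a single pipeline: stream the tarball from <code>tar</code> straight into <code>curl</code> using <code>-T -</code>, which tells <code>curl</code> to upload from stdin. The server and paths below are placeholders:</p>
<pre><code class="lang-bash">tar -czf - /path/to/files | curl -u user:password -T - sftp://example.com/remote/path/files.tar.gz
</code></pre>
<p>This avoids writing the tarball inside the pod at all, which matters when the container filesystem is read-only or short on space.</p>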
<p><strong>Notes:</strong></p>
<ul>
<li><p>Make sure the <code>curl</code> version you are using is built with SFTP support, as not all <code>curl</code> versions support SFTP out of the box.</p>
</li>
<li><p>If you need to specify a different port (other than the default port 22 for SFTP), you can do so by appending <code>:port_number</code> to the server address like <code>sftp://example.com:2222/path/to/file.txt</code>.</p>
</li>
</ul>
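<p>To check whether your <code>curl</code> build supports SFTP, list its supported protocols:</p>
<pre><code class="lang-bash">curl -V | grep -i protocols  # look for "sftp" in the Protocols line
</code></pre>
<p>If <code>sftp</code> does not appear, you will need a <code>curl</code> build linked against an SSH library such as libssh2.</p>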
<p>This approach lets you handle basic SFTP operations directly from the command line using <code>curl</code>. While it might not be as feature-rich as dedicated SFTP clients, it provides a simple and efficient way to transfer files without the need for additional tools or complex configurations.</p>
<h2 id="heading-embrace-the-challenge">Embrace the Challenge</h2>
<p>It's easy to get frustrated when something as seemingly simple as transferring a file becomes a challenge. But it's important to remember that the limitations of containerization are a byproduct of its strengths. The very reason containers are so powerful—lightweight, efficient, scalable—is also why they sometimes feel restrictive.</p>
<p>In the container world, size truly does matter, but so does creativity. The limitations you face today are opportunities to innovate and streamline your processes. By understanding the pain points and exploring the solutions available, you can keep your containers lean, mean, and ready to scale—without getting bogged down by the small stuff.</p>
<p>So, the next time you're faced with the challenge of transferring files between pods, remember: there's always a solution that keeps your containers efficient and your workflow smooth. Embrace the challenge, and you'll find that even in the smallest containers, you can pack a powerful punch.</p>
<p>Happy containerizing, Go Rebels! ✊🏽</p>
]]></content:encoded></item></channel></rss>