I Promised to Explain Why MCPs Made Me Cry. Here's What Finally Broke Me.

I told you MCPs brought me to tears. Now let me show you why a 'universal connector for AI' made a grown woman emotional.

MCP, Claude, AI Development, Building in Public, Claude Code

Published: February 3, 2026 - 12 min read

Back in December, I wrote a very vulnerable blog post confessing that I had discovered something in Claude that brought me close to tears. I remember sitting there, wondering if I was even allowed to write something like that on the internet.

I promised you I would eventually provide a deeper breakdown of what MCPs really are.

This is me keeping that promise.

But I am not just going to throw technical definitions at you. I want you to feel the same excitement about MCPs that I felt when I first discovered what they were. So let me start by telling you exactly what broke me.


The Moment Everything Clicked

I was working on my 12-day challenge, trying to build something ambitious. To achieve what I wanted, I had to learn what MCPs were and build a local MCP server of my own.

This discovery came with a refreshing realization: I could give Claude access to ANY system.

Not just files on my computer. Not just the code in my project. Any system I could program a connection to.

Databases. APIs. Smart home devices. My calendar. My email. Literally anything.

But here is what really got me: I was in control of what changes it could make. I was not handing Claude the keys to my entire digital life and hoping for the best. I was defining exactly what it could read, what it could write, and what it could execute. Precise control. Surgical access.

And then I realized the final piece.

This was not limited to Claude.

The MCP server I was building would work with any LLM client that supports the protocol. Claude Desktop, ChatGPT Desktop, Cursor, Windsurf, Gemini. One connection. Multiple AI systems.

I had made an agentic system portable.

That is when I cried.

Not because I was sad. Because I suddenly saw no limits. I could build AI agents that actually do things in the real world, connect them to any system I needed, and take them with me to whatever AI platform I wanted to use.

The possibilities felt infinite.


What This Series Is About

This is Part 1 of a 4-part series called MCPs for Humans. I am writing it in the same spirit as my Tokenomics for Humans series. My goal is to take something technical and explain it in plain English so that you actually understand it, not just nod along pretending you do.

By the end of this series, you will understand:

  • Part 1 (this post): What MCPs are and why they matter
  • Part 2: The three superpowers MCPs give AI (Tools, Resources, Prompts)
  • Part 3: How MCPs make my Human-to-Human Bridge prediction possible
  • Part 4: A real MCP I built (meet Alex Bennett, my LinkedIn strategist)

If you have ever felt like AI is impressive but limited in what it can actually do for you, this series will change that.

Let's start with why MCPs even exist.


The Problem MCPs Solve (The N x M Nightmare)

Before MCPs existed, the AI industry had a massive headache. Engineers call it the "N x M integration problem." Let me explain it in human terms.

Imagine you have a bunch of different AI applications:

  • Claude (from Anthropic)
  • ChatGPT (from OpenAI)
  • Gemini (from Google)
  • Cursor (the AI coding tool)
  • And dozens more

Now imagine you have a bunch of different tools and data sources you want AI to access:

  • GitHub (your code)
  • Slack (your messages)
  • Your database
  • Your calendar
  • Your email
  • Your file system
  • And hundreds more

Here is the nightmare: Every single connection required custom code.

If you wanted Claude to access your GitHub, someone had to write specific code for that. If you wanted ChatGPT to access the same GitHub, someone had to write different specific code for that. If you wanted Gemini to access it too, more custom code.

Now multiply that by every tool and every AI application.

If you have 10 AI applications and 50 tools, you need 500 custom integrations. If you have 20 AI applications and 100 tools, you need 2,000 custom integrations.

This is the N x M problem. N applications times M tools: the number of integrations multiplies every time you add a new app or a new tool.

Here is what this looked like in practice:

  • Every data source was implemented differently
  • Every company building AI applications duplicated the same work
  • There was no standard way for AI to discover what tools were available
  • Security was inconsistent and often an afterthought
  • If you switched AI providers, all your integrations broke

It was chaos.


Enter MCP: The USB-C of AI

MCP stands for Model Context Protocol. It is an open standard developed by Anthropic that creates a universal way for AI applications to connect to external tools and data sources.

The best analogy I have heard is this: MCP is like USB-C for AI.

Think about the charging cable chaos we lived through for years. iPhones had Lightning. Androids had Micro USB. Laptops had their own proprietary connectors. You needed a drawer full of cables just to charge your devices.

USB-C was designed to fix this. One universal connector that works across phones, laptops, tablets, headphones, and gaming consoles. The transition took years (iPhones only adopted USB-C in 2023), but the idea was always the same: stop building custom connectors for every device.

MCP does the same thing for AI.

Instead of building custom integrations for every AI application, tool developers build one MCP server. That server works with any AI application that supports MCP.

Instead of building custom integrations for every tool, AI application developers implement MCP support once. Then their application works with any MCP server.

The N x M problem becomes an N + M solution.

10 AI applications + 50 tools = 60 implementations (instead of 500). 20 AI applications + 100 tools = 120 implementations (instead of 2,000).
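
If you want to see that arithmetic as code, here it is as a toy Python calculation. Nothing MCP-specific, just the counting:

```python
# Toy comparison of integration counts before and after a shared protocol.
def integrations_without_mcp(n_apps: int, m_tools: int) -> int:
    # Every AI application needs its own custom integration with every tool.
    return n_apps * m_tools

def integrations_with_mcp(n_apps: int, m_tools: int) -> int:
    # Each application implements MCP once; each tool ships one MCP server.
    return n_apps + m_tools

for n, m in [(10, 50), (20, 100)]:
    print(f"{n} apps x {m} tools: "
          f"{integrations_without_mcp(n, m)} custom integrations vs "
          f"{integrations_with_mcp(n, m)} MCP implementations")

# Output:
# 10 apps x 50 tools: 500 custom integrations vs 60 MCP implementations
# 20 apps x 100 tools: 2000 custom integrations vs 120 MCP implementations
```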

That is the magic. One universal protocol. Everything connects.


How MCP Actually Works (The Architecture)

Now let me show you how this actually works under the hood. Do not worry. I am going to keep this simple.

MCP uses what engineers call a "client-server architecture." There are three main players:

1. The Host (Where You Talk to AI)

The host is the application where you actually interact with the AI. This is the thing you open on your computer and type into.

Examples of hosts:

  • Claude Desktop (the app you download from Anthropic)
  • ChatGPT Desktop (OpenAI's desktop app)
  • Cursor (the AI-powered code editor)
  • Windsurf (another AI coding tool)

The host is your home base. It is where the conversation happens. When you type a message, you are typing it into a host.

The host's job: Manage everything. It decides which MCP servers the AI can connect to. It asks for your permission before the AI takes actions. It is your first layer of security.
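
To make that concrete: with Claude Desktop, you tell the host which servers to launch by listing them in a small JSON config file called claude_desktop_config.json. Here is a rough sketch of that configuration, written as Python that prints the equivalent JSON. The server name and path are placeholders I made up, and the config file's exact location depends on your operating system.

```python
import json

# Sketch of a host configuration. The host reads a list of MCP servers
# and launches each one as a local process when it starts up.
# "my-notes", the command, and the path are all example values.
config = {
    "mcpServers": {
        "my-notes": {
            "command": "python",
            "args": ["/path/to/my_notes_server.py"],
        }
    }
}

# Claude Desktop reads a structure like this from claude_desktop_config.json.
print(json.dumps(config, indent=2))
```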

2. The Client (The Translator)

Inside every host, there are one or more MCP clients. Think of clients as translators.

When you connect to an MCP server, the client handles all the communication between the host and that server. It speaks the MCP language so the host does not have to worry about the technical details.

Here is the key detail: Each client maintains a one-to-one relationship with a specific server. If you connect to three different MCP servers, you have three different clients running inside your host, each talking to its respective server.

The client's job: Translate. Format messages correctly. Handle the back-and-forth communication. Make sure everything follows the MCP protocol.
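
You will almost never write a client yourself, because the host does this part for you. But to make the one-client-per-server idea concrete, here is a rough sketch using the official MCP Python SDK (the mcp package). Treat the exact imports and calls as an approximation of the current SDK, and the server path as a placeholder.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def connect_and_list(command: str, args: list[str]) -> None:
    # One client per server: launch the server process, open a session,
    # run the MCP handshake, then ask the server what it can do.
    params = StdioServerParameters(command=command, args=args)
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

if __name__ == "__main__":
    # A host connected to three servers would run three sessions like this one.
    asyncio.run(connect_and_list("python", ["/path/to/my_notes_server.py"]))
```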

3. The Server (The Superpower)

This is where the magic happens.

An MCP server is a lightweight program that exposes specific capabilities to the AI. It is the bridge between the AI and whatever external system you want to connect.

Examples of what MCP servers can do:

  • Give AI access to your file system (read and write files)
  • Connect AI to your GitHub repositories
  • Let AI query your database
  • Allow AI to send emails on your behalf
  • Enable AI to control smart home devices
  • Anything else you can program

The server's job: Provide capabilities. When the AI asks "what can you do?", the server responds with a list of tools, data sources, and shortcuts it offers. When the AI wants to use one of those capabilities, the server executes it.
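
To show how small a server can be, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK. The server name and the single tool are examples I invented; a real server would wrap whatever system you actually care about. This is also the kind of file the host configuration sketched earlier would point at.

```python
from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one capability. Name and tool are examples.
mcp = FastMCP("my-notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Search my local notes for a phrase."""
    # A real implementation would hit your file system, database, or API.
    return f"Pretend search results for: {query}"

if __name__ == "__main__":
    # Serves over stdio by default, which is how a local host launches it.
    mcp.run()
```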

How They Work Together

Let me walk you through a real example.

Scenario: You are using Claude Desktop, and you want Claude to help you create social media graphics in Canva.

  1. You install a Canva MCP server on your computer
  2. You configure Claude Desktop to connect to that server
  3. Claude Desktop (the host) creates a client that connects to the Canva server
  4. When you start a conversation, Claude can now see that a Canva server is available
  5. You ask Claude: "What templates do I have saved in my Canva account?"
  6. Claude asks the Canva server (through the client) to fetch your saved templates
  7. The server retrieves the information from Canva and sends it back
  8. Claude reads the response and shows you your available templates
  9. You might then say: "Create an Instagram post using my brand template with the headline 'New Product Launch'"
  10. Claude asks for your permission (because creating a design is a write action)
  11. You approve, and the server creates the design in your Canva account

That entire flow happens through MCP. The server provides the capability. The client handles the communication. The host manages everything and keeps you in control.
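
If you are curious what steps 5 through 11 look like at the protocol level, here is a rough client-side sketch using the same Python SDK. The command and the tool names are placeholders I made up for illustration; the real Canva server exposes its own tools under its own names, and in practice the host performs these calls (and the permission check) for you.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def canva_flow() -> None:
    # Hypothetical: launch a Canva MCP server locally. The command and the
    # tool names below are placeholders, not the real server's interface.
    params = StdioServerParameters(command="canva-mcp-server", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Steps 6-8: fetch your saved templates through the server.
            templates = await session.call_tool("list_templates", {})
            print(templates)

            # Steps 9-11: after you approve, ask the server to create
            # a design from your brand template.
            await session.call_tool("create_design", {
                "template": "brand-template",
                "headline": "New Product Launch",
            })

if __name__ == "__main__":
    asyncio.run(canva_flow())
```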


Why This Made Me Emotional

Now you understand the technical architecture. Let me connect it back to why I cried.

The universality. I realized I could build a server that connects to anything. My database. My file system. My APIs. My smart home (which I do not have yet, but 100% will in the future). The limit is my imagination, not the technology.

The control. I am not giving AI unrestricted access. I define exactly what the server can do. If I only want Claude to read files but not write them, I build the server that way. If I want it to query my database but not delete anything, I control that. The power is in my hands.

The portability. The server I build works with Claude today. If tomorrow I want to use ChatGPT or Gemini or some AI that does not exist yet, my server still works. I am not locked into one ecosystem. My investment in building MCP servers travels with me.

The agentic potential. This is what really got me. I have been building AI agents for months. They have personalities. They have specific roles. But they were always somewhat limited in what they could actually do in the real world. MCPs remove that limitation. My agents can now take real actions, access real systems, and produce real results.

When all of that clicked at once, I understood why people keep saying AI is going to change everything.

Because now it can actually do everything.


The Industry Is Betting Big on This

I want you to understand that MCPs are not some niche technology that only a few developers care about. This is becoming infrastructure.

In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation. The co-founders include Anthropic, Block (the company behind Square and Cash App), and OpenAI. Supporting members include Google, Microsoft, AWS, Cloudflare, and Bloomberg.

Read that list again. The biggest names in tech are all backing MCP.

In March 2025, OpenAI announced it was adopting MCP and rolling out support across its products, including ChatGPT Desktop. Sam Altman said: "People love MCP and we are excited to add support across our products."

What this means for you: The MCP skills and understanding you develop now will be valuable across the entire AI ecosystem. You are not betting on one company or one platform. You are learning a standard that the industry has collectively agreed upon.

This is like learning HTML in 1995. Or learning to build mobile apps in 2008. The window is open.


What Is Coming Next

Now you understand what MCPs are. You know the N x M problem they solve. You understand the architecture (hosts, clients, servers). You see why the industry is betting on this.

But I have only scratched the surface.

In Part 2, I am going to show you the three superpowers that MCP servers can provide: Tools, Resources, and Prompts. These are the building blocks that make MCPs actually useful. Understanding them will change how you think about what AI can do.

In Part 3, I am going to connect all of this to a prediction I made in January about AI becoming a human-to-human bridge. I told you that experts would package their knowledge into MCPs and sell access to their wisdom 24/7. Now I am going to show you exactly how that works technically. The vision meets the implementation.

In Part 4, I am going to show you a real MCP I built. His name is Alex Bennett, and he is my LinkedIn content strategist. He has 16 tools, 6 research resources, and 10 workflow prompts. He can generate carousels, track analytics, and apply my content strategy. He is proof that everything I am teaching you actually works.


Your Homework (Yes, Really)

Before Part 2, I want you to do one thing.

Think about a system or tool you wish AI could access for you. Maybe it is:

  • Your project management tool (Notion, Trello, Asana)
  • Your customer database
  • Your email inbox
  • Your calendar
  • Your file system
  • Your company's internal wiki

Now imagine an AI that could not just talk about that system but actually interact with it. Read from it. Write to it. Take actions on your behalf.

Hold that image in your mind.

Because that is exactly what MCPs make possible. And in the next few posts, I am going to show you how.


A Note on This Series

I have written about AI concepts before. My Tokenomics for Humans series broke down AI economics. My Claude God Tips series teaches practical Claude mastery. This series continues that mission.

You do not need to be a developer to understand MCPs.

You just need someone to explain it without the jargon.

That is what I am here for.


As always, thanks for reading!

