The game-changing potential of Model Context Protocol

    Kent C. Dodds

    We're on the cusp of a revolution in how we interact with technology, and it's all thanks to something called Model Context Protocol (MCP). I've been diving deep into this concept lately, and I'm genuinely excited about the possibilities it opens up. Let me break it down for you.

    Imagine having a super-smart AI assistant that can talk to all your apps and services on your behalf. That's essentially what MCP enables. It's a way for Large Language Models (LLMs) — think ChatGPT on steroids — to communicate with authorized services and perform tasks that we'd normally do ourselves or hire a virtual assistant to handle.

    But here's where it gets really interesting: MCP isn't just about individual services. It's about getting different services to talk to each other seamlessly.

    The new user interaction model

    Think about how we interact with apps and services today. We tap, swipe, and click our way through user interfaces designed for humans. But what if we could skip all that and just tell an AI what we want to accomplish?

    That's where MCP comes in. It's like a new UI, but instead of being designed for human fingers and eyeballs, it's built for AI brains. And you as the developer get to build that!

    Now, you might be wondering, "Does my app really need an MCP server?" Here's how I think about it: whatever features you build for humans to interact with your service, you can expose the same capabilities to LLMs as MCP tools.

    The cool thing is, LLMs don't need fancy UI elements. They just need well-described tools they can call. It's like giving them a direct line to the app's brain, bypassing all the pretty visuals we humans need.
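
    To make that concrete, a "tool" in MCP is described with a name, a human-readable description, and a JSON Schema for its inputs. That metadata is what the LLM actually sees instead of your UI. Here's a minimal sketch of what a tool description might look like for the soccer-field example later in this post (the tool name and parameters are hypothetical, not from any real city API):

    ```typescript
    // A sketch of an MCP tool description. The LLM reads this metadata and
    // decides when (and with what arguments) to call the tool. The name and
    // parameters here are made up for illustration.
    const reserveFieldTool = {
      name: "reserve_soccer_field",
      description: "Reserve a city soccer field for a given date and time slot.",
      inputSchema: {
        type: "object",
        properties: {
          fieldId: { type: "string", description: "ID of the field to reserve" },
          date: { type: "string", description: "ISO date, e.g. 2025-06-03" },
          startTime: { type: "string", description: "Start time, e.g. 17:00" },
          durationMinutes: { type: "number", description: "Reservation length" },
        },
        required: ["fieldId", "date", "startTime"],
      },
    };
    ```

    An MCP server advertises its tools in response to a `tools/list` request, and the client's LLM then issues `tools/call` requests with arguments matching the schema. No buttons, no forms, just that contract.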

    Accessibility gets a major upgrade

    This is where things get really exciting for me. By implementing MCP, we're potentially solving a ton of accessibility issues in one fell swoop.

    Imagine a blind user wiring up their LLM client with your MCP server. Suddenly, they can schedule a soccer field for their team, sign up for their kid's extracurricular class, and handle online banking — all through a simple text or voice interface. No need for screen readers trying to navigate complex UIs. No modals. No comboboxes. No menus. Just pure, direct interaction.

    No need to write custom integrations

    Let's take this a step further. Say you want to:

    1. Schedule a city soccer field for your kid's practice
    2. Add that event to your calendar
    3. Text the whole team about the practice time

    Normally, this would involve juggling multiple apps and services. But with MCP, it's a whole different ballgame.

    If you're building the software for the city to reserve soccer fields, you don't have to worry about integrating with calendars or messaging services. Each service can have its own MCP server, and the LLM can talk to each one to accomplish the user's goal.
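
    Here's a toy sketch of that flow. Each object below stands in for a separate MCP server (city fields, calendar, messaging); in reality the LLM client connects to each server over the protocol and decides which tool to call, but the shape of the orchestration is the same. All the names here are hypothetical:

    ```typescript
    // Toy stand-ins for three independent MCP servers. None of them knows the
    // others exist; the LLM client is the only thing that connects them.
    type ToolResult = { ok: boolean; detail: string };

    const cityFieldsServer = {
      reserveField: (field: string, when: string): ToolResult => ({
        ok: true,
        detail: `Reserved ${field} at ${when}`,
      }),
    };

    const calendarServer = {
      addEvent: (title: string, when: string): ToolResult => ({
        ok: true,
        detail: `Added "${title}" at ${when}`,
      }),
    };

    const messagingServer = {
      textGroup: (group: string, message: string): ToolResult => ({
        ok: true,
        detail: `Texted ${group}: ${message}`,
      }),
    };

    // Roughly what the LLM client does after the user says
    // "book practice Tuesday at 5pm and tell the team":
    function bookPractice(): ToolResult[] {
      const when = "Tuesday 17:00";
      const reservation = cityFieldsServer.reserveField("Field 3", when);
      const event = calendarServer.addEvent("Soccer practice", when);
      const text = messagingServer.textGroup("Team", `Practice ${when}, Field 3`);
      return [reservation, event, text];
    }
    ```

    Notice that the city-fields server never imports a calendar library and the calendar never knows a messaging service exists. The integration lives entirely in the LLM client's plan.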

    It's like having a super-efficient personal assistant who knows how to talk to every service you use, without those services ever needing to know about each other.

    The exciting future ahead

    We're in for a really exciting future with MCP. It's not just about making our lives easier (though it certainly does that). It's about fundamentally changing how we interact with technology.

    Imagine a world where you don't need to learn a new interface for every app you use. Where accessibility isn't an afterthought, but a core feature. Where complex, multi-step tasks become as simple as expressing what you want to achieve.

    That's the world MCP is helping to create. And I, for one, can't wait to help you learn how to build that future.
