
MCP Client in Rails using ruby-mcp-client gem

Paweł Strzałkowski
Chief Technology Officer

Let's build a helpful assistant! Haven't you seen too many articles like this? This one, I promise, will be special. It will add Model Context Protocol (MCP) capabilities to your Ruby on Rails application and allow YOUR assistant to be smarter, better, and even more helpful!

Implementing an LLM-driven assistant in Rails

We start by creating a new Ruby on Rails application and adding a minimal implementation of an LLM-driven assistant. Each prompt coming from the frontend will be a separate message, with no context of the previous ones (basically, it's nowhere near being production-ready).

It will use the ruby-openai gem. For simplicity, most of the needed logic lives in the controller. You may find the full implementation at the link below the article.

class AssistantController < ApplicationController
  def show; end

  def chat
    system_message = {
      "role" => "system",
      "content" => "You are a helpful assistant."
    }
    user_message = {
      "role" => "user",
      "content" => params[:prompt]
    }

    # Send the conversation to OpenAI and extract the assistant's reply.
    openai_client = OpenAI::Client.new
    response = openai_client.chat(
      parameters: {
        model: "gpt-4.1-mini",
        messages: [ system_message, user_message ],
        temperature: 0.7
      }
    )

    assistant_message = response.dig("choices", 0, "message")

    respond_to do |format|
      format.turbo_stream do
        render(
          turbo_stream: turbo_stream.append(
                          "messages",
                          partial: "assistant/message",
                          locals: { message: user_message }
                        ) +
                        turbo_stream.append(
                          "messages",
                          partial: "assistant/message",
                          locals: { message: assistant_message }
                        ) +
                        turbo_stream.update(
                          "chat_input",
                          partial: "assistant/chat_form"
                        )
        )
      end
    end
  end
end

The UI is a simple form with a prompt textarea; the chat displays the prompt and the answers provided by the LLM.

It works well but has very limited capabilities. You can't use it to trigger any interactions or ask about real-world information (e.g., the current weather). Even though you may be used to asking for those via the ChatGPT web UI, you can't do it with the bare API; the core model doesn't have access to such features.

But we can add MCP capabilities!

Adding MCP Client to a Rails app

To keep focus on the task, I will show an example using the capability of greeting users, which is a feature of the FastMCP-backed server we've created in the article Create an MCP Server using Ruby on Rails and FastMCP.

Let's now add the ruby-mcp-client gem, written by Szymon Kurcab and available at https://github.com/simonx1/ruby-mcp-client. Add it to the application by including

gem 'ruby-mcp-client'

in the Gemfile and running

$ bundle install

This gem provides the MCPClient::Client class, which becomes the interface between your application and the LLM, allowing the LLM to interact with tools provided by MCP servers.
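
To get a quick feel for the API, here is a minimal sketch that connects to a server and prints the tools it exposes. It assumes the FastMCP server from the previous article is running locally and exposing an SSE endpoint:

require "mcp_client"

# A minimal sketch, assuming an MCP server is reachable at the URL below.
mcp_client = MCPClient::Client.new(
  mcp_server_configs: [
    MCPClient.sse_config(base_url: "http://localhost:3000/mcp/sse")
  ]
)

# Print every tool the server exposes.
mcp_client.list_tools.each do |tool|
  puts "#{tool.name}: #{tool.description}"
end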

Basically, it's the same approach I described in my Kraków talk, The joy of creativity in the age of AI: you provide tools to the LLM and react to their usage in its responses. This time, however, you don't have to implement the tools within your Rails application; that is the job of the MCP servers. You can concentrate on building your service and let MCP servers provide the additional functionality.

Using ruby-mcp-client with Rails

It's time to extend the application with MCP-provided features. The implementation below is quite naive, but it gives the picture of how to use the gem:

  • Configure the MCPClient::Client object with server data. You may use any server that implements the STDIO or SSE transport (see the configuration sketch after this list).
  • Load the tools from the configured MCP server and put them in the proper format for your LLM (e.g., OpenAI, Anthropic, or Google).
  • Provide these tools in an LLM call.
  • With every response from the LLM, check if it requests any tool usage.
  • If there are tool calls, iterate through them, execute each tool using the mcp_client, and send the results back to the LLM in a subsequent call.
  • Continue this process until the LLM responds with a final message without any further tool calls.
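
For the first two steps, the gem ships small config helpers for each transport. Here is a sketch of both, where the SSE endpoint is our local FastMCP server and the STDIO command is only an illustrative example of a server run as a subprocess:

# SSE transport: talk to an MCP server over HTTP.
sse_server = MCPClient.sse_config(
  base_url: "http://localhost:3000/mcp/sse",
  read_timeout: 30,
  retries: 3,
  retry_backoff: 1
)

# STDIO transport: spawn a command-line MCP server as a child process
# (the filesystem server below is only an example).
stdio_server = MCPClient.stdio_config(
  command: "npx -y @modelcontextprotocol/server-filesystem /tmp"
)

mcp_client = MCPClient::Client.new(
  mcp_server_configs: [ sse_server, stdio_server ]
)

# Convert the tools to the LLM's function-calling format;
# to_anthropic_tools is also available for Anthropic models.
tools = mcp_client.to_openai_tools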

Here’s how the chat action in AssistantController evolves:

def chat
  openai_messages = []
  ui_messages = []

  openai_messages << {
    "role" => "system",
    "content" => "You are a helpful assistant."
  }

  user_message = {
    "role" => "user",
    "content" => params[:prompt]
  }
  openai_messages << user_message
  ui_messages << user_message

  # Connect to the MCP server and translate its tools into the
  # function-calling format expected by OpenAI.
  mcp_client = MCPClient::Client.new(
    mcp_server_configs: [
      MCPClient.sse_config(
        base_url: "http://localhost:3000/mcp/sse",
        read_timeout: 30,
        retries: 3,
        retry_backoff: 1
      )
    ]
  )
  tools = mcp_client.to_openai_tools

  openai_client = OpenAI::Client.new
  response = openai_client.chat(
    parameters: {
      model: "gpt-4.1-mini",
      messages: openai_messages,
      temperature: 0.7,
      tools: tools,
      tool_choice: "auto"
    }
  )

  assistant_message = response.dig("choices", 0, "message")
  tool_calls = assistant_message["tool_calls"] || []

  while tool_calls.any?
    # Record the assistant message that requested the tools, then
    # execute every requested tool and answer each call with its result.
    openai_messages << {
      "role" => "assistant",
      "tool_calls" => tool_calls
    }

    tool_calls.each do |tool_call|
      function_details = tool_call["function"]
      function_name = function_details["name"]
      function_arguments = JSON.parse(
        function_details["arguments"]
      )

      tool_call_result = mcp_client.call_tool(
        function_name, function_arguments
      )

      openai_messages << {
        "role" => "tool",
        "tool_call_id" => tool_call["id"],
        "name" => function_name,
        "content" => tool_call_result.to_json
      }
      ui_messages << {
        "role" => "assistant",
        "content" => "Calling tool <strong>#{function_name}</strong> with #{function_arguments}"
      }
    end

    response = openai_client.chat(
      parameters: {
        model: "gpt-4.1-mini",
        messages: openai_messages,
        tools: tools,
        tool_choice: "auto",
        temperature: 0.7
      }
    )

    assistant_message = response.dig("choices", 0, "message")
    tool_calls = assistant_message["tool_calls"] || []
  end

  ui_messages << assistant_message

  respond_to do |format|
    format.turbo_stream do
      turbo_stream_response = ActiveSupport::SafeBuffer.new

      ui_messages.each do |message|
        turbo_stream_response += turbo_stream.append(
          "messages",
          partial: "assistant/message",
          locals: { message: }
        )
      end

      turbo_stream_response += turbo_stream.update(
        "chat_input",
        partial: "assistant/chat_form"
      )

      render turbo_stream: turbo_stream_response
    end
  end
end

What's important: throughout a single chain of interactions, we provide the LLM with the full history of tool calls and assistant messages. This allows the LLM to understand which tools have already been called and what the result of each call was.
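
For illustration, after a single round with the greeting tool, the history sent back to OpenAI looks roughly like this (the ID, tool name, and values are made-up placeholders):

openai_messages = [
  { "role" => "system", "content" => "You are a helpful assistant." },
  { "role" => "user", "content" => "Greet Ada" },
  # The LLM's request to use a tool...
  {
    "role" => "assistant",
    "tool_calls" => [
      {
        "id" => "call_abc123",
        "type" => "function",
        "function" => { "name" => "greet_user", "arguments" => '{"name":"Ada"}' }
      }
    ]
  },
  # ...and our answer with the MCP tool's result.
  {
    "role" => "tool",
    "tool_call_id" => "call_abc123",
    "name" => "greet_user",
    "content" => '"Hello, Ada!"'
  }
]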

The result is:

The assistant was asked to greet two users. It used the same GreetUserTool (from our FastMCP server example) twice in a row and then ended the interaction with a short summary.

Optimizations

A few things could surely be optimized in this implementation:

  • Your host application should allow the user to choose to accept or decline tool usage, even if it's a switch like "always use tools and don't ask".
  • You should limit the number of tool calls in a single run; otherwise, your application may get caught in an endless (or at least a very long) loop of actions (see the sketch below this list).
  • And, of course, make the UI much nicer.
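
For the second point, here is a minimal sketch of such a guard, assuming a hypothetical MAX_TOOL_ROUNDS constant in the controller:

MAX_TOOL_ROUNDS = 5 # hypothetical limit of LLM round trips per chat action

rounds = 0
while tool_calls.any?
  rounds += 1
  if rounds > MAX_TOOL_ROUNDS
    ui_messages << {
      "role" => "assistant",
      "content" => "Stopping: too many tool calls in a row."
    }
    break
  end

  # ... execute the requested tools and call the LLM again,
  # exactly as in the chat action above ...
end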

There is much more

The ruby-mcp-client gem has many more features. Please review its README at https://github.com/simonx1/ruby-mcp-client for the full list. It is actively maintained, and new features come often. There are several Ruby implementations of MCP servers, but very few of a client, which makes this project unique enough to give it a try and see how it fits your needs. If you have any questions for the author of the ruby-mcp-client gem, rumor has it that he can be found on the Ruby AI Builders Discord.

If you'd like to check the full code used in this article, please check out the repository at https://github.com/pstrzalk/rails-with-ruby-mcp-client.

Summary

Integrating the ruby-mcp-client gem provides a direct method for enhancing Ruby on Rails assistants with MCP capabilities, allowing them to connect with external tools and services. And there you have it: a straightforward approach to building advanced, tool-aware AI assistants directly within Ruby on Rails applications!
