
MCP Client in Rails using ruby-mcp-client gem

Paweł Strzałkowski
Chief Technology Officer

Let's build a helpful assistant! Haven't you seen too many articles like this? This one, I promise, will be special. It will add Model Context Protocol (MCP) capabilities to your Ruby on Rails application and allow YOUR assistant to be smarter, better, and even more helpful!

Implementing an LLM-driven assistant in Rails

We start by creating a new Ruby on Rails application and adding a minimal implementation of an LLM-driven assistant. Each prompt coming from the frontend will be a separate message, with no context of the previous ones (basically, it's nowhere near being production-ready).

It will use the ruby-openai gem. For simplicity, most of the needed logic lives in the controller. You may find the full implementation at the link below the article.

class AssistantController < ApplicationController
  def show; end

  def chat
    system_message = {
      "role" => "system",
      "content" => "You are a helpful assistant."
    }
    user_message = {
      "role" => "user",
      "content" => params[:prompt]
    }

    openai_client = OpenAI::Client.new
    response = openai_client.chat(
      parameters: {
        model: "gpt-4.1-mini",
        messages: [ system_message, user_message ],
        temperature: 0.7
      }
    )

    assistant_message = response.dig("choices", 0, "message")

    respond_to do |format|
      format.turbo_stream do
        render(
          turbo_stream: turbo_stream.append(
                          "messages",
                          partial: "assistant/message",
                          locals: { message: user_message }
                        ) +
                        turbo_stream.append(
                          "messages",
                          partial: "assistant/message",
                          locals: { message: assistant_message }
                        ) +
                        turbo_stream.update(
                          "chat_input",
                          partial: "assistant/chat_form"
                        )
        )
      end
    end
  end
end

It is a simple form with a prompt textarea. The chat UI displays the prompt and the answers provided by the LLM.
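For reference, the view side can be as small as this. It's a minimal sketch: the element IDs and partial names match the Turbo Stream targets used in the controller above, while the exact markup and the assistant_chat_path route helper are my assumptions, not the article's verbatim code.

<%# app/views/assistant/show.html.erb %>
<div id="messages"></div>

<div id="chat_input">
  <%= render "assistant/chat_form" %>
</div>

<%# app/views/assistant/_chat_form.html.erb %>
<%= form_with url: assistant_chat_path, method: :post do |form| %>
  <%= form.text_area :prompt %>
  <%= form.submit "Send" %>
<% end %>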

It works well but has very limited capabilities. You can't use it to trigger any interactions or ask about real-world information (e.g., the current weather). Even though you may be used to asking for those via the ChatGPT web UI, you can't do it with the API alone. The core model doesn't have access to such features.

But we can add MCP capabilities!

Adding MCP Client to a Rails app

To keep the focus on the task, I will show an example using a single capability: greeting users. It's a feature of the FastMCP-backed server we created in the article Create an MCP Server using Ruby on Rails and FastMCP.
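As a quick reminder, on the server side such a capability is just a tool class. Sketched from memory with the fast-mcp gem's tool DSL (the exact implementation lives in the earlier article):

class GreetUserTool < FastMcp::Tool
  description "Greets a user by name"

  arguments do
    required(:name).filled(:string)
  end

  def call(name:)
    "Hello, #{name}!"
  end
end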

Let's now add the ruby-mcp-client gem, written by Szymon Kurcab, available at https://github.com/simonx1/ruby-mcp-client. Add it to the application by including

gem 'ruby-mcp-client'

in the Gemfile and running

$ bundle install

This gem provides the MCPClient::Client class, which becomes an interface between your application and the LLM, allowing the LLM to interact with tools provided by MCP servers.
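In its most minimal form, that interface looks like this (the SSE URL is just an example; list_tools and to_openai_tools come from the gem's public API):

mcp_client = MCPClient::Client.new(
  mcp_server_configs: [
    MCPClient.sse_config(base_url: "http://localhost:3000/mcp/sse")
  ]
)

mcp_client.list_tools      # tools discovered on the connected server
mcp_client.to_openai_tools # the same tools, formatted for the OpenAI API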

Basically, it's the same approach I described in my Kraków talk, The joy of creativity in the age of AI. You provide tools to the LLM and react to their usage in the LLM's responses. However, this time, you don't have to implement the tools within your Rails application. That is the job of the MCP servers. You can concentrate on building your service and let MCP servers provide the additional functionality.

Using ruby-mcp-client with Rails

It's time to extend the application with MCP-provided features. The implementation below is quite naive, but it shows how to use the gem:

  • Configure the MCPClient::Client object with server data. You may use any server that implements the STDIO or SSE transport.
  • Load the tools from the configured MCP server and put them in the proper format for your LLM (e.g., OpenAI, Anthropic, or Google).
  • Provide these tools in an LLM call.
  • With every response from the LLM, check if it requests any tool usage.
  • If there are tool calls, iterate through them, execute each tool using the mcp_client, and send the results back to the LLM in a subsequent call.
  • Continue this process until the LLM responds with a final message without any further tool calls.

Here’s how the chat action in AssistantController evolves:

def chat
  openai_messages = []
  ui_messages = []

  openai_messages << {
    "role" => "system",
    "content" => "You are a helpful assistant."
  }

  user_message = {
    "role" => "user",
    "content" => params[:prompt]
  }
  openai_messages << user_message
  ui_messages << user_message

  mcp_client = MCPClient::Client.new(
    mcp_server_configs: [
      MCPClient.sse_config(
        base_url: "http://localhost:3000/mcp/sse",
        read_timeout: 30,
        retries: 3,
        retry_backoff: 1
      )
    ]
  )
  tools = mcp_client.to_openai_tools

  openai_client = OpenAI::Client.new
  response = openai_client.chat(
    parameters: {
      model: "gpt-4.1-mini",
      messages: openai_messages,
      temperature: 0.7,
      tools: tools,
      tool_choice: "auto"
    }
  )

  assistant_message = response.dig("choices", 0, "message")
  tool_calls = assistant_message["tool_calls"] || []

  while tool_calls.any?
    # Record the assistant's tool request in the history, then answer
    # every requested call with a matching "tool" message.
    openai_messages << {
      "role" => "assistant",
      "tool_calls" => tool_calls
    }

    tool_calls.each do |tool_call|
      function_details = tool_call["function"]
      function_name = function_details["name"]
      function_arguments = JSON.parse(
        function_details["arguments"]
      )

      tool_call_result = mcp_client.call_tool(
        function_name, function_arguments
      )

      openai_messages << {
        "role" => "tool",
        "tool_call_id" => tool_call["id"],
        "name" => function_name,
        "content" => tool_call_result.to_json
      }
      ui_messages << {
        "role" => "assistant",
        "content" => "Calling tool <strong>#{function_name}</strong> with #{function_arguments}"
      }
    end

    response = openai_client.chat(
      parameters: {
        model: "gpt-4.1-mini",
        messages: openai_messages,
        tools: tools,
        tool_choice: "auto",
        temperature: 0.7
      }
    )

    assistant_message = response.dig("choices", 0, "message")
    tool_calls = assistant_message["tool_calls"] || []
  end

  ui_messages << assistant_message

  respond_to do |format|
    format.turbo_stream do
      turbo_stream_response = ActiveSupport::SafeBuffer.new

      ui_messages.each do |message|
        turbo_stream_response += turbo_stream.append(
          "messages",
          partial: "assistant/message",
          locals: { message: }
        )
      end

      turbo_stream_response += turbo_stream.update(
        "chat_input",
        partial: "assistant/chat_form"
      )

      render turbo_stream: turbo_stream_response
    end
  end
end

What's important: throughout a single chain of interactions, we provide the LLM with all the history of tool usage and assistant messages. This allows the LLM to understand which tools have already been called and what the effect of each call was.
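To make the shape of that history concrete, here is roughly what openai_messages contains after a single round of tool usage (the id, arguments, and contents are made up for illustration):

openai_messages = [
  { "role" => "system", "content" => "You are a helpful assistant." },
  { "role" => "user", "content" => "Greet Ada" },
  {
    "role" => "assistant",
    "tool_calls" => [
      {
        "id" => "call_abc123",
        "type" => "function",
        "function" => {
          "name" => "GreetUserTool",
          "arguments" => "{\"name\":\"Ada\"}"
        }
      }
    ]
  },
  {
    "role" => "tool",
    "tool_call_id" => "call_abc123",
    "name" => "GreetUserTool",
    "content" => "{\"greeting\":\"Hello, Ada!\"}"
  }
]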

The result is:

The assistant was asked to greet two users. It used the same GreetUserTool (from our FastMCP server example) twice in a row and then ended the interaction with a short summary.

Optimizations

What could surely be optimized in this implementation:

  • Your host application should allow the user to choose to accept or decline tool usage, even if it's a switch like "always use tools and don't ask".
  • You should limit the number of tool calls in a single run. Otherwise, your application may be caught in an endless (or at least a very long) loop of actions; see the sketch after this list.
  • And, of course, make the UI much nicer.
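For the second point, a simple iteration guard is enough. A minimal sketch (MAX_TOOL_ROUNDS is an assumed constant, not something the gem provides):

MAX_TOOL_ROUNDS = 5

rounds = 0
while tool_calls.any?
  rounds += 1
  break if rounds > MAX_TOOL_ROUNDS # bail out instead of looping forever

  # ... execute the tool calls and query the LLM again, as shown above ...
end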

There is much more

The ruby-mcp-client gem has many more features. Please review its README at https://github.com/simonx1/ruby-mcp-client for the full list. It is actively maintained, and new features arrive often. There are several implementations of MCP servers, but very few client implementations, which makes this project unique enough to give it a try and see how it fits your needs. If you have any questions for the author of the ruby-mcp-client gem, rumor has it that he can be found on the Ruby AI Builders Discord.

If you'd like to see the full code used in this article, check out the repository at https://github.com/pstrzalk/rails-with-ruby-mcp-client.

Summary

Integrating the ruby-mcp-client gem is a direct way to enhance a Ruby on Rails assistant with MCP capabilities, allowing it to connect to external tools and services. And there you have it: a straightforward approach to building advanced, tool-aware AI assistants directly within Ruby on Rails applications!
