{"id":23157,"date":"2025-03-21T01:17:28","date_gmt":"2025-03-21T05:17:28","guid":{"rendered":"https:\/\/qxf2.com\/blog\/?p=23157"},"modified":"2025-03-25T07:45:54","modified_gmt":"2025-03-25T11:45:54","slug":"ai-assistant-and-model-context-protocol-to-automate-tasks","status":"publish","type":"post","link":"https:\/\/qxf2.com\/blog\/ai-assistant-and-model-context-protocol-to-automate-tasks\/","title":{"rendered":"AI assistant and Model Context Protocol to automate tasks"},"content":{"rendered":"<p>After being introduced to the <a href=\"https:\/\/www.anthropic.com\/news\/model-context-protocol\" target=\"_blank\" rel=\"noopener\">Model Context Protocol (MCP)<\/a> at a recent conference, <a href=\"https:\/\/qxf2.com?utm_source=ai_assistant_and_mcp&amp;utm_medium=click&amp;utm_campaign=From%20blog\" target=\"_blank\" rel=\"noopener\">we<\/a> were eager to experiment with MCP servers firsthand. In our project management workflow, we typically use a script to create new <a href=\"https:\/\/trello.com\" target=\"_blank\" rel=\"noopener\">Trello<\/a> boards and add members at the beginning of each sprint. To streamline this process, we decided to build an AI assistant that automates Trello board creation. This assistant uses an MCP server to invoke an API from a Python module.<\/p>\n<p><strong>Note:<\/strong> This post is designed to help you explore MCP in a few easy steps. To attain a deeper understanding of the concepts follow the hyperlinks throughout the post.<\/p>\n<hr>\n<h3>Overview<\/h3>\n<p>We designed a workflow that enables users to create Trello boards through natural, conversational interactions with an AI assistant. Our setup consists of four main components:<\/p>\n<h5>1. An AI Agent<\/h5>\n<p>We used <a href=\"https:\/\/github.com\/block\/goose\">goose<\/a>, an open-source and extensible AI agent. Having recently discovered goose, we were eager to explore its capabilities, making it a natural choice for our project. 
One of goose&#8217;s key features is <a href=\"https:\/\/block.github.io\/goose\/blog\/2025\/02\/17\/agentic-ai-mcp#tool-calling\">tool calling<\/a>, which allows agents to execute API calls.<\/p>\n<h5>2. A locally running model<\/h5>\n<p>To run Large Language Models (LLMs) locally, we chose <a href=\"https:\/\/ollama.com\" target=\"_blank\" rel=\"noopener\">Ollama<\/a>, a platform for running LLMs on your own machine. This keeps model execution private and removes the dependency on external servers.<\/p>\n<h5>3. An MCP server<\/h5>\n<p>With an AI agent and a local LLM in place, we integrated an MCP server. MCP follows a client-server architecture, allowing host applications, such as AI tools and Integrated Development Environments (IDEs), to interact with multiple servers seamlessly. These servers can securely access local files, databases, services, and external systems via APIs, providing LLMs with the necessary context to perform tasks effectively.<\/p>\n<h5>4. Trello API<\/h5>\n<p>The MCP server needs access to the Trello API to create a new board. The <a href=\"https:\/\/github.com\/shivahari\/rook\/blob\/main\/automate_trello.py\" target=\"_blank\" rel=\"noopener\">Trello automation Python module<\/a> houses a method, <a href=\"https:\/\/github.com\/shivahari\/rook\/blob\/main\/automate_trello.py#L13\" target=\"_blank\" rel=\"noopener\">create_trello_board<\/a>, that creates the board.<\/p>\n<p><strong>Note:<\/strong> For this project, the Trello automation Python module contains both the MCP server and Trello API components, though they are generally separate entities.<\/p>\n<hr>\n<h3>Implementation<\/h3>\n<p>Here&#8217;s how you can build an AI assistant that uses an MCP server to call an API and automate Trello board creation through natural language instructions:<\/p>\n<h5>a. Initial setup<\/h5>\n<p>1. 
Clone the project repo:<\/p>\n<pre lang=\"bash\">\r\ngit clone https:\/\/github.com\/shivahari\/rook.git\r\ncd rook\r\n<\/pre>\n<p>2. Install dependencies:<\/p>\n<pre lang=\"bash\">\r\npython3.10 -m venv &lt;venv_name&gt; # Create a virtual environment\r\nsource &lt;venv_name&gt;\/bin\/activate # Activate the virtual environment\r\npip install -r requirements.txt # Install the requirements\r\n<\/pre>\n<p>3. Set up the environment variables with your Trello credentials:<\/p>\n<pre lang=\"bash\">\r\nexport TRELLO_KEY=your_api_key\r\nexport TRELLO_TOKEN=your_api_token\r\n<\/pre>\n<h5>b. Downloading an Ollama model<\/h5>\n<p>Browse the <a href=\"https:\/\/ollama.com\/search\" target=\"_blank\" rel=\"noopener\">Ollama model library<\/a> and select a model that supports <a href=\"https:\/\/ollama.com\/search?c=tools\" target=\"_blank\" rel=\"noopener\">tooling<\/a>. Through experimentation, we found that larger models tend to generate more natural and verbose outputs. For this example, we selected <a href=\"https:\/\/ollama.com\/library\/qwen2.5:3b\" target=\"_blank\" rel=\"noopener\">qwen2.5:3b<\/a>.<\/p>\n<p>To download the model, run:<\/p>\n<pre lang=\"bash\">\r\nollama pull qwen2.5:3b\r\n<\/pre>\n<h5>c. Configure goose to use the downloaded model<\/h5>\n<p>Run the following command to configure goose:<\/p>\n<pre lang=\"bash\">\r\ngoose configure\r\n<\/pre>\n<p>Now follow the steps in <a href=\"https:\/\/block.github.io\/goose\/docs\/getting-started\/providers#local-llms-ollama\" target=\"_blank\" rel=\"noopener\">configure goose to use an Ollama model<\/a> to complete your configuration.<\/p>\n<h5>d. 
Start a goose session<\/h5>\n<p>To launch a goose session and connect it to your extension, use:<\/p>\n<pre lang=\"bash\">\r\ngoose session --with-extension \"python automate_trello.py\"\r\n<\/pre>\n<p>The <code>--with-extension<\/code> flag instructs goose to load the <code>automate_trello<\/code> module locally and use the MCP server defined in the module.<br \/>\n<strong>Note:<\/strong> You can refer to the <a href=\"https:\/\/block.github.io\/goose\/docs\/getting-started\/using-extensions\/#external-extensions\" target=\"_blank\" rel=\"noopener\">External Extensions documentation<\/a> for more details.<\/p>\n<h5>e. Create a new Trello board<\/h5>\n<p>Once the session is running, you can use natural language to create a Trello board:<\/p>\n<pre lang=\"bash\">\r\nstarting session | provider: ollama model: qwen2.5:3b\r\nlogging to \/Users\/ai\/.config\/goose\/sessions\/TTtanC97.jsonl\r\n\r\n\r\nGoose is running! Enter your instructions, or try asking what goose can do.\r\n\r\n\r\n( O)> What can you do for me?\r\nI can assist with creating a Trello board or list resources from extensions. How may I assist you today?\r\n\r\n( O)> Can you help creating a Trello board?\r\nOf course! 
Could you please provide me with the name for your Trello board?\r\n\r\n( O)> Let's name it \"MCP Board\"\r\n\r\n\u2500\u2500\u2500 create_trello_board | i0se8run \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nboard_name: MCP Board\r\n\r\nYour Trello board named \"MCP Board\" has been successfully created.\r\n<\/pre>\n<hr>\n<h3>How it works<\/h3>\n<p>When the user makes a request in natural language, goose interprets it and prompts for relevant details, such as the board name. The request is then executed and a new Trello board is created. Finally, the assistant provides the user with a confirmation message.<br \/>\nThe following image illustrates the full workflow:<br \/>\n<a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2025\/03\/ai_agent_mcp.png\" data-rel=\"lightbox-image-0\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2025\/03\/ai_agent_mcp.png\" alt=\"AI Agent Workflow\" width=\"991\" height=\"535\" class=\"aligncenter size-full wp-image-23179\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2025\/03\/ai_agent_mcp.png 991w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2025\/03\/ai_agent_mcp-300x162.png 300w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2025\/03\/ai_agent_mcp-768x415.png 768w\" sizes=\"auto, (max-width: 991px) 100vw, 991px\" \/><\/a><\/p>\n<hr>\n<p>And just like that, you have an AI assistant that can create Trello boards in just a few steps. Building this example provided us with valuable insights into the capabilities of MCP servers. 
As we move forward, we plan to explore MCP even further.<\/p>\n<hr>\n<h3>Advanced ML testing from Qxf2<\/h3>\n<p>Qxf2 offers <a href=\"https:\/\/qxf2.com\/aiml-testing-offering?utm_source=ai_assistant_and_mcp&amp;utm_medium=click&amp;utm_campaign=From%20blog\" target=\"_blank\" rel=\"noopener\">specialized QA for AI\/ML models<\/a>, helping startups validate their models with reliable testing strategies. Our hands-on approach to exploring new tools gives us a deeper understanding of data science and ML engineering workflows, enabling us to design tests that accurately reflect real-world usage. Whether you&#8217;re developing your first ML pipeline or scaling an existing one, Qxf2 provides the expertise you need to ensure quality and performance. Reach out to mak@qxf2.com to learn more.<\/p>\n<hr>\n","protected":false},"excerpt":{"rendered":"<p>After being introduced to the Model Context Protocol (MCP) at a recent conference, we were eager to experiment with MCP servers firsthand. In our project management workflow, we typically use a script to create new Trello boards and add members at the beginning of each sprint. 
To streamline this process, we decided to build an AI assistant that automates Trello [&hellip;]<\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[438,439],"tags":[],"class_list":["post-23157","post","type-post","status-publish","format-standard","hentry","category-ai-agents","category-mcp-server"],"_links":{"self":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/23157","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/comments?post=23157"}],"version-history":[{"count":24,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/23157\/revisions"}],"predecessor-version":[{"id":23222,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/23157\/revisions\/23222"}],"wp:attachment":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/media?parent=23157"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/categories?post=23157"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/tags?post=23157"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}