{"id":10018,"date":"2018-12-20T23:30:26","date_gmt":"2018-12-21T04:30:26","guid":{"rendered":"https:\/\/qxf2.com\/blog\/?p=10018"},"modified":"2018-12-20T23:30:26","modified_gmt":"2018-12-21T04:30:26","slug":"automating-the-setup-for-cloud-based-testing-with-selenium-grid-and-docker-swarm","status":"publish","type":"post","link":"https:\/\/qxf2.com\/blog\/automating-the-setup-for-cloud-based-testing-with-selenium-grid-and-docker-swarm\/","title":{"rendered":"Automated cloud testing setup using Selenium grid and Docker Swarm"},"content":{"rendered":"<p>Maintaining infrastructure for automated Selenium cross-browser tests is time-consuming. <\/p>\n<p>The cloud testing platforms like BrowserStack and Saucelabs help you. But in some cases, you want to have your own cloud testing environment. This is usually time-consuming and involves setting up and using Selenium Grid. This post helps testers to automate the setup process for cloud-based testing. We hope this post will make it easier for you to setup the cluster setup environment for distributing tests across a number of machines using <a href=\"https:\/\/www.seleniumhq.org\/docs\/07_selenium_grid.jsp\">Selenium Grid<\/a> and <a href=\"https:\/\/docs.docker.com\/engine\/swarm\/\">Docker Swarm<\/a>. <\/p>\n<p>Selenium Grid enables testers to parallelize their testing pipeline. It can be used to automate the process of manually setting up a distributed test environment where you can expose the code to these environments. This helps speed up the cycle of Continuous Integration. This can be sped up even further by using the capabilities of Docker Swarm. This article will show you some scripts which we built to help to quickly create and configure a cluster rapidly.<\/p>\n<hr\/>\n<h3>Prerequisites<\/h3>\n<p>This post assumes prior knowledge of <a href=\"https:\/\/www.docker.com\/\">Docker<\/a> and some experience working with Docker. 
You should also be familiar with <a href=\"https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/Instances.html\">AWS EC2 instances<\/a>.<\/p>\n<hr\/>\n<h3>Objective<\/h3>\n<p>By the end of this tutorial, you will be able to run sample Selenium tests in parallel across an AWS machine setup with Chrome and Firefox nodes. To achieve this, you need to:<\/p>\n<p>1) Automate the provisioning of resources on AWS<br \/>\n2) Set up the swarm manager and workers<br \/>\n3) Create the Selenium Grid<br \/>\n4) Deploy the Selenium Grid to AWS via Docker Compose<br \/>\n5) Run automated tests on Selenium Grid<br \/>\n6) Automate de-provisioning of resources on AWS<\/p>\n<p>Note: This article walks you through setting up a Grid on AWS using 3 machines (1 manager, 2 worker nodes). Before going into the details, let&#8217;s briefly review <a href=\"https:\/\/www.seleniumhq.org\/docs\/07_selenium_grid.jsp\">Selenium Grid<\/a> and <a href=\"https:\/\/docs.docker.com\/engine\/swarm\/\">Docker Swarm<\/a>.<\/p>\n<hr\/>\n<h3>What is Selenium Grid?<\/h3>\n<p><a href=\"https:\/\/www.seleniumhq.org\/docs\/07_selenium_grid.jsp\">Selenium Grid<\/a> allows you to run your tests on different machines against different browsers in parallel. The entry point of a Selenium Grid is the Selenium Hub. Test cases hit the hub, which routes each one to an available browser in the grid based on the DesiredCapabilities you request. Nodes are the machines registered with the hub that actually execute the test cases. To run multiple tests in parallel, a grid is a must.<\/p>\n<hr\/>\n<h3>What is Docker Swarm?<\/h3>\n<p><a href=\"https:\/\/docs.docker.com\/engine\/swarm\/\">Docker Swarm<\/a> is a tool used to cluster and orchestrate Docker containers. There are two types of nodes: manager nodes, which define services, and worker nodes, which run tasks assigned by the managers based on the service definition. You submit a service definition to a manager node. 
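<\/p>\n<p>For example, a single service can be defined straight from the CLI on a manager node (a minimal sketch; the <code>nginx<\/code> image and the <code>web<\/code> service name are placeholders, not part of our grid setup):<\/p>\n<pre lang='python'>\r\n# run on a swarm manager: start 2 replicas of a sample service and publish port 80\r\n$ docker service create --name web --replicas 2 --publish 80:80 nginx\r\n<\/pre>\n<p>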
The service definition consists of one or more tasks and the number of replicas of that service you want to run on the cluster. <\/p>\n<hr\/>\n<h3>About our automation scripts<\/h3>\n<p>We wrote two scripts, <strong><em>swarmup.sh<\/em><\/strong> and <strong><em>swarmdown.sh<\/em><\/strong>.<br \/>\nThe <strong><em>swarmup.sh<\/em><\/strong> script will<\/p>\n<li>spin up the EC2 instances,<\/li>\n<li>set up the swarm manager and workers,<\/li>\n<li>create the Selenium Grid,<\/li>\n<li>deploy the Selenium Grid to AWS via Docker Compose. <\/li>\n<p>The <strong><em>swarmdown.sh<\/em><\/strong> script will<\/p>\n<li>shut down the EC2 resources<\/li>\n<p>Here is the complete swarmup.sh script. We will look at each section in detail below.<\/p>\n<p><strong><em>swarmup.sh<\/em><\/strong><\/p>\n<pre lang='python'>\r\n#!\/bin\/bash\r\n\r\necho \"Spinning up three AWS instances...\"\r\nfor i in 1 2 3 ; do\r\n\tdocker-machine create \\\r\n\t\t--driver amazonec2 \\\r\n\t\t--amazonec2-open-port 2377 \\\r\n\t\t--amazonec2-open-port 7946 \\\r\n\t\t--amazonec2-open-port 4789 \\\r\n\t\t--amazonec2-open-port 7946\/udp \\\r\n\t\t--amazonec2-open-port 4789\/udp \\\r\n\t\t--amazonec2-open-port 8080 \\\r\n\t\t--amazonec2-open-port 80 \\\r\n\t\t--amazonec2-open-port 22 \\\r\n\t\t--amazonec2-open-port 4444 \\\r\n\t\tswarmmode-$i\r\ndone\r\n\r\n#update the OS on each instance (docker-machine logs in as the ubuntu user, so use sudo)\r\nfor i in 1 2 3; do\r\n\techo \"updating swarmmode-$i\"\r\n\tdocker-machine ssh swarmmode-$i 'sudo apt-get update && sudo apt-get upgrade -y && sudo reboot'\r\ndone\r\n\r\n#wait for the instances to come back after the reboot\r\nsleep 10\r\n\r\necho \"Initializing Swarm mode...\"\r\nfor i in 1 2 3; do\r\n\tif [ \"$i\" == \"1\" ]; then\r\n\t\tmanager_ip=$(docker-machine ip swarmmode-$i)\r\n\t\teval $(docker-machine env swarmmode-$i) && \\\r\n\t\t\tdocker swarm init --advertise-addr \"$manager_ip\"\r\n\t\tworker_token=$(docker swarm join-token worker -q)\r\n\telse\r\n\t\teval $(docker-machine env swarmmode-$i) && \\\r\n\t\t\tdocker swarm join --token \"$worker_token\" \"$manager_ip:2377\"\r\n\t
fi\r\ndone\r\n\r\necho \"Deploying Selenium Grid to http:\/\/$(docker-machine ip swarmmode-1):4444...\"\r\n\r\neval $(docker-machine env swarmmode-1)\r\ndocker stack deploy --compose-file=docker-compose.yml selenium_test\r\n<\/pre>\n<p>This creates 3 EC2 instances of the t2.micro instance type using an Ubuntu image (by default, the Amazon EC2 driver uses a daily image of Ubuntu 15.10 so that a recent version of Docker can be installed).<\/p>\n<hr\/>\n<h3>Provisioning resources on EC2<\/h3>\n<p>The first step in the script spins up 3 EC2 instances. Before creating the instances, let&#8217;s first understand the credential configuration and the ports we need to open in the security group.<\/p>\n<p><strong>Credentials Configuration:<\/strong> The first step is to configure AWS credentials. These can be specified via command-line flags, environment variables, or an AWS credential file. We used environment variables, which you can set with the <code>export<\/code> command as shown below.<\/p>\n<pre lang='python'>\r\n$ export AWS_ACCESS_KEY_ID=MY-ACCESS-ID\r\n$ export AWS_SECRET_ACCESS_KEY=MY-SECRET-KEY<\/pre>\n<p>You can find more details about configuring credentials <a href=\"https:\/\/docs.docker.com\/machine\/drivers\/aws\/#aws-credential-file\">here<\/a>.<\/p>\n<p><strong>Port Configuration:<\/strong> Docker Swarm requires a few ports to be open for it to work: <\/p>\n<li>TCP port 2377 for cluster management communication between the nodes of the swarm. It only needs to be opened on manager nodes.<\/li>\n<li>TCP and UDP port 7946 for communication among nodes (container network discovery).<\/li>\n<li>UDP port 4789 for overlay network traffic (container ingress networking).<\/li>\n<p>Below is the command to spin up the 3 EC2 instances:<\/p>\n<pre lang='python'>\r\nfor i in 1 2 3 ; do\r\n\tdocker-machine create \\\r\n\t\t--driver amazonec2 \\\r\n\t\t--amazonec2-open-port 2377 \\\r\n\t\t--amazonec2-open-port 7946 \\\r\n\t\t--amazonec2-open-port 4789 \\\r\n\t\t--amazonec2-open-port 7946\/udp \\\r\n\t\t--amazonec2-open-port 4789\/udp \\\r\n\t\t--amazonec2-open-port 8080 \\\r\n\t\t--amazonec2-open-port 80 \\\r\n\t\t--amazonec2-open-port 22 \\\r\n\t\t--amazonec2-open-port 4444 \\\r\n\t\tswarmmode-$i\r\ndone\r\n<\/pre>\n<p>Docker-machine has an EC2 driver for creating a Docker node on AWS. A Docker node in this context is an AWS VM instance with Docker pre-installed. With default options, <code>docker-machine<\/code> picks a t2.micro EC2 instance with Ubuntu 15.10 and installs the latest Docker Engine on it. <code>docker-machine<\/code> also takes care of configuring the certificates that allow us to access the AWS VM securely. You can refer to this <a href=\"https:\/\/github.com\/Nordstrom\/docker-machine\/blob\/master\/docs\/drivers\/aws.md\">link<\/a> for more details.<\/p>\n<hr\/>\n<h3>SSH into each instance and update the OS<\/h3>\n<p>The command below SSHes into each instance, updates and upgrades the OS, and reboots the machine.<\/p>\n<pre lang='python'>\r\nfor i in 1 2 3; do\r\n\techo \"updating swarmmode-$i\"\r\n\tdocker-machine ssh swarmmode-$i 'sudo apt-get update && sudo apt-get upgrade -y && sudo reboot'\r\ndone\r\n<\/pre>\n<hr\/>\n<h3>Setting up the swarm manager and workers<\/h3>\n<p>In our case, the swarm is composed of 1 node with the manager role and 2 nodes with the worker role. 
We create the swarm cluster by making &#8220;swarmmode-1&#8221; the manager, &#8220;swarmmode-2&#8221; worker 1, and &#8220;swarmmode-3&#8221; worker 2. To achieve this, we initialize the swarm on the manager node using &#8220;<code>docker swarm init<\/code>&#8221;. The command <code>docker swarm join-token worker -q<\/code> returns the worker token needed to join worker nodes to the swarm. <\/p>\n<pre lang='python'>\r\nfor i in 1 2 3; do\r\n\tif [ \"$i\" == \"1\" ]; then\r\n\t\tmanager_ip=$(docker-machine ip swarmmode-$i)\r\n\t\teval $(docker-machine env swarmmode-$i) && \\\r\n\t\t\tdocker swarm init --advertise-addr \"$manager_ip\"\r\n\t\tworker_token=$(docker swarm join-token worker -q)\r\n\telse\r\n\t\teval $(docker-machine env swarmmode-$i) && \\\r\n\t\t\tdocker swarm join --token \"$worker_token\" \"$manager_ip:2377\"\r\n\tfi\r\ndone\r\n<\/pre>\n<p>The above code adds the first node as the manager and joins the remaining two nodes to the swarm as workers. Run the script with the command below; when the commands execute, you should see something like this in the console.<\/p>\n<pre lang='python'>\r\n$ sh swarmup.sh\r\n<\/pre>\n<pre lang='python'>\r\nSwarm initialized: current node (deyp099wnn94lgauxp9ljil83) is now a manager.\r\n\r\nTo add a worker to this swarm, run the following command:\r\n\r\n    docker swarm join --token SWMTKN-1-5rm2sib935txv5k13j6leaqsfuuttalktt7jv4s55249izjf54-8ia31tagc4sbehqeqiqst4jfz 172.30.0.170:2377\r\n\r\nTo add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.\r\n\r\nThis node joined a swarm as a worker.\r\nThis node joined a swarm as a worker.\r\n\r\n<\/pre>\n<hr\/>\n<h3>Creating the Selenium Grid<\/h3>\n<p>Next, create a Docker Compose file and place it in the same directory as your shell scripts. Our docker-compose.yml looks like this. 
Docker Compose lets you deploy the Selenium Grid as multiple containers. The .yml file describes the containers and how they interact with each other. We use the standard SeleniumHQ images, which have the Chrome and Firefox browsers already installed, for the hub and nodes. In this article, we use the following Docker images:<\/p>\n<p><strong>selenium\/hub<\/strong> : Image for running a Grid Hub. It exposes port 4444 on the AWS instance so we can connect to the Selenium Grid.<br \/>\n<strong>selenium\/node-chrome<\/strong> : Grid Node with Chrome installed.<br \/>\n<strong>selenium\/node-firefox<\/strong> : Grid Node with Firefox installed.<\/p>\n<p>Using this .yml file, we configure the services. In this case, we have a hub service and two node services (Chrome and Firefox). When you deploy this .yml file on the Docker Swarm, all the services run on the cluster nodes according to this configuration.<\/p>\n<p><em><strong>docker-compose.yml<\/strong><\/em><\/p>\n<pre lang='python'>\r\n\r\nversion: \"3.5\"\r\nnetworks:\r\n  main:\r\n    driver: overlay\r\nservices:\r\n  hub:\r\n    image: selenium\/hub\r\n    ports:\r\n      - \"4444:4444\"\r\n    networks:\r\n      - main\r\n    deploy:\r\n      mode: replicated\r\n      replicas: 1\r\n      labels:\r\n        selenium.grid.type: \"hub\"\r\n        selenium.grid.hub: \"true\"\r\n      restart_policy:\r\n        condition: none\r\n      placement:\r\n        constraints: [node.role == manager ]\r\n  chrome:\r\n    image: selenium\/node-chrome\r\n    entrypoint: >\r\n      bash -c '\r\n        export IP_ADDRESS=$$(ip addr show eth0 | grep \"inet\\b\" | awk '\"'\"'{print $$2}'\"'\"' | awk -F\/ '\"'\"'{print $$1}'\"'\"' | head -1) &&\r\n        SE_OPTS=\"-host $$IP_ADDRESS\" \/opt\/bin\/entry_point.sh'\r\n    volumes:\r\n      - \/dev\/urandom:\/dev\/random\r\n      - \/dev\/shm:\/dev\/shm\r\n    depends_on:\r\n      - hub\r\n    environment:\r\n      HUB_PORT_4444_TCP_ADDR: hub\r\n      HUB_PORT_4444_TCP_PORT: 4444\r\n      NODE_MAX_SESSION: 1\r\n    networks:\r\n      - main\r\n    deploy:\r\n      mode: replicated\r\n      replicas: 1\r\n      labels:\r\n        selenium.grid.type: \"node\"\r\n        selenium.grid.node: \"true\"\r\n      restart_policy:\r\n        condition: none\r\n      placement:\r\n        constraints: [node.role == worker]\r\n  firefox:\r\n    image: selenium\/node-firefox\r\n    entrypoint: >\r\n      bash -c '\r\n        export IP_ADDRESS=$$HOSTNAME &&\r\n        SE_OPTS=\"-host $$IP_ADDRESS\" \/opt\/bin\/entry_point.sh'\r\n    volumes:\r\n      - \/dev\/shm:\/dev\/shm\r\n      - \/dev\/urandom:\/dev\/random\r\n    depends_on:\r\n      - hub\r\n    environment:\r\n      HUB_PORT_4444_TCP_ADDR: hub\r\n      HUB_PORT_4444_TCP_PORT: 4444\r\n      NODE_MAX_SESSION: 1\r\n    networks:\r\n      - main\r\n    deploy:\r\n      mode: replicated\r\n      replicas: 1\r\n      labels:\r\n        selenium.grid.type: \"node\"\r\n        selenium.grid.node: \"true\"\r\n      restart_policy:\r\n        condition: none\r\n      placement:\r\n        constraints: [node.role == worker]\r\n<\/pre>\n<p>Before deploying the Selenium Grid on Docker Swarm, let&#8217;s look at a few small details that need attention.<\/p>\n<p><strong>Placement constraints<\/strong>: <code>[node.role == worker]<\/code> places the workload on the worker nodes instead of the manager nodes. If you want to run a service on the manager node, use <code>[node.role == manager]<\/code> as the value instead, as we do for the hub. Best practice is to keep manager nodes free from CPU- and memory-intensive tasks.<\/p>\n<p><strong>Entrypoint<\/strong>: entry_point.sh registers the node with the hub. The hub also needs the node&#8217;s address so it can poll the node&#8217;s status. We set &#8216;SE_OPTS&#8217; within the entry_point.sh invocation so that nodes running on different hosts can successfully link back to the hub.
<\/p>\n<p><strong>Port mapping<\/strong>: We expose the port as 4444:4444, which means you can connect to the grid on port 4444 of any node in the Swarm network. <\/p>\n<hr\/>\n<h3>Deploying the Selenium Grid to AWS via Docker Compose<\/h3>\n<p>The command below (run against the manager node) deploys the Docker stack:<\/p>\n<pre lang='python'>\r\n\r\n$ docker stack deploy --compose-file docker-compose.yml selenium_test\r\nDeploying Selenium Grid to http:\/\/100.24.107.127:4444...\r\nCreating network selenium_test_main\r\nCreating service selenium_test_hub\r\nCreating service selenium_test_chrome\r\nCreating service selenium_test_firefox\r\n<\/pre>\n<p>Here is the Grid console:<\/p>\n<figure id=\"attachment_10114\" aria-describedby=\"caption-attachment-10114\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/seleniumgrid.jpg\" data-rel=\"lightbox-image-0\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/seleniumgrid-1024x221.jpg\" alt=\"Selenium Grid console\" width=\"1024\" height=\"221\" class=\"size-large wp-image-10114\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/seleniumgrid-1024x221.jpg 1024w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/seleniumgrid-300x65.jpg 300w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/seleniumgrid-768x166.jpg 768w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/seleniumgrid.jpg 1236w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption id=\"caption-attachment-10114\" class=\"wp-caption-text\">Grid console<\/figcaption><\/figure>\n<p>Now SSH into the swarmmode-1 instance:<\/p>\n<pre lang='python'>\r\n$ docker-machine ssh swarmmode-1<\/pre>\n<p>And review the stack:<\/p>\n<pre lang='python'>\r\n$ docker stack ps selenium_test<\/pre>\n<figure id=\"attachment_10115\" aria-describedby=\"caption-attachment-10115\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/stack_ps.jpg\" data-rel=\"lightbox-image-1\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/stack_ps-1024x57.jpg\" alt=\"Output of docker stack ps\" width=\"1024\" height=\"57\" class=\"size-large wp-image-10115\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/stack_ps-1024x57.jpg 1024w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/stack_ps-300x17.jpg 300w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/stack_ps-768x42.jpg 768w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/stack_ps.jpg 1178w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption id=\"caption-attachment-10115\" class=\"wp-caption-text\">check status of the stack<\/figcaption><\/figure>\n<hr\/>\n<h3>Run tests on Selenium Grid<\/h3>\n<p>To test the grid setup, we wrote a couple of test scripts. Both load qxf2.com and print the title of the page. 
On your local machine, from the directory where you have these scripts, run the command below.<\/p>\n<pre lang='python'>\r\n$ pytest -k test\r\n<\/pre>\n<figure id=\"attachment_10152\" aria-describedby=\"caption-attachment-10152\" style=\"width: 637px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/11\/pytest-grid.jpg\" data-rel=\"lightbox-image-2\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/11\/pytest-grid.jpg\" alt=\"pytest results\" width=\"637\" height=\"142\" class=\"size-full wp-image-10152\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/11\/pytest-grid.jpg 637w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/11\/pytest-grid-300x67.jpg 300w\" sizes=\"auto, (max-width: 637px) 100vw, 637px\" \/><\/a><figcaption id=\"caption-attachment-10152\" class=\"wp-caption-text\">pytest results<\/figcaption><\/figure>\n<p><em>test_selenium_grid_chrome.py<\/em><\/p>\n<pre lang='python'>\r\n'''Test for launching qxf2.com on an AWS machine through the grid'''\r\n\r\nfrom selenium import webdriver\r\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\r\n\r\ndef test_chrome():\r\n    #Set desired capabilities\r\n    desired_capabilities = DesiredCapabilities.CHROME\r\n    desired_capabilities['platform'] = 'LINUX'\r\n\r\n    #Use a Remote connection to reach the AWS grid - uses the manager ip\r\n    url = \"http:\/\/100.24.107.127:4444\/wd\/hub\"\r\n    driver = webdriver.Remote(url, desired_capabilities)\r\n\r\n    #Check the driver session\r\n    print(driver)\r\n\r\n    #Launch qxf2.com and print the title\r\n    driver.get(\"http:\/\/www.qxf2.com\")\r\n\r\n    print(driver.title)\r\n    driver.quit()\r\n<\/pre>\n<p><em>test_selenium_grid_firefox.py<\/em><\/p>\n<pre lang='python'>\r\n'''Test for launching qxf2.com on an AWS machine through the grid'''\r\n\r\nfrom selenium import webdriver\r\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\r\n\r\ndef test_firefox():\r\n    #Set desired capabilities\r\n    desired_capabilities = DesiredCapabilities.FIREFOX\r\n    desired_capabilities['platform'] = 'LINUX'\r\n\r\n    #Use a Remote connection to reach the AWS grid - uses the manager ip\r\n    url = \"http:\/\/100.24.107.127:4444\/wd\/hub\"\r\n    driver = webdriver.Remote(url, desired_capabilities)\r\n\r\n    #Check the driver session\r\n    print(driver)\r\n\r\n    #Launch qxf2.com and print the title\r\n    driver.get(\"http:\/\/www.qxf2.com\")\r\n\r\n    print(driver.title)\r\n    driver.quit()\r\n<\/pre>\n<p>Review the service:<\/p>\n<figure id=\"attachment_10116\" aria-describedby=\"caption-attachment-10116\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/docker_service.jpg\" data-rel=\"lightbox-image-3\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/docker_service-1024x62.jpg\" alt=\"Output of docker service ls\" width=\"1024\" height=\"62\" class=\"size-large wp-image-10116\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/docker_service-1024x62.jpg 1024w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/docker_service-300x18.jpg 300w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/docker_service-768x46.jpg 768w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2018\/10\/docker_service.jpg 1026w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption id=\"caption-attachment-10116\" class=\"wp-caption-text\">docker service<\/figcaption><\/figure>\n<p>If you want to scale the grid nodes, use the <code>scale<\/code> command on the Swarm manager: <\/p>\n<pre lang='python'>\r\n$ docker service scale selenium_test_chrome=4\r\n<\/pre>\n<p>This command scales selenium_test_chrome to 4 replicas. 
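<\/p>\n<p>To confirm the new replicas are running, list the services on the manager. To actually use the extra browser slots, you can run the tests in parallel, for example with the <a href=\"https:\/\/pypi.org\/project\/pytest-xdist\/\">pytest-xdist<\/a> plugin (shown as a sketch; adjust <code>-n<\/code> to the number of parallel sessions you want):<\/p>\n<pre lang='python'>\r\n# on the swarm manager: check the replica count of each service\r\n$ docker service ls\r\n\r\n# on your local machine: run the tests across 4 parallel workers\r\n$ pip install pytest-xdist\r\n$ pytest -k test -n 4\r\n<\/pre>\n<p>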
You can even apply Auto Scaling to add new worker nodes based on the load.<\/p>\n<hr\/>\n<h3>Automate de-provisioning of resources on AWS<\/h3>\n<p>Since the resources used are all virtual, it often makes sense to launch them for an experiment or a short-term problem and shut them down when the work is done. We wrote a script to achieve this.<br \/>\n<em>swarmdown.sh<\/em><\/p>\n<pre lang='python'>\r\n#!\/bin\/bash\r\n\r\ndocker-machine rm swarmmode-1 swarmmode-2 swarmmode-3 -y\r\n\r\n<\/pre>\n<p>From the directory where you saved this script, run the command below.<\/p>\n<pre lang='python'>\r\nsh swarmdown.sh\r\n<\/pre>\n<hr\/>\n<p>To test the above scripts end to end, run the following from your local machine.<\/p>\n<pre lang='python'>\r\n$ sh swarmup.sh\r\n$ pytest -k test\r\n$ sh swarmdown.sh\r\n\r\n<\/pre>\n<p>The main objective of this article is to automatically provision the instances before a test run and de-provision them afterwards. You can configure the tests to run in parallel using <a href=\"https:\/\/docs.pytest.org\/en\/latest\/\">pytest<\/a> in <a href=\"https:\/\/jenkins.io\/\">Jenkins <\/a>(or some other CI tool) so that they are part of the continuous integration process. <\/p>\n<p>So now you can have your own testing lab on the cloud. 
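<\/p>\n<p>If you wire these steps into CI, it is worth guarding against a failed test run leaving billable EC2 instances alive. A minimal sketch (a hypothetical wrapper around the scripts above) uses a shell trap so that <em>swarmdown.sh<\/em> always runs, even when the tests fail:<\/p>\n<pre lang='python'>\r\n#!\/bin\/bash\r\n# tear the cluster down no matter how the run ends\r\ntrap 'sh swarmdown.sh' EXIT\r\n\r\nsh swarmup.sh\r\npytest -k test\r\n<\/pre>\n<p>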
Happy testing!<\/p>\n<hr\/>\n<h3>References:<\/h3>\n<p>1) <a href=\"https:\/\/testdriven.io\/distributed-testing-with-selenium-grid\">Distributed testing with Selenium Grid<\/a><br \/>\n2) <a href=\"http:\/\/www.mchubarov.com\/2017\/02\/running-selenium-grid-with-docker-in.html\">Running Selenium Grid with Docker<\/a><br \/>\n3) <a href=\"https:\/\/forums.docker.com\/t\/when-to-use-docker-compose-and-when-to-use-docker-swarm\/29107\/7\">When to use Docker Compose and when to use Docker Swarm<\/a><br \/>\n4) <a href=\"https:\/\/github.com\/bennettp123\/docker-swarm-scripts\">GitHub &#8211; Docker swarm scripts<\/a><\/p>\n<hr\/>\n","protected":false},"excerpt":{"rendered":"<p>Maintaining infrastructure for automated Selenium cross-browser tests is time-consuming. Cloud testing platforms like BrowserStack and Sauce Labs can help. But in some cases, you want your own cloud testing environment. This is usually time-consuming and involves setting up and using Selenium Grid. This post helps testers automate the setup process for cloud-based testing. 
We hope this post [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[38,177,52,175,30,176],"tags":[],"class_list":["post-10018","post","type-post","status-publish","format-standard","hentry","category-automation","category-aws","category-continuous-integration","category-docker-swarm","category-selenium","category-selenium-grid"],"_links":{"self":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/10018","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/comments?post=10018"}],"version-history":[{"count":50,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/10018\/revisions"}],"predecessor-version":[{"id":12301,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/10018\/revisions\/12301"}],"wp:attachment":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/media?parent=10018"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/categories?post=10018"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/tags?post=10018"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}