Monday, 6 March 2017

Ubuntu and Kubernetes Part 1 -- Components

This guide will walk you through the installation of a Kubernetes cluster on Ubuntu 14.04. We will have 1 master and 3 worker nodes.
There are many projects that let you set up a K8s cluster in an automated fashion with a single command. Some of these, like kubeadm and kube-anywhere, are good enough to set up a cluster quickly. So the question arises: why use a manual guide to set up the cluster? The answer is that if you want to understand all the moving pieces of your cluster, a manual step-by-step guide is the better approach. Let's get started.

Kubernetes Components

Master node components

etcd - A highly available key-value store for shared configuration and service discovery.
kube-apiserver - Provides the API for Kubernetes orchestration.
kube-controller-manager - Runs the control loops (replication controller, endpoints, and so on) that drive the cluster toward its desired state.
kube-scheduler - Schedules containers on hosts.

Worker node components

flannel - An etcd backed network fabric for containers.
kube-proxy - Provides network proxy services.
kubelet - Watches the pod manifests assigned to its node and ensures the described containers are launched and kept running.
docker - An open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.









Monday, 17 October 2016

Cassandra Asynchronous Queries

The Cassandra Python driver supports asynchronous queries via the execute_async() method. This method immediately returns a ResponseFuture object. There are two ways to get the final result from this object:

  1. result(): The first is to call the result() function. This blocks until the query completes, or raises an exception if the query fails.
    query = "SELECT * FROM users"
    try:
        future = session.execute_async(query)
        rows = future.result()
    except Exception as exc:
        print("Exception hit: %s" % exc)
  2. callbacks: Alternatively, you can attach callback and errback functions through the add_callback(), add_errback(), and add_callbacks() methods.

    def handle_success(rows):
        user = rows[0]
        print ("name:%s age:%s" % (user.name, user.age))
    
    def handle_error(exception):
        log.error("Failed to fetch user info: %s", exception)
    
    
    future = session.execute_async(query)
    future.add_callbacks(handle_success, handle_error)
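The blocking/callback duality above can be experimented with even without a live cluster: Python's standard concurrent.futures exposes the same two styles. The sketch below is only an analogy (fake_query and its canned rows are ours, not the driver's); the real ResponseFuture additionally offers add_callbacks() with separate success and error handlers.

```python
from concurrent.futures import ThreadPoolExecutor

def fake_query(query):
    # Stand-in for a Cassandra read; returns canned rows.
    return [{"name": "tahir", "age": 34}]

executor = ThreadPoolExecutor(max_workers=2)

# Style 1: block on result(), as with ResponseFuture.result().
future = executor.submit(fake_query, "SELECT * FROM users")
rows = future.result()

# Style 2: attach a callback, as with add_callbacks().
def handle_done(fut):
    try:
        print("callback got %d row(s)" % len(fut.result()))
    except Exception as exc:
        print("Failed to fetch user info: %s" % exc)

future2 = executor.submit(fake_query, "SELECT * FROM users")
future2.add_done_callback(handle_done)
executor.shutdown(wait=True)
```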
Complete example:
from cassandra.cluster import Cluster
from cassandra import ReadTimeout
import logging

log = logging.getLogger(__name__)

cluster = Cluster(['10.10.0.4', '10.10.0.32'])
session = cluster.connect('poc')
query = ("insert into users (email, name, age, city) "
         "values ('tahir@gmail.com', 'tahir', 34, 'SantaClara')")
future1 = session.execute_async(query)
query = "select * from users"
future2 = session.execute_async(query)

try:
    rows = future1.result()
except Exception:
    print("Hit an exception")

try:
    rows = future2.result()
    for row in rows:
        print(row.name, row.email)
except ReadTimeout:
    log.exception("Query timed out:")

References: 
https://datastax.github.io/python-driver/getting_started.html

Sunday, 16 October 2016

Getting started with Cassandra Python API driver

Pre-Req

Please follow this to set up a multi-node cluster. (You can also try this post with a single-node cluster; for that, just follow up to step 4 of the link.)

Instantiating a cluster: 

from cassandra.cluster import Cluster

cluster = Cluster(['10.10.0.4', '10.10.0.32'])
10.10.0.4 and 10.10.0.32 are the IP addresses of 2 nodes of our cluster. These are the initial contact points. 
You don't need to give an exhaustive list here; just a few of them, or even one, is sufficient. Once the driver finds one of them, it will automatically discover the rest.

Connect to cluster:

Instantiating a Cluster does not connect the driver to the cluster. To connect, do something like
session = cluster.connect()

Set Keyspace:

session.set_keyspace('mykeyspace')

Executing Queries:

Now we are all set to execute queries. The simplest way is to use the execute() method
rows = session.execute('SELECT name, age, email FROM users')

Complete example:

from cassandra.cluster import Cluster

cluster = Cluster(['10.10.0.4', '10.10.0.32'])
session = cluster.connect('mykeyspace')
session.execute("""
        CREATE TABLE users (
            email text, 
            name text,
            age int,
            city text,
            PRIMARY KEY ((email), age)
        )
        """)
session.execute("""
insert into users (email, name, age, city) values ('inaya@gmail.com', 'INAYA', 3, 'SantaClara')
""")
result = session.execute("select * from users")[0]
print(result.name, result.age)

Setting up a Multi-Node Cassandra Cluster on Ubuntu 16.04 machines

What is Cassandra?

Cassandra is a distributed database for managing large amounts of structured data. It offers horizontal scalability and high availability (no single point of failure, thanks to its decentralized nature). Following are some key points about it:
  • Scalable: Cassandra supports horizontal scalability. Read and write throughput both increase linearly as new machines are added, with no downtime or interruption to applications.
  • Highly available: 
    • Decentralized: There is no single point of failure. Every node in the cluster is identical (no master/slave notion).
    • Fault Tolerant: Data is automatically replicated to multiple nodes for fault tolerance. Failed nodes can be replaced without any downtime. Replication across multiple data centers is supported.
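Decentralization rests on the token-ring idea: a row's partition key hashes to a token, and nodes own ranges of the ring, so any node can locate a row's replicas without a master. Here is a toy sketch of that idea (using MD5 and one token per node purely for illustration; Cassandra actually uses the Murmur3 partitioner and many vnodes per host):

```python
import hashlib
from bisect import bisect_right

def token(key):
    # Toy partitioner: hash a key onto a 0..2**32 ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 2**32

class ToyRing:
    def __init__(self, nodes):
        # One token per node; sorted pairs form the ring.
        self.ring = sorted((token(n), n) for n in nodes)

    def replicas(self, partition_key, rf=2):
        # Walk clockwise from the key's token, taking rf distinct nodes.
        tokens = [t for t, _ in self.ring]
        start = bisect_right(tokens, token(partition_key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1] for i in range(rf)]

ring = ToyRing(["10.10.0.4", "10.10.0.32", "10.10.0.102"])
print(ring.replicas("tahir@gmail.com", rf=2))
```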

Setup a multi-node cluster on Ubuntu 16.04:

Prereqs

  1. Three machines with Ubuntu 16.04.
  2. The machines should be able to communicate with each other.
NOTE: Repeat the steps below on each machine.

1. Installing the Oracle JVM:

  • sudo add-apt-repository ppa:webupd8team/java
  • sudo apt-get update
  • sudo apt-get install oracle-java8-set-default
  • java -version

2. Installing Cassandra:

  • echo "deb http://debian.datastax.com/community stable main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
  • curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install dsc30
  • sudo apt-get install cassandra-tools

3. Connecting to the cluster:

  • sudo nodetool status
  • cqlsh
You should be able to see the cqlsh prompt.

4. Create a ring -- Deleting default data

  • sudo service cassandra stop
  • sudo rm -rf /var/lib/cassandra/data/system/*

5. Create a ring -- Configuring the cluster

    • Modify /etc/cassandra/cassandra.yaml
      cluster_name: 'cassan'
      seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds:  "<server1 ip>,<server2 ip>"
      listen_address: <local server ip>
      rpc_address: <local server ip>
      auto_bootstrap: false
      data_file_directories:
        - /var/lib/cassandra/data
      commitlog_directory: /var/lib/cassandra/commitlog
      saved_caches_directory: /var/lib/cassandra/saved_caches
      commitlog_sync: periodic
      commitlog_sync_period_in_ms: 10000
      partitioner: org.apache.cassandra.dht.Murmur3Partitioner
      endpoint_snitch: SimpleSnitch
      start_native_transport: true
      native_transport_port: 9042

6. Create a ring -- Configuring the firewall

To allow communication, we'll need to open network ports 7000 (inter-node communication) and 9042 (CQL clients) on each node.
  • sudo apt-get install -y iptables-persistent
  • Add the following to /etc/iptables/rules.v4
    -A INPUT -p tcp -s <your_other_server_ip> -m multiport --dports 7000,9042 -m state --state NEW,ESTABLISHED -j ACCEPT
  • sudo service iptables-persistent restart
  • sudo service cassandra start

Check the cluster status:

  • sudo nodetool status
You should see something like the following:
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  10.10.0.32   123.93 KB  1            74.7%             6fa993f1-07f7-4368-8ee5-c52cedae3843  rack1
    UN  10.10.0.102  152.18 KB  1            12.4%             64c4c449-3949-4c83-a0a7-86b084a58d5c  rack1
    UN  10.10.0.4    229.86 KB  1            12.9%             83cd40ec-3e64-43ea-87a9-65bc8a90bd1d  rack1
  • You should also be able to get a cqlsh prompt:
    cqlsh <serverip> 9042

Congratulations! You now have a multi-node Cassandra cluster running.



Saturday, 5 March 2016

Dockers: part 4 - Managing multiple services

Introduction:
1. Your application might have a lot of services. It's best to run each individual service inside its own docker container.
2. However, dealing with multiple docker containers manually is hard. Building them, bringing them up and down, linking them with each other, and killing them are tedious tasks to do one by one.
3. docker-compose is a utility that makes the orchestration of docker containers (start, shutdown, linking, volumes) a very easy task.
4. Using compose is a three-step process:
  • Define your app's environment with a Dockerfile.
  • Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  • Use docker-compose commands to manage the whole lifecycle of your application:
        Start/stop/rebuild services
        View the status of running services
        Stream the log output of running services
        Run a one-off command on a service
5. Read the "docker-compose File" section to see what the .yml file looks like.

Installation (Ubuntu):
1. Install Docker. See instructions here.
2. Run the following
  curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
3. Make the binary executable
  chmod +x /usr/local/bin/docker-compose
4. Test the installation
  docker-compose --version

docker-compose File:

build:
Configuration options that are applied at build time.
context:    Defines the build context.
dockerfile: The file from which the image is to be built.
image:
The image to start the container from.
- If the image does not exist, Compose attempts to pull it.
- If you have also specified "build", the image will be built from the Dockerfile and tagged with the "image" name.
- In the absence of "image", the default image name is <project_name>_<service_name>.
container_name: Specify a custom container name, rather than a generated default name.
entrypoint: Override the default entrypoint.
command:    Override the default command.
ports:
  - Expose ports in the form HOST:CONTAINER.
  - NOTE: The HOST part is optional. In its absence, a random host port will be chosen.
volumes:
  - Mount paths or named volumes in the form HOST:CONTAINER.

Example docker-compose file:
version: '2'
services:
  server:
    build:
      context: ./riemann_server
      dockerfile: Dockerfile
    entrypoint: ["/bin/nc"]
    command: ["-luv", "15555"]
    image: server_img
    container_name: server
    ports:
      - "15555:15555/udp"
    volumes:
      - /opt/pg/log:/opt/pg/log
  client:
    build:
      context: ./log_aggregator
      dockerfile: Dockerfile
    image: client_img
    container_name: client
    links:
      - server
    entrypoint: ["/bin/nc"]
    command: ["-vu", "server", "15555"]
    depends_on:
      - server
The Dockerfiles for both client and server contain only a single line, i.e. "FROM ubuntu:14.04.3".
NOTE: For further details, see this.

docker-compose CLI:
You can see help using
docker-compose --help
build:
• Services are built and tagged as "project_service".
• If the Dockerfile or build directory changes, run the command again to rebuild.

config:  Validate and view the compose file.
create:  Create containers for a service.
start:   Start existing containers for a service.
up:      Builds, (re)creates, starts, and attaches to containers for a service.
ps:      List containers.
stop:    Stops running containers without removing them.
rm:      Removes stopped service containers.
kill:    Forces running containers to stop by sending a SIGKILL signal.
down:    Stops containers and removes containers and networks.

Thursday, 3 March 2016

Docker Cheatsheet

BUILD:
sudo docker build --tag <image-name> --build-arg argument=value .

REMOVE:

image:           docker rmi <image-name>
container:       docker rm <container-name>
dangling-images: docker rmi $(sudo docker images -qf "dangling=true")
all-containers:  docker rm $(sudo docker ps -aq)

LIST:
images:     docker images
containers: docker ps -a

RUN:
docker run --name <container-name-once-it-runs> -ti <image-name> /bin/bash

EXEC:
sudo docker exec -ti <container-name> /bin/bash
e.g. sudo docker exec <container-name> /bin/sh -c "echo 172.17.0.2 dockerelk_logstash_1 >> /etc/hosts"

Get IP address:
sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-name>

Dockers: Part 3 - Dockerfile instructions

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It allows you to automate the steps you would normally take manually to create an image.
Below is the format of Dockerfile instructions 
INSTRUCTION arguments

Dockerfile Instructions:
Docker runs the instructions in a Dockerfile in order.

FROM:
The first instruction must be FROM. It specifies the base image on top of which you will build your image.
FROM <image>
FROM <image>:<tag>
FROM ubuntu:14.04.3

ENV:
1. Sets an environment variable. 
  ENV var val
  ENV var1="val1" var2="val2"
2. It also makes interpolation available in the Dockerfile from the next statement onward via ${var}.
3. These variables are persisted into any containers created from the image in which they are defined.
RUN:
1. Runs a command and commits the image.
  RUN <command>                           # shell form
  RUN ["executable", "param1", "param2"]  # exec form
2. Shell form: args are automatically prepended with "/bin/sh -c".
  The exec form makes it possible to avoid that shell prepending, and to RUN commands using a base image that does not contain /bin/sh.
  RUN echo hello                 # /bin/sh -c echo hello
  RUN [ "echo", "hello" ]        # echo hello
3. To use a shell other than /bin/sh while keeping the shell form, you can put the following at the top of the Dockerfile:
  RUN rm /bin/sh && ln -s /bin/bash /bin/sh
4. The RUN instruction executes its command in a new layer on top of the current image and commits the result. The resulting committed image is used for the next step in the Dockerfile.
  Containers can be created from any point in an image's history.
5. Use a backslash "\" to continue a RUN command over multiple lines.
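The shell-form/exec-form distinction can be reproduced outside Docker. Roughly speaking, the shell form pipes your command through /bin/sh -c (so shell features like variable expansion apply), while the exec form execs the argument vector directly. A small Python analogy using subprocess (GREETING is our own example variable):

```python
import os
import subprocess

env = dict(os.environ, GREETING="hello")

# Shell form analogy: the command string is wrapped in /bin/sh -c,
# so $GREETING is expanded by the shell.
shell_form = subprocess.run(["/bin/sh", "-c", "echo $GREETING"],
                            env=env, capture_output=True, text=True)

# Exec form analogy: the JSON array becomes argv directly; no shell,
# so $GREETING is passed through literally.
exec_form = subprocess.run(["echo", "$GREETING"],
                           env=env, capture_output=True, text=True)

print(shell_form.stdout.strip())  # hello
print(exec_form.stdout.strip())   # $GREETING
```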

CMD:
1. CMD specifies the command to run when the container is launched.
  In its other form, it specifies the default arguments to ENTRYPOINT.
  CMD ["executable", "param1", "param2"]
  CMD ["param1", "param2"]        # as default parameters to ENTRYPOINT; ENTRYPOINT is required in this case.
2. CMD defaults are overridden by docker run arguments.
  sudo docker run -it <image-name> <arguments which will override>
NOTE: There can be only one CMD instruction in a Dockerfile.

ENTRYPOINT:
1. Allows you to specify the default executable.
2. CMD contents are passed as arguments to ENTRYPOINT.
3. Any "docker run" arguments are passed as parameters to ENTRYPOINT.
4. To override a Dockerfile's ENTRYPOINT, supply --entrypoint to docker run
  sudo docker run -it --entrypoint="/bin/bash" ent_cmd
5. Example:
  FROM ubuntu:14.04.3
  ENTRYPOINT ["/bin/ping"]
  CMD ["localhost", "-c", "2"]

  No argument (sudo docker run -it ent_cmd)          : ping localhost -c 2
  Argument (sudo docker run -it ent_cmd google.com)  : ping google.com

  NOTE: You can use the exec form of ENTRYPOINT to set a fairly stable default command and then use CMD to set additional defaults that are more likely to be changed.
LABEL:
1. LABEL adds metadata to the image in key-value pair format.
2. LABEL multi.label1="value1" \
     multi.label2="value2" \
     other="value3"
   LABEL version="0.5.1" \
     Description="This image holds Riemann Server."
3. New label values override previous labels, so a base image's value for a key is overridden by the new value.
4. docker inspect <image-name> will show the image labels.

EXPOSE:
1. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. 
2. EXPOSE does NOT make the ports of the container accessible to the host. Exposed ports are only available to processes inside the container.
3. Given this limitation, a Dockerfile author will often include an EXPOSE rule only as a hint to which ports will provide services.
4. You can see the ExposedPorts of an image under its Config section using
  sudo docker inspect <image-name>
5. To make the ports of the container accessible to the host, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports.
  EXAMPLE:
  FROM ubuntu:14.04.3
  ENTRYPOINT [ "/bin/nc" ]
  CMD [ "-luv", "6871" ]        # (nc -luv 6871) sets up a UDP server listening on port 6871.
  EXPOSE 6871/udp

  sudo docker build -t expose .
  sudo docker run -ti expose
  From the host, try to send a UDP packet to port 6871:
  nc -vu 127.0.0.1 6871
  Nothing will be received inside the container, because the port is exposed (merely a hint) but not published (only publishing makes it accessible from the host).
6. sudo docker run -ti -p 127.0.0.1:6871:6871/udp expose
  nc -vu 127.0.0.1 6871
  Now anything typed at the prompt will be received inside the container.
7. Moreover, you can see the port mapping as well:
  sudo docker ps -l
  CONTAINER ID    IMAGE    COMMAND              PORTS                     NAMES
  facbb246bfe6    expose   "/bin/nc -luv 6871"  127.0.0.1:6871->6871/udp  evil_darwin
  NOTE: The format for publishing a container's port or a range of ports to the host is: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
  containerPort is required. If you do not provide hostPort, Docker will assign an available random host port.
  The port number inside the container (where the service listens) does not need to match the port number published on the host (where clients connect).
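The four publish formats can be disambiguated mechanically. The helper below is our own illustration (not part of Docker or its CLI) of how a -p spec normalizes to an (ip, hostPort, containerPort, protocol) tuple:

```python
def parse_port_spec(spec):
    # Accepts: ip:hostPort:containerPort | ip::containerPort |
    #          hostPort:containerPort | containerPort, with optional /udp or /tcp.
    spec, _, proto = spec.partition("/")
    parts = spec.split(":")
    if len(parts) == 1:    # containerPort
        ip, host_port, container_port = None, None, parts[0]
    elif len(parts) == 2:  # hostPort:containerPort
        ip, host_port, container_port = None, parts[0], parts[1]
    elif len(parts) == 3:  # ip:hostPort:containerPort (hostPort may be empty)
        ip, host_port, container_port = parts[0], parts[1] or None, parts[2]
    else:
        raise ValueError("bad port spec: %r" % spec)
    return (ip, host_port, container_port, proto or "tcp")

print(parse_port_spec("127.0.0.1:6871:6871/udp"))  # ('127.0.0.1', '6871', '6871', 'udp')
print(parse_port_spec("6871"))                     # (None, None, '6871', 'tcp')
```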
WORKDIR:
1. The WORKDIR instruction sets the working directory for ADD, COPY, RUN, CMD, and ENTRYPOINT.
2. EXAMPLE: 
  FROM ubuntu:14.04.3
  WORKDIR /var/log
  ENTRYPOINT ["/bin/ls"]

  Upon running the above container, you will see the contents of /var/log.
3. It can resolve environment variables set by ENV.
  ENV DIR /var
  WORKDIR $DIR/log
  ENTRYPOINT ["/bin/ls"]
4. It cannot expand Linux variables like $USER or $HOME that are not set via ENV in the Dockerfile.

ADD:
1. The ADD instruction adds <src> from the build context to <dst>.
  ADD <src> <dst>
2. <src> can be a file, directory, URL, or tar file. For a tar source, Docker will untar it and then copy.
3. If <dst> does not exist, ADD will create the full path for us.
4. New files are created with 755 permissions. For a URL source, the permissions are 600.
5. <src> may contain wildcard (glob) patterns.
  ADD hom* /mydir/
6. You cannot add files from outside of the build context.

COPY:
COPY is the same as ADD, but without the tar and remote URL handling.
Docker's best-practices document suggests using COPY because it is more transparent than ADD.

USER:
1. The USER instruction sets the user for any following RUN, CMD, and ENTRYPOINT instructions, and the user to use when running the image.
2. It can be overridden by using the -u flag with the "run" command.
ARG:
1. The ARG instruction defines a variable that users can pass at build time to the builder with the docker build command, using the --build-arg <varname>=<value> flag.
  ARG logstash_version 
2. You can define a default value for the argument as well.
  ARG logstash_version=1.2.3
3. The user then builds the image using
  docker build --build-arg logstash_version=2.2.2 .
ONBUILD:
1. The ONBUILD instruction adds a trigger instruction to the image.
2. The instruction is triggered later, when the image is used as the base for another build. 
3. The trigger executes in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.