Today I upgraded one of my old Macs with an SSD. The SSD is freaking fast. But since the SSD was less than half the size of the existing HDD, I thought it would be a good idea to move the Docker host machine to the HDD instead of the SSD.

Doing this was pretty simple.

First, make sure the virtual machine is switched off. Then click on Settings and then on the Storage button, which is the third from the left.


Select the disk.vmdk and click on the floppy icon with the minus button. For those who don't know what a floppy is, click here. Then click OK to save it.

Next, open the following path in Finder: ~/.docker/machine/machines/default

Next, copy the disk.vmdk file to the HDD.

Once it is copied, you can delete the original from this folder.

Now we need to connect the Virtual drive back to the virtual machine.

Then click on File and select Virtual Media Manager.


Then click on the Remove button for the virtual drive. If you don't follow this step, VirtualBox will not allow you to connect the drive to the virtual machine.


Now you are ready to connect the virtual drive to the virtual machine. Click on the Settings button for the virtual machine, select Storage, click on the plus HDD button, and select the virtual drive.


Now Kitematic will use the same virtual machine and will not notice a difference.



S3 is a good platform for saving files without having to worry about storage, connections, and bandwidth.

S3 works on the idea of buckets of content. The content of a bucket can be private, public, or managed with access controls. If you have a small amount of traffic on your website and the access patterns are sparse, the cost of S3 is low; moreover, Amazon gives about 5 GB free every month. But once you reach a scale where you transfer hundreds of gigabytes every month, Amazon's cost per gigabyte turns out to be very high.

Since I was part of a startup with very little cash to spare, I came up with an idea to save bandwidth without having to change the backend or migrate to another provider. The gist of the idea was to set up a reverse proxy server in front of the Amazon servers, so every request hits our proxy server first. Our proxy server in turn makes calls to the Amazon server to get the relevant files. Since a single server cannot handle a huge load of requests, we also set up a load balancer which distributes requests based on the source IP address.

First we start by creating our Amazon cache server. Here is the Dockerfile for it.
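A minimal sketch of such a Dockerfile, assuming the nginx configuration sits next to it as nginx.conf (base image and paths are assumptions, adjust to your setup):

```dockerfile
# Sketch only: start from the official nginx image
FROM nginx
# Replace the default configuration with our S3 proxy/cache config
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
```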
The Dockerfile creates an nginx server which proxies traffic from S3. To build the Docker image, run `docker build -t s3cache .`
This config file sets up the reverse proxy and cache configuration for the nginx server. Make sure you replace bucket with your bucket name. This configuration is only applicable to public buckets without authentication.
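A minimal nginx.conf along these lines (a sketch; the bucket name, cache path, and cache sizes are placeholders):

```nginx
events {}

http {
    # Local cache for objects fetched from S3
    proxy_cache_path /tmp/s3cache levels=1:2 keys_zone=s3cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache s3cache;
            proxy_cache_valid 200 24h;
            # Replace "bucket" with your bucket name
            proxy_pass http://bucket.s3.amazonaws.com;
        }
    }
}
```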

Lastly, we set up the HAProxy server.

docker pull dockerfile/haproxy
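A sketch of the haproxy.cfg for this setup (server names and the backend IP are placeholders; the stats credentials are admin/password):

```haproxy
global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http-in
    bind *:80
    default_backend s3cache

backend s3cache
    # Pin each client IP to the same backend server
    balance source
    # "server ip" placeholder: replace with your nginx container's IP
    server cache1 172.17.0.2:80 check

listen stats
    bind *:8080
    stats enable
    stats uri /haproxy?stats
    stats auth admin:password
```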


You will also have to change the `server ip` to the nginx container's IP address. To get the IP address of a container, run the following command:

docker inspect -f "{{.NetworkSettings.IPAddress}}" <container id>

 

Once you get the IP address, replace `server ip` with it and run the following. This will run the Docker image.

docker run -d -p 80:80 -p 8080:8080 -v <dir>:/haproxy-override dockerfile/haproxy

 

Replace <dir> with the directory that contains the haproxy.cfg file.
To check the status of your HAProxy server, visit this link:
http://dockerip:8080/haproxy?stats
The username is admin and the password is password.
Try making some requests; you will notice that they always go through the same server, because routing is based on your IP address.
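That stickiness comes from HAProxy's `balance source` algorithm, which hashes the client address onto the list of backends. Conceptually it works like this simplified Python sketch (not HAProxy's actual implementation; the IPs are made up):

```python
import hashlib

def pick_server(client_ip, servers):
    # Hash the client address deterministically and map it onto the
    # server list, so a given client always lands on the same backend.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# The same client IP always maps to the same server
assert pick_server("203.0.113.7", servers) == pick_server("203.0.113.7", servers)
```

Note that this only balances well when clients come from many different addresses; everyone behind one NAT gateway hits the same backend.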



Getting rid of Docker containers can be a pain. I have been using Docker to build images for development purposes for about 6 months.
I took a look at my free space and I have only a couple of gigs free (5 gigs to be exact). It was time to clean this up.

docker rm $(docker ps -a -q)
#This removes all docker containers which have exited

docker rmi $(docker images -q)
#This deletes all the docker images.

 



Docker is a lightweight virtualization tool which uses LXC (Linux Containers) to run its containers.
Docker uses a Dockerfile to build these container images. Setting up an Apache server with PHP on Docker is very simple.
We first create a Dockerfile with the following command

touch Dockerfile 

This creates an empty file. Let's first get the base Ubuntu image.

docker pull ubuntu

This will pull the Ubuntu image from Docker's servers. Now we need to add some commands to the Dockerfile for Docker to run.

FROM ubuntu
MAINTAINER Suraj Shirvankar

This instructs Docker to use the Ubuntu base image and also sets Suraj as the maintainer of the Dockerfile.
We then run apt-get update, which fetches the package lists using apt. We also run apt-get install apache2 to install the Apache2 server.

FROM ubuntu
MAINTAINER Suraj Shirvankar
RUN apt-get update
RUN apt-get install apache2 -y

Since we will be using PHP with Apache, we install PHP. We will also need the libapache2 module for PHP to work with Apache2.

FROM ubuntu
MAINTAINER Suraj Shirvankar
RUN apt-get update
RUN apt-get install apache2 -y
RUN apt-get install php5 libapache2-mod-php5 -y

Now that PHP and Apache are available, we can start the Apache2 service.

FROM ubuntu
MAINTAINER Suraj Shirvankar
RUN apt-get update
RUN apt-get install apache2 -y
RUN apt-get install php5 libapache2-mod-php5 -y
# A service started in a RUN step does not survive into the running
# container, so run Apache in the foreground as the container's command.
CMD apachectl -D FOREGROUND

Finally, you run the docker build command.

docker build -t apache2 .
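To serve your own code instead of Apache's default page, you could add a line like this to the Dockerfile (the file name and document root are illustrative):

```dockerfile
# Copy a local index.php into Apache's document root
COPY index.php /var/www/html/
```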