Bitbucket is a good git hosting service, especially since it lets you create unlimited private repos for free. However, Bitbucket doesn't let you use the same SSH key with two different users. Now you might be thinking: why the hell does he need two different SSH keys? My answer to that is "I wanted to keep my personal and work accounts separate".

I came across this post on Stack Overflow, and it pretty much solved my issue. All I had to do was change my ~/.ssh/config file to this:

Host bitbucket1
  HostName bitbucket.org
  User git
  IdentityFile ~/.ssh/id_repo1

Host bitbucket2
  HostName bitbucket.org
  User git
  IdentityFile ~/.ssh/id_repo2
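
If you don't already have two key pairs, generating the second one takes a single command (the file name here just matches the config above):

ssh-keygen -t rsa -f ~/.ssh/id_repo2

Remember to add the new public key (~/.ssh/id_repo2.pub) to the second Bitbucket account.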

You will also have to update the git remote URLs for the repos so they use the new host aliases.

git remote set-url origin bitbucket1:user/repo1
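
To confirm that each alias picks up the right key, you can test the connection before pushing anything (the alias names are the ones defined in the config above):

ssh -T bitbucket1
ssh -T bitbucket2

Cloning works the same way, e.g. git clone bitbucket2:user/repo2.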



Last month I wrote a post on saving bandwidth costs on Amazon by using nginx as a reverse proxy for Amazon S3. After doing some further research I came across CloudFront, the CDN offering from the web services giant Amazon.

CloudFront integrates really well with Amazon S3. Apart from the initial setup, which migrated my S3 data to the CDN (and took about 30 minutes), the rest of the process was straightforward. The CDN took care of loading data from S3, and the performance benefits were huge.
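
The setup boils down to creating a distribution that points at your bucket; with the AWS CLI that's roughly the following (the bucket name is a placeholder, and the web console works just as well):

aws cloudfront create-distribution --origin-domain-name my-bucket.s3.amazonaws.com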

Previously, a 200 KB file would take about 8-10 seconds to download directly from S3.
Using CloudFront dropped that down to 2-3 seconds, which is close to a 4x improvement.

I would highly recommend using CloudFront. To sweeten the deal, Amazon also provides a nice dashboard showing the different requests, hit/miss rates, and errors.



S3 is a good platform for storing files without having to worry about storage, connections, and bandwidth.

S3 works on the idea of having buckets of content. The contents of a bucket can be private, public, or managed with access controls. Even if you have a small amount of traffic on your website and the access patterns are sparse, the cost of S3 is low; moreover, Amazon gives about 5 GB free every month. But once you reach a scale where you transfer hundreds of gigabytes every month, the per-gigabyte cost turns out to be very high.
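
As an aside, making an object publicly readable is a one-liner with the AWS CLI (the bucket and file names below are placeholders):

aws s3 cp report.pdf s3://my-bucket/report.pdf --acl public-read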

Since I was part of a startup with very little cash to spare, I came up with an idea to save bandwidth without having to change the backend or migrate to another provider. The gist of the idea was to set up a reverse proxy server in front of the Amazon servers, so every request hits our proxy server first. Our proxy server in turn makes calls to the Amazon servers to fetch the relevant files. Since a single server cannot handle a huge load of requests, we also set up a load balancer which distributes requests based on the source IP address.

First we start by creating our Amazon cache server, which is just an nginx instance running in a Docker container, built from a small Dockerfile.
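
A minimal Dockerfile for it, assuming the stock nginx base image and a config file named s3cache.conf sitting next to it, might look like this:

FROM nginx
# Replace the default site with our S3 reverse-proxy config
COPY s3cache.conf /etc/nginx/conf.d/default.conf
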
The Dockerfile builds an nginx image that proxies traffic from S3. To build the image, run `docker build -t s3cache .`
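
The s3cache.conf is where the actual proxying and caching happens. A sketch of it, assuming a public bucket (literally named bucket here) and an on-disk cache with arbitrary size limits, could be:

# Cache objects fetched from S3 on local disk
proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3cache:10m max_size=1g inactive=7d;

server {
    listen 80;

    location / {
        # Fetch from the bucket and cache successful responses for a day
        proxy_pass https://bucket.s3.amazonaws.com;
        proxy_set_header Host bucket.s3.amazonaws.com;
        proxy_cache s3cache;
        proxy_cache_valid 200 24h;
    }
}
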
This config file sets up the reverse proxy and cache configuration for the nginx server. Make sure you replace bucket with your bucket name. This configuration only applies to public buckets without authentication.

Lastly, we set up the HAProxy server.

docker pull dockerfile/haproxy
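
The dockerfile/haproxy image reads its configuration from a haproxy.cfg in the mounted override directory. A minimal config for this setup, assuming a single nginx backend and the stats credentials mentioned below, might look like this (the `server ip` placeholder is the part you change):

global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http-in
    bind *:80
    default_backend s3cache

backend s3cache
    # Stick a client to the same backend based on its source address
    balance source
    server nginx1 <server ip>:80 check

listen stats
    bind *:8080
    stats enable
    stats uri /haproxy?stats
    stats auth admin:password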


You will also have to change the `server ip` in the config to the nginx container's IP address. To get the IP address of a container, run the following command:

docker inspect -f "{{.NetworkSettings.IPAddress}}" <container id>
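
For example, assuming the s3cache image built earlier is started as a named container (the name is arbitrary), the two steps look like this:

docker run -d --name s3cache s3cache
docker inspect -f "{{.NetworkSettings.IPAddress}}" s3cache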


Once you have the IP address, replace the `server ip` with it and run the following command. This will start the HAProxy container.

docker run -d -p 80:80 -p 8080:8080 -v <dir>:/haproxy-override dockerfile/haproxy


Replace dir with the directory that contains the haproxy.cfg file.
To check the status of your HAProxy server, visit
http://dockerip:8080/haproxy?stats
The username is admin and the password is password.
Try making some requests; you will notice that they always go through the same server, because the load balancer hashes on your source IP address.
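
A quick way to do that is to hit an object through the proxy a few times (the path here is just a placeholder for something that exists in your bucket):

for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://dockerip/path/to/object; done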



Getting rid of docker containers can be a pain. It's been about 6 months since I started using docker to build images for development purposes.
I took a look at my free space and I had only a couple of gigs free (5 gigs to be exact). It was time to clean this shit up.

docker rm $(docker ps -a -q -f status=exited)
# This removes all docker containers which have exited

docker rmi $(docker images -q)
# This deletes all the docker images (run it after removing the containers above, since images still in use can't be deleted)
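
If you'd rather not wipe every image, removing just the dangling (untagged) ones usually frees up most of the space:

docker rmi $(docker images -q -f dangling=true)
# This only removes untagged intermediate images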