Today I upgraded one of my old Macs with an SSD. The SSD is freaking fast. But since the SSD was less than half the size of the existing HDD, I thought it would be a good idea to move the Docker host machine to the HDD instead of the SSD.

Doing this was pretty simple.

First, make sure the virtual machine is switched off. Then click on Settings and then on the Storage button, which is the third from the left.


Select the disk.vmdk and click on the floppy icon with the minus button. For those who don't know what a floppy is, click here. Then click OK to save.

Next, open the following path in Finder: ~/.docker/machine/machines/default

Next, copy the disk.vmdk file to the HDD.

Once it is copied you can delete it from this folder.
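If you prefer the terminal over Finder, the same move can be done with a few commands; the /Volumes/HDD path below is just a placeholder for wherever your HDD is mounted:

# Open the docker-machine folder in Finder (optional)
open ~/.docker/machine/machines/default

# Copy the virtual disk to the HDD, then remove the original once the copy is verified
mkdir -p /Volumes/HDD/docker
cp ~/.docker/machine/machines/default/disk.vmdk /Volumes/HDD/docker/disk.vmdk
rm ~/.docker/machine/machines/default/disk.vmdk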

Now we need to connect the Virtual drive back to the virtual machine.

Then click on File and select Virtual Media Manager.


Then click on the Remove button for the virtual drive. If you don't follow this step, VirtualBox will not allow you to connect the drive to the virtual machine.


Now you are ready to connect the virtual drive to the virtual machine. Click on the Settings button for the virtual machine, select Storage, click on the plus HDD button and select the virtual drive.
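For the command-line inclined, VBoxManage can do the same unregister/attach steps; the controller name and port below are guesses, so check them with VBoxManage showvminfo default first:

# List registered disks to find the old entry (and its UUID)
VBoxManage list hdds

# Unregister the old disk entry (use the path or the UUID from the listing above)
VBoxManage closemedium disk ~/.docker/machine/machines/default/disk.vmdk

# Attach the copy on the HDD to the "default" virtual machine
VBoxManage storageattach default --storagectl "SATA" --port 1 --device 0 --type hdd --medium /Volumes/HDD/docker/disk.vmdk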


Now Kitematic will use the same virtual machine and will not notice a difference.



Last week my colleague Victor from Paack and I were digging through the server to look at the logs.

-rw-rw-r-- 1 dev      dev  3.5M Nov 25 17:33 newrelic_agent.log
-rw-r--r-- 1 www-data root   1G Nov 25 17:55 nginx.access.log
-rw-r--r-- 1 www-data root   1G Nov 25 17:55 nginx.error.log
-rw-rw-r-- 1 dev      dev     0 Apr 30  2015 production.log
-rw-rw-r-- 1 dev      dev    1G Nov 25 17:33 puma.access.log
-rw-rw-r-- 1 dev      dev   10G Nov 25 12:03 puma.error.log
-rw-rw-r-- 1 dev      dev     0 Dec  2 17:34 puma.log
-rw-rw-r-- 1 dev      dev   50M Dec  2 17:52 sidekiq.log

When we looked at this folder we were shocked. Our log files had grown to several gigabytes. That was a huge problem, since it could result in slower requests as they get logged.

After some googling we found that logrotate comes as part of Ubuntu, so we thought of using it.

The configuration was pretty simple; logrotate's configuration is really cool. It looks like a small DSL.

/home/dev/apps/paack/shared/log/*.log {
  weekly
  size 500M
  rotate 30
  missingok
  compress
  copytruncate
}

weekly

Rotate the logs on a weekly schedule.

size

Rotate a log once it grows larger than the given size (500M here); when size is set it takes precedence over the weekly schedule.

rotate

Keep a maximum of 30 rotated files before the oldest one is deleted.

missingok

Don't raise an error if a log file is missing.

compress

Compress the rotated logs with gzip.

copytruncate

Copy the log file and then truncate the original in place, so the running process can keep writing to the same file descriptor without being restarted.
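To check the configuration before cron picks it up, logrotate has a dry-run mode; the /etc/logrotate.d/paack path below is just where I would assume the block above is saved:

# Dry run: show what logrotate would do without touching any files
sudo logrotate -d /etc/logrotate.d/paack

# Force a rotation right away to verify the whole setup end to end
sudo logrotate -f /etc/logrotate.d/paack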


More information can be found on the logrotate documentation website.



While working on the Paack application I came across an issue where the main hard disk on the machine was getting full, mostly due to the fact that Capistrano keeps at least 25 of the last deployments on the server. That meant the assets and the precompiled data would all be written to the same drive. At the time I looked at the machine, the system had about 36% of its drive space free. This was a huge cause for concern, since I didn't want to deal with a limited-storage issue later on.
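For a quick look at how bad things are, the usual disk tools tell the story; the release path below is an assumption based on the app's layout rather than the exact commands we ran:

# Free space on each mounted filesystem
df -h

# Size of the kept Capistrano releases (adjust the path to your deploy_to directory)
du -sh /home/dev/apps/paack/releases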

So I began looking at the Azure documentation on setting up an extra HDD on a single virtual machine. Azure restricts the number of drives you can attach per virtual machine based on the tier of the virtual machine.




Nginx is a really awesome web server. It's like a magic tool that does everything you need.

Two weeks ago I set up a server with multiple websites running on it. Since the server was beefy enough, we could run five websites on it with a relatively moderate amount of traffic.


The way Nginx allows us to configure multiple websites is with a configuration block in the /etc/nginx/sites-available or /etc/nginx/sites-enabled directory. It's not necessary to store the configuration in these particular directories, but it is treated as good practice.

This is how I set up websites on Nginx. I would first create the configuration file in the /etc/nginx/sites-available/ directory. Here is a sample configuration for a simple static site.
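The server name, root path and log locations below are placeholders for illustration rather than a real site's settings:

# /etc/nginx/sites-available/example.com: minimal static-site server block
server {
    listen 80;
    server_name example.com www.example.com;

    root /var/www/example.com/public;
    index index.html;

    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log;

    location / {
        # Serve the file if it exists, otherwise return a 404
        try_files $uri $uri/ =404;
    }
}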

The most important command to run once you have written the configuration is sudo nginx -t. It doesn't run the server, it just tests whether the configuration is syntactically correct.

Now that everything is ready, we just have to enable the website configuration. To enable the configuration we create a symbolic link using the following command:

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com

Make sure you include the full folder/file path when creating the symbolic link.

To add another website follow the process mentioned above.

Once you are done, run sudo service nginx restart. This restarts the server, picking up the new configuration.
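If you have only changed the site configuration, a reload is gentler than a full restart since it doesn't drop in-flight connections:

# Reload the configuration without fully stopping the server
sudo service nginx reload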



In my quest to move away from Heroku I came across Capistrano, a popular automated deployment tool written in Ruby with millions of downloads and lots of plugins.

Here is how I set up Capistrano to deploy static websites and create rollbacks in case a build is buggy.

Capistrano deploys the code to /var/www/application_name. It does so by logging the user into the server and pulling the code from the git repository you specify. On top of this, Capistrano adds a bunch of commands to make the process simpler.
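A minimal config/deploy.rb for a static site might look roughly like this; the application name, repository URL and paths are placeholders, not a real project's settings:

# config/deploy.rb: illustrative sketch for a static site
set :application, "example_site"
set :repo_url,    "git@github.com:example/example_site.git"

# Capistrano deploys into /var/www/<application>, as described above
set :deploy_to,   "/var/www/#{fetch(:application)}"

# Keep a few old releases around so deploy:rollback has something to fall back to
set :keep_releases, 5

With that in place, cap production deploy publishes a new release and cap production deploy:rollback switches back to the previous one.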

Running

cap -T


shows the list of commands Capistrano supports.



Heroku is a really cool service. It lets me spin up servers in a matter of minutes rather than hours. But Heroku has a dark side: as a developer, Heroku provides you a free instance which should be sufficient for your regular testing and development process, but as soon as you want to launch your application, you will be deterred by the amount of money you will be paying per month.

Starting with my last project I have been trying to move away from Heroku and build the features that Heroku offers myself. The process looks something like this:

  1. Automatic deployment and running of tasks (done, thanks to Capistrano)
  2. Auto-scaling of the website based on CPU usage.
  3. Deployment of code once the tests pass.

1. Automatic deployment and running of tasks

Capistrano plays a huge role in this step. I was able to deploy the code from my machine without even touching the server. No more SSH sessions and managing different deployment versions.

2. Auto-scaling of the website

This is probably the most difficult part of the whole setup. Auto-scaling in the cloud world is very dependent on the service provider you choose. Microsoft Azure, Amazon AWS and DigitalOcean all have different ways to launch and provision a website. Currently I have the server provisioning working, but integrating the code deployment and the load balancer is still very tedious.

3. Deployment of code once the tests pass

This step of the process will also rely on a tool, which is most likely going to be Jenkins. Jenkins is very popular and has a huge community and plenty of existing plugins which I can make use of.




Apache 2 is a well-known web server for Linux. Setting up an SSL certificate on it is very simple and straightforward.

First we need an SSL certificate. We can get one from numerous online SSL providers, or we can generate one ourselves.
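One thing to note before generating anything: the /etc/apache2/ssl directory used below may not exist on a stock install, so create it first:

sudo mkdir -p /etc/apache2/ssl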

To generate a self-signed certificate with a 2048-bit key:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt

This will create your public SSL certificate and a private key.

Next we have to enable the SSL module for Apache 2:

sudo a2enmod ssl

Once this is enabled, let's add the certificate to the websites running on the server. On Apache 2 you can find the site configurations in /etc/apache2/sites-available/. Let's change the default-ssl.conf file at /etc/apache2/sites-available/default-ssl.conf.

Just change a couple of lines in the file.

SSLEngine on
SSLCertificateFile /etc/apache2/ssl/apache.crt
SSLCertificateKeyFile /etc/apache2/ssl/apache.key

These directives tell Apache which certificate to use and which key goes along with it. All we have to do now is enable the configuration:

sudo a2ensite default-ssl.conf

And then restart the Apache 2 server:

sudo service apache2 restart




Everywhere you go, whether to eat, sleep or even fly, you will come across open wireless networks.

Currently, where I reside, most of the shopping malls have free-to-use Internet. In exchange for your traffic and mobile number you get to access the Internet at crawling speed. These networks are usually open networks, so basically your traffic is naked and unencrypted. Anyone can sniff the traffic, inject traffic into your stream and even MITM you. So rather than staying a vulnerable target, just use a VPN. You have two options to get a VPN: either you buy one from a VPN provider, which you certainly cannot trust for anonymity, or, the better option, you set up your own VPN.

Last week I finished my VPN cookbook, and this post is a follow-up to my previous rant about documentation and how the value of open source is diminished without good documentation.

Let's get to the meat of the post: it will help you set up your own VPN server, so you no longer have to worry about open networks.

  • First you need to get Chef installed. Download it from here.
  • Then install two Ruby gems that will help you set up the VPN server (the install commands are sketched below):
    librarian-chef, a Ruby gem to manage your Chef cookbook dependencies.
    knife-solo, a tool which understands Chef cookbooks and helps you run them on your server.
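Assuming you already have a working Ruby installation, the two gems install in the usual way:

gem install librarian-chef
gem install knife-solo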

Once you have the prerequisites installed, create a Cheffile and add my cookbook to it. Your Cheffile should look like this.

cookbook 'pptpd', github: 'h0lyalg0rithm/pptpd'

Then run librarian-chef install to download all the cookbooks to your computer.

Then run knife solo bootstrap user@your-server-ip (substituting your own user and server address); this will run Chef on your server and install the recipes.

Once you run it you will notice that it creates a nodes directory which contains a JSON file for your server.
Edit your JSON file to add the username and password you want set up on your VPN server.

{
  "run_list": [
    "recipe[pptpd]"
  ],
  "automatic": {
    "ipaddress": "host"
  },
  "pptpd":{
    "users":[{
        "username": "user",
        "password": "password"
      }]
  }
}
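Once the node JSON has been edited, run knife-solo again to push the change to the server; the user and host below are placeholders:

# Re-run the cookbooks so the new VPN user is created on the server
knife solo cook user@your-server-ip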

Now that you have the VPN set up, connect to it. PPTPD is one of the oldest VPN servers out there. It should work even on older smartphones, even some of the old Nokias.



Amazon S3 is a really powerful data store. S3 works by grouping content into buckets. If you manage a lot of buckets on Amazon, it is not feasible or secure to use your own security access key with the buckets (especially if you have to hand over the project).
Moreover, there is no official way to transfer S3 buckets from one AWS account to another. The best approach is to create a bucket on the client's account and only grant the appropriate permissions on that bucket.
Amazon lets you define permissions through their IAM policies. An IAM policy has 3 keys which define your permissions.
These are taken right from the Amazon docs.

Actions: what actions you will allow. Each AWS service has its own set of actions. For example, you might allow a user to use the Amazon S3 ListBucket action, which returns information about the items in a bucket. Any actions that you don’t explicitly allow are denied.

Resources: which resources you allow the action on. For example, what specific Amazon S3 buckets will you allow the user to perform the ListBucket action on? Users cannot access any resources that you have not explicitly granted permissions to.

Effect: what the effect will be when the user requests access—either allow or deny. Because the default is that resources are denied to users, you typically specify that you will allow users access to resource.

Here is the IAM policy that I use to restrict a user to a particular bucket. It gives the user all permissions on the bucket, i.e. to add, delete, list…

{
  "Version": "2014-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
       "arn:aws:s3:::<bucket-name>",
       "arn:aws:s3:::<bucket-name>/*"
      ]
    }
  ]
}
Just replace <bucket-name> with your S3 bucket name.
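If you prefer the command line over the console, the same policy can be attached as an inline user policy with the AWS CLI; the user name, policy name and file path below are placeholders:

# Attach the policy in policy.json to an IAM user as an inline policy
aws iam put-user-policy \
  --user-name client-bucket-user \
  --policy-name s3-single-bucket-access \
  --policy-document file://policy.json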




Last month I wrote a post on saving bandwidth costs on Amazon by using Nginx as a reverse proxy for Amazon S3. After doing some further research I came across CloudFront, a CDN offering from the web services giant Amazon.

CloudFront integrates really well with Amazon S3. Apart from the initial setup, which migrates your S3 data to the CDN (it took about 30 minutes), the rest of the process was straightforward. The CDN took care of loading data from S3, and the performance benefits were huge.

Previously, a 200 KB file would take about 8-10 seconds to download directly from S3.
Using CloudFront dropped the time down to 2-3 seconds; that's close to a 4x improvement.

I would highly recommend using CloudFront. To sweeten things up, Amazon also provides a nice dashboard showing the different requests, the hit rate, misses and errors.