Adding MySQL columns to a large table

Altering a large MySQL table has become much easier. I just wanted to document the pt-online-schema-change command from Percona Toolkit. The following example adds a new column to a Users table.

# pt-online-schema-change t=Users,u=xxx --database xxx --ask-pass \
    --alter "ADD COLUMN (last_ip varchar(40))" \
    --nocheck-replication-filters --critical-load Threads_running=100 --execute

Enter MySQL password:
Found 7 slaves:
Will check slave lag on:

# 8 software updates are available:
#   * The current version for MySQL Community Server (GPL) is 5.6.24.

Operation, tries, wait:
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `xxx`.`Users`...
Creating new table...
Created new table xxx._Users_new OK.
Waiting forever for new table `xxx`.`_Users_new` to replicate to ip-xxxx...
Altering new table...
Altered `xxx`.`_Users_new` OK.
2016-03-30T13:55:11 Creating triggers...
2016-03-30T13:55:11 Created triggers OK.
2016-03-30T13:55:11 Copying approximately 3827686 rows...
Copying `xxx`.`Users`:  10% 04:18 remain
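Before committing to --execute on a table this size, it is worth rehearsing the change. pt-online-schema-change has a --dry-run flag (mutually exclusive with --execute) that creates and alters the shadow table but never copies rows or swaps tables. A sketch, using the same xxx placeholders as above:

```shell
# Rehearse the change first: --dry-run builds and alters the shadow table
# but does not copy data or swap tables (xxx placeholders as above).
pt-online-schema-change t=Users,u=xxx --database xxx --ask-pass \
    --alter "ADD COLUMN (last_ip varchar(40))" \
    --nocheck-replication-filters --dry-run
```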

Securing Jenkins on Ubuntu 12.04.5 LTS

Jenkins by default allows everyone to see your jobs. Securing Jenkins is pretty easy:

0) Add two arguments to JENKINS_ARGS in /etc/default/jenkins

# --argumentsRealm.passwd.$ADMIN_USER=[password]
# --argumentsRealm.roles.$ADMIN_USER=admin

These should go near the end of the file. Once changed, restart Jenkins.
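For reference, the resulting line in /etc/default/jenkins ends up looking something like this. The webroot and port values below are just the Ubuntu package defaults and are illustrative; keep whatever your file already has and only append the two argumentsRealm flags:

```shell
# /etc/default/jenkins -- illustrative sketch; only the two
# argumentsRealm flags are the additions described above
ADMIN_USER=admin
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpPort=8080 \
  --argumentsRealm.passwd.$ADMIN_USER=[password] \
  --argumentsRealm.roles.$ADMIN_USER=admin"
```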

1) Install the roles plugin (the Role-based Authorization Strategy plugin).

2) Enable the plugin by going to the secure area:

a) http://YOURDOMAIN:PORT/configureSecurity/

b) Click “Enable security” and select the role-based strategy under Authorization, then save.

3) Restart Jenkins.

4) Under configuration settings http://YOURDOMAIN:PORT/manage

Click on Manage Roles (the menu name may have changed; look for anything with “roles”).

Add a new group called “Anonymous” and uncheck everything. Then add another group called “authenticated” and check everything. Jenkins will then immediately prompt you for a login.

The security settings ultimately live in /var/lib/jenkins/config.xml, which you can edit directly (vi /var/lib/jenkins/config.xml) if you ever lock yourself out.

CDN Comparison: Edgecast, S3 and Cloudfront in San Francisco

Recently we decided to compare CDNs. Using New Relic Synthetics we set up a test. Pretty interesting data.

Edgecast All countries

Edgecast Just San Francisco

EdgeCast seems to have occasional long hangs even in San Francisco.


S3 Just San Francisco

S3 seems to be slow, but very consistent.

CloudFront Just San Francisco

CloudFront has the greatest speed bursts; however, it suffers from a few long pulls.

s3-parallel-put: Move files to AWS S3 Fast

Today I used s3-parallel-put to send files to S3. The directory I was working with contained millions of small files. The standard s3cmd with the sync option never seemed to finish, and gave no error messages. With s3-parallel-put I can push files in parallel, even controlling the number of processes.

Using the command:

python /usr/bin/s3-parallel-put \
  --bucket=[YOUR BUCKET] \
  --content-type=guess \
  --processes=30 \
  /uploads/ >> /tmp/backup/log.txt 2>&1

The only tricky part here is the “guess” option. This tells s3-parallel-put to guess the content type of each object it uploads; S3 stores that with the object and serves it back as the Content-Type header. Web browsers do most of the retrieving and they want headers (which include content types).
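As far as I can tell the guessing is extension-based, along the lines of Python's mimetypes module (s3-parallel-put is Python, so this is a reasonable mental model, though it is an assumption about the tool's internals):

```shell
# extension-based content-type guessing, the same idea as the "guess" option
python3 -c "import mimetypes; print(mimetypes.guess_type('logo.png')[0])"    # -> image/png
python3 -c "import mimetypes; print(mimetypes.guess_type('index.html')[0])"  # -> text/html
```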

Also, the examples in the GitHub project use a “PREFIX” option; as far as I can tell it is simply prepended to the key names in the bucket.

You may need to install boto; if so, this is what I did (using Linux):

sudo easy_install pip
sudo pip install boto




How to start a MariaDB (MySQL) server on Amazon Linux

It should be an easy task to restart a MySQL server. However, I recently ran into this:

[root@ip-172-31-44-221 ~]# service mysqld start
Redirecting to /bin/systemctl start  mysqld.service
Failed to issue method call: Unit mysqld.service failed to load: No such file or directory.
[root@ip-172-31-44-221 ~]#

There was nothing set up in /etc/init.d for MySQL. After an hour of research I discovered this:

systemctl start mariadb.service
systemctl enable mariadb.service
/usr/bin/mysql_secure_installation

Apparently, after installing MariaDB you need to use systemctl with the mariadb unit name rather than the old mysqld service script.
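A quick way to skip the hour of research next time: ask systemd which MySQL-ish units it actually knows about before guessing at service names. A sketch:

```shell
# list whatever mysql/mariadb units systemd knows about, then use that name
systemctl list-unit-files | grep -iE "mysql|maria"
systemctl status mariadb.service
```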

Uploading a file using CURL to S3 AWS

This is a simple way to upload a file to S3 with curl. The file I am uploading is an “IPA” file, an Apple iTunes binary; change contentType to your liking. This kind of script is great for backups and other devops needs.

Example script (fill in the placeholder values with your own bucket and keys):

file=$1
bucket=[YOUR BUCKET]
contentType="application/octet-stream"
s3Key=[YOUR AWS ACCESS KEY]
s3Secret=[YOUR AWS SECRET KEY]
resource="/${bucket}/${file}"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
echo "SENDING TO S3"
signature=`echo -en "${stringToSign}" | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -vv -X PUT -T "${file}" \
  -H "Host: ${bucket}.s3.amazonaws.com" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  https://${bucket}.s3.amazonaws.com/${file}



To use:

sh yourscript.sh yourfile.ipa
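The part that usually goes wrong is the string to sign. For AWS signature version 2 it is the verb, the Content-MD5 (blank here), the content type, the date, and the resource path, joined by newlines, then HMAC-SHA1'd with your secret key and base64-encoded. A standalone sketch with made-up values (the secret, bucket, and path are placeholders, not real credentials):

```shell
# AWS signature v2, computed with throwaway placeholder values
s3Secret="notARealSecretKey"
contentType="application/octet-stream"
dateValue="Tue, 27 Jan 2015 19:29:00 +0000"
resource="/mybucket/yourfile.ipa"
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
echo "${signature}"   # a 28-character base64 string
```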

AWS Moving mysql to a new volume

Ugh. I am out of space on an old MySQL box. Moving to a new volume on AWS is pretty easy; here are my notes. The only real catch is to REMEMBER to add the new volume to fstab.

Creating a new volume

1) Create a new volume on ec2
2) Attach this new volume to the instance you want through their GUI.  You can just click on a volume and click on attach to instance
3) ssh to the instance
4) fdisk /dev/xvdf
5)  mkfs.ext3 /dev/xvdf
6)  mount -t ext3 /dev/xvdf /disk
7)  vi /etc/fstab
8)  add /dev/xvdf       /disk   ext3    defaults,owner,errors=remount-ro 1      2
9) then fdisk -l to see your new hd, and df -H

Moving MySQL to the new volume

1) rsync -vaC /var/lib/mysql/ /disk/mysql/
2) edit my.cnf and add:
   innodb_data_home_dir = /disk/ibdata
   log_bin                 = /disk/mysql-bin.log
3) mv /var/lib/mysql/ibdata /disk/ibdata/
4) service mysqld restart
5) then make sure mysql is running etc, and the slave is working

Using Ruby on Rails and AWS SNS to send APNS

SNS from AWS now offers a great way to send mobile APNS messages to your users. Sure, you can blow money on Urban Airship, but AWS keeps track of endpoints that fail, provides a queue system for you, and gives great error messages back when an APNS send fails. Before, we had to read a stream back from Apple's push notification servers to see if a message failed and deal with all the bizarre timeouts. The cost of AWS SNS is really cheap as well.

Setting up a SNS app

To set up an app, simply open your p12 file and copy/paste its two sections into the boxes below on AWS. You can open a p12 file with an ordinary editor like vi.

Screen Shot 2014-03-17 at 8.34.02 PM

Setting up SNS AWS

Create endpoints. Once you have an app set up with the correct p12/pem file, you can create endpoints. An endpoint is an APNS token that can hold a custom value; we place the account_id into this value. In the image below you can see a User Data field, which is where we add the account_id.

Screen Shot 2014-03-17 at 8.27.05 PM

Code Time

First you want to get the AWS gem for Ruby (aws-sdk) and install it. Once it is installed you can set up an endpoint from the API. The code looks complicated, but all it does is create an endpoint from the APNS token passed up by the mobile client.

arn = 'arn:aws:sns:us-east-1::app/APNS/.iOS.Production'
response = AWS.sns.client.create_platform_endpoint(:platform_application_arn => arn,
  :token => params[:apns_token], :custom_user_data => account_id.to_s, :attributes => {})
endpoint = response[:endpoint_arn]  # the endpoint ARN we publish to below

Now you have an endpoint. With this you can message away.

out_message = {:m => "Hi", :someotherdata => "f19b9"}
apns_string = {}
apns_string[:aps] = {:alert => out_message, :sound => 'receive_message.wav', :badge => 1}
aws_message = {:default => out_message.to_json, :APNS => apns_string.to_json}

AWS.sns.client.publish :message => aws_message.to_json, :target_arn => endpoint, :message_structure => 'json'

Cool parts of the AWS GUI.

You can publish / send a message right there in the GUI. You can also see which tokens are enabled and disabled. Before, you had to keep track of all of these yourself in a monster MySQL table.

Screen Shot 2014-03-17 at 8.56.41 PM

Loading files locally from a iOS webview

Ran into this issue yesterday. I am using a webview to develop a prototype in HTML5. The problem is that my webview loads an external file, index.html. If you try the standard file:// protocol you will get strange security issues and the files won't load. The solution is below: basically we use a myapp:// protocol, defined in the iOS app with a custom NSURLProtocol.

#---- Place this by your webview

[NSURLProtocol registerClass:[NSURLProtocolCustom class]];

#---- "NSURLProtocolCustom.h"

@interface NSURLProtocolCustom : NSURLProtocol
@end

#---- "NSURLProtocolCustom.m"

#import "NSURLProtocolCustom.h"

@implementation NSURLProtocolCustom

+ (BOOL)canInitWithRequest:(NSURLRequest*)theRequest {
    if ([theRequest.URL.scheme caseInsensitiveCompare:@"myapp"] == NSOrderedSame) {
        return YES;
    }
    return NO;
}

+ (NSURLRequest*)canonicalRequestForRequest:(NSURLRequest*)theRequest {
    return theRequest;
}

- (void)startLoading {
    // Strip the custom scheme and extension, then load the file from the app bundle.
    NSString *fileName = [[self.request.URL.absoluteString stringByReplacingOccurrencesOfString:@"myapp://" withString:@""] stringByReplacingOccurrencesOfString:@".png" withString:@""];
    NSString *imagePath = [[NSBundle mainBundle] pathForResource:fileName ofType:@"png"];
    NSData *data = [NSData dataWithContentsOfFile:imagePath];
    NSLog(@"---- DATA URL");
    NSURLResponse *response = [[NSURLResponse alloc] initWithURL:self.request.URL
                                                        MIMEType:@"image/png"
                                           expectedContentLength:data.length
                                                textEncodingName:nil];
    [[self client] URLProtocol:self didReceiveResponse:response cacheStoragePolicy:NSURLCacheStorageNotAllowed];
    [[self client] URLProtocol:self didLoadData:data];
    [[self client] URLProtocolDidFinishLoading:self];
}

- (void)stopLoading {
    NSLog(@"request cancelled. stop loading the response, if possible");
}

@end