AWS CloudFront SSL Install

Using Amazon CloudFront as a CDN is a great way to accelerate your website. If you run with HTTPS enabled, you will also want to reference the files you have hosted on CloudFront over HTTPS to avoid mixed-content warnings from the web browser. You can install your own SSL certificate onto the CloudFront edge servers very easily by following the process below.

Prerequisite: Install the AWS CLI (see Amazon's AWS CLI installation guide)

  1. Upload New Certificate and CA Bundle

    [root@www2]# aws iam upload-server-certificate --path=/cloudfront/ --server-certificate-name <cert-name> --certificate-body file://<cert.pem> --certificate-chain file://<chain.pem> --private-key file://<key.pem>

    The AWS API will respond with a JSON blob describing the new certificate that was installed.

    {
        "ServerCertificateMetadata": {
            "ServerCertificateId": "ASCAIAL7ABZ47NPIXXDG6",
            "ServerCertificateName": "",
            "Expiration": "2016-10-12T23:59:59Z",
            "Path": "/cloudfront/",
            "Arn": "arn:aws:iam::116215659343:server-certificate/cloudfront/",
            "UploadDate": "2015-10-09T18:03:10.749Z"
        }
    }

  2. Switch CloudFront to the new certificate using the web console. This will take a while to take effect, as the certificate needs to propagate to all AWS CloudFront edge servers.
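The upload step can be wrapped in a small helper so the file paths are easy to review before running. This is a sketch with hypothetical names (upload_cf_cert, cert.pem, chain.pem, key.pem); it echoes the command rather than executing it:

```shell
# Build and print the upload command for review; drop the leading
# `echo` to actually run it. All file names here are placeholders.
upload_cf_cert() {
  local name="$1" cert="$2" chain="$3" key="$4"
  echo aws iam upload-server-certificate \
    --path=/cloudfront/ \
    --server-certificate-name "$name" \
    --certificate-body "file://$cert" \
    --certificate-chain "file://$chain" \
    --private-key "file://$key"
}
upload_cf_cert example-com cert.pem chain.pem key.pem
```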

Clearing UNC Hard Disk Errors

One of the advantages of running a Linux RAID configuration is that it simplifies clearing UNC (uncorrectable sector) errors on your hard disks. If you have such a setup, you may follow the process below to clear the UNC errors from your disk and extend its life a bit longer. The example here is for a RAID 1 configuration, but any RAID level will do (besides RAID 0).

Determine the disk (X) with the errors using smartctl

smartctl -A /dev/sda
smartctl -A /dev/sdb
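When reading the smartctl output, the attribute to watch is Current_Pending_Sector (ID 197), the count of sectors awaiting reallocation. A sketch of pulling out that raw value, using a hypothetical sample line; on a real system you would pipe `smartctl -A /dev/sdX` into the awk instead:

```shell
# Hypothetical smartctl -A excerpt; the awk prints the raw value
# (last field) of the Current_Pending_Sector attribute.
smart_sample='197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       24'
echo "$smart_sample" | awk '$2 == "Current_Pending_Sector" { print $NF }'
```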

Determine the LBA where the first error was found

smartctl -a /dev/sdX

Determine the partition (Y) that contains the LBA

sfdisk -l -uS /dev/sdX
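A partition contains the LBA when start <= LBA < start + size, using the sector values sfdisk prints. A quick arithmetic sketch with made-up numbers:

```shell
# Check whether a given LBA falls inside a partition, using the start
# sector and size (in sectors) reported by `sfdisk -l -uS`.
# The values below are hypothetical, not from a real disk.
lba=1234567890
start=2048
size=1953519616
if [ "$lba" -ge "$start" ] && [ "$lba" -lt $((start + size)) ]; then
  echo "LBA $lba is inside this partition"
else
  echo "LBA $lba is outside this partition"
fi
```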

Do not continue this process if there are excessive errors (80 or more).  Replace the disk immediately.

Determine the RAID array (Z) that the partition belongs to

cat /proc/mdstat

Fail and remove the partition from the RAID

mdadm /dev/mdZ --fail /dev/sdXY
mdadm /dev/mdZ --remove /dev/sdXY

You must zero the superblocks on the partition to allow a proper remirror

mdadm --zero-superblock /dev/sdXY

Re-add the partition to the RAID to initiate the remirror

mdadm /dev/mdZ --add /dev/sdXY

Monitor the remirror progress.  When complete, review smartctl to see if the errors are gone.

  • If errors are still there, confirm you have been working with the correct partition
  • If all errors cannot be cleared, replace the disk.

When all errors are cleared, run a long SMART test to confirm the disk is healthy.

smartctl -t long /dev/sdX

It should complete without any read errors.

  • If more errors are found, repeat the process above.
  • Do not repeat the process more than 2 times.  At that point the drive is unhealthy and should be replaced.
  • Do not repeat the process at all if there are excessive errors (80 or more).

CUPS Command Reference

Show Printing System Status

lpstat -t

Show Queue Status

lpq -Plaser101

Create a PostScript Laser Printer Queue

lpadmin -p office-printer-1 -v lpd://office-printer-1/lp -m postscript.ppd.gz -E

Create a JetDirect Socket Queue

lpadmin -p office-printer-1 -v socket://office-printer-1:9100 -m postscript.ppd.gz -E

Remove a Printer Queue

lpadmin -x office-printer-1

Set Default Printer Queue

lpadmin -d office-printer-1

Pause / Unpause a Printer Queue

cupsdisable office-printer-1       # pause the printer
cupsenable office-printer-1        # unpause the printer
reject office-printer-1            # stop accepting new jobs
accept office-printer-1            # resume accepting jobs (if lpstat says no jobs are being accepted)

Cancel Print Jobs

cancel 33                          # cancel job number 33
cancel -a office-printer-1         # cancel all jobs for office-printer-1
lpmove 33 office-printer-2         # move job 33 to office-printer-2
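To cancel or move many jobs at once, the job IDs can be scraped from `lpstat -o` style output. A sketch with hypothetical sample lines; pipe real `lpstat -o office-printer-1` output into the helper instead:

```shell
# Hypothetical helper: strip the queue-name prefix from the first
# column of lpstat -o output, leaving bare job IDs for cancel/lpmove.
extract_job_ids() {
  awk '{ sub(/.*-/, "", $1); print $1 }'
}
printf 'office-printer-1-33 alice 1024 Mon 01 Jan\noffice-printer-1-34 bob 2048 Mon 01 Jan\n' | extract_job_ids
```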

Redirect Printing

lpadmin -p office-printer-1 -v lpd://office-printer-2/lp

HP ProCurve CLI Reference


Show Running Configuration
    • procurve1# write terminal

Save Running Config to Startup Config
    • procurve1# write memory

Show Mac-Address, Port, and VLAN
    • procurve1(config)# show mac-address

Show Port Descriptions
    • procurve1# show name 

Show Port State & Status
    • procurve1# show interfaces brief

Show port errors and dropped packets
    • procurve1# show interfaces port


Add Port to VLAN
    • procurve1(config)# vlan vlanID
    • procurve1(vlan-ID)# untagged port#

Remove Port from VLAN
    • procurve1(config)# vlan vlanID
    • procurve1(vlan-ID)# no untagged port#

Basic Port Commands

Disable Port
    • procurve1(config)# interface port
    • procurve1(eth-port)# disable 

Enable Port
    • procurve1(config)# interface port
    • procurve1(eth-port)# enable

Set Port Description
    • procurve1(config)# interface port
    • procurve1(eth-port)# name Laser101

Remove Port Description
    • procurve1(config)# interface port
    • procurve1(eth-port)# no name

Ubiquiti EdgeSwitch CLI Reference


Show Running Configuration
    • (ubnt1)# show running-config

Show Saved Configuration
    • (ubnt1)# show startup-config

Save Running Config to Startup Config
    • (ubnt1) # write memory


Create a VLAN
    • (ubnt1) # vlan database
    • (ubnt1) (Vlan)# vlan 10
    • (ubnt1) (Vlan)# vlan name 10 "Voice"

Add Port to VLAN
    • (ubnt1) # configure
    • (ubnt1) (Config)# interface 0/1 
    • (ubnt1) (Interface 0/1)# vlan pvid 10
    • (ubnt1) (Interface 0/1)# vlan participation include 10
    • (ubnt1) (Interface 0/1)# vlan participation exclude 1

Remove Port from VLAN
    • (ubnt1) # configure
    • (ubnt1) (Config)# interface 0/1 
    • (ubnt1) (Interface 0/1)# vlan pvid 1
    • (ubnt1) (Interface 0/1)# vlan participation include 1
    • (ubnt1) (Interface 0/1)# vlan participation exclude 10
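The add/remove sequences above can be generated by a small helper that prints the CLI lines ready for pasting into a session. move_port_vlan is a hypothetical name; the port and VLAN numbers are examples:

```shell
# Print the EdgeSwitch CLI lines that move a port's untagged
# membership from one VLAN to another.
move_port_vlan() {
  local port="$1" from="$2" to="$3"
  printf 'configure\ninterface %s\nvlan pvid %s\nvlan participation include %s\nvlan participation exclude %s\nexit\nexit\n' \
    "$port" "$to" "$to" "$from"
}
move_port_vlan 0/1 1 10
```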

Show What VLAN All Ports Are In
    • (ubnt1) # show vlan port all

Force VLANs to Be Tagged When Transmitted on a Port
    • (ubnt1) # configure
    • (ubnt1) (Config)# interface 0/1
    • (ubnt1) (Interface 0/1)# vlan tagging 1,2,3,4,5

Force Only Tagged VLANs Allowed Into a Port (HP ~ Trunk)
    • (ubnt1) # configure
    • (ubnt1) (Config)# interface 0/1
    • (ubnt1) (Interface 0/1)# vlan acceptframe vlanonly

Force Only Untagged VLANs Allowed Into a Port (Cisco ~ Access Mode)
    • (ubnt1) # configure
    • (ubnt1) (Config)# interface 0/1
    • (ubnt1) (Interface 0/1)# vlan acceptframe admituntaggedonly

Allow Both Tagged and Untagged VLANs into a port (Cisco ~ Trunk)
    • (ubnt1) # configure
    • (ubnt1) (Config)# interface 0/1
    • (ubnt1) (Interface 0/1)# vlan acceptframe all

Basic Port Commands

Disable Port
    • (ubnt1) (Config)# interface slot/port
    • (ubnt1) (Interface slot/port)# shutdown

Enable Port
    • (ubnt1) (Config)# interface slot/port
    • (ubnt1) (Interface slot/port)# no shutdown

Set Port Description
    • (ubnt1) (Config)# interface slot/port
    • (ubnt1) (Interface slot/port)# description "Office PC"

Show Port Descriptions, State & Status
    • (ubnt1)# show interfaces status

Show Port Status
    • (ubnt1)# show port all

Show Power over Ethernet (PoE) status on all ports
    • (ubnt1)# show poe status all


Show Mac Address forwarding table
    • (ubnt1)# show mac-addr-table

Show port errors and dropped packets
    • (ubnt1)# show interface 0/1

Test Ethernet cable to device
    • (ubnt1)# cablestatus 0/1

Show switch Temperatures and Fan Status
    • (ubnt1)# show environment

Generate a Self-Signed SSL Certificate

First, determine the name to be used for the key. For a webserver, use the fully qualified domain name. For a more general wildcard key (*.example.com), just use the domain. The following example creates a general-purpose 2048-bit key that is valid for 10 years. Generate a private key and secure it with a passphrase. This passphrase is only temporary; it will be removed in a later step.

openssl genrsa -des3 -out server.key 2048

Generate the certificate signing request.

openssl req -new -key server.key -out server.csr

Answer the questions as prompted

  • Country Name: US
  • State or Province Name: Michigan
  • Locality Name (eg, city) [Default City]:Detroit
  • Organization Name: Jonathan E. Ross
  • Organizational Unit Name:
  • Common Name: *.example.com (the wildcard, or the server's FQDN)
  • Email Address:
  • A challenge password: (leave blank)
  • An optional company name: (leave blank)

Remove the temporary passphrase from the private key.


openssl rsa -in server.key -out server-nopass.key


Sign the certificate signing request ourselves.

openssl x509 -req -days 3650 -in server.csr -signkey server-nopass.key -out server.crt
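The whole flow can also be scripted non-interactively using openssl's -passout/-passin and -subj options. The file names and subject fields here are examples only; substitute your own:

```shell
# Non-interactive self-signed certificate generation. The temporary
# passphrase is supplied on the command line and then stripped.
openssl genrsa -des3 -passout pass:temppass -out server.key 2048
openssl req -new -key server.key -passin pass:temppass \
  -subj "/C=US/ST=Michigan/L=Detroit/O=Example/CN=*.example.com" \
  -out server.csr
openssl rsa -in server.key -passin pass:temppass -out server-nopass.key
openssl x509 -req -days 3650 -in server.csr -signkey server-nopass.key -out server.crt
```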

SSH Server Security Guide

Although SSH is itself a secure protocol, the standard password-based login mechanism is weak. Allowing password-based logins via SSH poses serious security concerns, mainly from brute-force login attempts. To combat this, one would have to ensure that every password for every login account on the system is complex enough to withstand these attacks.

The only proven method for securing password-based SSH logins is to simply not allow them at all. Instead, users are required to log in with an RSA key pair. The concept is rather simple, but the strength of this access mechanism comes from the following:

  • A user needs to have a key
  • The key needs to be authorized on the server
  • The key requires a pass-phrase to be used


The RSA key pair has two parts, a public key and a private key. The public key is like a special lock that only the private key it was generated with can unlock. So a system can have hundreds of public keys (locks) and every user has a unique key to the system (their private key). Their private key also requires a pass-phrase to use.

A successful login requires a user to have their public key (their custom lock) installed on the remote server. They also need to have their private key (their key) on the system they are connecting from. Finally, they need to enter a pass-phrase to be able to use their private key (their key) to match the public key installed (to unlock their custom lock) and gain access to the system.

It is because of these three requirements that using RSA key pairs to login over SSH eliminates the susceptibility of a server to brute-force attempts. It is also the only method to effectively secure SSH servers on which multiple people log in.


Securing an SSH server by using key-based authentication involves a few simple steps:

  • Generate the key pair
  • Install the public key on the remote server
  • Verify login with key
  • Change the SSH server configuration to deny password-only authentication


This should be done on the system you will be using to connect to your remote server. This could be your home computer, for example.

First, generate a location for the keys to exist by running the following command in a terminal.

mkdir ~/.ssh/

Next, generate the keys inside this directory

cd ~/.ssh/

ssh-keygen -t rsa -f mykey_rsa

Note that the -f option allows you to specify a filename. If omitted, ssh-keygen will default to ~/.ssh/id_rsa.

Note that two files were generated: mykey_rsa and mykey_rsa.pub. The .pub file is the public key. This is the “lock”. It will be installed onto each remote server that you need access to. This file is safe to transport over networks, as opposed to the other file: the private key. The private key should stay on the host used to connect to remote servers. It should never be transferred to other systems.

When prompted, enter the pass-phrase desired. This creates the “key” part that was discussed in the above theory. Do not leave the pass-phrase blank. This can technically be done but should only be done under very special circumstances which will not be covered in this guide.

Now, configure your computer to use this as your “Identity” when connecting. Create the ~/.ssh/config file and insert the following line inside it.

IdentityFile ~/.ssh/mykey_rsa
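~/.ssh/config also supports per-host sections, which is handy when different servers use different keys. A hypothetical example (the host alias, hostname, and user are placeholders):

```text
Host myserver
    HostName server.example.com
    User alice
    IdentityFile ~/.ssh/mykey_rsa
```

With this in place, `ssh myserver` picks the right key and user automatically.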

Finally, secure the ~/.ssh directory by running the following:

chmod o-rwx ~/.ssh/ -R

chmod g-rwx ~/.ssh/ -R
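The same lockdown is often expressed with absolute modes: 700 on the directory and 600 on the files inside. A sketch, with tighten_ssh_perms as a hypothetical helper name:

```shell
# Tighten permissions on an SSH directory: owner-only access to the
# directory and every file within it.
tighten_ssh_perms() {
  local dir="$1"
  chmod 700 "$dir"
  find "$dir" -type f -exec chmod 600 {} +
}
```

Point it at your own directory with `tighten_ssh_perms ~/.ssh`.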


Next, the public key (the .pub file generated above) needs to be installed on the remote server. Begin by transferring the key from your system to the server:

scp ~/.ssh/mykey_rsa.pub server:~/.ssh/

Once the public key is on the server, append the key to an authorized_keys2 file. This file tells the server what keys are authorized to connect to the server using that user account.

cd ~/.ssh/

cat mykey_rsa.pub >> authorized_keys2

Ensure the permissions are correct, then cleanup the directory:

chmod o-rwx authorized_keys2
chmod g-rwx authorized_keys2
rm mykey_rsa.pub


Now we are ready to verify the key allows us to login. Simply SSH into the server with the following command:

ssh server

If the public key and private key are properly in place on both the server and your system, respectively, you should be prompted for your pass-phrase. After entering it successfully, you should be connected to the server.


This final step should only be performed if you can login successfully to the server using your SSH key. If you cannot do so and you proceed further, you will lock yourself out of your server. This would be very bad if physical access to the server was not feasible.

Begin by editing the SSH server configuration:

nano /etc/ssh/sshd_config

Locate the following lines in the file and adjust to match as necessary.

PermitRootLogin no

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys2

PasswordAuthentication no

Now restart the SSH service:

service sshd restart

Finally, before you disconnect, verify that connecting to the server using your SSH key still works. Also, make sure that connecting without a valid SSH key will fail.


The current state of computer security indicates that every internet-facing SSH server is guaranteed to be subjected to brute-force login attempts. It is therefore imperative that SSH servers are configured to prevent these attempts from being successful. Failure to do so will only postpone the day your server gets hacked.

There are many effective ways to secure an SSH server. The instructions presented above can be followed if one has a desire to use public key mechanisms to secure their SSH server.

Linux Raid Guide

The following is a guide I have assembled detailing the creation and management of a RAID 5 array running on Linux. This is software RAID I’m talking about as opposed to hardware RAID which requires a separate hardware RAID controller. Let us begin with some background information and requirements to make all this work. The information below works on CentOS 5.3 x86_64 with the 2.6.30 Linux Kernel.


For a software RAID 5 in Linux you need a minimum of 3 hard disks. All three disks should be of the same size, speed, and model. They don’t have to be, but that is the optimal configuration.
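Keep in mind that RAID 5 dedicates one disk's worth of space to parity, so usable capacity is (N - 1) times the disk size. A quick arithmetic sketch with hypothetical 2 TB disks:

```shell
# Usable capacity of an N-disk RAID 5 array: one disk holds parity.
disks=3
disk_tb=2
echo "usable: $(( (disks - 1) * disk_tb )) TB of $(( disks * disk_tb )) TB raw"
```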

You also need md support compiled into your Kernel, and mdadm installed.


The very first step is to actually install the hard drives. It is good practice to install them on the same controller. I used SATA drives for this guide. After installing them I had 3 new devices: sdc, sdd, and sde.


Next, you need to create partitions on the disks using fdisk. Create a single primary partition on each drive with a partition type of fd (linux raid).


Each RAID partition created is seen by mdadm as a RAID device. We need to tell mdadm to create a new array using these devices. The following command will do so:

mdadm --create --verbose /dev/md2 --level=5 --auto=yes --raid-devices=3 /dev/sd[cde]1

After executing this command you will have a new RAID device, /dev/md2, and the array itself will be in the process of building.

You can watch the process with:

watch cat /proc/mdstat

Depending on the size of the disks you have installed this could take a very long time.
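The recovery line in /proc/mdstat includes a percentage that a script can poll. A sketch using a hypothetical sample of the output; on a real system, replace `echo "$mdstat_sample"` with `cat /proc/mdstat`:

```shell
# Extract the rebuild percentage from mdstat-style output.
mdstat_sample='md2 : active raid5 sde1[3] sdd1[1] sdc1[0]
      3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 12.6% (123456789/976755968) finish=95.2min speed=102400K/sec'
echo "$mdstat_sample" | awk -F'recovery = ' '/recovery/ { split($2, a, "%"); print a[1] }'
```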

Once you are done creating the array, you should update the mdadm.conf file. This helps the kernel detect and initialize the array at boot time. You should also rerun this command whenever you make changes to the array, e.g. adding disks.

mdadm --detail --scan > /etc/mdadm.conf


After the creation of the array is complete it is not usable until you put a file system onto it. Deciding which file system is best is probably the most crucial step in this process. Here are some key factors to keep in mind:

When choosing a file system…

  • How will I add disks?
  • Can I resize my volume?
  • Will expanding storage space require downtime?

The best solution in my opinion is the use of XFS as a file system. It allows growing of the file system without having to unmount the volume, so no downtime would be required for future improvements.

To create and manage an XFS file system you need to have several tools installed: xfsprogs, xfstools, and kmod-xfs. Make sure the xfs module is loaded, or just reboot, before creating the file system with the following command:

mkfs.xfs -f /dev/md2

That should complete relatively quickly, after which you can mount up /dev/md2 and make sure it is readable and writable. At this point we now have a working software RAID 5 array on Linux.


In time it will be necessary to expand the array and add more storage by adding another hard disk. There are 5 steps involved in this procedure:

  1. Insert the new disk
  2. Create the RAID partition on the disk
  3. Add the new disk as a spare in the existing array
  4. Grow (expand) the array onto the new disk
  5. Grow the raid volume’s file system to utilize the new capacity

The first two steps have already been covered. Once they are completed we must tell mdadm there is a new device we would like it to use as a spare in our existing array. Assuming our new disk is sdf and we created the partition sdf1 properly, the following command does so:

mdadm /dev/md2 --add /dev/sdf1

If you look at /proc/mdstat it will be clear that there is now a spare in the array. Now we simply tell mdadm that we have one more drive in the actual array. So if you went from 3 drives to 4 you would use the following command:

mdadm /dev/md2 --grow --raid-devices=4

This will take an extremely long time, longer than the initial array creation. You can, again, watch the status in /proc/mdstat.

Finally, we expand the file system running upon the array. If you chose XFS as recommended the command is very simple and doesn’t require any unmounting of the file system. Just execute:

xfs_growfs /dev/md2

In no time the file system will have expanded itself.


It is inevitable that you will one day need to replace one of the disks in your RAID array. Knowing what to do in that situation will save you from really screwing something up.

Replacing a disk is as simple as…

  • Determine which device has failed
  • Fail the device
  • Remove the device from the array
  • Remove the physical disk
  • Replace the failed disk with a new disk
  • Determine the new disk’s device name
  • Partition the new disk
  • Add the new raid device into the array
  • Wait for the array to finish rebuilding

Looking at /proc/mdstat will tell you which disk has failed. There will be an “F” next to it. Let’s look at an example where sdg has failed and the RAID device on that disk is sdg1 and is a member of a RAID 5 array called md2.

First, mark the device as failed:

mdadm /dev/md2 --fail /dev/sdg1

Now remove it from the array:

mdadm /dev/md2 --remove /dev/sdg1

Next, physically remove the failed disk from the system and replace it with a good disk. Depending on how the drives were detected, drives may not be labeled (sda, sdb, etc) in an order you might expect to match up with the physical connections. The following command will match drive labels with serial numbers so you can be assured you remove the actual bad drive.

ls -l /dev/disk/by-id

After inserting the new disk, look at the output of dmesg to determine the device name Linux assigns to it. Let’s continue assuming the new disk was named sdg.

We now partition the new disk. This is even easier than before because we can use sfdisk to make it have the same partition table as other disks in the array. To make it look just like sdf you would use the following command:

sfdisk -d /dev/sdf | sfdisk /dev/sdg

Finally, we add the new raid device into the array:

mdadm /dev/md2 --add /dev/sdg1

Then, just keep an eye on /proc/mdstat until the RAID is completely rebuilt. It could take a while.


In time things may go horribly wrong. For example, losing more than one disk in an array is very possible. Though it is likely that only one of the failed disks is actually bad, it can be difficult to determine which one has indeed failed. In this scenario we can still recover the array.

In the event of total hardware failure, it is possible to move all the RAID members for the array into another machine. In this case, /etc/mdadm.conf would not be populated with the UUID for the array to be rescued. Mdadm has a special assemble mode just for this case.

mdadm --assemble /dev/md2 --verbose /dev/sdc1 /dev/sdd1 /dev/sdf1 /dev/sdg1

This will attempt to reassemble an array using the RAID members sdc1, sdd1, sdf1, and sdg1. Notice that it is not necessary to specify the RAID level. It is, however, necessary to specify the RAID device the array should be assembled into. You must choose a device, /dev/md2 in this case, that is not in use.

Now, if things are worse off, the above command would have failed with a message saying something along the lines of “could not assemble array with only 2 out of 4 members”. This would be the message for a failed RAID 5 with 4 member disks. Healthy is 4/4, degraded is 3/4, so having 2/4 is not possible. We can recover from this, however.

mdadm --assemble --force /dev/md2 --verbose /dev/sdc1 /dev/sdd1 /dev/sdf1 /dev/sdg1

In this command we simply force the assemble. What we will end up with is likely a degraded array. Mdadm will work backwards for us while assembling. So, it will recover the members of the array which failed in domino-effect fashion after the initial failed member. All that is left will be a single failed member. Just re-add the failed member as explained in the previous section, “Replacing a Failed Disk”. If it adds back in properly and the array resyncs without problems, then the disk is not bad after all.

mdadm /dev/md2 --add /dev/sdg1

Finally, after we have rescued our array by using mdadm’s assemble mode, we need to regenerate the mdadm.conf file so the array can be redetected properly upon reboot.

mdadm --detail --scan > /etc/mdadm.conf

Rename a Linux MD Device

Often it may be necessary to change the name of an MD device in Linux. For example, say you are migrating from one RAID array to another. At one point you may have both arrays active, the old and the new. In the end you may wish to remove the old array and keep just the new one, but reuse the old array’s name.

In this example, md3 is the old array and md4 was created to be the new array. We’ve moved the data over to md4 already and now we want to rename md4 to be md3 from this point forward.

First, remove md3 completely (after dismounting the filesystem):

mdadm --stop /dev/md3

mdadm --remove /dev/md3

Next, dismount the new array (md4) and reassemble as md3:

mdadm --stop /dev/md4

mdadm --assemble /dev/md3 /dev/sd[abcdefghijk]1 --update=name

The magic here is “--update=name”, which tells mdadm to update the superblocks that previously contained the name md4 with the new name you have specified.

Be sure to update your /etc/fstab if necessary!