Progressive download mp4 with JW Player

Written Tuesday, July 26th, 2011 by Chris

The latest version of JW Player (version 5.7) now allows us to stream H.264 videos to iOS while still supporting the Flash player for other platforms. YEAH! HTML5 isn’t quite ready for us to use and abandon the old ways, but we’re getting closer.

Anyhow, while deploying a progressive download video for a client I chose to use this latest version of JW Player. I ran into an issue where the source file was an mp4 and it was fully downloading before playing, rather than progressively downloading (playing as it downloads). I’ve run into this problem before and luckily it’s an easy fix.

It has to do with the moov atom of an mp4 being placed at the tail end of the video rather than at the head. So once you ffmpeg the video into your required H.264 specs, you need to run qt-faststart on the video, which relocates the moov atom to the beginning of the file, and your problem is fixed. Very simple syntax:

qt-faststart sourceVideo.mp4 finalVideo.mp4
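If you want to check whether a file is already fast-start ready before running qt-faststart, here’s a rough sketch (my own, not part of the qt-faststart tool) that walks the top-level MP4 boxes and reports whether moov precedes mdat — the byte strings at the bottom are fakes just to exercise the check, not playable files:

```python
import struct

def top_level_atoms(data: bytes):
    """Yield (atom_type, offset) for each top-level MP4 box."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack(">I", data[pos:pos + 4])
        atom = data[pos + 4:pos + 8].decode("ascii", "replace")
        yield atom, pos
        if size < 8:  # sizes 0 and 1 have special meanings; stop for this sketch
            break
        pos += size

def is_fast_start(data: bytes) -> bool:
    """True if the moov atom comes before mdat (progressive-download ready)."""
    order = [atom for atom, _ in top_level_atoms(data) if atom in ("moov", "mdat")]
    return order[:1] == ["moov"]

# Fake byte layouts just to exercise the check:
slow = (struct.pack(">I", 16) + b"mdat" + b"x" * 8 +
        struct.pack(">I", 16) + b"moov" + b"y" * 8)
fast = (struct.pack(">I", 16) + b"moov" + b"y" * 8 +
        struct.pack(">I", 16) + b"mdat" + b"x" * 8)
print(is_fast_start(slow), is_fast_start(fast))  # False True
```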

For those who don’t have qt-faststart already installed on their CentOS machine, it’s built from the FFmpeg source tree:

shell> svn checkout svn:// ffmpeg
shell> cd ffmpeg/
shell> ./configure
shell> make
shell> make tools/qt-faststart
shell> sudo cp tools/qt-faststart /usr/local/bin/qt-faststart

Thanx to turbolinux blog for the qt-faststart install instructions.

Editing your Hosts file in Windows

Written Thursday, June 2nd, 2011 by Jim

The Hosts file on your computer is responsible for mapping hostnames to IP addresses. There are various reasons why you may want to edit your hosts file that I won’t get into here, but here in the office our main reason is to bypass the Internet for our in-house development server. This means that if the Internet is down, we can still access our development sites in a web browser and continue developing.

We also will have clients edit their hosts file so they can see new versions of their site before we make any DNS changes.

The hosts file in Windows is located at:

C:\Windows\System32\drivers\etc\hosts

Once you locate the hosts file, right-click on it and open it in any text editor such as Notepad. From here editing it is easy. At the bottom of the file we enter the IP address of our internet connection, followed by the domain we want to bypass (make sure there is a space or tab between the IP address and the hostname). Repeat for any additional hostnames and we’re done. If you want to temporarily omit an entry without completely deleting it, just add a # in front of the IP address and the system will simply ignore the entry. Now just save and exit your file and you’re done. An example entry (with a placeholder IP and domain):

192.168.0.5    dev.example.com

On a Windows 7 machine you may not be able to edit the hosts file directly; in that case, simply copy the file temporarily to a different location, edit it there, then copy/overwrite it back to the proper directory.
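The hosts-file format described above (IP, then whitespace-separated hostnames, with # disabling the rest of a line) can be sketched like this — a toy parser to show the rules, not how Windows actually implements it:

```python
def parse_hosts(text: str) -> dict:
    """Map hostname -> IP from hosts-file-formatted text."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # '#' comments out the rest
        if line:
            ip, *names = line.split()         # IP first, then hostnames
            for name in names:
                mapping[name] = ip
    return mapping

sample = "192.168.0.5 dev.example.com\n#10.0.0.1 old.example.com\n"
print(parse_hosts(sample))  # {'dev.example.com': '192.168.0.5'}
```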

Transcode Video for Microsoft Movie Maker

Written Wednesday, June 1st, 2011 by Chris

I’m not a huge fan of MS Movie Maker, but it comes on newer versions of Windows and is easy to use, so who am I to judge. 🙂

Just had a small project that required some DVDs converted so they’d work in Movie Maker. It was a bit more challenging than I expected. I just assumed Xvid, DivX, or H.264 AVI files could be imported into Movie Maker. Alas, this is not the case (at least from what I was able to find).

After several attempts at various video and audio codecs I finally got one to work. Here’s the process I used:

bash$> cat VTS_1_1.vob VTS_1_2.vob VTS_1_3.vob > myMovie.vob
bash$> ffmpeg -i myMovie.vob -qscale 7 -vcodec wmv2 -s 720x480 -aspect 4:3 -ab 256k -ar 48000 -vol 400 -async 48000 -ac 2 -acodec pcm_s16le -g 300 myMovie.wmv

** The aspect ratio, resolution, and other settings were all defined to use the same specs as the original VOB file.

Diff over SSH

Written Monday, May 23rd, 2011 by Chris

Readers of this blog will know that I’m a big fan of rsync. We use it almost exclusively for doing our site deployments. Generally I do an rsync dry-run to ensure I’ve got the correct folders and a reasonable idea that the files I’m about to launch live are really what should be going up.

Sometimes though I’ll see a file and I’m unsure as to why it’s going to be pushed up to the live server. I’m not always the one who’s edited the files so I’d like to see the differences between the live and the dev server. I know looking at the diff in CVS or SVN is one way, but mid deployment sometimes it’s just easier to see the differences between the two files.

So use this to see the differences between a local file and one on a remote server:
ssh user@ "cat /var/www/html/remote_file_to_compare.php" | diff - "/var/www/html/local_file_to_compare"

** Use the proper IP address and file names of course.
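As a local stand-in for the ssh-pipe-to-diff trick above, here’s roughly the same comparison with Python’s difflib — the “remote” string is hard-coded here, where in practice you’d capture the output of the ssh command:

```python
import difflib

# Pretend this string came back from: ssh user@host "cat remote_file"
remote = "line one\nline two\n"
local = "line one\nline 2\n"

diff = difflib.unified_diff(remote.splitlines(), local.splitlines(),
                            fromfile="remote", tofile="local", lineterm="")
print("\n".join(diff))
```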

You can add switches to the diff command, but this gives you the basic syntax. The nice thing about this solution is that it’s transferable to any command. I’m doing a diff here, but you can run any remote server command in this way. For example:
ssh user@ "free -m"
ssh user@ "df -h"

This simple trick will save you the time of logging into the server with a new session.

Flash and z-index issue

Written Thursday, May 19th, 2011 by Jim

I came across a scenario where I needed to add html content in a place that was currently taken up by Flash content. Now, I could’ve broken up the Flash content into multiple swfs and placed them accordingly on my page so the Flash and html content could harmoniously co-exist beside each other, but I found a simpler alternative that produces way less code and doesn’t involve breaking up my Flash.

I first attempted to place my html div content on top of the Flash using position: absolute; in my CSS (and of course, position: relative; on the parent div). This worked flawlessly in FF3, FF4, Chrome 11, Opera 11, and IE9 but the Flash content appeared above my html content in Safari 5 and (of course) IE7 and IE8.

To solve this issue I attempted to layer my content using z-index in my CSS, hoping to force the html content to display on top of the Flash, but that didn’t work. It didn’t matter if the Flash was set to a z-index of 0 and the html content was set to 1000, the Flash didn’t want to budge. At this point it seemed a shame to have to do all this the long convoluted way just to support those 3 odd behaving browsers.

Upon a little research, almost everyone who has run into this particular issue has claimed that Flash will always take priority over any content on your page, but upon stumbling on a page from slightlymore, it appears there is a solution.

Turns out that altering the window mode of your Flash with a parameter of ‘opaque’ allowed the html to sit on top of the Flash quite nicely. Simple.

So what was:
'wmode', 'window',
was changed to:
'wmode', 'opaque',

Doing this made the z-index values in my CSS completely irrelevant, so I removed them. Now all I had to do was re-design my current swf so content didn’t reside underneath the html and I was done.

Thanks to Clinton from slightlymore for pointing this out.

Google Site Search over SSL

Written Tuesday, May 17th, 2011 by Chris

We recently had a client who wanted to enhance the search function on their website. Their existing solution was just a keyword search through various fields in the database, which is fine, but isn’t as robust a solution as they wished for. One of the key features they wanted was “recommendations” when someone made a spelling error. This is potentially quite complex to implement, so rather than building it we opted to use Google Site Search.

We implemented the free Google Site Search function and got it all up and working, then we launched it and upgraded to the premium ad free version for the live site. That all went smoothly until we hit what appeared to be a small problem.

The entire website in question runs over SSL. This means that in certain browsers (Internet Explorer in particular) a warning message pops up when mixed SSL/non-SSL content is displayed on a page. The Google content is non-SSL, so we had a problem. Granted, it’s only a warning, but MS has done a good enough job of making the language scary that many users will opt not to display the unprotected content…resulting, in our case, in a blank search results page.

I did a lot of searching around and caused a few additional bald spots on my scalp looking for a fix to the issue. Something you’d expect to be a quick fix isn’t. On the Google Site Search customize section they even provide an HTTPS option for a user to choose. It pulls the required external javascript over HTTPS as it should, but after looking at what that file actually does I saw that it generates an iframe with a hard-coded HTTP: within it. I simply downloaded that javascript file, put it on my site, modified its contents to change HTTP: to HTTPS:, changed the call to use my own js file rather than the one from Google, and it’s done and works.
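As a toy illustration of the patch described above (the string below is made up — the real Google file differs), the fix boils down to a scheme rewrite before serving your own copy of the file:

```python
# Made-up stand-in for the hard-coded line inside Google's generated JS:
js = 'frame.src = "http://www.google.com/afs/search.js";'

# Rewrite the scheme so everything on the page is served over HTTPS:
patched = js.replace('"http://', '"https://')
print(patched)  # frame.src = "https://www.google.com/afs/search.js";
```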

If you want to look at the code on show_afs_search.js you’ll need to read up on how to make the code readable. Start by reading this previous entry I wrote.

No credit to anyone this time…I figured it out all on my own. 🙂 In case you’re thinking it’s such an obscure problem no one else would ever have the same issue, read here.

Update: I was premature with posting this fix. Although it did work on the results page, there were issues when you chose a link from that results page. The href makes a call to javascript that isn’t wrapped within the SSL, which then produces a warning. I gave up on it and just used cURL to grab the XML data, then parsed it, styled it, and output it.

Artifact on Firefox 4 on a Mac

Written Tuesday, May 10th, 2011 by Chris

Mac and Firefox
Recently we had an odd bug pop up on one of our projects. It only showed up in Firefox (3.6 & 4) on OS X. The issue was that on input fields (in our case, radio buttons and text entry fields) an odd graphical artifact showed up far to the right of the entry field.

Mac artifact

It took some digging and a lot of experimenting, but we finally tracked it down to a line of CSS:
overflow: auto;

When I changed that to overflow: hidden; the problem went away.

I didn’t bother to dig any deeper or try to figure out the cause of the problem yet…I will when things settle down.

Font Embedding using Font Squirrel’s @font-face Generator

Written Monday, May 2nd, 2011 by Jim

Font Squirrel

Font embedding is something I hadn’t implemented in website development until recently. The main reasons for this were lack of browser support, and the preparation and process that go with making a font work on a website didn’t make it a worthwhile effort, especially when the alternative method of using graphics was satisfactory and gave a consistent cross-browser look.

Fortunately, the guys at Font Squirrel have made this process a bit easier by providing a service that gives us our desired fonts in all the formats required by the major browsers. With their @font-face Kit Generator, you upload your desired font (make sure it’s properly licensed), and it spits out a downloadable package that contains your font in all the necessary formats plus the @font-face css code needed for implementation.

For example, uploading a font “MyFontFamily.otf” to the generator will result in a downloadable package containing “myfontfamily-webfont.eot” (needed for Internet Explorer 4+), “myfontfamily-webfont.svg” (required for iOS 4.2 and under), “myfontfamily-webfont.woff” (IE9+, FF3.6+, Chrome 5+) and “myfontfamily-webfont.ttf” (raw truetype file that works with FF3.5+, Safari 3.1+, Chrome and Opera 10+). You would then attach the @font-face CSS provided into your site’s CSS file:

@font-face {
font-family: 'MyFontFamily';
src: url('myfont-webfont.eot?#iefix') format('embedded-opentype'),
url('myfont-webfont.woff') format('woff'),
url('myfont-webfont.ttf') format('truetype'),
url('myfont-webfont.svg#svgFontName') format('svg');
}

Make sure your fonts are placed and being called from the proper location and declare “font-family: ‘MyFontFamily’;” in your CSS where you want your font to show up. That’s it!

Check it out!

Google AdSense Specs

Written Tuesday, April 26th, 2011 by Chris

Google Adsense

Since I always forget and have to look it up each time I figured I’d post this info to my own blog:

– Headline: 25 characters max
– Line 1: 35 characters max
– Line 2: 35 characters max
– Display URL: 35 characters max
– Destination URL: 1024 characters max
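If you want to sanity-check ad copy against these limits programmatically, here’s a quick sketch (the field names are my own, not Google’s):

```python
# Character limits from the list above, keyed by hypothetical field names.
LIMITS = {"headline": 25, "line1": 35, "line2": 35,
          "display_url": 35, "destination_url": 1024}

def over_limit(ad: dict) -> list:
    """Return the fields whose text exceeds the allowed length."""
    return [field for field, text in ad.items() if len(text) > LIMITS[field]]

ad = {"headline": "Fresh Flowers Delivered",
      "line1": "Same-day delivery in town",
      "line2": "Order online in two minutes",
      "display_url": "www.example.com"}
print(over_limit(ad))  # []
```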

Google Source
Google Optimization notes


AWS SES MTA

Written Thursday, April 7th, 2011 by Chris

The above subject line is what I searched for this afternoon. I took a moment to be amused as this search term brought up 2.2 million results. What does it even mean!?

AWS = Amazon Web Services (Amazon’s most excellent server hosting services)
SES = Simple Email Service (Amazon’s solution to bulk mailings)
MTA = Mail Transport Agent (sendmail, postfix, etc).

A bit of back story for you. I recently set up a group of servers for a client since their previous single dedicated server wasn’t cutting the mustard. We migrated them over to AWS with a load balancer, a small farm of web servers (EC2), an NFS server (EC2), a database (RDS), and a CDN for distributing some of their content (S3+CloudFront). It took a bit of doing and learning the “Amazon way” but it all worked out and is quite a nice setup. It allows for rapid scaling up to deal with bursts of traffic and scaling down to save costs. I’ll write more about this setup in a future post. I want to talk about email problems today.

Amazon Web Services

When we first set things up we discovered that getting mail sent out from an EC2 server is sort of like sending it through snail mail…it doesn’t get to its destination. EC2 apparently is blacklisted on almost all SPAM blacklists. I can see why, with the ease of creating and destroying servers…those nasty spammers must have had a heyday in the early days of AWS. We were left with the problem of not being able to send out our administrative emails like user account validation, lost password recovery, etc.

Our first solution was to use Google’s awesome corporate email service. So I set up the domain with Google and we modified the server’s sendmail to relay all our admin-type emails through our Google support account. This worked really well until this morning, when I found out that Gmail limits the number of emails that are allowed to go out to different users…again, anti-spam precautions on their part. Here are some references:

So much for that solution which I was thinking was a nice method. Back to the drawing board.

I ended up finding Amazon’s Simple Email Service (SES), which seemed to do exactly what I needed. I filled out the forms and got an approved account in less than a day. Thank goodness too, because Google had cut us off and we were losing registrations quickly.

Now the trick was to get SES working on the EC2 web servers. We initially set up sendmail on the server to do our relay…but only because it was already installed and we knew how to set up the relay easily enough. So I went through the process of configuring sendmail to use SES. The instructions are easy to follow and I did everything they said, but I kept running into problems when I went to actually send the emails. I was getting errors:

SYSERR(root): buildaddr: unknown mailer aws-email
and my emails weren’t getting delivered.

In my PHP code I know I was sending the emails using a validated account, and I was able to run the command successfully from the command line without problems.

I gave up on sendmail and installed my trusty MTA postfix. Of course I had to set it up, but again it was an easy config.

It too had errors:

status=bounced (Command died with status 1: "/opt/aws/bin/". Command output: Email address is not verified. )
relay=aws-email, delay=0.39, delays=0.01/0/0/0.38, dsn=5.3.0, status=bounced (Command died with status 1: "/opt/aws/bin/". Command output: Missing final '@domain' )

At this point I’m pretty frustrated with the whole thing. After a break I came back to it and eventually found this post:

Ben was right, the email was trying to send from the crazy EC2 hostname.

I did what he said and created a /etc/postfix/sender_canonical file with:
/(.*?)@(.*)/ $

And updated /etc/postfix/main.cf with:
sender_canonical_maps = regexp:/etc/postfix/sender_canonical

It still didn’t work, BUT it was rewriting the domain portion correctly. I saw that it was apache that was trying to send out the mail so I just tweaked his regexp to be this:

and there you have it…it worked! I know it’s not exactly right to hard-code the valid user email into the postfix config, so I’ll fix that up tomorrow, but for now I at least have a working config and can proceed with putting out some other fires.
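For illustration, here’s the kind of rewrite a sender_canonical regexp performs, mimicked with Python’s re module — the pattern matches any sender address, and the target address here is a placeholder, not the one from my actual config:

```python
import re

# A regexp_table-style rewrite: map any local sender (e.g. apache on a
# machine with an EC2-style hostname) to one fixed, SES-verified address.
# 'noreply@example.com' is a placeholder.
rewritten = re.sub(r"(.*?)@(.*)", "noreply@example.com",
                   "apache@ip-10-245-61-12.ec2.internal")
print(rewritten)  # noreply@example.com
```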

Thanx Ben for the tip, it really helped me out.

Update (Oct 1, 2013) – changed the method of doing this. Followed these instructions. Just a note: be sure to authorize your sending email addresses or domain, otherwise you’ll get an error: “554 Transaction failed: Invalid email address” or “501 Invalid MAIL FROM address provided”.

Setting up a Primary URL

Written Tuesday, April 5th, 2011 by Chris

This post is more for my own reference as I always forget the syntax.

When you have several domains but want to have one of them set up as a primary I generally set up Apache like so:

<VirtualHost *:80>
ServerName www.domain.com
ServerAlias domain.com *.domain.com
DocumentRoot /var/www/domain/
</VirtualHost>

Then in the root folder set up a .htaccess with the following:

Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\.domain\.com$
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]

Error code: sec_error_unknown_issuer

Written Monday, February 14th, 2011 by Chris


Some time ago we purchased an SSL certificate from Comodo for a client to use on their site. When we first installed it everything worked smoothly.

Recently however, we noticed an issue where we were getting an SSL warning page with the error:
The certificate is not trusted because the issuer certificate is unknown.
Error code: sec_error_unknown_issuer

After lots of digging and reading I figured out that the “intermediate CA” had changed. It was a simple matter of re-downloading the certificate from Comodo’s admin interface and installing the new file.

It would have been nice if Comodo had emailed me to tell me the certificate had changed/updated.

For instructions on how to install an SSL certificate (in case you’ve stumbled upon this link and are looking for that), use these instructions as a guide (no promises it’s up to date).

Stop The Meter On Your Internet Use

Written Saturday, February 5th, 2011 by Chris

Although the CRTC has been served notice by the Gov’t to reconsider its stance on metered billing for Internet usage, please watch these videos and consider signing the petition. The exorbitant charges for over-usage are INSANE! If the ISPs had just charged a reasonable price for over-usage instead of gouging their customers, I doubt this would have become such a big deal.

PS: George, you’re THE MAN! Love your show…and yes, I watch it over the Internet.

This video gives a much better understanding of what’s going on:

Also, read over this article to get an idea of how crazy the over-usage prices really are:

For the naysayers out there, read this to put things into perspective.

We’re Geeks not Nerds. Get it right!

Written Monday, January 31st, 2011 by Jim

The terms “geek” and “nerd” have often been used interchangeably, and I, for one, find being called a “nerd” rather… insulting! I’m a geek! And even though I know the difference, I just can’t explain it. The dictionary defines the two terms as such:

  • geek Slang
    a computer expert or enthusiast (a term of pride as self-reference, but often considered offensive when used by outsiders.)
  • nerd Slang
    an intelligent but single-minded person obsessed with a nonsocial hobby or pursuit: a computer nerd.

We know that when trying to define slang, we get less than accurate results. In this day and age, geeks are not limited to the realm of computers (e.g. movie geeks, gaming geeks), and nerds are not necessarily single-minded. Even using computers as an example of a “non-social” hobby is far from accurate (e.g. Facebook).

There is a viral image of a venn-diagram floating around the internet that sets these terms straight, defining the distinctions between nerds, geeks, dorks, and the like. Unfortunately, it is difficult to pin-point an original source of this brilliance as this image has been replicated and reproduced in blogs, articles, and forums all over the web.

Study, observe, and get it right!

Nerd Venn Diagram

HTML Email Horrors – Part 2: MS Outlook 2007/2010

Written Monday, January 24th, 2011 by Jim

Have I mentioned how much of a pain in the neck HTML emails are? Every email client will render an HTML email differently because each one has its own rules around how it deals with the markup. For example, Gmail doesn’t support styles in the <head> section and Hotmail ignores margins. Basically, designing HTML emails means reverting back to the old ways, like using tables, and throwing out anything new we’ve learned, like external stylesheets. Every developer knows how important web standards are for cross-browser compliance, yet among the multitude of email clients out there these standards are broken far more frequently. There is even a movement, the Email Standards Project, striving for compliant rendering across all email clients. The worst of the worst among these clients, however, bar none, is Outlook 2007. Even previous versions of Outlook rendered more correctly than Outlook 2007.

MS Outlook 2007

With the release of Outlook 2007, Microsoft switched from their Internet Explorer rendering engine to the (get ready for it) …Word rendering engine! It doesn’t make any sense to switch html rendering from a web-browser engine to a word-processor engine, but that is exactly what they have done. To make things even worse, Microsoft fails to see its mistake and continues to use the Word rendering engine in Outlook 2010. We know from statistics that Outlook 2007 has a market share of 9%, and with the introduction of Outlook 2010 this number will continue to grow as users upgrade and phase out their older (yet better) versions. This means that Outlook 2007/2010, and the headaches that come with it for developers, will be here to stay for at least the next 5 years. <facepalm>

We already had to step in a time-machine and go back in time to deal with these html emails but apparently we didn’t go back far enough!

Need some help? At least there is a community out there to help voice our opinions to Microsoft. On Microsoft’s own website there is a full-listing of supported and unsupported capabilities in Outlook 2007. If you’d like to know which CSS attributes are supported in specific email clients, the guys at CampaignMonitor provided us this handy CSS support guide.

Div Tag Disappears in TinyMCE

Written Thursday, January 13th, 2011 by Chris


We’ve worked with WordPress as a simple website platform for some time now. Often we’ve run into scenarios where we’ve customized a section of a page or post that required us to add a set of DIV tags. This is all well and good and works great if you’re only ever going to work in HTML mode. But if you swap between HTML and Design modes (as a client would do once they took ownership of the content updates), then the DIV tag gets removed (disappears).

Tiny MCE

This is due to the WYSIWYG editor in WordPress called TinyMCE. Now I know there are various plugins to help manage TinyMCE, but we just needed to prevent the DIV tags from being removed. Oh, and any fix we make should not be overwritten when we do a WordPress upgrade.

The fix
In wp-content/themes/your_custom_theme either edit or create a file called functions.php. Inside there add the following:

function change_mce_options( $init ) {
$init['extended_valid_elements'] = 'div[*]';
return $init;
}
add_filter('tiny_mce_before_init', 'change_mce_options');

Of course if you have other TinyMCE customizations you want to make you can add them into there as well. For example I also have these:

$init['theme_advanced_blockformats'] = 'p,address,pre,code,h1,h2,h3,h4,h5,h6';
$init['theme_advanced_disable'] = 'forecolor';

At this point I’d generally give credit to the person and/or site that figured this out…but it was some time ago that I found this fix and I just don’t recall where I got the information. So I apologize for not being able to cite my source.

Linux “timer” command

Written Monday, October 25th, 2010 by Chris

I was setting up a new server to be monitored this week and found a new Munin script that I’d not seen before: http_loadtime. It loads a page on the server (http://localhost/) and times how long the load takes to complete. On the server I was setting up it all went fine and started to graph right away.


I figured this would be a nice thing to see on the other web servers I manage, so I added this new script to those servers as well…but that’s when I started to run into issues. The script wouldn’t run due to a missing “timer” command. The script does a “which timer” to find the location of the timer command so that it can call it from the full path…a very sensible thing to do. The problem was that the “which timer” command didn’t return anything. A search of the server showed the command was nowhere to be found. There are other timer-type libraries, but not the actual timer command.

The funny thing is when I executed the timer command from the command line outside the script it worked fine and exactly as expected. So where is the damn timer command then? How does it work?

Lots of searching turned up far too many results about people wanting to change the clock on their server, and nothing to do with what I was looking for. Eventually I found an interesting note somewhere (can’t find the link again, sorry) that mentioned the timer command may be built into the shell.

That was the info I needed. I knew about some commands being built into the shell, but never actually knew which ones they were. Apparently in the default install of CentOS 5 the bash shell has “timer” built in. Armed with that info it was a simple matter of doing a yum install timer and I was good to go. The script worked perfectly fine as is.

PS: This is an odd issue, but for those looking for the answer it’s super hard to find in the search engines. So please link back to this post if you find it helped you. Hopefully we can get this post’s ranking high enough that it’ll show higher on Google to help the next person.

SSL on Name Based Host

Written Wednesday, October 20th, 2010 by Chris

I was working on my dev server today and ran into an issue with a project I’ve recently taken over. The admin section of the site requires SSL, so I had to set up a self-signed certificate on my server. This is not my first time doing SSL certs so I didn’t have an issue there. I did however run into a new Apache error:
[warn] _default_ VirtualHost overlap on port 443, the first has precedence

Google is my friend, so I was able to find a fix. On the dev server I use name-based virtual hosting, allowing me to have a single IP run many client dev sites. To fix the issue I just needed to add:
NameVirtualHost *:443

I put mine right under
NameVirtualHost *:80

note: this goes above the VirtualHost configs, outside the <VirtualHost *:80> tags

I give credit to where credit is due. Thank you webchalk.

Munin Not Graphing

Written Sunday, October 17th, 2010 by Chris


Today I went to add a new server to my monitoring setup and experienced an odd issue. I of course first installed munin-node on the host and updated the munin.conf file on the monitoring server. Since the munin script runs every 5 minutes, it generally takes a while to see the results of a new host in the graphs. The host showed up in the list of monitored servers…just with no list of monitored services.

After some time I still wasn’t seeing any results. It took me a while to figure this one out. I looked up all sorts of help on Google and monitored the various munin and linux log files…nothing helped. I finally started to diagnose the issue by running the munin scripts manually.

It took forever but I finally figured out that the host name on the new server was the cause of the issue. Apparently the hostname of the server needs to match exactly (including case) what the hostname is in the munin.conf file. When I read this in the munin-node.conf file:
# Set this if the client doesn't report the correct hostname when
# telnetting to localhost, port 4949

it occurred to me to telnet into the host to see if the hostname matched. I’d telneted in already to ensure there wasn’t a firewall issue, but I didn’t think about the host name.

So if your munin graphs don’t work, be sure to either use the exact same hostname in the munin.conf file or set the host_name in the munin-node.conf file to match the one you used in munin.conf on the monitoring server.
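A quick way to see what hostname a node reports, without telnetting by hand — a sketch that assumes the node’s usual greeting format (“# munin node at <hostname>”):

```python
import socket

def munin_reported_hostname(host: str, port: int = 4949, timeout: float = 5.0) -> str:
    """Read the munin-node greeting banner and return the hostname it reports."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        banner = conn.makefile().readline().strip()
    return banner.rsplit(" ", 1)[-1]  # last word of "# munin node at <hostname>"
```

Compare its return value against the hostname you put in munin.conf (case matters).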

** normally at this point I give credit to the website that helped me figure this out, but this time I did it all on my own.

HTML Email Horrors – Part 1: Horizontal Image Gaps in Gmail

Written Tuesday, October 12th, 2010 by Jim

Horizontal Gaps in Gmail

HTML emails are a pain in the neck to begin with. There are so many rules and conditions, and every email client has its own nit-picks about making an html email render correctly. I could go on and on and probably dedicate an entire section to the nuances of email clients and their rendering abilities, but I’m going to mention just one of them here, particularly because it’s due to a change that happened recently.

It turns out, for some reason, Google changed the way they handle the rendering of images in an html email. As a result, horizontal gaps would appear above and below images where they really should be flush against each other. Depending on how you’ve made your image slices this could make your email look very broken.

The solution to this is to add style="display:block" within all the img tags. For example:

<img style="display:block" src="yourimage.jpg" alt="your image" />

That’s it. Well that solves the Gmail problem.

Full credit goes to these guys.

Hidden Windows Features

Written Thursday, September 16th, 2010 by Jim


Network Password Management

A very frustrating aspect of Windows (among many) is its unclear management of network passwords. On some occasions, when entering a password you will be given a check-box asking if you’d like to save the password onto the system. On other occasions this feature is not available. When logging on to our internal network here at the office it can be quite handy to have our local machines remember our network passwords so we don’t have to enter them day after day. And if the system does remember our passwords, how do we change or even remove them? Windows has a buried feature that allows the management of all your network passwords. This has always been a part of Windows and may be well known, but I thought it’d be worth mentioning for those who are unaware of it.

Click “Start” –> “Run…”
Type: “control userpasswords2” (without the quotes)
Click on the “Advanced” tab.
Click on the “Manage Passwords” button.

Voila! All your previously connected network computers should be listed here and you can manage your passwords via the “properties” button.

Shutdown Timer

Windows has the built-in ability to shut down your computer via a timer. This can be handy if you’ve initiated a lengthy process and want to shut down the computer after it’s done, but don’t want to wait around for it to finish.

Click “Start” –> “Run…”
Type: “shutdown -s -t XX” (where XX is the time until shutdown in seconds)

Alternatively, instead of “-s” you can type “-r” and that will reboot the computer instead.

WordPress Plug-ins

Written Thursday, September 9th, 2010 by Jim


We like using WordPress for many of our clients who require frequent updates to their websites. It's a great CMS solution that is straightforward to use, and for us developers there is vast community support, including a plethora of plug-ins. Plug-ins improve site functionality, save time and enhance the user experience. Here are some of our favorite WordPress plug-ins (in no particular order) that we find ourselves using over and over again.

  • Kimili Flash Embed: Allows easy embedding of Flash movies (swfs) using SWFObject.
  • Multi-Level Navigation: Adds a multi-level CSS based dropdown/flyout/slider menu to your navigation items.
  • Cforms: A highly customizable, flexible and powerful form builder.
  • Category Order: Easily allows re-ordering of categories via drag and drop.
  • Exclude Pages from Navigation: Provides a checkbox on the editing page which you can check to exclude pages from the primary navigation.
  • Hide Admin Panels: Hides admin panels for a specific user and/or role. This is great for preventing certain users from accidentally changing settings that should not be changed.
  • WP-Maintenance Mode: Adds a splash page to the website that lets visitors know the site is down for maintenance.
  • Slideshow Gallery: A flexible and easily configurable Javascript powered slide-show gallery.
  • Custom Admin Branding: Custom branding of the WordPress install, including custom images and styles for the log-in screen, admin header and footer.
  • WP-Lytebox: Used to display images by overlaying them on the current page. It's based on the popular Lightbox script.
  • Google Analytics for WordPress: Simple addition of Google Analytics adding lots of features such as custom variables and automatic clickout and download tracking.
  • IntenseDebate: A feature-rich comment system which enhances and encourages conversation on the website.
  • WP-Cache: An extremely efficient WordPress page caching system that makes your site faster and more responsive.

WhatTheFont!

Written Thursday, September 2nd, 2010 by Jim


As a designer, I am often given the task of re-designing or re-purposing a website where the client wishes to maintain the look and feel of their previous site, or simply to maintain their brand. It's important to continue using the same fonts, because consistent font use is all part of defining a brand. The problem is that, for the most part, the client is unable to provide us with the font file that was used because it's in the hands of an unreachable previous web developer or designer. In other cases, I may be given an image (say, of a poster or a banner) and tasked with altering or adding copy to it, but again, the font used in the image is unavailable to me. What do I do?

The geniuses at MyFonts have a tool on their website called WhatTheFont! It's a font recognition tool: you upload an image containing the unknown font, identify which letters are which, and it spits out the fonts that most closely match the one in your image. It's not foolproof, and there are times I've uploaded images the system couldn't recognize, but all in all it does a pretty darn good job of saving me time and migraines. Check it out!

JavaScript Deobfuscator & Beautifier

Written Wednesday, September 1st, 2010 by Chris

Some developers spend a great deal of time developing JavaScript for various projects and want to protect that code from being stolen or re-purposed. I get why they do it and support the concept. The problem is when the code has issues. If the code has been obfuscated, it's impossible to make changes or edits to the JavaScript… or, in the case I recently came across, to fix the developer's code when there's a bug/conflict unique to my install.

So, for every security feature available there's a hack to undo it. I found a tool that deobfuscates and beautifies JavaScript so it's readable and editable (thanks Einar for making this tool… very helpful).

** side note: Sometimes it's not about obfuscating the code. Sometimes a developer is just trying to minify the code for faster download. Minifying can reduce the size of a JavaScript file considerably, but in my opinion should only be done for REALLY large .js files.
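As a quick illustration of what a beautifier buys you (the names and code here are my own made-up example, not output from any particular tool):

```javascript
// A minified one-liner, typical of what you'd pull down from a live site:
var max=function(a,b){return a>b?a:b},clamp=function(v,lo,hi){return max(lo,v)>hi?hi:max(lo,v)};

// The same code after deobfuscation/beautification: readable,
// editable, and now actually possible to debug.
var max = function (a, b) {
    return a > b ? a : b;
};
var clamp = function (v, lo, hi) {
    var atLeast = max(lo, v);        // raise v to the lower bound
    return atLeast > hi ? hi : atLeast; // then cap at the upper bound
};

console.log(clamp(15, 0, 10)); // prints 10
```

Both versions behave identically; only the readability changes.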

SSH Jails

Written Monday, August 23rd, 2010 by Chris


I'm not an ISP and only provide hosting to clients whose sites we've developed. Generally our clients don't get SFTP access to their sites since we do the updates, and we're just not set up to let clients start FTPing around on our servers. Clients who do need to make their own updates can either use the CMS we build for them or go to an ISP to host the site.

I've recently run into an exception where we needed to provide SFTP access to a couple of our clients. SFTP means giving them SSH access, which normally translates into shell access. I'm not keen on giving shell access on my servers to anyone but my staff (and even then only to a select few). I've known about SSH jails, but never had a need to create one… until now.

So I went at it and made myself an SSH jail for these select clients. I can now provide SFTP access to the clients' web accounts while preventing them from navigating to arbitrary locations on the server (and without shell access). Other than a few security alterations, I basically followed these directions to set up the server with a jail.
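For reference, newer versions of OpenSSH can do this without any extra tooling, using the built-in ChrootDirectory support. A sketch of the relevant sshd_config lines, assuming a hypothetical "sftponly" group and per-user jail directories (not what I used, but the same end result):

```
# sshd_config sketch: jail members of "sftponly" to their own directory.
# Each chroot directory must be owned by root and not group/world-writable.
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /var/www/jails/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

With this in place, members of the group get SFTP only: no shell, and no way to browse above their jail.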

Magento Database Recovery

Written Thursday, August 12th, 2010 by Chris


We recently made some alterations to our Magento dev server and buggered up the database so badly we couldn't even log into the admin. The easiest way to recover was to restore from last night's backup. Turns out it wasn't so easy.

I removed the buggered MySQL db and restored last night's dump. As I was importing the DB I got this error:

ERROR 1005 (HY000): Can't create table 'Table.frm' (errno: 150)

I knew that none of the files had changed, so it must have been a database issue. After some digging I added this to the SQL file:


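(The original snippet didn't survive in this post. The standard workaround for errno 150 during an import, and likely what went in here, is to disable foreign key checks so tables can be created regardless of the order they appear in the dump:)

```sql
-- Assumed reconstruction of the missing snippet:
SET FOREIGN_KEY_CHECKS=0;
```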
The import worked fine this time round. When I browsed to the site I got this php error:

"PHP Fatal error: Call to a member function extend() on a non-object in /var/www/html/website_thebigtoybook/privatesale/app/code/core/Mage/Core/Model/Mysql4/Config.php on line 115"

I’m pretty pissed off at this point. Thankfully I was able to track down a fix pretty quickly (with some help) and added the following to the top of the .sql file:


I of course had to delete the database, then re-import with this new setting. It worked like a charm and we were back in business. If I had more time I'd look into exactly what these commands do (although I have some idea… related to indices and InnoDB tables), but I'm going away for a few days and just need to get it done… so more homework on this will be done another day.

Input Director

Written Wednesday, July 28th, 2010 by Chris


I, like many developers, have several computers at my disposal. Other than the dozen or so servers I manage for myself and various clients (Linux systems that I only ever touch over SSH), I have two Windows workstations at my desk. One is my desktop and the other my notebook. I use the notebook for all communication stuff like email, IM, Skype, etc. (although it does get used as a test system from time to time as well). All my dev and everyday work is done on my desktop system.

In the IT world there's a thing called a KVM (Keyboard, Video, Mouse) switch, which sysadmins use to control several systems with one keyboard, one monitor, and one mouse. Just hit a keyboard combo (or go old school and press a button on the KVM box) and you can toggle through all your systems without having to move around and swap between keyboards. Major space saver as well. I'm not sure they're used much anymore since SSH is really the only way to go, but maybe Windows sysadmins still use them.


In any case, I needed a sort of KVM to toggle between my desktop and my notebook. Since the notebook has its own screen, I really only need a KM. Several years ago I found a product called Synergy, which was great, but in my experience a bit buggy… especially over a wireless connection. It did the trick, but I was super happy when I found Input Director. It is far less buggy, and is still being developed and improved upon.

Once you have the software set up (which I admit can be a little tricky at times), it basically allows you to move your mouse off one machine and onto the other (virtually, over the network). Once the mouse is on the other system, you can use your keyboard to type to your heart's content. Moving your mouse off the screen and back onto your primary computer returns mouse and keyboard control to your main system. If you have more than two computers, you can set up Input Director to manage them all quite easily. One note of caution though: if several people on the same network are using Input Director, be sure to check your settings or your buddy will be able to take over your computer… makes for a few laughs on April 1.

One other point I want to make: you can copy and paste text from one system to the other. Apparently you can also copy and paste files as long as the source file is in a shared folder (although I've not tried this).

Browser Tools

Written Wednesday, July 14th, 2010 by Chris

We develop websites… and of course we use various tools to do it. Every web developer I know follows these simple rules:

1) Develop for Firefox. Build out your site to work in Firefox first. Once you're happy with the page/site, modify your CSS and HTML to make it work in IE, Safari, Chrome, etc.
2) Install and use the Web Developer Toolbar.
3) The Firebug plugin for Firefox is a MUST HAVE (not to be confused with our friends at the create-a-game site Fyrebug).

For Flash developers, there are some tools that make all the difference when it comes to debugging and testing:
4) Flash Switcher, which allows you to easily toggle between various versions of Flash.
5) Flash Tracer, which lets you see your Flash IDE output inside your browser.
6) The debug versions of Flash Player, which you can get (along with all the past versions) from Adobe's archive.

Munin and Nagios

Written Sunday, July 4th, 2010 by Chris

Munin and Nagios

We use Munin and Nagios to monitor our farm of servers. We recently replaced our own corp site with one based on WordPress. Just this evening I noticed that Munin wasn't returning nice charts for the Apache processes. After some digging I discovered it was due to rewrite rules in the .htaccess overriding the httpd.conf config. Easy fix though: just add a rule to the .htaccess to ignore the server-status URL:

RewriteCond %{REQUEST_URI} !=/server-status
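For context, the condition needs to sit above WordPress's catch-all rewrite so that requests for /server-status fall through to Apache's handler. A sketch of how the resulting block might look, assuming the stock WordPress rules:

```
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# Let mod_status answer /server-status instead of WordPress:
RewriteCond %{REQUEST_URI} !=/server-status
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```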

I’m posting here for my own notes. I give full credit to this site.

rsync

Written Wednesday, June 30th, 2010 by Chris


I've talked about some of my favorite Windows-based programs, but the best program EVER is rsync. I don't go a work day without using rsync. I use rsync for two major purposes:

1) Deployments: We work off a local dev server for all our projects. Historically we've SFTPd the files up to the live environment once we were done. This was fine for our first deployment, but doing updates sometimes required us to track down files deep in the folder structure. More often than not we'd forget a file or two. rsync solved all those issues. Now when we launch a site we just do the following:

rsync --dry-run -vaz --exclude-from /var/www/html/website_client/excludesFolder/excludeList.txt /var/www/html/website_client/ myusername@

Seems simple enough, I know, but no one told me about this. I just had to figure it out on my own, so I'm helping spread the idea. It's not my original idea, so I take no credit (all credit goes to the makers of rsync). Note: obviously remove the --dry-run to actually MOVE the files. The excludeList.txt is just a plain text file listing the files you don't want synced up to the production servers. You know, things like the DB connection file and the .htaccess files that have different paths.
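To make that concrete, a hypothetical excludeList.txt (these file names are made up for illustration; yours will be whatever differs between dev and production) might look like:

```
# rsync exclude patterns, one per line
.htaccess
includes/dbConnect.php
.svn/
cache/
```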

2) Backups: I use rsync for both my local nightly backups and my offsite backups. Due to limited space (or the desire to optimize space) there's a trick you can use:

/usr/bin/rsync -vaz --partial --timeout=800 --exclude-from /backup/rsync_exclude.txt --progress --bwlimit=50 -e /usr/bin/ssh --del --link-dest=/backup/2010-06-09 root@source.ip.address:/data/ /backup/2010-06-24

What’s going on there is:

  • -vaz = verbose, archive, compress: i.e. log the files being transferred, grab everything recursively, preserve as much as possible, and compress the transfer to reduce bandwidth usage.
  • --partial: in case something goes wrong in the middle of a large file transfer, this lets you pick up where you left off.
  • --exclude-from: discussed above, it lists what we don't want transferred.
  • --bwlimit: throttle the bandwidth so I can still browse the web without huge lag. Not required for local rsync, of course.
  • -e: sets the location of your ssh binary.
  • --del: so that we get a mirror image, files are deleted from the backup server if they've been removed from the origin server.
  • --link-dest: this is the most important one. It points at a previous copy of the site… so last night's backup. I'll explain more below.
  • source: where you're getting the files from.
  • destination: where on your backup server you're putting the files.

I’ve taken all this and placed it into a backup script to dynamically generate the dates and make it easy for me to do the same task on other servers.
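A minimal sketch of what such a wrapper could look like (the paths and host are hypothetical, and it only echoes the command rather than running it, so you can sanity-check the generated dates first):

```shell
#!/bin/sh
# Sketch: build the dated --link-dest and destination paths, then
# hand them to rsync. Requires GNU date for the "yesterday" math.
TODAY=$(date +%F)                # e.g. 2010-06-24
YESTERDAY=$(date -d yesterday +%F)

CMD="rsync -vaz --partial --timeout=800 \
--exclude-from /backup/rsync_exclude.txt --bwlimit=50 \
-e /usr/bin/ssh --del --link-dest=/backup/$YESTERDAY \
root@source.ip.address:/data/ /backup/$TODAY"

# Print the command that would run; swap echo for eval (or a direct
# rsync call) once you're happy with it.
echo "$CMD"
```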

So what this command does is look at the --link-dest path and start downloading files that have changed or are new. The really important part, though, is that it creates hard links to the files that are the same/haven't changed. This allows you to browse through the folder structure of a backup and see every file that exists… even if it wasn't backed up that day, BUT it doesn't take up any additional disk space (well, technically it takes up a few bytes for the link itself, but what's a few bytes among friends). When you delete a file you're actually just deleting the hard link, so as long as there's a hard link somewhere on the file system the data will still be there. Nice, eh!?
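The hard-link behaviour is easy to verify with a throwaway demo in a shell (paths here are just scratch space):

```shell
# Create a file, hard-link it, delete the original: the data survives
# because both names point at the same inode.
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
echo "backup data" > original.txt
ln original.txt snapshot.txt     # hard link, not a copy
rm original.txt
cat snapshot.txt                 # prints: backup data
```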

So, to summarize, rsync is awesome for both deployments and backups.  If I were stranded on a deserted island and could only have one program, rsync would be it.

Directory Opus

Written Monday, June 28th, 2010 by Chris


In my last post I talked about my passion for tabs. That passion led me to start using Directory Opus, a Windows program that is a replacement for Explorer. The features I use the most are:

  • explorer window tabs!  Yeah!
  • rename function (select to rename a file then just press the down arrow to start renaming the next file in the list…big time saver)
  • search function is awesome
  • side by side windows (apparently it has FTP built in as well, but I’ve never used it)
  • when copying files it gives you a proper progress bar for the current and total group of files…but better than that you can pause your copy/move!
  • favorites (that’s Jim’s opinion, not mine)
  • view pane to show previews of images (including PSDs) and videos (again Jim likes this one, I don’t use it)
  • overall ability to customize your explorer

There are tons of other features I'm not talking about… I've not even discovered all the cool things this app can do. It's not a tool that everyone NEEDS to have, but it sure is nice to have.

Putty Connection Manager

Written Friday, June 25th, 2010 by Chris


For those who use Putty on Windows, Putty Connection Manager is a MUST HAVE! There are several nice features, but two in particular I just can't live without any more.

1) Putty tabs. Ever since I started using Firefox's tabs I've been a tab freak. If an app doesn't let me have several instances within the same window, it bugs me. With PuttyCM I can have tons of Putty sessions open and easily manage them in one window.

2) Multi-window commands. When you have several sessions open at once, you can lay them out on the screen so you can see them all. This in itself is nice: you can monitor a log while you run commands in a different session. The best part, though, is that you can send the same command to all the open windows. Very nice when you have to install something on 10 servers. I know you can use a script (and I've done this), but there's often a case where you need to do stuff on the fly (the quick and dirty way), and being able to do it across all the systems at once is super nice. Just be careful with the rm -rf *  🙂

Putty Connection Manager, get it, you’ll LOVE IT!

We’re geeks!

Written Friday, June 25th, 2010 by Chris


I have a notebook on my desk (an old school coiled-paper notebook… not one of these fancy Interweb-type gadgets) that I've carried around with me for the past 10 years. It has all sorts of geek notes: things like how to compile a FreeBSD kernel, how to set up Samba, how to config NFS, etc. Some of the notes I now know by heart, but I still refer to it frequently. I know I can find all this stuff on Google, but good old pen and paper has been tried and true for me, so I have a tendency to start there… especially if I've already done something once before.

It's time to join the new millennium, though, and move these notes to an electronic version. It'll save me the time of having to Google as much and, of course, add to the masses of knowledge out there… albeit mostly redundant, I'm sure.

New Website

Written Tuesday, June 22nd, 2010 by Chris

Welcome to the new website. We've just moved our Flash-based website over to WordPress. Although we do about 80% of our development in Flash (or Flex), we've decided to migrate the site to an HTML-based site instead. There are several reasons for this:

1) Easier Maintenance – Although we build tools to allow for easy updates in Flash, it's still not as convenient as good old HTML. We can make fast edits and updates without much effort at all. Even though we're developers and could technically do all the pieces of our own website ourselves… we're busy, and the thing that gets pushed off is always your own stuff. So our site was woefully out of date, with several missing projects not yet listed. Making this switch to WordPress means an intern or administrative staff member can help out, since I can give them access to update the site themselves.

2) WordPress = CMS – We've been using WordPress more and more for our clients. Although I have many beefs with the way WordPress does things, the simple fact is that it's easy. It's easy for our clients to use, easy to skin, easy to install 3rd-party plugins into, and easy to pass off to someone else to work on (I'm a busy guy, remember). I like easy.

3) iPhone/iPad – There's been a lot of buzz about the new Apple toys not supporting Flash. I'm not making the switch for this reason (traffic to the site from these devices is less than half of one percent), but it is a positive side effect that I wanted to point out. I'll get into my views on the iPhone/iPad's lack of Flash support in another post.

4) I want to blog – I have opinions. Pretty strong ones, in fact, and I'd like to voice some of them. I also have a very good memory… it's just short. So I wanted to have my own repository of notes. I'm still pretty old school when it comes to reminders; I keep notebooks and sticky notes all over my desk. So this is an attempt at organizing myself.

5) I’m an expert (“ex” as in “a has-been” and “spurt” as in “a drip under pressure”) – … in some areas, so I figured my company site was a good place to talk about the things I do and share some of that knowledge.  I’ll try and convince some of the other BashBang team to write about things they are experts in or things they’ve learned.  In actuality I hate to be called an expert.  There’s always someone out there who knows more than me.  I’m actually a jack of all trades type.  Not just in computers, but other stuff too.  I’ll probably stick to computer stuff in this blog though.

6) I want to write gooder – I write a lot, but it’s mostly proposals and emails.  Everything I write I try to add in a laugh or two…the more boring the document, the more campy my writing becomes. I’ve heard from various sources that the more one writes the better one gets at it … so I’m calling this hour one, 9999 hours to go (Outliers).