Setting up ColdFusion 10 to replicate ehCache

Categories: HTML/ColdFusion

Outside of Rob Brooks-Bilson's blog, there's not a lot of information on digging into ColdFusion's internal implementation of ehCache. I recently spent some time getting the built-in ehCache implementation to replicate across multiple nodes using RMI. Overall the process isn't that difficult, but due to the lack of information out there, it took me much longer than expected to figure out the exact steps necessary to get things working.

NOTE: ColdFusion 10 added the ability to specify a specific ehcache.xml file on a per-application basis using the this.cache.configfile value. Unfortunately, you cannot use this method to configure replication. You will need to modify the default ehcache.xml that ships with ColdFusion.
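
For reference, here's roughly what that per-application setting looks like in Application.cfc (the file name and path below are just placeholders); again, this approach cannot be used to configure replication:

<cfcomponent output="false">
  <cfset this.name = "myApp" />
  <!---// hypothetical path: points this application at its own ehcache.xml //--->
  <cfset this.cache.configfile = expandPath("./config/ehcache-app.xml") />
</cfcomponent>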

  1. The first thing you want to do is back up your default ehcache.xml file. It's in the ColdFusion 10 install folder at cfusion/lib/ehcache.xml. I recommend saving a copy as ehcache.xml.bak; if you run into issues, you can always restore the original configuration.
  2. Open the ehcache.xml in your favorite editor.
  3. Find the CacheManagerPeerProvider comment. After that comment there's a sample <cacheManagerPeerProviderFactory /> element. Below that, insert the following element:

    <!--
      In order for this rule to work, you must:
    
      * Configure the operating system to use multicast
      * Open up UDP on port 4446 in the firewall
    -->
    <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
      properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,multicastGroupPort=4446,timeToLive=1"
      propertySeparator=","
    />

  4. Right below that, you should see the CacheManagerPeerListener comment. After the comment, there's a sample <cacheManagerPeerListenerFactory /> element. Below that, insert the following element:

    <!--
      In order for this rule to work, you must:
    
      * Open up TCP on ports 40001 & 40002 in the firewall
    -->
    <cacheManagerPeerListenerFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
      properties="port=40001,remoteObjectPort=40002"
    />

  5. Lastly, go down to the <defaultCache /> element and add the <cacheEventListenerFactory /> element. For example:

     <!--
     Mandatory Default Cache configuration. These settings will be applied to caches
     created programmatically using CacheManager.add(String cacheName).
    
     The defaultCache has an implicit name "default" which is a reserved cache name.
     -->
    <defaultCache
      maxElementsInMemory="10000"
      eternal="false"
      timeToIdleSeconds="86400"
      timeToLiveSeconds="86400"
      overflowToDisk="false"
      diskSpoolBufferSizeMB="30"
      maxElementsOnDisk="10000000"
      diskPersistent="false"
      diskExpiryThreadIntervalSeconds="3600"
      memoryStoreEvictionPolicy="LRU"
      clearOnFlush="true"
      statistics="true"
    >
      <!-- apply the replication listener to all caches created by ColdFusion -->
      <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory" />
    </defaultCache>
  6. Save your changes.
  7. Restart ColdFusion.
  8. Open up the following firewall rules:
    • UDP: port 4446
    • TCP: ports 40001-40002
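
Once both nodes are back up, a quick way to sanity-check that replication is working is with ColdFusion's built-in cache functions. This is just a manual smoke test using ColdFusion's default object cache; the key and value are arbitrary:

<!---// on node 1 //--->
<cfset cachePut("replicationTest", "hello from node 1") />

<!---// on node 2, a moment later; this should output the value put on node 1 //--->
<cfoutput>#cacheGet("replicationTest")#</cfoutput>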

In order for all of this to work, your server must be configured for multicast. On Red Hat, this isn't part of the standard configuration, so there are some additional steps you may need to take.

While ehCache does allow you to set up a manual peer-to-peer configuration, it's very unwieldy and won't work well with ColdFusion unless you explicitly use named cache regions in all your code. Multicast is effectively required because it's the only discovery method that lets ehCache replication work without manually specifying the name of each region to replicate.
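
To illustrate why, here's a rough sketch of what manual (peer-to-peer) discovery looks like in ehcache.xml; every cache you want replicated has to be listed by name in rmiUrls, so the host name and cache names below are purely placeholders:

<cacheManagerPeerProviderFactory
  class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
  properties="peerDiscovery=manual,rmiUrls=//server2:40001/myNamedRegion|//server2:40001/anotherRegion"
  propertySeparator=","
/>

Since ColdFusion creates most of its caches programmatically, you generally don't know those cache names ahead of time, which is what makes the manual approach so awkward.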

I hope this helps someone out!

jNotify jQuery Plug-in v1.2.00

Categories: jQuery, JavaScript

The jNotify plugin was just updated with a new "click to dismiss" feature, which lets a user click on a notification to dismiss it!

Overriding jQuery Mobile's default behavior when loading a page generates a 500 status code

Categories: HTML/ColdFusion, jQuery

I'm working on a jQuery Mobile project at the moment and I was not happy with the way jQM was handling errors from the server. Our application's error handling code sets the HTTP status code to 500. We do this because it makes it much easier to track problematic requests. The issue we were running into was that jQM just wants to display an "Error Loading Page" dialog whenever it detects a 500 status code.

In our case, our error pages are formatted for use with jQM, so we really wanted to display the error just as if the page had loaded successfully. We could have returned a status code of 200, but we wanted to respond with the correct status code.

To work around this issue, jQM fires a "pagecontainerloadfailed" event whenever a page fails to load. Unfortunately, it's not very clear how to use this event to treat the request as a successful page load. You can use event.preventDefault() to cancel the default behavior, but how do you then handle it as a successful page load?

Well that took some digging, but I finally found a solution that appears to work really well:

// if an error occurs loading the page, the server will make sure we have a properly formatted document
$(document).on("pagecontainerloadfailed", function(event, data) {
  // if the error response is formatted for jQM, it'll have a custom response header that flags to use the standard page handler
  if( data.xhr.getResponseHeader("X-jQueryMobile-HasLayout") === "true" ){
    // let the framework know we're going to handle things.
    event.preventDefault();
    
    // load the results as if they had succeeded
    $.mobile.pageContainer.data("mobilePagecontainer")._loadSuccess(data.absUrl, event, data.options, data.deferred)(data.xhr.responseText, "success", data.xhr);
  }
});

What this code does is look for a custom response header of "X-jQueryMobile-HasLayout" and, if it sees the header is true, stop the default behavior and load the page as if it had detected a 200 status code.

I decided to use a custom header to determine whether to run the code or not, because that way any unexpected error that occurs outside our framework still behaves the way jQM acts by default. However, if our application error handler runs, we set the header to "true". This tells jQM that the resulting HTML is in a format that jQM can use to render the page.
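
On the server side, setting that header from your ColdFusion error handler is a one-liner; where exactly it goes depends on your framework's error handling, but conceptually it's just:

<!---// inside the application's error handler, before rendering the jQM-formatted error page //--->
<cfheader name="X-jQueryMobile-HasLayout" value="true" />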

I ended up posting this basic solution in GitHub issue 6866 (Add documentation for presenting server generated error pages), and Alexander Schmitz, a jQuery Foundation member, confirmed this is the basic approach they plan on implementing in the future.

Firefox 29 now aborts unresponsive HTTP requests after 300 seconds

Categories: HTML/ColdFusion

While this shouldn't affect most users (and by and large is probably a good thing), I noticed that Firefox 29 started automatically aborting requests after 300 seconds if the server hasn't responded. As a developer, I often have test scripts that can take a very long time to run to completion, so this change was causing me some issues running a script to validate some data.

The change appears to be the network.http.response.timeout setting in about:config. It appears this setting used to be undefined, but it is now set to 300.

It appears setting the value to 0 restores the old behavior, so that Firefox will no longer automatically abort requests. Once I made this change, my test script started running successfully (its final runtime was about 6 minutes).

Anyway, I don't expect this to affect most people. I think the impact is going to be felt mostly by developers who might have unit tests that run via a browser or other test scripts which could take a very long time to run.

qForms updated to fix issue in Chrome 34

Categories: qForms

While I never planned on releasing another update for qForms v1 (after all, it was written 14 years ago and supports Netscape 3), a change in Chrome 34 broke qForms in a way that prevented the values of radio elements and checkboxes from being detected properly. The change has to do with the fact that RadioNodeList now returns a value property.
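
To illustrate the change (the form and field names here are made up), accessing a radio group through form.elements now returns a RadioNodeList whose value property reflects the checked radio, which is presumably what tripped up qForms' old detection logic:

var form = document.forms["example"];

// In Chrome 34+, form.elements["color"] is a RadioNodeList.
// Its .value is the value of the checked radio button, or "" if none is checked.
var radios = form.elements["color"];
console.log(radios.value);   // e.g. "blue"
console.log(radios.length);  // still array-like, so you can loop over the individual inputs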

If you're still using qForms in production, I suggest you upgrade to build 144.

MouseIntent jQuery Plug-in

Categories: HTML/ColdFusion, jQuery, JavaScript

One common pattern I've seen in usability is that users don't always have great control over their mouse. It's easy to accidentally overshoot a target area with your mouse, which is really frustrating with things like nested dropdown menus, where accidentally mousing away from a target area may end up collapsing/closing the open menu.

Giva recently released the MouseIntent jQuery Plug-in, which aims to give developers a way to control this behavior. It works by monitoring an invisible border around the element to see what the user appears to be doing. If a user quickly moves back into the original element, then no mouseaway event is ever fired. The plugin has a number of settings that allow you to control the behavior, such as monitoring whether or not the user is still moving the mouse in the invisible border area.

I've used the plugin in one of our dropdown menus that has nested menus and it's really helped to improve our users' experience.

Maskerade Date Mask Input Plugin

Categories: HTML/ColdFusion, jQuery, JavaScript

Giva has released a new plugin (Maskerade jQuery Plug-in) which converts a normal text field into a powerful date mask input field. The plugin supports a large array of date masks (even quarters) and even supports copy/paste. Here's a list of some of its key features:

  • Keypress validation (ie. you don't need to submit the form for the mask to be applied)
  • Full keyboard support, including number to text-date interpretation (eg. typing 6 for a month will show June) and number-entry interpretation (eg. typing 02 in a yyyy date field will be interpreted as 2002)
  • Full mouse support
  • Masks can be defined as attributes of the input field; individual jQuery mask calls are not needed
  • Includes time-mask capability, either combined with a date or on its own
  • Default values and masks set as placeholders in the input field
  • Ability to set min and max dates allowed on a field
  • Allows for enforcing relational validation (ie. date1 must be before date2)
  • Automatic adjusting for invalid dates (eg. Feb 29, 2001 is adjusted to Feb 28, 2001)
  • Each date/time part fully highlighted on focus
  • Automatic tabbing to next date/time part once interpreted (eg. typing 2 in a "mm" date part will automatically tab you to the next date part), which allows quick keyboard entry
  • Allows for date masks by quarters (eg. Q1, Q2, etc.)
  • Ability to support multiple languages
  • Custom event handlers; for example, a single keystroke can be defined to change the date to the current date
  • Detach/attach Maskerade behavior from the element

We have actually been using this plugin in production for a long time with great success.

ColdFusion 9/10 generating hidden exceptions when using Images stored in RAM disk

Categories: HTML/ColdFusion

We recently discovered an issue in ColdFusion 9+ where images that were temporarily stored on the RAM disk, but later removed, would start throwing exceptions in application.log, exception.log and coldfusion-out.log. The code itself runs just fine, so the issue is somewhat masked; you won't see it unless you're monitoring your log files.

What we were seeing was a lot of errors like the following being thrown throughout our logs:

Could not read from ""ram:///797C39D0-CAE4-21F9-D573CFDC3FE7482E.jpg"" because it is a not a file.

When we tracked down why the log entries were being written, we discovered that the following workflow was causing the problem:

  1. Called a UDF to return a reference to a ColdFusion image object. The UDF would:
    • Use the RAM disk to convert the image into a common image format
    • It would then remove the temp file from the RAM disk
    • It would scale the image
    • Finally, it returned a reference to the ColdFusion image
  2. We would then attempt to write the image object to disk

It was when trying to write the image to disk that we'd see three exceptions logged, even though the code generated the expected output. What appears to be happening is that, internally, ColdFusion is trying to access the original "source" file for some reason.

What we did to fix the issue was to return a new copy of the image using imageNew(imageGetBufferedImage(source)). This creates a copy of the image that no longer references any file on disk; the image exists purely in RAM.
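
Here's a rough sketch of what the tail end of such a UDF might look like with the fix applied (the variable names are just illustrative):

<!---// ...inside the UDF, after the image has been converted/scaled and the RAM disk temp file deleted //--->
<!---// return a detached copy so the image no longer references the (now deleted) ram:/// source //--->
<cfset detachedImage = imageNew(imageGetBufferedImage(source)) />
<cfreturn detachedImage />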

I'm sure this isn't a very common problem, but if you're using Dave Ferguson's ColdFusion 9 PNG image processing fix, you may find yourself running into this issue.

I've filed Bug #3690487 with Adobe. This problem affects both ColdFusion 9 and 10, so if you think Adobe should fix it, make sure to vote it up!

Windows XP Windows Update issue (i.e. the svchost.exe 100% CPU issue)

Categories: Technology

While I left Windows XP behind a long time ago as my main operating system, I still run numerous virtual machines running Windows XP in order to test with older versions of Internet Explorer. One problem I've been running into with my VMs is that when Windows Update was running, the CPU would get pegged at 99%–100% usage, which makes Windows unusable.

I tried a number of things to work around the problem to no avail and finally just decided to shut down Windows Update in order to make the VMs usable. However, that leaves me unable to patch my VMs to make sure they're completely up-to-date.

Today I finally had to update one of my VMs, so I really needed to resolve the problem. After some reading, I found that Microsoft is aware of the problem and that it relates to parsing the update tree to find out which updates are needed. The good news is I found a fix that seems to work for me. The trick is to manually install 2 specific security updates.

Here's how I finally resolved the problem:

  1. Disable automatic Windows Updates
  2. If your CPU is pegged, open the Windows Task Manager (CTRL+ALT+DEL) and kill the svchost.exe process pegging the CPU
  3. Install the following updates, rebooting after each one:
  4. Manually run Windows Update; it should now run normally
  5. If you wish, enable automatic Windows Updates

Hope that helps someone!

Update to the Linkselect jQuery Plugin (v1.5.11)

Categories: Source Code, jQuery, JavaScript

I just pushed out an update to the Linkselect jQuery Plugin.

Revisions

v1.5.11 (2013-07-09)

  • The Linkselect plugin now announces an "update" event whenever the value changes, which allows you to set up listeners for value changes (see the sketch below).
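
For example, assuming the event is triggered on the element the plugin was applied to, you could listen for it like this (the selector and handler are just placeholders):

// run whenever the linkselect value changes
$("#mySelect").bind("update", function () {
  console.log("new value: " + $(this).val());
});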

Update to the mcDropdown jQuery Plugin (v1.3.3)

Categories: Source Code, jQuery, JavaScript

I just pushed out an update to the mcDropdown jQuery Plugin.

Revisions

v1.3.3 (2013-07-09)

  • Fixed an issue where a menu option underneath a selected sub-menu option would sometimes disappear from the menu
  • Fixed an issue where sub-menus would sometimes still be open after re-opening a menu

Using queryName.columnName shorthand can generate errors with <cfqueryparam />

Categories: HTML/ColdFusion

Yesterday I ran into a very strange bug with ColdFusion 9 that I thought was worth blogging about. I think this probably affects earlier versions of the product as well, but I haven't tested to confirm.

What was happening is whenever I tried executing a specific query, I was seeing the following error:

Invalid data coldfusion.sql.QueryColumn@540350 for CFSQLTYPE CF_SQL_INTEGER

The error had me very perplexed, because I knew the variable it was complaining about was an integer. When displayed on screen, it showed as an integer. It would even return true when passed to the isNumeric() function. After spending way too much time on the issue, I finally tracked down the root problem.

What was happening is that the variable's value was coming from a ColdFusion query that I was converting to a structure. Since the query was designed to return at most a single row, I was using the shortcut notation queryName.columnName to update the variable. I've used this shorthand plenty in the past, because it will either display the value in the first row (when not inside a <cfoutput query=""> or <cfloop query="">) or display an empty string if the query returned no rows. This has generally worked fine for me, but apparently it ends up storing a reference to the query column object instead of copying the value directly, which <cfqueryparam /> did not like.

The fix was pretty straightforward: all I needed to do was change my code to explicitly grab the value from the first row of the dataset: queryName.columnName[1].

My original code looked like this:

<cfquery name="data" attributeCollection="#Application.dsn.getAttributes()#">
  select
    Name, Email, Phone
  from
    Employee
  where
    EmployeeId = <cfqueryparam cfsqltype="cf_sql_integer" value="#arguments.EmployeeId#" />
</cfquery>

<!---// get the column names from the query //--->
<cfset columns = getMetaData(data) />

<!---// return the preferences as a struct //--->
<cfloop index="column" array="#columns#">
  <cfset results[column.Name] = data[column.Name] />
</cfloop>

All I did was change the code to:

<cfquery name="data" attributeCollection="#Application.dsn.getAttributes()#">
  select
    Name, Email, Phone
  from
    Employee
  where
    EmployeeId = <cfqueryparam cfsqltype="cf_sql_integer" value="#arguments.EmployeeId#" />
</cfquery>

<!---// get the column names from the query //--->
<cfset columns = getMetaData(data) />

<!---// return the preferences as a struct //--->
<cfloop index="column" array="#columns#">
  <cfset results[column.Name] = data[column.Name][1] />
</cfloop>
NOTE:
You'll notice the only change is the [1] after data[column.Name].

What I learned from this is that I shouldn't trust using the queryName.columnName shorthand—at least not outside a cfoutput/cfloop query block. Instead I need to make sure to reference an actual row from the query.

Forcing your Application.onError event to run to completion in ColdFusion

Categories: HTML/ColdFusion

A common practice when building applications in ColdFusion is to utilize the onError event in the Application.cfc in order to track and log errors that occur in your application, so that you can track down the problems and resolve them. However, there's one type of error that can often escape your onError event handler—and that's requests that are timing out.

First, just some background information on how ColdFusion handles "request timeouts". If the server (or current page) is configured to time out after 30 seconds, ColdFusion will not simply stop executing when the running request reaches 30 seconds. Instead, there are specific operations in ColdFusion1 that check the current running time to see if the request should be halted. That's why, if you have a SQL query that takes 45 seconds to run, the page doesn't simply stop after 30 seconds. Instead the query will finish executing, and your code won't halt until it tries to execute logic that checks the current execution runtime.

NOTE:
This is also why, when ColdFusion reports the line that took too long to run, it often isn't pointing to the actual line of code that was the real culprit, but to a tag like <cfoutput> that is just displaying the information.

Now that you hopefully have a better understanding of when request timeouts are thrown, let's examine why the Application.onError might not run.

The problem isn't that the Application.onError event doesn't get fired—it does. The problem is that because your page has already been running longer than the allotted time, as soon as ColdFusion encounters one of the operations that checks whether the page should time out, it will throw a second error—which effectively breaks your onError event.

The way you can get around this problem is by tracking the current execution time in your Application and then having your Application.onError immediately adjust the page's request timeout setting as its first line of logic. Since the <cfsetting /> tag does not check against the current execution time, this allows you to add a buffer to your onError request so that the event can run to completion.

Here's a sample snippet of an Application.cfc which will allow the Application.onError event to run for another 10 seconds—regardless of how long the current template has been running:

<cfcomponent output="false">
  <!---// track the starting execution time //--->
  <cfset executionStartTime = getTickCount() />

  <!---// onError //--->
  <cffunction name="onError" returnType="void" output="true">
    <cfargument name="exception" type="any" required="true" />
    <cfargument name="eventName" type="string" required="true" />

    <!---// declare local variables //--->

    <!---//
      take the current time the request has been running and add 
      10 seconds, to attempt to run the onError handler successfully
    //--->
    <cfsetting requesttimeout="#(((getTickCount()-executionStartTime)/1000)+10)#" />

    <!---// insert error code handling here //--->
  </cffunction>
</cfcomponent>

I've been using this trick to help make sure any requests that are timing out are fully logged, so I can evaluate the issue and look for ways to fix the root problem.

1 Unfortunately I do not have a list of which operations in ColdFusion do an integrity check on the request lifecycle. I do know that <cfloop>, <cfoutput>, <cfquery> and most complex cf-based tags do check the current running time against the page's request timeout setting.

Merging changes after a successful pull request back into your fork using EGit

Categories: Source Code

Yesterday I finally had the incentive to download Git so I could fork a GitHub project (Mustache.cfc) and contribute some changes I'd made. Since Eclipse is my main IDE, EGit seemed like the logical client to install. The install process was painless and I was able to figure out how to clone a local copy, commit and push changes back to GitHub. Even the GitHub pull request process was painless. To my pleasant surprise, my pull request was accepted almost immediately and merged back into the main branch.

Naturally, the next thing I wanted to do was sync my fork with the original repository. This is where I got stuck. Whether due to my lack of understanding of Git terminology or just my lack of Google search skills, I was not able to easily find instructions for syncing the original repository back to my fork. After much searching, I was finally able to find some help.

However, I figured I'd clean up the instructions a bit and provide a little more detail on the steps for merging the main repository back into your fork.

The first thing you need to do is set up a new remote in your fork's local repository that points at the original repository:

  1. Open the "Git Repositories" view (Window > Show View > Other > Git > Git Repositories)
  2. Locate your local copy of the fork
  3. Expand the "Remotes" branch (you should see an "origin" entry)
  4. Right-click on "Remotes" and select "Create Remote"
  5. Enter a name representing the master repository, for this example I'll use "mainrepo"
  6. Select the "Configure fetch" option
  7. Click "OK"
  8. Go to the main repository in your browser
  9. Copy the Git URI into your clipboard
  10. Go back to Eclipse
  11. Click on the "Change" button
  12. Paste the URI into the "URI:" field
  13. Click "Finish"
  14. Click the "Add…" button
  15. In the "Source" field, type in the name of the branch you want to import from.

    NOTE: In most cases if you start typing "m", it should show you an auto-complete entry with the "master" branch—just select that option to sync to the master branch.
  16. Click "Finish" (or click "Next" if you want more options)
  17. Click "Save and Fetch"

Now that you have linked the original repository to your fork, you can merge changes from the original repository into your fork:

  1. Right-click on your project and go to Team > Merge
  2. Under "Remote Tracking", find the "mainrepo/master" (or the appropriate repository/branch based on the remote repository you just configured.)
  3. Click "Merge"
  4. If your repository is up-to-date, there's nothing else to do.
  5. Finally, just commit your changes locally and push back to the upstream

These steps worked for me, so hopefully they'll help guide someone else!

Adding custom callbacks to existing JavaScript functions

Categories: jQuery, JavaScript

This morning I was reading Adding your own callbacks to existing JavaScript functions by Dave Ward, which covers how to overwrite an existing function so you can add some additional functionality (in this case, callbacks). While the article is informative, a couple of changes can dramatically improve his suggestion.

If you don't want to take the time to read Dave's article, in a nutshell he describes how we can overwrite a JavaScript function by storing a reference to the original function in a variable. So, we can take the following function:

function sayHello(name){
  alert("Hello, " + name + "!");
}

And we can now overwrite that function by storing a reference to the original function in a variable:

var sayHelloOld = sayHello;

function sayHello(){
  var name = prompt("Enter your name");
  sayHelloOld.apply(this, [name]);
}

Now there are a couple of problems with the above code.

  1. As written, this would only work if the original sayHello() function was defined in another <script> tag because of function hoisting.
  2. We're polluting the global namespace.

We can solve both those problems by using a closure around our code:

// define a closure and pass in a reference to the global window object
(function (w){
  var sayHelloOld = w.sayHello;
    
  w.sayHello = function (){
    var name = prompt("Enter your name");
    sayHelloOld.apply(this, [name]);
  }
})(window || {});

(NOTE: You can see a working copy on JSFiddle.)

The other topic Dave discusses is how to add callback hooks that run before and after the original function code runs. His suggestion is built around using the global namespace to declare some function names. Since the example is based around jQuery, I'd suggest a much better method would be to add custom events to your function. This gives you a way to bind callbacks to run, but you're neither cluttering the global namespace nor running into issues if the callbacks aren't needed.

Dave's Original Solution

Here's what Dave's original solution looks like:

var oldTmpl = jQuery.fn.tmpl;
 
// Note: the parameters don't need to be named the same as in the
//  original. This could just as well be function(a, b, c).
jQuery.fn.tmpl = function() {
  if (typeof onBeforeTmpl === 'function')
    onBeforeTmpl.apply(this, arguments);
 
  // Make a call to the old tmpl() function, maintaining the value 
  //  of "this" and its expected function arguments.
  var tmplResult = oldTmpl.apply(this, arguments);
 
  if (typeof onAfterTmpl === 'function')
    onAfterTmpl.apply(this, arguments);
 
  // Returning the result of tmpl() back so that it's actually 
  //  useful, but also to preserve jQuery's chaining.
  return tmplResult;
};

Improved Solution

Using the two previously mentioned techniques combined, here's how I'd change that code:

(function ($){
  var oldTmpl = $.fn.tmpl;

  // Note: the parameters don't need to be named the same as in the
  //  original. This could just as well be function(a, b, c).
  $.fn.tmpl = function(){
    // trigger the before callback
    // to attach a callback, we just bind() this custom event to our jQuery object
    this.trigger("onBeforeTmpl", arguments);

    // Make a call to the old tmpl() function, maintaining the value 
    //  of "this" and its expected function arguments.
    var tmplResult = oldTmpl.apply(this, arguments);

    // trigger the after callback
    // to attach a callback, we just bind() this custom event to our jQuery object
    this.trigger("onAfterTmpl", arguments);

    // Returning the result of tmpl() back so that it's actually 
    //  useful, but also to preserve jQuery's chaining.
    return tmplResult;
  };
})(jQuery || {});

This gives us a few benefits over Dave's original solution:

  1. We're not polluting the global namespace
  2. We can now attach custom callbacks to each jQuery object separately

To use our code, we can do:

$("#id")
  .bind("onBeforeTmpl", function (){
    alert("before!");
  })
  .bind("onAfterTmpl", function (){
    alert("after!");
  })
  .tmpl(data, options, parentItem);
NOTE:
If you prefer to run the same callbacks for all $.tmpl() calls, you could attach the custom events globally.
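
For example, since jQuery custom events triggered on an element bubble up the DOM, one way to do that (a sketch, not from Dave's article) is to delegate the handlers at the document level:

// these handlers run for any element whose tmpl() call triggers the custom events
$(document).on("onBeforeTmpl", function (event) {
  console.log("before tmpl on", event.target);
});
$(document).on("onAfterTmpl", function (event) {
  console.log("after tmpl on", event.target);
});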

Any comments on how to make this solution even better?