While upgrading to ColdFusion 10u18 (APSB15-29) in my development environment, I immediately started having problems with the CF Administrator. The topnav.cfm was generating errors (I wasn't seeing the normal icons at the top right) and any attempt to go to the Server Update > Updates page threw one of the following errors:
java.lang.NullPointerException
at coldfusion.server.UpdateService.init(UpdateService.java:127)
at coldfusion.server.UpdateService.<init>(UpdateService.java:118)
at coldfusion.server.UpdateService.getInstance(UpdateService.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at coldfusion.runtime.java.JavaProxy.invoke(JavaProxy.java:97)
at coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:2428)
or:
Element UPDATESETTINGS.UPDATESERVICE is undefined in a Java object of type class [Ljava.lang.String; referenced as ''
The error occurred in C:/work/cf10_final_hotfix/cfusion/wwwroot/CFIDE/administrator/updates/index.cfm: line 103
Called from C:/work/cf10_final_hotfix/cfusion/wwwroot/CFIDE/administrator/updates/index.cfm: line 52
Called from C:/work/cf10_final_hotfix/cfusion/wwwroot/CFIDE/administrator/updates/index.cfm: line 51
Called from C:/work/cf10_final_hotfix/cfusion/wwwroot/CFIDE/administrator/updates/index.cfm: line 1-1 : Unable to display error's location in a CFML template.
After working with Adobe, it was determined that the {cf_install_home}/{instance_name}/lib/neo_updates.xml file was corrupted and missing the <defaulturl> element. It appears this file was corrupted in the version of hotfix_018.jar that was originally available. I just re-downloaded the JAR file and the problem appears to be resolved, so I believe Adobe has already fixed the corrupted archive.
The fix was to replace the neo_updates.xml with the following:
<?xml version="1.0" encoding="UTF-8"?>
<settings>
<update autocheck="false" checkinterval="10" checkperiodically="false">
<url>
http://www.adobe.com/go/coldfusion-updates
</url>
<defaulturl>http://www.adobe.com/go/coldfusion-updates</defaulturl>
<notification>
<emaillist/>
</notification>
</update>
</settings>
And then restart the ColdFusion service.
Hopefully no one else runs into this issue, but if you do, I hope this helps!
It's been a while since I've updated the theme of the blog, so I spent some time redoing it. My main goal was to create something built for responsive design. I was able to bootstrap my idea with the Start Bootstrap - Clean Blog theme, which leverages Bootstrap for the base CSS and adds some specialized styling. I made a few tweaks to the template source to adjust it to the way I wanted.
There's not a ton of information out there on using ColdFusion Components from within Java, so I wanted to document a problem I was having. I'm in the process of evaluating Drools as a rules engine to use within ColdFusion. One of the problems you face in using Drools is that you need to represent your CFC in a way that Drools can work with. This means you need to do a couple of things: define a Java interface that describes your CFC, and wrap the CFC instance in a dynamic proxy (via createDynamicProxy()) so that Java code sees it as an instance of that interface.
Setting all this up was pretty straightforward.
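For reference, wrapping the CFC in a dynamic proxy that Java (and therefore Drools) can see as a typed object looks something like this. This is a minimal sketch; it assumes the CFC shown below is saved as Applicant.cfc (the file name is hypothetical) and that the compiled ApplicantInterface class is available on ColdFusion's classpath:

// create an instance of the CFC (shown below)
applicant = new Applicant();

// wrap the CFC instance in a dynamic proxy that implements the Java interface;
// Drools (or any other Java code) can now treat it as an ApplicantInterface
proxy = createDynamicProxy( applicant, [ "ApplicantInterface" ] );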
The problem I was running into was that I could never successfully call a setter on a CFC object that was created dynamically when using the accessors="true" option on my CFC.
Here was my very simple CFC:
component output="false" persistent="false" accessors="true" {
    property name="name" type="string";
    property name="age" type="numeric" default="-1";
    property name="valid" type="boolean" default="true";
}
The problem was that every time I'd try to call a setter on my dynamic proxy object, I'd see an error like "java.lang.ClassCastException: coldfusion.runtime.TemplateProxy".
The Java interface I was using looked like this:
public interface ApplicantInterface {
    public String getName();
    public void setName(String name);

    public int getAge();
    public void setAge(int age);

    public boolean getValid();
    public void setValid(boolean valid);
}
Accessing the getters() worked as expected, so I knew there was something in the way the setters were working that was causing a problem.
After dumping out the metadata for my CFC, I finally discovered the issue.
When CF10 automatically creates setters, the return type is not void, but instead a reference to the CFC itself.
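If you want to see this for yourself, a quick metadata dump is all it takes (a minimal sketch, reusing the hypothetical Applicant.cfc name from above):

// dump the component metadata; the generated setName()/setAge()/setValid()
// functions report the component itself, not void, as their return type
writeDump( getMetaData( new Applicant() ) );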
There are two ways to resolve the problem: (1) define the setters explicitly in the CFC so they return void, or (2) change the Java interface so the setters' return type matches what ColdFusion actually generates.
I decided to use option #2. Here's what my final interface looked like:
public interface ApplicantInterface {
    public String getName();
    // automatic setters, return "this" which is a reference to the proxy object
    public coldfusion.runtime.TemplateProxy setName(String name);

    public int getAge();
    // automatic setters, return "this" which is a reference to the proxy object
    public coldfusion.runtime.TemplateProxy setAge(int age);

    public boolean getValid();
    // automatic setters, return "this" which is a reference to the proxy object
    public coldfusion.runtime.TemplateProxy setValid(boolean valid);
}
In order to use a return type of "coldfusion.runtime.TemplateProxy", you'll need to make sure to add the cfusion.jar to the classpath when compiling your code. For example:
javac -g -cp {{ColdFusion 10 Install Folder}}\cfusion\lib\cfusion.jar -d ./bin/ ./src/*.java
Hope this helps someone else in the future!
I was working on getting vsftpd set up with some virtual users and wanted to use an Apache-style users file to manage the virtual users. I found a number of guides that showed how to configure things, but couldn't get it working. After much debugging, I realized the problem was that CentOS doesn't install the pam_pwdfile.so module by default.
So, before you can use a pwdfile with vsftpd, you will need to install the pam_pwdfile.so module. Here are the install directions I used:
After you have installed the module, make sure to restart any services that depend on PAM.
Outside of Rob Brooks-Bilson's blog, there's not a lot of information on digging down into ColdFusion's internal implementation of ehCache. I recently spent some time getting the built-in ehCache implementation to replicate across multiple nodes using RMI. Overall the process isn't that difficult, but due to the lack of information out there, it took me much longer than expected to figure out the exact steps necessary to get things working.
NOTE: ColdFusion 10 added the ability to specify a specific ehcache.xml file on a per-application basis using the this.cache.configfile value. Unfortunately, you cannot use this method to configure replication. You will need to replace the default ehcache.xml that ships with ColdFusion.
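For completeness, here's what that per-application setting looks like (a minimal sketch; the application name and config file name are hypothetical). It's handy for defining custom cache regions, but it won't get you replication:

// Application.cfc
component {
    this.name = "myApp";

    // per-application Ehcache configuration; useful for custom cache regions,
    // but it cannot be used to set up replication
    this.cache.configfile = "ehcache-app.xml";
}

The replication changes instead have to go into the default ehcache.xml, as noted above. Here are the relevant additions I made: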
<!--
  In order for this rule to work, you must:
    * Configure the operating system to use multicast
    * Open up UDP on port 4446 in the firewall
-->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,multicastGroupPort=4446,timeToLive=1"
    propertySeparator=","
/>

<!--
  In order for this rule to work, you must:
    * Open up TCP on ports 40001 & 40002 in the firewall
-->
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="port=40001,remoteObjectPort=40002"
/>

<!--
  Mandatory default cache configuration. These settings will be applied to caches
  created programmatically using CacheManager.add(String cacheName). The defaultCache
  has an implicit name "default", which is a reserved cache name.
-->
<defaultCache
    maxElementsInMemory="10000"
    eternal="false"
    timeToIdleSeconds="86400"
    timeToLiveSeconds="86400"
    overflowToDisk="false"
    diskSpoolBufferSizeMB="30"
    maxElementsOnDisk="10000000"
    diskPersistent="false"
    diskExpiryThreadIntervalSeconds="3600"
    memoryStoreEvictionPolicy="LRU"
    clearOnFlush="true"
    statistics="true"
>
    <!-- apply the replication listener to all caches created by ColdFusion -->
    <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory" />
</defaultCache>
In order for this all to work, your server must be configured for multicast. On Red Hat, this isn't a standard configuration, so there are some additional steps you may need to take.
While ehCache does allow you to set up a peer-to-peer configuration, it's very unwieldy and won't work well with ColdFusion unless you explicitly use named regions in all your code. Multicast is effectively required because it's the only method where ehCache replication works without having to manually specify the name of each region to replicate.
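To make that concrete, here's a minimal sketch of how the cache ends up being used from CFML (the key and variable names are hypothetical). Because no region name is passed, the entries land in ColdFusion's default cache and pick up the defaultCache settings above, including the replication listener:

// store a value in the default cache region for one hour; the RMI replication
// listener configured in defaultCache pushes the entry to the other nodes
cachePut( "userList", userQuery, createTimeSpan( 0, 1, 0, 0 ) );

// on any node in the cluster, the entry should now be retrievable
cachedUsers = cacheGet( "userList" );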
I hope this helps someone out!
The jNotify plugin was just updated with a new "click to dismiss" feature, which lets a user click on a notification to dismiss it!
I'm working on a jQuery Mobile project at the moment, and I was not happy with the way jQM was handling errors from the server. Our application's error handling code sets the HTTP status code to 500. We do this because it makes it much easier to track problematic requests. The issue we were running into was that jQM wants to just display an "Error Loading Page" dialog whenever it detects a 500 status code.
In our case, our error pages are formatted for use with jQM, so we really wanted to display the error just as if the page had loaded successfully. We could have returned a status code of 200, but we wanted to respond with the correct status code.
To work around this issue, you can use jQM's "pagecontainerloadfailed" event, which fires whenever a page fails to load. Unfortunately, it's not very clear how to use this event to treat the request as a successful page load. You can use event.preventDefault() to cancel the default behavior, but then how do you render the response as a normal page?
Well that took some digging, but I finally found a solution that appears to work really well:
// if an error occurs loading the page, the server will make sure we have a properly formatted document
$(document).on("pagecontainerloadfailed", function (event, data) {
    // if the error response is formatted for jQM, it'll have a custom response header
    // that flags to use the standard page handler
    if (data.xhr.getResponseHeader("X-jQueryMobile-HasLayout") === "true") {
        // let the framework know we're going to handle things
        event.preventDefault();

        // load the results as if they had succeeded
        $.mobile.pageContainer.data("mobilePagecontainer")._loadSuccess(data.absUrl, event, data.options, data.deferred)(data.xhr.responseText, "success", data.xhr);
    }
});
What this code does is look for a custom response header of "X-jQueryMobile-HasLayout" and, if it sees the header is "true", stop the default behavior and load the page as if it had detected a 200 status code.
I decided to use a custom header to determine whether or not to run the code, because then any unexpected error that occurred outside our framework would still behave the way jQM does by default. However, if our application error handler runs, we set the header to "true". This tells jQM that the resulting HTML is in the format jQM needs to render the page.
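For reference, setting that flag on the server is a one-liner. Here's a minimal sketch assuming a ColdFusion back end (any platform that can set a response header works just as well):

<!--- inside the application's error handler, after building the jQM-formatted error page --->
<cfheader statuscode="500" statustext="Internal Server Error">
<cfheader name="X-jQueryMobile-HasLayout" value="true">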
I ended up posting this basic solution in GitHub 6866 - Add documentation for presenting server generated error pages and Alexander Schmitz, a jQuery Foundation member, confirmed this is the basic approach they plan on implementing in the future.
While this shouldn't affect most users (and by and large is probably a good thing), I noticed that Firefox 29 started automatically aborting requests after 300 seconds if the server hasn't responded. As a developer, I often have test scripts that can take a very long time to run to completion, so this change was causing me some issues running a script to validate some data.
The change appears to be the network.http.response.timeout setting in the about:config menu. It appears this setting used to be undefined, but it is now set to 300.
It appears setting the value to 0 will reset the behavior so that Firefox no longer automatically aborts requests. Once I made this change, my test script started running successfully (its final runtime was about 6 minutes).
Anyway, I don't expect this to affect most people. I think the impact is going to be felt mostly by developers who might have unit tests that run via a browser or other test scripts which could take a very long time to run.
While I never planned on releasing another update for qForms v1 (after all, it was written 14 years ago and supports Netscape 3), a change in Chrome 34 broke qForms in a way where the value of radio elements and checkboxes could not be properly detected. The change has to do with the fact that RadioNodeList now returns a value property.
If you're still using qForms in production, I suggest you upgrade to build 144.
One common pattern I've seen in usability is that users don't always have great control over their mouse. It's easy to accidentally overshoot a target area with your mouse, which is really frustrating with things like nested dropdown menus, where accidentally mousing away from a target area may end up collapsing/closing the open menu.
Giva recently released the MouseIntent jQuery Plug-in, which aims to give developers a way to control this behavior. It works by monitoring an invisible border around the element to see what the user appears to be doing. If a user quickly moves back into the original element, then no mouseaway event is ever fired. The plugin has a number of settings that allow you to control the behavior, such as monitoring whether or not the user is still moving the mouse in the invisible border area.
I've used the plugin in one of our dropdown menus that has nested menus, and it's really helped improve our users' experience.
Giva has released a new plugin (Maskerade jQuery Plug-in) which can convert a normal text field into a powerful date-mask input field. The plugin supports a large array of date masks (even quarters) and even supports copy/paste. Here's a list of some of its key features:
We have actually been using this plugin in production for a long time with great success.
We recently discovered an issue in ColdFusion 9+ where images that were temporarily stored on the RAM disk, but later removed, would start throwing exceptions in the application.log, exception.log and coldfusion-out.log. The code itself would run just fine, so the issue is a bit masked; you won't see it unless you're monitoring your log files.
What we were seeing was a lot of errors like the following being thrown throughout our logs:
Could not read from ""ram:///797C39D0-CAE4-21F9-D573CFDC3FE7482E.jpg"" because it is a not a file.
When we tracked down why the log entries were being written, we discovered that the following workflow was causing the problem:
1. An image is written to the ColdFusion RAM disk (a ram:// path).
2. The image is read into a ColdFusion image variable.
3. The temporary ram:// file is deleted.
4. The image variable is later manipulated or written to disk.
It was when trying to write the image to disk that we'd start to see three exceptions being logged, even though the code would generate the expected output. What appears to be happening is that internally ColdFusion is trying to access the original "source" of the file for some reason.
What we did to fix the issue was to return a new copy of the image using imageNew(imageGetBufferedImage(source)). This creates a copy of the image that no longer references any file on disk; the image exists purely in RAM.
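Here's a minimal sketch of the workaround (the ram:// path, file names, and variable names are hypothetical):

// image that currently lives on the RAM disk
source = imageRead( "ram:///tempImage.jpg" );

// make a copy that no longer references the ram:// file; it exists purely in memory
detached = imageNew( imageGetBufferedImage( source ) );

// the temporary file can now be removed without triggering the logged exceptions
fileDelete( "ram:///tempImage.jpg" );

// later operations, like writing the image to disk, work against the in-memory copy
imageWrite( detached, expandPath( "./output.jpg" ) );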
I'm sure this isn't a very common problem, but if you're using Dave Ferguson's ColdFusion 9 PNG image processing fix, you may find yourself running into this issue.
I've filed Bug #3690487 with Adobe. This problem affects both ColdFusion 9 and 10, so if you think Adobe should fix it, make sure to vote it up!
While I left Windows XP behind a long time ago as my main operating system, I still run numerous virtual machines running Windows XP in order to test with older versions of Internet Explorer. One problem I've been running into with my VMs is that when Windows Update was running, the CPU would get pegged at 99% – 100% usage, which made Windows unusable.
I tried a number of things to work around the problem to no avail and finally just decided to shut down Windows Update in order to make the VMs usable. However, that leaves me unable to patch my VMs to make sure they're completely up to date.
Today I finally had to update one of my VMs, so I really needed to resolve the problem. After some reading, I found that Microsoft is aware of the problem and that it relates to parsing the update tree to find out which updates are needed. The good news is I found a fix that seems to work for me. The trick is to manually install two specific security updates.
Here's how I finally resolved the problem:
Hope that helps someone!
I just pushed out an update to the Linkselect jQuery Plugin.
I just pushed out an update to the mcDropdown jQuery Plugin.