I was talking with a friend this afternoon and we were discussing an issue his client was having. They were using the <cfinclude> tag to read in a UTF-8 file to display some "cached" data, but were having problems. It turned out that any non-ASCII UTF-8 characters were not displaying correctly, because the included files were written with <cffile>, which does not write a BOM (Byte Order Mark) when saving UTF-8 data.
You can now use the fileWriteUTF8() function to write UTF-8 files that contain the correct BOM so that <cfinclude> will correctly render the output.
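Here's a minimal sketch of what such a function might look like. The name fileWriteUTF8() comes from the post, but the body below is my assumption: it prepends chr(65279) (the Unicode BOM character, U+FEFF), which <cffile> then encodes as the three-byte UTF-8 BOM (EF BB BF) at the start of the file:

```
<cffunction name="fileWriteUTF8" returntype="void" output="false">
	<cfargument name="filePath" type="string" required="true">
	<cfargument name="content" type="string" required="true">
	<!--- chr(65279) is U+FEFF; when the file is written with
	      charset="utf-8" it becomes the BOM bytes EF BB BF,
	      which tells <cfinclude> the file is UTF-8 encoded --->
	<cffile action="write"
	        file="#arguments.filePath#"
	        output="#chr(65279)##arguments.content#"
	        charset="utf-8"
	        addnewline="false">
</cffunction>
```

Once the file starts with a BOM, <cfinclude> picks up the encoding on its own and the cached content renders correctly.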
NOTE: There are a couple of other solutions to this problem. First, you can use <cffile action="read" charset="utf-8" variable="someVar"> to read the file and then output the contents of the variable. This works, but in this client's case the cached files also contained some CFML, so the files needed to actually be parsed as well. The other solution, which should work as well, would be to write a <cfprocessingdirective pageencoding="utf-8"> tag into the file when it's originally created. However, that's pretty darn ugly. Sounds to me like CFMX really should have a charset attribute for the <cfinclude> tag, or at a bare minimum the <cffile> tag should write the BOM for UTF-8 encoded files.
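For illustration, the second workaround might look something like this at the point where the cache file is written (a sketch only; cachePath and pageContent are hypothetical variables, not the client's actual code):

```
<!--- Prepend a processing directive so <cfinclude> treats the file
      as UTF-8 even without a BOM. The directive must be the first
      thing in the file to take effect. --->
<cffile action="write"
        file="#cachePath#"
        output='<cfprocessingdirective pageencoding="utf-8">#pageContent#'
        charset="utf-8"
        addnewline="false">
```

It works, but now every cached file carries a chunk of markup that has nothing to do with its content, which is why I'd call it ugly.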