Wednesday, April 27, 2011

ASP.NET MVC View Engine with FreeMarker

FreeMarker is a quite successful template engine in the Java world. It is used as the view engine of Apache Struts, an MVC framework. ASP.NET MVC uses Web Forms as its default view engine. The problem is that views become a spaghetti of HTML and C# snippets fairly quickly, just like classic ASP and PHP. FreeMarker.Net is a project that ports FreeMarker to ASP.NET MVC as a view engine.

The idea is to compile FreeMarker to a .Net assembly with IKVM and to create wrappers around .Net objects so that FreeMarker can understand them.

Compiling FreeMarker
It is a straightforward process. It can be done with one command:
ikvmc freemarker.jar -target:library

Separating Necessary IKVM Libraries
Only the following libraries are required for development:
  • IKVM.OpenJDK.Beans.dll 
  • IKVM.OpenJDK.Charsets.dll
  • IKVM.OpenJDK.Core.dll
  • IKVM.OpenJDK.SwingAWT.dll
  • IKVM.OpenJDK.Text.dll
  • IKVM.OpenJDK.Util.dll
  • IKVM.Runtime.dll
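With freemarker.dll compiled and these references in place, a minimal console smoke test can confirm that the IKVM-compiled classes load; the class below is only an illustration, not part of the project:

```csharp
using System;
using freemarker.template;

class SmokeTest {
    static void Main() {
        // constructing a Configuration exercises the IKVM runtime and the core libraries
        var config = new Configuration();
        // getVersionNumber() is a static helper on freemarker.template.Configuration
        Console.WriteLine(Configuration.getVersionNumber());
    }
}
```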

Wrapping .Net Object
FreeMarker does not deal with raw objects directly. Instead, it deals with TemplateModel objects. There are a few template models to be implemented:
  • TemplateBooleanModel
  • TemplateDateModel
  • TemplateHashModel
  • TemplateMethodModelEx
  • TemplateNumberModel
  • TemplateScalarModel
  • TemplateSequenceModel
FreeMarker provides an ObjectWrapper interface to wrap the raw objects into TemplateModel.

The NetObjectModel is essentially a TemplateHashModel:
public class NetObjectModel : StringModel, 
                              TemplateHashModel {
        Dictionary<string, PropertyInfo> props = 
            new Dictionary<string, PropertyInfo>();
        Dictionary<string, MethodModel> methods = 
            new Dictionary<string, MethodModel>();
        Dictionary<string, ExtensionMethodModel> extensionMethods = 
            new Dictionary<string, ExtensionMethodModel>();

        public NetObjectModel(object data, 
                              NetObjectWrapper wrapper)
            : base(data, wrapper) {
            var type = data.GetType();
            foreach (var p in type.GetProperties()) {
                props.Add(p.Name, p);
            }
            foreach (var m in type.GetMethods()) {
                if (!methods.ContainsKey(m.Name)) {
                    methods.Add(m.Name, 
                                new MethodModel(data, m));
                }
            }
        }

        public virtual TemplateModel get(string key) {
            if (props.ContainsKey(key)) {
                return wrapper.wrap(props[key].GetGetMethod()
                                              .Invoke(data, null));
            } else if (methods.ContainsKey(key)) {
                return methods[key];
            } else if (wrapper.ExtensionTypes.Count > 0) {
                if (!extensionMethods.ContainsKey(key)) {
                    extensionMethods[key] = 
                        new ExtensionMethodModel(data, key, wrapper);
                }
                return extensionMethods[key];
            } else {
                return TemplateModel.__Fields.NOTHING;
            }
        }

        public virtual bool isEmpty() {
            return props.Count == 0;
        }
}
To adapt the ASP.NET objects, three more template models are created:
  • HttpApplicationStateModel
  • HttpRequestModel
  • HttpSessionStateModel
They are all similar in shape; each looks like this:
public class HttpApplicationStateModel : NetObjectModel, 
                                         TemplateHashModel {

        public HttpApplicationStateModel(
             object data, 
             NetObjectWrapper wrapper)
            : base(data, wrapper) {
        }

        public override TemplateModel get(string key) {
            HttpApplicationStateBase dic = 
                data as HttpApplicationStateBase;
            if (dic.Keys.Cast<string>().Contains(key)) {
                return wrapper.wrap(dic[key]);
            } else {
                return base.get(key);
            }
        }

        public override bool isEmpty() {
            HttpApplicationStateBase dic = 
                data as HttpApplicationStateBase;
            return dic.Count == 0 && base.isEmpty();
        }
}

View Engine
The view engine is fairly simple: it just initializes the FreeMarker configuration.
public class FreemarkerViewEngine : VirtualPathProviderViewEngine {
        Configuration config;

        public FreemarkerViewEngine(string rootPath, 
                                    string encoding = "utf-8") {
            config = new Configuration();
            AspNetObjectWrapper wrapper = 
                new AspNetObjectWrapper();
            config.setObjectWrapper(wrapper);

            base.ViewLocationFormats = 
                new string[] { "~/Views/{1}/{0}.ftl" };
            base.PartialViewLocationFormats = 
                new string[] { "~/Views/{1}/{0}.ftl" };
        }

        protected override IView CreatePartialView(
            ControllerContext controllerContext, 
            string partialPath) {
            return new FreemarkerView(config, partialPath);
        }

        protected override IView CreateView(
            ControllerContext controllerContext, 
            string viewPath, 
            string masterPath) {
            return new FreemarkerView(config, viewPath);
        }
}

FreemarkerView implements IView to render the content. It simply creates a dictionary of variable bindings and invokes FreeMarker.
public class FreemarkerView : IView {
    public class ViewDataContainer : IViewDataContainer {
        private ViewContext context;
        public ViewDataContainer(ViewContext context) {
            this.context = context;
        }

        public ViewDataDictionary ViewData {
            get {
                return context.ViewData;
            }
            set {
                context.ViewData = value;
            }
        }
    }

    private freemarker.template.Configuration config;
    private string viewPath;

    public FreemarkerView(freemarker.template.Configuration config, string viewPath) {
        this.config = config;
        this.viewPath = viewPath;
    }

    public void Render(ViewContext viewContext, 
                       System.IO.TextWriter writer) {
        // strip the leading "~/" from the virtual path
        Template temp = config.getTemplate(viewPath.Substring(2));
        Dictionary<string, object> data = 
                        new Dictionary<string, object> {
            {"model", viewContext.ViewData.Model},
            {"session", viewContext.HttpContext.Session},
            {"http", viewContext.HttpContext},
            {"request", viewContext.HttpContext.Request},
            {"application", viewContext.HttpContext.Application},
            {"view", viewContext},
            {"controller", viewContext.Controller},
            {"url", new UrlHelper(viewContext.RequestContext)},
            {"html", new HtmlHelper(viewContext, 
                            new ViewDataContainer(viewContext))},
            {"ajax", new AjaxHelper(viewContext, 
                            new ViewDataContainer(viewContext))},
        };

        Writer output = new JavaTextWriter(writer);
        temp.process(data, output);
    }
}
Configuration is as simple as adding the view engine to the view engine collection:
protected void Application_Start() {

    // ****** Optionally, you can remove all other view engines ******
    ViewEngines.Engines.Add(new FreemarkerViewEngine(this.Server.MapPath("~/")));
}


Extension Methods
ASP.NET MVC relies quite heavily on extension methods. In the NetObjectWrapper, the following code is added to find the extension methods:
public virtual void AddExtensionNamespace(string ns) {
    var types = this.ExtensionTypes;
    foreach (var a in AppDomain.CurrentDomain.GetAssemblies()) {
        try {
            foreach (var t in a.GetExportedTypes()) {
                // static classes expose no public constructors
                if (!types.Contains(t) && 
                    t.IsClass && 
                    t.Name.EndsWith("Extensions") && 
                    t.Namespace == ns && 
                    t.GetConstructors().Length == 0) {
                    types.Add(t);
                }
            }
        }
        catch { }
    }
}
In the view engine's constructor, it reads the namespaces specified under the system.web/pages section in web.config:
PagesSection section = 
    (PagesSection)WebConfigurationManager.GetSection("system.web/pages");
if (section != null) {
    foreach (NamespaceInfo info in section.Namespaces) {
        wrapper.AddExtensionNamespace(info.Namespace);
    }
}
An ExtensionMethodModel class is created to find the appropriate method to invoke:
foreach (var type in wrapper.ExtensionTypes) {
    MethodInfo method = type.GetMethod(key, 
        BindingFlags.Public | BindingFlags.Static, 
        null, argTypes, null);
    if (method != null) {
        cache.Add(key, method);
        return wrapper.wrap(method.Invoke(null, args.ToArray()));
    }
}
ResourceManagerDirectiveModel (a TemplateDirectiveModel) and ResourceManagerModel are created so that we could do something like this:
<@resource type="Freemarker.Net.MvcWeb.App_GlobalResources.PersonResource, Freemarker.Net.MvcWeb"/>
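Once the directive has run, the resource entries are available through the res variable. Since res wraps a ResourceManager, a template can call its GetString method; the "Title" key below is a hypothetical example:

```
${res.GetString("Title")}
```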


ResourceManagerDirectiveModel gets the ResourceManager from the resource type and puts it into the res template variable:
public void execute(freemarker.core.Environment env, java.util.Map parameters, TemplateModel[] models, TemplateDirectiveBody body) {
    if (parameters.containsKey("type")) {
        TemplateScalarModel scalar = 
            (TemplateScalarModel)parameters.get("type");
        var type = Type.GetType(scalar.getAsString());
        // resource classes expose an internal static ResourceManager property
        env.setVariable("res", wrapper.wrap(
            type.GetProperty("ResourceManager", 
                                        BindingFlags.Static | 
                                        BindingFlags.NonPublic | 
                                        BindingFlags.Public)
                .GetGetMethod(true)
                        .Invoke(null, null)));
    }
}
Adding More Directives
To allow more directives to be added in the future, MEF is used. A directive implementation only needs to implement the ImportableDirective interface and export itself.
    public class ResourceManagerDirectiveModel : TemplateDirectiveModel, ImportableDirective {
The view engine will import them in the constructor:
public class FreemarkerViewEngine : VirtualPathProviderViewEngine {
        Configuration config;
        IEnumerable<ImportableDirective> directives;

        public FreemarkerViewEngine(string rootPath, 
                                    string encoding = "utf-8") {
            // initialize the config 
            // ...
            // import the extension methods' namespaces
            // ...
            // import the directives
            var dir = new DirectoryInfo(
                Path.Combine(rootPath, "bin"));
            var catalogs = dir.GetFiles("*.dll")
                .Where(o => o.Name != "freemarker.dll" && 
                           !(o.Name.StartsWith("IKVM.") || 
                             o.Name.StartsWith("System.")))
                .Select(o => new AssemblyCatalog(
                    Assembly.LoadFrom(o.FullName)));
            var container = new CompositionContainer(
                new AggregateCatalog(
                    new AggregateCatalog(catalogs),
                    new AssemblyCatalog(
                        Assembly.GetExecutingAssembly())));
            directives = container.GetExportedValues<ImportableDirective>();
        }
}

The source code is available on CodePlex.

P.S.: The bonus of this project is that we get not only a new view engine for ASP.NET MVC, but also a new template engine for the .Net Framework. Maybe someday we could use FreeMarker instead of T4 to generate code in the Visual Studio SDK.

Thursday, April 21, 2011

Caching in .Net Framework 4

We used to have caching in ASP.NET. For caching in non-web applications, the Caching Application Block from Enterprise Library may be the choice. In .Net Framework 4, caching is baked into the framework and is no longer limited to ASP.NET.

After adding the System.Runtime.Caching assembly as a reference in your project, we can cache the data retrieved by a web service:
using System.Runtime.Caching;
class CrmHelper {
    static ObjectCache cache = MemoryCache.Default;

    public Option[] GetAddressTypeOptions() {
        string key = string.Format("{0}/{1}", 
                              "account", "address1_addresstypecode");
        MetadataService service = new MetadataService();
        service.Credentials = CredentialCache.DefaultCredentials;

        Option[] options = null;
        if (cache.Contains(key)) {
            options = (Option[])cache[key];
        }
        else {
            var picklist = (PicklistAttributeMetadata)
                service.RetrieveAttributeMetadata(
                    "account", "address1_addresstypecode");
            options = picklist.Options;
            cache.Set(key, options, new CacheItemPolicy {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(2)
            });
        }
        return options;
    }
}
The if (cache.Contains(key)) ... cache.Set(key, data) dance is going to be a repeating pattern. We can write an extension method to wrap it up:
public static class Extensions {
    public static T Get<T>(this ObjectCache cache, 
                           string key,
                           Func<T> retrieveFunction) {
        if (cache.Contains(key)) {
            return (T)cache[key];
        }
        else {
            var result = retrieveFunction();
            cache.Set(key, result, new CacheItemPolicy {
                AbsoluteExpiration = 
                    DateTimeOffset.Now.AddMinutes(2)
            });
            return result;
        }
    }
}
The previous code is now cleaned up like this:
using System.Runtime.Caching;
class CrmHelper {
    static ObjectCache cache = MemoryCache.Default;

    public Option[] GetAddressTypeOptions() {
        string key = string.Format("{0}/{1}", 
                              "account", "address1_addresstypecode");
        MetadataService service = new MetadataService();
        service.Credentials = CredentialCache.DefaultCredentials;

        return cache.Get(key, () => {
            var picklist = (PicklistAttributeMetadata)
                service.RetrieveAttributeMetadata(
                    "account", "address1_addresstypecode");
            return picklist.Options;
        });
    }
}
It is worth noting that the Caching Application Block from Enterprise Library is being deprecated:
Caching Application Block functionality is built into .NET Framework 4.0; therefore the Enterprise Library Caching Application Block will be deprecated in releases after 5.0. You should consider using the .NET 4.0 System.Runtime.Caching classes instead of the Caching Application Block in future development.
If you only want a quick memory cache, the new caching API might already fit your needs. However, Enterprise Library provides some other caching backing stores (e.g. database, file, etc.). If you need something fancier, Enterprise Library could still be an alternative.
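Outside of any web-service context, the new API can be exercised in a few lines. This standalone sketch shows the basic Contains/Set/indexer round trip; the key and value are arbitrary:

```csharp
using System;
using System.Runtime.Caching;

class CacheDemo {
    static void Main() {
        ObjectCache cache = MemoryCache.Default;
        string key = "answer";

        if (!cache.Contains(key)) {
            // expire the entry two minutes from now
            cache.Set(key, 42, new CacheItemPolicy {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(2)
            });
        }

        Console.WriteLine((int)cache[key]);  // the cached value
    }
}
```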


Sunday, March 27, 2011

WCF Channel Factory and Unity 2.0

Unity 2.0 does not support ChannelFactory in configuration natively. However, we could extend Unity. Inspired by Chris Tavares's FactoryElement extension for static factory configuration, here is the ChannelFactory extension for WCF channel factory configuration in Unity.

The element will construct the generic ChannelFactory type and call its CreateChannel() method upon injection.
public class ChannelFactoryElement : InjectionMemberElement {
        private const string endpointConfigurationNamePropertyName = "endpointConfigurationName";
        private static int numFactories;
        private readonly int factoryNum;

        public ChannelFactoryElement() {
            factoryNum = Interlocked.Increment(ref numFactories);
        }

        [ConfigurationProperty(endpointConfigurationNamePropertyName, IsKey = false, IsRequired = true)]
        public string Name {
            get { return (string)base[endpointConfigurationNamePropertyName]; }
            set { base[endpointConfigurationNamePropertyName] = value; }
        }

        public override string Key {
            get { return "ChannelFactory " + factoryNum; }
        }

        public override IEnumerable<InjectionMember> GetInjectionMembers(IUnityContainer container, Type fromType,
            Type toType, string name) {

            Type factoryType = typeof(ChannelFactory<>).MakeGenericType(toType);            
            var constructor = factoryType.GetConstructor(new Type[] { typeof(string) });
            var factory = constructor.Invoke(new object[] { Name });
            var method = factoryType.GetMethod("CreateChannel", Type.EmptyTypes);
            return new InjectionMember[] { new InjectionFactory(o => method.Invoke(factory, null)) };
        }
}


The section extension registers ChannelFactoryElement under the element name factory:
public class ChannelFactoryConfigExtension : SectionExtension {
        public override void AddExtensions(SectionExtensionContext context) {
            context.AddElement<ChannelFactoryElement>("factory");
        }
}

The factory element uses endpointConfigurationName to retrieve the client's endpoint, as in the ChannelFactory(string endpointConfigurationName) constructor.
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <sectionExtension type="MyNamespace.ChannelFactoryConfigExtension, MyAssembly" />
    <container>
      <register type="MyNamespace.IMyService, MyAssembly">
        <factory endpointConfigurationName="testing"/>
      </register>
    </container>
</unity>
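With that configuration loaded, resolving the interface hands back a WCF channel created by the factory. A minimal sketch, assuming the unity section lives in the application config file:

```csharp
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

class Program {
    static void Main() {
        // reads the <unity> section from the config file and applies the registrations
        var container = new UnityContainer().LoadConfiguration();

        // the injection factory invokes ChannelFactory<IMyService>("testing").CreateChannel()
        var service = container.Resolve<MyNamespace.IMyService>();
    }
}
```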

Friday, March 25, 2011

PHP-Azure Migration - Part 4

Up to the last post of the migration, we had done all the data and file migration. What about new files being uploaded from the CMS?

We are using a very old version of FCKeditor as our rich text editor. Since FCKeditor supports uploading files from the editor, we needed to change the code in editor/filemanager/connectors/php/command.php. Here is an example of getting the folders and files for the file browser.
function GetFoldersAndFiles( $resourceType, $currentFolder ) {
    $sServerDir = MapAzureFolder( $resourceType, $currentFolder );
    $blobs = $blobStorageClient->listBlobs($ContainerName, $sServerDir, '/');
    $aFolders = array();
    $aFiles = array();
    $folders = GetSessionFolders($resourceType, $currentFolder);
    foreach($blobs as $blob){
        $name = GetName($blob->Name);
        if ($blob->IsPrefix){
            $folders[$name] = 0;
        } else {
            $iFileSize = $blob->Size;
            if ( !$iFileSize ) {
                $iFileSize = 0 ;
            }
            if ( $iFileSize > 0 ) {
                $iFileSize = round( $iFileSize / 1024 ) ;
                if ( $iFileSize < 1 ) $iFileSize = 1 ;
            }
            $aFiles[] = '<File name="' . ConvertToXmlAttribute( $name ) . '" size="' . $iFileSize . '" />' ;
        }
    }
    foreach($folders as $name => $value){
        $aFolders[] = '<Folder name="' . ConvertToXmlAttribute( $name ) . '" />' ;
    }
}
It is worth mentioning the GetSessionFolders() function. Azure Storage has no concept of folders; all folders are actually derived from the paths of the files. So, when listing the blobs, we can use $blob->IsPrefix to distinguish the derived folders from the files.

If a user wants to create a folder and upload a file into it, we need to put the folder into the session. That is where the GetSessionFolders() function comes from.

When uploading a file, we have to handle the Flash format specially. Modern browsers seem intelligent enough to recognize image files like JPEG, GIF, and PNG without a content type, but a Flash file without the proper content type could not be played.
function FileUpload( $resourceType, $currentFolder, $sCommand ) {
    $sFileName = 'uploadedfiles/'. strtolower($resourceType).$currentFolder.$sFileName;
    $result = $blobStorageClient->putBlob($ContainerName, $sFileName, $oFile['tmp_name']);

    if(strtolower($resourceType) == 'flash') {
        $contentType = 'application/x-shockwave-flash';
        $blobStorageClient->setBlobProperties($ContainerName, $sFileName, null, array('x-ms-blob-content-type' => $contentType));
    }
    $sFileUrl = $currentFolder.$sFileName;
}

For the ASP.NET portal, we have to do the same. Under /editor/filemanager/connectors/aspx/connector.aspx, we changed the first line to
<%@ Page Language="c#" Trace="false" Inherits="FCKAzureAdapter.Connector" AutoEventWireup="false" %>

The connector simply extends FredCK.FCKeditorV2.FileBrowser.Connector and overrides the GetFiles(), GetFolders(), and CreateFolder() methods with the Azure API. Here is an example of GetFolders():
protected override void GetFolders(System.Xml.XmlNode connectorNode, string resourceType, string currentFolder) {
    string sServerDir = this.ServerMapFolder(resourceType, currentFolder);

    // Create the "Folders" node.
    XmlNode oFoldersNode = XmlUtil.AppendElement(connectorNode, "Folders");

    CloudBlobContainer container = GetCloudBlobContainer(containerPath);
    var dir = container.GetDirectoryReference(folder);

    var items = dir.ListBlobs();
    List<string> dirList = new List<string>();
    foreach (var x in items.OfType<CloudBlobDirectory>())
        dirList.Add(x.Uri.ToString());
    var dirs = dirList.Select(o => o.Substring(o.LastIndexOf('/', o.Length - 2)).Replace("/", "")).ToArray();

    Dictionary<string, List<string>> folderList = 
            HttpContext.Current.Session["Folders"] as Dictionary<string, List<string>>;
    foreach (var dirName in dirs) {
        // Create the "Folder" node.
        XmlNode oFolderNode = XmlUtil.AppendElement(oFoldersNode, "Folder");
        XmlUtil.SetAttribute(oFolderNode, "name", dirName);
        if (folderList != null && folderList.ContainsKey(resourceType)
            && folderList[resourceType].Contains(dirName)) {
            // the folder now exists in storage, so drop the session copy
            folderList[resourceType].Remove(dirName);
        }
    }

    if (folderList != null && folderList.ContainsKey(resourceType)) {
        foreach (var folder in folderList[resourceType].Where(o => o != currentFolder && o.StartsWith(currentFolder))) {
            var folderName = folder.Substring(currentFolder.Length).Trim('/');
            if (!folderName.Contains('/')) {
                // Create the "Folder" node.
                XmlNode oFolderNode = XmlUtil.AppendElement(oFoldersNode, "Folder");
                XmlUtil.SetAttribute(oFolderNode, "name", folderName);
            }
        }
    }
}
Then, we do the same for editor/filemanager/connectors/aspx/upload.aspx.

Next time, we will discuss what other problems we have encountered.

Thursday, March 03, 2011

PHP-Azure Migration - Part 3

In the previous post, we fixed all the database queries to make the web site start working on SQL Server. The next task is tackling file I/O without the file system: the file system is not permanent storage in Windows Azure, so we need to copy all resource files (file attachments of articles) from the local file system to Azure Storage.

File Migration
The migration seemed quite straightforward: we can just copy the files to Azure Storage. However, there is no FTP or batch upload utility from Microsoft. Given that we had 4GB of files in a hierarchy of directories, it was infeasible to upload the files one by one. We used Cloud Storage Studio for the migration.

URL Redirection
The web site and the resource files are now on different servers, so we can no longer use relative paths. However, since the database stores the relative paths of the resource files, we either change the data (as well as the semantics) to store the absolute URIs of the resource files, or redirect the browser to get the files from the correct URLs.

As changing the data would involve a lot of testing of the business logic, we decided to create an ASP.NET module to redirect the requests, given that
  1. IIS (as well as Windows Azure) allows us to run an ASP.NET module in any web site (including a PHP web site)
  2. All files were stored in a particular folder, so we could derive the redirect pattern easily
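A redirect module along those lines might look like the following sketch; the /uploadedfiles prefix and the storage URL are hypothetical stand-ins for the real folder and Azure Storage account:

```csharp
using System;
using System.Web;

// Registered in web.config, so IIS runs it for every request, including PHP pages.
public class AzureRedirectModule : IHttpModule {
    const string ResourcePrefix = "/uploadedfiles/";  // hypothetical folder
    const string StorageRoot =
        "http://myaccount.blob.core.windows.net/content";  // hypothetical account

    public void Init(HttpApplication app) {
        app.BeginRequest += (sender, e) => {
            var path = app.Request.Url.AbsolutePath;
            // send resource requests to blob storage instead of the local file system
            if (path.StartsWith(ResourcePrefix, StringComparison.OrdinalIgnoreCase)) {
                app.Response.Redirect(StorageRoot + path, true);
            }
        };
    }

    public void Dispose() { }
}
```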
Now, all existing files have been migrated and will be redirected to the new URLs. Next time, let's take a look at how to deal with newly uploaded files.

Wednesday, March 02, 2011

PHP-Azure Migration - Part 2

In the previous post, I discussed the database migration from MySQL to SQL Azure. Data migration is only the first part of data processing; the next step is to modify the queries in the web site due to the syntax differences between the two database systems.

Driver Adaption
In the ASP.NET frontend, we only needed to replace the MySQL ADO.NET driver with the SQL Server driver and it was all done. In the PHP CMS, however, we needed to do some adaptation, since PHP does not have a unified database driver API.

Luckily, it was not much work, as there is a driver layer in the CMS. There were only three functions we needed to change: query, fetch, and total, which correspond to mysql_query, mysql_fetch_array, and mysql_num_rows in the MySQL driver and sqlsrv_query, sqlsrv_fetch_array, and sqlsrv_num_rows in SQL Server.

Query Modification
The major differences can be summarized as follows:
  1. Getting last inserted record identity
  2. Paging
  3. Scalar functions
  4. Aggregation functions
  5. Escaping strings
There is a DAO layer in the CMS that stores all the queries. So, all we did was go through all the queries in the DAOs and modify them.

1. Getting last inserted record identity
The MySQL PHP driver provides mysql_insert_id() to get the auto-increment column value of the last record inserted. There is no equivalent function in the SQL Server PHP driver. However, it is no more than querying @@IDENTITY in SQL Server.
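In other words, right after the INSERT on the same connection, one more query recovers the new identity. The table and column names below are made up for illustration:

```sql
INSERT INTO articles (title) VALUES ('hello');
-- @@IDENTITY returns the last identity value generated on this connection
SELECT @@IDENTITY AS last_id;
```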

2. Paging
While doing paging in MySQL only takes "LIMIT n OFFSET skip", SQL Server needs the nested TOP pattern:
    SELECT * FROM (
        SELECT TOP n columns FROM (
            SELECT TOP z columns      -- (where z = n + skip)
            FROM table_name
            ORDER BY key ASC
        ) AS a ORDER BY key DESC
    ) AS b ORDER BY key ASC

3. Scalar functions
MySQL uses NOW() to get the current timestamp. SQL Server uses GETDATE().
MySQL provides RAND() so we can do random ordering; in SQL Server we can use NEWID(). Please note that NEWID() generates a GUID, so sorting by it merely yields a random order; it does not mean NEWID() is the same as MySQL's RAND().
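For example, picking five random rows in each dialect (the articles table is hypothetical):

```sql
-- MySQL
SELECT * FROM articles ORDER BY RAND() LIMIT 5;

-- SQL Server: every row gets a fresh GUID, so the sort order is random
SELECT TOP 5 * FROM articles ORDER BY NEWID();
```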

4. Aggregation functions
MySQL provides many convenient aggregation functions like GROUP_CONCAT(). The queries had to be rewritten for SQL Server, or we needed to create the corresponding user-defined aggregate functions.

5. Escaping strings
Most developers use \' to escape a single quote. In fact, both MySQL and SQL Server use two consecutive single quotes to escape a single quote (as in 'O''Brien'); MySQL just also accepts the other format.

Up to this moment, we have only discussed the data layer (if you have a data layer in PHP). There is no code change in the business layer yet. This is also an example of how we can modularize a program if we partition the layers clearly, regardless of which platform we are using.

Tuesday, March 01, 2011

PHP-Azure Migration - Part 1

Azure is Microsoft's cloud platform, and its working model differs in a few ways. We migrated a web site to Azure; the web site consists of a PHP CMS, an ASP.NET frontend, and a MySQL database.

Database Migration
There was some discussion among the team about whether to use a MySQL worker role or SQL Azure. As the final decision was SQL Azure, the first task was migrating the MySQL database to SQL Azure. The most obvious tool to work with is the Microsoft SQL Server Migration Assistant (SSMA) for MySQL.

SSMA worked fine for plain SQL Server. It reconstructed the SQL Server equivalent schema from MySQL and copied the data from MySQL to SQL Server. However, there are a few catches when migrating to SQL Azure:
1. Tables without primary keys:
For some reason, some tables in the web site did not have primary keys. The problem is that a table in SQL Azure needs a clustered index; no primary key means no clustered index (in SSMA's sense). We used a development SQL Server to re-create the schema first, then looked for the tables without primary keys and added the primary keys back.

2. Large records
The CMS designer had the idea that all data should be put into the database as the backup strategy, so everything, including image files, Flash files, and video files, was stored in attachment tables as well. SSMA could handle most of them, but not all. When a record is large (say 10MB), it would eventually fail and interrupt the whole migration process. It is a very traditional timeout problem; we were dealing with two:
  1. Command timeout for SQL Server
  2. Idle timeout for MySQL
Since we do not have control over the timeout configuration and error handling strategies, we skipped those tables during the migration. We dumped the attachments out into the file system. After a bit of tuning of the upload and download scripts to make the CMS work, we created a PHP script to migrate only the metadata columns of the attachment tables.

To conclude, the migration was performed in three steps:
  1. Execute the SQL script to create the tables
  2. SSMA to copy most data
  3. Execute the PHP script to migrate the attachment metadata
The performance was quite satisfactory.