Tuesday, May 16, 2006
 
some thoughts about dynamic deployment

I am improving my implementation for dynamic deployment via X# code.

 

Previously, for each piece of incoming X# code, the container would create a separate module (a .JAR or .DLL) by first translating the X# to Java/C# and then invoking the Java or C# compiler. When the user calls the deployed services, which in turn trigger the newly generated modules, we face the same situation as DLL hell. Simply introducing version control on DLLs is not enough: if we treat the incoming X# as patches, we may end up with DLLs of different versions co-existing in the same system.

 

This leads to an interesting situation. Those DLLs may provide the same functionality but in different versions, and the policy for using them varies: the application may need function A from DLL v1.1 and function B from DLL v1.2, or it may always want the newest version. One possible approach utilizes the chain-of-responsibility pattern: an older DLL forwards the invocation to a newer DLL until the version criteria are met or the newest version is reached. The obvious drawback of this approach is that, in the long run, the chain becomes so long that it breaks easily, and it also compromises performance. Another approach employs a version-control coordinator that fetches the matching DLLs into memory and routes all invocations through itself. Extra effort must then go into avoiding name conflicts among DLLs that share functionality but differ in version, and a prerequisite of this approach is metadata for the DLLs.
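To make the chain-of-responsibility idea concrete, here is a minimal sketch in Python. All class and function names are illustrative, not part of any real framework: each module version holds a link to the next newer version and forwards a call until the caller's version criterion is satisfied.

```python
# Sketch of the chain-of-responsibility approach: each module version
# forwards a call to the next (newer) version until one both satisfies
# the caller's minimum-version criterion and provides the function.
class ModuleVersion:
    def __init__(self, version, functions, newer=None):
        self.version = version      # e.g. (1, 1) for v1.1
        self.functions = functions  # function name -> callable
        self.newer = newer          # next (newer) link in the chain

    def invoke(self, name, min_version, *args):
        # Serve the call only if this version is new enough and
        # actually provides the function; otherwise forward it.
        if self.version >= min_version and name in self.functions:
            return self.functions[name](*args)
        if self.newer is not None:
            return self.newer.invoke(name, min_version, *args)
        raise LookupError(f"{name!r} not found for version >= {min_version}")

# Build a two-link chain: v1.1 provides A; v1.2 provides A and B.
v12 = ModuleVersion((1, 2), {"A": lambda x: x + 2, "B": lambda x: x * 2})
v11 = ModuleVersion((1, 1), {"A": lambda x: x + 1}, newer=v12)

print(v11.invoke("A", (1, 1), 10))  # served by v1.1 -> 11
print(v11.invoke("B", (1, 2), 10))  # forwarded to v1.2 -> 20
```

The drawback described above is visible even in this toy: every unmatched call walks the whole chain, so a long-lived system pays a growing lookup cost.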

 

The root of the trouble in the above approaches is treating DLLs as the atomic deployment unit. If we could use some facility to re-assemble DLLs of different versions into a single target DLL that meets all the version criteria, the entire headache would go away. Since in our framework the deployment unit is X# code instead of DLLs, we can achieve this goal easily. First, the re-assembly is as simple as maintaining a DOM tree: we cut or attach nodes in the tree to generate the target tree, and from this tree we generate the final executable DLL, so there is no extra overhead at runtime. Second, there is no need for extra metadata management, because the XML-based X# code carries all the meta-information within itself. On top of that, it provides a better way than CVS to maintain code versioning, because tree-based X# has finer granularity than line-based CVS.
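The cut-and-attach re-assembly step can be sketched with Python's standard XML DOM facilities. The `<module>`/`<method>` elements below are invented stand-ins for X# constructs, purely for illustration:

```python
# Minimal sketch of re-assembling code as a DOM tree: cut the outdated
# node from the base tree and attach the patched node, before any
# compilation to a final module happens.
import xml.etree.ElementTree as ET

base = ET.fromstring(
    "<module>"
    "<method name='A' body='old'/>"
    "<method name='B' body='stable'/>"
    "</module>"
)
patch = ET.fromstring("<method name='A' body='new'/>")

# Cut the outdated node ...
old = base.find("method[@name='A']")
base.remove(old)
# ... and attach the patched replacement, keeping its original position.
base.insert(0, patch)

print(ET.tostring(base, encoding="unicode"))
```

Only after this merge would the single, version-consistent tree be handed to the Java/C# translation step, which is what keeps exactly one generated DLL in the system.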

 

In a nutshell, the point is to do version control on X# code instead of on DLLs, and to postpone DLL generation until the merged X# code is ready.

 

I will apply this approach to the APEA-MCWS project. The user provides a callback to override the previous implementation; our container re-assembles the X# code via XML DOM tree manipulation and then dynamically generates the DLL (only one copy in the system!). Version control management is thus transferred from DLLs to X#. We can also borrow the idea of differential serialization/de-serialization to improve the efficiency of dynamic X# deployment: instead of sending the whole XML snippet, which could be very large, we can just describe the operations for the differential part. For example, we could say "Delete Node xxx → Add Node yyy → Change Node zzz", etc. The resulting framework will support dynamic deployment of web services in a novel way.
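The "ship operations, not snippets" idea can be sketched as replaying a small operation list against the tree the container already holds. The operation names and the `node` elements here are assumptions made up for this sketch, not the actual X# wire format:

```python
# Hedged sketch of differential deployment: instead of sending a full
# XML snippet, send a short list of (operation, target, attributes)
# tuples and replay them against the container's tree.
import xml.etree.ElementTree as ET

def apply_diff(root, ops):
    for op, name, attrs in ops:
        node = root.find(f"node[@name='{name}']")
        if op == "delete" and node is not None:
            root.remove(node)
        elif op == "add":
            ET.SubElement(root, "node", name=name, **attrs)
        elif op == "change" and node is not None:
            node.attrib.update(attrs)
    return root

tree = ET.fromstring(
    "<root><node name='xxx' v='1'/><node name='zzz' v='1'/></root>"
)
# Replay "Delete Node xxx -> Add Node yyy -> Change Node zzz".
apply_diff(tree, [
    ("delete", "xxx", {}),
    ("add", "yyy", {"v": "1"}),
    ("change", "zzz", {"v": "2"}),
])
print(ET.tostring(tree, encoding="unicode"))
```

The payload is proportional to the size of the change rather than the size of the document, which is the efficiency win the differential approach is after.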

 

I believe this part of the work could become a separate paper, titled "language-supported web service dynamic deployment". What do you think?

 



