Monday, September 13, 2010

Silverlight - Using Fluent API to tie related async calls.

Introduction

In Silverlight, all service calls are asynchronous by default. With async calls, you make a web service call from one method and expect a callback in another method once the service responds. If you are using the repository pattern, you use the same mechanism for communication between your view models and repositories.
In the async pattern there are two related functions that accomplish a single task: one initiates the task and the other completes it. Both methods must work in tandem to accomplish that single task.

This is highly error prone if you do not handle event subscription and unsubscription properly. You also might want to take care of a few things common to all these calls, such as error handling and showing/hiding a progress bar. In this post I will show you an interesting technique that accomplishes this with an elegant fluent API.

Assume you have the following repository class. It defines events; once the repository's web service call completes, the repository raises its completed event, which in turn notifies the view model's handler.
public class CustomerRepository
{
    public event EventHandler<CustomEventArgs> GetCustomerNameCompleted;
    public event EventHandler<CustomEventArgs> GetCustomerNameCompleted2;

    public void GetCustomerName(string studyID)
    {
        WebServiceStub serviceStub = new WebServiceStub();
        //Relay the service callback to whoever is listening on the repository event.
        serviceStub.GetCustomerNameCompleted += (obj, args) =>
        {
            if (this.GetCustomerNameCompleted != null)
            {
                this.GetCustomerNameCompleted(this, args.GetCustomEventArgs());
            }
        };
        serviceStub.GetCustomerNameByID(studyID);
    }

    public void GetCustomerName(string studyID, string Param1)
    {
        WebServiceStub serviceStub = new WebServiceStub();
        serviceStub.GetCustomerNameCompleted2 += (obj, args) =>
        {
            if (this.GetCustomerNameCompleted2 != null)
            {
                this.GetCustomerNameCompleted2(this, args.GetCustomEventArgs());
            }
        };
        serviceStub.GetCustomerName(studyID, Param1);
    }
}

And in your view model:


public void GetCustomer(string customerID)
{
    repository.GetCustomerNameCompleted += repository_GetCustomerNameCompleted;
    repository.GetCustomerName(customerID);
}

void repository_GetCustomerNameCompleted(object sender, CustomEventArgs e)
{
    //Unsubscribe as soon as the callback arrives so subscriptions don't accumulate.
    repository.GetCustomerNameCompleted -= repository_GetCustomerNameCompleted;
}
You will be doing these subscriptions and unsubscriptions throughout your Silverlight view models to fetch data from repositories, and for each task you will have two methods in your view model.
You might be tempted to write the completed handler as an anonymous method, but that won't work, because you have to unsubscribe once you are done. Otherwise the subscriptions keep growing and you get multiple callbacks for each call.

If you forget to unsubscribe, you risk not only memory leaks but buggy code. Instead, you can use a fluent API and make the call as follows:

public void GetCustomer(string customerID)
{
    StitchMethodFlow<string>
        .Init
        .WhenCalled(repository.GetCustomerName)
        .WithParams(customerID)
        .OnCompletionExecute((o, e) => { /* completed */ })
        .CallBackSubscriptionProvider(repository.RegisterEventSubscription)
        .StartExecution();
}

The above code is simpler, more maintainable, and self-explanatory, and all code related to a single task lives in a single place instead of leaking into multiple methods. Event subscription and unsubscription are now taken care of by StitchMethodFlow.
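The StitchMethodFlow implementation ships with the downloadable sample; to give an idea of what is inside, here is a minimal sketch. The subscription-provider signature (a handler plus a subscribe/unsubscribe flag) and the repository's RegisterEventSubscription helper are assumptions inferred from the usage above, not the sample's exact code.

public class StitchMethodFlow<T>
{
    private Action<T> _call;
    private T _param;
    private EventHandler<CustomEventArgs> _onCompleted;
    private Action<EventHandler<CustomEventArgs>, bool> _subscriptionProvider;

    public static StitchMethodFlow<T> Init
    {
        get { return new StitchMethodFlow<T>(); }
    }

    public StitchMethodFlow<T> WhenCalled(Action<T> call)
    {
        _call = call;
        return this;
    }

    public StitchMethodFlow<T> WithParams(T param)
    {
        _param = param;
        return this;
    }

    public StitchMethodFlow<T> OnCompletionExecute(EventHandler<CustomEventArgs> onCompleted)
    {
        _onCompleted = onCompleted;
        return this;
    }

    //Assumed signature: the provider receives a handler and a flag indicating
    //subscribe (true) or unsubscribe (false), so the repository can wire the right event.
    public StitchMethodFlow<T> CallBackSubscriptionProvider(Action<EventHandler<CustomEventArgs>, bool> provider)
    {
        _subscriptionProvider = provider;
        return this;
    }

    public void StartExecution()
    {
        EventHandler<CustomEventArgs> handler = null;
        handler = (s, e) =>
        {
            _subscriptionProvider(handler, false); //unsubscribe so callbacks don't accumulate
            _onCompleted(s, e);                    //then run the caller's completion logic
        };
        _subscriptionProvider(handler, true);      //subscribe before the call goes out
        _call(_param);                             //kick off the async service call
    }
}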

You can download the full sample from here to see how it works. The demo is a console application.

Monday, August 30, 2010

Recursive XSL Templates

I am re-posting an article of mine originally published on CodeProject in 2006. This brief article demonstrates the recursive nature of templates. For the original posting, click here.
For the source code download, click here.
Output needed: a product catalog rendered as an HTML table, with each product's details in its own cell (screenshot omitted).

Introduction

XSL is a declarative programming language. Variables, once declared, cannot be reassigned. Programmers coming from a procedural background often find advanced tasks difficult. Yet using XSL we can solve complex problems which at first glance seem difficult or impossible. In this brief article, I will demonstrate the recursive nature of templates. In a typical product catalog display, our requirement is to display the product details in each cell, with the number of columns selectable by the user. The sample XML file is listed below:

<data>
<product name="Heine HKL with trolleystand" weight="34.4kg" price="230.45"/>
<product name="Universal clamp and Projector" weight="10.64kg" price="670.45"/>
<product name="Examination Lamp, Universal Mount" weight="1.08kg" price="25.45"/>
<product name="Provita Examination Lamp, Mobile Base" weight="1.4kg" price="215.45"/>
<product name="35Watt Flexible Arm Light to fit Rail system" weight="11.67kg" price="130.45"/>
...
</data>


Assuming we are getting the above from a business component, each product element corresponds to a product whose attributes comprise product-specific data. Each product, along with its details, will be rendered in a single cell, and the number of columns should be definable at runtime. The following brief XSL does the rendering:


<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<xsl:template match="/">
<xsl:param name="numCols" select="3"></xsl:param>
<table>
<xsl:call-template name="renderColumns">
<xsl:with-param name="listrows" select="//product"></xsl:with-param>
<xsl:with-param name="startindex" select="1"></xsl:with-param>
<xsl:with-param name="numofCols" select="$numCols"></xsl:with-param>
</xsl:call-template>
</table>
</xsl:template>

<xsl:template name="renderColumns">
<xsl:param name="listrows"></xsl:param>
<xsl:param name="startindex"></xsl:param>
<xsl:param name="numofCols"></xsl:param>
<xsl:if test="count($listrows) &gt; 0">
<tr>
<xsl:apply-templates select="$listrows[position() &gt;= $startindex and position() &lt; ($startindex + $numofCols)]" mode="rows"></xsl:apply-templates>
</tr>
<xsl:call-template name="renderColumns">
<xsl:with-param name="listrows" select="$listrows[position() &gt;= $startindex + $numofCols]"></xsl:with-param>
<xsl:with-param name="startindex" select="$startindex"></xsl:with-param>
<xsl:with-param name="numofCols" select="$numofCols"></xsl:with-param>
</xsl:call-template>
</xsl:if>
</xsl:template>

<xsl:template match="node()" mode="rows">
<td nowrap="true">
<table style="BORDER-BOTTOM: thin solid; BORDER-LEFT: thin solid; WIDTH: 100%; BORDER-TOP: thin solid; BORDER-RIGHT: thin solid">
<xsl:apply-templates select="@*"></xsl:apply-templates>
<tbody></tbody></table>
</td>
</xsl:template>

<xsl:template match="@*">
<tr>
<td style="TEXT-TRANSFORM: uppercase; BACKGROUND-COLOR: gainsboro; FONT-SIZE: larger">
<xsl:value-of select="name()"></xsl:value-of>
</td>
<td>
<xsl:value-of select="."></xsl:value-of>
</td>
</tr>
</xsl:template>
</xsl:stylesheet>

Explanation

<xsl:template match="/">
<xsl:param name="numCols" select="3"></xsl:param>


As a C or C++ programmer organizes and reuses code using functions or object methods, in XSL we can organize code using templates. The above code is the root template, which is invoked by the XSLT processor. We declare a parameter using xsl:param, named numCols. The user can pass this parameter; if no value is supplied, it defaults to 3. This parameter specifies the number of columns to render.

<xsl:call-template name="renderColumns">
<xsl:with-param name="listrows" select="//product"></xsl:with-param>
<xsl:with-param name="startindex" select="1"></xsl:with-param>
<xsl:with-param name="numofCols" select="$numCols"></xsl:with-param>
</xsl:call-template>

We call the renderColumns template, passing three parameters: in listrows we select all product elements, startindex signifies the starting index, and numofCols controls the number of columns rendered per row.

<xsl:template name="renderColumns">
<xsl:param name="listrows"></xsl:param>
<xsl:param name="startindex"></xsl:param>
<xsl:param name="numofCols"></xsl:param>
<xsl:if test="">0">
<tr>
<xsl:apply-templates select="">=
$startindex and position() < ($startindex+$numofCols)]" mode="rows">
</xsl:apply-templates>
</tr>
<xsl:call-template name="renderColumns">
<xsl:with-param name="listrows" select="">= $startindex+$numofCols]"></xsl:with-param>
<xsl:with-param name="startindex" select="$startindex"></xsl:with-param>
<xsl:with-param name="numofCols" select="$numofCols"></xsl:with-param>
</xsl:call-template>
</xsl:if>
</xsl:template>

In the renderColumns template, we select the elements whose position is greater than or equal to the starting index and less than the sum of startindex and numofCols, and render them as one row. We then recursively call renderColumns, selecting only the elements whose position is greater than or equal to the sum of startindex and numofCols, i.e., those not yet rendered. The exit condition is the test at the start, which checks the count of elements in the listrows parameter: since each recursive call selects only the elements that are yet to be rendered, the node set shrinks by numofCols on each call until it is empty. For rendering the cells of each row, we use the following template:

<xsl:template match="node()" mode="rows">
<td nowrap="true">
<table style="BORDER-BOTTOM: thin solid; BORDER-LEFT: thin solid; WIDTH: 100%; BORDER-TOP: thin solid; BORDER-RIGHT: thin solid">
<xsl:apply-templates select="@*"></xsl:apply-templates>
<tbody></tbody></table>
</td>
</xsl:template>

in which we apply templates to the attribute nodes, rendering each attribute through another template:

<xsl:template match="@*">
<tr>
<td style="TEXT-TRANSFORM: uppercase; BACKGROUND-COLOR: gainsboro; FONT-SIZE: larger">
<xsl:value-of select="name()"></xsl:value-of>
</td>
<td>
<xsl:value-of select="."></xsl:value-of>
</td>
</tr>
</xsl:template>

which does the job of rendering the product details in a cell.

Conclusion

Even though variables in XSL are constants throughout their lifetime, we can achieve things which at first glance look impossible due to the declarative nature of XSL. With a closer look, and by thinking declaratively, we can solve the problem.

Wednesday, August 25, 2010

Design Patterns by Metaphors - Part I (Creational Patterns)

Today I am starting a blog series on design patterns. There is plenty of technical material on this subject out there, just a Google search away, so I don't want to repeat it here. What I will try to do is explain the patterns using metaphors, in layman's terms, so that newcomers can understand them easily and adopt them with ease.

I will start with Creational Patterns.

Creational Patterns:

These are patterns which guide you in tackling object-creation challenges in certain scenarios.

Abstract Factory:




Simpson lives in a world of SONY: every electronic appliance in his world is a SONY. But he wants the agility to move to other electronic worlds as well, maybe Samsung in the future. To be able to move between worlds without much effort, he accesses functionality through a remote cover placed over the top of each particular remote. All remotes in his electronics support that cover. Tomorrow, even if he switches to Samsung, he will still be able to operate everything through his cover.
Because this common cover fits all remotes and exposes only the common minimum features, he cannot reach any special button specific to a particular remote. He trades that for the flexibility to move between worlds.
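To ground the metaphor, here is a minimal sketch in code; every type name below is invented for illustration. The remote cover is the abstract product interface, the electronic worlds are concrete factories, and switching worlds touches exactly one line.

using System;

//The "remote covers": abstract products exposing only the common buttons.
public interface ITvRemote { void PowerOn(); }
public interface IAcRemote { void PowerOn(); }

//The abstract factory: one electronic "world" of related appliances.
public interface IApplianceFactory
{
    ITvRemote CreateTvRemote();
    IAcRemote CreateAcRemote();
}

public class SonyTvRemote : ITvRemote { public void PowerOn() { Console.WriteLine("SONY TV on"); } }
public class SonyAcRemote : IAcRemote { public void PowerOn() { Console.WriteLine("SONY AC on"); } }
public class SonyFactory : IApplianceFactory
{
    public ITvRemote CreateTvRemote() { return new SonyTvRemote(); }
    public IAcRemote CreateAcRemote() { return new SonyAcRemote(); }
}

public class SamsungTvRemote : ITvRemote { public void PowerOn() { Console.WriteLine("Samsung TV on"); } }
public class SamsungAcRemote : IAcRemote { public void PowerOn() { Console.WriteLine("Samsung AC on"); } }
public class SamsungFactory : IApplianceFactory
{
    public ITvRemote CreateTvRemote() { return new SamsungTvRemote(); }
    public IAcRemote CreateAcRemote() { return new SamsungAcRemote(); }
}

public class Simpson
{
    public static void Main()
    {
        //Moving to another world is a one-line change; no brand-specific buttons are visible.
        IApplianceFactory world = new SonyFactory(); //tomorrow: new SamsungFactory()
        world.CreateTvRemote().PowerOn();
        world.CreateAcRemote().PowerOn();
    }
}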

Builder Pattern




Here the result is so varied that there is no common interface for the end result; in the case of Abstract Factory we had a common interface for the end result.
The user provides the Director with the builder to use.
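A minimal sketch of the same idea, with invented names: the Director drives the construction steps, each concrete builder exposes its own result type, and the user chooses which builder to hand to the Director.

using System.Text;

//The common build steps the Director can drive.
public interface IReportBuilder
{
    void BuildHeader();
    void BuildBody();
}

//Each concrete builder exposes its own result type:
//there is no common interface for the end result.
public class HtmlReportBuilder : IReportBuilder
{
    private readonly StringBuilder _html = new StringBuilder();
    public void BuildHeader() { _html.Append("<h1>Report</h1>"); }
    public void BuildBody() { _html.Append("<p>body</p>"); }
    public string GetHtml() { return _html.ToString(); }
}

public class PlainTextReportBuilder : IReportBuilder
{
    private readonly StringBuilder _text = new StringBuilder();
    public void BuildHeader() { _text.AppendLine("REPORT"); }
    public void BuildBody() { _text.AppendLine("body"); }
    public string[] GetLines() { return _text.ToString().Split('\n'); }
}

//The Director knows only the order of the steps, not the result type.
public class ReportDirector
{
    public void Construct(IReportBuilder builder)
    {
        builder.BuildHeader();
        builder.BuildBody();
    }
}

public class Client
{
    public static void Main()
    {
        //The user decides which builder the Director works with.
        HtmlReportBuilder builder = new HtmlReportBuilder();
        new ReportDirector().Construct(builder);
        string html = builder.GetHtml();
    }
}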

Factory Method




Whenever Simpson requests a car from the Car Factory, he always gets the latest car available at that time, wrapped under the Car prototype cover. As time evolves and new models get added, he still gets the latest model, but wrapped in the car prototype: the latest engine performance, the latest ABS, the latest electronics working under the hood.
But since all cars are wrapped under the car prototype, which exposes only minimal control points, he can experience the latest engine performance through the prototype's accelerator, yet he cannot access car GenX's latest Blue&Me feature, as that is hidden under the prototype cover.
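A minimal sketch with invented names: the factory method decides the concrete model, while callers only ever see the ICar cover.

using System;

//The "car prototype" cover: the only control points Simpson ever sees.
public interface ICar
{
    void Accelerate();
}

public class CarGenX : ICar
{
    public void Accelerate() { Console.WriteLine("Latest engine, latest ABS"); }
    //Model-specific extra; unreachable through the ICar cover.
    public void BlueMe() { Console.WriteLine("Blue&Me"); }
}

public abstract class CarFactory
{
    //The factory method: subclasses decide which concrete car to return.
    public abstract ICar CreateCar();
}

public class LatestModelFactory : CarFactory
{
    //As new models ship, only this method changes; callers are untouched.
    public override ICar CreateCar() { return new CarGenX(); }
}

public class Showroom
{
    public static void Main()
    {
        ICar car = new LatestModelFactory().CreateCar();
        car.Accelerate();   //latest engine performance through the cover
        //car.BlueMe();     //not available: the cover hides model-specific buttons
    }
}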

Abstract Factory vs Factory Method


Thanks to David Hayden for making the distinction so clear in his post here.

Singleton

Monday, August 16, 2010

Silverlight - Property Value Synchronization between Two View Models

Recently I was confronted with a design issue: keeping two property values in two different view models in sync.



It is a simple issue: you subscribe to PropertyChanged notifications, and whenever the property of interest changes on one object, you set the other property's value on the other object, and vice versa. To avoid an infinite recursive loop, you keep a flag which tracks the direction of change.

But I wanted to solve this with the following constraints:

1. I need to reuse this syncing logic wherever required in the future.
2. Loosely coupled.
3. No reflection; I want compile-time support to detect errors, or at least most of them.
4. Should support a fluent API for better usability.

One assumption I made:
1. Both objects implement INotifyPropertyChanged and raise change notifications whenever a property is changed.

Before discussing the code: once developed, you can use it as follows to keep both objects in sync.


//ViewModel which has a property.
Class1 orgViewModel = new Class1();
//Another ViewModel which has another property.
Class2 anotherViewModel = new Class2();
//The sync API is exposed by SyncClass, which takes both ViewModel types and the property type.
SyncClass<Class1, Class2, int> syncClass = new SyncClass<Class1, Class2, int>();

//Configure the first ViewModel with a predicate to detect when the property changes,
//and provide functions to read and write the property value.
syncClass.MonitorClass(orgViewModel, (propname) => { return propname == "MyVal"; })
    .SetGetValue(() => orgViewModel.MyVal)
    .SetSetValue((i) => orgViewModel.MyVal = i);
//Configure the other ViewModel the same way.
syncClass.MonitorAnotherClass(anotherViewModel, (propname) => { return propname == "AnotherVal"; })
    .SetGetValue(() => anotherViewModel.AnotherVal)
    .SetSetValue((i1) => anotherViewModel.AnotherVal = i1);
//Start synchronization.
syncClass.StartSync();

Once you configure your view models and properties in this manner, SyncClass will automatically sync the two properties.

Code Listing

delegate void ValueChanged(object sender, EventArgs e);

//T: the monitored view model type; T1: the type of the synced property.
class UsingClass<T, T1>
{
    bool IsOriginator = false;
    public event ValueChanged ObjectValueChanged;

    T _MonitorObject;
    Func<T1> _GetValue;
    Action<T1> _SetValue;
    Func<string, bool> _IsPropChanged;

    public UsingClass(T MonitorObject, Func<string, bool> IsPropertyChanged)
    {
        _IsPropChanged = IsPropertyChanged;
        _MonitorObject = MonitorObject;
        INotifyPropertyChanged propChanged = _MonitorObject as INotifyPropertyChanged;
        if (propChanged != null)
        {
            propChanged.PropertyChanged += (o1, e1) =>
            {
                if (_IsPropChanged(e1.PropertyName))
                {
                    this.IsOriginator = true;
                    if (this.ObjectValueChanged != null)
                    {
                        this.ObjectValueChanged(this, new EventArgs());
                    }
                }
            };
        }
    }

    public UsingClass<T, T1> SetGetValue(Func<T1> GetValue)
    {
        _GetValue = GetValue;
        return this;
    }

    public T1 GetValue()
    {
        return _GetValue();
    }

    public void SetSetValue(Action<T1> SetValue)
    {
        this._SetValue = SetValue;
    }

    public void SetValue(T1 val1)
    {
        //Only propagate the value when this object is not the originator of
        //the change; this flag breaks the infinite update loop.
        if (!IsOriginator)
        {
            _SetValue(val1);
        }
        else
        {
            IsOriginator = false;
        }
    }
}

//T, T1: the two view model types; TVal: the type of the synced property.
class SyncClass<T, T1, TVal>
{
    UsingClass<T, TVal> Object1;
    UsingClass<T1, TVal> Object2;

    public UsingClass<T, TVal> MonitorClass(T MonitorObject, Func<string, bool> CheckPropertyChanged)
    {
        Object1 = new UsingClass<T, TVal>(MonitorObject, CheckPropertyChanged);
        return Object1;
    }

    public UsingClass<T1, TVal> MonitorAnotherClass(T1 MonitorAnotherObject, Func<string, bool> CheckPropertyChanged)
    {
        Object2 = new UsingClass<T1, TVal>(MonitorAnotherObject, CheckPropertyChanged);
        return Object2;
    }

    public void StartSync()
    {
        //Cross-wire the two objects: a change on one writes the new value to the other.
        Object1.ObjectValueChanged += (o, e) =>
        {
            Object2.SetValue(Object1.GetValue());
        };
        Object2.ObjectValueChanged += (o1, e1) =>
        {
            Object1.SetValue(Object2.GetValue());
        };
    }
}

I prepared a sample as a console application, which is easy to execute so you can see how it works.

Click here to download the sample application. You need VS 2010 to open the solution.

Hope you enjoyed this code crunch.

Thursday, July 29, 2010

Silverlight - Binding System uses Visual Tree

Recently I was working on a control where I needed to rearrange the layout of passed-in user controls, which alters the visual tree structure. In doing so, I found that the binding system relies on the visual tree.

An example:


Page1
|
|--UserControl1
|--UserControl2
|--UserControl3

Transforms into

Custom Control
|
|-UserControl1
|-UserControl2
|-Page1
| |-UserControl3


When the above page is passed to this custom control, the control strips out a few child controls and arranges them in a different layout. If a binding was established at the Page1 level, then once you move those user controls elsewhere in the visual tree, the bindings will no longer resolve.
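Roughly, the re-parenting looks like the sketch below; LayoutRoot and the control names are illustrative assumptions, not code from the actual control.

//UserControl1 originally binds through Page1's inherited DataContext.
UIElement moved = page1.LayoutRoot.Children[0];
page1.LayoutRoot.Children.Remove(moved);
customControl.LayoutRoot.Children.Add(moved);

//Bindings on 'moved' that resolved through Page1's DataContext now fail,
//because DataContext inheritance follows the visual tree. One workaround
//is to carry the DataContext across explicitly so bindings can re-resolve.
((FrameworkElement)moved).DataContext = page1.DataContext;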

Friday, July 16, 2010

Object oriented Javascript

In this post, I will explore how to do basic object-oriented development using JavaScript.

Let's explore with a sample shopping cart application. In this we will have:

  1. Items, which represent shopping items with attributes like item name, price per unit, and quantity, plus a TotalPrice method which returns the total sub-price of that item.
  2. A shopping cart, to which customers can add multiple items. It supports two behaviors: adding items to the cart, and TotalPrice, which returns the total price of the items in the cart.

<script type="text/javascript">
//Class representing an Item.
function Item(productName, pricePerUnit, numberOfUnits) {
    this.ProductName = productName;
    this.PricePerUnit = pricePerUnit;
    this.NumberOfItems = numberOfUnits;
    //Returns the computed total price of this item.
    this.TotalPrice = function () {
        return this.PricePerUnit * this.NumberOfItems;
    }
}
//Class representing the shopping cart which holds all items.
function ShoppingCart() {
    //Internal array holding all items.
    var Items = new Array();
    //Adds an item to the shopping cart.
    this.AddItem = function (ShoppingItem) {
        Items.push(ShoppingItem);
    }
    //Returns the computed total price of all items in the cart.
    this.TotalPrice = function () {
        var totalPrice = 0;
        for (var i = 0; i < Items.length; i++) {
            totalPrice += Items[i].TotalPrice();
        }
        return totalPrice;
    }
}

var soapItems = new Item("Pears", 10, 10);
var foodItems = new Item("Lays", 20, 10);
var cart = new ShoppingCart();
cart.AddItem(soapItems);
cart.AddItem(foodItems);
alert("Total Price :" + cart.TotalPrice());
</script>

In JavaScript, classes can be defined using functions, and you can alter a definition at any later point by accessing the function's prototype. We will see more in future posts.

Thursday, July 15, 2010

Policy Injection: WCF Instance Provider

Policy Injection is one of the Enterprise Library application blocks and can be used to perform AOP (Aspect Oriented Programming). In this post I will discuss using Policy Injection with WCF services. For this to work, your WCF service objects must be created through Policy Injection. WCF provides behavior extension points to customize various aspects; we will see how to provide a custom instance provider which WCF will use to create service objects.

NOTE: For a class to be instantiable through Policy Injection, it has to either implement an interface or derive from MarshalByRefObject.

In scenarios where you want to intercept WCF calls and apply policies defined in Policy Injection, you need to provide a custom WCF instance provider which uses the Policy Injection application block to create instances of your WCF services.

You need to write the custom instance provider and implement a custom behavior that uses it. Once you define the custom behavior, you can apply it to your WCF service endpoints.

A custom WCF instance provider must implement IInstanceProvider.

WCF Instance Provider:

public class PolicyInjectionInstanceProvider:IInstanceProvider
{
private Type serviceContractType { get; set; }
private static readonly IUnityContainer container;
private readonly object _sync=new object();
private static readonly TransparentProxyInterceptor injector = new TransparentProxyInterceptor();

static PolicyInjectionInstanceProvider()
{
container = new UnityContainer().AddNewExtension<Interception>();

IConfigurationSource configSource = ConfigurationSourceFactory.Create();
PolicyInjectionSettings settings = (PolicyInjectionSettings)configSource.GetSection(PolicyInjectionSettings.SectionName);
if (settings != null)
{
settings.ConfigureContainer(container, configSource);
}
}

public PolicyInjectionInstanceProvider(Type t)
{
if (t != null && !t.IsInterface)
{
throw new ArgumentException("Specified type must be an interface.");
}
this.serviceContractType = t;
}

#region IInstanceProvider Members

public object GetInstance(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.Channels.Message message)
{
Type type = instanceContext.Host.Description.ServiceType;

if (serviceContractType != null)
{
lock (_sync)
{
container.Configure<Interception>().SetDefaultInterceptorFor(serviceContractType, injector);
container.RegisterType(serviceContractType, type);
return container.Resolve(serviceContractType);
}
}
else
{
if (!type.IsMarshalByRef)
{
throw new ArgumentException("Type must inherit from MarhsalByRefObject if no ServiceInterface is Specified.");
}
lock (_sync)
{
container.Configure<Interception>().SetDefaultInterceptorFor(type, injector);
return container.Resolve(type);
}
}
}

public object GetInstance(System.ServiceModel.InstanceContext instanceContext)
{
return GetInstance(instanceContext, null);
}

public void ReleaseInstance(System.ServiceModel.InstanceContext instanceContext, object instance)
{
IDisposable disposable = instance as IDisposable;
if (disposable != null)
{
disposable.Dispose();
}
}

#endregion
}

Custom Behavior Extension Element:

public class PolicyInjectionBehavior : BehaviorExtensionElement, IEndpointBehavior
{
public override Type BehaviorType
{
get { return typeof(PolicyInjectionBehavior); }
}

protected override object CreateBehavior()
{
return new PolicyInjectionBehavior();
}
#region IEndpointBehavior Members

public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
{

}

public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime)
{

}

public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
{
Type contractType = endpoint.Contract.ContractType;
endpointDispatcher.DispatchRuntime.InstanceProvider = new PolicyInjectionInstanceProvider(contractType);

}

public void Validate(ServiceEndpoint endpoint)
{

}

#endregion
}

Import the behavior defined above in web.config using the behavior extensions provided by WCF. Note that the type attribute must give the fully qualified class name (and be careful with spaces), along with the assembly name and version.

<system.serviceModel>
<extensions>
<behaviorExtensions>
<add name="policyInjectionInstanceProvider" type="YourNamespace.PolicyInjectionBehavior, assembly_name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
</behaviorExtensions>
</extensions>
<behaviors>
<endpointBehaviors>
<behavior name="PolicyInjectionProviderBehavior">
<policyInjectionInstanceProvider/>
</behavior>
</endpointBehaviors>
</behaviors>
</system.serviceModel>

Once you import this custom behavior extension, you can reference it inside an endpoint behavior (as shown above) and assign that behavior to your service endpooints' configuration.

Whenever a client accesses your WCF service, the service object will be created through the PolicyInjectionInstanceProvider class.

Silverlight: Auto Notifying Delegate Commands

Prism provides DelegateCommand, a Silverlight equivalent of WPF's ICommand implementation. Using DelegateCommand in Silverlight, you can bind commands to your view.
Whenever the state of a command changes, you do something like this:

MyDelegateCmd.RaiseCanExecuteChanged()

which notifies the binding system to re-query the bound command's CanExecute. With this approach, RaiseCanExecuteChanged calls end up scattered across your application. Instead, you can use the following class, which notifies the binding system automatically.

public class AutoDelegateCommand<T> : DelegateCommand<T>
{
    public INotifyPropertyChanged ViewModel { get; set; }

    public AutoDelegateCommand(INotifyPropertyChanged PresentationModel, Action<T> executeMethod)
        : base(executeMethod)
    {
        this.ViewModel = PresentationModel;
        ListenForPropertyChangedEventAndRequery(PresentationModel);
    }

    public AutoDelegateCommand(INotifyPropertyChanged PresentationModel, Action<T> executeMethod, Func<T, bool> canExecuteMethod)
        : base(executeMethod, canExecuteMethod)
    {
        this.ViewModel = PresentationModel;
        ListenForPropertyChangedEventAndRequery(PresentationModel);
    }

    //Re-query CanExecute whenever any property on the view model changes.
    private void ListenForPropertyChangedEventAndRequery(INotifyPropertyChanged presentationModel)
    {
        if (presentationModel != null)
        {
            presentationModel.PropertyChanged += (sender, args) =>
            {
                QueryCanExecute(args);
            };
        }
    }

    public void QueryCanExecute(PropertyChangedEventArgs args)
    {
        this.RaiseCanExecuteChanged();
    }
}

With this delegate command there is no need to call re-query from your view models. AutoDelegateCommand subscribes to the PropertyChanged event of the view model passed in, and for every property change it calls RaiseCanExecuteChanged.
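As an illustration, a hypothetical view model could use it like this; note that no RaiseCanExecuteChanged call appears anywhere in the class.

using System;
using System.ComponentModel;

public class OrderViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    private bool _isDirty;

    public OrderViewModel()
    {
        //Save is enabled only while there are unsaved changes.
        SaveCommand = new AutoDelegateCommand<object>(this, o => Save(), o => IsDirty);
    }

    public AutoDelegateCommand<object> SaveCommand { get; private set; }

    public bool IsDirty
    {
        get { return _isDirty; }
        set
        {
            _isDirty = value;
            //This notification alone is enough to make the bound button re-query CanExecute.
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs("IsDirty"));
            }
        }
    }

    private void Save() { /* persist changes, then set IsDirty = false */ }
}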

Wednesday, July 14, 2010

Silverlight Profiling

You can profile your Silverlight 4 applications, but not from inside VS 2010. You have to fire up a VS command prompt and run the following commands:
  1. VSPerfClrEnv /sampleon
  2. "C:\Program Files\Internet Explorer\iexplore.exe" http://yourhostname/pathtosilverlightapplication
  3. VSPerfCmd /start:sample /output:ProfileTraceFileName /attach:<ProcessID>. You can identify the process ID using Task Manager.
  4. Run your scenarios.
  5. VSPerfCmd /detach
  6. VSPerfCmd /shutdown
  7. VSPerfClrEnv /off

You can open the ProfileTraceFileName file in VS 2010 and see your code's execution paths. But before you open it, you need to provide the path to the Microsoft symbol server, and the path to your application's .pdb files, so that symbols can load. You can do this from VS 2010's Debug > Options and Settings menu: in the options window, select Symbols under the Debugging category, ensure the Microsoft symbol server is selected, and add the path to your .pdb files. After doing this, opening the trace file shows your code's execution paths.

Continuous Integration using TFS 2008 and VS 2010 RC

What is Continuous Integration?
Continuous integration is a practice where integration happens frequently, and each integration triggers an automated build along with verification of the code by automated unit tests.


Why should we use it?
A few of the issues you may face when integrating:
1. Uncompilable code – occurs largely when somebody changes a public API and checks in; dependent code elsewhere breaks due to the change.
2. Bugs – a change to the internal logic of one module, made without checking its impact elsewhere, may cause bugs.
By introducing continuous integration into the development process, you build automatically for each integration, and the build report can be mailed to all team members. If somebody checks in code which makes the solution uncompilable, with this process in place you detect the failure as soon as the build fails and can act on it.
Once the build completes, unit tests can be configured to run to verify correctness. Bugs are thereby detected early, which reduces the effort required to fix them: the sooner a bug is found, the easier it is to fix.
NOTE: Using a check-in policy, check-ins of uncompilable code can be avoided altogether.

Implementing using TFS 2008 and VS 2010 RC

We will discuss how to implement this in a situation where VS 2010 RC is used for development but TFS 2008 is still the server.
Until TFS 2005 there was no easy way to implement continuous integration, but TFS 2008 added support for it, which makes it much easier.

Simple Architecture of TFS 2008


I won't discuss TFS 2008 installation here.

Steps to implement continuous integration using TFS 2008:
Build Server Setup

1. Designate a machine as the build server. Ensure VS 2008 is installed on this machine so that it can build successfully. Install the TFS Build service on this system. (The TFS Build service setup can be found on your TFS server CD, under the BUILD directory in the root.)
2. During installation of the TFS Build service, you need to provide the credentials under which the service will run. Ensure this user is also part of the project's Build Services group on the TFS server.
3. In Visual Studio, open Team Explorer. Right-click on Builds and select Manage Build Agents to register the build server as a build agent.


4. In the Manage Build Agents dialog box, click the New button to add the build server.

5. In the Build Agent Properties window, enter the build server details.

6. Click OK; you have now added the build server. Once the build agent is added, you can define builds and run them on that build server.

Create Build Definition

1. Open the Team Explorer window, right-click on Builds, and select New Build Definition…


2. In the Build Definition window:
a. In the General tab, enter a name and description for the build definition.

b. In the Workspace tab, select the source control folder from which the build files are to be retrieved, and the local folder on the build server to which those files will be downloaded for building.

c. In Project File, if TFSBuild.proj has not yet been created under the selected source control folder, as is the case for a new build definition, click Create… to create TFSBuild.proj.

d. In the MSBuild project file creation wizard, select the solution you want to build.

e. Select the configuration to build.

f. In Options, select the unit test and code analysis criteria, and click Finish to finish creating the build file.

g. Select a retention policy, which lays out the criteria for build management.

h. In Build Defaults, specify the build agent that will execute this build, and a UNC drop location to which all build outputs will be copied.

i. In Trigger, for continuous integration, select the "Build each check-in" option. Then every check-in triggers a build, and unit tests and code analysis, if specified, are also performed.

j. Click OK; you have now created a build definition, and each check-in will trigger a build.
3. Double-clicking a build definition in Team Explorer opens the Build Explorer window, where you can monitor builds and see build reports.
With TFS 2008 and VS 2010 RC

If you are using VS 2008 with TFS 2008, you are done at this point. But if you are using VS 2010 RC, you are not: the TFS 2008 build agent uses the MSBuild 3.5 engine to compile solutions, and that engine cannot compile VS 2010 solutions. To compile VS 2010 solutions as well, perform the following steps on the build server:
1. Install VS 2010 RC on the build server to make sure .NET 4.0, its SDKs, and MSBuild 4.0 are installed.
2. Configure Team Build 2008 to use MSBuild 4.0 instead of MSBuild 3.5. To do this, edit %ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\TFSBuildService.exe.config and set the MSBuildPath property to C:\Windows\Microsoft.NET\Framework\v4.0.30128\
3. Restart the Team Foundation Build service.

Send Mail with Build Report

Till now, you have created a build agent, created a build definition, and specified check-in as the trigger point for the continuous integration process. Now we want to go a step further and send a mail with the build status along with the build report.

There are two options available for this.
1. All team members subscribe to build-completion events using the Project Alerts window.

Using the alert "A build completes", you can specify multiple email IDs in Send To for everyone you want to alert. This needs to be done for all users.
2. Using custom tasks.
You can develop custom tasks which are executed during the build process. A custom task is defined in a class library by deriving from the Task abstract class or by implementing the ITask interface; a minimal sketch follows this list.
The MSBuild Extension Pack already implements a number of custom tasks, of which sending mail is one.
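For illustration, a hand-rolled send-mail task might look like the sketch below; the property names and message body are invented, and in practice you would likely use the Extension Pack's ready-made mail task instead.

using System.Net.Mail;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

//A minimal custom task: MSBuild sets the properties from task attributes
//in the project file, then calls Execute().
public class SendBuildReportMail : Task
{
    [Required]
    public string SmtpServer { get; set; }

    [Required]
    public string To { get; set; }

    public string ReportPath { get; set; }

    public override bool Execute()
    {
        SmtpClient client = new SmtpClient(SmtpServer);
        using (MailMessage message = new MailMessage("build@example.com", To))
        {
            message.Subject = "Build completed";
            message.Body = "Build report: " + ReportPath;
            client.Send(message);
        }
        Log.LogMessage("Build report mailed to {0}", To);
        return true; //returning true marks the task as succeeded
    }
}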

Conclusion
Though continuous integration doesn't prevent check-ins of uncompilable or buggy code, it enables you to identify them quickly and act on them.

Unit Testing

Introduction


In this article we will discuss unit testing, why we need it, and the various quirks in implementing it.


Why?


In today's world of ever-increasing software complexity, where requirements are always changing, we need to control the cost of delivery while keeping quality high. Anyone who has worked on a medium- or long-term project knows the pain of bug fixing after release and during maintenance.
The cost of fixing a bug increases over the lifecycle of the software: during the requirements phase it is cheap, and it grows through development, testing, and maintenance. We need a mechanism to detect bugs early in the development cycle, and unit tests provide one.



What is Unit Test?


In a unit test we take a unit of code, isolate it from its dependencies, and inspect it for defects. Mostly the unit will be a method, or a set of methods when we are testing public APIs.



How to write Unit Tests?


Before discussing how to write unit tests, let's inspect the problem more thoroughly so that we can understand our approach better.
In development we mostly find the following quality problems:
1. Requirements not implemented.
2. Requirements not implemented properly.
3. Missed requirements (requirements that were never defined).
The developer might have missed implementing some requirements in production code; sometimes requirements are implemented but not correctly; in other cases requirements were never defined, but the developer implemented behavior anyway during development.
Our unit tests should be able to detect the problems above. There are three unit test approaches to tackle them.



Structural Unit Testing


In the structural approach we write unit test cases based on the production code we are trying to test. Here we use code coverage, the number of operators and operands in a statement, and the number of parameters as benchmarks for deciding how many test cases are required.
a. We try to achieve 100% code coverage.
b. The number of ways to invoke a method increases with its number of parameters, so keep parameter counts minimal, and write unit test cases covering all parameters.
c. Write test cases using boundary values (see the sketch after this list).
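As an illustration, boundary-value tests in MSTest might look like the following; the DiscountCalculator and its 100-item boundary are invented for the example.

using Microsoft.VisualStudio.TestTools.UnitTesting;

//Hypothetical class under test: orders of 100 items or more get a discount.
public class DiscountCalculator
{
    public decimal GetDiscount(int itemCount)
    {
        return itemCount >= 100 ? 0.1m : 0m;
    }
}

[TestClass]
public class DiscountCalculatorTests
{
    [TestMethod]
    public void GetDiscount_JustBelowBoundary_NoDiscount()
    {
        Assert.AreEqual(0m, new DiscountCalculator().GetDiscount(99));
    }

    [TestMethod]
    public void GetDiscount_AtBoundary_TenPercent()
    {
        Assert.AreEqual(0.1m, new DiscountCalculator().GetDiscount(100));
    }
}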



Functional Unit Testing


In functional testing, we write unit test cases for each requirement. Here we take a requirement or piece of functionality as the unit and test that aspect, isolating the class from its other dependencies.



No approach


"No approach" doesn't follow a specific set of rules; tests are written by experienced developers who can think of the cases that exercise a class. It can be effective, but it is not a systematic way to assure that things are always the way we want.


Using structural testing we can detect missed requirements and bugs in the code; using functional testing we can detect improper implementation of requirements, as well as bugs.



Does it solve the problem?


Unit tests assure quality for the finite set of scenarios you have tested: you can reasonably expect that your code works for those scenarios. Unit tests also act as a safety net whenever code changes are required. There will still be untested scenarios where bugs might be lurking, but over time you reduce the uncertainty in quality.



I can’t afford to write Unit Tests?


In this fast-paced world it may seem obvious that there is no time to write unit tests. But by not writing them, you increase the uncertainty in your code quality. If you take a horizon of more than a year for your code base, investing in unit tests reduces the effort required for testing and maintenance. The cost is modest during development but grows in maintenance; unless you plan to walk away and never maintain your code base, investing in unit tests will cut your maintenance costs substantially.



What about Integration Tests?


You can cover integration scenarios with unit tests as well. Integration scenarios are those where the class under test relies on another class to perform a task. In those scenarios you typically mock the dependencies and set expectations; the dependent object is expected to fulfill those expectations as envisioned in your unit tests. Turn those expectations into unit tests of the dependent object itself. By ensuring this, you test the integration scenarios too. It requires careful planning, but it is never impossible (see the sketch below).
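As an illustration, here is a sketch of such a test, using Moq purely as an example mocking library; OrderService and IMailSender are invented types. The expectation verified at the end is exactly what should also exist as a unit test against the real IMailSender implementation.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

//Hypothetical collaborator and class under test.
public interface IMailSender { void Send(string to, string body); }

public class OrderService
{
    private readonly IMailSender _mail;
    public OrderService(IMailSender mail) { _mail = mail; }

    public void Confirm(string customerEmail)
    {
        _mail.Send(customerEmail, "Order confirmed");
    }
}

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void Confirm_SendsConfirmationMail()
    {
        Mock<IMailSender> mail = new Mock<IMailSender>();

        new OrderService(mail.Object).Confirm("a@b.com");

        //This expectation is the contract the real IMailSender must honor;
        //mirror it as a unit test of that implementation.
        mail.Verify(m => m.Send("a@b.com", It.IsAny<string>()), Times.Once());
    }
}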



Apart from that, make your classes less chatty: reduce the surface area of interaction between classes. This decreases the number of integration scenarios and means fewer expectations to maintain.



My class is Un-Testable?


During testing we often stumble upon code which is not testable. Most of the time you can refactor the code to make it testable. If you are making static method invocations, which are not mockable, you can create a proxy and use the proxy to interact with the static objects and methods. You can then mock the proxy, turning untestable code into testable code, as in the sketch below.
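For example, a static call such as DateTime.Now can be hidden behind a proxy interface; the types below are invented for illustration.

using System;

//Proxy interface over the static call.
public interface IClock { DateTime Now { get; } }

//Production implementation delegates to the static member.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class InvoiceService
{
    private readonly IClock _clock;
    public InvoiceService(IClock clock) { _clock = clock; }

    public bool IsOverdue(DateTime dueDate)
    {
        //Testable: a unit test injects a fake IClock with a fixed Now.
        return _clock.Now > dueDate;
    }
}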



Reduce Maintenance of Unit Tests


As with any code, unit tests change over time along with the code they test. To reduce the maintenance cost of unit tests without compromising quality, test only public APIs, i.e. public methods, since these are the methods users call. Through them you should be able to reach, and therefore test, the private and protected functionality as well. This folds the benefits of structural testing into functional testing while achieving the same results, and by testing only the public API your unit tests become more resilient to refactoring of class internals.



Conclusion


If you look at the total cost of software, maintenance costs dwarf construction costs. To be properly equipped to reduce maintenance costs, unit tests are the most indispensable tool: they reduce the uncertainty in your code.