Wednesday, 26 October 2011

Data Transfer Objects - some possibilities

Plain Old Class Objects
In their simplest incarnation, Data Transfer Objects do 'what it says on the tin'. That is, they hold data, and transfer that data, typically from one layer of an application to another.

public class Product
{
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}
Data About Data
We often want to know a bit more about what the data is like, as well as what values it holds. In the context of Data Transfer Objects, we can use Data Annotations to express this metadata. It's often safe to make it available between layers of the application.

For example, after looking at our database structures, we might annotate our ProductCode:
[Required, MaxLength(10)]
public string ProductCode { get; set; }
In the context of Data Transfer Objects, we might use the annotations to give 'immediate feedback' to the user on a data-entry interface. We might also use them to check the data quickly in the data-access layer before attempting to save it, typically avoiding a trip to the database and giving the user a better message. The key point is that the metadata is the same between the layers, and potentially between different user interfaces for the same DTO.

There is a balance between this added consistency and performance, of course. Setting up a form or page with the maximum lengths uses reflection to read the metadata; however, it is a 'once per control' cost. I won't discuss here when to use annotations for this purpose, when not to, or which tools would help...
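As a sketch of the 'check before saving' idea, the data-access layer could run the DTO's annotations through the standard System.ComponentModel.DataAnnotations Validator before attempting a save. The DtoValidation helper name is my own, and the Product class is repeated here so the example stands alone:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Product
{
    [Required, MaxLength(10)]
    public string ProductCode { get; set; }
}

public static class DtoValidation
{
    // Runs the DTO's annotations and returns any failures,
    // saving a round trip to the database for obviously bad data.
    public static IList<ValidationResult> Validate(object dto)
    {
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(dto, new ValidationContext(dto, null, null), results, true);
        return results;
    }
}
```

The same helper serves any annotated DTO, which is the consistency-between-layers point above.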

Extension Methods For Other People's Classes
Extension methods allow us to add lightweight functionality to an existing class. A utility function ToDateOrEmptyString might be implemented as an extension to the Nullable<System.DateTime> class. It gives us more readable code, and a more consistent presentation (especially if our application has several alternative user interfaces but requires a specific date format).
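A minimal sketch of such an extension; the 'dd MMM yyyy' format and the use of the invariant culture are assumptions for illustration, standing in for whatever format the application standardises on:

```csharp
using System;
using System.Globalization;

public static class NullableDateTimeExtensions
{
    // Consistent presentation for nullable dates across all user interfaces;
    // returns an empty string rather than forcing a null check at every call site.
    public static string ToDateOrEmptyString(this DateTime? date)
    {
        return date.HasValue
            ? date.Value.ToString("dd MMM yyyy", CultureInfo.InvariantCulture)
            : string.Empty;
    }
}
```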

Extension Methods For Our Data Transfer Objects
While our DTOs are primarily concerned with data, sometimes we have extension methods which can be cleanly included. A typical example on our product class might examine the PublishableFrom and PublishableUntil properties, and work out if a product IsPublishableToday().  An advantage is that we don't have to worry about serializing and deserializing a derived property, simply reuse the extension methods in all layers requiring them.
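A sketch of how this might look; the Product class is repeated so the example compiles on its own, and the split into an IsPublishableOn(date) method is my own choice, made so the date logic can be exercised independently of the clock. Treating null bounds as open-ended is an assumed business rule:

```csharp
using System;

public class Product
{
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}

public static class ProductExtensions
{
    // A product is publishable on a date if that date falls inside the
    // (optional) window; null bounds are treated as open-ended.
    public static bool IsPublishableOn(this Product product, DateTime date)
    {
        bool started  = !product.PublishableFrom.HasValue  || product.PublishableFrom.Value  <= date;
        bool notEnded = !product.PublishableUntil.HasValue || product.PublishableUntil.Value >= date;
        return started && notEnded;
    }

    public static bool IsPublishableToday(this Product product)
    {
        return product.IsPublishableOn(DateTime.Today);
    }
}
```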

Where to Do All This Work
Personally, I'd keep formatting as extension methods implemented in the GUI layer. However, if there are alternative GUIs, I'd let that override my usual preference, unless the task was large enough to justify a whole new formatting layer.

More Atomic Requests
The extension method technique is intended to work with the black-box view that n-tier architecture creates. I wouldn't write a heavyweight extension method IsAvailable() to check whether there is any stock, whether the logged-in user is authorised to obtain it, and so forth. Instead we might have a service-layer or repository-layer call that returns a boolean. Or, more probably, the Products returned would already be filtered, calling GetProductsAvailable(), GetProductsPublishableToday(), or GetProductsAll() as appropriate.
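As an illustrative contract only: the filtering happens behind the repository boundary, so callers never need a heavyweight check on the DTO itself. The in-memory implementation is a sketch of the 'publishable today' filter, not a real data-access class:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}

public interface IProductRepository
{
    IList<Product> GetProductsAll();
    IList<Product> GetProductsPublishableToday();
}

// Minimal in-memory sketch; a real implementation would query the database.
public class InMemoryProductRepository : IProductRepository
{
    private readonly List<Product> _products;

    public InMemoryProductRepository(IEnumerable<Product> products)
    {
        _products = products.ToList();
    }

    public IList<Product> GetProductsAll()
    {
        return _products;
    }

    public IList<Product> GetProductsPublishableToday()
    {
        var today = DateTime.Today;
        return _products
            .Where(p => (!p.PublishableFrom.HasValue || p.PublishableFrom.Value <= today)
                     && (!p.PublishableUntil.HasValue || p.PublishableUntil.Value >= today))
            .ToList();
    }
}
```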

Aggregate Data Transfer Objects
It's common to drill down into the detail of an object. So, again on the theme of atomic requests, if the user wanted to view the actual levels of stock, we would consider a design with alternative Dtos for our product. In most cases we would retrieve a plain Dto just telling us about the product basics; however, we would also have a more complex Dto, say ProductWithStockAvailableAndDue. I won't dwell here on possible designs for StockAvailableAndDue, or all possible ProductWithXXX aggregates.
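One possible shape for such an aggregate; the field names are illustrative only, standing in for whatever the stock subsystem actually tracks:

```csharp
using System;

// Illustrative detail object for the drill-down view.
public class StockAvailableAndDue
{
    public int QuantityAvailable { get; set; }
    public int QuantityDue { get; set; }
    public DateTime? NextDeliveryDate { get; set; }
}

// The richer DTO: product basics plus the stock detail, fetched in one request.
public class ProductWithStockAvailableAndDue
{
    public string ProductCode { get; set; }
    public StockAvailableAndDue Stock { get; set; }
}
```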

Not One Size Fits All
It is worth considering Small, Medium, and Large sizes of Dto for a base object: Small having an Id and a brief description, Medium roughly equivalent to a table row, and Large being an object and its lazy-loaded children. As with any design decision, it all depends on the details of the application.
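Sketched as classes, with names and fields illustrative only:

```csharp
using System;
using System.Collections.Generic;

public class ProductSmall            // an Id and a brief description
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public class ProductMedium           // roughly equivalent to a table row
{
    public int Id { get; set; }
    public string Description { get; set; }
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}

public class ProductLarge            // the object and its (lazy-loaded) children
{
    public ProductMedium Core { get; set; }
    public List<ProductSmall> RelatedProducts { get; set; }
}
```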

Friday, 21 October 2011

Source control for an assortment of development machines

Connection strings

For a long time, we have dealt with an assortment of SQL Server releases across developers' machines. Suppose we have a hibernate config file for a DotNet project. Developer 1 might have the advanced search facilities installed:
    <property name="hibernate.connection.connection_string"> Server=.\SQLEXPAPR;Database=project1Db; Uid=user;Pwd=password</property>
while Developer 2 might have a different version of SQL Server:
<property name="hibernate.connection.connection_string"> Server=.\SQLEXPRESS;Database=project1Db; Uid=user;Pwd=password</property>

By convention, we keep any config sections for an application or a test suite in a separate file such as hibernate.config.xml. Instead of dealing with constant conflicts as each developer checks in their own .config file, we check in a .config.xml.example file, which each developer copies and edits with their own setup.
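For instance, the checked-in example file might look like the following; the placeholder convention is ours to choose, and YOUR_INSTANCE_NAME here is purely illustrative:

```xml
<!-- hibernate.config.xml.example: copy to hibernate.config.xml and edit for your machine -->
<property name="hibernate.connection.connection_string">Server=.\YOUR_INSTANCE_NAME;Database=project1Db;Uid=user;Pwd=password</property>
```

The real hibernate.config.xml is then excluded from source control, so each developer's instance name never reaches the repository.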

Theoretically, there could be a change which affects the whole system. In practice, however, we find it's sufficient to keep the web.config or app.config universal, and the variable config sections separate. And to talk to each other if we do change anything!

Compilation Options and Testing Tools

We all have slightly different testing tools installed (e.g. NUnit, JustCode), which we use before checking in and making changes available to the Continuous Integration package, currently CruiseControl. Some of the tools need setting up in the project properties. Because this is done rarely, we just edit the project properties by hand; for example, a unit test project might have, in the Debug section, 'Start external program' set to c:\CommonLibrary\Third Party Tools\NUnit\nunit.exe. This isn't a problem for source control, because the individual changes are stored in the .csproj.user or .vbproj.user file, and we simply exclude those files from the check-in process.

32 and 64 bit

As we move from 32-bit operating systems to 64-bit, we find we need slight variations in the project properties to cope with different versions of tools. The Compile tab on the project Properties form is the key one. An early attempt had us scratching our heads when we did a Build/Clean, because there were different versions of the output path for different platforms, so we still picked up incompatible versions of the code.

The trick is to use the Configuration Manager to set up a build specifically for x86. (If you need to add a new project platform, be careful of the 'create new solution platforms' tick box.) Then check the Advanced Compile Options and Build Output Path for each platform as relevant.

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">

This needs to be done for both platforms (x86, x64)  and for both Debug and Release.
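Sketching what the relevant sections of the .csproj or .vbproj end up looking like, with one PropertyGroup per configuration/platform pair; the paths here are illustrative, and the Release|x86 and Debug|x64 groups follow the same pattern:

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">
  <PlatformTarget>x86</PlatformTarget>
  <OutputPath>bin\x86\Debug\</OutputPath>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|x64' ">
  <PlatformTarget>x64</PlatformTarget>
  <OutputPath>bin\x64\Release\</OutputPath>
</PropertyGroup>
```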

To watch out for:
If your tests use relative paths, you'll need to set up your Build Output Paths with the same depth, e.g. bin\x86\debug and bin\any\debug.