Friday, 30 December 2011

Christmas Lists - legacy ASP.Net, jQuery and database constraints.

Letters to Santa

Mr and Mrs Geek have trained their kids, nephews, nieces and grandchildren to write to Santa using a home-grown system, developed over a number of years. It saves each letter under the person's name. If people want to edit their letter (if they think of new presents they would like) they can see a list and choose one. Of course, everyone on the list thinks they have been good!

The database has been set up with a unique constraint on the person's name. It would be terrible if Santa mixed up Mr Geek's brother's son John G and Mrs Geek's sister's boy John J. Also, the kids want completely different items each year - no babyish stuff for them - so they overwrite last year's letter completely. The adults, on the other hand, are delighted to add each year to their collection of jokey T-shirts.

Well, what with all these new great-nephews and great-nieces added to the tribe, the system has to be a bit better at telling which are new letters and which are edits of old ones. People get stuck when they have written a new letter, try to save it, and find there is already one with the same name.
In Christmases past, the parents have tried supplying one save button for a new letter and another for an old one - it was tough on the old grey memory cells. They've also let the server come back with an exception - that was fine for a genuine conflict, but the interface was clunky when the youngster really did want to replace the old list.

So this year they're adding a new feature: 'The letter is already written. Are you sure you want to replace it?' They didn't want to ask their family to click a replace-please button, then click again to actually do the replace.

While they would love to rewrite the whole thing in the shiny new MVC technology, there are puddings to stir, cakes to ice, mince pies to cut out...

Playing with the new feature

They toyed with the idea of using exactly the same hidden-field mechanism as ASP.Net, something like the following snippet
  function __doPostBack(eventTarget, eventArgument) {
    var theForm = document.forms[0];
    if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
      theForm.__EVENTTARGET.value = eventTarget;
      theForm.__EVENTARGUMENT.value = eventArgument;
      theForm.submit();
    }
  }
and set the button OnClientClick="__doPostBack('resubmitme', 'update');"

Santa was watching. Stealing ASP.Net's fields is not being good. So the two senior Geeks went to play with jQuery instead.

Their system still lets the database throw an exception if someone tries to insert a duplicate. Doing an attempted retrieve before an attempted insert just didn't feel right, and might put more strain on an ancient network and database server.

So it now makes some changes to the form, asks the question, and resubmits itself if that's confirmed.

Cutting the aspx

They made sure the jQuery library was included. Then added:

a JavaScript function at the top of the file (easier to debug that way) that asks the question. If the user is happy, it sets a hidden field, to be read by the server when the JavaScript resubmits.
  function ConfirmUpdate(duplicateNameMessage, btnAddClientID) {
    var errorText = $.trim($('.message-error').first().text());
    if (errorText == duplicateNameMessage) {
      if (confirm(duplicateNameMessage + '. Do you want to replace it?')) {
        $('#txtShouldUpdate').val('true');
        var btnAdd = $('#' + btnAddClientID);
        btnAdd.val('Replacing...');
        btnAdd.click();
      }
    }
  }
a text box for a name
              <asp:TextBox runat="server" ID="txtNewName" Visible="true"> </asp:TextBox>
a button
              <asp:Button runat="server" ID="btnAdd" CssClass="seasonalbutton"
                      OnClick="btnAdd_Click" 
                         ToolTip="some Help" />
a hidden field
             <input id="txtShouldUpdate" name="txtShouldUpdate" type="hidden"  /> 
and an error block
           <div class="message-error">
                <asp:Literal runat="server" ID="ctlAddError" EnableViewState="true"></asp:Literal>
            </div>

The code behind

btnAdd_Click() inspects the hidden field to see if it should update or insert. TryInsert() usually 'just works'. However, a duplicate-name exception causes a call to AddJQueryScriptToConfirmReplace(), which wires up the document.ready event and triggers our client-side dialogue.

    protected void btnAdd_Click(object sender, EventArgs e)
    {
      string name = txtNewName.Text.Trim();
      if (string.IsNullOrEmpty(Request.Params.Get("txtShouldUpdate")))
      {
        TryInsert(name);
      }
      else
      {
        TryUpdate(name);
      }
    }

    private void TryInsert(string name) //TryUpdate() is left as a puzzle for your stocking
    {
      try
      {
        var saved = InsertWithName(name);
        if (saved != null)
        {
          RedirectToSavedLists();
        }
      }
      catch (DuplicateNameException)
      {
        var message = DuplicateNameMessage(name);
        ctlAddError.Text = Server.HtmlEncode(message); // shown in the .message-error div
        AddJQueryScriptToConfirmReplace(message);      // the same text the client script compares
      }
      catch (Exception exc)
      {
        var message = exc.Message; // any other exception
        ctlAddError.Text = Server.HtmlEncode(message);
      }
    }

    private void AddJQueryScriptToConfirmReplace(string duplicateNameMessage)
        {
          BindJQuery();
          StringBuilder jsStart = new StringBuilder();
          jsStart.AppendLine("<script type=\"text/javascript\">");
          jsStart.AppendLine("$(document).ready(function() {");
          jsStart.AppendLine("ConfirmUpdate('" + duplicateNameMessage + "','" + btnAdd.ClientID  + "');");
          jsStart.AppendLine("});");
          jsStart.AppendLine("</script>");
          Page.ClientScript.RegisterClientScriptBlock(GetType(),
               "yourscriptblock",
                jsStart.ToString());
        }
    private string DuplicateNameMessage(string name)
    {
      return "The name " + name + " already exists";
      // Why? Because it's the server's responsibility to work out what the exception message should be.
    }

Notes: Edited to remove first moves towards discussing database structures and what if there are child elements, say a PresentRequest table. That needs a blog entry of its own, another time.

Friday, 25 November 2011

The OAKen saga - a battle with a SQL Server install

SQL Server 2008 Express must run on our server OAK for Jenkins Continuous Integration.

 
The monster fights fiercely.

 
Had our aim been true, we would:
Download SQL Server Express With Tools (free version) to get SQL Server Management Studio plus the Advanced Services such as Searching
Install Visual Studio, whatever releases we want.

 
But, we had put on Visual Studio, so:
The SQLEXPRESS beast ran as Windows Authentication. It would not go to Mixed Mode.
The change could only be wrought by SSMS.

 
Alas, the download (SSMS) has a bug. It asked for Service Pack upgrades, but could not see them.
It could not see that VS2008 SP1 was there.

 
With greatest care, your hero delved into the registry.
She took the trail http://devlicio.us/blogs/sergio_pereira/archive/2010/03/07/rule-quot-previous-releases-of-microsoft-visual-studio-2008-quot-failed.aspx
HKLM\SOFTWARE\Microsoft\DevDiv\VS\Servicing\9.0\PRO\1033
"SP"=dword:00000000
"SPIndex"=dword:00000000
"SPName"="RTM"
and
HKLM\SOFTWARE\Microsoft\DevDiv\VS\Servicing\9.0\
"SP"=dword:00000000
"SPIndex"=dword:00000000

 
And now, we have gained a new instance install of SQL Express with Advanced Services.
Known henceforth as OAK\SQLEXPADV. It is happy, for I can run SQL scripts.

 
I fear OAK\SQLEXPRESS is lost. It lamely accepts Windows Authentication logins, but will not upgrade to Mixed mode.

 
Even OAK\SQLEXPADV has a journey to make. It cannot be reached from the remote regions of BIRCH or PINE. Or not yet!

 
I trust OAK\SQLEXPADV is now close to Jenkins. The script RebuildOakDatabase succeeds.
I shall continue my quest this afternoon.

 
From your bard,
Margaret.

Later writers add:
A couple of things on Jenkins that weren't obvious:
  • we have set MSBuild parameters /p:Configuration=Release /p:Platform="Any CPU" to match Third Party Tools\NUnit\nunit-console.exe as opposed to Third Party Tools\NUnit\nunit-console-x86.exe.
  • we need to think about whether x86 or x64 will be our default build, and which we should unit- and integration-test.
  • Jenkins uses its workspace as the current directory, so we have added a wrapper BuildOakDatabase.bat that cds to our main scripts folder before running the usual BuildOakDatabase.bat.

Jenkins is now succeeding for all four of our LabCom suite products. We can switch off the old Cruise Control tests in a few days' time.

Wednesday, 26 October 2011

Data Transfer Objects - some possibilities

Plain Old Class Objects
In their simplest incarnation, Data Transfer Objects do 'what it says on the tin'. That is, they hold data, and transfer that data, typically from one layer of an application to another.

public class Product
{
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}
Data About Data
We often want to know a bit more about what data is like, as well as what values it has. In the context of Data Transfer Objects, we can use Data Annotations to indicate this metadata. It's often safe to make it available between layers of the application.

For example, after looking at our database structures, we might annotate our Product Code
[Required, MaxLength(10)]
public string ProductCode { get; set; }
In the context of Data Transfer Objects, we might use the annotations to set up 'immediate feedback' for the user in a data-entry interface, and also to check the data quickly in the data-access layer before attempting to save it, typically avoiding a trip to the database and giving the user a better message. The key point is that the metadata is the same between the layers, and potentially between different user interfaces for the same DTO.
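As a sketch of that 'check before saving' idea: the standard DataAnnotations Validator can run all of a DTO's annotations in one call. The DtoChecker helper name is ours, not from the original system.

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// A minimal sketch of checking a DTO's annotations in the data-access
// layer before attempting a save. DtoChecker is a hypothetical helper;
// Validator.TryValidateObject is the standard DataAnnotations call.
public static class DtoChecker
{
    public static IList<ValidationResult> Check(object dto)
    {
        var results = new List<ValidationResult>();
        var context = new ValidationContext(dto, null, null);
        Validator.TryValidateObject(dto, context, results, true);
        return results; // an empty list means the DTO passed its annotations
    }
}
```

A Product with a missing or over-long ProductCode would then come back with ValidationResults to report to the user, without a round trip to the database.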

There is a balance between this added consistency and performance, of course. To set up a form or page with the maximum lengths uses reflection to get the metadata - however it is a 'once per control' cost. I won't discuss here when to use annotations for this purpose, when not to, which tools would help...

Extension Methods For Other People's Classes
Extension methods allow us to add lightweight functionality to an existing class. A utility function ToDateOrEmptyString might be implemented as an extension to the Nullable<System.DateTime> class. It gives us more readable code, and a more consistent presentation (especially if our application has several alternative user interfaces but requires a specific date format).
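A minimal sketch of such an extension; the 'dd MMM yyyy' house format is an assumption of ours, since the post doesn't name one.

```csharp
using System;

public static class NullableDateTimeExtensions
{
    // Formats a nullable date consistently across the application.
    // "dd MMM yyyy" is an assumed house format, not one from the original post.
    public static string ToDateOrEmptyString(this DateTime? value)
    {
        return value.HasValue ? value.Value.ToString("dd MMM yyyy") : string.Empty;
    }
}
```

So `product.PublishableFrom.ToDateOrEmptyString()` reads cleanly whether or not the date is set.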

Extension Methods For Our Data Transfer Objects
While our DTOs are primarily concerned with data, sometimes we have extension methods which can be cleanly included. A typical example on our product class might examine the PublishableFrom and PublishableUntil properties, and work out if a product IsPublishableToday(). An advantage is that we don't have to worry about serializing and deserializing a derived property; we simply reuse the extension methods in all layers requiring them.
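A sketch of IsPublishableToday() against the Product DTO above; treating a null bound as open-ended is our assumption, not something the post specifies.

```csharp
using System;

// The Product DTO from the first snippet, repeated so this compiles alone.
public class Product
{
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}

public static class ProductPublishingExtensions
{
    // A product is publishable today when today falls inside the window;
    // a null PublishableFrom/PublishableUntil is taken as 'no bound'.
    public static bool IsPublishableToday(this Product product)
    {
        var today = DateTime.Today;
        return (product.PublishableFrom == null || product.PublishableFrom.Value.Date <= today)
            && (product.PublishableUntil == null || product.PublishableUntil.Value.Date >= today);
    }
}
```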

Where to Do All This Work
Personally, I'd want to keep formatting as extension methods implemented in the GUI layer. However, if there are alternative GUIs, I'd let that override my usual preference. Unless the task was large enough to justify a whole new formatting layer.

More Atomic Requests
The extension method technique is intended to work with the black-box view that n-tiered architecture creates. I wouldn't have a heavyweight extension method IsAvailable() to check whether there is any stock in, whether the logged-in user is authorised to obtain it, and so forth. So we might have a service-layer or repository-layer call that returned a boolean. Or, more probably, our Products returned would be filtered, calling GetProductsAvailable(), GetProductsPublishableToday(), or GetProductsAll() as appropriate.

Aggregate Data Transfer Objects
It's common to drill down into the detail of an object. So again on the theme of atomic requests, if the user wanted to view the actual levels of stock, we would consider a design with alternative Dtos for our product. In most cases we would retrieve a plain Dto just telling us about the product basics, however we would also have a more complex Dto, say ProductWithStockAvailableAndDue. I won't dwell here on possible designs for StockAvailableAndDue, or all possible ProductWithXXX aggregates.

Not One Size Fits All
It is worth considering Small, Medium, Large sizes of Dtos for a base object: Small having Id and a brief description, Medium roughly equivalent to a table row, Large being an object and its lazy-loaded children. As with any design decision, it all depends on the details of the application.
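One possible shape for the sizes, with illustrative names only; the StockItem child and the inheritance chain are our choices for the sketch, not a prescription.

```csharp
using System;
using System.Collections.Generic;

public class ProductSmall                   // Id plus a brief description
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public class ProductMedium : ProductSmall   // roughly a table row
{
    public string ProductCode { get; set; }
    public DateTime? PublishableFrom { get; set; }
    public DateTime? PublishableUntil { get; set; }
}

public class StockItem { }                  // hypothetical child record

public class ProductLarge : ProductMedium   // the object plus its children
{
    public IList<StockItem> StockItems { get; set; }
}
```

Whether the sizes should inherit from one another, as here, or stay as independent classes, depends on how they are serialized and mapped.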

Friday, 21 October 2011

Source control for an assortment of development machines

Connection strings

For a long time, we have dealt with an assortment of SQL Server releases across developers' machines. Suppose we have an NHibernate config file for a DotNet project. Developer 1 might have the advanced search facilities installed
    <property name="hibernate.connection.connection_string"> Server=.\SQLEXPAPR;Database=project1Db; Uid=user;Pwd=password</property>
while Developer 2 might have a different version of SQL Server
<property name="hibernate.connection.connection_string"> Server=.\SQLEXPRESS;Database=project1Db; Uid=user;Pwd=password</property>

By convention, we keep any config sections for an application or a test suite in a separate file such as hibernate.config.xml. Instead of dealing with the constant conflicts as each developer checks in their own .config file, we check in a .config.xml.example file, which each developer edits with their own setup.

Theoretically, there could be a change which affects the whole system. However, in practice we find that it's sufficient to keep the web.config or app.config universal, and the variable bits for the different config sections separate. And to talk to each other if we do change anything!


Compilation Options and Testing Tools

We all have slightly different testing tools installed (e.g. NUnit, JustCode), which we use before checking in and making changes available to the Continuous Integration package, currently CruiseControl. Some of the tools need setting up in the project properties. Because this is done rarely, we just edit the project properties by hand, e.g. a unit-test project might have, in the Debug section, StartExternalProject set to c:\CommonLibrary\Third Party Tools\NUnit\nunit.exe. This isn't a problem for source control, because the individual changes are stored in the .csproj.user or .vbproj.user file, and we simply exclude them from the check-in process.

32 and 64 bit

As we move from 32-bit operating systems to 64-bit, we find we need slight variations in the project properties to cope with different versions of tools. The Compile tab on the project properties form is the key one. An early attempt had us scratching our skulls when we did a Build/Clean, because there were different versions of the output path for different platforms, so we still picked up incompatible versions of the code.

The trick is to use the Configuration Manager to allow a build specifically for x86. (If you need to add a new project platform, be careful of the tick-box setting 'create new solution platforms'.) Then check the Advanced Compile Options and Build Output Path for each platform as relevant.

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">

This needs to be done for both platforms (x86, x64)  and for both Debug and Release.


To watch out for:
If your tests use relative paths, you'll need to set up your Build Output Path with the same depth e.g. bin\x86\debug and bin\any\debug.