Monday, 12 January 2015

Dependency inversion

Yesterday's post showed how to get all the reports with all their subreports.

Here's the transform that inverts this: all subreports with their calling reports,
as well as all reports that are not subreports at all. The grouping uses the usual Muenchian method (an xsl:key on the subreport name, plus generate-id() and count() comparisons) to pick one entry per subreport name.

<?xml version='1.0'?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
>
<xsl:output method="xml" encoding="utf-8" />
<xsl:key name="subreports-by-name" match="/ReportHierarchy/ReportDirectory/Report/Subreport" use="Name"/>
<xsl:template match="/">
<ReportDependencies>
<Subreport>
<Name>
<xsl:text>None</xsl:text>
</Name>
<xsl:for-each select="/ReportHierarchy/ReportDirectory/Report[generate-id() != generate-id(key('subreports-by-name', Name)[1])]">
<xsl:sort select="Name" />
<Report>
<Name>
<xsl:value-of select="Name" />
</Name>
<ReportDirectory>
<xsl:value-of select="../Name"/>
</ReportDirectory>
</Report>
</xsl:for-each>
</Subreport>
<xsl:for-each  select="/ReportHierarchy/ReportDirectory/Report/Subreport [count(. | key('subreports-by-name', Name)[1]) = 1]">
<xsl:sort select="Name" />
<Subreport>
<Name>
<xsl:value-of select="Name"/>
</Name>
<xsl:value-of select="surname" />
<xsl:for-each select="key('subreports-by-name', Name)">
<xsl:sort select="../Name" />
<Report>
<Name>
<xsl:value-of select="../Name" />
</Name>
<ReportDirectory>
<xsl:value-of select="../../Name"/>
</ReportDirectory>
</Report>
</xsl:for-each>
</Subreport>
</xsl:for-each>
</ReportDependencies>
</xsl:template>
</xsl:stylesheet>

And the sample output....
<?xml version="1.0" encoding="utf-8"?>
<ReportDependencies>
<Subreport>
<Name>None</Name>
<Report>
<Name>AdventureWorks2008R2_Base.rdl</Name>
<ReportDirectory>AdventureWorks 2008R2</ReportDirectory>
</Report>
        </Subreport>
<Subreport>
<Name>subreportDT1</Name>
<Report>
<Name>FinishingPackNetSpeedForGroup1.rdl</Name>
<ReportDirectory>SsrsReportDocumentation</ReportDirectory>
</Report>
<Report>
<Name>FinishingPackNetSpeedForGroup2.rdl</Name>
<ReportDirectory>SsrsReportDocumentation</ReportDirectory>
</Report>
</Subreport>
</ReportDependencies> 

Sunday, 11 January 2015

SSRS static analysis. Dependencies and health checks

As discussed in this morning's post, we've stitched together a utility to go through the folders containing SSRS source code, then write out the dependency tree like this

<?xml version="1.0" encoding="utf-8"?>
 <ReportHierarchy>
<ReportDirectory> <!-- with no subreports -->
<Name>GaugeSamples 2008 R2</Name> <!-- from adventure works samples-->
<Report>
<Name>Gallery_ThermometerRS2008R2.rdl</Name>
</Report>
<Report>
<Name>PointersRS2008R2.rdl</Name>
</Report>
<Report>
<Name>ScalesRS2008R2.rdl</Name>
</Report>
</ReportDirectory>
<ReportDirectory> <!-- with no reports -->
<Name>logs</Name>
</ReportDirectory>
<ReportDirectory><!-- with subreports, names changed to protect confidentiality -->
<Name>CustomerReports</Name>
<Report>
<Name>ReportingPackSpeed.rdl</Name>
<Subreport>
<Name>subreportDT1</Name>
<ReportName>SpeedMachineGroupForYear</ReportName>
</Subreport>
<Subreport>
<Name>subreportDT2</Name>
<ReportName>SpeedMachineGroupForYear</ReportName>
</Subreport>
             </Report>
</ReportDirectory>
</ReportHierarchy>

There are a couple of tidy-ups to do on the code:

  • To get the correct ReportDirectory if there is more than one level of subdirectories.
  • To write and execute the XSLT transformation to get the SubreportOf property, including a None output (one possible shape for that transform is sketched after the example below).
  • To find out whether subreports are supported in the header and footer, and if so include them in the .xslt file.
  <ReportDirectory>
    <Name>CustomerReports</Name>
    <Report>SpeedMachineGroupForYear
      <SubreportOf>ReportingPackSpeed.rdl</SubreportOf>
    </Report>
  </ReportDirectory>
  <ReportDirectory>
    <Name>GaugeSamples 2008 R2</Name> <!-- from adventure works samples-->
    <Report>Gallery_ThermometerRS2008R2.rdl
      <SubreportOf>None</SubreportOf>
    </Report>
  </ReportDirectory>
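
A first cut of that SubreportOf transform might look something like the sketch below. It is untested; it assumes the input is the ReportHierarchy xml shown further down, and that the Subreport/ReportName values carry no .rdl extension (as in the example), so the lookup key appends one - which also means the report names it emits keep their .rdl extension.

<?xml version='1.0'?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
>
<xsl:output method="xml" encoding="utf-8" indent="yes"/>
<!-- index the subreport references by the report file they point at -->
<xsl:key name="callers-by-report" match="/ReportHierarchy/ReportDirectory/Report/Subreport" use="concat(ReportName, '.rdl')"/>
<xsl:template match="/">
<ReportHierarchy>
<xsl:for-each select="/ReportHierarchy/ReportDirectory">
<ReportDirectory>
<Name><xsl:value-of select="Name"/></Name>
<xsl:for-each select="Report">
<Report>
<xsl:value-of select="Name"/>
<xsl:choose>
<!-- one SubreportOf element per report that calls this one -->
<xsl:when test="key('callers-by-report', Name)">
<xsl:for-each select="key('callers-by-report', Name)">
<SubreportOf><xsl:value-of select="../Name"/></SubreportOf>
</xsl:for-each>
</xsl:when>
<xsl:otherwise>
<SubreportOf>None</SubreportOf>
</xsl:otherwise>
</xsl:choose>
</Report>
</xsl:for-each>
</ReportDirectory>
</xsl:for-each>
</ReportHierarchy>
</xsl:template>
</xsl:stylesheet>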

However, as is and without warranty, here is the batch file:
@Echo off

rem Apply %transformation%.xslt to all .rdl files in a directory
set transformation=ReportHierarchy
rem Directory containing the rdl files
set rdlDir=c:\projects\SSRSTools\
rem create a new, empty file; WriteXmlHeader.vbs then writes the xml declaration into it
type nul >%transformation%.xml
WriteXmlHeader.vbs %transformation%.xml
rem root element tag is same as transformation name
AppendOpeningTag %transformation% %transformation%.xml
rem temp file and per-report tag: set before the loop so that plain %var% expansion works inside it
set tempfile=temp.xml
set fileTag=Report
for /f "delims=" %%d in ('dir /b /ad "%rdldir%\*"') do (
echo "%%d\*.rdl"
rem for everything in the directory, apply the transformation enclosing in <ReportDirectory></ReportDirectory> tags
AppendOpeningTag ReportDirectory %transformation%.xml
AppendName "%%d" %transformation%.xml
rem We use the /f flag to only return filenames. Our variable name will be "%%r"
rem "dir /b *.*" indicates a bare directory listing (no filesize etc. just filenames)
for /f "delims=" %%r IN ('dir /b /a-d  "%rdldir%\%%d\*.rdl"') do (
rem echo "%%d\%%r"
   rem Append the transformation results, with suitable header and footer,
   rem while removing the xmlHeader that the transformation generated
AppendOpeningTag %fileTag% %transformation%.xml
AppendName %%r %transformation%.xml
msxsl.exe "%rdldir%\%%d\%%r" %transformation%.xslt -o %tempfile%
AppendWithoutXmlHeader.vbs %tempfile% %transformation%.xml
AppendClosingTag %fileTag% %transformation%.xml
)
rem tidy up temporary file
if exist %tempfile% Del %tempfile%

rem close directory tag
AppendClosingTag ReportDirectory %transformation%.xml
)
rem close root tag
AppendClosingTag %transformation% %transformation%.xml
echo.
Echo %transformation%.xml created
And the extraction xslt file

<?xml version='1.0'?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:a="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition"
exclude-result-prefixes="a">
<xsl:output method="xml" encoding="utf-8" />
<xsl:template match="/">
<xsl:for-each select="a:Report/a:Body/a:ReportItems/a:Subreport">
<xsl:call-template name="subreport"/>
</xsl:for-each>
</xsl:template>
<xsl:template name="subreport">
<Subreport>
<Name>
<xsl:value-of select="@Name"/>
</Name> <ReportName>
<xsl:value-of select="a:ReportName"/>
</ReportName>
</Subreport>
</xsl:template>
</xsl:stylesheet>
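
For a single .rdl that contains subreports, this transform on its own emits just bare Subreport fragments like the ones below (names taken from the aggregated example above); it is the batch file's Append* helpers that wrap them in the Report and ReportDirectory elements.

<Subreport>
<Name>subreportDT1</Name>
<ReportName>SpeedMachineGroupForYear</ReportName>
</Subreport>
<Subreport>
<Name>subreportDT2</Name>
<ReportName>SpeedMachineGroupForYear</ReportName>
</Subreport>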
Thanks to the person who posted the tip about dealing with the reportdefinition namespace at http://www.sqlservercentral.com/Forums/Topic564926-21-1.aspx


Travelling light with SSRS



The BI team has been reviewing what tools are available for SQL Server Reporting Services. We want to leave a light footprint on our customer sites - and don't assume that anyone maintaining the system has the same choice of tools.

While we're developing, we can use more or less what tools we wish. However, we don't want to deploy any extra dependencies, only SSRS native code and .rdl code. And we don't want a steep learning curve for new people - in the early days they should be focussing on the customer and their business.

Preliminary inspection
Our first task is to see what assets we have already:-

What have we got in the source code? And how up to date is it? Open a command window,
DIR /S *.*>sourceYYYYMMDD.log will do it.

What has already been deployed? Open the report manager in a browser (as administrator)
e.g. localhost:80/Reports, switch to detailed view and take a look.

And will it build? The bones of an automated script are something like this...
set today=%DATE:~6,4%%DATE:~3,2%%DATE:~0,2%

"%ProgramFiles(x86)%\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" /Rebuild Release /out "..\logs\Build%today%.txt" "..\AdventureWorks 2008R2.sln"

Are the source code and deployed code synchronised? If the dates don't look right, or maybe even if they do, there's an article and download at http://www.codeproject.com/Articles/339744/SSRS-Downloading-RDL-Files.

Report Structure

http://ssrsreportdocumentation.codeplex.com/ contains a report which looks at the SSRS database. Basically, the tool says  'tell me as much as you can about reports which have already been deployed'.
I've made one tweak to it, to make a cut down copy which doesn't give the execution log details, and which we can safely run on legacy projects before we have a feel for data volume.

I've also put together an early version of a static analysis tool for the source code. It uses XSLT on the Report Definition Language (.rdl) files to identify reports and subreports. Yes, you could achieve the same thing using manual search tools such as those within BIDS or Notepad++, but it involves a lot of pointing and clicking. That will be a separate post on this blog. (Villagers - any ideas for a good name for this tool?)

Pre-release Checks

What can we do to avoid shooting ourselves in the foot?

The simplest thing is to preview from BIDS, and eyeball the output to compare with your change request.

Does it build without errors? The overlapping items warning can indicate problems in one export format without showing in another export format.

Can you export to CSV or Excel without errors?  DIV/0 is a common problem that doesn't always show on the preview window.

A fairly simple and lightweight approach is to preview from BIDS, save as xml, and use a diff tool to compare with the current live copy. (That assumes their data sources point at the same database. If not, you can preserve the live copy with a name like .old.rdl till the deployment is approved).

The disadvantage is that you don't get the layout changes. So if you've caught a text box with your mouse and moved it a fraction, that can be hard to spot. You would need to compare the (mime format) mhtml from the live and pre-release versions, which needs an extra tool to decode it. I believe that http://rsunit.codeplex.com/ has tackled the extraction from mhtml to html.

Which brings us to unit test tools. http://rsunit.codeplex.com/ includes integration tests as well as unit tests.  However I haven't used it in production yet. And in the past I have found (with other reporting tools) that it takes a long time if a selection of reports is included in an integration testing run.

One thing I would like to do soon is to 'round up the usual suspects'. The rogues' gallery in Europe includes: forgetting to change the default language 'en-US'; and leaving the default page size of 'letter'. The static analysis tool for .rdl files can easily pull the relevant tag (or the absence of such a tag) out into a single xml file. This gives us the possibility of an XSLT transform to present the results in Unit Test format (e.g. NUnit or Microsoft Test Result format).
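
A hedged sketch of that per-file extraction is below, reusing the reportdefinition namespace from the stylesheet in the previous post. The Language, PageWidth and PageHeight element paths, and the UsualSuspects wrapper element, are my assumptions and would need checking against the real .rdl schema before relying on it.

<?xml version='1.0'?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:a="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition"
exclude-result-prefixes="a">
<xsl:output method="xml" encoding="utf-8" />
<xsl:template match="/">
<UsualSuspects>
<!-- report language; 'None' flags a missing element -->
<Language>
<xsl:choose>
<xsl:when test="//a:Language"><xsl:value-of select="//a:Language"/></xsl:when>
<xsl:otherwise>None</xsl:otherwise>
</xsl:choose>
</Language>
<!-- physical page size; 8.5in x 11in would suggest the 'letter' default -->
<PageWidth><xsl:value-of select="//a:PageWidth"/></PageWidth>
<PageHeight><xsl:value-of select="//a:PageHeight"/></PageHeight>
</UsualSuspects>
</xsl:template>
</xsl:stylesheet>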

http://vmdp.blogspot.co.uk/2010/10/testing-steps-for-ssrs.html gives a usual suspect list from the perspective of a web designer.

Deploying SSRS.

The simplest, but most dangerous approach is point and click from BIDS, after setting project deployment properties.

A command line approach allows pre-deployment check, but only if the quality checks have been completed on the whole project, not individual files.

Report Manager allows individual reports to be uploaded. Some of our customers use this approach successfully. You just have to watch that a report is re-uploaded whenever its subreports have changed.

The endgame, as it is for all project types, is Continuous Integration. There are a couple of open source tools using the Soap API, to be considered once the pre-release quality checks have been implemented.

Monitoring after deployment
Usefully, SSRS includes various views of the Execution Log, which can be interrogated with SQL.  Care is needed on busy systems to look only at recent events. An example blog post is
http://blog.thoughtstuff.co.uk/2011/01/sql-server-reporting-services-recently-executed-modified-reports/

There is a good-looking tool called SCRUBS which takes an ETL (Extract/Transform/Load) approach to provide a long term view of the SSRS data. However, it's an installed component, and couldn't be left on a customer site without regular support.


Tuesday, 30 December 2014

New Year's Resolutions

Thinking about acceptance tests, especially for SSRS reports

  • Can we smoke-test out any reports that won't run at all? http://rsunit.codeplex.com/ looks promising (integration test).
  • How would we compare an existing report with an about-to-be-released report? Problems are PDF page sizes and Excel sheets with #Error cells.
  • Can we post-template or VBScript our macros? Some customers don't accept .NET DLLs because they don't have permanent developers around who can maintain them. Could be possible with an svn checkout and VBScript.
  • We should show the svn revision number tag; see http://www.atollic.com/index.php/kb/8-kb_category_version_control/62-kb_svn_revision_number

Saturday, 25 February 2012

Separation of concerns - Refactoring XSLT for business logic

Doing some work for a client in the manufacturing sector, we've been thinking a lot about legacy code, and how to make the business rules it contains accessible. One example of this is a set of legacy XSLT reports, which operate on XML produced by a package solution. The original reports use mostly simple features of the XSLT language, and keep very closely to the data structures in the XML.

However, as the business requirements have become more complex, the reports are increasingly cross-referencing different parts of the xml tree using foreign keys;
removing duplicates where the XML doesn't model, for instance, a many-many relationship;
adding unique ids for a particular stage in the processing;
and so forth.

So there's a technical issue, how best to resolve a structure clash*, and a management issue, how to keep the code as self-documenting as possible, and so avoid future maintenance headaches.

The approach is to transform the data, selecting the key features and applying the business rules. It becomes a good place to comment on why certain things have been done slightly differently in different situations. Then the presentation logic can be applied separately.

Background.

An order uses several machines,
each machine's work may be split into several runs,
and each run may perform either or both of two tasks, version A or B.
Planned start dates and times (plan_start) for runs make up a small part of the actual table output.
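
To set the scene, the XML feeding the report has roughly this shape. This is a hand-drawn illustration only: the element names are the anonymised ones used in the code later in this post, and the values are made up.

<job>
  <run>
    <machine_id>12</machine_id>
    <machine_name>L1</machine_name>
    <plan_start>2012-02-20 06:00</plan_start>
    <task>
      <task_version>A</task_version>
      <task_data_total>1500</task_data_total>
    </task>
    <task>
      <task_version>B</task_version>
      <task_data_total>1500</task_data_total>
    </task>
  </run>
</job>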

Original code **


Well, it's not bad in terms of readability***. The calculations are done once at the top of the file, then applied in lots of places (as well as in the fragment shown). Then there's some matching to get the results for each run. The comment tells us why the logic applies, rather than just repeating the code. However, there's a sense that the code is getting bulky and is becoming hard to add new rules to. If someone wanted to skim the data structures and the business rules, it wouldn't be trivial.

        <!-- Generate unique ids consistently, and make sure we match the expected sort order everywhere -->
        <xsl:variable name="runs" select="some rules"/>
        <xsl:variable name="generatedidentifiersforruns">
          <xsl:call-template name="generateidentifiersforruns">
            <xsl:with-param name="runs" select="$runs"/> 
          </xsl:call-template>
        </xsl:variable >
  ....
        <table width="745" border="1">
            <tr>
              <td style="font-size:18px; background-color:#CCC;" colspan="13">
                Overview
              </td>
            </tr>
            <tbody>
     <!--Convert template result to nodeset -->
              <xsl:variable name="ids" select="msxsl:node-set($generatedidentifiersforruns)[1]/keyAndID"/>
              <xsl:for-each select="$ids">
                <xsl:sort select="number(generatedID)"/>
                 <xsl:variable name="generatedID" select="number(generatedID)"/>
                <xsl:variable name="machineId" select="number(machine_id)"/>
                <xsl:variable name="planDate" select="string(plan_start)"/>
                <xsl:for-each select="$runs[number(machine_id)=$machineId and string(plan_start)=$planDate]">
                  <!-- expect only one match -->
                  <tr >
                    <td style="background-color:#CCC;" colspan="7">
                      Machine:
                      <xsl:value-of select="machine_name"></xsl:value-of>
                      Start: <xsl:value-of select="plan_start"></xsl:value-of>
                    </td>
                  </tr>
                  <tr>
                    <td>ID</td>
                    <td>Sub Job</td>
                    <td>Sub Job Desc</td>
     <td>Version Id</td>
                    <td>Version Desc</td>
                    ...
                    <td>Job reference</td> 
                    <td>.... total</td>
                  </tr>
                  <xsl:for-each select="task">
                    <xsl:sort select="task_version"/>
                    <tr>
                      <td>
                        <xsl:if test="position()=1">
                          <xsl:value-of select="$generatedID"/>
                        </xsl:if>
                      </td>
                      <td>
                        <xsl:call-template name="formatsubjob">
                          <xsl:with-param name="...." select="."/>
                          <xsl:with-param name="jobnumber" select="...."/>
                        </xsl:call-template>
                      </td>
      .....
                      <td style=" font-size:20px" align="right">
                         <xsl:value-of select="$job_no"/> - <xsl:value-of select="$generatedID"/>
                      </td>
                      <td>
         <!-- Maintenance note: If both A and B are carried out, task_data_total refers to both of them -->
      <xsl:for-each select="task">
           <xsl:if test="position()=1">
                              <xsl:value-of select="task_data_total"/> 
                           </xsl:if >
      </xsl:for-each>   
                      </td>
                   </xsl:for-each>
                  <!-- end task -->
                </xsl:for-each>
                <!-- end run -->
              </xsl:for-each>
              <!-- end runs -->
            </tbody>
          </table>
 

Why change things?

I certainly don't want to change anything that ain't broke. However, there were new requirements concerning whether task A, task B or both were needed (and other conditions to be taken into account).
It was important to keep our heads clear! Well, I was constantly scrolling up and down the xml file and the xslt file while debugging, so I decided to write some extension**** methods to get only the data we were concerned with, and inspect it.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"
>
  <xsl:output method="xml" indent="yes"/>
  <xsl:include href="****.xsl"/>

  <xsl:template name="machine-extensions">
    <xsl:variable name="runs-to-show" select="...."/>
    <xsl:variable name="generatedidentifiersforruns">
      <xsl:call-template name="generateidentifiersforpressmows">
        <xsl:with-param name="runs" select="$runs-to-show"/>
      </xsl:call-template>
    </xsl:variable >
    <xsl:variable name="key-and-id-for-runs" select="msxsl:node-set($generatedidentifiersforruns)[1]/keyAndID"/>
 <!-- aha - we could remove msxsl:node-set which causes a browser dependency, 'cos the generate identifier code is only in a template for sharing reasons -->
 
 <xsl:element name="runs">
        <xsl:for-each  select="$runs-to-show">
          <xsl:variable name="machine_id" select="machine_id"/>
          <xsl:variable name="plan_start" select="plan_start"/>
          <xsl:element name="run">
            <xsl:element name="machine_id" >
              <xsl:value-of select ="$machine_id"/>
            </xsl:element>
            <xsl:element name="machine_name" >
              <xsl:variable name="machine_name">
                <xsl:call-template name="machinenamedisplay">
                  <xsl:with-param name="machine_name" select="machine_name"/>
                </xsl:call-template>
              </xsl:variable>
              <xsl:value-of select ="string($machine_name)"/>
            </xsl:element>
            <xsl:element name="plan_start" >
              <xsl:value-of select ="plan_start"/>
            </xsl:element>
            <xsl:element name="generatedID">
              <xsl:value-of select="string($key-and-id-for-runs[number(press_id)=number($machine_id)
                            and string(plan_start)=string($plan_start)]/generatedID)"/>
            </xsl:element>
            <xsl:element name ="task">
              ...
            </xsl:element>
            <xsl:element name ="machine">
    ...
            </xsl:element>
            <xsl:element name="is_total_for_all_tasks_in_the_run">
              <!-- Explanation of why task A and task B are done together, and when task_data_total refers to both together -->
              <xsl:value-of select="count(task)>1
                  and .... "/>
            </xsl:element>
   ... and lookups on raw materials, anything else the machine and pre-machine people need to know.
          </xsl:element>
        </xsl:for-each>
      </xsl:element>
  </xsl:template>
</xsl:stylesheet>

which produces output like this

<?xml version="1.0" encoding="utf-8"?>
<job>
 <runs>
    <run>
      <machine_id>12</machine_id>
      <machine_name>L1 (something)</machine_name>
      <plan_start></plan_start>
      <generatedID>2</generatedID>
      <task />
      <task />
<is_total_for_all_tasks_in_the_run>false</is_total_for_all_tasks_in_the_run>
    </run>
    <run>
      <machine_id>12</machine_id>
      <machine_name>L1 (something)</machine_name>
      <plan_start></plan_start>
      <generatedID>2</generatedID>
      <task />
      <task />
<is_total_for_all_tasks_in_the_run>false</is_total_for_all_tasks_in_the_run>
    </run>
    <run>
      <machine_id>13</machine_id>
      <machine_name>L2 (something else)</machine_name>
      <plan_start></plan_start>
      <generatedID>1</generatedID>
      <task />
      <task />
<is_total_for_all_tasks_in_the_run>false</is_total_for_all_tasks_in_the_run>
    </run>
  </runs>
</job>

And what have we gained?

  • From the client's point of view, we've added a new feature at reasonable cost.
  • Design best practice: separation of concerns - the business logic is in the extensions file. There are more work-practice related rules than I've shown here.
  • Intermediate results: if any subtotals are done on the report, it's far easier to see where any problems lie.
  • Clarity of the report xslt: we can feed the result data into the report, which becomes much easier to maintain.
  • Consistency: we can feed the result data into several report xslts, rather than repeating code. It's a permissive change, because it could now feed an xslt for a touchpad, say.
  • Clearer code: it becomes feasible to clean up the business logic.


* The term comes from Jackson Structured Programming - a design paradigm which encouraged you to follow the structure of your incoming, often magnetic tape friendly, data, and/or the structure of your outgoing data or report. A structure clash happens when the two don't coincide.
** Some of the xml element names have been changed to preserve client confidentiality. Apologies if there are mis-edits.
*** Yes, I know this is a warts and all example. Just bear with it.
**** Not quite extension methods as used in C#, but same principle. Add to what we already have, since we can't get into the package to change it.

Tuesday, 31 January 2012

Fibonacci knitting

Other than building software, I enjoy making knitwear. Thanks to
 I've been playing with stripes this month.

The idea of using the Fibonacci numbers to make either a full scarf or to finish a neckline or cuffs isn't a new one, because the ratios of consecutive terms converge to the Golden Ratio (which is visually A Good Thing). And it's always good to use some kind of pattern in the 'dribble zones' of baby wear.

The idea is to come up with a balanced stripe pattern by adding consecutive numbers:
so, if we start with the seeds 0, 1 we get the sequence 0, 1, 0+1=1, 1+1=2, 1+2=3, 2+3=5, 3+5=8, 13, 21 and so on.

The fun starts here. Any set chosen from a smallish range of these numbers makes a good pattern.
So, choosing 2, 3, 5, 8, with b (blue yarn) and G (green yarn), our garment can be striped bbGGGbbbbbGGGGGGGG, which is boring but balanced.
Even better, if we want a 5-row rib edging (to stop the garment rolling),
we mix up the sequence numbers to 5,2,8,3,2,5 to stripe bbbbb (in rib), then change to stocking stitch for GGbbbbbbbbGGGbbGGGGG. The result is almost always easy on the eye.

Even better, since the Fibonacci numbers contain 1 and 2, we can always match our total row count from some part of the sequence. So to knit the center of the fish-and-chip baby jumper, we need 30 rows of stocking stitch. My cream and coral version used (5 cream + 1 coral) 6 times. The shoulder section was repeats of (3 cream + 2 coral).

I still haven't played with the Lucas numbers which also converge to the Golden Ratio. The first two Lucas numbers are 2 and 1 instead of 0 and 1, and so we get  2,1,3,4,7,11,18,29 and so on.

Friday, 30 December 2011

Christmas Lists - legacy ASP.Net, JQuery and database constraints.

Letters to Santa

Mr and Mrs Geek have trained their kids, nephews and nieces and grandchildren to write to Santa using a home-grown system, developed over a number of years. It saves the letter using the person's name. If people want to edit the letter (if they think of new presents they would like) they can see a list and choose one. Of course, all people on the list think they have been good!

The database has been set up with a unique constraint on the person's name. It would be terrible if Santa mixed up Mr Geek's brother's son John G and Mrs Geek's sister's boy John J. Also, the kids want completely different items each year - no babyish stuff for them - so overwrite last year's letter completely. The adults on the other hand are delighted to add each year to their collection of jokey T-shirts.

Well, what with all these new great-nephews and great-nieces added to the tribe, the system has to be a bit better at telling which are new letters and which are edits of old ones. People get stuck when they have written a new letter, try to save it, and find there is already one with the same name.
In Christmases past, the parents have tried supplying one save button for a new letter and one for an old - it was tough on the old grey memory cells. They've let the server come back with an exception - that was fine for a genuine conflict, but the interface was clunky when the youngster really did want to replace the old list.

So this year they're adding a new feature 'The letter is already written. Are you sure you want to replace it?' They didn't want to ask their family to click a replace-please button, then click again to actually do the replace.

While they would love to rewrite the whole thing in the shiny new MVC technology, there are puddings to stir, cakes to ice, mince pies to cut out...

Playing with the new feature

They toyed with the idea of using exactly the same hidden-field mechanism as ASP.Net, something like the following snippet
  function __doPostBack(eventTarget, eventArgument) {
    var theForm = document.forms[0];
    if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
      theForm.__EVENTTARGET.value = eventTarget;
      theForm.__EVENTARGUMENT.value = eventArgument;
      theForm.submit();
    }
  }
and set the button OnClientClick="__doPostBack('resubmitme', 'update');"

Santa was watching. Stealing ASP.Net's fields is not being good. So the two senior Geeks went to play with JQuery instead.

Their system still lets the database throw an exception if someone tries to insert a duplicate. Doing an attempted retrieve before an attempted insert just didn't feel right, and might put more strain on an  ancient network and database server.

So it now makes some changes to the form, asks the question, and resubmits itself if that's confirmed.

Cutting the aspx

They made sure the JQuery library was included. Then added:

a javascript function at the top of the file (easier to debug that way) that asks the question. If the user is happy, it changes a hidden field, to be read by the server when the javascript resubmits.
  function ConfirmUpdate(duplicateNameMessage, btnAddClientID) {
    var element = $(document).find('.message-error')[0];
    if ($.trim(element.innerText)== duplicateNameMessage) {
      if (confirm(duplicateNameMessage +'. Do you want to replace it?')){
        var txtShouldUpdate = $(document).find('#txtShouldUpdate')[0];
        txtShouldUpdate.value = 'true';
        var btnAdd = $(document).find('#' + btnAddClientID)[0];
        btnAdd.value = 'Replacing...';
        btnAdd.click();
      }
    }
  }
a text box for a name
              <asp:TextBox runat="server" ID="txtNewName" Visible="true"> </asp:TextBox>
a button
              <asp:Button runat="server" ID="btnAdd" CssClass="seasonalbutton"
                      OnClick="btnAdd_Click" 
                         ToolTip="some Help" />
a hidden field
             <input id="txtShouldUpdate" name="txtShouldUpdate" type="hidden"  /> 
and an error block
           <div class="message-error">
                <asp:Literal runat="server" ID="ctlAddError" EnableViewState="true"></asp:Literal>
            </div>

The code behind

btnAdd_Click() inspects the hidden field to see if it should update or insert. TryInsert() usually 'just works'. However, on a duplicate name exception it calls AddJQueryScriptToConfirmReplace(), which wires up the document.ready event and triggers our client-side dialogue.

    protected void btnAdd_Click(object sender, EventArgs e)
        {
          string name = txtNewName.Text.Trim();
          if (string.IsNullOrEmpty((string)Request.Params.Get("txtShouldUpdate")))
          {
            TryInsert(name);
          }
          else
          {
            TryUpdate(name);
          }
        }

    private void TryInsert(string name) //TryUpdate() is left as a puzzle for your stocking
    {
      try
      {
        var saved = InsertWithName(name);
        if (saved != null)
        {
          RedirectToSavedLists();
        }
      }
      catch (DuplicateNameException dupExc)
      {
        var message = dupExc.Message;
        ctlAddError.Text = Server.HtmlEncode(message);
        AddJQueryScriptToConfirmReplace(DuplicateNameMessage(name));
      }
      catch (Exception exc)
     {
        var message = exc.Message; // any other exception
        ctlAddError.Text = Server.HtmlEncode(message);
        }
    }

    private void AddJQueryScriptToConfirmReplace(string duplicateNameMessage)
        {
          BindJQuery();
          StringBuilder jsStart = new StringBuilder();
          jsStart.AppendLine("<script type=\"text/javascript\">");
          jsStart.AppendLine("$(document).ready(function() {");
          jsStart.AppendLine("ConfirmUpdate('" + duplicateNameMessage + "','" + btnAdd.ClientID  + "');");
          jsStart.AppendLine("});");
          jsStart.AppendLine("</script>");
          Page.ClientScript.RegisterClientScriptBlock(GetType(),
               "yourscriptblock",
                jsStart.ToString());
        }
 private string DuplicateNameMessage(string name)
    {
      return "The name " + name + " already exists";
 // Why here? Because it's the server's responsibility to work out what the exception message should be.
    }

Notes: Edited to remove first moves towards discussing database structures and what if there are child elements, say a PresentRequest table. That needs a blog entry of its own, another time.