Introducing the DTU Calculator Command Line Utility

Several weeks ago I added a command line utility (CLU) as an alternative to the PowerShell script for collecting SQL Server performance metrics prior to using the Azure SQL Database DTU Calculator. I apologize for not adding a blog post sooner. If you are unfamiliar with the Azure SQL Database DTU Calculator, I encourage you to start with the first post in the series.

I added the CLU for a couple of reasons: first, the sheer volume of questions I received about the PowerShell script not working for one reason or another; second, a suspicion that many of the people using the DTU Calculator would be more comfortable with a tool written in C#.

When I created the PowerShell script, I tried to keep it as simple as possible, figuring any engineer could open it up and modify it as needed. In hindsight, I should have updated the script to include more validation and error handling, but I felt that adding too much to the script would do more harm than good. I may test that theory by adding validation and error handling in the future – feel free to help me out, update it yourself, and send me the script – but for now I’ve created an alternative C#-based utility.

The C# code is hosted on GitHub so you can review or modify it as needed. If you review the code, you’ll see it’s pretty simple, so I won’t spend much time discussing it. However, there are a couple of points worth mentioning:

  • The utility collects CPU and IOPS exactly the same way as the PowerShell script. However, the “Database – Log Bytes Flushed” counter is optional. The utility uses that counter if it exists and proceeds without it if it doesn’t.*
  • The app.config contains all the settings for the utility. You can change the counter category, instance, and/or name; the collection interval and duration; as well as the CSV file location. If you need more than that, you can download the source code and modify it as needed.

*Note:  Since the PowerShell script and command line utility measure CPU and IOPS at the server level and Log Bytes Flushed is measured at the database level, I decided to make the database counter optional.
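
For anyone curious what that collection logic looks like in code, below is a minimal C# sketch of the general approach using System.Diagnostics.PerformanceCounter. This is not the actual utility source (that lives on GitHub) – the counter categories, instance names, output file name, and CSV headers here are assumptions for illustration, and a named SQL Server instance would use a category like MSSQL$InstanceName:Databases instead of SQLServer:Databases.

using System;
using System.Diagnostics;
using System.Globalization;
using System.IO;
using System.Threading;

class CounterCollectorSketch
{
    static void Main()
    {
        // Server-level counters, the same ones the PowerShell script collects.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var reads = new PerformanceCounter("LogicalDisk", "Disk Reads/sec", "_Total");
        var writes = new PerformanceCounter("LogicalDisk", "Disk Writes/sec", "_Total");

        // Optional database-level counter: use it if it exists, otherwise log zeros.
        PerformanceCounter logFlush = null;
        if (PerformanceCounterCategory.Exists("SQLServer:Databases") &&
            PerformanceCounterCategory.CounterExists("Log Bytes Flushed/sec", "SQLServer:Databases"))
        {
            logFlush = new PerformanceCounter("SQLServer:Databases", "Log Bytes Flushed/sec", "_Total");
        }

        using (var csv = new StreamWriter("sql-perf-counters.csv"))
        {
            csv.WriteLine("% Processor Time,Disk Reads/sec,Disk Writes/sec,Log Bytes Flushed/sec");

            // One sample per second for an hour. The first NextValue() of a rate
            // counter returns 0, so the first row is effectively a priming sample.
            for (int i = 0; i < 3600; i++)
            {
                csv.WriteLine(string.Join(",",
                    cpu.NextValue().ToString(CultureInfo.InvariantCulture),
                    reads.NextValue().ToString(CultureInfo.InvariantCulture),
                    writes.NextValue().ToString(CultureInfo.InvariantCulture),
                    (logFlush == null ? 0f : logFlush.NextValue()).ToString(CultureInfo.InvariantCulture)));
                Thread.Sleep(1000);
            }
        }
    }
}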

As always, if you have questions and/or feedback, don’t hesitate to reach out.

Migrating from AWS RDS SQL Server to Azure SQL Database Using the Azure SQL Database DTU Calculator

Last week a colleague of mine in the UK asked me for some help migrating from AWS’ RDS SQL Server to Azure SQL Database.  The details of why you’d want to do this are out of the scope of this post but needless to say, Azure SQL Database is better.  This post assumes you’re familiar with the DTU Calculator and how it works.  If not, click here to learn more.

AWS RDS provides managed services for deploying SQL Server as well as several other major DBMS platforms.  The service is single-tenant in that the RDS server is just a dedicated VM that AWS has wrapped to make it look like a platform as a service (PaaS) offering.  Unfortunately, AWS doesn’t provide access to the underlying VM, so capturing the performance metrics needed by the DTU Calculator is a little bit tricky and the PowerShell script I developed to help doesn’t work.  My first thought was to capture performance metrics using SQL Server’s Dynamic Management Views (DMVs) and provide a command-line wrapper to generate a CSV file the DTU Calculator understands.  This is where I ran into my first problem.  Does anyone understand how to use those things?  Seriously?!?  I spent hours searching the internets trying to find something, anything, that would help.  Finally, I came across the book by Louis Davidson and Tim Ford, Performance Tuning with SQL Server Dynamic Management Views.  After reading through a few chapters related to the performance counters I needed, I determined I would need the following SQL statements:

-- CPU: ratio of "CPU usage %" to its base counter for each resource pool.
-- Cast to float so the division isn't truncated to an integer.
SELECT
       A.counter_name,
       A.cntr_value AS cpu_usage,
       B.cntr_value AS cpu_usage_base,
       Ratio = CASE WHEN B.cntr_value = 0 THEN 0
       ELSE CAST(A.cntr_value AS FLOAT) / B.cntr_value END
FROM sys.dm_os_performance_counters A
CROSS JOIN
       (SELECT cntr_value
       FROM sys.dm_os_performance_counters
       WHERE OBJECT_NAME LIKE 'SQLServer:Resource Pool Stats%'
             AND COUNTER_NAME LIKE 'CPU usage % base%') B
WHERE A.COUNTER_NAME LIKE 'CPU Usage %'
AND A.OBJECT_NAME LIKE 'SQLServer:Resource Pool Stats%'
AND A.COUNTER_NAME NOT LIKE 'CPU usage % base%'
AND A.COUNTER_NAME NOT LIKE 'CPU usage target%'

-- Reads and writes: point-in-time (raw) values.
SELECT *, 'PERF_COUNTER_LARGE_RAWCOUNT' AS counter_type FROM sys.dm_os_performance_counters
WHERE COUNTER_NAME LIKE 'Disk Read Bytes/sec%'

SELECT *, 'PERF_COUNTER_LARGE_RAWCOUNT' AS counter_type FROM sys.dm_os_performance_counters
WHERE COUNTER_NAME LIKE 'Disk Write Bytes/sec%'

-- Log bytes flushed: cumulative (bulk) counter for the target database.
SELECT *, 'PERF_COUNTER_BULK_COUNT' AS counter_type FROM sys.dm_os_performance_counters
WHERE COUNTER_NAME LIKE 'Log Bytes Flushed/sec%'
AND INSTANCE_NAME LIKE '<Database>'

Reading the queries, it’s easy to see what’s going on.  First, I capture CPU as a percentage.  Second, I get reads and writes as raw values.  Finally, I get the log counter, which happens to be a cumulative (bulk) count, so I’ll need to store its value and compare it at each interval to get the real per-second value.  Running the queries against a local database gave me the results I wanted, but I was skeptical AWS RDS would do the same, so I fired up an RDS SQL Server instance with a database and ran the queries in SSMS.  Just as I suspected, no luck.  It seems AWS RDS will give me CPU information but I’m S.O.L. for the other counters.  The queries run fine but the last three don’t return any values.
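
To make that “store and compare” step concrete, the per-interval math I have in mind looks something like the hypothetical helper below (the method is mine, not part of any existing tool):

// Convert two samples of a cumulative (bulk) counter into a per-second rate
// for the interval between them.
static double PerSecondRate(long previousValue, long currentValue, TimeSpan interval)
{
    long delta = currentValue - previousValue;

    // A negative delta means the counter was reset (e.g., a service restart); treat it as zero.
    return delta < 0 ? 0 : delta / interval.TotalSeconds;
}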

I spent several more hours investigating alternatives and settled on AWS CloudWatch.  AWS does provide RDS performance metrics via CloudWatch, but they are aggregated metrics.  The smallest granularity is 60 seconds, so I’ll need to take the average of each counter for each interval.  CloudWatch provides multiple counters for RDS, but the following map most closely to what the DTU Calculator needs:  CPUUtilization, ReadThroughput, WriteThroughput.  Unfortunately, there doesn’t appear to be a way to capture the equivalent of Log Bytes Flushed/sec, so I’ll need to ignore that counter and hope the others get me close enough in my calculation.

I settled on using the GetMetricStatistics action.  I won’t go into detail regarding that action and I’m definitely not an AWS CloudWatch expert so you’ll want to do some research of your own to validate my findings.  I’ve done some testing using the CloudWatch CLI and I believe something similar to the following commands will work:

mon-get-stats CPUUtilization --start-time 2015-11-15T22:00:00.000Z --end-time 2015-11-15T23:00:00.000Z --period 60 --statistics "Average" --namespace "AWS/RDS" --dimensions "DBInstanceIdentifier=[dbinstanceidentifier]"

mon-get-stats ReadThroughput --start-time 2015-11-15T22:00:00.000Z --end-time 2015-11-15T23:00:00.000Z --period 60 --statistics "Average" --namespace "AWS/RDS" --dimensions "DBInstanceIdentifier=[dbinstanceidentifier]"

mon-get-stats WriteThroughput --start-time 2015-11-15T22:00:00.000Z --end-time 2015-11-15T23:00:00.000Z --period 60 --statistics "Average" --namespace "AWS/RDS" --dimensions "DBInstanceIdentifier=[dbinstanceidentifier]"

Take the output from those three commands and copy the results into a CSV file.  Use the following headers:  ‘% Processor Time’, ‘Disk Reads/sec’, ‘Disk Writes/sec’, ‘Log Bytes Flushed/sec’.  Since you don’t have metrics for the ‘Log Bytes Flushed/sec’ column, fill that column with zeros.  Once you’ve built the CSV file, you’re all set to upload the results to the DTU Calculator.  If your RDS SQL Server is running more than a single database, select the ‘Elastic Databases’ option to calculate the equivalent pool size.
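
If you’d rather script that step than copy and paste by hand, something like the following sketch would do it, assuming you’ve already pulled the per-minute averages out of the CloudWatch output into three equally sized lists.  The class, method, and header names are mine – double-check the headers against whatever the calculator currently expects.

using System.Collections.Generic;
using System.Globalization;
using System.IO;

static class DtuCsvBuilder
{
    // Build the DTU Calculator CSV from three equally sized lists of per-minute averages.
    public static void Write(string path, IList<double> cpuPercent,
                             IList<double> readThroughput, IList<double> writeThroughput)
    {
        using (var csv = new StreamWriter(path))
        {
            csv.WriteLine("% Processor Time,Disk Reads/sec,Disk Writes/sec,Log Bytes Flushed/sec");

            for (int i = 0; i < cpuPercent.Count; i++)
            {
                // There is no CloudWatch equivalent for Log Bytes Flushed/sec, so that column is zero.
                csv.WriteLine(string.Join(",",
                    cpuPercent[i].ToString(CultureInfo.InvariantCulture),
                    readThroughput[i].ToString(CultureInfo.InvariantCulture),
                    writeThroughput[i].ToString(CultureInfo.InvariantCulture),
                    "0"));
            }
        }
    }
}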

As I mentioned above, calculating the Azure SQL Database equivalent for an AWS RDS SQL Server is a bit tricky and by no means perfect.  However, my hope is this post gives you some indication of how to gather performance metrics from RDS using AWS CloudWatch and hopefully provides a baseline calculation for migrating to Azure SQL Database.  As always, I’d love to hear your feedback.

Analyzing the Azure SQL Database DTU Calculator Results

The following is the third in a series of posts about the Azure SQL Database DTU Calculator.  The first post in this series described the need for the calculator and the process of creating it.  The second post described using the calculator.  This post will focus on analyzing the results.

Single Azure SQL Database

Assuming you’ve captured your SQL Server’s performance and uploaded it to the calculator, you should now see the results page.  The example below is for a single database, so your results may vary slightly if you’re using the Elastic Databases option, but in general, the results page is divided into three basic sections:  ‘Service Tier and Performance Level’, ‘DTUs Over Time’, and ‘View More Details’.  Note that the ‘View More Details’ section is collapsed by default, so you’ll need to expand it in order to analyze the CPU, IOPS, and Log recommendations individually.

Service Tier/Performance Level

The ‘Service Tier/Performance Level’ section provides an overall recommendation for migrating to Azure SQL Database.  This recommendation is calculated by taking the maximum number of DTUs across CPU, IOPS, and Log at each interval and using those per-interval maximums as the basis for the recommendation.  Since the calculator recommends the Service Tier/Performance Level which maximizes performance while minimizing cost, it’s important to note the recommendation may not cover 100% of your workload.  There may be some portion of your workload that falls into a higher Service Tier and Performance Level, so you’ll want to validate your database’s performance post-migration.
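
To put the same idea in code, my rough mental model of the calculation looks like the sketch below.  To be clear, this is not the calculator’s actual implementation – the DTU limits and the 95% coverage threshold are made-up placeholders purely to illustrate the max-per-interval logic.

using System;
using System.Linq;

static class TierRecommendationSketch
{
    // The DTU requirement at each interval is the max of the three individual metrics.
    public static double[] CombinedDtus(double[] cpuDtus, double[] iopsDtus, double[] logDtus)
    {
        return cpuDtus
            .Select((cpu, i) => Math.Max(cpu, Math.Max(iopsDtus[i], logDtus[i])))
            .ToArray();
    }

    // Pick the smallest tier whose DTU limit covers "enough" of the intervals.
    // The limits and the 95% threshold are placeholders, not the real service limits.
    public static string Recommend(double[] combinedDtus)
    {
        var tierLimits = new[]
        {
            Tuple.Create("Basic", 5.0),
            Tuple.Create("Standard - S0", 10.0),
            Tuple.Create("Standard - S1", 20.0),
            Tuple.Create("Standard - S2", 50.0),
            Tuple.Create("Premium - P1", 100.0)
        };

        foreach (var tier in tierLimits)
        {
            double coverage = combinedDtus.Count(d => d <= tier.Item2) / (double)combinedDtus.Length;
            if (coverage >= 0.95) return tier.Item1;
        }

        return "Premium - P1 or higher";
    }
}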

Note:  You can gain additional insights by hovering your mouse over each section of the chart to see what percentage of your workload is covered by a specific Service Tier/Performance Level, as well as how much of the entire workload that Service Tier/Performance Level covers.

Service Tier/Performance Level

DTUs Over Time

The ‘DTUs Over Time’ section is similar to the ‘Service Tier/Performance Level’ recommendation in that it uses the maximum DTU across CPU, IOPS, and Log.  The difference is that the ‘DTUs Over Time’ chart displays the DTU value at each interval.  This type of visualization is useful because it shows how your DTU requirements change over time.  In general, you would hope your performance is somewhat consistent, but this chart lets you identify performance spikes that may cause problems with your Azure SQL Database deployment and do some additional investigation to resolve those issues before migrating to Azure SQL Database.

DTUs Over Time

View More Details

The ‘View More Details’ section is broken down into four subsections.  The first three show individual recommendations for CPU, IOPS, and Log.  Having individual recommendations allows you to quickly identify how your workload is bound.  For example, the following shows recommendations for CPU and IOPS.  Notice the recommendation for CPU is Standard – S1 and the recommendation for IOPS is Standard – S0.  This means your workload is CPU bound, since it requires relatively more CPU than disk.

Service Tier/Performance Level for CPU 

Service Tier/Performance Level for IOPS

The last chart shows how much of your workload’s CPU, IOPS, and Log is covered by a Service Tier/Performance Level.  By utilizing this chart you can visualize how your workload changes across Service Tiers and Performance Levels.  For example, the following chart shows that 30% of CPU, 78% of IOPS, and 100% of Log are immediately covered by the Basic Service Tier and it isn’t until Standard – S1 that all three metrics have over 90% coverage.

Service Tier/Performance Level for CPU, IOPS, and Log

Elastic Databases

The Elastic Databases recommendation is almost identical to the recommendation for a single database with two major differences.  The first difference is that elastic database pools don’t have a concept of Performance Levels so the recommendation only provides a Service Tier (Basic, Standard, or Premium).  The second difference is the Elastic Databases recommendation shows the total eDTUs required for the pool.  The total eDTUs required for the pool is important because of the way Elastic Databases are billed.  You pay for the total eDTUs required for all databases in the pool.  See the SQL Database Pricing page for more information.

Introducing the Azure SQL Database DTU Calculator: Part 2

The following is the second in a series of posts about the Azure SQL Database DTU Calculator.  In this post, I’ll cover some recent updates I’ve made as well as an overview of using the calculator for either a single database or a pool of databases.

Running a single SQL Server database in Azure has been supported for a few years, but the concept of a Database Throughput Unit (DTU) is relatively new.  The Azure SQL team first introduced the concept after some customers complained about performance when using Azure SQL Web and Business editions while other customers raved about performance.  You can guess which neighbors were the noisy ones.  In order to give all Azure customers a more predictable level of performance, the Azure SQL team introduced a new set of Service Tiers and Performance Levels.  I won’t go into details regarding all the features of Service Tiers and Performance Levels, but it is worth mentioning that each tier and level has an associated cost.  If you want more performance, you pay more.  If you want less…  you get the point.

Introducing tiers and levels to provide more reliable performance was a great first step and works really well for a single database which requires consistent performance.  However, many customers have more than a single database and in many cases, the performance of those databases isn’t consistent.  There are periods of high intensity followed by periods of low intensity.  Initially, developers would need to create a separate Azure SQL Database for each and try to manage the peaks and lulls of each database separately.  However, this type of database management was complicated and not cost effective when working with several databases or more.

At Build 2015, Microsoft announced the preview of Azure SQL Elastic Databases to support unpredictable database demands and reduce the complexity of managing multiple databases.  The on-demand performance scaling of individual databases within an elastic database pool is possible because each database within a pool draws eDTUs from a shared set associated with the pool, allowing databases that require more performance to consume more eDTUs while databases requiring less consume fewer.

Regardless of whether you’re running a single database or a pool of databases, developers migrating to Azure still need to know how their databases will perform after they migrate.  The good news is that Azure SQL Database allows you to scale your database up or down depending on your needs.  The bad news is that all of that happens post-migration, so it’s difficult to get a sense of cost without knowing the Service Tier and Performance Level required for your database workload.  That’s where the DTU Calculator can help.

Single Database or Multiple Databases

The instructions for using the DTU calculator are straightforward, so I’ll try to focus on some of the things I don’t mention on the main page of the calculator.  First, you’ll need to decide if you’re going to run the calculator for a single database or multiple databases.  Running the calculator for a single database means your SQL Server only has one database and that database is what you plan to migrate to Azure.  If your SQL Server has more than one database but you still only plan to migrate one of those databases, you’ll either need to modify the PowerShell script I provide on the calculator’s main page, or understand that the calculator will likely overestimate your workload because the script is configured to capture totals for CPU, IOPS, and Log at the server level – not per database.

If your SQL Server has multiple databases and you plan to migrate all of those databases to Azure, then you can run the PowerShell script as provided.  The calculator also supports multiple databases on multiple servers.  However, you will need to run the script on each server and then upload all the results to the calculator.

Resource Utilization

As part of the calculator, I’ve provided a PowerShell script to help capture the correct performance counters for the correct period of time.  By default, the script captures performance for the server at one-second intervals for one hour.  However, you do not need to use the script as provided, or at all.  Feel free to modify the script as needed or roll your own.  The only requirement for uploading results to the calculator is a CSV file with the following headers:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

Note:  Make sure you run the PowerShell script as administrator.  If not, you may see errors about missing counters.

Upload Results

Once you have the performance CSV file(s), you are ready to calculate the results.  First, enter the number of cores for your SQL Server.  The calculator uses the number of cores to normalize the results against the servers running in Azure.  After specifying the number of cores, click the browse button to upload your CSV file.  If you are running the calculator for multiple databases, use the add and remove icons to add or remove files from the table.  Once you have entered the number of cores for each server and uploaded all of your files, click the calculate button.

Review Analysis

After clicking the calculate button, you’ll see several charts which provide an analysis of your database resource consumption.  The charts depict the percentage of time (based on your measurements) that your database’s resource consumption fits within the limits of each Service Tier and Performance Level.  You can review CPU, IOPS, and Log individually as well as collectively to better understand which metrics affect the performance of your database.

Conclusion

In this post, I’ve given a little more background regarding database throughput units, Azure SQL Database, and Elastic Databases.  I’ve also  provided some instructions for using the DTU Calculator for either a single database or a pool of databases.  In my next post, I’ll cover analyzing the results.

Introducing the Azure SQL Database DTU Calculator

The following is the first in a series of posts about the Azure SQL Database DTU Calculator.  This post describes the need for the calculator and the process of creating it.

As part of my job at Microsoft, I get the opportunity to work closely with companies across the US.  My particular area of focus is Azure so I spend most of my days talking about a company’s business model, architecture, IT costs, and other technical issues to help them identify areas where Azure can help as well as provide them with a roadmap for migration.  During one such meeting about 6 months ago, I was talking to a company about Azure SQL Database as a potential option for migrating their RackSpace SQL Server to Azure.  If you’re familiar with Azure SQL Database, then you’re probably familiar with the term database throughput units (DTUs).  If not, a DTU is something the Azure SQL team created to describe the relative capacity of a performance level across a blended measure of CPU, memory, reads, and writes.  The article, Azure SQL Database Service Tiers and Performance Levels, contains a more thorough explanation.  Anyway, during that discussion one of the company’s architects asked, “How do I know how many DTUs my database needs?”

Seems like a reasonable question and at the time I didn’t have a great answer.  I’d read the guidance, Azure SQL Database introduces new near real-time performance metrics, so I was familiar with the process of starting with feature capability, picking the lowest performance level, migrating the database, and then adjusting based on utilization.  However, I could see the concern this caused for the engineers.  Migrating a database from location A to location B isn’t a trivial task and doing so with any level of ambiguity isn’t something any engineer would recommend.  It was at that time I reached out to a friend on the Azure SQL Database team for help.  The conversation went something like this (names have been changed to protect the innocent):

Justin: 

Hey John.  Quick question… Do you have anything to do with or know anything about SQL Database DTUs?

John: 

You bet.

Justin: 

Cool.  In your opinion, would it be possible to write a DTU Calculator which a developer could run against their on-premise database to see which tier of SQL Database they would map to?

John: 

We’ve discussed doing such a thing, and I think it’s a good idea, just haven’t had the time to do it yet.  Feel free to send me email, and I’ll get the right conversation going.

A calculator is born…

One of my main objectives when developing the calculator was to keep it simple – something engineers at any level of expertise could easily use.  With that in mind, I spent the last several months working with the Azure SQL team to understand all the complexities of converting SQL Server performance to database throughput units and developing a website that wraps those complexities in a simple, intuitive interface.  The site analyzes your production database workload and accurately calculates the service tier and performance level needed in Azure SQL Database – all before migration.  I’m happy to announce the Azure SQL Database DTU Calculator does just that!  The calculator is available to use, but I’d love to hear your feedback and results.  As I mentioned earlier, this post is just an introduction.  Look for my next post to go into more detail about using the calculator and interpreting the results.

Windows 8 FlipView with WebView

Over the last several months, I’ve been working with a client on a Windows 8 app.  The app is basically a book, but rather than embedding the pages of the book in the app, the client wants to load the pages dynamically when the app starts and cache them for a period of time.  Since each page of the book has a corresponding HTML page, the client decided to load the pages into a WebView control.  A good choice, since the WebView control not only renders the HTML content easily but also gives the developer a lot of flexibility to inject scripts and manipulate the DOM – which he did, but that’s out of scope for this post.

Rather than loading all the pages into a single WebView control, the client wanted the user to be able to swipe left and right to change pages.  The FlipView control was the first choice since it supports the swiping behavior, but neither of us were sure we could wrap a WebView control with a FlipView control.  After searching the inter-webs for a while, I found a lot of contradictory information.  Some forums say it’s possible but don’t provide good examples, while others claim it’s not possible.  I’m here to confirm that it is possible and here’s how:

Step 1:

Open Visual Studio and create a Windows Store app using the language of your choice.  For my project, I chose XAML/C# and the blank app template.

I’m pretty sure Html5/JS will work but I haven’t confirmed.  Sorry.

Step 2:

For the sake of simplicity, I’m not going to load the HTML pages dynamically from a web backend in this post.  I’ll just add the pages to a folder in the app and load them from there.  I added a folder called Content.  Under that folder I added five pages: a title page and four content pages.

Step 3:

Open the MainPage.xaml file and add the following XAML inside your Grid.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <FlipView x:Name="HtmlFlipView" 
                  Width="850" 
                  HorizontalAlignment="Center" Background="White">
        </FlipView>
</Grid>

Step 4:

Open the MainPage.xaml.cs code-behind file and add the following code:

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (HtmlFlipView == null || HtmlFlipView.Items == null) return;

    HtmlFlipView.Items.Clear();
    for (int i = 0; i < 5; i++)
    {
        var htmlWebView = new WebView();
        string htmlPage = await WrapHtmlPageAsync(string.Format("mobydickpage{0}.html", i));
        htmlWebView.NavigateToString(htmlPage);

        HtmlFlipView.Items.Add(htmlWebView);
    }
}

private async Task<string> ReadLocalFileAsync(string path)
{
    StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri(path));
    return await FileIO.ReadTextAsync(file);
}

private async Task<string> WrapHtmlPageAsync(string htmlPage)
{
    const string customStyle = "body {font-size: 18px;}";

    // Internal content and scripts
    string content = await ReadLocalFileAsync(string.Format("ms-appx:///content/{0}", htmlPage));

    return string.Format("<!DOCTYPE html><html><head><meta charset='UTF-8'><title>Moby Dick</title>" +
        "<style type='text/css'>{0}</style>" +
        "</head>" +
        "<body>" +
        "{1}" +
        "</body>" +
        "</html>", customStyle, content);
}

That’s it!  When the app loads the MainPage view, the OnNavigatedTo event runs.  The code iterates over the pages, creating a new WebView control for each one and loading the corresponding HTML page into it.

There are probably a million better solutions out there, but this one demonstrates the point.  It’s entirely possible to host WebView controls inside a FlipView control to achieve a swipe effect between pages.

Extended Validation (EV) and the Windows Store

Update – 11/21/2016

There’s currently an issue with the Windows Dev Center registration page which causes a properly signed XML file to fail on upload. I am working with the Microsoft team responsible for Windows Dev Center to get the issue resolved and will provide an update as soon as it’s fixed.

For now, the MSFT team has asked that you work directly with MSFT Dev Center Support.  You can contact them at Microsoft Support by selecting either the “Chat now” or “Submit an incident” button.

The good, the bad, and the ugly.

Even if you’re an experienced Windows 8 developer with a plethora of apps in the store, chances are you haven’t heard of the extended validation option for your account.  And rightly so.  The use case is very specific.  It’s only available if you’re registered as a Company as opposed to an individual developer and only applicable to apps that need access to the documentsLibrary capability.  In fact, at the time of this post, I think I was one of the first to complete the process end-to-end.  It wasn’t a pleasant experience.  Definitely not something I would want when evangelizing Windows Store awesomeness.  I’m hoping this post can help others get through the good, the bad, and the ugly (not necessarily in that order) of the extended validation process.

Extended Validation:  What and why?

According to the Windows Store,

…apps that declare the documentsLibrary capability can only be submitted from developer accounts which can demonstrate they have acquired an Extended Validation (EV) code signing certificate from a certificate authority (CA).

I suppose an EV process makes sense when you read what the documentsLibrary capability does.  Effectively, it provides the app programmatic access to the user’s Documents, filtered by file types declared in the manifest.  The keyword is “programmatic,” which is different from the standard UI file picker in that no user interface is involved.  This capability sits somewhat outside the usual WinRT app sandbox, which only lets the user decide which files the app can interact with.  With the documentsLibrary capability, the app can interact with any file in the Documents folder as long as the file type is declared in the app’s manifest.
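
To make “programmatic” concrete, here’s a minimal sketch of what that access looks like from C#, assuming the documentsLibrary capability and a .txt file type association are declared in Package.appxmanifest (the .txt filter is just an example):

using System;
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Search;

public static class DocumentsAccessSketch
{
    // Enumerate matching files in the user's Documents library without showing a picker.
    public static async Task ListDocumentsAsync()
    {
        var options = new QueryOptions(CommonFileQuery.OrderByName, new[] { ".txt" });
        StorageFileQueryResult query = KnownFolders.DocumentsLibrary.CreateFileQueryWithOptions(options);

        foreach (StorageFile file in await query.GetFilesAsync())
        {
            System.Diagnostics.Debug.WriteLine(file.Name);
        }
    }
}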

Extended Validation:  How?

Most apps don’t need programmatic access to the user’s Documents folder, but let’s suppose for sake of this blog your app does.  There are several steps to follow:

First, download the signable .xml file.  The link to download the file can be found by logging into your Windows Store account and clicking the “Account” link.  You’ll find the EV section just after the Publisher ID.

Second, purchase a code-signing certificate from Symantec or Digicert.  Both certificate authorities provide EV code-signing certificates which will work.  My personal experience is with Digicert, but I expect Symantec would work the same.  Effectively, you purchase the certificate, fill out a form, and submit the form to your respective certificate authority.  The CA will then send a thumb drive to your company’s address with some instructions on how to use the drive to install the certificate.  Digicert has an image on their site that describes this process.

So far the process seems pretty easy, right?  Download the .xml file, purchase the certificate, insert the thumb drive, and run the .exe on the thumb drive to sign the file.  If only it was that easy.  If you look on Symantec’s EV page under “Key Features”, take notice of the supported file types,

Digitally sign 32-bit and 64-bit user-mode (.exe, .cab, .dll, .ocx, .msi, .xpi, and .xap files) and kernel-mode software

Okay but where’s .xml?

The Bad

I don’t want to go into a ton of details but traditional code-signing engines don’t sign .xml file types.  They are typically used to sign installers or libraries like the file types above.  In fact, if you tried to sign the .xml file with a typical code-signing tool, you’d get an error…  Not cool.

I spent several hours going back and forth with the Windows Store team and Digicert explaining the problem.  Digicert’s support was completely baffled as to why I would want to sign an .xml file so they weren’t much help.  Microsoft support was better but it required my traversing the org tree to find the team responsible for the EV section of the Store.  Not as easy as you’d think with 90,000+ employees.

The Ugly

Working with Microsoft’s EV team was awesome.  I was able to chat directly with a developer – finally someone with answers – and he was able to write a console program which signs .xml files.  I ran the program, configured the parameters and what do you know?  It worked!  Hurray!  I can now publish my app to the store.

NOTE:  The code-signing certificate produced by the CA must be generated using SHA-2, not SHA-1, and the certificate must exist in the user’s certificate store before running the .exe.
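
The actual tool is linked in the next section, but for anyone curious about the general technique, here is a minimal sketch of signing an XML file with an enveloped XML digital signature using an RSA/SHA-256 certificate pulled from the current user’s store.  The file name and thumbprint are placeholders, and I can’t promise this produces exactly the signature format Dev Center expects, so treat it as an illustration only.

using System;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Xml;

class SignXmlSketch
{
    static void Main()
    {
        // Load the XML file to sign (file name is a placeholder).
        var doc = new XmlDocument { PreserveWhitespace = true };
        doc.Load("account-verification.xml");

        // Find the EV code-signing certificate in the current user's store (thumbprint is a placeholder).
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var matches = store.Certificates.Find(X509FindType.FindByThumbprint, "YOUR-CERT-THUMBPRINT", false);
        store.Close();
        if (matches.Count == 0)
            throw new InvalidOperationException("EV certificate not found in the user's store.");
        var cert = matches[0];

        // Sign the whole document with an enveloped signature using RSA/SHA-256.
        var signedXml = new SignedXml(doc) { SigningKey = cert.GetRSAPrivateKey() };
        signedXml.SignedInfo.SignatureMethod = SignedXml.XmlDsigRSASHA256Url;

        var reference = new Reference("") { DigestMethod = SignedXml.XmlDsigSHA256Url };
        reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
        signedXml.AddReference(reference);

        // Embed the certificate so the signature can be validated without a separate key exchange.
        var keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert));
        signedXml.KeyInfo = keyInfo;

        signedXml.ComputeSignature();
        doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
        doc.Save("account-verification.signed.xml");
    }
}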

The Good…?

Even though developing a custom C# project to sign .xml files isn’t an ideal solution, it’s better than nothing, so I’m calling this one “good.”  I’m also calling it good because I’ve received permission from Microsoft and the team who developed the project to host it on CodePlex.  You’re welcome.  Feel free to download the entire project or just the .exe to sign .xml files as part of your extended validation process.  The tool is available at https://codesignforxml.codeplex.com/.

Hope it helps,

Justin