Backspace Away!
It’s no wonder the urge to comment out code that isn’t migrating well, along with anything outside of the initial automation results, is so strong. It’s like trying to close a suitcase that’s clearly over-stuffed. You want to close it up with everything you originally envisioned packing into it, but some stuff just doesn’t seem to fit. In cases like these, it can be difficult to decide what to keep and what to toss out. And like most people, I prefer traveling light and simple. If only life’s problems were a backspace away from being resolved. Don’t like your electric bill? Delete it. Neighbor’s dog barks too loud at night... well, you get the point.
So the options I’ve found myself debating lately are whether to comment out these lines of code and possibly throw a custom exception, or to create a precompiler warning. While both postpone the moment at which the underlying issues are addressed, they present the developer with completely different opportunities. And only one of them gets you closer to liftoff quicker.
Throwing an error at runtime is highly effective at ensuring that possible migration issues are not ignored or missed entirely, especially after your QA team begins testing. The test cases might not produce the conditions necessary for those lines to run and thus give the application a reason to throw your exception. And while run-time exceptions are no longer compile-time errors, they could make their way into the final product, surprising users with potentially bizarre messages and buggy functionality, like the one below.
if (confirmPaymentDialogResult == DialogResult.OK)
{
    //CheckoutCart object is still using the Interop version.
    //PaymentsFullfillmentObject.ChargeAmout(AxCheckoutCartObject.TotalAmount());
    PaymentsFullfillmentObject.ChargeAndShip();
    throw new Exception("Don't forget to fix this issue!");
}
However, when making modifications to the code like this, it’s important to look at the context of the user experience and at what exactly you’re commenting out. If you’re going to comment out the use of the AxCheckoutCartObject above, then it’s probably best to also comment out the ChargeAndShip() method.
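To make that concrete, here’s a sketch of the same block with both calls stubbed out (these are the same hypothetical objects from the example above, not anything from a real migration):

if (confirmPaymentDialogResult == DialogResult.OK)
{
    //CheckoutCart object is still using the Interop version.
    //PaymentsFullfillmentObject.ChargeAmout(AxCheckoutCartObject.TotalAmount());
    //Charging and shipping without a verified total makes no sense, so this comes out too:
    //PaymentsFullfillmentObject.ChargeAndShip();
    throw new Exception("Don't forget to fix this issue!");
}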
One of the problems with this, though, is that it’s possible to forget about the issue until someone from your QA team throws a hissy fit and demands to know how the module was ever tested. But wait... aren’t developers notoriously awesome testers!?
Since I’ve been working with Artinsoft, I’ve noticed that one of the many neat benefits of using the VB Companion is the migration code it places in the source to assist with the migration.
//UPGRADE_TODO: (1065) Error handling statement (On Error Resume Next) could not be converted.
//More Information: http://www.vbtonet.com/ewis/ewi1065.aspx
Artinsoft.VB6.Utils.NotUpgradedHelper.NotifyNotUpgradedElement("On Error Resume Next");
The example above is taken from an instance where VB Companion actually created and inserted this NotifyNotUpgradedElement() method. What this means is that while you are debugging, you’ll be notified about the conversion issue whenever the application hits that line of code. This offers considerable benefits, especially since it allows you to ignore all of those NotifyNotUpgradedElement() calls for now and get back to them later.
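I haven’t peeked at Artinsoft’s actual implementation, so this is only a sketch of the behavior I’m describing - the class name and the IgnoreAll switch are my own inventions for illustration:

using System.Diagnostics;

public static class NotUpgradedHelperSketch
{
    // Flip this during a debugging session to skip all notifications at once.
    public static bool IgnoreAll { get; set; }

    public static void NotifyNotUpgradedElement(string element)
    {
        if (IgnoreAll || !Debugger.IsAttached)
            return;                               // silent outside the debugger

        Debug.WriteLine("Not upgraded: " + element);
        Debugger.Break();                         // pause here so the issue gets seen
    }
}

The nice property is that the notification only fires when the line actually executes under a debugger, so release builds and QA runs sail right past it.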
So I feel like I’ve talked myself out of throwing the exception and into replacing it with the NotifyNotUpgradedElement() approach. But I think we can still do more to ensure that issues aren’t missed entirely or caught post-development by your QA team.
I’d like to be notified of these issues regardless of whether the lines are ever executed, and what I’ve recently discovered handles exactly that: the warning directive.
#warning Artinsoft: Connection mode should be set to 'ModeRead'.
This will show up in Visual Studio’s Error List window, on the Warnings tab that sits between the Errors and Messages tabs. We just need to remember to check these warnings.
So now, if we incorporate the warning directive and the NotifyNotUpgradedElement() call, then the “throw Exception” really isn’t needed any longer, and we’ll increase the likelihood of remembering to go back and address issues that were tabled for whatever reason.
So the previous code example might end up looking like this:
if (confirmPaymentDialogResult == DialogResult.OK)
{
    //CheckoutCart object is still using the Interop version.
    //PaymentsFullfillmentObject.ChargeAmout(AxCheckoutCartObject.TotalAmount());
    Artinsoft.VB6.Utils.NotUpgradedHelper
        .NotifyNotUpgradedElement("AxCheckoutCartObject.TotalAmount()");
#warning Artinsoft: PaymentsFullfillmentObject.ChargeAndShip() needs to be replaced with .Net equivalent.
    PaymentsFullfillmentObject.ChargeAndShip();
}
This approach provides three benefits. First, it allows the developer to comment out lines of code that are throwing exceptions during compilation or debugging without worrying about forgetting them, which in turn gets the migrated application up and running quicker. Second, during each debugging session, it lets the developer either see each issue as the application encounters it or skip over all of them completely. And finally, at compile time, it produces a list of these issues as warnings in the Error List.
This approach is proving to be quite useful.
I don’t have many months’ worth of experience with software migrations as a dedicated profession, per se, but I have been doing them on and off for well over a decade in some technical capacity, mostly as a software developer or architect. The thing that has struck me as perhaps the most perplexing aspect of migration, at least at the micro level, is coming across 3rd-party components that have a solid .Net version available but whose interface has changed in a destructive fashion since their latest COM version.
Typically, I think of interface changes as being either destructive or non-destructive. Newly ‘dot-net-ized’ components that merely have namespace changes, or changes to the names of their methods or properties, I consider ‘linear’ - essentially one-to-one. At the same time, a component may offer additional method signatures, new methods, or entirely new functionality; I consider this ‘expansive’ or ‘progressive’. The newly released versions may indicate that certain members are being deprecated but that support still exists for the time being. In all cases, these are non-destructive.
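Here’s a contrived illustration of what I mean (Acme, AcmeLib, Widget, and SetColor are all made-up names): a linear namespace rename, an expansive overload, and a deprecated alias that keeps old callers compiling:

namespace Acme.Net                  // 'linear': new namespace, same shape
{
    public class Widget
    {
        // The original COM-era signature survives one-to-one.
        public void SetColor(int rgb) { /* ... */ }

        // 'Expansive': a new, additive overload; old callers are untouched.
        public void SetColor(byte red, byte green, byte blue)
        {
            SetColor((red << 16) | (green << 8) | blue);
        }
    }
}

namespace AcmeLib                   // the old home, kept as a deprecated alias
{
    [System.Obsolete("Use Acme.Net.Widget instead.")]
    public class Widget : Acme.Net.Widget { }
}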
On the other hand, destructive mutations are ones where support for particular functionality is dropped entirely, either through non-backwards-compatible method signatures or through the removal of methods or properties altogether. Perhaps a term like ‘abandonment’ or ‘orphaned’ works well here. Worse yet, destructive mutations can encompass changes where re-architecting the application becomes necessary because objects are shifted around in the model’s hierarchy or object responsibility. I refer to this type of destructive change as ‘mutinous’ or ‘violent’, and some of the ADODB examples below are destructive in exactly this way.
I often think back to my initiation into object model programming, and I have never been able to forget the expression ‘commitment’. The idea was that the public interface of a component was a sort of commitment to the applications that consume it, and especially a commitment to the developers that build it into their applications. The publishers of these components were never, ever supposed to falter on this commitment. At least, that’s what I thought; it couldn’t be further from reality.
In the past few weeks, I’ve seen lots of destructive component evolutions. The examples I’ve come across include Crystal Reports and Microsoft’s ADODB. And when I say a lot, I am referring to more than one or two instances where there’s a lot of pondering and little evidence as to what to do with certain VB6 calls when porting over to C#. What surprises me the most, though, is how lacking migration assistance is from the COM to the .Net versions of these components! I can appreciate the need to deprecate features and methods from time to time as a way to ‘evolve’ the application, but why not at least provide some backward compatibility or hints as to what to do with the deprecated members?
While it's wonderful that developers have considerably more control and ease in handling binary data in .Net, it is frustrating when a method with one argument, like AppendChunk() from ADODB, mutates into a half-dozen lines. It just doesn’t feel like evolution.
VB6, simple:
recordsetObject1("binaryData").AppendChunk recordsetObject2("binaryData")
C#, not so simple:
byte[] chunk1 = (byte[])recordsetObject1["binaryData"];
byte[] chunk2 = (byte[])recordsetObject2["binaryData"];
byte[] chunkFinal = new byte[chunk1.Length + chunk2.Length];
System.Buffer.BlockCopy(chunk1, 0, chunkFinal, 0, chunk1.Length);
System.Buffer.BlockCopy(chunk2, 0, chunkFinal, chunk1.Length, chunk2.Length);
recordsetObject1["binaryData"] = chunkFinal;
Certainly, I could have reduced the migration solution by two lines - but yuck!
byte[] chunkFinal = new byte[((byte[])recordsetObject1["binaryData"]).Length +
    ((byte[])recordsetObject2["binaryData"]).Length];
System.Buffer.BlockCopy((byte[])recordsetObject1["binaryData"], 0, chunkFinal, 0,
    ((byte[])recordsetObject1["binaryData"]).Length);
System.Buffer.BlockCopy((byte[])recordsetObject2["binaryData"], 0, chunkFinal,
    ((byte[])recordsetObject1["binaryData"]).Length, ((byte[])recordsetObject2["binaryData"]).Length);
recordsetObject1["binaryData"] = chunkFinal;
As it turns out, Microsoft did a twisted thing with this AppendChunk() function. According to the MSDN website, the first AppendChunk call on a Field object writes data to the field, overwriting any existing data, while subsequent AppendChunk calls add to the existing data. Ugh!
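Given those semantics, one way I might keep the call sites short is to fold the copy logic into a little helper of my own. To be clear, this is my sketch, not anything shipped with the interop assemblies:

using System;

public static class FieldChunkHelper
{
    // Mimics the documented AppendChunk behavior: the first write replaces
    // the field's contents, and every later call appends to them.
    public static byte[] AppendChunk(byte[] existing, byte[] chunk)
    {
        if (existing == null || existing.Length == 0)
            return (byte[])chunk.Clone();          // first call: overwrite

        byte[] combined = new byte[existing.Length + chunk.Length];
        Buffer.BlockCopy(existing, 0, combined, 0, existing.Length);
        Buffer.BlockCopy(chunk, 0, combined, existing.Length, chunk.Length);
        return combined;                           // later calls: append
    }
}

With that in place, the migrated line collapses back to something closer to the VB6 original:

recordsetObject1["binaryData"] = FieldChunkHelper.AppendChunk(
    (byte[])recordsetObject1["binaryData"], (byte[])recordsetObject2["binaryData"]);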
Other ADODB examples include the Recordset.CursorType property and the Recordset.Open() method - where they dropped the signature taking a CursorTypeEnum - along with the Connection.IsolationLevel and Connection.Mode properties.
Changes like these last three are particularly manageable because the developer can still perform the same tasks just fine. While I don’t like to have to type more than necessary (when it comes to coding, I’m quite lazy!), this type of destructive migration challenge I can live with. It’s solvable and we get past it quickly.
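To show what ‘manageable’ looks like in practice, here’s roughly how the dropped IsolationLevel property maps onto ADO.NET’s transaction API (the connection string and the surrounding scaffolding are placeholders):

using System.Data;
using System.Data.SqlClient;

class IsolationLevelExample
{
    static void Main()
    {
        using (var connection = new SqlConnection("your-connection-string-here"))
        {
            connection.Open();

            // VB6/ADODB: connectionObject.IsolationLevel = adXactReadCommitted
            // ADO.NET: the isolation level is passed to BeginTransaction instead.
            using (var tx = connection.BeginTransaction(IsolationLevel.ReadCommitted))
            {
                // ... commands enlisted in 'tx' run here ...
                tx.Commit();
            }
        }
    }
}

More typing, same task - annoying, but solvable.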
What drives me nuts are the violent changes to 3rd-party components! Crystal Reports, for instance, has had quite an impressive following since the early 90’s. With the marriage of IT and enterprise reporting, the demand grew and grew. Let’s see: Crystal Services is acquired by Holistic Systems, which is bought by Seagate Software, which is quickly rebranded as Crystal Decisions, which is bought by Business Objects and finally folded into SAP, alas, in 2007. Is it any wonder that the component(s) have undergone mutinous changes?
The latest COM version of Crystal Decisions supports a number of Report Viewer events, such as DownloadFinished(). Where did it go in the latest .Net version? Who knows! The funny thing here is that the .Net Crystal Decisions report viewer still uses the COM-based AxWebBrowser component. We’ll just let that go; to be fair to SAP, even Microsoft has been slow to replace that component. Anyhow, I don’t blame SAP for tossing a bunch of that component out. It’s quite bloated. In fact, it’s kind of funny: in my experience, predominantly in the Web arena, I’ve never had a need for Crystal Reports. After having worked with it - well, let’s just say I’m surprised it doesn’t offer a different licensing model.
That’s a topic for another time.