This isn’t to say that the program is working. I’m hitting a lot of bugs and fixing them. I’m also adding extra logic to head off situations that would raise exceptions, usually just to silence the error messages so that real errors stand out. This seems to speed things up a little, too, at the cost of code size and complexity.
Fortunately, because the overall structure of the code emerged from the real world, where data (and its organization) is not always perfect, it wasn’t too hard to add these special cases to the code. There’s nothing wrong with special cases. It’s easier to add code that catches 90% of the special cases, which turn out not to be so special, than to get people to conform 100% to rules about how to use their apps.
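As a concrete illustration of the kind of special case I mean, here’s a minimal sketch, not the app’s actual code; `ProcessRecord` and the "Shape" column are hypothetical names for one unit of batch work:

```vb
Imports System.Data

Module SpecialCases
    Public Sub ProcessRecord(ByVal rec As DataRow)
        ' Special case: some records legitimately arrive with no geometry.
        ' Skipping them here silences a noisy exception downstream, so any
        ' exception that still fires points at a real bug.
        If rec.IsNull("Shape") Then Return
        ' ... real processing would go here ...
    End Sub
End Module
```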
One source of problems was assuming that the script that formed the heart of the app was bug-free, when it wasn’t. Treating it as a black box wasn’t wise, because I never seriously studied what it did, or the underlying data it worked with. This led to a serious gap in knowledge that’s biting me now.
There are also some serious architectural problems. The big ones so far:
- Must have a supervisory thread that can take requests to start and stop the batch-processor thread. That’s because the best way to “fix” a COM server that stops responding appears to be killing the thread making the request. (I suspect doing this could leak objects.) A sketch of this follows the list.
- Should run threads using the VB thread control.
- Must research the Polygon classes and figure out how to move them. This might require studying how the app uses its data sources.
- Should restructure the code so it can be edited apart from the ESRI APIs; see the interface sketch after the list. The file-batcher parts should be a separate tool entirely.
- Must fix up the binding between the DataGrid and the underlying database file; a binding sketch also follows.
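For the supervisory thread, here’s a minimal sketch of what I have in mind, assuming plain .NET threading; all the names are mine, not the app’s:

```vb
Imports System.Threading

Public Class BatchSupervisor
    Private _worker As Thread

    ' Start the batch-processor thread on request.
    Public Sub StartBatch()
        _worker = New Thread(AddressOf RunBatch)
        _worker.IsBackground = True
        _worker.Start()
    End Sub

    ' Stop it on request. Aborting the thread is the blunt "fix" for a
    ' COM call that never returns; any COM objects the thread holds may
    ' never be released, which is the leak I suspect above.
    Public Sub StopBatch()
        If Not _worker Is Nothing AndAlso _worker.IsAlive Then
            _worker.Abort()
            _worker.Join()
        End If
    End Sub

    Private Sub RunBatch()
        ' The batch-processing loop that makes the COM requests goes here.
    End Sub
End Class
```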
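For decoupling from the ESRI APIs, the usual trick is to put an interface between the batch logic and the GIS calls. A sketch, with names invented for illustration:

```vb
' Hypothetical seam: the batch code talks to this interface, and only
' one small adapter class needs the ESRI references. Everything else
' can then be edited (and tested) without the ESRI APIs loaded.
Public Interface IGeoStore
    Sub MoveFeature(ByVal featureId As Integer, ByVal targetSource As String)
End Interface
```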
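On the DataGrid binding, the standard Windows Forms pattern is to fill a DataSet through a data adapter and bind the grid to it. A sketch, assuming an Access-style .mdb file; the connection string and "Jobs" table are placeholders:

```vb
Imports System.Data
Imports System.Data.OleDb
Imports System.Windows.Forms

Module GridBinding
    Public Sub BindGrid(ByVal grid As DataGrid)
        ' Placeholder connection string; the real file path would differ.
        Dim conn As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=batch.mdb"
        Dim adapter As New OleDbDataAdapter("SELECT * FROM Jobs", conn)
        Dim ds As New DataSet()
        adapter.Fill(ds, "Jobs")
        ' Bind the grid to the filled table.
        grid.SetDataBinding(ds, "Jobs")
    End Sub
End Module
```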
Rash Decisions
I made the rash decision to move over the most-used files first. This went completely against common sense, which dictates that you move the least-used data first. There was, however, one compelling reason to do it: debugging help.
You can’t really get people to help you debug utility software, especially not data-processing code, unless you pay them, and there is no budget for that. (In fact, I don’t really have a budget for writing this tool, and will lose money writing it. It’s more of a learning exercise.) By moving their most important data first, I force people to find the problems and help me fix them. The batch jobs will run longer, sooner, saving everyone pain.
Had I moved their data over last, I would have hit the exact same problems, but much later in the debugging cycle. It would have been harder to incorporate the fixes, because earlier bugs would already have shaped the direction of the code.
Obviously, the best thing would have been to work on a copy of the most-used data, but that would have required setting up two servers here. As it is, I have only one cheap Windows XP machine and no extra XP licenses.
Read More
Here’s an article that touches on some aspects of getting code to run and how that fits into architecture and design:
Ten Software Development Myths Which Are Still Around