Hello! I know, I know, I’ve been missing in action from the blog for a while. I had a lot of travel and other activities in June and July that kept me busy. Also, I’m still trying to settle into a routine for all the things I want to do following my MathWorks retirement.
But I’m back, and I will resume posting at regular intervals. I’m also working on things for the File Exchange, and I’ll give you updates here when those are ready.
While I was traveling in July, I saw the “ITK’s architecture” post on Cris Luengo’s image analysis blog. I have followed Cris’s work for a long time, and I read this post with great interest.
In it, Cris discusses a number of design issues related to creating a C++ library for image analysis. He compares design choices made in ITK to choices he has made in his DIPlib. I encountered similar design issues and choices in my MathWorks career (even though MATLAB design can be quite different from C++ library design), so I was fascinated to read Cris’s take. In this post, I will briefly describe some of Cris’s comments and add my own. I have oversimplified some of his ideas and omitted others, so if you are interested in these topics, I encourage you to read his post directly.
He first discusses how an “everything is a class” design philosophy affects the complexity of the resulting client interfaces. Now, I am no expert in object-oriented design, and no one should take my advice about it. But I will still go ahead and say that my experience has taught me to have the most confidence in classes that evolve via refactoring, for example to eliminate duplicated code. Such classes are usually at the implementation level and don’t affect the interface. Interfaces that come from everything-is-a-class thinking, or from some kind of “isa” thinking about real-world domain quantities, tend to be overly complicated and inflexible. As I said, though, don’t take my advice.
Cris then explores the idea of algorithm “pipelining” in some detail, and he identifies some problems with trying to represent typical image processing workflows in a framework with automated pipelining.
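As a point of contrast, here is roughly what a typical image processing workflow looks like in MATLAB, where the steps are just sequential function calls on ordinary arrays rather than nodes in a managed pipeline graph. This is only a minimal sketch of mine; it assumes the Image Processing Toolbox is installed and uses the coins.png demo image that ships with it.

```matlab
% A typical MATLAB workflow: each step is an ordinary function call on an
% ordinary array, with no pipeline object to construct or update.
A     = imread("coins.png");      % demo image included with the toolbox
B     = imgaussfilt(A, 2);        % Gaussian smoothing, sigma = 2
bw    = imbinarize(B);            % global threshold
bw    = imfill(bw, "holes");      % fill interior holes
stats = regionprops(bw, "Area");  % measure the segmented regions
```

Branching, inspecting an intermediate result, or re-running just one step is trivial in this style, which is part of why an automated pipelining framework can feel like an awkward fit for exploratory image analysis work.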
The next topic is types and how much or how little type flexibility the interface offers. My own comments here are not closely related to C++ library design, which is what Cris is concerned with. In my career, I often found myself reminding toolbox designers of the merits of an “everything is just a matrix” approach in MATLAB designs. (That did evolve to “everything is just a numeric array,” a slightly more complicated scenario.) People would grumble about having just a plain uint16 array that can’t tell you its color space or white point, or even the intended data range. Those are real pains, and I have felt them many times. But it is important in MATLAB design practice to remember the many advantages of “it’s just a matrix.”
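To make that trade-off concrete, here is a minimal sketch of the “it’s just a matrix” idea. The values and variable names are mine, chosen only for illustration, and the last line assumes the Image Processing Toolbox. The array knows nothing about its color space, white point, or intended range, but every generic array operation, and every function that accepts numeric arrays, applies to it directly.

```matlab
% "It's just a matrix": a plain uint16 array with no attached metadata.
A = uint16(randi(4095, 480, 640));   % e.g., 12-bit data stored in uint16

% The array cannot tell you its color space or intended data range...
class(A)          % 'uint16'
max(A(:))         % the actual maximum, not the intended range

% ...but everything that works on numeric arrays works on it immediately.
m = mean(A(:));                      % generic array functions
B = A(1:2:end, 1:2:end);             % ordinary indexing for a quick downsample
C = imresize(A, 0.5);                % toolbox functions accept it as-is
```

The missing metadata is a real cost, as I said above, but the payoff is that the whole language and every array-based function are available without adapters or conversions.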
Cris includes some interesting tangential observations at the end. First, there is this quote from The Architecture of Open Source Applications volume 2:
> maintainers […] account for more than 75% of the cost of software development over the lifetime of a project.
Based on my 30 years at MathWorks, I totally buy this statement, at least as a rough approximation. It reminds me of something I used to tell developers: Every feature added to the product represents a permanent increase in software development costs to the company. I think the majority of features in the Image Processing Toolbox older than about 10 years have been overhauled at least once. Changes are needed because of user feedback, changing user needs, changes in MATLAB or other related products, changes in the size and complexity of data sets of interest, better algorithms, or changes in computation technologies such as cores, memory architectures, compilers, instruction sets, GPUs, graphics pipelines, etc.
Cris finishes with a discussion of the challenges of reproducing published results, and of the research publication incentives that perversely discourage publishing straightforward, effective methods in favor of incremental changes to complex methods that have only marginal real-life benefit.
This hits home for me as well, in a few ways:
- Most published papers do not contain a sufficiently detailed and accurate algorithm description to be confident about reproduction.
- The very high volume of papers describing marginal results makes it challenging to choose which ones to spend time on. One must develop the ability to quickly assess both the merit of a paper and the difficulty of implementing it. Generally, one doesn’t have the time to build an implementation just to make a first-pass assessment.
- Implementation details that are seemingly minor and uninteresting are left out of papers, but they can meaningfully affect algorithm results in practice.
Thank you, Cris, for raising so many interesting and practical design issues in your post.