Wednesday, September 29, 2004

Morphology of Software

I read a while ago about Propp's Morphology of Fairy Tales. It made me wonder what a Morphology of Software would look like: would there be a villain and a return? There are certainly interdictions. It seems like it would be a useful thing to have, and you'd want one for different levels of granularity (enterprise through to application). Would a statement in this morphology be something like 'all ERP solutions have a business data warehouse to enable BI/MI'?

Some people are working on the automatic generation of stories, given a morphology. Could we automatically generate a high level architecture from a morphology and some additional data? Would we want to?

Primacy of code

Some extremists would say that the source should be the only place to look when you need to understand a system. No secondary sources allowed.

NLP and other approaches to understanding how humans interpret the world around them suggest that people have different preferences for learning:

Some like to read.
Some like to have things explained to them.
Some like to view pictures.
Some like to engage in discussion.
Some like to experiment and learn through feedback.
Some like mixtures of the above.

Some need more than one go at experiencing something before they understand it.
Some need things in small chunks.

Maybe primacy of code discriminates against those who don't learn well through reading text.

As a start, I'd like to see an IDE where I can put visual comments (better still, in a way that lets it create an animated slide show) rather than just text. The IDE would place an XML description in the comment (or maybe a link to a separate image file) so that the compiler wouldn't barf, and then render it accordingly. Ideally there would be an ALT tag, and the IDE could do 'Save as Text...' where the comments would just be the content of the ALT tag.
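To make the idea concrete, here's a minimal sketch (entirely hypothetical; the `<visual>` element and its attributes are invented for illustration) of what a visual comment and the 'Save as Text...' fallback might look like:

```python
import re

# A source file where the IDE has embedded a hypothetical <visual> element
# inside an ordinary comment, so the compiler sees only a comment.
source = '''
# <visual src="state-machine.svg" alt="Idle, Running, Done states" />
def run():
    pass
'''

def save_as_text(code):
    """Sketch of 'Save as Text...': replace each visual comment with its
    alt text, the plain-text fallback rendering."""
    return re.sub(r'<visual[^>]*\balt="([^"]*)"[^>]*/>', r'\1', code)
```

After `save_as_text(source)`, the comment line reads `# Idle, Running, Done states` — the image reference is gone but the meaning survives in any text tool.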

Tuesday, September 07, 2004

Ch ch ch changes...

Some of the ways a data schema can change over time:

  1. Addition/removal of attributes.

  2. Addition/removal of entities.

  3. Changes in cardinality of relationships. [1]

  4. Changes of ownership in relationships. [1]

  5. Granularity changes. [1]

  6. Encoding changes. [1]

  7. Unit changes. [1]

  8. Identifier changes. [1]

You can implement the above with some set of transformations (some obvious, e.g. adding an attribute; some less obvious, e.g. a granularity change). To ensure backward compatibility you'd want to build a schema that, when the transformations are applied, is still understandable by actors expecting the original schema. Clearly at some point backwards compatibility has to break, either structurally or semantically.

If you are considering how to build an extensible schema the above might help in identifying what extension mechanism to use where.

[1] Ventrone, V. and Heiler, S. (1991). Semantic heterogeneity as a result of domain evolution. SIGMOD Record (ACM Special Interest Group on Management of Data), 20(4):16–20.

Monday, September 06, 2004

Software Documentation

When Brian Marick had a problem with CruiseControl he rejected 'use the source, Luke' because he didn't have time. This makes an interesting implicit point about software documentation: often you are too busy to use the source. If someone has provided a framework which you are going to utilise, then it needs to be documented, since you are always too busy. You may respond with "that's ok, that only applies to stuff like Spring, Ant, Cruise and the rest, since they are tools or frameworks designed to be treated as a black box". But the 'black box' is a consequence of abstraction, and all s/w should utilise abstraction where needed to control complexity. If we follow the 'no time' argument to its end, then this implies there needs to be some degree of s/w documentation for the key abstractions within a s/w construction (key == (stuff we use a lot || hard stuff we use little enough we forget it)).

I think it is this that gets my goat when I'm reading code with no decent comments or class responsibilities. I know that there is no guarantee that comments are kept up-to-date, but the question in my mind is 'should we avoid using things because they may not be reliable, or make sure these things become reliable?'. I'd prefer the latter, though it's harder, and if there is a solution it's probably self-discipline.