Wednesday, August 31, 2011

Good Software Design is Long-Lasting

Ten Principles of Good Design Redux | Part 7

Note: This post is technically out-of-order, but if modern microprocessors can do it then so can I.

I recently designed a system to replace a “legacy” application that has been running for over 25 years. I have no doubt that this is not unusual, and that there are millions, if not billions, of lines of code still running that are well into their 20s. One would imagine that we had learned our lesson from the whole Y2K nastiness, or simply from observing just how long some code manages to stay running.

Unfortunately I have seen time and time again how ostensibly competent Software Architects tacitly design software systems for a limited life-span. And it is the clients of these architects who have to bear the costs, since by the time the software starts to display the symptoms of its accelerated decrepitude they are usually heavily locked into it, and have to perform significant surgery to make critically-needed changes. Typically these changes add significant technical debt to the system and make subsequent changes harder still.

This can all be avoided if one designs the software with the assumption that it might live for a quarter of a century or longer. Of course you may be given a specific lifespan as a formal requirement. It is up to you to decide just how much salt to take those requirements with, given your understanding of the client or domain. The case might also be made that designing relatively short-lived software is a way to guarantee your own job security. That may hold true in some cases, but I would assert that if you are so morally flexible then you should be in politics rather than software development.

Tuesday, August 23, 2011

Good Software Design is Unobtrusive

Ten Principles of Good Design Redux | Part 5

If I were to apply this principle as it was originally stated by Rams, it would contradict the assertion that I made in an earlier post that a software design needs to be a work of art. Rams suggests that products are “like tools and are neither decorative objects nor works of art”. I wholeheartedly agree that Form should always follow Function in software design, but I also believe that there is a genetic component to aesthetics that is useful in establishing the quality and suitability of a software design.

One would naturally assume that the form follows function principle would be universally adhered to by all software developers and architects. And that is generally a safe assumption. It is not, however, safe to assume that the function being followed is merely the set of formal requirements for the system; often the system is also designed to address the personal requirements of the designer.

Many Software Architects are rabidly righteous; “Tiger Architects” defending their favoured technologies, design patterns and methodologies with ferocity and zeal, even in cases where those memes are obviously not optimal for a design. Despite the fact that Software is a memeplex on the leading edge of memetic evolution, the aforementioned phenomenon is mostly a manifestation of significantly baser stuff: mammalian territoriality and human ego (though the former may be considered the precursor of the latter).

I have been guilty of this in the past, but over the years I have learned to look at my own software designs through a deconstructivist lens; separating the fundamental requirements of the system from those that I have introduced because of my own biases, preferences and comfort zones. That is not to say that I simply reject all parts of the design that display evidence of the aforementioned, but I attempt to make sure that none of these personal requirements subsumes any of the fundamental requirements of the system. One might say that I practice Deconstructive Software Architecture, though that term might be confused with the similarly-named movement in meatspace architecture.

I am motivated to add the following rider to this principle:

Good Software Design is Egoless.

Friday, August 19, 2011

You can teach an old dog new tricks!

I always tell people that if you cut me I bleed pure Microsoft. That is not to say that I am under the illusion that Microsoft products and technologies are the best tools for every problem; they are just the tools that I am most familiar with. I did spend nearly a decade working for Microsoft after all, and had a hand in building some of those very tools, so one would assume that my skill set would be a bit skewed in that direction. So it is not a surprise that I have managed to learn almost nothing about the Oracle Technology Stack (though I did once have to use the Oracle database in a solution).

Until this week that is.

This week I had the opportunity to spend two days with a few technical folks from Oracle and I have to say it was great fun and very enlightening.

In preparation for my meeting I did a little reading on the Oracle Technology Stack (if you can call it a “stack”) and was befuddled by the array of products and technologies that it includes. Oracle is The Borg of the software industry; it has been on an apparently insane shopping spree for the last few years, buying up and assimilating company after company, including PeopleSoft, Siebel Systems, BEA Systems and Sun Microsystems, all of whom had significant technology stacks in their own right. No doubt Oracle invested a fortune to integrate and unify these stacks, but after looking at the Oracle Marchitecture I walked into the meeting with fairly low expectations.

I was very pleasantly surprised and impressed.

The Oracle stack is mature, fully-featured, highly-integrated, and supported by rich, comprehensive tools. I was highly impressed with JDeveloper, Oracle's flagship development environment, and the unified development experience it offers, particularly for applications targeting the Oracle SOA Suite, which is part of the Oracle Fusion Middleware platform. It allows for the creation of SOA Composite Applications that can include web services (in their broadest sense), BPEL and BPMN workflows, Business Rules, POJOs and a bunch of other capabilities, all supported by intuitive visual designers. The entire application can be developed in JDeveloper and then published directly to the Application Server.

The other technology that I was particularly impressed with is the Oracle Policy Automation (OPA) tool suite. It provides tools for modelling and runtime evaluation of complex Business Rules. Its most impressive capability is that it allows you to take a policy document written in Microsoft Word and transform that document in-place into executable rules. It also supports authoring of rules in Excel. You can then host and evaluate those rules on a web-service-accessible server, or you can host the evaluation engine directly in your Java or .NET application. Given that Microsoft does not have a technology to rival OPA, I will definitely consider using it in future .NET solutions that require a rich stand-alone Business Rules Engine. Yes, I know Microsoft also has rule engines in BizTalk and Workflow Foundation (though not in version 4.0 for some reason!), but the OPA rules authoring experience leaves both of these technologies in the dust.

I have only just begun to scratch the surface but I am really looking forward to learning more about the Oracle Technology Stack (I never thought I would ever hear myself say those words!).

Now all I have to do is work out how I can use Scala instead of Java as a development language in JDeveloper.

Monday, August 8, 2011

Good Software Design Makes a Product Understandable

Ten Principles of Good Design Redux | Part 4

This principle is applicable in two areas: firstly to User Interface Design and User Experience Design, and secondly to the Conceptual, Logical and Physical Designs of a software system. In the case of the latter, the comprehensibility of a software system design is so obviously vital to the successful realisation of that design that I will focus this post entirely on the former.
 
The ease with which a user comes to understand the visual metaphors utilized in a user interface, and thus the speed with which they can accomplish necessary tasks and operations, defines how usable that user interface is. From a user experience perspective “understandable” and “usable” are synonymous.
 
In 1995 I worked for a progressive IT company who were in the process of setting up a full usability testing lab. The plan for this lab called for the installation of cameras for recording video of the user’s facial and body language, a keystroke and mouse logger to monitor and capture user input data, a pulse monitor to measure the user’s stress levels, and a high-resolution screen capture of the user’s interaction with the application itself. The plan even called for having statisticians and clinical psychologists on staff to structure the usability tests and interpret the integrated data streams. The goal was to not only test the usability of clients’ applications, but also to gain a deep understanding of the Laws of Usability. I left the company before the plan was realized, to go and work on a new shiny thingy called the “World Wide Web”, so unfortunately I never got to see the lab completed, but it did introduce me to Usability as a sub-domain of Software Design. Even in 1995 the importance of Software Usability and Interaction Design, and their significant impact on the successful development and adoption of software, was well understood, though the Laws themselves had not yet fully emerged.
 
In the mid 2000s, while at Microsoft, I was fortunate enough to interact with some of the brilliant User Experience folks in the Windows Product Group and Microsoft Research. They were on the very bleeding edge of User Interface and Experience Design. Despite all of the UI innovation that happens in these teams, Microsoft has to be conservative in its adoption of these UI innovations because of the very broad use of their client operating systems; I saw many mind-blowing UI prototypes during that time, but have seen only a small percentage of the elements of those prototypes show up in shipping Microsoft products. Working with these teams gave me some idea of just how far the User Experience Design discipline had come in a relatively short time; Usability had gone from being a Craft to a fully-fledged Science, with its own sub-disciplines, laws and formalisms.
 
The Web has had a massive impact on our understanding of what makes a user interface usable and understandable. Because of their rapid mutation and evolution in response to users’ explicit and tacit feedback, web pages are to Usability Engineers what fruit flies are to Geneticists. User interface and experience design on the web has been highly experimental because of the breadth of its accessibility and the relative simplicity of web technologies. It has brought about a “democratization of [UI] design”. I am continually amazed at the brilliant UI that I see online.
 
A couple of years back I read a great book that further formalised the way I approach User Interface design. It is called “Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart”. It obviously is not a UI Design book; it is actually about the hidden knowledge in the exabytes of digital data that we generate, which can be revealed by applying various statistical analysis techniques. There is a great chapter about the use of randomized testing to improve the usability of web sites. The study documented in the book suggests that there are in fact no Laws of Usability, and that there is no such thing as a user interface that will be usable and comprehensible by all users. It further suggests that the key to a usable interface is adaptability and customizability.
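
To make the randomized-testing idea concrete, here is a minimal sketch of its core mechanism: deterministically bucketing users into interface variants so that their task-completion metrics can be compared. This is illustrative only; the function and variant names are my own, and a real experimentation framework would add instrumentation, gradual ramp-up and statistical significance testing.

```cpp
// Illustrative sketch: deterministically assign each user to a UI variant
// by hashing a stable user id. The same user always sees the same variant,
// and per-variant usability metrics can then be compared.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: picks a variant with a roughly uniform distribution.
std::string assignVariant(const std::string& userId,
                          const std::vector<std::string>& variants) {
    std::size_t bucket = std::hash<std::string>{}(userId) % variants.size();
    return variants[bucket];
}

int main() {
    std::vector<std::string> variants = {"classic-layout", "adaptive-layout"};
    std::vector<std::string> users = {"alice", "bob", "carol"};
    for (const auto& user : users) {
        std::cout << user << " -> " << assignVariant(user, variants) << "\n";
    }
}
```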
 
Ideally a user interface should adapt or be adaptable to each individual user. A user should not have to learn new visual and interaction metaphors to get a task done. “Fortunately”, the software industry has forced users to learn a lot of metaphors, e.g. The Desktop, The Hyperlink, The Tree View, etc., which can now be used relatively freely, though we should certainly not feel obliged to use them. If possible a user interface should simply adapt without requiring that the user explicitly set a number of configuration options. At a minimum a user interface should give the user a simple mechanism for customizing it to suit their preferences or preferred interaction “style”.
 
An example of an excellent user interface is Microsoft’s Visual Studio 2010. For a product with so many features and capabilities, I find it incredibly easy to use and to configure fully to my personal preferences. Obviously this is not a general purpose product, but for those who use it, i.e. software geeks, it is easy to understand and is highly usable (except for the occasional performance issue). I have used Eclipse in anger (pun entirely intended) over the course of the last decade and it offers a good Usability counterpoint to Visual Studio 2010.
 
So one would imagine, given what we already know about designing easily understood and usable user interfaces and experiences, that all modern user interfaces would be highly usable. Sadly that is not the case; I still see product after product, solution after solution, and web site after web site, designed so poorly that only a Visual Basic programmer from 1993 could love them! There really isn’t any excuse.

Friday, August 5, 2011

Replicator Wars | Consciousness as War Machine

On occasion I have ideas that have nothing to do with Software Development, but that I think still merit being written down. This morning, while riding the train reading “The Selfish Gene” by Richard Dawkins, I had just such an idea.
 
I was reading the final chapter of the original version, which introduces the “Meme” meme. This got me thinking about the tension between genes and memes. This tension exists because an individual can be “infected” by a meme that entirely subverts gene-driven behaviour, e.g. celibacy. A meme can ostensibly override a gene or gene-complex that has been millions of years in the making. This raises the question: if memes are potentially such a significant threat to genes, then why have genes not evolved one or more defenses against them?
 
And then I had an idea; perhaps Consciousness itself arose as a genetic evolutionary response to the infectious, self-replicating and potentially destructive nature of memes.
 
I would assume that a brain able to replicate memes arose by random mutation, or for some other evolutionary purpose, but as soon as memes were able to replicate from individual to individual with some degree of fidelity, the game was afoot. And possibly this happened before consciousness evolved. A meme is after all just a tool, and there are species besides Homo Sapiens that are tool makers and users. Once memes became a reality the possibility existed that memes could override gene-driven behaviour. And one would have to imagine that, through simple Darwinian evolutionary processes, those same genes would then start to evolve countermeasures to those memes, since the genes of an individual infected with a given “toxic” meme would not survive into subsequent generations. So perhaps instead of evolving specialised adaptations to specific memes, they evolved a handful of generalised ones. And perhaps Consciousness was one of those adaptations.
 
But surely consciousness actually increases our susceptibility to meme infection? I would assert not. The ability to pass ideas from one individual to another does not seem to require consciousness; memes do not seem to need self-awareness to replicate, as is demonstrated by other species that pass epigenetic “knowledge” from generation to generation.
 
I assert that Consciousness actually helps prevent meme infection because it gives an individual the ability to filter and analyse memes, and then if necessary immunize themselves against that and similar memes. It also gives the individual the ability to root out destructive memes which they have already been infected by, and similarly immunize themselves against future infections. This would mean that there are genes that are evolving to make us more self-aware, and generally smarter.
 
Perhaps the development of Consciousness was not the only response to the potentially destructive nature of memes; one would assume that genes would also evolve that hinder the ability to process and transfer memes. Perhaps there are also retrograde genes in us that are evolving to turn us back into wild beasts, incapable of self-reflection, poetry, art, science, philosophy and Sudoku.
 
So there may be two genetic responses to the Meme Threat in the human gene pool; one that is making us smarter and one that is turning us into soulless meat puppets. 
 
I think I finally understand the difference between Democrats and Republicans. I kid, the Republicans!

Tuesday, August 2, 2011

C++ : The Resurrection

I have always had a love-hate relationship with C++; it gives you almost total control over the hardware but requires that you be very mindful of that hardware. It has been a very long time since I wrote any C++ code in anger, and until yesterday I was convinced that I would never write another line of C++ in my life. Then I read an article in the most recent MSDN Magazine on Windows API development with the latest version of the C++ standard, whose working title has been “C++0x”, and which will officially be known as “C++2011”. I have been aware that there was a new revision of the standard in the works, but I had no idea how significant a step forward this latest revision is going to be. Lambda Functions, Type Inference and a “foreach” equivalent in C++? Who are you and what have you done with C++?!?
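
To show what got me so excited, here is a minimal sketch of those three features working together. It is my own example rather than one from the MSDN article, but each feature shown is part of the new standard.

```cpp
// A small illustration of C++2011's lambda functions, type inference
// with 'auto', and the range-based 'for' loop (the "foreach" equivalent).
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> numbers = {4, 8, 15, 16, 23, 42};

    // Type inference: the compiler deduces the type of 'total'.
    auto total = 0;

    // Range-based for: iterate without explicit iterators or indices.
    for (auto n : numbers) {
        total += n;
    }
    std::cout << "sum = " << total << "\n";

    // Lambda function: an unnamed, inline predicate passed to an algorithm.
    auto evens = std::count_if(numbers.begin(), numbers.end(),
                               [](int n) { return n % 2 == 0; });
    std::cout << "evens = " << evens << "\n";
}
```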

And there is a lot more to the new standard, which is now final, and should be published some time this Summer. I found this Channel9 interview with Herb Sutter; he gives a very good overview of the new features of C++2011, the process that brought it into being, and what it means for developers targeting the Windows Platform.
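
Two more additions worth a quick sketch are std::unique_ptr and move semantics, which together make resource ownership explicit and cheap to transfer. Again, this is my own illustrative example rather than anything taken from the interview.

```cpp
// Illustrative sketch of two further C++2011 features: std::unique_ptr for
// scope-bound ownership, and std::move for transferring that ownership.
#include <iostream>
#include <memory>
#include <utility>

struct Connection {
    Connection()  { std::cout << "opened\n"; }
    ~Connection() { std::cout << "closed\n"; }
};

int main() {
    // 'conn' exclusively owns the Connection; no explicit delete is needed.
    std::unique_ptr<Connection> conn(new Connection());

    // Ownership moves rather than copies; 'conn' is left empty.
    std::unique_ptr<Connection> owner = std::move(conn);

    std::cout << (conn ? "conn still owns it\n" : "conn is now empty\n");
    // 'owner' is destroyed at the end of main, and the Connection is
    // closed exactly once.
}
```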

I think I am going to have to give C++ another chance, though I don’t know what this is going to mean for the other languages that I have been using recently: F#, IronPython and Scala. I guess we will just have to wait and see.