Thursday, November 10, 2016

Agile Secret Sauce

There seems to be a lot of chatter on the Internet recently about the evils of Agile, and Scrum specifically. Some of these posts make a few valid points, but they all seem to have been written by people who have been burned because they were doing Fragile (see You might be doing Fragile), or who simply want to get a few extra page views on their blogs by posting content that is designed to provoke an emotional response. I won’t get into the specific flaws I see in their arguments; rather, I will describe a single practice, falling under the Agile banner, that can mean the difference between sustained success and failure on any project, regardless of whether the people involved claim to be practicing Agile or any other methodology for that matter.

Despite the mind-blowing progress that has been made in software development technologies, tools, processes and practices in the last half century, more than half of the software development projects undertaken by organizations of all sizes still land up in some or other failure state, often related to poor execution rather than a misconceived vision. Though complex software development remains a relatively difficult undertaking, this does not justify or excuse the high rate of failure. Surely after creating software for over half a century, we would have worked out how to do it in such a way that most attempts were successful, at least in execution? Why have we not learned from our collective mistakes?

I assert that the software industry has already stumbled upon a simple practice that is almost a silver bullet for successful software project outcomes. Over my career, I have been on teams that have used BDEUF (Big Design and Estimation Up-Front); proto-agile iterative methods; formal Agile methods, including Scrum, eXtreme Programming and Kanban; novel and ad-hoc processes; and some that have just practiced seat-of-the-pants development. And across all of the aforementioned, the one practice that seems to have an almost binary impact on successful execution is The Retrospective (though it is not always named such).

The Retrospective represents the ultimate confluence of the warm and fuzzy of human psychology and the hard and practical of the Scientific Method. It creates a simple, though formal, process through which a team can learn from its own collective experience and continuously experiment with new practices (with managed risk), and it gives every member of the team an equal opportunity to shape the process that they are following.

Effective human-centric process engineering is hard; human psychology is complex and volatile. A process that works perfectly well for one person or team may fail dismally when adopted by another, even if the formalisms of that process are held to with high fidelity. And given the ever-more-rapidly changing environment, a formal process that works perfectly today will almost certainly not be optimal tomorrow.

And most teams who attempt to adopt formal process frameworks either don’t implement all the practices and disciplines required to make those frameworks effective, and then let them mutate organically; or they institute overly cumbersome change-control processes in an attempt to minimize the cost and productivity impact of changes. Processes can quickly go from being assets to liabilities, reducing organizational productivity and agility rather than improving them.

What organizations and teams need is a continuous process improvement process as the meta-process for the organization; one which permits and encourages continuous optimization in the face of the ever-changing environment. Managing the continuous improvement of processes needs to be baked into the organizational culture, and it must be owned by everyone from the intern to the CEO; everyone needs to be able to suggest changes to the process, and those changes should be considered if the proponent can formalize a credible hypothesis of how the change will make a net improvement in the process. The organization or team also needs to be able to measure the effectiveness, or lack thereof, of all tactics that it adopts, or experiments it attempts.

The Scientific Method is inherently empirical; for any hypothesis, the following need to be defined (see the sketch after this list):

  • One or more tests that will prove the hypothesis
  • One or more tests that will disprove the hypothesis
  • A description of the data that need to be collected over which the tests will be evaluated
  • A mechanism to collect those data
  • A description of how other, potentially unrelated, environmental factors might affect those data
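
To make this concrete, here is a minimal, hypothetical C# sketch of how a team might record such an experiment between Retrospectives. The type and member names are purely illustrative; they are not part of Scrum or any other formal framework mentioned in this post.

namespace ProcessImprovement
{
    using System;
    using System.Collections.Generic;

    // Illustrative only: one record per process-change hypothesis raised in a Retrospective.
    public class ProcessExperiment
    {
        public string Hypothesis { get; set; }                // e.g. "Pairing on bug fixes will reduce regressions"
        public List<string> ConfirmingTests { get; set; }     // observations that would support the hypothesis
        public List<string> FalsifyingTests { get; set; }     // observations that would refute it
        public List<string> DataToCollect { get; set; }       // e.g. "regression count per sprint"
        public string CollectionMechanism { get; set; }       // how those data will be gathered
        public List<string> ConfoundingFactors { get; set; }  // unrelated environmental factors that could skew the data
        public string Proponent { get; set; }                 // accountable for analysis and the concluding report
        public DateTime ReviewDate { get; set; }              // the Retrospective at which the experiment is judged
    }
}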

There is a misconception that the Scientific Method is complex. It’s not; it can be simply reduced to cycles of Reflect, Execute, Measure, and Inflect. The Retrospective is the primary mechanism for making the Scientific Method the basis for a continuous process improvement process, while also providing the heartbeat of the process.

I will save a detailed description of how I run Retrospectives for a future post, but the rough structure is as follows: every member of the team, in no specific order, gets an opportunity to comment on practices that should be continued, practices that should be stopped or modified, and new practices that should be adopted. All comments, though particularly those related to abandoning and adopting a practice, must be supported by a description of the expected effect that the proposed practice will have on the key metrics that the team uses to measure their performance, and how those effects will be measured. If the hypothesis is consistent and coherent, then the practice is adopted for one or more iterations or milestones (depending on how long it will be before the expected impact should be measurably and unambiguously identifiable). The proponent is then accountable for making sure that the required data are collected and analyzed, and for providing a conclusion in the Retrospective that coincides with the end of the experiment. If the experiment is successful, then the practice is adopted as part of the formal process, until someone makes a supportable case in a Retrospective for why it should be modified or abandoned.

The person who is running the Retrospective also must ensure that argument, debate and defensiveness are kept to a minimum, that egos are managed appropriately, and that the Retrospective does not run over the allocated time. As is common with other Agile practices, it is best practice that the person running the Retrospective is not a Boss. The importance of creating an atmosphere where participants feel empowered to speak their minds, are confident that their opinions are valued, and trust that all hypotheses will be evaluated solely on their merits (rather than on the seniority of their source) cannot be overemphasized.

Even if a team is not following an iterative process cadence, regular Retrospectives provide a vital heartbeat for the team and the process. The cost of adopting the Retrospective is low, and this simple practice will have a significant positive impact on the productivity and morale of any team, regardless of the process that they are using or the organizational culture.

Tuesday, December 15, 2015

Windows Live Writer goes Open Source

My tool of choice for writing posts to this blog has always been Microsoft’s Windows Live Writer. Unfortunately it has gotten long in the tooth, and has not seen an update in many a year. Some time back Scott Hanselman announced that Microsoft was planning to make the code for Windows Live Writer available under an open source license. And that has finally come to pass; the code has now been made available under an MIT license, and a fork has been created called Open Live Writer.
 
Unfortunately I could not use it to write this post, because of changes to the Google authentication APIs, but as with most open source projects, I expect someone will provide a fix for that in a couple of days.
 
Open Live Writer can be downloaded here.
 
Update (December 23rd, 2015): the Google authentication issue has been fixed and I can now edit this blog with Open Live Writer… which I just did. Gotta love open source.

Friday, November 27, 2015

On the Merits and Risks of Being a Frank

And by Frank I am not referring to a member of a Germanic tribe, or a person named Frank, though I can imagine that bearing that moniker has its own merits and risks; I am referring of course to an individual who readily speaks their mind, is brutally honest, is forthright in the extreme, doesn’t beat around the bush, calls a spade a spade, chooses polemic over other forms of discourse, and so on and so forth. Let us call this kind a Frank.

And in the name of said frankness, I consider myself an instance of the aforementioned kind. As to why I am this way, I cannot rightly say; perhaps it is genetic, perhaps I learned it from my dearly departed father, perhaps I am on the spectrum, or perhaps I have a strong desire to reveal the underlying truth in all things, primarily to satisfy my own curiosity about life, the universe, and everything.

My life as a Frank has taught me something important that might help other Franks; regardless of the strength of one’s belief, the depth of one’s righteousness, the eloquence and sophistication of one’s argument, and the indomitable passion with which one delivers that argument; if one does not effectively engage the audience, either in hearing and responding to the subtleties of their points and counter-points, or in at least getting them to hear the subtleties of one’s own, then all of that passion, energy, and commitment are for naught.   

The mechanism one uses to elucidate the truth is just as important as the truth itself. If one is solely concerned with one’s own righteousness then perhaps it doesn’t matter, but if one is actually concerned with revealing or discovering the truth then it is not enough that one can identify the weaknesses or inaccuracies in an argument; one has to strive to successfully communicate one’s argument or position in such a way that it has the highest likelihood of being given any consideration by the listener (who may initially strongly hold an opinion quite opposite to one’s own).

In general, our most self-destructive behaviors are occluded from us. If one doggedly seeks to elucidate the truth of a thing, but finds oneself driving one’s point home no matter what the response from the listener or audience; or worse yet, one doesn’t particularly care what the listener’s response is, then one is probably not in it for any noble reason. One’s frankness in this case is probably motivated by some deep-seated sense of inadequacy, a need for attention, or worse.

Franks need to always keep in mind that most people will not initially take the passion, energy and commitment that a Frank demonstrates for interactively discovering the truth of a thing, as noble or good; they will just think that the Frank is an arrogant asshole. And despite the fact that the Frank’s intentions may indeed be noble, those people may still be correct in their assessment. And even those relatively self-aware Franks have bad days, when their forthrightness smothers their empathy and compassion, and becomes belligerence or just plain cruelty.

Wisdom, truth and fact need to find fertile ground in the minds of the listener, else they are no better than fallacies, lies, and superstitions. You as a Frank are entirely responsible for ensuring that it is not the demeanor of the messenger that causes the audience to raise stone walls around that ground.

The truth will set you free… and in most cases also scare the bejesus out of you.

Wednesday, November 25, 2015

Tactics will get you through times of no Strategy, better than Strategy will get you through times of no Tactics

Despite how often the words Strategy, Strategic, Tactics and Tactical are used in planning meetings, it would seem that many who use them don’t actually understand the difference between the terms, often using them interchangeably. The purpose of this post is actually to discuss the merits of Tactical Excellence, but before elaborating on that topic I think I should disambiguate the two aforementioned terms.

The best way to illustrate the meanings of these terms is with a military example, which is most apropos, since these terms both have martial origins.

A group of generals and their staff decide that during an upcoming offensive, taking and holding a particular hill, which is currently held by the enemy, will be pivotal to the success of the offensive. Since it is the highest ground for hundreds of miles in every direction, it gives whoever holds that ground the ability to visually monitor all activity in the area, of friends and foes alike. Additionally, it has the advantage of forcing the enemy to fight uphill, and is generally a more defensible position. During a major assault one would not want to leave an enemy at one’s back holding such ground. The generals order a company of paratroopers to drop in behind enemy lines and take this hill ahead of the main offensive. Taking and holding this high ground, and denying it to the adversary, is an example of a strategy.

A strategy is typically a coarse-grained plan to achieve a large-scale goal. In the example above, the strategy itself does not describe how the hill should be taken, just that it should be taken and then held. Though the generals may have more specific instructions for their field commanders, typically, given the fluidity of combat, they are not overly prescriptive and leave the details up to those field commanders. It is tactics that will be used by the field commanders and their teams to deal with any circumstances that arise while taking the hill.

Tactics are repeatable patterns or behaviors that can be used to address specific challenges that might arise in achieving the strategy. Some military examples of circumstances requiring tactical responses are encountering a machine gun nest or a fortified position, crossing a stream or ravine (while under fire perhaps), dealing with a wounded soldier, dealing with a chemical weapons assault, surviving a firefight with a larger enemy force, or, for the purposes of my example, dealing with a sniper.

During the attempt to take the hill, a stick (a small team of paratroopers) gets pinned down by a sniper, who has secreted himself in a grove of trees higher up the hill, making further progress up the hill all but impossible without the squad taking heavy casualties. The lieutenant commanding the stick does not have the time to strategize a response to this threat; and he doesn’t have to, because he and his team have trained in a number of tactics for just this situation.

Assuming that the squad has found suitable cover (always a good tactic) and released smoke to obscure their position and movements (another good tactic), the squad needs to locate the sniper’s position. Historically this was done by drawing the sniper’s fire by raising a helmet or similar decoy, and using muzzle flash, the sound of the shot and its delay, and reflections off the scope, to manually triangulate the position of the shooter. Though they may still need to draw fire, modern special forces (and perhaps even conventional forces) carry electronic devices that detect body heat, the heat of the bullet in flight, shockwaves, muzzle flash, sound delay, and a number of other measurable indicators to electronically locate the shooter, making finding a sniper much less tedious.

Once the shooter’s position has been identified the sniper can then be dealt with, by laying down enthusiastic machine gun or mortar fire on that position, deploying a weaponized mini-drone (such exotic ordnance must exist!), or calling in air or artillery support (though in our example that might not be such a good idea; an entire grove of trees on fire, or torn to large irregular shreds, might be as much an obstacle to forward progress as the sniper was).

The tactics described above are not explicitly part of the strategy, but the successful execution of those tactics is critical to the successful execution of the strategy. For tactics to be optimally effective, the team needs to be highly efficient at their execution, having practiced them over and over, until they are second nature. They also have to trust that their leadership and teammates will all do their parts in the execution of the tactic, since most tactics require more than a single person to execute.

Soldiers have a toolbox of offensive, defensive and supportive tactics that they take to war, or any other theater they operate in: natural disasters, police actions, peacekeeping, etc. It is the depth and breadth of their tactical toolbox, and the soldiers’ expertise in executing these tactics, that distinguishes adequate soldiers from elite ones. There is a reason elite operators spend so much time training. Every tactic in their repertoire has to be practiced, tested and perfected, and new techniques continually added to address new threats or circumstances.

The need for such a tactical repertoire is not limited to the military though; and now to the actual topic of this post. In the example above, which is more important to the team on the ground’s ability to do their job - having a good strategy or having a wide and deep tactical repertoire? I would assert the latter. For operational teams, of any discipline, becoming and being Tactically Excellent is probably the most important strategy they could adopt. It is so important a strategy that I would consider it The Strategy to Rule Them All. Perhaps it even deserves Meta-strategy status.

Though I was in the military a long time ago, I have spent most of my career in Software Development. There are many highly effective Software Development tactics that teams can bring to bear to maximize the likelihood of success. These include practices like code reviews, retrospectives, planning poker, daily stand-up meetings, unit testing, short iterations, pair programming, refactoring, etc. Developing strategies for executing various types of pivots is also vital, given the rate of change that modern software engineers need to be anti-fragile to. If an engineering team becomes expert at these tactics, then it really doesn’t matter what strategy they are executing on; they will be effective. The weakest part of a strategy should never be the tactics used in the attempt to achieve it.

Strategies in general are rather ephemeral; they, by necessity, have to change to deal with environmental or competitive changes, but tactics are more durable, and even timeless in some cases. As CEOs (“Generals of Commerce”) strive for strategic excellence, attempting to predict the coarse-grained direction of the markets, their competitors and their customers, operational teams should focus on attaining and maintaining tactical excellence. This may sometimes mean spending time foreseeing events that never come to pass, and time practicing techniques that are never needed, but these are not wasted efforts, since they improve the team’s foresight, improve the team’s ability to work together as a cohesive squad, and generally make the team more agile.

As I like to say, Tactics will get you through times of no Strategy, better than Strategy will get you through times of no Tactics.

Tuesday, July 14, 2015

More fun with p5.js

Now with more interactivity…

Move the mouse cursor over the animating cube field. Click. Fun.

Sunday, June 21, 2015

A p5.js Experiment

I have been a fan of Processing and processing.js for a while now, though I haven’t had a lot of time to play with either recently. A new JavaScript interpretation of Processing, called p5.js, is now available, so I thought I would give it a try. Here is my first experiment using this new API.


The new API is definitely a subset of Processing, but it has a lot of the goodies from its forebear, and also provides the ability to interact with the DOM. The performance is very reasonable too.

The welcome page for p5.js is particularly cool; it is definitely worth checking out!

Saturday, May 23, 2015

You Might Be Doing Fragile

Many years ago I attended a talk by Steve McConnell, the author of a number of seminal software engineering tomes including Code Complete, Rapid Development and Software Estimation: Demystifying the Black Art. I don’t remember the exact year, but it was around the time that the Agile Manifesto was published, and though the term Agile had yet to become an entry in the mainstream software developer's lexicon, many of the practices that we associate with Agile today were already being used in the industry.
 
During Steve’s presentation he said (and I am paraphrasing since I do not have a transcript) that in his experience the most important contributor to success in software projects was not which formal software development methodology a team followed, but rather how disciplined they were in following the methodology or process that they had. You will note the use of the term formal in that last sentence; by this I am referring to mature methodologies that have evolved and survived in the crucible that is large-scale real-world software development.
 
Based on my own experience at the time, I immediately recognized Steve’s observation as deep wisdom, and it is as true today as it was then. Whether a team is following a Big Design and Estimation Up Front (BDEUF) or an Agile process, it is more important that they are disciplined in the adoption of that process than which process they use.
 
Unfortunately, a lot of engineering teams who attempt to adopt Agile practices, fail miserably. And nine times out of ten they fail because they are undisciplined about their adoption, and they land up practicing what I call Fragile. Many managers don’t seem to get that there is no formal (there's that word again!) process called Agile; rather there are formal processes called Scrum, XP, Kanban, Lean, MSF, etc., that follow the Agile philosophy.
 
Though the aforementioned methods do lend themselves to customization, and in many cases demand it in the name of continuous process improvement, their initial adoption requires disciplined adherence to the specific principles and practices of the method. For example, just doing daily stand-up meetings will not result in higher developer productivity, product quality or predictability. Stand-up meetings are just one part of Scrum and need to be combined with the other Scrum practices to realize the real benefits of the methodology.
 
And like any organizational change, the adoption of a formal Agile process also requires perseverance; it can take half a year before a team hits its stride with the new process. Unfortunately many failed attempts at adopting Agile become classic cases of throwing the baby out with the bathwater - after continued failure many teams abandon Agile practices and swing back to BDEUF, and then typically continue to fail with that approach.
 
I have heard it said that using Agile is just an excuse for poor planning or a justification for laziness, and that may sometimes be the case, but these are just examples of practicing Fragile. I have seen it time and time again where a team that has been doing BDEUF (please let’s stop calling it Waterfall, since that moniker was used to describe how NOT to do software development!) resorts to Agile (read Fragile) when it becomes clear that there is no way they are going to meet their dates, requirements or features. It doesn’t help them, or stakeholders’ perceptions of the effectiveness of Agile practices.
 
So, with a nod to Jeff Foxworthy, here are a few ways you can identify if you are doing Fragile.
 
If you got part way through your BDEUF project and after realizing that there was no way that you were going to make your dates or scope you resorted to an Agile process…
…then you might be doing Fragile.
 
If you adopted a couple of Agile practices, e.g. a daily stand-up meeting, but haven't seen any improvement in your engineering team's productivity, predictability or general satisfaction; and are thinking that Agile is a load of hogwash…
…then you might be doing Fragile.
 
If you are not doing retrospectives or post mortems after every milestone…
…then you might be doing Fragile.
 
If your stakeholders don’t have complete transparency into your process and progress…
…then you might be doing Fragile.
 
If you are not using an Application Lifecycle Management platform like Team Foundation Server to support your adoption of Agile…
…then you might be doing Fragile.
 
If you are not doing Build and Test Automation…
…then you might be doing Fragile.
 
If you don’t require that developers write tests for the code that they write…
…then you might be doing Fragile.
 
If you believe that human beings can estimate complex tasks (greater than 8 hours) with any degree of accuracy…
…then you might be doing Fragile.
 
If you are not refactoring user stories and work items into smaller user stories and work items to mitigate the above human limitation…
…then you might be doing Fragile.
 
If you think that you can mitigate the unquantifiable risks associated with emergent complexity with a fixed-size contingency buffer…
…then you might be doing Fragile.
 
If your Scrum Master (or whatever you call her or him) is A Boss…
…then you might be doing Fragile.
 
If anyone on your team asks, “what should I be doing next?”…
…then you might be doing Fragile.
 
If any stakeholder has to call up a lead or manager and ask "when will my feature be ready?"…
…then you might be doing Fragile. 
 
If anyone in a meeting does not know whether they are a Pig or a Chicken, or does not know what that even means…
…then you might be doing Fragile.
 
If you are not doing sprints or iterations, or if those iterations are measured in months as opposed to weeks…
…then you might be doing Fragile.
 
If you are not doing backlog grooming with stakeholders multiple times during each iteration…
…then you might be doing Fragile.
 
If the total daily time commitment required by each developer in the development process, that is not related to designing software or writing code, is more than about 15 minutes…
…then you might be doing Fragile.
 
If your development leads spend more time managing the process than writing code…
…then you might be doing Fragile.


Please don’t do Fragile.

Friday, November 7, 2014

.NET Iteration Performance

Last night I gave a talk to the .NET User Group of British Columbia on .NET Performance Engineering. During the presentation I demonstrated the performance characteristics of four different C# iteration mechanisms: incremented for, decremented for, foreach and LINQ.

Here (as promised to the audience) is the code for the test application and the individual test cases:
namespace PerformanceMeasurement
{
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;

    public class Program
    {
        private enum TimerUnits
        {
            Ticks,
            Milliseconds
        }

        private enum ReturnValue
        {
            TotalTime,
            AverageTime
        }

        public static void Main(string[] args)
        {
            if (Debugger.IsAttached)
            {
                Console.WriteLine("A debugger is attached! JIT-compiled code will not be optimized ");
            }
            
            Console.Write("Press any key to warm up tests");
            Console.ReadKey();
            var testData = InitAndWarmUpIterationTests();
            Console.Write("Press any key to start tests     ");
            Console.ReadKey();
            Console.WriteLine("\rStarting Performance Tests      ");

#if DEBUG
            Console.WriteLine("This is a Debug build! JIT-compiled code will not be optimized ");
#endif
        
            Console.WriteLine();

            const int IterationCount = 1000;

            RunIterationTests(IterationCount, testData);
            RunCollectionTests(IterationCount);

            Console.Write("Tests Complete. Press any key to exit.");
            Console.ReadLine();
        }

        private static List<List<string>> InitAndWarmUpIterationTests()
        {
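            // Create the test data, then run each word-count implementation once so the methods
            // are JIT-compiled (and optimized, in a Release build) before any timing starts.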
            var testData = CreateIterationTestData();

            var warmupResults = new List<int>
                                    {
                                        WordCountWithLinq(testData),
                                        WordCountWithForEach(testData),
                                        WordCountWithFor(testData),
                                        WordCountWithForDownward(testData)
                                    };

            Trace.WriteLine(warmupResults.Count);
            
            return testData;
        }

        private static List<List<string>> CreateIterationTestData()
        {
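            // Read every file in the data folder into memory as a list of lines per file;
            // this is the input that each word-count implementation will iterate over.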
            var entries = new List<List<string>>();

            const string Path = @"data\";

            foreach (var fileName in Directory.EnumerateFiles(Path))
            {
                using (var stream = File.OpenRead(fileName))
                using (var bufferedStream = new BufferedStream(stream))
                {
                    var reader = new StreamReader(bufferedStream);
                    var lines = new List<string>();
                    string current;
                    while ((current = reader.ReadLine()) != null)
                    {
                        lines.Add(current);
                    }

                    entries.Add(lines);
                }
            }

            return entries;
        }

        private static void RunIterationTests(int iterationCount, List<List<string>> testData)
        {
            Console.WriteLine("Iteration Performance Tests:");

            const TimerUnits Units = TimerUnits.Ticks;
            const ReturnValue ReturnValue = ReturnValue.AverageTime;

            Console.WriteLine("Creating Test Data...");
            var results = new List<int>();

            Console.WriteLine("Running Tests...");
            var t1 = TimeMe(WordCountWithLinq, testData, iterationCount, ref results, Units);
            var t2 = TimeMe(WordCountWithForEach, testData, iterationCount, ref results, Units);
            var t3 = TimeMe(WordCountWithFor, testData, iterationCount, ref results, Units);
            var t4 = TimeMe(WordCountWithForDownward, testData, iterationCount, ref results, Units);

            Console.WriteLine("Result Count: {0}", results.Count);
            Console.WriteLine("WordCountWithLinq \t\t{0}\t: {1} {2}", ReturnValue, t1, Units);
            Console.WriteLine("WordCountWithForEach \t\t{0}\t: {1} {2}", ReturnValue, t2, Units);
            Console.WriteLine("WordCountWithFor \t\t{0}\t: {1} {2}", ReturnValue, t3, Units);
            Console.WriteLine("WordCountWithForDownward \t{0}\t: {1} {2}", ReturnValue, t4, Units);
            Console.WriteLine();
        }

        private static void RunCollectionTests(int iterationCount)
        {
            const TimerUnits Units = TimerUnits.Ticks;
            const ReturnValue ReturnValue = ReturnValue.AverageTime;

            var results = new List<int>();

            const int MaxValue = 10000;
            var t1 = TimeMe(CreateAndEnumerateList, MaxValue, iterationCount, ref results, Units);
            var t2 = TimeMe(CreateAndEnumerateGenericList, MaxValue, iterationCount, ref results, Units);

            Console.WriteLine("Collection Performance Tests:");
            Console.WriteLine("CreateAndEnumerateList \t\t{0}\t: {1} {2}", ReturnValue, t1, Units);
            Console.WriteLine("CreateAndEnumerateGenericList \t{0}\t: {1} {2}", ReturnValue, t2, Units);
            Console.WriteLine();
        }

        private static float TimeMe<TArg, TReturn>(
            Func<TArg, TReturn> me,
            TArg arg,
            long iterationCount,
            ref List<TReturn> results,
            TimerUnits units = TimerUnits.Milliseconds,
            ReturnValue returnValue = ReturnValue.AverageTime)
        {
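            // Invoke the supplied delegate iterationCount times, timing each call with a Stopwatch,
            // and return either the total or the average elapsed time in the requested units.
            // Collecting each result in the results list ensures the return values are actually used.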
            var timer = new Stopwatch();

            var currentIteration = 0L;
            var time = 0F;

            do
            {
                timer.Start();
                results.Add(me(arg));
                timer.Stop();
                time += (units == TimerUnits.Milliseconds) ? timer.ElapsedMilliseconds : timer.ElapsedTicks;
                currentIteration += 1;
                timer.Reset();
            }
            while (currentIteration < iterationCount);

            if (returnValue == ReturnValue.AverageTime)
            {
                time = time / iterationCount;
            }

            return time;
        }

        private static int WordCountWithFor(List<List<string>> entries)
        {
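            // Sum the line lengths using nested incremented (upward-counting) for loops with indexed access.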
            var wordcount = 0;

            for (var i = 0; i < entries.Count; i++)
            {
                for (var j = 0; j < entries[i].Count; j++)
                {
                    wordcount += entries[i][j].Length;
                }
            }

            return wordcount;
        }

        private static int WordCountWithForDownward(List<List<string>> entries)
        {
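            // Iterate from the last element down to index 0; in an optimized (Release) build the JIT
            // compiles this loop condition differently from the incremented version (see the discussion after the results).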
            var wordcount = 0;
            for (var i = entries.Count - 1; i > -1; i--)
            {
                for (var j = entries[i].Count - 1; j > -1; j--)
                {
                    wordcount += entries[i][j].Length;
                }
            }

            return wordcount;
        }

        private static int WordCountWithForEach(List<List<string>> entries)
        {
            var wordcount = 0;

            foreach (var entry in entries)
            {
                foreach (var line in entry)
                {
                    wordcount += line.Length;
                }
            }
            return wordcount;
        }

        private static int WordCountWithLinq(List<List<string>> entries)
        {
            return entries.SelectMany(entry => entry).Sum(line => line.Length);
        }

        private static int CreateAndEnumerateList(int maxValue)
        {
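            // The non-generic ArrayList boxes each int on Add and unboxes it during enumeration;
            // this overhead is what is being compared against the generic List<int> version below.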
            var list = new System.Collections.ArrayList(maxValue);

            foreach (var val in Enumerable.Range(0, maxValue))
            {
                list.Add(val);
            }

            var total = 0;

            foreach (int val in list)
            {
                total += val;
            }

            return total;
        }

        private static int CreateAndEnumerateGenericList(int maxValue)
        {
            var list = new List<int>(maxValue);

            foreach (var val in Enumerable.Range(0, maxValue))
            {
                list.Add(val);
            }

            var total = 0;

            foreach (var val in list)
            {
                total += val;
            }

            return total;
        }
    }
}
Note: in order to run this code you will need to create a data folder that contains a number of multi-line text files and modify the code to point at that folder.
 
On my workstation I get the following output when directly executing the application:
 Starting Performance Tests

Iteration Performance Tests:
Creating Test Data...
Running Tests...
Result Count: 4000
WordCountWithLinq AverageTime : 3697.65 Ticks
WordCountWithForEach AverageTime : 916.269 Ticks
WordCountWithFor AverageTime : 833.674 Ticks
WordCountWithForDownward AverageTime : 736.798 Ticks

Collection Performance Tests:
CreateAndEnumerateList AverageTime : 582.813 Ticks
CreateAndEnumerateGenericList AverageTime : 232.494 Ticks

Tests Complete. Press any key to exit.



As you can see, the decremented for loop is the fastest, and approximately 5 times faster than the LINQ implementation! It is also about 13% faster than the incremented for loop, which is significant. After showing the code and results above, I walked the audience through how I discovered why this is the case. I won't document the entire process here, but after looking at the disassembly of the optimized native code generated by the CLR x86 JIT, I discovered that in the decremented case the evaluation of the condition uses the test and jl (Jump If Less) instructions, rather than the cmp (Compare) and jae (Jump if Above or Equal) instructions, which are used in the incremented case. Obviously the former combination of instructions executes faster than the latter. The latter combination of instructions is also used in the un-optimized, i.e. Debug, version of the decremented for, which is even slower than the un-optimized foreach, which is why it is important to always performance-test the Release version of the code.

 

So why should anyone care about this?


Well, if you are using LINQ in a performance-sensitive code path then you should STOP doing that immediately, and in cases where you are iterating over very large arrays with an incremented for loop, you should try the decremented for and see what you gain. Note that the overall performance gain will vary depending on how much work you are doing in the body of the loop.


And lastly, don't be afraid to look at the disassembly of your managed code; it's right there in Visual Studio and it represents the ultimate truth of how your code is executing on the hardware.

Thursday, July 31, 2014

Applied Architecture Patterns on the Microsoft Platform, Second Edition

[Book cover image]

The book that I have been working on for the last year, with Andre Dovgal and Dmitri Olechko, has been published and is now available for purchase on Amazon. Applied Architecture Patterns on the Microsoft Platform, Second Edition, published by Packt Publishing, builds upon the framework that was presented in the first edition and includes new chapters on recent Microsoft technologies, including .NET 4.5, SQL Server and SharePoint.

The book is aimed at software architects, developers and consultants working with the Microsoft Platform, who want to extend their knowledge of the technologies that make up said platform, and learn how those technologies can be composed into complex systems.

I decided to contribute to this book because of how useful I found the first edition. Hopefully this updated version will be as useful to those who read it.

Thursday, July 10, 2014

Everyone Jumps, Everyone Fights

I have been in a lot of interviews recently and the topic of my leadership style has come up in many of them. My 20-year career has given me numerous opportunities to observe a wide spectrum of leadership styles, some very effective, and others not so much. My own leadership style has percolated from the styles of the aforementioned leaders, and more specifically from the philosophies, practices and techniques that I have seen be consistently successful. Though most are self-evident I think they are worth writing about, and I will do so in a number of posts.

My informal study of leadership began while I was serving in the South African Defense Force with 1 Parachute Battalion, then part of the 44 Parachute Brigade. In airborne units typically everyone jumps, and everyone fights; everyone from the private working in the mess to the commanding officer is jump-qualified and is trained for active combat. The commanding officer while I was at 1 Parachute Battalion was Colonel J.R. Hills. Colonel Hills was a seasoned and decorated veteran of the Angolan Bush War, and was easily one of the most capable soldiers in a unit of highly capable warriors. He without exception commanded the respect and loyalty of every member of the unit, by demonstrating time and again that he was a leader who not only could jump and fight, but who would lead from the front.

Being in the Infantry, airborne or otherwise, means that your skill as a soldier is primarily predicated on your endurance. An infantryman needs to be able to march or run a significant distance burdened with equipment and weaponry and arrive at the objective with enough reserves to be able to fight immediately. And the physical and emotional endurance that is then required for combat is staggering.

One of the ways that Colonel Hills demonstrated his willingness to lead from the front was his annual running of the Comrades and Washie ultra-marathons, and his challenge to the entire unit to beat him in the former marathon, and simply finish the latter. The reward offered was always a much sought-after weekend furlough. Though I do recall some members of the unit beating his Comrades time, I can’t recall a single soldier taking the Washie challenge. The Colonel demonstrated time and again that he had more raw endurance than anyone in the unit.

By the time I served in the unit South Africa was not involved in any active conflicts, but if there was anyone I would have wanted as my commanding officer in active combat it would have been Colonel Hills. I would have followed him into Hell had it been required.

Lead from the front is probably my most fundamental leadership principle. Software engineering teams are meritocracies, so effective leadership of these teams requires that every member of the team respects the leader’s technical knowledge and skill. Leading from the front also means that a leader should not expect anything from their team members that they are not prepared to do themselves. This does not imply that the leader needs to master every discipline of every role in the team they lead, but they should at least be able to roll up their sleeves and make a contribution in every role if necessary, and certainly understand the day-to-day requirements and challenges of each role.

I am lucky that I have had opportunities to work in most sub-disciplines within the broader Software Engineering discipline, including Product Management, Program Management, Project Management, Architecture, Development, User Interface\Experience Design, Quality Assurance, Release\Configuration Management, and even Support and Operations to a lesser degree; so I have a good understanding of the minutiae of most of these. I also endeavour to write code every day; partly because I just love to write code, but also because it means I never lose touch with the reality of being a software engineer. In my last role I made sure that I was actively contributing to the product, by taking Architecture, Development and QA work items in every sprint. I believe that this made me a much more effective leader, earned me the technical respect of my team, and gave me some of the context necessary to build rapport with everyone on that team.

I should note that not all the leadership practices I observed being successful in the military translate to leading software engineering teams. Military organizations employ a Command and Control management style by necessity, which is almost always a disaster if used with modern software engineers. I favour a Facilitation management style, which I will elaborate on in a follow-up post.