Tuesday, December 5, 2017

Self-Organizing does not mean Self-Managing!

I often hear someone refer to the importance of ‘self-management’ for an agile team.  I want to dispel that myth.  Agile teams are Self-Organizing, but not Self-Managing!

Agile Principle #11 “The best architectures, requirements, and designs emerge from self-organizing teams.”(1)


Self-organizing means that a team can see a piece of value to deliver and, within their own ranks, organize around how to accomplish the work.  This is a core tenet of Agile, as it has been proven over and over that the most effective teams are simply given the Principle of Mission (2) and the minimal constraints within which they must operate, and then allowed to organize themselves around the work.  By self-organizing around the work the team self-optimizes for the best approach, using the localized knowledge they have of the domain, skill sets, etc., of the team environment.  True efficiency, as well as team satisfaction, is gained only through self-organization.  However, when teams try to Self-Manage in most organizations (the Spotify or Holacracy organizations of the world excluded) they will struggle.  Why?  Because every agile team has intrinsic needs that must be met to be truly successful, and many of these cannot or should not be handled within the team alone.

Current management patterns are based on early 20th century Taylorist styles that essentially state that the workers are too uninformed/ignorant/uncaring/stupid to figure things out for themselves, so they have to be told what to do at every step (my bias shows through in that statement).  True management is about maintaining the systems we build for the teams to thrive in.  To paraphrase John Kotter, ‘Leadership builds systems, management maintains systems’ (3).  This style of management does not mean controlling the teams, but instead acting as a bulldozer to clear obstacles out of the team’s way.  This includes things like supporting the teams in limiting and adding visibility to WIP, and identifying and removing impediments so the teams can focus on the work.

There are many attributes of management that work well in a lean and agile environment; some of the characteristics I have seen are challenging the teams in a positive manner, helping the team create and continuously update improvement metrics, and enabling and encouraging career and skill growth.  Team formation is also a key role of agile management; I once heard Esther Derby state that over 60% of the success of any agile team is based on its initial creation, and I have experienced approximately that same importance.  Also, and unfortunately, not every agile team can deal with the ‘voting off the island’ moment when a team member is just not in the right situation; good agile management can turn that potentially negative situation into a positive.

To enable true self-organization, Agile Managers need to exhibit the same lifelong-learner, knowledge-hungry approach that they expect from their teams.  They are not in the role for the title but for the sake of advancing the careers of others and for the ‘greater good’.  This takes a special mindset, one that will take time to bake in, but the underlying desire to move towards that type of role is essential.

If this definition of the Agile Manager is not what you see at your organization, then I can understand the desire to make agile teams self-managing.  However, let’s fix the root of the problem: let’s train and educate our managers to drop the draconian practices and become true agile managers.  That’s when, IMO, you will start to see the difference between self-organizing and self-managing agile teams.  The added benefit is that these agile managers will find more career satisfaction and enjoyment in their new role.

1) AgileManifesto.org
2) Humble, Molesky, O’Reilly, “Lean Enterprise: How High Performance Organizations Innovate at Scale”
3) John P. Kotter, “Leading Change”

   

Friday, November 10, 2017

Focus the System Demo on the System (and not the teams!)


The system demo is a critical aspect of SAFe for each PI, but many people misunderstand the reason for it and the outcomes we are looking for.  The System Demo is much more (and much less) than what is sometimes practiced, and understanding the goals and ‘better’ practices will help you get the most out of this critical ART event.

This article is not meant to re-define the System Demo; that’s already been done quite well in the SAFe guidance article on the System Demo, which provides very clear direction on its importance and objectives.  What I want to call out is the importance of making this demo Team Agnostic.  The problem I have seen quite often is that many ARTs have not fully grasped the importance of a Team of Teams, and still function as a collection of teams.  The System Demo is a great way to start to change that mindset.
General Stanley McChrystal laid out what a true Team of Teams looks like in his incredible book “Team of Teams” (please see John Pearson’s blog for a great summary).  This pattern needs to be replicated in some form in every ART to create the right alignment to deliver on a common value stream.




Let's first clarify the difference between the System Demo and the Team Demo (or Team Review).  The team demo in SAFe is very similar to the one in Scrum: as a team, we are demonstrating what we were able to accomplish in order to gain feedback and course correction.  It's not about showing how much we accomplished (it’s not a status report), but rather about showing the progress we made on a given effort so that we can get input on direction and possible course correction.

The system demo is very similar in that we’re really looking for feedback on what we've accomplished to gain course correction. We are also looking to measure progress against our team and Program objectives. Just like the team demo, this is not to show that the team is getting work done but much more to show how we're doing against our stated objectives and helping us see if we need to pivot to be able to meet these objectives.



The core difference between a system demo and a team demo, however, is really in the scope and the manner in which the demo is presented.  By its very name, System Demo, we are showing how far we have advanced the system, not just what each team has done.  From that perspective, I believe it is vital that the system demo is done team agnostic, i.e., not done team by team.  I see a lot of ARTs that are running system demos team-by-team or even story-by-story, but that's really for the team demo.  The system demo is about showing the entire system and how all teams have contributed to moving it forward during the last iteration.  In fact, the system demo is one of the best ways to show the critical distinction that this is a team of teams, rather than a group of teams.  By demonstrating the system and how it has advanced, combining each team's contribution, we bring the perspective that this is really a team of teams working together towards a common goal and not just a collection of teams.

Another important note is that the system demo is generally presented by Product Management to the stakeholders.  I see a lot of ARTs that use this opportunity to demonstrate to Product Management how the system has been incremented, but that progress should be discussed with Product Management outside of the system demo, during the iteration.  The Product Manager(s), just like the Product Owner, should see the progress as the system moves forward during the iteration.  This does not mean that the Product Managers are not learning more about the system and providing course correction and direction during the demo; it just means the bulk of the course correction and feedback should be coming from stakeholders.


Monday, June 5, 2017

PI Planning and Execution Simulation


(Designed to complement SAFe® For Teams Training)

Overview

SAFe® For Teams (S4T) training is a critical component of launching an Agile Release Train (ART) successfully.  The training event provides a level set on SAFe® ScrumXP, gives insight into how to plan and execute a Program Increment (PI), and allows the teams to start or solidify their formation as a successful Agile Team.  However, the impact of the two-day education can be enhanced by providing a hands-on simulation for teams to practice their new learning and skills prior to the upcoming PI Planning event.  The Scaled City PI Simulation is an adaptation of the tried and true Scrum simulation using LEGO® bricks that has helped so many teams learn the basics of Scrum in a fun and engaging environment.
The PI Sim PowerPoint provides a step-by-step guide for SPCs to use to deliver this exercise.  The Sim is intended to be incorporated into the S4T training, preferably in the morning of the second day, but can be utilized outside of the training event.  This allows team members to lock in the learnings from the previous day and gives them an opportunity to exercise their new Lean-Agile muscles.  The PPT is self-explanatory for the delivery steps; however, there are many nuances you can apply to this exercise to enhance the learning.

Setup

You will need around 200 various-sized LEGO® pieces per team.  Try to get a variety of types and usages, including a number of wheels and special shapes.  LEGO bricks are not cheap, but hitting a few garage sales or eBay listings will help reduce the cost.  You will also need large 2’ x 3’ poster sheets (2 for the city layout and 1-2 per team for planning), markers or sharpies, and a printed copy of the Features from the PPT.  Each team should have a table and space large enough for 4-6 people to move around easily, as well as one large poster sheet to do their planning.
Prior to the start of the exercise you will need to set up a ‘deployment’ table in the middle of the room, and tape two large flipchart sheets long edge to long edge on the table for the city layout.  I usually draw a river along one edge and leave the rest as a blank canvas for the teams to innovate on.  If you have a co-trainer, ask them to play the role of Mayor of Scaled City.

Simulation

This exercise is about learning, but it’s also about generating energy and confidence in the PI Planning and Execution process.  Hopefully, you are presenting the S4T training right before the PI Planning event (think M-T for S4T and W-T for PI Planning) in which case any energy you can generate in this simulation will spill over into the planning event.  Start this sim off with as much energy and enthusiasm as you can, and keep it fun! 
I usually jump right into the deck and explain the sim using the information in the slides. 

Team/Feature Selection

To speed things up, each team will have pre-assigned features based on their team name.  Make sure you explain that this is not normal, but only done for the simulation, as most PI Planning events will utilize what I call team-agnostic features.  I like to have the teams select their team name (and their features) after the Product Vision and Roadmap to create a sense of self-organizing around the problems to be solved.  For experienced Product Owners and Scrum Masters, I like to have them take a different role to see how the ‘other’ side lives, but inexperienced or new POs and SMs should probably keep that role in the Sim.

Planning

Don’t worry whether the team members are following every ‘rule’ of PI Planning; focus instead on the important aspects, such as Team Objectives (gleaned from their features and the vision of the city), dependencies on other teams (e.g. DOT needs to work with Works to make sure the bridge meets the needs), and risks to their plans (a common one is that they will not have enough LEGO bricks and will need to borrow).  During planning, ask each team questions that will lead them to discover the objectives, dependencies and risks critical to the commitment.  Help them with the time box by repeating “Breadth versus Depth” and focusing on a broad plan with gaps that they can go back and fill in as time permits.

Plan Review

This is a great time to cement in the need for discovering and planning around dependencies on other teams.  As each team reviews their plan ask questions that will lead to discovery of missed dependencies.  Have each team focus on their objectives in the review, rather than reading off each story.  Call out risks they may have missed in their plans, stressing that risks are opportunities for the plan to fail.  Once each team has committed you can do a confidence vote, but for the sim you don’t need to spend much time on getting every team member to a 4 or 5.

PI Execution

Iteration 1

In the first iteration you want to generate a quick win for the teams to generate confidence, so I usually coach them a fair amount towards success.  However, I do leave particular things out, such as early integration and deployment.  A very typical scenario is that the teams will build for the first 14 minutes and then scramble at the last minute to integrate into the city, resulting in things like a 1 inch high fire department and a 4 inch high fire truck.  As Product Manager, I stress the importance of integration by looking for issues (real or made up) to show the impact of lack of early deployment and integration, resulting in delayed learning.  The system demo is always full of teaching opportunities!

Iteration 2

During Iteration Planning I stress the inclusion of learning from the first iteration, encouraging them to alter their iteration plan from the PI Planning as needed to adapt.  Depending on the progress of the teams I will add a wrinkle by disappearing for most of the iteration timebox, thereby making the Product Manager not available.  When I reappear (usually with just a minute or two left in the iteration) there are usually tons of questions and adjustments needed.  This is done to illustrate the need for the involvement of the Product Manager throughout the iteration, and the usefulness of live feedback.

Iteration 3

By the middle of iteration 3 the teams are usually winding down on the committed features and have time to innovate.  At this point I start to introduce new ideas based on the knowledge they provided during the other two iterations, such as adding a ‘homeless problem’ from all the people moving into our great city faster than expected, or a water treatment problem.  (One team solved the lack of fresh water by grabbing the water pitcher off the snack cart and placing it in the town as a water tower; that’s innovation!)

Summary

After the PI system demo (end of iteration 3) I gather the teams around the city and pick out other learning opportunities.  Look for things like the amount of collaboration, the ability to work cross-team, and the way the teams solved problems that they didn’t believe they had the skillset to tackle.  I wrap it up by illustrating how similar this is to executing in a PI, and encourage the teams to use the sim to help them think differently during the upcoming PI Planning event.  Heading back into the rest of the S4T training, I can now use a lot of examples from the sim in the subsequent content, giving the teams something they can connect with.

Please feel free to use this toolkit as is without any license or the like, however, please do not modify or remove the Radius ET branding without previous permission from Radius ET.

Note: SAFe®, SAFe For Teams®, and the Scaled Agile Framework® are all registered trademarks of Scaled Agile, Inc.

Wednesday, May 31, 2017

The Only Valid Architecture is Validated Architecture

I have worked in the IT field for over three decades now, and I’ve seen a lot of effort expended in design, architecture and infrastructure to build out some incredible and fantastic platforms and underlying systems.  However, I’ve never seen 100% utilization of that effort.  In general, a large part of the work is never used or, even worse, becomes a blocker to future agility.  Why do we do that?  Why do major components of these architectural designs, many built by some of the most intelligent people I know, go unused?  From my experience, it is always due to our tendency to separate architecture usage from business feature usage.  Architecture alone does not add to our bottom line or overall success.  It is only when that architecture or design enables us to solve customer problems that business value is achieved.
Architecture, design, UX, and any other similar effort should only be expended in pursuit of supporting business value.  Yes, Architects, Designers, etc., I do mean that: without the business value you support, you have no reason to do the work.  And, even more importantly, unless you are building architecture to directly support currently needed business value, you have no way of validating whether you have designed the right architecture!  Only after validating your design by seeing it enable business value do you know if you have built a valid architecture.
From an Agile perspective, this makes sense.  Agile Principle #10, the art of maximizing the amount of work not done, stresses that the simplest solution is often the best solution.  From a Lean perspective, we are pushing to eliminate waste in the system; unused architecture is a huge source of waste (as well as of quality issues).  Add in the Lean Startup perspective, which brings to the table that sense of experimentation to gain knowledge quickly, and we gain the understanding that true validation only comes from the customer.  Then apply all 9 SAFe® principles (yes, Intrinsic Motivation counts) and you start to see the picture: we should only consider architectural effort well spent and validated once we see it supporting business value and solving customer problems.
“But, wait!” you say, “if we delay creating architecture/infrastructure/design, we end up with a fragile mess!”  True enough, we definitely need to have intentional architecture so that we have a consistent and supportable direction with our designs.  We absolutely need to be looking ahead for the architecture, designs, patterns, infrastructure, and all the other needed components to support business value.  Enter the SAFe® Architectural Runway.  This Runway combines Intentional Architecture and Emergent Design to ensure that we are building the right amount of architecture up front, but ensuring that all of our efforts can be quickly validated by supporting actual business value.

Intentional Architecture

Where are we going with this solution?  What framework, capability, etc will be needed to support future business value?  What do we need in place to avoid future performance issues?  Are we headed in a direction that supports future security concerns?  Those are all things that need to be discussed and planned for.  However, each discussion we have should be accompanied by “what business value is upcoming that can validate this is the right direction?”


Emergent Design

Emergent Design is a core component of validating architecture incrementally.  The ability to create a ‘walking skeleton’ of the intent and then allowing the details of the design to emerge from the teams each increment allows us to validate the value of each architectural component as we progress.  The key is to ensure that we are establishing our measurements and leading indicators of the viability of the architecture and design we are pursuing.  Having a direction is the first step, but then we need the teams to build to that intention, and then stack business value on top to validate the intention. 
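As an illustrative sketch (not from any specific project; all names here are invented), a ‘walking skeleton’ in code is simply the thinnest end-to-end slice: every architectural layer exists, but each does only the minimum needed to prove the pieces connect, so business value can be stacked on top to validate the intent:

```python
# A hypothetical walking skeleton: each layer is present but stubbed.
# Real behavior emerges iteration by iteration, validated by business value.

def fetch_record(record_id: int) -> dict:
    """Data layer stub: a real database comes later."""
    return {"id": record_id, "status": "stub"}

def apply_business_rule(record: dict) -> dict:
    """Domain layer stub: real rules emerge as the design does."""
    record["approved"] = record["status"] == "stub"
    return record

def handle_request(record_id: int) -> dict:
    """Entry point: proves the layers integrate before they are fleshed out."""
    return apply_business_rule(fetch_record(record_id))

print(handle_request(7))  # {'id': 7, 'status': 'stub', 'approved': True}
```

The point of the skeleton is not the stubs themselves but that the seams between layers are exercised from day one, so each increment of real architecture can be validated immediately.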

For example, let’s assume you are trying to move your application base to the cloud.  Your assumption is that moving your entire infrastructure to the cloud will result in cost savings and the ability to scale capacity much more quickly.  However, you have a massive number of applications to manage, which all seem to be inter-connected, and you cannot interrupt operations.  In addition, most of your footprint is legacy apps built on client-server or mainframe designs.  A traditional mindset would state that you need to design a cloud capability to support all of these apps and their connectivity (which means migrating a number of mainframe apps), requiring a massive application and workflow design.  And you are correct, you do need a plan, but as von Moltke stated, “No battle plan survives contact with the enemy.”  Applying Intentional Architecture along with Emergent Design is the way to survive this massive effort.

Instead of pursuing a big bang approach to this effort, pursue an incremental, learning based approach to the architecture and design.  What early indicators would help prove we have the right concept of how to move to the cloud?  What early value can we pursue that will not only help us determine the right direction, but also confirm the perceived value?  The first step is to have a firm plan that is easily changed.  Create a clear vision of where you want to go, and your approach to getting there, but create the plan with a high level of abstraction.  Avoid the locked in design that can result from going too deep too quickly.  For each component of the design ask yourself “How does this support the business outcome we are looking for?”

Next, determine the architectural/design areas that are the most mission critical or present the most risk.  It is important to isolate these areas to be targeted for early learning.  Look for areas that will not only gain knowledge on the architectural direction, but also support the most business value, both of which will provide faster feedback on future direction.  Avoid the ‘sacred cows’, the areas that you want to put in, but don’t really need.  Add in the understanding of how you will measure this progress, looking for leading indicators that will help to provide faster pivot or pursue moments.  For example, if part of your reasoning for moving to the cloud is cost savings, determine how you can measure cost savings with each increment.  Sometimes you have to extrapolate or use non-monetary indicators early on, but get as quickly as you can to real savings measurements.
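To make that concrete (a hypothetical sketch; the numbers and function are invented for illustration), an early leading indicator for the cloud move might be the measured per-app hosting cost of the handful of apps migrated so far, extrapolated across the portfolio to check whether the savings assumption still holds:

```python
def projected_annual_savings(on_prem_cost_per_app: float,
                             cloud_cost_per_app: float,
                             total_apps: int) -> float:
    """Extrapolate the per-app savings measured on early migrations
    across the whole portfolio -- a leading, not trailing, indicator."""
    return (on_prem_cost_per_app - cloud_cost_per_app) * total_apps

# Hypothetical measurement: the first 5 migrated apps dropped from
# $12k to $9k per year each; project that across a 200-app portfolio.
print(projected_annual_savings(12_000, 9_000, total_apps=200))  # 600000
```

If the measured per-app savings shrink (or go negative) as harder apps are migrated, that is the fast ‘pivot or pursue’ signal this section describes, arriving long before the full migration is done.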

Now, build the bare minimum architecture you can to gain the knowledge you need, e.g. to move your metric needle forward.  If your assumption is message based connectivity in the cloud, but you are using file based communication in many areas, can you first get these apps to talk via a simple message queue or service bus?  Do you really need to move to the cloud before you have built the first step of communication?  As you build these incremental steps you start to play leapfrog: a little architecture, a little business, rinse and repeat, all while keeping an eye on the end target.
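As a minimal sketch of that first step (illustrative only; Python's in-process `queue.Queue` stands in for a real broker such as a service bus, and the function names are invented), the idea is that the apps talk through a send/receive seam, so the transport can later move from file, to queue, to cloud without touching the apps themselves:

```python
import queue

# Stand-in for a real message broker (e.g. a service bus).
# Only these two functions know about the transport, so swapping
# file-based exchange for a queue -- and later a cloud bus -- is localized.
message_bus: "queue.Queue[dict]" = queue.Queue()

def send_document(payload: dict) -> None:
    """Producer app: publish a message instead of writing a batch file."""
    message_bus.put(payload)

def receive_document() -> dict:
    """Consumer app: take the next message instead of polling a directory."""
    return message_bus.get()

send_document({"id": 42, "amount": 99.50})
print(receive_document())  # {'id': 42, 'amount': 99.5}
```

Proving the apps can communicate this way on-premises is a small, validatable increment of the architecture, achieved before any cloud migration is attempted.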

Both Emergent Design and Intentional Architecture require Validation to be successful.  The Architectural Runway is there (in part) to ensure that we are recognizing that need for validation.  The next time you are thinking of this cool new platform concept, wanting to implement the latest My/No/Yours SQL, remember to think “what business value needs this?  What business value can we build on this capability to validate we are going the right direction with our intent?  How can we quickly measure the success of the design?”  Then, build that business value on the early iterations of that architectural work, and look for customer validation for course validation or correction.




Thursday, May 25, 2017

Solution Accuracy over Quality

In the last few years (maybe even decade) there has been a strong surge towards quality in our product development.  Not limited to software, this focus has led to much stronger practices and tools, producing far more robust, scalable, and resilient products.  While this is in general a highly needed change from the somewhat sloppy work of previous generations of developers, I believe we are missing the point by not applying the same, or more, attention to the accuracy of what we are building.
Think about it.  If you are delivering a solution that is near 100% quality, what good does it do if it is not the optimal solution?  Regardless of your definition of quality, if I create a product that meets 100% of those levels, but does not solve the customer problem or advance the ability to use the product, did I do any good?  As an extreme example, if I write the highest quality game application that could exist, but what the customer needed was banking functionality, did I really add any value?  Don’t get me wrong, I am a huge advocate of quality in all products, and teach and coach software craftsmanship as part of my work with my customers.  However, we have to balance that out with the accuracy of the solution.  Our attention to ‘Zero Defects’ has led us away from ‘Solved the Problem’.
Solution Accuracy, on the other hand, has a strong focus on learning, analyzing, measuring, validating and adjusting to deliver what our customers need.  Not what they tell us they need, but what they really need.  The adage “the customer doesn’t know what they want, they only know what they don’t want when they see it” is painfully true.  (Remember Henry Ford’s statement: “If I had asked customers what they want, they would have said faster horses.”)  Accuracy is about incrementally working towards an optimal solution: establishing a Principle of Mission, iterating towards that solution, and applying leading indicators that help us course correct or pivot as needed while we continue to study the impact of our emerging capabilities on our customers.  From my years of working with Fortune 100 companies I have experienced time and again the lack of attention to accuracy and optimal solutions; instead, organizations follow a Crystal Ball plan of predicting/forecasting what the customer needs, and most often arrive well short of the target.
Why is this a problem?

Assume Variability, Preserve Options

As an Enterprise Transformation coach and SAFe SPCT, I rely heavily on two core aspects to raise awareness of the solution accuracy problem.  The first is SAFe’s Principle #3, “Assume Variability, Preserve Options”.  In general, this principle shows us the benefit of not relying on predictive ‘point based’ solutions that rest on far too little knowledge and information to claim that we know what the customer needs or wants.  Consider the typical 3-9 month corporate project that relies heavily on not only a stated outcome, but also a direct statement on how to achieve that outcome, usually accompanied by a project plan with everything detailed down to the low-level task.  If we were to honestly reflect on the results of this predictive planning, we would quickly see that these efforts are rarely on time and within budget, and even worse, rarely actually solve the problem at hand.

When we “Assume Variability” we understand that we don’t have the full level of knowledge yet (that will be gained as we iterate towards a solution) and that we should assume that our current assumptions are invalid because of the lack of that yet to be gained knowledge.  It is important to have assumptions of what the best path forward is, but when we assume variability exists we accept that we are most likely wrong (sometimes very wrong).  Assuming Variability says that we know we are going to learn more, so let’s take advantage of that variability and iterate towards the solution with a plan and process that inherently incorporates that new knowledge as we progress.
To ‘Preserve Options’ means that we never lock ourselves into a corner, never stating “this is the only way to solve this problem”, until we have gathered the most knowledge we can.  It’s great to have an assumed best path, but we also need to acknowledge, and sometimes pursue, other options that may turn out to be the better path.  Yes, that does mean sometimes you will pursue a solution that will be deprecated or dropped based on knowledge that indicates it is no longer a viable path.  If that sounds like wasted work, I would gladly trade that level of ‘waste’ for the waste we find when we pursue a point based solution and have the resulting adjustments, changes, and sometimes project cancellations due to chasing the wrong path. 
Tying these two together means we start out by allowing and encouraging more than one possible solution direction, pursuing one or more to gain knowledge as fast as we can, and pivoting away from or dropping assumed solutions when the knowledge shows we should.  The end result is an optimal solution with less effort, because we were not forced into retrofitting the wrong solution like a square peg in a round hole.

Example of “Assume Variability, Preserve Options”

I was working with a client on this concept, and as they began to understand this principle they shared a story with me that is a great example.  This client is highly regulated, and needs to process documents for regulatory and audit reasons on a regular basis.  When this new requirement came to light, the business asked the IT group how they could solve it.  Being the technology savvy group they were, they stated they could create an automated system that would process the documents as needed, at a cost of $8 million and about 6 months of effort.  They completed the project a little over budget and a little late, but since this was the norm they considered the project a great success.  Until they turned it on.
The first month, the system processed only 3 documents based on the needed updates.  The second month?  2 documents.  The third?  3 documents.  The result after one year was that each document cost about $67k to process.  They then realized that if they had “Preserved Options” they would have seen an alternative: hire a temp to come in one Saturday a month to process the documents, at a cost of approximately $75 per document.  Because they had not recognized the need to “Assume Variability” (they did not yet know the volume of documents), they went to a point based solution that made sense on paper.  Bear in mind that the automated system was still a viable option, but since the manual approach was so easy, it most likely would have been the better first option to pursue.  The ‘quality’ of the effort was high, based on the low number of defects in production, but the accuracy was 180 degrees from the optimum solution.

Focus on Solution Accuracy

Focusing on solution accuracy does not mean being predictive, it does not mean having to know all the answers at the start.  In fact, it assumes we don’t have all the answers, and incorporates learning early and adjusting based on gained knowledge throughout the effort.  Another coach just sent me an email with this quote on her signature line:
“Progress means getting nearer to the place you want to be. And if you have taken a wrong turning, then to go forward does not get you any nearer. If you are on the wrong road, progress means doing an about-turn and walking back to the right road; and in that case the man who turns back soonest is the most progressive man.”
C.S. Lewis


Our goal in focusing on solution accuracy is to get to the most valuable and informative pivot-or-pursue moments as soon and as often as we can, and to make those adjustments as needed in pursuit of the optimum solution.