Friday, April 13, 2012

Breaking New Ground: Ditch Requirements



In an environment of "knowns," where standards exist, we can talk about requirements. When we face the unknown--new territory where no standards exist--there are no requirements, so we think and act in terms of assumptions or hypotheses.

Hypotheses are proposed answers to open questions: assumptions held as provisionally true until tested. Put another way, hypotheses are proposals that explain how things might work. Whether we build radically new, ground-breaking technology or start radically new, ground-breaking business ventures, our work is based primarily on hypotheses and their validation, not on meeting requirements.

Hypotheses are validated by testing; hypothesis and validation are inseparable. The routine is to constantly test hypotheses, adapt to the results, formulate new hypotheses, and test again.

The process is iterative. Progress is also incremental (results appear in stages). All progress is real progress; tested, validated, proven.

Validated learning is the basis for measuring progress.

"We believe ____________ (hypothesis)," is always paired with:

"We know ____________ (hypothesis) is right when we see ________ (tangible result).

There is an alternative validation approach: Proof that the hypothesis is incorrect. In some cases, this may be a better (quicker, cheaper) approach.

"We know ____________ (hypothesis) is wrong when we see ________ (tangible result).

Either way, there is some tangible result from testing--evidence of validated learning. In the case of technology, positive results may mean working code or a functional prototype. For a new business, positive results may mean sales or funding. Negative results may mean avoiding a long, costly run down a blind alley. Validated learning, in either event, is the means by which we understand, track, and measure our progress.

A working method for this approach might look something like this:

  1. Discover (identify and document) assumptions
  2. Formulate assumptions as hypotheses
  3. Determine validation criteria (qualitative and quantitative) based on the smallest thing that can be done or built to test
  4. Prioritize hypotheses for validation based on risk (more unknown = higher risk)
  5. Start validation of higher risk items first (high risk = high priority)
  6. Validate, test, acquire feedback
  7. Rinse and repeat
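
The loop above can be sketched in code. This is a minimal illustration only; the names (`Hypothesis`, `run_validation_cycle`, the sample backlog) are hypothetical, not from any particular toolkit:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str          # "We believe ____" (step 2)
    signal: str             # validation criterion: the tangible result (step 3)
    risk: int               # more unknown = higher risk (step 4)
    validated: bool = False

def run_validation_cycle(backlog, test):
    """Test hypotheses highest-risk first (step 5), recording results (step 6)."""
    results = []
    for h in sorted(backlog, key=lambda h: h.risk, reverse=True):
        h.validated = test(h)   # validate, acquire feedback
        results.append(h)       # adapt and reformulate before the next pass (step 7)
    return results

backlog = [
    Hypothesis("Users want feature X", "30% opt-in during a trial", risk=2),
    Hypothesis("The service scales", "1,000 req/s in a load test", risk=5),
]
results = run_validation_cycle(backlog, test=lambda h: True)
```

The point of the sketch is the ordering: the riskiest (most unknown) hypothesis is tested first, so the most expensive surprises surface earliest.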
When in new territory, every design decision (or business decision) is a hypothesis. Know the assumptions, state them clearly, share them; test and validate. Build continuous testing and continuous feedback within your team.

Understanding that we work with hypotheses leads to better management of both process and outcome.

Posted by: William W. (Woody) Williams

Thursday, April 12, 2012

The Struggle for Success


In the world of projects, we constantly struggle for success, and one of the longest, most difficult, and most time-consuming battles in that struggle is defining what we mean by "success."

Over many decades of managing a wildly diverse collection of projects and project types for clients large and small, one way of defining success has proven most useful. It can be described in the form of an equation.

Success = Results - Expectations

Results (functional, non-functional, quality, and more) and expectations (financial, technical, political, etc.) are so important that, in our human attempt to "manage," we try to define them in the greatest detail as the first step in starting a project. We "set them in stone" and then expect great kudos when we achieve them. But we are fully human, and when that approach fails to achieve success and those kudos never appear, we always ask, "Why?"

The idea that clients, customers and users can know everything about what they need 6-9 months before they get their hands on it is simply a fallacy for most businesses. On the contrary, it is the act of getting their hands on it that defines and refines those needs.

So, the way to success lies not in fallacy or fantasy but in reality: putting real, working pieces of the project in the hands of users constantly and adapting based on their continuous feedback. Expectations and results are thereby aligned both incrementally and iteratively throughout the project.

Posted by: William W. (Woody) Williams

Wednesday, April 11, 2012

W.A.I.T. and Listen



For those claiming a leadership role at any level, as well as those carrying the tag "consultant" after their name, there is an acronym worth burning into your brain: W.A.I.T.--"Why Am I Talking?"

We have a word--several, actually, and none are "family friendly" sobriquets--for people who consistently fail to understand that nothing they will ever say is as important as what they will hear. Some of the gentler terms are: overbearing, arrogant, meddling, pushy, and offensive.

Leading and mentoring (the heart of any consultancy) require a different approach and a different personality type, and it's not just about "style"; it's about substance. Ask questions (intelligent and pointed), listen carefully--small, subtle things can be of great importance, so care about the person's ideas--then respond appropriately (be social and carry on a two-way conversation).

The point is to connect, not dominate; learning is a mutual activity. You cannot expect anyone to learn from you if you cannot learn from them.

Posted by: William W. (Woody) Williams

Monday, April 9, 2012

Uncertainty Principles: Forget the Assumptions, Get the Facts



Uncertainty is something we all deal with in both our personal and professional lives, and it is frequently mischaracterized as risk. Lack of knowledge, however, is not technically a risk. Once we have identified a key piece of information missing from our critical knowledge base, that gap is a certainty, not a possible future event, and it should be treated as such. Fortunately, acquiring the specific knowledge always reduces or eliminates the uncertainty, so managing a lack of knowledge can be straightforward.

The first step in managing uncertainty is admitting that we cannot see (or plan) beyond the first point of uncertainty in a project. Assuming we can leads to massive risk: relying on unproven assumptions is a recipe for whole-project re-planning and the (sometimes massive) re-work required to get the project back on track. Even then, that "track" realistically runs only as far as the next point of uncertainty where, if assumptions prove wrong, we start all over again. Admitting we cannot see beyond the first point of uncertainty, however, reduces risk across multiple dimensions.

The key to successfully managing uncertainty lies in planning only the short time/effort increment required to obtain the next critical piece of information. Then, adapt only after the new knowledge is acquired, and plan for the next point of uncertainty. In other words, incrementally plan the effort around acquiring key pieces of information at the point they are actually needed (just-in-time), then adapt and plan to the next point of uncertainty.
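That just-in-time loop can be roughed out in code. The function and step labels below are illustrative only, not a real planning API:

```python
def plan_to_next_uncertainty(uncertainty_points, acquire_knowledge):
    """Plan one short increment per uncertainty point, learn, then adapt."""
    plan = []
    for point in uncertainty_points:
        plan.append(f"increment: work toward {point}")    # plan only this far
        if not acquire_knowledge(point):                  # assumption proved wrong
            plan.append(f"adapt: re-plan after {point}")  # adjust before going on
    return plan

# Hypothetical project with two points of uncertainty; the second
# assumption fails, so the plan adapts right there instead of at the end.
steps = plan_to_next_uncertainty(
    ["vendor API limits", "user demand"],
    acquire_knowledge=lambda point: point != "user demand",
)
```

The design choice is that no plan entry is created past the next unknown; adaptation happens at the point the knowledge arrives, not after a long speculative plan has already been laid down.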



Posted by: William W. (Woody) Williams

Wednesday, April 4, 2012

Data Sea; Data Do

We live and work in a sea of data trying to envision, understand, and plan for the future. That data, however, by its very nature is primarily about the past.


In order to make use of data in ways applicable to the future, we apply theories. These theories are often referred to as predictive models, which are frequently integrated into decision models along with other theoretical constructs such as descriptive models. The process of applying theories or models with the future in mind is predictive analysis.


This kind of analysis is employed daily across a diverse collection of organizations relying on disparate and common data sources, both internal and external. Its most widespread applications lie in fields such as actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, and pharmaceuticals, as well as in portfolio and program management.


Using data coupled with theories or models to predict the future is more than simply "common"; it is ubiquitous, and it is the foundation for strategic, tactical, and operational processes in our governments, commercial enterprises, and other organizations, as well as in our daily personal lives. We humans are constant predictors, but our results are less than consistently predictable.


Much attention is focused today on data: big data; a sea of data. Obtaining more and more data is becoming easier and easier, and it's coming at us faster and faster. Yet predicting the future is not appreciably more accurate today than it was ten years ago. Simply acquiring more--even more accurate--data isn't necessarily translating into better results. Why?


Even the most accurate data (historical, current, or real-time), even in overwhelmingly huge amounts, becomes distorted, twisted, and useless when viewed through the lens of a flawed theory or model. In other words, perfect data produces imperfect results when analyzed using even slightly imperfect theoretical constructs.
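
A toy illustration of this point: below, the data are perfectly accurate and plentiful, but the model is wrong (linear, fit to a quadratic process), so the prediction is badly off no matter how much data we feed it. The quadratic "truth" and the numbers are made up purely for demonstration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept -- the 'flawed' linear model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

truth = lambda x: x * x          # the real process is quadratic
xs = list(range(1, 101))         # lots of perfectly accurate data...
ys = [truth(x) for x in xs]      # ...with zero measurement error

slope, intercept = fit_line(xs, ys)
prediction = slope * 200 + intercept   # predict beyond the observed range
error = abs(prediction - truth(200))   # huge, despite perfect data
```

Doubling or tripling the number of points changes nothing here; only fixing the model (the theoretical construct) improves the prediction.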


At least as much--or more--attention is needed on envisioning, creating, testing, and implementing valid analysis techniques and models as is given to data acquisition. Without advances in models, constructs, and analysis techniques, acquiring more data leads to erroneously high levels of confidence but no greater accuracy in predictive results.
Posted by: William W. (Woody) Williams

Tuesday, April 3, 2012

Defining Deliverable Outputs


A "deliverable," in project management, is an object (tangible or intangible) created by a project for delivery to an internal or external client. A deliverable might be a report, a video, a document, a system upgrade, a process change, or any other product, service, or piece thereof. Activities (work) are connected with deliverables.

Creating a deliverable is a process. What is needed to produce the deliverable are its inputs. What the deliverable produces--what it is created to do (functions, forms, work)--are its outputs.

While most of us are good at naming or listing deliverables, we are not as good at defining their outputs. Without well-defined outputs, deliverables cannot be tested or integrated well, and that invariably leads to rolling out a poor solution to the client.
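
One lightweight way to make that concrete in code; the class and field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Deliverable:
    name: str
    inputs: list      # what is needed to produce it
    outputs: list     # what it is created to do (functions, forms, work)

def is_testable(deliverable):
    # With no defined outputs there is nothing to verify the deliverable against.
    return len(deliverable.outputs) > 0

# Named but output-less: cannot be meaningfully tested or integrated.
report = Deliverable("monthly report", inputs=["sales data"], outputs=[])

# Outputs defined: each one is a checkable claim about what it does.
upgrade = Deliverable(
    "system upgrade",
    inputs=["current build"],
    outputs=["handles 2x traffic", "zero-downtime cutover"],
)
```

The habit the sketch encourages is simply that every deliverable carries an explicit, non-empty list of outputs before work begins.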

Defining deliverable outputs is a key step toward delivering projects that meet expectations.

Posted by: William W. (Woody) Williams

Monday, April 2, 2012

A deadline is a deadline, right?



No, not entirely; there are at least two different types of deadlines: hard and soft. In project management terms, a "hard" deadline is on the critical path, and missing it has serious, perhaps fatal, consequences for the project. A "soft" deadline is still a deadline, but it is not on the critical path and carries no such dire consequences if missed. There are other distinctions as well, but prioritizing deadlines in any manner whatsoever is so rare as to be nearly non-existent.
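
The hard/soft distinction lends itself to a simple prioritization rule. A sketch, with made-up task names and dates:

```python
from datetime import date

deadlines = [
    {"task": "draft slides",   "due": date(2012, 4, 20), "hard": False},
    {"task": "vendor cutover", "due": date(2012, 4, 25), "hard": True},
    {"task": "status report",  "due": date(2012, 4, 18), "hard": False},
]

# Hard (critical-path) deadlines outrank soft ones; ties break on due date.
prioritized = sorted(deadlines, key=lambda d: (not d["hard"], d["due"]))
```

Note that the hard deadline comes first even though it is due last; criticality, not calendar order, drives the ranking.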

Understanding and managing both personal and organizational priorities is critical to success. Not doing so is an early (very high probability) indicator of disaster and failure. Prioritizing deadlines for yourself and your team is a big step toward staying on track.

Posted by: William W. (Woody) Williams