Info-Ops is DevOps at organization scale

Info-Ops

Info-Ops is a way of creating technology that eliminates waste by optimizing information flow. The term was coined by Daniel Markham in 2018 as part of the release of his book of the same name. The term is analogous to DevOps, which seeks to eliminate waste by optimizing the movement of software to production; Info-Ops extends this to the entire organization.

History

Early Backlogs Book

Managing complexity has long been viewed as a critical element for success in technology projects.[1] While participating in a large coaching effort at State Farm in 2007, Markham observed that all of his teams' project management, design, source control, and other information systems were far too complex and cumbersome for the work they were doing, creating a tremendous amount of unneeded friction and waste. Talking with the other coaches, he found that this pattern held true across hundreds of teams. Subsequent coaching engagements continued to bear this observation out for other industries, coaches, and project types. To Markham it was clear that there was a general lack of understanding around how to manage information such that the needs of both the team and the organization were met while work proceeded as quickly as it could.

Initially, Info-Ops was presented in a series of classes around the narrow concept of backlog management. The scope was expanded to a video series in 2011, then a science-fiction book series in 2013 involving a factory that made somewhat evil autonomous robotic chickens, then another video series in 2015. None of these efforts were ultimately successful, however, and Markham continued to expand on his initial work, extending it to all information used by teams to create technology. Info-Ops was published in May 2018.

Themes

Managing Cognitive Load

Whatever system individuals, teams, and organizations use to manage information, it must be cognitively tractable for all of those using it. That is, the amount of time spent explaining, understanding, and manipulating information must be kept to the bare minimum possible. By ensuring only the minimum information is used, inventory costs are kept down, training is reduced, meetings end more quickly, people are able to orient themselves to whatever task they have easily, and there is a single ultimate source of truth for any product question. In addition to significantly faster cycle times across the organization, managing cognitive load reduces confusion, uncertainty, and conflicting requirements by keeping related questions and issues in one place.

Scale Invariance

In order to prevent accidentally introducing yet more complexity in an effort to reduce it, one of the key requirements of Info-Ops is that it is scale invariant. That is, whatever holds true at the line-of-code level should also hold true at the executive planning level. There shouldn't be a transfer of information from one layer to the next, a re-learning of skills, or multiple process patterns as you move up the organizational ladder from executable test to strategic alignment and roadmapping. This principle naturally scopes what needs to be learned and practiced to a manageable level while making the material more applicable to a broader range of people and roles.

Questions

The overall thesis of the book is that whatever information system is chosen, its primary job is to facilitate precision questioning and difficult conversations by the appropriate people at the appropriate time, providing just-in-time information creation and delivery without duplication or waste. Questions that are lost, duplicated, have conflicting answers, are not asked of the appropriate people, or are misunderstood by any of the people involved are viewed as an organizational defect and a failure of the information system and process.

[Book cover: Info-Ops ebook, version 1.0, May 2018]

Analysis

Analysis is defined as the natural process of coming into consistency and alignment around a mental model for a particular topic by asking questions. The questions do not need to be answered, nor does there have to be more than one person performing the analysis. Natural, unstructured analysis can be observed everywhere people ask questions. It's most noticeable in small children, who tend to pepper adults with question after question, seeking to align their internal mental model with the adult's.

Info-Ops relies on this semantic definition of analysis. The purpose of analysis in a creative environment is to define what's needed well enough among everybody involved so that what's needed can be delivered to the people needing it. This shared mental model must be held by all of those responsible for creation and delivery. There are no paperwork or process requirements. In some cases a simple ongoing conversation might be all that's needed. In others, extensive process and documentation could be indicated. Info-Ops is process agnostic, caring only about whether the processes fulfill the requirements of a good information system as outlined above. Analysis is the "work around the work", not the work itself. It's any work done that does not directly create something for the user.

Structured Analysis

Analysis by itself would be useless, since it already happens continuously whether it's recognized or not. Instead, some additional structure and process needs to be put around it so that it can be taught, optimized, and the results evaluated against the success criteria discussed earlier. This is called Structured Analysis, the goal of which is the same as Structured Programming: similar things are identified, grouped together, and segmented into isolated units that can be reasoned about on their own. By using both Structured Programming and Object-Oriented concepts, programming and architecture skills already in place can be used as part of the training and deployment process. No further training is needed.

The only caveat to these analogies, and it is a major one, is that there is no "universal human" for whom a set of questions could provide what they need. Different teams, sometimes different by only one person, can have dramatically, even catastrophically, different analysis needs. Because Structured Analysis aligns shared mental models, it is not something that one person can do and give to others. Instead, it's something that happens when any group of people talk. Therefore it becomes critical to manage the permanency and cohesiveness of the groups engaged in solving any particular problem.

Structured Analysis teaches that whatever process is happening to create value, whether formal, informal, ad-hoc, scripted, or chaotic, what's actually happening among the people involved is that questions are being used to subconsciously align a shared mental model. In Structured Analysis, these questions need to be explicitly identified as they come up and all those participating need to confirm that the language and terms involved represent their understanding of the problem and desired solution. It's not important that participants agree on an answer, although agreement may be required eventually for work to proceed. Note that it is not important that the questions be answered at all, as many interesting problems don't have answers available.
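
As a minimal illustration of explicit question tracking, the sketch below (Python; every name is invented for illustration, and nothing here is prescribed by the book) records a question, who raised it, and who has confirmed the shared language, while the answer legitimately stays empty:

    # A question-log sketch; all names are invented to illustrate that
    # questions are identified explicitly and need not be answered.
    from dataclasses import dataclass, field

    @dataclass
    class Question:
        text: str
        raised_by: str
        confirmed_by: set = field(default_factory=set)  # shared-language check
        answer: str | None = None                       # may stay None forever

    q = Question("What counts as an 'active' customer?", raised_by="Sue")
    q.confirmed_by |= {"Sue", "Ahmit"}  # both confirm the terms make sense
    assert q.answer is None             # work can still proceed without one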

The questioning itself, during whatever conversations or processes happen, constitutes various Wittgensteinian Language Games. A new (informal and implicit) language [2] is always being created any time a group of people engage with one another to create technology. This is unavoidable because technology exists in a formal mathematical realm, whereas human communication is very fuzzy and loose. For any human communication to translate into technology, all of the uncertainty has to be eliminated. When and where that happens is outside the scope of the book, though the book stresses that one of the antipatterns is trying to eliminate all uncertainty, or to eliminate uncertainty too far ahead of the actual point of translation into math. Instead, as much as possible the entire conversation should happen at the immediate point the code is being written, not at any time before. This is important because analysis continues to happen among everybody involved after things are written down, whether the participants understand that it's happening or not. Writing things down is an analysis "smell", but one that cannot be completely avoided in any non-trivial project.

Information Tagging

All information has a unique tag combination. Information with various tags comes together for various reasons. For example, User Stories are simply items with Behavior and Supplemental tags joined together to create an Acceptance Test.

Some mechanism is needed both to teach and to restrict the amount of information captured, preventing the over-documentation antipattern described above. This is the role of tagging. All project/product information can be uniquely tagged by selecting one tag from each of the following four tag groups:

Tag Group            Possible Values
Genre                Business, System, Meta
Abstraction Level    Abstract, Realized
Bucket               Behavior, Structure, Supplemental
Temporal Indicator   Was, As-Is, To-Be

These tags, combined with Master Models, create a unique and compact "conversation library" that identifies where critical conversations must happen for the effort to succeed, using the minimum possible amount of stored information. Master Models may be as small as a few lines jotted down on the back of a napkin or as large as several hundred lines recorded in multiple text files. (Although the name has "model" in it, these are not necessarily graphic models like one would make using something like the Unified Modeling Language. Of course, there's no reason a team couldn't use such tools if it wanted, including graphical ones involving UML or another standard.)
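
To make the four tag groups concrete, the following sketch models them in Python. The enum names mirror the table above, but the example items and the join logic are illustrative assumptions, not the EasyAM format:

    # A sketch of the four tag groups as a simple in-memory model.
    from dataclasses import dataclass
    from enum import Enum

    class Genre(Enum):
        BUSINESS = "Business"
        SYSTEM = "System"
        META = "Meta"

    class AbstractionLevel(Enum):
        ABSTRACT = "Abstract"
        REALIZED = "Realized"

    class Bucket(Enum):
        BEHAVIOR = "Behavior"
        STRUCTURE = "Structure"
        SUPPLEMENTAL = "Supplemental"

    class TemporalIndicator(Enum):
        WAS = "Was"
        AS_IS = "As-Is"
        TO_BE = "To-Be"

    @dataclass
    class AnalysisItem:
        text: str
        genre: Genre
        level: AbstractionLevel
        bucket: Bucket
        temporal: TemporalIndicator

    items = [
        AnalysisItem("Customer places an order", Genre.BUSINESS,
                     AbstractionLevel.ABSTRACT, Bucket.BEHAVIOR,
                     TemporalIndicator.TO_BE),
        AnalysisItem("Order confirmation shown within 2 seconds", Genre.BUSINESS,
                     AbstractionLevel.ABSTRACT, Bucket.SUPPLEMENTAL,
                     TemporalIndicator.TO_BE),
    ]

    # Joining Behavior and Supplemental items gives the raw material for a
    # user story and, eventually, an acceptance test (see the caption above).
    story = [i for i in items if i.bucket in (Bucket.BEHAVIOR, Bucket.SUPPLEMENTAL)]
    for part in story:
        print(f"[{part.bucket.value}] {part.text}")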

How and when to tag, where to put tagged information, how to maintain the tagged information, and what to do with information that has various tags constitute the remainder of the book. Example scenarios are shown for cost estimation, roadmapping, project scoping, and automated testing, among others.

Continuous Information Deployment

A free and open-source (FOSS) analysis compiler, EasyAM, is provided. EasyAM programmatically provides the organization, collation, and transformation that would otherwise have to be done by hand.

If readers choose to use EasyAM, they can also gain all the other benefits that come with compilers: version control, quality checks, test suites, linting, and so forth. Using textual information and a compiler, teams are able to keep track of this minimal amount of data using the same tools and skills they already use on a daily basis for the rest of their work.
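
As one small illustration of the "linting" benefit, the sketch below flags duplicate items in a plain-text analysis file. The file name and the duplicate check are assumptions for the example; a real check could flag anything a team cares about:

    # A minimal lint pass, assuming one analysis item per line of plain text.
    from collections import Counter
    from pathlib import Path

    lines = [ln.strip()
             for ln in Path("master-model.txt").read_text().splitlines()
             if ln.strip()]
    for text, count in Counter(lines).items():
        if count > 1:
            print(f"LINT: item appears {count} times: {text!r}")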

In addition, the analysis compilation process becomes part of the team's pipeline, taking data from upstream (perhaps at the program or organizational level) and delivering it downstream to consumers like automated ATDD test frameworks, CSR databases, project management tracking systems, story card generation, issue-tracking systems, etc. In fact, since the goal of the analysis system is to only keep track of things that are important across the entire effort, the analysis compiler can and should be a part of all other information systems. If other tools are required, scripts can be used to transfer data back and forth as needed. An example of this might be a team that keeps a list of user stories currently being worked on in a text file in a common directory that's version-controlled. Using common tools, this file is then synchronized with an online tool such as Trello. As information is updated in either tool, it's automatically synchronized with the other one.
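
A minimal version of the Trello synchronization described above might look like the following one-way sketch (text file to Trello). The credentials and list id are placeholders, only the card-creation direction is shown, and the two endpoints used are part of Trello's public REST API:

    # One-way sync sketch (text file -> Trello), one story per line.
    import requests
    from pathlib import Path

    KEY, TOKEN, LIST_ID = "your-key", "your-token", "your-list-id"
    auth = {"key": KEY, "token": TOKEN}

    stories = {ln.strip()
               for ln in Path("in-progress.txt").read_text().splitlines()
               if ln.strip()}
    cards = requests.get(f"https://api.trello.com/1/lists/{LIST_ID}/cards",
                         params=auth).json()
    existing = {card["name"] for card in cards}

    # Create a card for any story not already on the board.
    for story in sorted(stories - existing):
        requests.post("https://api.trello.com/1/cards",
                      params={**auth, "idList": LIST_ID, "name": story})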

Keeping track of questions is the entire point of an analysis effort (see above), but so that tool usage is kept to an absolute minimum, EasyAM also allows "tagging" of Analysis Model information with data not related to Structured Analysis: work done, work to do, notes, and so forth. These tags can then be harvested downstream from the compiler for other uses. In our previous example, a team may only have two analysis files: a 40-line Master Model and a 10-line "in progress" file. (All of this could also be in one file.) As the "in-progress" items are completed, a team member tags the appropriate item with "done", then checks the file back into the repository. All of the relevant systems are then updated without further work.
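
A downstream harvester for such tags can be very small. In the sketch below, the "&done" tag syntax is invented for illustration (it is not necessarily EasyAM's syntax); the point is that a plain-text tag is trivially machine-readable:

    # Harvesting sketch for inline completion tags.
    from pathlib import Path

    for line in Path("in-progress.txt").read_text().splitlines():
        line = line.strip()
        if line.endswith("&done"):
            item = line.removesuffix("&done").strip()
            # Here a script would close the matching card, ticket, or story.
            print(f"Mark complete downstream: {item}")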

Examples

  • Sue is creating a website for Ahmit. During their initial conversation, Sue jots down a quick Master Model as they're chatting. Later that afternoon, she reviews the model in preparation for beginning work, coming up with two dozen questions. In their next conversation, her questions are answered and Sue and Ahmit agree on a work schedule, starting the next morning in his office.
  • Oswaldo has been hired as a senior developer at XYZ corporation. During his first week, in addition to pair programming with the lead architect, he jots down and tags a few pages of analysis information as he learns the business and technical context responsible for the way the system is constructed, eliminating items that no longer serve a purpose and following up with questions where the context of the project or system architecture is unclear. At the end of the week he doesn't have any important outstanding questions so he discards his notes. Structured Analysis allows him to focus on the technical details of ramping up as a senior developer while simultaneously keeping track of the larger context of why things are the way they are.
  • Vikas is leading a small team that has just been assigned to work with a new customer setting up their cloud deployment framework. The customer requires all teams to use PRINCE2 and SAFe along with being CMMI-compliant. There are several project management and requirements systems that need to be updated as the project progresses. Vikas and his team create a Master Model and Analysis Model in EasyAM as part of getting oriented, just like the previous examples. However, they record several hundred lines of data instead of just a few dozen, because all of the project integration points multiply the number of key conversations the team must have with the larger organization, although still not to a large or unmanageable degree. As they're getting oriented, the team keeps in sync using techniques from the book. After the first week they've automated their analysis pipeline and plugged the DevOps pipeline into it as it stands up, integrating with all of the other required organizational information systems. This allows them to do the minimum amount of data entry while communicating with the maximum number of interested clients in the format the clients require.
  • Daniel leads a 30-person super-user group tasked with replacing 35 existing applications across the organization with a single monolith. Each application has separate business owners, requirements, batch schedules, sizes, and SLAs. Some applications don't have a business representative or person able to explain what they are and why or how they're being used. The new app will support a worldwide ordering, inventory, and distribution coordination system. Instead of team members going into separate silos and creating reports, paperwork, and diagrams, Daniel requests that the group only work as a unit, mobbing as they go. They create a Master Model of a few hundred lines. Nothing else is needed. The group interviews the appropriate stakeholders, identifies critical decisions needing to be made, and reaches agreement about most of the outstanding items, including items that some stakeholders entered the effort passionately disagreeing about. After a few weeks, the effort concludes by generating the docs needed to begin the bid process, get the program office set up, and explain results to various C-level executives.
  • Mary is the Release Train Engineer of a SAFe program with 25 projects reporting to her. In order for the rest of the organization to stay informed, she has each project consume program-level information in EasyAM format, update it as needed, and publish the compiled results to a common file on a shared server. From there an automated bot picks up items people are interested in and creates reports showing them the information they need, which are then made into web pages and emailed. This establishes a publish-subscribe, pull-based model of information dissemination, freeing up schedules while significantly increasing program visibility. (A minimal sketch of such a bot appears after this list.)
  • Viktor has just taken over as CIO of a medium-sized IT shop consisting of 280 people. There are a dozen or more information systems tracking the same data in various formats. (Each team used whatever they thought best.) Seeking to get a handle on the day-to-day status of his department and eliminate waste while minimizing any product-delivery disruption, Viktor trains his teams in Info-Ops techniques, allowing them to keep all of their current tools, processes, and schedules. The only change is a few hundred lines of analysis model each team maintains along with the rest of their code and delivers as needed, as in the examples above. Once set up, this takes very little effort to maintain and creates a staging area where every important conversation in the organization is tracked and updated in real time. Next, Viktor slowly introduces new constraints, adapting his requests depending on circumstances and priorities. Automated quality controls are gradually put into place to make sure each team is recording the four or five pieces of information Viktor requires. He puts controls in place to prevent the Analysis Model files from becoming yet another overly complex system, limiting the number of items at various levels and forcing the teams to use abstraction and generalization to limit complexity instead of just dumping everything into a big file as they might with other tools. A lint cross-checker prompts teams when they may have significant disagreements about the meaning of common items they are both working on.
  • BigSmartCorp is a mega-multinational consulting firm with thousands of consultants in cities around the world. They've already consolidated much of their information tooling, but the steering committee feels more needs to be done to make sure that when consultant A comes across a type of project that's been done before, they're connected with consultants B, C, and D, who have worked on similar projects. This has to be done without installing yet another tool and training the entire organization on it. The remedy is that consultants are trained on Info-Ops and EasyAM as part of their recurring internal certification process. Then, as each project is scoped out by a consulting team or person, they make a Master Model alongside the other work that's normally done. (Note that this shouldn't involve much additional work at all; consultants already have to keep track of critical conversations and issues along with business context.) All of these files are batched up into a big bucket. An NLP/ML bot clusters topics and client business situations into potential chat groups, notifying potential members that other consultants have direct experience in the area they're working in and suggesting they talk.
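
As a sketch of the publish-subscribe model from Mary's example above, the following bot reads a compiled program-level file and writes a small report per subscriber. The file names, subscription map, and keyword matching are all invented for illustration:

    # Pull-model report bot sketch, one compiled item per line of input.
    from pathlib import Path

    subscriptions = {
        "ops-team": ["deployment", "outage"],
        "finance": ["budget", "estimate"],
    }

    items = [ln.strip()
             for ln in Path("program-output.txt").read_text().splitlines()
             if ln.strip()]

    for subscriber, keywords in subscriptions.items():
        matches = [i for i in items if any(k in i.lower() for k in keywords)]
        body = "\n".join(matches) or "No new items this cycle."
        # A web server or mailer would deliver these files to subscribers.
        Path(f"report-{subscriber}.html").write_text(f"<pre>{body}</pre>")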

Criticisms

Most of the criticisms so far have come from misunderstandings. Common misconceptions include: it's a new process, it involves paperwork and UML diagrams, it's a version of the Model-Driven Development of the late 1990s, it's too complex to implement, or it's too fuzzy to yield real-world results. None of these criticisms actually touch on the heart of Info-Ops, so it is impossible to respond to them reasonably.

"Info-Ops is about what happens behind-the-scenes when technology makers meet with people and create something they like. It explains all of that invisible work in such a way that it can be understood and scaled up to dozens or hundreds of people is necessary" sometimes helps clarify the misunderstandings.

Sequels

Future books are planned to cover Hypothesis-Driven Backlogs and Lean Startup.

Ongoing work is scaling Info-Ops concepts both down to the programming level and up to the business development and org strategy level, validating that the concepts are truly scale invariant. Sequels are in the works about functional programming, startups, and program management.

See Also

References

Further Reading

External Links

Purchase Info-Ops