Abstract: Using Foundation First for Major, Modular Improvement

None of us would build a house without first establishing the foundation.  There is plenty of evidence that the same concept applies to software development, particularly when there is already a large installed investment and a call for modular improvement.  Estrada Consulting, Inc. (ECI) has adopted a Department of Defense (DoD) model from the 1980s that became essential in tying hundreds of independently identified and built systems together so they could perform against business needs in logistics and finance.  We hope you will also see how this relates to deconstructing today’s monolithic systems.

 

Current Situation

A very large system of systems involved nearly eighty financial and logistics systems.  Each system had been implemented for a specific purpose as budgets allowed; years later, the aggregation of all these systems was required to meet evolving mission needs.  The problem is threefold:

  • Management cannot get a straight answer on overall operational performance because a significant amount of data is duplicated across the systems – which one, or which several, should be used?
  • There are millions of transactions per day, and each system operates mostly independently of the others. There is a lack of integration simply because the number of point-to-point interfaces is too large a problem to undertake.  Even if integration were possible, so many systems feed information to others that the logical sequencing of information is not possible in the time periods available.
  • The data includes highly sensitive information that, today, is protected partly by air gap – the systems are not connected to other systems because there is currently no means to ensure the security of the data exchanges.

If you are part of a sizable organization, does this scenario sound familiar?  It is not as uncommon as you might think.  Perhaps 20-25 years ago this problem was being addressed with integrated solutions commonly labeled Enterprise Resource Planning (ERP) systems.  They were the integrated platforms that were supposed to resolve most of the integration and data duplication problems.  Do you have experience with these platforms?  Many vendors have incorporated additional packages (best-practice modules) into their ERP offerings with mixed results.  The integration and data problems are then just hidden inside rather than exposed between otherwise non-integrated systems.

Many implementations have led to such dramatic failures that there are significant calls to avoid monolithic applications and replace these implementations with a modular effort.  Given how long ago this problem started, are we just repeating the same behaviors and hoping for a better outcome?

 

Avoiding the Most Likely Challenges

Since the information technology (IT) industry has been through these challenges repeatedly, there should be sufficient knowledge and desire to determine how to develop new or refactored systems in a way that avoids them.  The key is planning.  The planning should not just include what it takes to be successful; it also needs to include what it takes to avoid being unsuccessful.  It is common to build “happy path” plans and then run into the challenges during projects.  Often these plans optimistically assume the team is smart enough to avoid the challenges; or, when plans did incorporate the effort needed to avoid failure, the higher initial cost made the project harder to sell.

When planning a modular change to a large system of systems, it is unreasonable to accept that an organization could reach a high percentage of completion and then run into challenges so severe that the project must be abandoned.  Yet it is important to understand that an initial happy-path cost estimate can be just that unreasonable.  Planning to avoid challenges may seem more costly initially, but it has also proven to result in much lower cost in the long run.  So, what challenges should we address?

Lessons learned from failed large, modular projects provide a good list to start with:

  • Multiple modules were developed or procured as commercial-off-the-shelf (COTS) products that required complex integration; the rate of interface changes driven by changes in the various modules exceeded the capacity to keep pace.
  • Multiple modules or systems each needed to own the same data elements upon which operations were designed. This provided multiple sources for the same data, with different errors in each system depending on its data quality.
  • Data that changed in one module could not be provided to other modules where the changes affected the execution of business rules. Data was also coupled with functionality to the point where changing the flow of data required changing the functionality.
  • Communication between the individuals executing the process and the systems involved in fulfilling business needs was insufficient in quality, timeliness, sequence, or other factors, in ways that disrupted the appropriate flow of work.
  • Security implementations were inconsistent (too restrictive in some modules in ways that impeded access needed for work, and too lenient in other modules, granting access where it was not allowed).
  • Business rules and logic were hidden in various places as deemed appropriate within each module.
  • Functionality was not accessible to people or systems when or where work was performed. Actual transactions had to wait for physical system availability.
  • Legacy partner systems could not convert to new interface specifications quickly enough, creating significant delays for implementation.

To avoid these challenges, an organization has to know the root causes of the situations that resulted in the challenges and then work on those root causes to establish the best ways to mitigate or avoid them on its own projects or change initiatives.  After constructing an Ishikawa (cause-and-effect) diagram, the root causes can be isolated and prioritized based on the significance and sensitivity of the challenge arising from each cause.  The list of causes includes, in part:

  • Focus on areas of limited functionality. Within a module there are a number of features.  Each feature can be a micro-module dealing with a single user performing the work necessary to achieve a piece of the full module’s need.  In worst-case scenarios, micro-modules even duplicate data because not enough is known about the overall data in the module or system of systems and the flow of that data through modules or micro-modules.
  • Failure to identify and mandate use of data from the systems designated as owners of certain data within an enterprise. In some cases, the source systems are not known.  In other cases, when they are known, they have no capacity to change in order to source data to other systems.
  • Limited central control over data and the ability to access and share it. Each system is left to build its own capability to exchange data with any other system needing that data.
  • System design that concentrated on completing transactions and reporting on them later, rather than on making information available to the user at the time of process need.
  • Lack of standards or universal decisions on how and where to incorporate business rules for ease of validation and change.
  • Too many people involved without an efficient line of command, making detailed observation and team collaboration difficult.

Avoiding the challenges, then, becomes a matter of focusing on the causal factors.  These causal factors are addressed in the Strategy for Success section below.

 

Myths to Un-Learn

A number of myths need to be discussed here to ensure the information above is used properly.  The myths, and a discussion of each, follow:

  • Agile delivers improved results: neither Agile nor Waterfall can claim to be better in the domain of large-system, modular change. Both lifecycles have some features that address some of the causes.  Both lifecycles have undergone significant change, producing versions more like each other.  The truth is that software development is very difficult to get right.  When building a house, you can see when the plumbing or electrical wiring is not quite right.  Software is much more difficult to assess for whether the appropriate foundation has been set for the whole house (the enterprise application).
  • Internet-based applications are just a series of pages strung together: while some applications may not have deep logic and could appear to users as a string of web pages, most applications have deeper, conditional logic required to set up the appropriate situation for users. In some cases one setup is correct, and in others, perhaps another.  The appropriate context for those conditions can be set by other logic altogether.  A page-level focus is much the same as one of the causal factors we have to resolve, not perpetuate.
  • Microservices will resolve issues at lower cost: when you have ten tools in your toolkit, choosing the most appropriate tool is rather easy. When you have thousands of tools, one of the first ten may be the best or adequate to use, yet you might not be able to find it among all the tools.  Microservices make use of smaller chunks of cloud processing and can be cheaper.  Without a strong ability to organize and control their use, there can be significant confusion as soon as more than one consuming system becomes involved with the services.  Again, without strong discipline and attention to the causal factors, a great idea can become the problem.
  • Large projects require large teams: while it is true that larger projects require more of the right resources, simply adding resources to teams will not make progress easier. Smaller teams manage communication and productivity much better.  The increments being developed need a coordinated approach at the boundaries where different parts of a solution meet; this, too, can be addressed with small teams at each boundary.  Simply applying more people may or may not address the causal factor.
  • You can refactor anything: the real question is “why,” when you can accommodate changes in architecture from the beginning of the transition. Some issues make refactoring very difficult; for instance, when data is duplicated across multiple applications being migrated to a new enterprise system, most user owners do not want to give up ownership of the data.  Change would have to happen in many systems.  It is simply easier to avoid such issues than to refactor for them later.

 

Strategy for Success

Today’s monolithic systems resolved the problem of integrating many disparate systems that change at their own pace, compounding the effort to keep them in sync.  Often this resulted in coupled systems that are more difficult to change or adapt to what the business needs.  Rather than taking the approach of incorporating all systems into one, why not work on making the systems less disparate?

This might sound difficult, yet let’s talk through the issue to see what we can discover together.  An enterprise has a purpose.  In the public sector, that purpose is to serve citizens to the letter of the laws established to direct that service.  In the private sector, it is to build wealth for stockholders.  To accomplish its purpose, the enterprise establishes operations that can achieve the desired outcomes.  Operations require things like qualified and skilled people, effective and efficient processes, flexible and effective technology, information at the point of process execution, measurement to help guide achievement, and the basic infrastructure that allows those executing the business strategy to take advantage of internal services (facilities, legal, HR, etc.) in the conduct of their mission.

Most successful enterprises have built a unique operating model that provides them a competitive advantage.  This advantage is most often in process, product, or people, though at times it can be in technology.  When all parties adopt the prevalent monolithic technology platform, they hope to adopt industry best practice, yet they often destroy years of investment in their own competitive advantage.

The strategy in this document focuses not on technology first but on the effectiveness of the business model.  Understanding what is unique and powerful allows appropriate automation to enhance the business model rather than drive it to be just like everyone else’s.  Opponents will argue that this is what they did for years to get to the point where they had an integration nightmare across all the systems needed to support their business model.  Recognizing this is a great first step in recognizing the benefit of this strategy.

Inventory two things: 1) the business capabilities your organization has to be very good at to excel in its mission; and 2) the processes and technologies that have enabled those capabilities.  Build a rough picture of the solutions in use that enable all capabilities and, if possible, the integration required between them; a minimal sketch of such an inventory follows.  The strategy then becomes to replace what is most essential first, in a manner that respects the data each system should “own” and the data that needs to be shared with other systems.  The strategy can be successful if the enterprise respects data ownership, data flows, and secure methods of communication across systems.
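As a minimal, hypothetical sketch (the capability and system names are illustrative, not drawn from any specific engagement), the inventory can be kept as a simple structure that maps each capability to the solutions that enable it and the integrations between them:

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Capability:
    """A business capability the organization must excel at."""
    name: str
    enabling_systems: list[str] = field(default_factory=list)          # solutions in use
    integrations: list[tuple[str, str]] = field(default_factory=list)  # (from_system, to_system)


# Illustrative entries only; replace with your own capability inventory.
inventory = [
    Capability(
        name="Order Fulfillment",
        enabling_systems=["OrderEntry", "Warehouse", "Billing"],
        integrations=[("OrderEntry", "Warehouse"), ("Warehouse", "Billing")],
    ),
    Capability(
        name="Financial Reporting",
        enabling_systems=["GeneralLedger", "Billing"],
        integrations=[("Billing", "GeneralLedger")],
    ),
]

# A rough picture: systems that enable several capabilities are candidates
# to address first, since they sit at the core of the operating model.
usage = Counter(s for cap in inventory for s in cap.enabling_systems)
for system, count in usage.most_common():
    print(f"{system}: enables {count} capability(ies)")
```

Even this rough a picture is enough to start the prioritization discussion; the point is the inventory itself, not the tooling used to hold it.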

 

Wrapping Each Module in the Strategy

The federal systems engineering approach to large systems is generally to decompose what is large into smaller segments that can be built independently in ways that are planned for reassembly.  You can think of these segments as modules.  Perhaps the most important part of breaking down a large system is to understand the business capabilities it needs to support.  This is the first part of the strategy: ensuring that the aggregate of modules can satisfy the business capabilities needed.  As each module is defined by the capabilities it must support, the strategy begins to take shape, with priorities assigned to modules based upon core value.

At a high level, sketch the modules and define the highest level of data that would need to flow between them.  You can look up N-squared (N²) diagrams to see how to map the flow visually at a high level; a small sketch of building such a matrix follows.  Each module needs to contribute something internally or externally.  The core module is often the one that has a critical role in feeding data to other modules.  The diagram also helps in understanding the boundaries of each module and how the module will need to act in support of the overall system.
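A minimal sketch of that N-squared view, assuming the modules and data flows have simply been listed by hand (the module and data names below are hypothetical), could look like this:

```python
# Hypothetical modules and the data each one feeds to others.
modules = ["Finance", "Logistics", "Personnel", "Reporting"]
flows = {
    ("Finance", "Reporting"): "cost data",
    ("Logistics", "Finance"): "shipment costs",
    ("Logistics", "Reporting"): "inventory levels",
    ("Personnel", "Finance"): "payroll data",
}

# Print an N x N matrix: rows are producing modules, columns are consuming modules.
width = max(len(s) for s in modules + list(flows.values())) + 2
print("".ljust(width) + "".join(m.ljust(width) for m in modules))
for producer in modules:
    row = [flows.get((producer, consumer), "-").ljust(width) for consumer in modules]
    print(producer.ljust(width) + "".join(row))

# The "core" module is often the one feeding the most other modules.
out_degree = {m: sum(1 for producer, _ in flows if producer == m) for m in modules}
print("Core candidate:", max(out_degree, key=out_degree.get))
```

In this made-up example, Logistics feeds two other modules, so it surfaces as the core candidate; the real value of the exercise is making the boundaries and flows explicit.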

Decisions should be made for the overall system that include the security controls required to protect the system and its data, as well as the method of communication between modules.  When these are defined and documented, they become standards for each module developed; one way of capturing them is sketched below.
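One illustrative way to capture those decisions so that every module inherits them (the specific controls and protocol choices below are placeholders, not recommendations) is a single, shared definition of the standards:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SystemStandards:
    """System-wide decisions applied to every module as it is developed."""
    inter_module_communication: str  # how modules exchange data
    message_format: str              # shared payload format
    authentication: str              # how callers are identified
    data_in_transit: str             # protection for data moving between modules
    data_at_rest: str                # protection for stored sensitive data


# Placeholder values for illustration only; the real entries come from the
# enterprise's own security and communication decisions.
STANDARDS = SystemStandards(
    inter_module_communication="asynchronous messaging over an approved transport",
    message_format="a published schema for each shared data entity",
    authentication="mutual authentication between modules",
    data_in_transit="encryption for all data exchanged between modules",
    data_at_rest="encryption for fields classified as sensitive",
)
```

Because the standards live in one place, each new module conforms to them rather than inventing its own variant, which is exactly the inconsistency the lessons learned warn about.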

The next most important factor is to understand the details of the data within the chosen module.  You will need to know what data the module “owns.”  This means the module is allowed to create this data and to share it externally as the module of record for that specific data.  The next step is to define the data that must be shared with, and the data that must be consumed from, other modules or external sources and destinations.  This might take a few iterations, yet it is critical to ensure the entire system has clearly articulated data needs in support of the business capabilities it enables; a minimal sketch of such a data contract follows.
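A minimal sketch of such a data contract per module, under the assumption that every data entity must have exactly one owning module of record (the entity and module names are hypothetical):

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class ModuleDataContract:
    name: str
    owns: set[str] = field(default_factory=set)      # entities this module creates as module of record
    publishes: set[str] = field(default_factory=set) # owned entities shared with other modules
    consumes: set[str] = field(default_factory=set)  # entities sourced from other modules or external systems


contracts = [
    ModuleDataContract("Finance", owns={"invoice", "payment"}, publishes={"invoice"}, consumes={"shipment"}),
    ModuleDataContract("Logistics", owns={"shipment"}, publishes={"shipment"}, consumes={"invoice"}),
]

# Foundation check 1: every entity has exactly one owner (one module of record).
owner_counts = Counter(entity for c in contracts for entity in c.owns)
duplicates = [entity for entity, n in owner_counts.items() if n > 1]
assert not duplicates, f"Entities with more than one owner: {duplicates}"

# Foundation check 2: everything a module consumes is published by some module;
# anything unmatched still needs a defined source (possibly an interim one).
published = {entity for c in contracts for entity in c.publishes}
for c in contracts:
    missing = c.consumes - published
    if missing:
        print(f"{c.name} consumes data with no defined source yet: {missing}")
```

Iterating on these contracts, module by module, is what produces the clearly articulated data needs the strategy depends on.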

Using this strategy, you can move from the initial module to surrounding modules, using the same steps to ensure that data clarity and use are well defined.  Since the decisions on data communication and security control implementation have already been made, they simply need to be applied to each module as it is developed.  If a module, or set of modules, needs to stand alone in operations, there just need to be ways to ensure that the data it needs from elsewhere is sourced, even if in an interim mode until other sources are complete; one hypothetical sketch of such interim sourcing follows.
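One hypothetical way to keep a module operational while its upstream sources are still being built is a small sourcing indirection, sketched below (the interfaces, module names, and data are illustrative assumptions, not a prescribed design):

```python
from typing import Protocol


class ShipmentSource(Protocol):
    """Whatever provides shipment data to the consuming module."""
    def get_shipments(self, day: str) -> list[dict]: ...


class LegacyExtractSource:
    """Interim source: reads a nightly extract from the legacy system."""
    def get_shipments(self, day: str) -> list[dict]:
        # Placeholder: in practice, parse the legacy extract for the given day.
        return []


class LogisticsModuleSource:
    """Target source: the new Logistics module, once it publishes shipments."""
    def get_shipments(self, day: str) -> list[dict]:
        # Placeholder: in practice, call the Logistics module's published interface.
        return []


def daily_costing(source: ShipmentSource, day: str) -> int:
    # The consuming module depends only on the ShipmentSource contract, so the
    # interim source can later be swapped for the real one without changing it.
    return len(source.get_shipments(day))


print(daily_costing(LegacyExtractSource(), "2025-01-01"))
```

The interim source honors the same data contract and the same communication and security standards, so retiring it later is a sourcing change, not a redesign.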

 

Conclusion

To break away from monolithic systems, it is important to build upon a foundation so that each module can achieve its individual purpose while contributing to other modules as needed.  Interdependent modules provide an improved capability to change, where the only limitation becomes serving the data required by other modules.  Individual modules can be split into as many modules as needed, as long as the integrity of the data flows is maintained across the full set of modules.

The required foundation begins with the data that each module must “own.”