Data Management Capability (Series)

The following three-part blog provides a comprehensive description of the concept of “Data Management” specifically and of a “Capability” in general, within the context of a framework comprised of ten (10) data management functions.  The data management functions defined within this blog come from the writings of the Data Management Association (DAMA – www.dama.com) and its Data Management Body of Knowledge (DMBOK v.1).  The Data Management Body of Knowledge is a respected publication that attempts to formulate the concept of managing a corporation’s data as an asset.  The DAMA DMBOK v.1 is the basis for the terminology used to define the ten (10) data management functions.

The framework developed in writing this blog is a concept introduced by Estrada Consulting, Inc. and the author of this blog.  The framework, which defines the ten (10) data management functions, is used throughout this blog and is intended to allow the reader to overlay on each framework diagram the people, tools, and processes necessary to enable the data management functions.

Part 1 of this three-part blog will define the Data Management framework in general, with the ten (10) Data Management functions, and then incorporate other elements necessary to enable Data Management as a “Capability”.  The framework presented in Part 1 will be leveraged in Part 2 to depict the specific roles (people), tools, and processes involved in the Data Management functions.  Finally, Part 3 will describe in general what a minimal and an optimal data management program may look like for your organization.


Management Control in a Remote Environment

Project managers used a planning factor years ago indicating that team members located more than fifty feet from each other should be treated as remote teams.  Remote teams were discouraged because communication was simply too difficult to be effective.  Gradually, remote collaboration and communication tools have improved, yet the habits of individuals, especially management, have not helped build competency in effectively using remote staff.  Then COVID-19 began to spread.

For some organizations, the use of online collaboration tools, stored knowledge capital, access to individuals, and online meetings was already significant and increasing.  For others, however, tradition played a more significant role.  COVID-19 changed all that!  For a while, few could even go to an office, so the mode of operations had to change quickly.  Infrastructure did not always accommodate access for all individuals operating from remote locations.  Individuals were less effective meeting online due to less capable scheduling and coordination technologies, and were not as familiar with the technology tools needed to accomplish their work.  Online meetings ignored many participants who were not even known to be participating.  Managers could not walk over to individuals to talk about progress or activity.  Many habits were being forced to change.

This article is about the techniques that could help all of us to manage better in a more remote environment for projects in the future.  Techniques will be explained in three dimensions: management of people, management of work, and management of capabilities to enable remote work.  Key techniques are in Bold font.

Management of People

In case you didn’t notice, the last two decades have changed the management relationship with individuals within an organization.  If you believe you have to manage people, it is likely you have a lot of turnover, since individuals who are strong in the capability to do their work need, and want, little “management.”  Leadership of people might be a better concept to operate from.  In general, the techniques for this section follow:

  • Hire and Retain the Right People: I know we have heard this over and over again. The difference now is that people need to be not only qualified, but self-managed.  If people are doing what they need to do, we should find ways to retain them.  If not, we need to find other work where they are capable of meeting the outcome expectations.  Training may be important in some organizations.
  • Provide Clear Outcome Expectations: When you have the right people, they need to know the expected outcomes they need to achieve. This needs to be done with teams as much as individuals, since remote work must rely on qualified, self-managed people and teams.  Outcomes need measurable criteria, as projects have.  When things happen, there may be methods for negotiating what is acceptable, but the achievement cannot be totally negotiable, since then any performance is acceptable.  If they are truly self-managed, they will come very close to or exceed expectations every time.
  • Facilitate Success and Growth: Managers are no longer supervisors. If you have the right people you need to get impediments to work out of the way of the employees so they can manage the work.  Pay them appropriately for retention when they perform.  Train to remain ahead of market trends.  People are not disposable; find the right job for them and help them prove themselves by providing the right challenges.

Management of Work

If you hired the right people, and they know the required outcomes to achieve, they should be the ones managing the work.  Projects are different from operations; so, they are treated separately here:

  • Projects: The principles of project management have not changed. The importance and methods used for some areas have had to change.
    • Communication is one where the traps we fall in when in the office can become much worse using online technologies. Some of these traps follow:
      • Informed Misunderstanding: Those who cannot complete their work to the level required will often, knowingly or unknowingly, practice ambiguous communication. By providing some information that is not clearly tied to their lack of progress, they keep in-person meetings relying on them, even though their communication is intended as a defense for not completing the expected work.  In an online environment, this can become worse; alternatively, techniques for tracking completion of major pieces of work can clear the air more quickly and expose weaknesses that the team can adjust for, or that encourage staffing with the appropriate skills.
    • Schedules have always been an illusion and likely a false promise for projects, since the level of ambiguity at the onset of a project is so high that estimates and schedules are mostly wrong. The beauty of remote tracking is that the work results become more pertinent.  If you know the work that needs to be done, and you track the rate at which work gets done, this becomes the demonstration of what schedule is possible rather than the unrealistic target set for those who need to feel in control.  Learning to use this technique provides a departure from past practice that is much more valuable than holding on to false hope.
    • Risk, Issues, and Related Impact: Quite often meetings on risks and issues focus on the assignment and a report of activity by the person assigned. The key factor for risks or issues is the avoidance of impact.  To avoid impact, track impact, not activity.  If you can’t explain the actual or potential impact to your project, then don’t worry about the item.  If you can define the actual or potential impact, take the action necessary to stop it.  By focusing on the appropriate data using remote systems, the focus can avoid the comfort of meeting together while discussing things that may never cause impact.
    • Scope and Requirements: These items need to be documented. Even then, there are arguments about scope statements and requirement details.  For this reason, it is more important to keep communication in documentation, but to ensure that acceptance criteria are added to ensure the definition of “done” can be better established at the onset.  Also, track changes since these highly impact cost and need to be documented when approved with the additional schedule and budget for each change.
    • Quality: You already know that you can’t build quality into results by testing at the end of development. One way to improve quality in the process is in the conduct of reviews.  Haven’t you seen how circulating a document for review results in many people having changes to the same paragraphs?  When the document originator has no idea how to negotiate between the parties, it becomes clear that this form of review has low quality.  Conduct an online review with the fewest number of pertinent resources.  Let them hear each other’s comments, and let them determine and negotiate the right resolution to conflicting comments so that they are in complete agreement at the end of the review.
    • Integration: When more than a single party is involved in a project, there has traditionally been an integrated schedule of all activity. The tracking of all activity can be time-consuming and can also distract you from managing what is important.  The only integration part of integrated tracking is for those items on which another party depends.  The answer is to track dependencies, not all tasks for teams.  Teams can manage their own tasks, and the PM just needs to know when others will be impacted so they can act on the item causing the impact.
    • Cost: This factor is generally driven by scope, schedule, and quality. By using techniques above for these items, you have greater potential for controlling cost.  When other items indicate change, build a projection for cost for each change identified to ensure parties are informed on potential cost.
    • Configuration Management: Any project needs to understand the context of baseline information. Reporting needs to be based upon baselines.  The same baseline for requirements associated with an end product can be meaningful, but reporting on different baselines would lose meaning.  All documentation related to a project solution needs to be version controlled.  Allowing change without control prevents tracking the authority for change, and the associated schedule, budget, or other factors become misaligned.  For remote consistency, keep all items version-controlled in a central location available to stakeholders so they can review and use the appropriate versions for their work.
  • Operations:
    • In many cases, operations rely upon either self-managed teams or direct supervision. As work moves to the remote domain, transitioning where possible to self-managed teams, using team rewards, and ensuring the right people are doing the work will provide the best possible case for effectiveness and efficiency.

Management of Capabilities to Enable Remote Work

When individuals are working remotely, the way they are enabled for their performance can be critical.  The techniques are in the following areas:

  • Tools and Infrastructure:
    • Have you noticed that online meetings make it more difficult to provide the rich experience of communication? This is because most communication is non-verbal.  Have smaller meetings and check in with each person as often as necessary.
    • Have you noticed that meetings in the office used to always go the full hour, yet meetings online tend to get shortened to only the time required? What a concept.  Keep it up.
    • Have you noticed that Internet connectivity is not the same from home as from the office? This is the one area that will need to be managed since the availability of connectivity will need to be much better and much less expensive to support the work requirements.
    • Have you noticed that individuals can’t take their desktop home and have it work? If computer work is essential, then a laptop is important.  Company software should not be placed on individuals’ machines, so there is still a sense of responsibility required for the software used to get work done.
  • Security: Cyber security has taken on much greater emphasis in recent years. With remote performance, the security of individuals and the work performed needs to increase.  Ensure that each system (people, process, and technology) includes appropriate authentication (possibly multi-factor and biometric), and that time-outs and other features are set to avoid unauthorized access.
  • Knowledge: It has always been a bit foolish to expect that knowledge actually existed to a high level within systems that capture documentation. Think about it; how much of what you know have you documented?  Very little!  So, knowledge resides much, much more in people than in documents.  It is fine to capture and use documentation, but remote performance is demonstrating how much of a need there is to get to a person to obtain something necessary to excel in achieving work results.  Provide enhanced access to people as part of any knowledge management solution.
  • Actions:
    • Employees should not live in fear of their positions. Speak honestly with them about their ability and appropriate work and compensation.
    • Employees need a safe place to work. Home is not always a great environment; so, consider what it takes to ensure each person has a place to operate from that they agree is safe and compatible with work.

Summary

While remote collaboration for enterprise achievement was largely thrust upon us unexpectedly, most organizations are finding ways to achieve that are innovative and meaningful for all involved.  Where some individuals struggle at first with technology, this is a short-lived challenge, and techniques for managing effectiveness in these environments are the new domain of creativity and innovation.

The techniques included herein are just a start; please don’t hesitate to add your comments or suggestions to improve the remote effectiveness in your areas of business.

Disciplined Agile Delivery (DAD)

Background

Disciplined Agile Delivery (DAD) enables teams to make simplified process decisions around incremental and iterative solution delivery. DAD builds on the many practices espoused by advocates of agile software development, including scrum, agile modeling and lean software development.   

DAD has been identified as a means of moving to the next evolution of Scrum.  DAD provides a carefully constructed mechanism that not only streamlines IT work but, most importantly, enables scaling.  DAD is a hybrid agile approach to enterprise IT solution delivery that provides a solid foundation from which to scale.

DAD recognizes not only the importance of networks of cross-functional teams, it also explicitly offers support for scaling key practices across complex working environments using techniques that link software development efforts into robust software delivery events.

Toolkit 

The Disciplined Agile (DA) process-decision toolkit provides straightforward guidance to help people, teams, and organizations streamline their processes in a context-sensitive manner, providing a solid foundation for business agility.  It does this by showing how the various activities such as Solution Delivery (software development), IT Operations, Enterprise Architecture, Portfolio Management, Security, Finance, and Procurement work together as a cohesive whole.  DA describes what these activities should address, provides a range of options for doing so, and details the tradeoffs associated with each option.

To begin your adoption of DAD, it is best to start at the beginning and adopt it incrementally.

There are four areas within the DA toolkit:

  1. Disciplined Agile Delivery (DAD) 
  2. Disciplined DevOps 
  3. Disciplined Agile IT (DAIT)
  4. Disciplined Agile Enterprise (DAE)

This article will concentrate on the first DA area (DAD) and, specifically, on the Way of Working (WoW).

DAD is the foundational layer of the DA toolkit.  It promotes a goal-based rather than a prescriptive strategy and enables teams to choose their way of working (WoW).  Depicted below are the process goals of DAD.

Figure 1. The process goals of Disciplined Agile Delivery (DAD)

 

The goals are broken into four areas: Inception, Construction, Transition, and Ongoing.  Inception gets the team going in the right direction before any development work begins.  Construction is where the team incrementally builds the solution, and Transition is where the solution is released into production.  Finally, Ongoing is where the team improves its skills and better adapts itself to the organization’s enterprise.

By providing choices rather than prescriptions and guiding people through these process goals, DAD enables teams to adopt a continuous-improvement approach to solution delivery.

Way of Working (WoW) 

When teams initially form, they need to invest in putting together their initial WoW.  This includes choosing the lifecycle that best suits their project, selecting the tools they will use, and setting up the physical work environment.  Because initiating a project tends to be very different from executing on the development of a solution, teams tend to tailor their WoW around what they are comfortable with: their tried-and-true way of working.  However, teams can evolve their WoW based upon new learnings.  In considering your WoW, the team must ask themselves the questions listed below.  This helps the team get organized in the manner it is used to.

  • How will we organize our physical workspace?
  • How will we communicate within the team?
  • How will we collaborate within the team?
  • What lifecycle will we follow?
  • How do we explore an existing process?
  • What processes/practices will we initially adopt?
  • How will we identify potential improvements?
  • How can we reuse existing practices/strategies?
  • How will we implement potential improvements within the team?
  • How will we capture our WoW?
  • How will we share effective practices with others within our organization?
  • What software tools will we adopt?

The figure below depicts different ways to evolve your WoW, and as you can see, there are many options:

 

Deciding upon your WoW is critical during the Inception Phase since it sets the framework you need to move to the Construction Phase.  During all the various DAD phases, the WoW is constantly reviewed, evaluated and improved upon.

Understanding ASP.NET Middleware


In ASP.NET Core, middleware is the term used for components that form the Request Pipeline.

The request pipeline is similar to a chain; it can contain multiple middleware components. These components handle the request in sequence; each component inspects the request and decides whether it should be passed to the next middleware or whether to generate a response, interrupting the chain.

Once the request has been handled, a response is generated and sent back to the client, passing back along the chain.

Execution Order

Middleware components execute in the order they are registered when handling requests, and in the reverse order when handling responses.

Check the example below:
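As a minimal sketch (the inline lambdas and the messages they write are illustrative, not from the original), two components registered with app.Use in Startup.Configure demonstrate this ordering:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.Use(async (context, next) =>
        {
            // first registered: runs FIRST on the request, LAST on the response
            await context.Response.WriteAsync("Middleware 1: request\n");
            await next();
            await context.Response.WriteAsync("Middleware 1: response\n");
        });

        app.Use(async (context, next) =>
        {
            // second registered: runs SECOND on the request, FIRST on the response
            await context.Response.WriteAsync("Middleware 2: request\n");
            await next();
            await context.Response.WriteAsync("Middleware 2: response\n");
        });

        app.Run(async context =>
        {
            // terminal component: end of the chain
            await context.Response.WriteAsync("End of pipeline\n");
        });
    }
}
```

A request through this pipeline writes the lines in the order: Middleware 1 request, Middleware 2 request, end of pipeline, Middleware 2 response, Middleware 1 response.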

How to create a Middleware

Middleware components don’t implement interfaces or derive from base classes. A middleware class simply has a constructor that takes a RequestDelegate as a parameter and implements an Invoke method.

The RequestDelegate represents the next middleware component in the chain, and the Invoke method is called when the component receives a request.

For example:
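A minimal convention-based component might look like the following (the class name SampleMiddleware and the comments are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class SampleMiddleware
{
    private readonly RequestDelegate next;

    // the framework supplies the next component in the chain
    public SampleMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    // called for every request that reaches this component
    public async Task Invoke(HttpContext context)
    {
        // request-phase work goes here
        await next(context);
        // response-phase work goes here
    }
}
```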

Creating Content-Generating Middleware

The most important type of middleware generates content for clients, and it is this category to which MVC belongs.

This kind of middleware is used when you want to generate some content and send it back to the client without dealing with all the MVC complexity.

Implementation:
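A sketch of such a component, assuming a hypothetical /ping endpoint (the class name ContentMiddleware and the route are assumptions for illustration):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class ContentMiddleware
{
    private readonly RequestDelegate next;

    public ContentMiddleware(RequestDelegate next) => this.next = next;

    public async Task Invoke(HttpContext context)
    {
        if (context.Request.Path.ToString().ToLower() == "/ping")
        {
            // generate the content directly, bypassing MVC entirely
            context.Response.ContentType = "text/plain";
            await context.Response.WriteAsync("pong");
        }
        else
        {
            // not our route: hand the request to the next component
            await next(context);
        }
    }
}
```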

 

Creating Short-Circuiting Middleware

Short-circuiting middleware components are used when you want to inspect the request and decide whether it should be passed to the next component.

For example, the component below checks whether the request contains the User-Id header; if not, the middleware breaks the chain and returns a 401 Unauthorized response to the client.
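A sketch of that check (the User-Id header comes from the text above; the implementation details are assumptions):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class ShortCircuitMiddleware
{
    private readonly RequestDelegate next;

    public ShortCircuitMiddleware(RequestDelegate next) => this.next = next;

    public async Task Invoke(HttpContext context)
    {
        if (!context.Request.Headers.ContainsKey("User-Id"))
        {
            // break the chain: respond 401 and never call the next component
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
        }
        else
        {
            await next(context);
        }
    }
}
```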

 

Creating Request-Editing Middleware

The next type of middleware component examined doesn’t generate a response. Instead, it changes requests before they reach other components later in the chain. This kind of middleware is mainly used for platform integration to enrich the ASP.NET Core representation of an HTTP request with platform-specific features.

The example below demonstrates checking whether the request contains a blank User-Id header; if so, the header is removed.
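One way to express that check (a sketch; the implementation details are assumptions):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RequestEditMiddleware
{
    private readonly RequestDelegate next;

    public RequestEditMiddleware(RequestDelegate next) => this.next = next;

    public async Task Invoke(HttpContext context)
    {
        // if User-Id is present but blank, strip it before later components see it
        if (context.Request.Headers.TryGetValue("User-Id", out var value)
            && string.IsNullOrWhiteSpace(value))
        {
            context.Request.Headers.Remove("User-Id");
        }

        await next(context);
    }
}
```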

 

 

Interacting with another Middleware

Middleware components can interact with each other. Let’s consider that RequestEditMiddleware is executed before ShortCircuitMiddleware.

In that case, if a request contains a blank User-Id header, RequestEditMiddleware will remove that header from the request and call the next component, which is ShortCircuitMiddleware. ShortCircuitMiddleware won’t find the User-Id header and will break the chain, returning a 401 response to the client.

Registering a Middleware

Now that we know how to create our own custom components, how do we use them?

It’s simple: the Startup class has a method called Configure, which is responsible for setting up how the application will handle requests.

This method has a parameter of type IApplicationBuilder; that is the object we use to register our components.

See example below:
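A sketch of registration with UseMiddleware&lt;T&gt;, using the component names discussed in this article (the ordering here is illustrative):

```csharp
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // components run in registration order for requests,
        // and in reverse order for responses
        app.UseMiddleware<RequestEditMiddleware>();
        app.UseMiddleware<ShortCircuitMiddleware>();
    }
}
```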

However, there is a more convenient way to register the components; for that, we need to create some extension methods.

See below:
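A sketch of such extension methods for the components named in this article (the method names are assumptions):

```csharp
using Microsoft.AspNetCore.Builder;

public static class MiddlewareExtensions
{
    // wraps UseMiddleware<T> so callers read like built-in registrations
    public static IApplicationBuilder UseRequestEdit(this IApplicationBuilder app)
        => app.UseMiddleware<RequestEditMiddleware>();

    public static IApplicationBuilder UseShortCircuit(this IApplicationBuilder app)
        => app.UseMiddleware<ShortCircuitMiddleware>();
}
```

With these in place, Configure reduces to calls such as app.UseRequestEdit() and app.UseShortCircuit().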

After creating the extension methods, all we have to do is register the components using them.

 

ASP.NET Core and Docker for Beginners

Introduction:

Docker is an open, lightweight platform for developing, shipping, and running applications using container technology. Docker provides container solutions for developers, architects, DevOps, and IT people. It can run on Linux, Windows, and macOS, and on most cloud providers such as AWS, Azure, and Google Cloud.

What is Docker?

Docker is all about running a single program inside a separated environment. It is an open-source platform which can be used to package, distribute, and run applications in different environments.

Let’s start with an example.


The above diagram shows three applications running, all using the same installed framework (v2.0). What will happen if App 1 requires framework v1.0, App 2 needs v2.0, and App 3 needs v3.0? Only App 2 will work, because the installed framework is v2.0; Applications 1 and 3 will stop working because they need a different framework version. What can be done?

One way is to use three different systems for the three applications, but that would be very expensive, and maintaining them would also be very difficult.

This is where “DOCKER” comes into play. Docker makes sure each application has its own isolated environment rather than shared, common dependencies.

Docker will do something like this.


Build the ASP.NET Core and Docker Packages

Docker will create a package that can run in different environments.

To build an app in Docker, first we need an app to Dockerize.

Docker allows you to build an application in pretty much the same way you would create an application to run on your local machine.

When you create the application, make sure you enable Docker support, and also make sure you have Docker installed by going to:

https://www.docker.com/products/docker-desktop

Note: Make sure Docker is installed. Otherwise you’ll get the below error:

Installing this (Mac or PC) will run Docker and allow you to create and host Docker containers. During development, Microsoft Visual Studio will use Docker Desktop to host your app so you can develop and test inside a container.

The trick is to get a new item in your launchSettings.json:
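As a sketch, the Docker profile that Visual Studio’s tooling typically adds to launchSettings.json looks like this (the exact values vary by project):

```json
{
  "profiles": {
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}"
    }
  }
}
```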

This, in conjunction with a Dockerfile in your project (just called “Dockerfile”, without an extension), builds a container, hosts it, and allows you to debug it remotely.

Writing a Dockerfile:

What is Dockerfile?

In short, a Dockerfile contains a series of instructions that define how to construct an image:

Below are the common commands:

FROM

This command specifies the base image, e.g., microsoft/aspnetcore:2.0

WORKDIR

Changes the working directory for subsequent instructions in the Dockerfile

COPY

Copies files from source to destination

RUN

Executes a command directly in the container

EXPOSE

Sets a specific port internally to the container

VOLUME

Usually used to map a physical directory on the host machine to a logical directory in the container

ENTRYPOINT

Specifies which application will run in the container created from the image.

To use some of these commands, first create a Dockerfile in your application:

To write a Dockerfile for an ASP.NET Core app, you must first know whether you want to target the development environment or the production environment.

In the development environment you need an image that contains a compiler and allows compiling directly in the container; for that, use microsoft/aspnetcore-build directly from Docker Hub (the Docker repository).

In the production environment you do not need a compiler, because you build the application externally, integrate all the built files into the image, and just use a lightweight image containing the .NET Core runtime, microsoft/aspnetcore, to execute your application.

The Dockerfile is pretty simple (if you’re familiar with Docker) though it’s very automated, so you don’t have to know how it works:
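A sketch of a typical multi-stage Dockerfile of this kind, assuming the project publishes to DockerApp.dll (the image tags and project name are assumptions):

```dockerfile
# build stage: SDK image that contains the compiler
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# runtime stage: slim image with only the ASP.NET Core runtime
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "DockerApp.dll"]
```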

It creates a container from a slim image with ASP.NET Core 2.2, builds the project, and publishes it to get a copy of the project into a container. To test or debug with Docker, just pick the “Docker” option in the debugging dropdown:

After that, in cmd type:

docker build -t rootandadmin/dockerapp -f Dockerfile .

Now you have your image; to check, type:

docker images

You have your image rootandadmin/dockerapp, but an image on its own is inert; to bring it to life you need to create a container. There is a big difference between an image and a container: a container is a running instance of an image.

Then, to create a container from the image and start it, type:

docker create -p 555:80 --name dockerapp01 rootandadmin/dockerapp

docker start dockerapp01

You can try to access your DockerApp in the browser using the following address:

http://localhost:555/DockerApp/Home
