Success with job shop operations (Part III) – How to put decision support in-line

Welcome to Part III of a blog series on success with Job Shop Operations.

In Parts I and II of this blog mini-series we highlighted the importance of the auto-scheduling and orchestration (i.e. guideline) capabilities that come from background BPM (Part I), and showed how the beginnings of progress monitoring can be carried out by collecting data at workflow steps (Part II).

Here comes the reality check.

Not all work is sequential, and few workflows can anticipate every eventuality, so workers need to be able to deviate from “best practices” from time to time. This includes revisiting already-committed steps, skipping steps, performing steps in a different sequence, and inserting steps that were not in the original workflow.

All of which tells us that real workflows have many moving parts. Clearly, we need to set up governance (i.e. guardrails) so that significant excursions away from “best practices” can be reined in.

Governance at Cases can be provided by local and global business rules.

Let’s start with local business rules at workflow step forms, rules that tell you, for instance, when mandatory data is missing.

How should a software system deal with missing mandatory data?

The silly choice is to keep the user at the form until, supposedly, they break down and provide the required data. If they simply don’t have the mandatory data, the entire process grinds to a halt, and you might come around a few weeks later to find the worker, covered in cobwebs, at his/her workstation.

A better approach is: “OK, this IS mandatory data and we would like to have it now, but if you cannot or will not provide it, we will let things move forward. Later on, when we get to where we actually need this data, we will cause a hard stop.”

A good example of this is a standard address block on a form. If you get everything but the zip code, you will be able to phone the person but not send anything by mail. So a red flag comes up as the worker tries to leave the data capture form, and the system either gets the missing data or it does not. Downstream, if the workflow reaches a step called “Mail letter”, there is no point going forward, so the user sees a hard stop at that step.
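
As a concrete illustration, here is a minimal sketch in Python of the soft-warning/hard-stop pattern. All names here (check_address, mail_letter_precondition, the field list) are illustrative assumptions, not taken from any particular BPMS.

```python
# Soft enforcement at the capture form, hard enforcement at the step
# that actually needs the data. Names are illustrative only.

MANDATORY_ADDRESS_FIELDS = ["street", "city", "state", "zip"]

def check_address(case_data):
    """At the capture form: red-flag missing fields but let the case move on."""
    missing = [f for f in MANDATORY_ADDRESS_FIELDS if not case_data.get(f)]
    if missing:
        print(f"WARNING: missing mandatory field(s) {missing}; proceeding anyway")
    return missing

def mail_letter_precondition(case_data):
    """At the 'Mail letter' step the zip code is actually needed: hard stop."""
    if not case_data.get("zip"):
        raise RuntimeError("Hard stop: cannot mail a letter without a zip code")

case = {"street": "12 Main St", "city": "Springfield", "state": "IL", "zip": ""}
check_address(case)                   # red flag at the form; workflow continues
try:
    mail_letter_precondition(case)    # downstream step where the data is needed
except RuntimeError as e:
    print(e)                          # the hard stop surfaces at 'Mail letter'
```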

The need for global business rules comes up in software environments where external (or internal) “events” are allowed to impact the performance of work. Internal events include ad hoc interventions at Cases.

Say I have a series of assembly steps that must be performed internally, followed by a merge where an outsourced assembly must be integrated with the internal assembly.  In the absence of event handling and a merge point, the next-in-line step following completion of the internal work would be the integration step.

The last thing you would want to do is schedule this step and allocate resources to it when you have no information regarding the status of the outsourced work. So, the solution here is a global rule set that holds back the integration task pending the “out-of-the-blue” arrival of the outsourced assembly.
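
A minimal sketch of such a gate, again in Python: a global rule that refuses to release the integration step until the external delivery event has been recorded against the Case. The Case class and the event names are assumptions for illustration.

```python
# A global rule gating the 'integrate' step on an out-of-the-blue
# external event (arrival of the outsourced assembly). Illustrative names.

class Case:
    def __init__(self):
        self.events = set()

    def record_event(self, name):
        self.events.add(name)

def can_start_integration(case):
    # Both the internal completion and the external delivery event
    # must be on record before the step is scheduled and resourced.
    required = {"internal_assembly_done", "outsourced_assembly_received"}
    return required <= case.events

case = Case()
case.record_event("internal_assembly_done")
print(can_start_integration(case))    # False: hold the task, allocate nothing
case.record_event("outsourced_assembly_received")  # event arrives from outside
print(can_start_integration(case))    # True: safe to schedule the integration
```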

There have been various approaches to the setup of global rules.

A traditional approach among software developers has been to build and maintain a large central “rules” engine that is consulted as and when an event is detected (i.e. be on the lookout for this, then go to the rules engine to find out what to do). Not a bad approach, except that the more rules there are, the larger the central rule set becomes, and the greater the danger that a small change to it will cause problems in areas of workflow step processing that have nothing to do with that change.

A different approach is to define key process points along workflows that consult small rule sets with a narrower focus.

Suppose we have a workflow step called “ship”. The usual precursor to “ship” is “inspect”, but what if someone skips “inspect”, or does not follow the “inspect -> ship” workflow fragment at all? A user might, for example, invent an ad hoc workflow step called “ship”.

The generic question is: how do we avoid shipping products that have not been inspected?

We can solve the problem by attaching a pre-condition to the “ship” step that looks to a global rule set for advice/assistance. If the item has been inspected by QA, we will see that in the data, so the rule set has only to consult the data and resolve to TRUE/FALSE. On FALSE, the item will not be shipped.
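
Here is what such a narrowly focused, step-local rule set might look like, sketched in Python with illustrative names. Note that the pre-condition fires even if the user reaches “ship” via an ad hoc step.

```python
# A small rule set consulted as a pre-condition to the 'ship' step.
# The rules simply read the Case data and resolve to True/False.

def rule_inspected_by_qa(case_data):
    return case_data.get("qa_inspection") == "passed"

SHIP_PRECONDITIONS = [rule_inspected_by_qa]   # narrow focus: 'ship' only

def try_ship(case_data):
    if all(rule(case_data) for rule in SHIP_PRECONDITIONS):
        print("Shipping item", case_data["item_id"])
    else:
        print("Blocked:", case_data["item_id"], "has not passed QA inspection")

try_ship({"item_id": "A-100"})                             # blocked (no QA data)
try_ship({"item_id": "A-101", "qa_inspection": "passed"})  # ships
```

Because each rule set is attached to one process point, a change to the shipping rules cannot disturb unrelated parts of the workflow, which is precisely the advantage over one large central engine.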

So, at the end of Part III we have scheduling, orchestration and governance.

Missing from this list is “interoperability”; no BPMS can afford to be “an island”. Interoperability is what allows a BPMS to export its data to 3rd-party systems and applications, internal and external, and to receive data from the outside world. Included among 3rd-party systems are data warehouses, from which data mining/analysis can be initiated, leading to process improvement.

What we are looking for, at the end of the day, is 360-degree “best practices” discovery, mapping, improvement, rollout, progress monitoring, performance tracking of steps/tasks, data collection, data exchange, and data analysis.

Courtesy of Walter Keirstead. This blog is also available at http://kwkeirstead.wordpress.com/

By Karl Walter Keirstead @ Civerex | September 10, 2013
