Saturday, July 11, 2015

IBM BPM 8.5 - Dynamic user assignment to task

Starting with IBM BPM v8.5, the assignment options available in earlier versions, such as "Last User in the Lane", "Routing Policy", "List of Users" and "Custom", have been deprecated and can no longer be used for task assignments. In older versions the "Custom" option was used for dynamic assignment of a team or user to a task. From v8.5.0 onward, a new assignment mechanism has been introduced: the "Team Filter Service".

In this post, let us look at a code sample illustrating dynamic task assignment to a user using a Team Filter Service.

As a first step, create a simple BPD with one activity backed by a human service. Click the activity, navigate to Properties --> Assignments, and click "New" next to "Team Filter Service" to create a new integration service (the team filter service) as shown in the figure below, or select an existing service.




Now, in the Team Filter Service, navigate to the "Variables" section and create a new variable called "toUserId" as shown in the figure. This variable is used to pass in the user ID to be assigned.


Let us look at the Assignment tab of the activity in the BPD after attaching the Team Filter Service. The figure below shows the Assignment tab for the activity along with the input mapping.



Now let us look at the script used to create a team from the users passed dynamically to the Team Filter Service. In this script we need to build a team of users and assign it to the output variable "filteredTeam". The figure below shows the diagram of the Team Filter Service.


Below is the script that creates the team from the dynamically passed users:

// Create a new, empty Team with an empty member list
tw.local.filteredTeam = new tw.object.Team();
tw.local.filteredTeam.members = new tw.object.listOf.String();
// Append the dynamically passed user ID to the end of the member list
tw.local.filteredTeam.members.insertIntoList(tw.local.filteredTeam.members.listLength, tw.local.toUserId);

Once the BPD is created and the assignment is configured as explained above, create an instance of the BPD; you will notice that the task is assigned to the team built from the list of users.

This script can be extended to filter the users of a given team based on certain parameters, to create a new temporary team from a list of intended users, or to handle any other scenario where the intended users or teams are determined dynamically at run time.
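
As a sketch of that filtering idea, here is the core step in plain JavaScript, outside the tw.* API, using mock data and a hypothetical predicate (keep user IDs that start with "admin"):

```javascript
// Plain-JavaScript sketch (mock data, not the tw.* API): keep only the members
// of a team that satisfy a predicate, e.g. to build a filtered team at run time.
function filterMembers(members, predicate) {
  var filtered = [];
  for (var i = 0; i < members.length; i++) {
    if (predicate(members[i])) {
      filtered.push(members[i]);
    }
  }
  return filtered;
}

// Hypothetical predicate: keep user IDs that start with "admin".
var result = filterMembers(["alice", "bob", "admin1"], function (userId) {
  return userId.indexOf("admin") === 0;
});
```

In a real team filter service the same shape applies: iterate over the incoming team's member list and insert the matching entries into tw.local.filteredTeam.members.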

Hope this is useful!

Friday, July 10, 2015

IBM BPM EPV variable behaviour

  1. I have snapshot_A with an EPV with a default value: "123"
  2. I installed that snapshot to my Process Server and the --> current EPV value is "123" as expected
  3. Now I add a new EPV value "456" with an effective date in the near future and the --> current EPV value changed to "456" as expected
  4. I created a new snapshot_B and I installed that one on my Process Server
  5. I used Synchronize Settings or Inflight Migration (Instances needed) and discovered that --> the current EPV value switched back to "123". Here, I expected to have "456" because it was the last modified value.
The behavior of migrating EPV values has changed in order to honor the default EPV value specified in the snapshot, if one is defined.
There are two scenarios:
  • In one scenario, the application is developed with the expectation that it requires the new EPV value defined in the snapshot (the default behavior in earlier releases).
  • In the other scenario, only the values defined at runtime are used, and values defined in the snapshot are ignored (the new default behavior, described above).
With JR52960, a new configurable property, "epv-deploy-default", has been introduced to toggle between the two behaviors. After applying the fix, the default behavior reverts to that of previous releases, where the latest modified EPV value is migrated; that is, the property defaults to 'false'. Once the property is enabled ('true'), the default EPV value from the snapshot is once again used during deployment or instance migration.
You can find the iFix and its pre-requisites on Fix Central: https://www-947.ibm.com/support/entry/portal/search_results?sn=spe&filter=keywords%3Aibmsupportfixcentralsearch&q=JR52960

Wednesday, June 3, 2015

IBM BPM - Task Management using server script ( ReAssign Task & Complete Task )

 Assign to me

tw.system.findTaskByID(tw.local.iTaskId).reassignTo(tw.system.user_loginName);


Complete Task

If the task has output variables, we have to add all of them to a map as below:

// Build a map containing every output variable the task expects
var outputVariable = new tw.object.Map();
outputVariable.put("out1","");
outputVariable.put("out2","");
outputVariable.put("out3","");
outputVariable.put("out4",0.0);
outputVariable.put("out5","");
outputVariable.put("conditioninBPD", "success");

tw.system.findTaskByID(tw.local.iTaskId).complete(tw.system.org.findUserByName(tw.system.user_loginName),outputVariable);
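
Outside BPM, the map-building step above can be sketched in plain JavaScript with a mocked Map and a hypothetical helper (both are illustrations only, not part of the tw.* API):

```javascript
// Mock stand-in for tw.object.Map, for illustration outside BPM only.
function MockMap() { this.data = {}; }
MockMap.prototype.put = function (key, value) { this.data[key] = value; };

// Hypothetical helper: copy every property of a plain object into the map,
// so all of a task's output variables are added in one place.
function toOutputMap(map, values) {
  for (var key in values) {
    if (Object.prototype.hasOwnProperty.call(values, key)) {
      map.put(key, values[key]);
    }
  }
  return map;
}

var outputVariable = toOutputMap(new MockMap(), {
  out1: "",
  out4: 0.0,
  conditioninBPD: "success"
});
```

A helper like this keeps the list of output variables in one object literal instead of a run of individual put() calls.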

Tuesday, June 2, 2015

IBM BPM - Design Patterns

Introduction

IBM BPM provides a number of generic artifacts for modeling purposes. These can be combined in a variety of ways in order to solve different technical problems.
A design pattern is used to document how a combination of model artifacts may be used together in order to solve a specific technical problem. Unlike a framework or utility, a design pattern cannot be deployed directly. A design pattern is rather like a pattern template for making clothes, where a tailor may use the same template to create a number of garments from different materials.
The design pattern provides a design template that allows the BPM developer to solve the same technical problem consistently throughout the model using different artifacts. Design patterns promote re-use and improve maintainability. Example design patterns include Layered Architecture, Task Services, Coach Services, Data Access Services and Manual Unit Tests.

General Patterns

 This page outlines a number of general design patterns.
If reading for the first time it is recommended that the reader should read these key patterns in the following order:

1. Smart Folder and Tagging

Design Pattern

Objective

Tags can be used to create specific smart folders making it easy to manage and account for the development cycle of the assets being created. Using the “By Tag” library view, a user will be able to sort through a large asset library quickly.
(Screenshot: the library's "By Tag" view)



You're not limited to just one tag per component. For instance, a service/BPD/integration that tests your database connections could be tagged with both Test and Database.

General (Pre-Existing in TW7)

Data Access: General data access integrations (CRUD)
Data Transform: Transforms data from one form to another (e.g. when coming out of web services into a BPD)
Database: To distinguish integration services that interact with databases
Exceptions: Used to handle exceptions
Deprecated: Items kept around for backward compatibility but no longer the recommended implementation
Installation: Used for the installation of a process app or toolkit
Integration: A specific integration implementation
Report: An implementation used for reporting purposes
Reporting: To distinguish which services, variables, tracking groups, etc. are linked to reporting
Security: Anything dealing with security, such as users and groups
Task: Specifically used to implement a task in a process. Can be used on all first-level task services
Test: To distinguish test harnesses
Top Level: The top-level BPD or a top-level service. Useful with an associated smart folder to know where the top items are
UI: Helpers for human services, such as getting lists of text for a list
Utility: A utility service
Web Services: To distinguish integration services that interact with web services

Naming Convention

Tag services as required in the Naming Conventions Pattern

Other Tagging Options

If your developers are the compulsively organizational type, try the following ideas for tags, and let us know the results. We’re always looking for feedback.
  • Development Status Based Tags - Tag each and every artifact with its development status (To Do, In Progress, Completed, etc.).
  • User Based Tags - Tag Task Services, Reports, and BPDs by the users/participant groups that will be using them.
  • Process Based Tags - If you have a process app with multiple processes, tag each process-specific artifact with a tag of the BPD it’s associated with.

Smart Folders

Objective

While the new library in IBM BPM versions 7 and up eliminates the cluttered, confusing nested library structure of previous Teamworks versions, it also eliminated the ability to quickly access top-level services and business process diagrams. We can address this deficiency, and improve the development experience, by using tagging and smart folders.

Application

There are four types of IBM BPM artifacts that we want to be easily accessible. Create a shared smart folder for each tag listed below.

Top-Level BPDs (tag: Top-Level BPD)
These BPDs should be the highest-level BPDs in your process app. You may only have a few, but the ability to get to them quickly is important. Example: an enterprise HR Onboarding process BPD. This may live in the HR process app, alongside other processes, but it stands alone from others (i.e. offboarding, conflict resolution, etc.). We would tag the HR Onboarding process with "Top-Level BPD", and it would appear in the Top-Level BPDs smart folder. We would not tag any sub-processes this BPD has (i.e. background check, adding to payroll system, etc.).

Top-Level Services (tag: Top-Level Service)
These stand-alone services (primarily human services) do not exist in any BPDs. Examples:
  • A CRUD (create, read, update, delete) service used to allow users to manually manage database content, event schedules, etc.
  • A service that contains a report embedded in a coach, which will be deployed on the left-side navigation bar in the IBM BPM Portal.
  • A service that allows users to drill down through process instances and the data/statuses associated with each.

Integrations (use a system-specific tag)
These are services which contain integrations with other systems.

Note: To create this folder, use system-specific tags (i.e. Database, Document Management, Legacy System). This way, you'll be able to sort your smart folder by integration (sort by tag). Examples:
  • A service that updates customer information using a web service.
  • A service that writes a document to a content management system.

Test Services (tag: Unit Test)
Services/BPDs used to test user interfaces, integrations, process flow, etc. Examples:
  • A service used to test an integration with a document management system.
  • A service used to test a user interface's validation on a customer service coach.
  • A BPD used to test proper process flow.

Examples

(Screenshots: the Top-Level BPDs, Top-Level Services, Integrations, and Test Services smart folders)

 

2. Naming Conventions

Purpose

To provide a standard method of naming TW artifacts in order to better organize and manage process apps. This updates the Teamworks 6 standard to leverage new features in Teamworks 7 and to make the pattern easier to learn and use.

 Noticeable Changes from the TW6 pattern

 Prefixes

Process Apps Naming Conventions

Because of the separation of process apps in Teamworks 7, there is no longer a worry of cluttered libraries with artifacts from multiple different process apps. This removes the need for prefixes.

 Body

The use of reserved verbs will help to clarify the action/use of a service.

 Abbreviated Suffixes

Service Types Naming Conventions

Due to the new service types in Teamworks 7, and the addition of tagging, the abbreviated suffixes used heavily in the Teamworks 6 naming convention pattern are no longer part of the pattern. This should make naming simpler and more user-friendly.

Artifacts

New service types allow artifacts to have simpler names, and the use of tags will allow clarification for users.
Tasks (Human Service or General System Service): A service that directly implements an activity. Such a service is responsible for coordinating a whole task for a user/system. Example: Register Sales Opportunity
Coaches (Human Service): A service that implements a single coach. It is generally recommended that a service not have more than a single coach in it. If the coach service has the same name as the task service, add the suffix "Coach" to the coach service. Example: Register Sales Opportunity Coach
Events (General System Service): A service used specifically to invoke an Event Driven UCA. The UCA has the same name as this service. If event based, append "Event" to both this service and the UCA. Example: Start Sales Cycle Event
Event Implementation (General System Service): A service that directly implements an Event Driven UCA. To name this service, precede the UCA name with the reserved verb "Implement". Example: Implement Start Sales Cycle Event
Data Access (Integration Service or General System Service): A service whose specific purpose is to get some data from inside (EPV, properties, variable) or outside (DB, LDAP) Teamworks and return it to the calling service/BPD. Use the reserved verb "Retrieve". Example: Retrieve Company Info
Unit Test (Human Service, General System Service, or BPD): A service designed to test another service or BPD. The unit test has the same name as the service/BPD being tested, preceded by the reserved verb "Test". This can apply to any type of service, such as task services, coach services, web service services, etc. Example: Test Register Sales Opportunity
Coach Validator (General System Service): A service used to encapsulate coach validation logic, used in conjunction with the Coach Validation Framework. The service name starts with the reserved verb "Validate". Example: Validate Sales Opportunity
Batch (General System Service): A service that directly implements a batch/cron-driven UCA. To name this service, precede the UCA name with "Batch" and the reserved verb "Implement". Example: Batch Implement Sales Cycle Event
Utilities (General System Service): A service that implements some piece of utility functionality (such as text parsing for valid email format, etc.). Example: Parse Email Addresses
Inbound Web Service (Web Service): A service that directly implements a Teamworks-hosted web service. Example: ReceiveProductDetails
Outbound Web Service (Integration Service): A service that wraps a web service connector. Example: Update Sales Opportunity in Salesforce.com
Constructor (General System Service): A service that initializes a variable. Precede the variable name with the reserved verb "Construct". Example: Construct SalesforceOpportunity
Business Object (Variable Type): A variable type that resides within the Business Object Model layer of a layered Teamworks architecture. These are used to define a common view of business data within a Teamworks process. Example: SalesforceOpportunity
View Object (Variable Type): A variable type that resides within the View layer of a layered Teamworks architecture. These may be defined to present data in a specific way.
Integration Object (Variable Type): A variable type that resides within the Integration layer of a layered Teamworks architecture. These may be used directly with SQL and web service connectors that load data directly into Teamworks variables. It may be useful to include the name of the system being integrated with in the variable's name. Example: SalesforceSalesOpportunity

Example Library

Example Tags and Names Naming Conventions

Reserved Verbs

These verbs can be used at the beginning of artifact names in order to more clearly specify the action of the service.

Retrieve: Pulling data from a system of record. Example: Retrieve Product Details from Salesforce
Write: Creating a new record in a system of record. Example: Write Customer Details to Salesforce
Update: Updating an existing record in a system of record. Example: Update Customer Details in Salesforce
Delete: Removing a record from a system of record. Example: Delete Customer from Salesforce
Send: Sending a message event/email to another system/participant. Example: Send Email to Customer
Receive: Receiving a message event from a participant. Example: Receive Message from Salesforce
Validate: Validating a coach using the coach validation framework. Example: Validate Product Details Coach
Test: Testing a service. Example: Test Delete Product Details from Salesforce
Construct: Initializing a variable type. Example: Construct ProductDetails
Implement: Implementing a UCA. Example: Implement Batch Timer

Higher Level Artifacts

Snapshot Names

Snapshot names should do at least one of two things:
  • Provide a date stamp of the snapshot (e.g. v15March2010 or 14June2010Release)
  • Describe change/enhancement (e.g. RouteAroundExecutiveApprovals)
Here is an example of a more detailed naming convention for snapshots.
Special Note for BPM Advanced
Not all the recommendations apply to BPM Advanced. For example, refer to this infocenter reference in situations where you want to use more than 3 digit numeric snapshots.
  • (Prior Release/Production).(Playback/Maintenance Release).(Snapshot per Current Playback/Maintenance Release).(Workspace branch)
  • Examples:
    • Snapshot for Release 1 development prior to installation to Production, during playback 3 development, the 6th snapshot during this time = 0.3.6
    • Snapshot for Release 1 development post installation to Production, during  development for the 2nd maintenance release, the 4th snapshot during this time = 1.2.4
    • Snapshot for Release 1 development post installation to Production, in a workspace taken from a branch at 1.2.4, 1st snapshot in this branch= 1.2.4.1
  • Once there are multiple releases and multiple workspaces, this might need to be expanded to incorporate that. A possible approach would be going to other snapshots for subsequent releases (normally with a major scope change) or starting over from a snapshot of choice from a prior release.
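
The dotted scheme above can also be compared programmatically. Here is a small sketch in plain JavaScript (the function name is my own) that orders snapshot names such as "0.3.6", "1.2.4" and "1.2.4.1":

```javascript
// Compare two dotted snapshot names numerically, segment by segment.
// Missing segments are treated as 0, so "1.2.4" sorts before "1.2.4.1".
function compareSnapshots(a, b) {
  var pa = a.split(".");
  var pb = b.split(".");
  var len = Math.max(pa.length, pb.length);
  for (var i = 0; i < len; i++) {
    var x = Number(pa[i] || 0);
    var y = Number(pb[i] || 0);
    if (x !== y) {
      return x < y ? -1 : 1;
    }
  }
  return 0;
}
```

For example, compareSnapshots("0.3.6", "1.2.4") returns -1 and compareSnapshots("1.2.4", "1.2.4.1") returns -1, matching the ordering the scheme implies.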

Environment Variables

Environment Variables Naming Pattern: Coming Soon

Process App Names

  • Name your process application after the main process in the PA, or the business term/purpose for the PA
  • Don't use the words "Process Application" in the PA's name

Toolkit Names

  • Name the toolkit after the utility/services it provides
  • Add the word "Toolkit" or "Framework" to the name, so its export can be differentiated from process applications

Process App & Toolkit Names

  • Don't use very long names; try to keep them under 64 characters
  • Use white space between words to improve readability
  • Avoid abbreviations (this is what acronyms are for), except common ones that make the name shorter
  • Put additional information in the Description field
  • Don't put a version number in the name (this is what snapshots are for), unless you want to call attention to a major change in the solution (like Axis2 vs. Axis)

Diagram

The names of model artifacts should be shortened to be readable on the diagram, but they must still provide an explanation of what the attached artifact does.

Specific Conventions

Logging.  For most logging-related library items, the words “Log” or “Logging” should appear in the name.
Layouts.  Layout names will generally include the name of the specific type of layout, e.g. a coach layout will have the word “Coach” in it.  Again, this is not necessary, but it is useful.
Variable Types.  Variable types must be a single word: no spaces and only a limited selection of special characters (0-9 and _ are allowed). Variable types should always begin with a capital letter. Usually new types are defined as complex objects (structures), and the standard for such types is to capitalize them.
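
These rules can be captured in a simple check. A sketch in plain JavaScript (the function name is hypothetical; the rules are the ones just stated):

```javascript
// A variable type name is valid if it is one word starting with a capital
// letter and containing only letters, digits 0-9, and underscores.
function isValidTypeName(name) {
  return /^[A-Z][A-Za-z0-9_]*$/.test(name);
}
```

For example, isValidTypeName("SalesforceOpportunity") is true, while isValidTypeName("sales opportunity") is false.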
UCAs.  UCAs that will be used as Events in a BPD should be named with the word “Event” in the name.  For example, “New Order Event”.  The services which implement these UCAs should be labeled as “Implement New Order Event” for example.  UCAs which are not tied to a BPD implementation should not include the word “Event” in the name.
UCAs require attached Services – and it’s best to follow a convention where the name of the Service is “the same” as the name of the UCA. While it’s true that multiple UCAs can be attached to the same Service, it’s generally easier to think of them in terms of a paired Service and UCA.
Decision Points.  Decision points, whether in BPDs or Services, should be in the form of a question.  All of the lines coming out of the decision should be answers to the question.
Lines coming out of decision points should have labels that indicate the condition under which the path is taken. Labels such as "yes" and "no" require the reader to trace back to the decision gateway, so they should generally be avoided in favor of something more descriptive like "loan exceeds $100".

3. Reuse

Design Pattern

Reuse is recognized as a fundamental best practice within the IT industry and has been practiced from the earliest days of programming. This basic principle is the basis of many other best practices including modularity, loose coupling, high cohesion, information hiding and separation of concerns.
The idea is that a solution should be comprised of reusable modules instead of replicating complete sections of the solution no matter how large or small. Each module is written once and maintained in one place. If another aspect of the solution needs the same functionality then it can simply delegate to the module that provides that functionality. This avoids redundancy within the solution, improves maintenance time, and promotes consistency.
Within IBM BPM, reuse is achieved by wrapping reusable aspects of the solution within a service. A library of reusable services may be developed that can simply be dragged onto other service or BPD diagrams as many times as required. Many other design patterns are based upon this idea. For example, the task service design pattern illustrates a specific example of a reusable service.
Services may be used to wrap other model artifacts, such as coaches and server script, that cannot be reused directly without introducing redundancy. The coach service design pattern and data access service design pattern provide good examples of this.
Generally it is a good idea to avoid exposing JavaScript to business users wherever possible. Wrapping server script within a reusable service hides the complexity of the JavaScript and provides a simple component with which the business user is more familiar.

 

4. Constructor Design Pattern

 

Use the constructor design pattern to create and initialise the data in your business objects consistently, in one single place, with a reusable component within a process application or toolkit. Adopt this approach instead of having multiple code locations for initialising and setting default values for business objects. Typically, business object initialisation is done using local private JavaScript blocks, which makes the code difficult to manage and update and leads to errors/defects that are difficult to trace and fix.
This pattern is a specialisation of the standard Singleton Design pattern.
The main parts of this pattern are:
  • Define a Business Object variable
  • Create a general system service to be used as the singleton constructor
  • Declare the target Business Object as an input, output and private variable
  • Model the General System Service
  • Create an initialisation wrapper service that calls the constructor
The example below creates a constructor for a list of Job Business Objects.

Define a Business Object variable

In this case, I’m using a complex variable called Job
The Business object consists of three parameters – JobDescription, JobStatus, and JobCompleted
(Screenshot: the Job Business Object's details)

Create a general system service to be used as the singleton constructor

Create a service of type “General System Service” that will be used to instantiate the Business Object and set default values, if required.
I've used an underscore (_) prefix in the name to indicate that this service is a helper or utility service that should only be used within other services, and never directly in a human service or BPD activity. This makes it easy to spot misuse and to ensure process developers are following the agreed design guidelines. It also makes it easier to unit-test services based on different implementation and usage types.
I created _constructorListOfJobs.
(Screenshot: overview of the constructor service)

Declare the target Business object as an input, output and private variable.

If your variable is a list, also declare an integer input for the size of the list to be created and populated with default values.
The input and output variable should be the same Business Object type and have a descriptive name.
The private variable should be the same Business Object type as the input and output variables, called something like “_constructor”, using a naming convention to indicate this is only ever to be a private variable.
Select the “Has Defaults” check box for the private variable to auto-generate the script to create and initialise the Business object.
See screenshots below for details.
(Screenshots: the constructor service's variables)

Model the General System Service

Create a script block that contains the default values for this type of Business Object.
In addition, you should add error handling and logging to catch any errors. Typical errors would be initialisation script errors occurring after the modification of the Business Object, for example renaming a parameter.
Usually, the "Has Defaults" auto-generated script updates in sync with any changes to the Business Object. If it does not for whatever reason, simply uncheck and re-check the check box to regenerate it.
If required, make the necessary changes to the default-values initialisation script block. Any BPD or service that requires a variable of this Business Object type to be created, initialised and possibly given default values will then automatically pick up these changes.
Always nullify private variables for efficient memory management.
I’ve used two exits as this is consistent with the developer guidelines agreed on the project for modelling utility and helper services. This simplifies the models in parent services and makes the flow logic clear when a system error, as opposed to business exception, has occurred.
(Screenshot: the constructor service's flow)

Create an initialisation wrapper service that calls the constructor

Create a wrapper service that encapsulates and calls the constructor. This also has the benefit of being able to use the “Where Used” feature on the modular wrapper service to clearly see references to other services and BPDs using this particular Business Object initialisation logic, instead of trawling through individual private JavaScript blocks.
(Screenshots: the constructor wrapper service)
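
Stripped of the tw.* API, the constructor's shape can be sketched in plain JavaScript (the Job fields are from the example above; the default values are illustrative assumptions, not taken from the original):

```javascript
// Construct a single Job with default values in one place
// (the defaults shown here are assumed for illustration).
function constructJob() {
  return {
    JobDescription: "",
    JobStatus: "New",
    JobCompleted: false
  };
}

// Construct a list of Jobs of the requested size, mirroring the integer
// "size" input described above for list variables.
function constructListOfJobs(size) {
  var jobs = [];
  for (var i = 0; i < size; i++) {
    jobs.push(constructJob());
  }
  return jobs;
}

var jobs = constructListOfJobs(3);
```

Every caller now receives identically initialised Jobs, which is the point of keeping construction logic in a single reusable place.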


  
5. Loop Design Pattern

There are two ways to implement loops within services: write the loop entirely in JavaScript, or model it diagrammatically.
(Diagram: a loop modelled diagrammatically)
The preceding image illustrates a loop implemented diagrammatically. Some of these steps use small snippets of JavaScript, but these are very simple constructs that business users can easily understand. The loop requires a counter to indicate the number of times the loop has iterated. An Integer may be used for this purpose, set to zero during the initialisation of the loop. Any other variable initialisation may also be performed during this step. For example:
tw.local.count = 0;
tw.local.policyDebt = 0;

Loops are typically used to iterate over a list variable. The list may be empty, so the loop exit criteria must be checked before looping commences. The decision gateway Yes path will require a condition to be specified. For example:
tw.local.count < tw.local.claims.listLength
Some action will be performed each time round the loop. Typically, this may be a Service, a Server Script or a combination. The list variable and counter may be used together to identify the next item within the list. This may be mapped to either an input or an output parameter of a service, or directly within a Server Script. For example:
tw.local.policyDebt += tw.local.claims[tw.local.count].debt;
Before re-entering the decision gateway the counter is incremented. This either locates the next item within the list or triggers the exit condition. For example:
tw.local.count++;
The following code segment illustrates a loop implemented entirely in JavaScript:

tw.local.policyDebt = 0;
for (var count = 0; count < tw.local.claims.listLength; count++) {
    tw.local.policyDebt += tw.local.claims[count].debt;
}

The advantages of the JavaScript approach are:
  • Performance. There is a minimal run-time performance overhead when using the diagrammatic approach.
  • Speed of development. An experienced JavaScript programmer may prefer this approach as it is quicker to implement.
The advantages of the diagrammatic approach are:
  • Easy for business users to understand, i.e. they don't have to become skilled JavaScript programmers.
  • Easy to debug. The process inspector allows BPM developers to step through each step within the loop and examine variables. Conversely, server scripts are executed atomically, so it is not possible to step through a loop written entirely in JavaScript.
  • Avoids infinite loops. It is possible to define a loop, either in JavaScript or diagrammatically, whose exit criteria are never satisfied, i.e. it loops forever. IBM BPM provides a facility to detect infinite loops. With diagrammatic loops IBM BPM can stop services running infinite loops via the admin console, whereas the only way to stop an infinite loop written entirely in JavaScript is to restart the process server.
  • Allows nested services to be used within the loop. This is also technically possible from JavaScript, but it has drawbacks: the syntax is not business-user friendly, and services invoked from JavaScript cannot contain coaches.

Additional Notes

If you run a simple test you may see an exponential performance impact as you increase the number of loops. However, this is likely due to the structure of your test.
When you add even a small operation, say a 20ms unit of work, inside the illustrated test with a single loop of size 100, the results are no longer meaningfully different between scripting and diagramming your loop: you'd have 210s (canvas) vs. 200.055s (script). For larger units of work, the difference approaches zero and becomes negligible.
Unless the unit of work inside the loop is insignificant (less than 10ms) and the size of the loop is significant (larger than 100), the performance difference between diagrammatic and scripted loops is negligible. On the other hand, there will always be the JS sync issue in anything prior to TW7.
A script loop only makes sense for a very small loop size and/or a very small unit of work inside the loop.
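
The trade-off can be seen with simple arithmetic. A sketch in plain JavaScript (the 1ms per-iteration overhead is an assumed figure for illustration, not a measurement):

```javascript
// Total loop cost = iterations * (per-iteration overhead + unit of work).
function totalCostMs(iterations, overheadMs, unitOfWorkMs) {
  return iterations * (overheadMs + unitOfWorkMs);
}

// Trivial unit of work: the (assumed) overhead is 100% of the total cost.
var overheadDominated = totalCostMs(100, 1, 0);
// A 20ms unit of work: the same overhead is now under 5% of the total.
var workDominated = totalCostMs(100, 1, 20);
```

As the unit of work grows, the fixed per-iteration overhead becomes an ever smaller fraction of the total, which is the point made above.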


6. Constants Pattern

Overview
Teamworks does not provide a model artefact specifically designed to maintain constants used within the Solution. Constants are used to promote consistency, improve maintenance and reduce typographical errors. Exposed Process Values (EPVs) are intended to be used to allow business users modify the parameters affecting business rules within the solution. However, EPVs may also be used to maintain constants.
For each EPV variable the External Name, Variable Name, External Description and Default Value should all be identical, as illustrated in Figure 1. Constants should not be directly exposed within the user interfaces, use localization resources instead.
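Reading such a constant from server-side JavaScript goes through the tw.epv namespace. In the sketch below the EPV name (Constants) and key (MAX_RETRIES) are invented examples, and tw is mocked so the snippet is self-contained; in a real service the runtime provides tw.epv once the EPV is linked to the process application.

```javascript
// Mocked tw namespace; the EPV name "Constants" and the key
// "MAX_RETRIES" are illustrative assumptions, not shipped artefacts.
var tw = { epv: { Constants: { MAX_RETRIES: "3" } } };

// EPV values arrive as configured strings; convert where a number
// is expected so downstream comparisons behave as intended.
var maxRetries = parseInt(tw.epv.Constants.MAX_RETRIES, 10);
```

Centralizing the lookup this way means a renamed or retuned constant is changed once in the EPV, not hunted down across scripts.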





[Figure 1: Constants pattern EPV definition]

 

 Source : IBM

IBM BPM - Functional Architecture

Design Pattern

IBM BPM provides business process governance by leveraging existing legacy systems and data sources within organisations. IBM BPM is not intended to be the master of record nor to replace existing systems and data sources; instead it integrates with these legacy assets, as illustrated in Figure 1 (left-hand side). IBM BPM is a server-side application and provides its human interface by leveraging thin-client web browser technology supported by modern desktop computers and mobile devices. Figure 1 (right-hand side) illustrates how an IBM BPM solution may be divided into a number of architectural layers. Figure 2 (left-hand side) illustrates where various model artefacts reside within each layer of the architecture. Each layer is responsible for a specific aspect of the solution. These model artefacts are described within other design patterns.
[Figure 1: IBM BPM layered architecture]
The basic premise of these layers is based upon the de-facto Model-View-Controller design pattern that has been widely adopted throughout the IT industry. The Business Process layer is responsible for enforcing business process governance, the View layer is responsible for providing the user interface, and the Business Object Model (BOM) layer is responsible for providing a consistent means for accessing business data (regardless of where the data originated). The Business Object Model layer provides a common definition of business entities, their relationships and behaviour. The Integration layer is responsible for communicating with a variety of legacy systems and data sources in order to retrieve and store data. The Integration layer is responsible for interfacing with external systems and data sources using appropriate technologies (such as JDBC, Web Services, and Java connectors) and shields the other layers from having direct exposure to these interfaces.
Clients are encouraged to assemble BPM project teams with individuals from both the business and IT. Clients often ask which aspects of the solution the business should be responsible for and which aspects IT should be responsible for. A common assumption is that it should be possible to draw a horizontal line within the layered architecture to indicate which layers belong to the business and which belong to IT. In reality both parties are responsible for all layers within the architecture, but the emphasis changes within each layer, as illustrated in Figure 2 (right-hand side). The diagram illustrates the division of responsibility with a diagonal line that passes through all the layers. The business has greater responsibility within the layers towards the top of the architecture and IT has greater responsibility within the layers towards the bottom of the architecture.
[Figure 2: model artefacts and division of responsibility within the layered architecture]
Figure 3 illustrates how data is passed between the layers (left-hand-side) and how data is transformed between the layers (right-hand-side). Each legacy asset usually has its own definition for representing business data. The Integration layer shields the other layers from these different data models and is responsible for transforming business data into the common Business Object Model. Similarly, the View layer may need to provide different ways of presenting data based upon the common Business Object Model. The View layer is responsible for translating data between the Business Object Model and the data structures required to support the various user interfaces. IBM BPM combines business data with coach definitions and automatically transforms it into HTML pages that are displayed within the web browser and vice-versa.
[Figure 3: data flow versus data transformation between the layers]

 


Reference Architecture

Purpose

The reference architecture depicts typical cases of a functional architecture in traditional BPM projects. It can be taken as a reference, guideline, and best practice for creating the application architecture on any BPM development project. There will never be a 100% pre-formed solution; this may apply to 80% of cases, and case-dependent adjustments are recommended.

 


Architecture Overview

[Figure: BPM reference architecture]
The blue parts depict what is usually part of the decision process of a BPM project.
The gray portion shows the pieces of the architecture that are usually out of scope for a BPM project and are mostly part of an Enterprise Architecture initiative.
Most scenarios involve:
  • BPM itself with either coaches or a third party UI.
  • A shared service environment (ESB)
  • Shared data, which can either be enterprise master data management systems or simply other applications which have data that need to be accessed by one or more external systems
The reference architecture is divided into 4 layers:

User Interface (UI)

Visualization layer.
Mostly coaches are used, so this layer can almost be neglected, because coaches interact with the process out of the box.
A commonly used alternative is a custom UI, e.g. for mobile devices, that interacts with BPM using the REST API.

Application (App)

The application layer holds the logic for each business application, whether it is process oriented or not.
Business logic should always be held solely in the application layer and NOT distributed. Common mistakes are to put business logic in the UI layer or even in the ESB.
If external systems interact with BPM, this can happen in different ways.
BPM can expose a service, e.g. a web service, which would be called through the ESB.
If the actual application is a combination of an external system together with BPM, it makes sense to interact using the web services API.

Integration – Enterprise Service Bus (ESB)

The ESB, or integration layer, is the portion of an architecture that integrates an application with other applications.
It is important to understand that an ESB is not a product. An ESB is a concept: a single instance (a bus) that exists within a company and enables you to access other enterprise services or data.
An ESB can be implemented in different ways.
A very common and recommended way is to use existing products such as WebSphere MQ, WebSphere Message Broker (MB), or WebSphere ESB. However, it is not uncommon for clients to have their own implementation of an ESB, often even as a custom Java implementation.

Canonical Model

An ESB often implements the recommended pattern of a “canonical model”. This simply means that the enterprise data model “lives” on the ESB. Ideally, every application would implement this model. However, due to historical growth and many other reasons, applications are often not able to do that or to change their model to match it. For every service call that an application makes TO the ESB, the service data model gets transformed into this standard model. From there it gets transformed into the target data model (if not the same). Through this, every transformation has to be done only once, which is far more flexible for changes and enhancements than point-to-point transformations, where every service call would have to be maintained manually.
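The transform-once idea can be sketched with two small mapping functions. All field names below (custNo, customerNumber, kundennummer) are invented for illustration; the point is only the shape of the pattern.

```javascript
// Source system model -> canonical model (written once, lives on the ESB).
function toCanonical(src) {
    return { customerNumber: src.custNo, customerName: src.name };
}

// Canonical model -> target system model (also written once).
function toTarget(canonical) {
    return {
        kundennummer: canonical.customerNumber,
        name: canonical.customerName
    };
}

// With N systems this needs roughly 2N mappings (in and out of the
// canonical model) instead of N*(N-1) point-to-point transformations.
var target = toTarget(toCanonical({ custNo: "C-42", name: "Acme" }));
```

Adding a new system then means writing one pair of mappings against the canonical model, never touching the existing systems' transformations.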

IBM BPM - Coach Views Separation of Concerns

The following good practices are related to ensuring the right people are focused on the right things.

Good practice – Divide the labor when you author custom coach views

Coach views are reusable user interface controls in the coaches framework of IBM® Business Process Manager (BPM). IBM supplies a number of stock coach views, but you can create your own coach views for your own purposes. For these custom coach views, a good practice is to distinguish between atomic coach views and composite coach views:
  • Atomic coach views are coach views that are written with HTML and JavaScript. These coach views should be authored by someone on your team who is skilled in these languages. Everyone else on the development team can reuse atomic coach views that the skilled developer authored. Put these coach views into one toolkit for easy reuse.
  • Composite coach views are coach views that are built from other coach views, for example a date-range coach view or user-address coach view. These coach views can be authored by team members who do not have HTML or JavaScript skills because authoring these coach views involves laying out and configuring other coach views.
From a division-of-labor perspective, have one full- or part-time UI expert on the team to build the relatively rare atomic coach views; otherwise, the rest of the team can build many more reusable, composite coach views.
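To make the atomic/composite distinction concrete, here is a minimal sketch of the JavaScript side of an atomic coach view: a change event handler that normalizes the bound value. In Process Designer this handler body would go into the view's behavior section, and this.context would be supplied by the coach framework; the mock context below is an assumption added purely so the sketch runs on its own.

```javascript
// Sketch of an atomic coach view "change" handler. The context object
// here is a minimal mock of the coach framework's binding API
// (get/set on the bound value); it is NOT the real framework.
var view = {
    context: {
        binding: {
            _value: "",
            get: function (key) { return this._value; },
            set: function (key, value) { this._value = value; }
        }
    },
    // Normalize user input to upper case whenever the bound value changes.
    change: function (event) {
        var raw = this.context.binding.get("value");
        this.context.binding.set("value", String(raw).toUpperCase());
    }
};

// Simulate the framework delivering a user edit.
view.context.binding.set("value", "abc");
view.change({});
```

Only the handler body is the skilled developer's work; composite coach views reuse controls like this one without touching HTML or JavaScript.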
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


 

IBM BPM - Security, Topology, Installation, Configuration, and Migration

1. Good practice – Plan your release-to-release migration

Before you migrate from a version or release to another version or release, for example from V8.0.x to V8.5.x, in any production environment, make sure that you have migrated your staging or test environment and have tested your applications in the new environment.
Migrate to a new version for one or more of the following reasons:
  • You want to improve performance.
  • You need fixes that are in the new version.
  • You want to use new features that are provided in the new version.
  • Your existing product version is going out of service.
Test your migration procedures for one or more of the following reasons:
  • You must estimate in advance the amount of downtime that will be needed for migration. An estimate of downtime cannot be created without testing the migration procedures.
  • You want to make sure that the new version improves performance.
  • You want to make sure that the fixes you require are in the new version and work as expected.
  • You want to make sure that your existing applications work correctly in the new version.
For more information, read Planning a migration.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


2. Good practice – Specify configuration values in 100Custom.xml

There are a number of XML-based configuration files for IBM® Business Process Manager that you should never edit directly:
  • 99Local.xml
  • 00Static.xml
  • 50AppServer.xml
  • 60Database.xml
  • 80EventManager.xml
  • 98Database.xml
To ensure that you do not lose changes when you migrate to a new release, always edit the configuration values in 100Custom.xml instead of editing the configuration values in these files directly.
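As an illustration of the override mechanism, a 100Custom.xml fragment mirrors the structure of the file whose value it overrides and uses merge attributes to control how it is combined with the shipped defaults. The specific setting below is a placeholder chosen for illustration; consult your own 99Local.xml for the actual element paths.

```xml
<properties>
    <!-- Mirrors the structure of 99Local.xml; the merge attributes
         control how this fragment is combined with the defaults. -->
    <server merge="mergeChildren">
        <!-- Placeholder: replace the default value of one child element. -->
        <environment-name merge="replace">MyCustomEnv</environment-name>
    </server>
</properties>
```

Because only the overridden values live in 100Custom.xml, a product upgrade can replace the shipped files without losing your customizations.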
Applicable editions: Express, Standard, and Advanced
Applicable releases: All

3. Good practice – Use an offline process server for production

In IBM® Process Center, you can register online process servers and offline process servers. Online process servers are convenient for easily deploying snapshots to them by interactively using the Process Center user interface. However, for security reasons, it is a good practice to register the process server cell that runs your processes in production as an offline process server. By being offline, it prevents developers from being able to access and change the process using Process Designer’s inspector. Furthermore, being offline enables the Process Center and your mission-critical production server to be on different networks.
An acceptable alternative to using an offline Process Server for production is to have a separate Process Center just for online deployment to production. In this scenario, no Process Designer users should be given security permission to access the Process Center.
For information about how to deploy to an offline process server, see Installing snapshots on offline process servers.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All

4. Good practice – Use the rolling upgrade option when you update IBM BPM

If you install IBM® Business Process Manager (BPM) fix packs V7.5.1.2, V8.0.1.2, V8.5.0.1, or upgrade to V8.5.5 or V8.5.6 from V8.5.0.1 or V8.5.5, you can use the rolling upgrade option. By using the rolling upgrade approach, you can incrementally upgrade one process server at a time, starting with test, then staging, and finishing with production. The final step is to upgrade your IBM Process Center and desktop tools. The rolling upgrade approach is safer because you can certify one cell at a time before upgrading the next cell, and it requires less downtime for processes that are in production because you don't have to wait for all the servers to be upgraded and certified to continue in-production processes.
Note, however, that in V7.5.1.2 your process servers must be offline, while in subsequent releases they can remain online. In all releases, you might not be able to debug your process applications by using IBM Process Designer until both IBM BPM Process Server and Process Designer are at the same level.
For more information, see Performing a rolling upgrade.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All

5. Good-practice resource – Implement the appropriate IBM BPM production topology

Are you an IT architect or IT specialist who wants to understand, select, and implement the appropriate production topologies for an environment? If so, follow the step-by-step instructions to build those topologies in the appropriate information:
IBM Business Process Manager Version 8.0 Production Topologies – This IBM® Redbook® describes how to build production topologies for IBM Business Process Manager (BPM) V8.0 and is an update of the existing book IBM BPM V7.5 Production Topologies, SG24-7976.
Business Process Management Deployment Guide Using IBM Business Process Manager V8.5 – This IBM Redbook provides an introduction to designing and building IBM Business Process Manager V8.5 environments. It introduces the changes and new features in IBM BPM V8.5 and provides an overview of the basic topology and components. This book also provides an overview of a consolidated migration approach that was introduced in V8.5.
Planning your network deployment environment – This topic specifically refers to the IBM Business Process Manager V8.5 production topology.
Applicable editions: Express, Standard, Advanced
Applicable releases: All

6. Good-practice resource – Secure your IBM BPM environment

It is important to secure your IBM® Business Process Manager environment.
If you are on IBM BPM V7.5.1, consult the IBM Redbooks® publication IBM Business Process Manager Security: Concepts and Guidance, which provides information about security that concerns an organization’s business process management (BPM) program, common security holes that often occur in this field, and techniques for rectifying these holes. This book documents preferred practices and common security hardening exercises that will help you achieve a secured IBM BPM installation.
If you are on IBM BPM V8.5.5, see Application security and Creating a secure environment.
If you are on IBM BPM V8.5.6, see Application security and Creating a secure environment.
Applicable editions: Express, Standard, Advanced
Applicable releases: All

7. Good-practice resource – Use the IBM Business Process Manager Interactive Installation and Configuration Guide or the Interactive Migration Guide

The IBM Business Process Manager Interactive Installation and Configuration Guide takes you through the steps for installing and configuring IBM Business Process Manager (IBM BPM) by using installation and configuration rules and considerations that are described in other topics in the documentation.
If you are migrating business data and applications from a previous version, use the IBM Business Process Manager Interactive Migration Guide instead. The Interactive Migration Guide generates instructions for a complete migration, including installing and configuring IBM BPM.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


IBM BPM - Performance

1. Good practice – Avoid excessive use of server-side JavaScript

Avoid large server-side JavaScript blocks within BPDs and services, because JavaScript is interpreted and therefore is slower to process than other compiled mechanisms, such as Java™ code.
Furthermore, large JavaScript scripts often indicate that too much integration logic is being placed in the business process layer instead of having that logic encapsulated in Java code, in an Advanced Integration service if you are using IBM® Business Process Manager Advanced, or externally in an Enterprise Service Bus.
Beyond performance concerns, complex JavaScript runs the risk of infinite loops and other coding errors that are hard to triage and recover from.
Applicable editions: Express, Standard, Advanced.
Applicable releases: All


2. Good practice – Avoid large business objects in a process or service

Business processes in IBM® Business Process Manager should store only the data that is needed for the process or a service in the process as the process runs. Avoid large “cargo” object data being carried through the process because that data needs to be persisted as process instance state.
When large amounts of data are stored as variables, they use memory and disk space in the process database, require serializing and deserializing, and they will be copied on by-value invocations. According to the IBM Business Process Manager V8.5 Performance Tuning and Best Practices Redpaper, “In general, objects that are 5 MB or larger might be considered “large” and require special attention. Objects of 100 MB or larger are “very large” and generally require significant tuning to be processed successfully.”
One way to avoid carrying data is to use the Claim Check pattern; the large object is persisted in a separate system of record and only a reference to it is used in the process. Claim Check is an enterprise integration pattern (eaipatterns.com/StoreInLibrary.html). For information about using the claim check pattern with BPEL, see Improving application efficiency with the Claim Check pattern in WebSphere Integration Developer V6.2.0.1.
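A minimal sketch of the Claim Check pattern follows. The documentStore object stands in for a hypothetical system of record (in IBM BPM this would typically be an ECM system or a database reached through an integration service) and is mocked as an in-memory map so the sketch is self-contained.

```javascript
// Hypothetical system of record, mocked as an in-memory map.
var documentStore = {
    _docs: {},
    _nextId: 1,
    put: function (doc) {
        var id = "DOC-" + this._nextId++;
        this._docs[id] = doc;
        return id;                  // the "claim check" reference
    },
    get: function (id) { return this._docs[id]; }
};

// Check in the large payload; the process instance carries only the key.
var largePayload = { pages: new Array(1000).join("x") };
var claimCheck = documentStore.put(largePayload);

// Later, only the step that actually needs the data redeems the check.
var retrieved = documentStore.get(claimCheck);
```

The process variable holds a short string instead of megabytes of state, so serialization, persistence, and by-value copies all stay cheap.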
In addition, retrieve data only when it is needed, and carry data for only as long as it is needed. Consider the following examples:
  • A process refers to a customer and displays the customer’s address and order history at a point in the process. If the process does not need this information elsewhere, it should not be stored in the process; the information should be retrieved from the system of record as reference data before it is displayed.
  • A call to a web-service brings in 100 fields of data. The process needs only 2 fields of this data. Ensure that the rest of the data that is retrieved from the call is not stored and that the variables are nulled after use so the JVM can do garbage collection.
  • A process requires a file to be uploaded by a user and added to an Enterprise Content Management system. This scenario is best handled by using the Document List coach view which allows uploading of a document directly from a user’s machine without using an intermediate process or service variable to hold the contents of the file.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


3. Good practice – Avoid multiple sequential system lane activities

In IBM® Business Process Manager, minimize the extra resources that are needed for multiple system lane transitions.
Each system lane activity is considered a new Event Manager task, which adds a task transition in IBM Process Server. These task transitions are expensive. If your business process definition (BPD) contains multiple system lane service tasks in a row, use one system lane task that wraps the others to minimize the extra resources that these transitions need.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


4. Good practice – Place Process Center near where your Process Designer users are physically located

If you have a geographically disperse business process management (BPM) development team, it is better to have regional IBM® Process Centers than to have a single shared Process Center that is accessed by remote Process Designer authoring clients.
The Process Designer interacts frequently with the Process Center for authoring tasks. For this reason, minimize network latency to provide optimal response times. Place the Process Center in or near the same physical location as the Process Designer users. Process Designer clients that connect to a very remote Process Center might experience slow performance and dropped connections.
To share content between Process Centers, exchange .twx files or, as of IBM Business Process Manager V8.0, see Registering Process Centers and sharing toolkits.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


5. Good practice – Purge data regularly

If your IBM® Business Process Manager data grows without bounds, it can over time lead to disk space issues and to performance issues as database queries take longer and longer to process. Therefore, it is important to have a policy of continuously removing older data. There are a number of places within IBM BPM where data is collected.
Consult the article Purging data in IBM Business Process Manager on developerWorks to understand where within the product data is collected and how to purge it. Decide which data to purge and how often.
Applicable editions: Express, Standard, Advanced
Applicable releases: All


6. Good practice – Use efficient SQL statements

When you write SQL statements directly in IBM® Business Process Manager, such as from server-side JavaScript in a service, ensure that you use typical SQL good practices for performance and resiliency.
Avoid using ‘SELECT * FROM’
When you use ‘SELECT * ‘ all the fields from the table or view are returned. If the table changes, the list of fields also changes, which might not be what you want, especially when you are mapping directly to a business object where the new names might not exist.
In addition, the asterisk (*) might return many columns in the results that are not needed but take up memory.
Instead, explicitly name the columns that you want returned, aliasing them where useful, for example “SELECT <column> AS <name>, <column> AS <name>”. The use of AS means that your business object names do not need to match the database column names and can be more meaningful. For example, CST1NAM could map to CustomerName.
Use parameter markers
Do not construct an SQL statement by concatenating variable names, such as
“SELECT CUST_NAME FROM CUST_MAST WHERE CUST_NUM=”+tw.local.customer_number
If you concatenate variable names, the database cannot precompile and cache the select statement because it changes every time.
Instead, use parameter markers, such as in “SELECT CUST_NAME FROM CUST_MAST WHERE CUST_NUM=?”, and then set the parameter value.
Better yet, wrap database access in an SOA service that is invoked remotely or by using an Advanced Integration service.
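In IBM BPM, the System Data toolkit's SQL integration services take the statement text with ? markers plus a list of SQLParameter objects. The sketch below mocks tw.object.SQLParameter and the listOf constructor (the real ones come from the toolkit at run time, and a real BPM list uses insertIntoList rather than push) so the shape of the pattern is visible on its own.

```javascript
// Mocked stand-ins for the System Data toolkit types; at run time the
// BPM runtime supplies tw.object.SQLParameter and tw.object.listOf.*.
var tw = { object: {
    SQLParameter: function () { this.value = null; this.type = null; },
    listOf: { SQLParameter: function () { return []; } }
} };

// Parameter markers keep the statement text constant, so the database
// can precompile and cache the access plan across calls.
var sql = "SELECT CST1NAM AS CustomerName FROM CUST_MAST WHERE CUST_NUM = ?";

var params = new tw.object.listOf.SQLParameter();
var p = new tw.object.SQLParameter();
p.value = "10042";   // e.g. tw.local.customer_number in a real service
p.type = "VARCHAR";
params.push(p);      // in real BPM: params.insertIntoList(params.listLength, p)
```

The sql string and the params list would then be mapped to the inputs of the SQL Execute Statement service, instead of concatenating the customer number into the statement.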
Applicable editions: Express, Standard, and Advanced
Applicable releases: All


7. Good practice – Use query tables for BPEL processes

For BPEL process list and human task queries in IBM WebSphere® Process Server and IBM Business Process Manager (BPM) Advanced, use composite query tables to achieve excellent response times for high-volume task and process list queries.
Particularly in production scenarios, use composite query tables instead of the standard Business Process Choreographer query APIs because composite query tables provide an abstraction over the actual implementation of the query and, therefore, enable query optimization. Furthermore, you can change composite query tables at run time without redeploying the client that accesses the query table.
For information about query tables, see Query tables in Business Process Choreographer.

Source : IBM


IBM BPM - Operations


1. Good practice – Have a plan for regularly upgrading IBM BPM

Like all software, IBM® Business Process Manager (BPM) is constantly improving. Every so often IBM “rolls up” (consolidates) all fixes into a new fourth digit fix pack or third digit modification release. These releases typically contain many critical fixes.
To avoid experiencing a serious issue that a fix was made available for in the last, say, 10 or 12 months, have a regular plan for updating your IBM BPM software within its current release. For example, you might have a plan that checks for the latest service level every six months and schedules an upgrade if one is available. If a new fix pack or modification release is not available at that time, you can apply the latest recommended fixes for your release instead. You can search for the list of fixes that IBM recommends for a given release on Fix Central.
Applicable editions: Express, Standard, and Advanced
Applicable releases: All

2. Good practice – Monitor the Process Federation Server embedded Elasticsearch service by using the Head utility

You can use the open source Head utility to browse your Elasticsearch cluster, view the status and topology of the Elasticsearch cluster, and perform index- and node-level operations. You can also use the Head utility to call the Elasticsearch RESTful API. The Head utility has been tested successfully on Firefox and Internet Explorer browsers in this configuration. Some issues have been seen in Chrome with requests other than HTTP GET.

Viewing Elasticsearch health and topology

In the Overview tab, you see the status of the cluster, the nodes, and the indexes. Here, you see the primary shards and the replicas for each node, the size of each index on the node, and the number of documents that have been indexed.
[Figure: Elasticsearch health and topology in the Head utility Overview tab]

Checking index status

To see more in-depth information about the index, click Info and select Index Status.
[Figure: checking index status in the Head utility]

Viewing index data

Select the Browser tab to view index documents and their data. To see the details and field values for a specific document, select an index to restrict the tabular view to show only documents from one index and then select a document within the tabular view.
[Figure: viewing index data in the Head utility Browser tab]

Making REST calls

You can make Elasticsearch REST calls in the Any Request tab, which is useful for verifying the queries that Process Federation Server makes.
[Figure: making REST calls in the Head utility Any Request tab]

The forwarder application

The Elasticsearch Head utility can work with the HTTP port of the Elasticsearch service. However, because some browsers do not support mixed content on the same page and because the HTTP port does not support authentication, authorization, or secure communications, keep the Elasticsearch HTTP port disabled (the default configuration). As a secure alternative to the Elasticsearch HTTP port, Process Federation Server provides an application, called the forwarder application, that securely forwards REST requests to the Elasticsearch service, acting like a proxy server. However, the forwarder application forwards the requests internally only to the Elasticsearch process that runs on the same server. Before the forwarder application accepts Elasticsearch HTTP requests, it checks the authorization of the user who sent the request.
Note: For single sign-on to work correctly, the host name and port in the URL must be the same for both the Head utility and the forwarder application.

Ensuring security credentials can be shared

To ensure that security credentials can be shared across the Elasticsearch Head utility and the forwarder application, run the Elasticsearch Head utility from the Process Federation Server that also hosts the forwarder application. To run the Head utility from your Process Federation Server Liberty server installation, download the Head utility and then repackage it into a deployable web application archive (WAR file) that can be run on the Process Federation Server Liberty server.
For more information about the Elasticsearch Head utility, see https://github.com/mobz/elasticsearch-head.

Packaging the Head utility

  1. On https://github.com/mobz/elasticsearch-head, click Download ZIP to download the elasticsearch-head-master.zip file.
  2. The files contained within the zip file have a directory structure that looks like this:
     – elasticsearch-head-master
       – index.html
       – dist
       – …
     Copy the contents of the extracted elasticsearch-head-master directory into a c:\temp\eshead\ directory. The structure will look like this:
     – eshead
       – index.html
       – dist
       – …
     Notice that all the files that were under the elasticsearch-head-master directory are now under the new eshead directory.
  3. Create a c:\temp\eshead\WEB-INF directory:
     – eshead
       – index.html
       – WEB-INF
       – dist
       – …
  4. Create a web.xml file in the c:\temp\eshead\WEB-INF directory:
     – eshead
       – index.html
       – WEB-INF
         – web.xml
       – dist
       – …
  5. Edit the web.xml file and copy the following text into the file:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
        <display-name>ESHead</display-name>
        <description>ESHead</description>
        <security-constraint>
            <display-name>ESHead</display-name>
            <web-resource-collection>
                <web-resource-name>UIContent</web-resource-name>
                <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <auth-constraint>
                <role-name>esadmin</role-name>
            </auth-constraint>
            <user-data-constraint>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>
        <login-config>
            <auth-method>BASIC</auth-method>
        </login-config>
        <security-role>
            <role-name>esadmin</role-name>
        </security-role>
        <welcome-file-list>
            <welcome-file>index.html</welcome-file>
        </welcome-file-list>
    </web-app>

  6. Create a .zip file of the c:\temp\eshead\ directory and call it ESHead.war. The zip file will have the following structure:
     – index.html
     – WEB-INF
       – web.xml
     – dist
     – …
     Notice that the structure does not have the elasticsearch-head-master directory.

Setting up the Head utility

To configure Process Federation Server Liberty, complete the following steps:
  1. Edit the server.xml file.
  2. Ensure that the forwarder application feature is enabled in the <featureManager> section:
    <feature>ibmPfs:federatedForwarder-1.0</feature>
  3. Ensure that the Elasticsearch HTTP port is disabled by updating the attributes of the Elasticsearch configuration section:
    http.enabled="false"
  4. Add authorized users, groups, or special subjects in the authorization-roles element. For example, to allow all logged-in users to administer, monitor, and search the Elasticsearch data, use the following configuration:
    <authorization-roles id="com.ibm.bpm.federated.forwarder.authorization">
        <security-role name="bpmadmin">
            <special-subject type="ALL_AUTHENTICATED_USERS" />
        </security-role>
        <security-role name="bpmmonitor">
            <special-subject type="ALL_AUTHENTICATED_USERS" />
        </security-role>
        <security-role name="bpmuser">
            <special-subject type="ALL_AUTHENTICATED_USERS" />
        </security-role>
    </authorization-roles>
  5. Copy the ESHead.war file into the pfs_install_root/usr/shared/apps directory on your Process Federation Server Liberty server.
  6. Add the following sections to the server.xml file. Note that the location is the name of the .zip file that you created previously; the default directory is the \usr\shared\apps directory. The security role that you define must be the same as the role that is defined in the web.xml file (in this example it is “esadmin”). You can also authorize a group or special subject instead of a user.
    <application id="ESHead" name="ESHead" type="war"
                 location="${shared.app.dir}/ESHead.war">
        <application-bnd>
            <security-role name="esadmin">
                <user name="admin" />
            </security-role>
        </application-bnd>
    </application>
  7. Ensure the user, group, or special subject is in the bpmadmin and bpmmonitor security roles of the com.ibm.bpm.federated.forwarder.authorization authorization element, as previously described. In this example, all authenticated users have access to the forwarder application, but only the admin user has access to the Head utility.
  8. Restart Process Federation Server Liberty.

Accessing the Head utility

  1. Go to https://<host>:<port>/ESHead.
  2. In the Head utility, enter the location of the administrative forwarder application, for example https://localhost:9443/elasticsearch-admin/.

 Source : IBM