Wednesday, September 10, 2014

Techniques to Minimize Memory Usage with IIB

Introduction

An IBM Integration Bus node will use a varying amount of virtual and real memory. Observed sizes for the amount of real memory required for an Integration Server vary from around a few hundred megabytes to many gigabytes. Exactly how much is used depends on a number of factors. Key factors are:
  • Integration flow coding – includes complexity and coding style
  • Messages being processed – size, type and structure of the messages
  • DFDL schema /Message sets deployed to the Integration Server
  • Level of use of Java within the Integration flows and setting of the JVM heap
  • Number of Integration flows that are deployed to the Integration Server
  • Number of Integration Servers configured for the Integration Node.
This article contains some practical recommendations to follow in order to reduce memory usage, often with dramatic results. They are split into two sections: Integration Flow Development Recommendations and Configuration Recommendations. The article is written with IBM Integration Bus as the focus; however, all of the techniques and comments apply equally to WebSphere Message Broker.
 

Integration Flow Development Recommendations

 

Introduction

At the core of all processing within an Integration flow is the message tree. A message tree is a structure that is created, either by one or more parsers when an input message bit stream is received by an Integration flow, or by the action of an Integration flow node.  A new message tree is created for every Integration flow invocation. It is cleared down at Integration flow termination.
The tree representation of a message is typically bigger than the input message, which is received as a bit stream into the Integration flow. The raw input message is of very limited value while it is in bit-stream format, so parsing it into an in-memory structure is an essential step that makes subsequent processing easier to specify, whether that is done with a programming language such as ESQL, Java or .NET, or with a mapping tool like the Graphical Data Mapper.
The shape and contents of a message tree will change over the course of execution of an Integration flow as the logic within it executes.
The size of a message tree can vary hugely, and it depends directly on the size of the messages being processed and on the logic that is coded within the Integration flow. Both factors therefore need to be considered: how messages are parsed, and how they are then processed by the Integration flow logic.

 

Message Processing Considerations

When a message is small, such as a few kilobytes in size, the overhead of fully parsing it is not that great. However, when a message is tens of kilobytes or larger, the cost of fully parsing it becomes significant. When the size grows to megabytes it becomes even more important to keep memory usage to a minimum. There are a number of techniques that can be used to keep memory usage down, and these are described below.
Parsing of a message always commences from the start of the message and proceeds as far along the message as is required to access the element that is being referred to through the processing logic (ESQL, Java, XPath or a Graphical Data Mapper map, for example). Depending on the field being accessed, it may not be necessary to parse the whole message. Only the portion of the message that has been parsed is populated in the message tree; the rest is held as an unprocessed bit stream. It may be parsed subsequently in the flow if there is logic that requires it, or it may never be parsed if it is not required as part of the Integration flow processing.
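As a minimal illustration of partial parsing, the hedged ESQL sketch below assumes an XMLNSC message whose body starts with an Order/Header structure; the element names are hypothetical:

-- Element names here are hypothetical, for illustration only.
-- This reference parses only as far as Header.Priority; the remainder
-- of the (possibly large) Order body stays as an unparsed bit stream
-- unless later logic refers to it.
DECLARE pri CHAR InputRoot.XMLNSC.Order.Header.Priority;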
If you know that an Integration flow needs to process the whole of the message, then it is most efficient to parse the whole message on the first reference to a field within it. To ensure a full parse, specify Parse Immediate or Parse Complete on the input node. Note that the default option is Parse on Demand.
Always use a compact parser where possible. XMLNSC for XML and DFDL for non-XML data are both compact parsers. The XML and XMLNS parsers are not. The MRM parser is also compact but has now been superseded by DFDL. The benefit of compact parsers is that they discard white space and comments in a message, so those portions of the input message are not populated in the message tree, keeping memory usage down.
For XML messages you can also consider using the opaque parsing technique. This technique allows named subtrees of the incoming message to be minimally processed: they are checked for XML completeness, but the subtree is not fully expanded and populated into the message tree. Only the bit stream for the subtree appears in the message tree, which reduces memory usage. When using this technique, however, you must not refer to any of the contents of a subtree that has been opaquely parsed.
If you are designing messages to be used with applications that route or process only a portion of the message then place the data that those applications require at the front of the message. This means less of the message needs to be parsed and populated into the message tree.
When the size of the messages is large, that is hundreds of kilobytes upwards, use large message processing techniques where possible. There are a couple of techniques that can be used to minimize memory usage, but their success will depend on the message structure.
Messages which have repeating structures, and where the whole message does not need to be processed together, lend themselves very well to these techniques. Messages which are megabytes or gigabytes in size and which need to be treated as a single XML structure, for example, are problematic, as the whole structure needs to be populated in the message tree and there is typically much less scope to optimize processing.
The techniques are:
  1. Use the DELETE statement in ESQL to delete already-processed portions of the incoming message.
  2. Use Parsed Record Sequence for record detection with stream-based processing, such as files (FileInput, FTEInput, CDInput) and TCPIP processing (TCPIPClientInput, TCPIPServerInput), where the records are not fixed length and there is no simple record delimiter.
A summary of these techniques is provided here to give you an idea of what they consist of, but for the full details you should consult the IBM Integration Bus Knowledge Centre.
 
DELETE Statement
Use of this technique requires that the message contains a repeating structure where each repetition, or record, can be processed individually or as a small group. This allows the broker to perform limited parsing of the incoming message at any point in time. In time the whole message will be processed, but not all at once, and that is the key to the success of the technique.
The technique is based on the use of partial parsing and the ability to parse specified parts of the message tree from the corresponding part of the bit stream.
The key steps in the processing are:
  • A modifiable copy of the input message is made but not parsed (note that InputRoot is not modifiable). As the copy of the input message is not parsed, it takes less memory than it would if it were parsed and populated into the message tree.
  • A loop and reference variable are then used to process the message one record at a time. 
  • For each record the contents are processed and a corresponding output tree is produced in a second special folder.
  • The ASBITSTREAM function is used to generate a bit stream for the output subtree. This is held in a BitStream element in a position that corresponds to its position in the final message.
  • The DELETE statement is used to delete both the current input and output record message trees when the processing for them has been completed.
  • When all of the records in the message have been processed the special holders used to process the input and output streams are detached so that they do not appear in the final message. 
If needed then information can be retained from one record to another through the use of variables to save state or values.
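The following hedged ESQL sketch shows the shape of the loop-and-DELETE part of the pattern only; it is not the full sample, which also uses ASBITSTREAM to accumulate the serialized output records as described above. It assumes an XMLNSC body named Data containing repeating Record elements, and all names are illustrative:

SET OutputRoot = InputRoot;  -- modifiable copy of the input tree

DECLARE recRef  REFERENCE TO OutputRoot.XMLNSC.Data.Record[1];
DECLARE doneRef REFERENCE TO recRef;
WHILE LASTMOVE(recRef) DO
  -- ... transform the current record and build its output subtree here ...
  MOVE doneRef TO recRef;               -- remember the record just processed
  MOVE recRef NEXTSIBLING REPEAT NAME;  -- step on to the next Record
  DELETE FIELD doneRef;                 -- free the processed record's tree storage
END WHILE;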
 
Parsed Record Sequence
This technique uses the parser to split incoming data from non-message-based input sources, such as the File and TCPIP nodes, into messages or records that can be processed individually. These records are smaller than the whole file or stream, which again allows overall memory usage to be reduced, substantially in some cases. It allows very large files, gigabytes in size, to be processed without requiring gigabytes of memory.
The technique requires the parser to be one of the XMLNSC, DFDL or MRM (CWF or TDS) parsers. It will not work with the other parsers.
In order to use this technique, the Record Detection property on the input node needs to be set to Parsed Record Sequence.
With this technique the input node uses the parser to determine the end of a logical record, which in this situation typically cannot be determined by a fixed length or a simple delimiter. When a logical record has been detected by the parser, it is propagated through the Integration flow for processing in the usual way.
 

Coding Recommendations

This section contains some specific ESQL and node usage coding recommendations that will help to reduce memory usage during processing.
Message Tree Copying
  • Minimize the number of times that a tree copy is done. This is the same consideration for any of the transformation nodes.
In ESQL this is usually coded as
SET OutputRoot = InputRoot; 
In Java it would be
   MbMessage inMessage = inAssembly.getMessage();
   MbMessage outMessage = new MbMessage();  // create an empty output message
   MbMessageAssembly outAssembly = new MbMessageAssembly(inAssembly, outMessage);
 
In an Integration flow combine adjacent ESQL Compute nodes. For example:
[Image: an Integration flow in which two ESQL Compute nodes are separated by a SAPRequest node]
In this example the two ESQL Compute nodes cannot be further combined into a single ESQL Compute node as there is an external call to SAP with the SAPRequest node.
Watch for this same issue of multiple adjacent Compute nodes when using subflows. An inefficient subflow can cause many problems within a flow and across flows; if an inefficient subflow is used repeatedly in an application, its inefficiency is replicated many times within the same Integration flow or group of Integration flows.
It is not so easy to combine an adjacent ESQL Compute node and a Java Compute node. Often the Java Compute node will contain code that cannot be implemented directly in ESQL. It may be possible to work the other way around and implement the ESQL code as Java in the Java Compute node. An alternative approach, if you are looking to combine into a Compute node, is to invoke the Java methods through an ESQL Procedure.
Consider using the Environment correlation name to hold data rather than using InputRoot and OutputRoot or LocalEnvironment in each Compute/Java Compute node.
This way a single copy of the data can be held. Be aware, however, that there are recovery implications. With the traditional approach of SET OutputRoot = InputRoot; the message tree is copied at each point at which the statement is coded, and should a failure occur and processing be rolled back within the Integration flow, the message tree is rolled back with it. When data is held in Environment there is a single copy of the data, and it is not backed out in the event of a failure. The application code in the Integration flow needs to be aware of this difference in behaviour and allow for it.
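As a hedged sketch of the idea (the element names are illustrative), an early Compute node can stash the parsed body once in Environment, and later nodes can then read and update that single copy in place rather than copying the tree from node to node:

-- First Compute node: hold one copy of the body in Environment
SET Environment.Variables.Data = InputRoot.XMLNSC;

-- A later Compute node: work directly on the single copy
SET Environment.Variables.Data.Order.Status = 'DISPATCHED';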
 
ResetContentDescriptor Node
  • Where there is a sequence of Compute node -> ResetContentDescriptor node -> Compute node, this can be replaced by a single Compute node that uses the ESQL ASBITSTREAM function and CREATE with PARSE, as sketched below.
  • A common node usage pattern is a combination of Filter nodes and ResetContentDescriptor nodes to determine which owning parser to assign. This can be optimised using a single Compute node with IF statements and CREATE with PARSE statements.
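A hedged ESQL sketch of the general shape follows, assuming the input arrived in the XMLNS domain and needs to be re-parsed in the XMLNSC domain (the job the ResetContentDescriptor node would otherwise do):

DECLARE ccsid INT InputRoot.Properties.CodedCharSetId;

-- Serialize the current message body back to a bit stream...
DECLARE bs BLOB ASBITSTREAM(InputRoot.XMLNS CCSID ccsid);

-- ...and re-parse it with the target (compact) parser
CREATE LASTCHILD OF OutputRoot DOMAIN('XMLNSC') PARSE(bs CCSID ccsid);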
 
Trace nodes
  • Be aware of the use of Trace nodes in non-development environments. A reference to ${Root} will cause a full parse of the message if the Trace node is active. Ideally any such processing is not on the critical path of the Integration flow and is only executed for exception processing.
 
Node Level Loops
  • Although loops at the node level are permitted within the IBM Integration Bus programming model, be careful about the situations in which they are used, or there can be a rapid growth in memory usage.
For example consider a schematic of an Integration flow which is intended to read all of the files in a directory and process them. 
 
[Image: schematic of an Integration flow that reads all of the files in a directory, looping at the node level so that when one file completes the next is read]
As each file is read and its processing completes, the End of File terminal is driven and the next file is read to be processed.
This is indeed what is needed, but with this implementation all of the state associated with the node (message tree, parsers, variables and so on) is placed on the stack and heap as the node is repeatedly executed. In one particular case, where several hundred files were being read in the same Integration flow execution, the Integration Server grew to around 20 GB of memory.
Using the alternate design below, the required memory was substantially less: around 2 GB compared with the previous 20 GB.
[Image: alternate Integration flow design in which a Compute node, Setting_FileRead_Properties, drives the File Read node using PROPAGATE, with state passed back through the Environment]
This technique uses the Environment to pass state from the File Read node (that is, whether the file has been completely read) back to the Compute node Setting_FileRead_Properties, so that another PROPAGATE to the File Read node can take place for the next file.
 
Optimise ESQL Code
  • Write efficient ESQL which reduces the number of statements required. This will increase the efficiency of the Integration flow by reducing the number of ESQL objects which are parsed and created in the DataFlowEngine. A simple example of this is to initialize variables on a DECLARE instead of following the DECLARE with a SET. Use the ROW and LIST constructors to create lists of fields instead of creating them one at a time, and use the SELECT function to perform message transformations instead of mapping individual fields.
  • When using the SQL SELECT statement, use WHERE clauses efficiently to help minimize the amount of data retrieved from the database operation. It is better to process a small result set than to have to filter and reduce a large one within the Integration flow.
  • Use the PROPAGATE statement where possible. For example, if multiple output messages are being written from a single input message, consider using ESQL PROPAGATE to drive the loop, as sketched below. With each iteration of the propagation, the storage from the output tree is reclaimed and re-used, reducing the storage usage of the Integration flow.
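A hedged sketch of the PROPAGATE pattern follows; the element and terminal names are illustrative, and it also shows a variable being initialized on its DECLARE. Each iteration rebuilds a small OutputRoot, propagates it, and allows the output tree storage to be reclaimed before the next pass:

DECLARE i INT 1;
DECLARE total INT CARDINALITY(InputRoot.XMLNSC.Data.Record[]);
WHILE i <= total DO
  CALL CopyMessageHeaders();  -- generated procedure in the Compute module
  SET OutputRoot.XMLNSC.Record = InputRoot.XMLNSC.Data.Record[i];
  PROPAGATE TO TERMINAL 'out';  -- the output tree storage is reclaimed after each propagate
  SET i = i + 1;
END WHILE;
RETURN FALSE;  -- everything has already been propagated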
 

Configuration Recommendations

 

Introduction

Whilst the major influence on the amount of memory used is the combination of the messages being processed and the way in which the flow is coded, the configuration of the broker runtime can also have a significant effect, and it should not be ignored. This section discusses the key considerations.
The Integration flow logic, and indeed the product code within an Integration Server, all use virtual memory. The message tree, variables, and input and output messages are all held at locations within virtual memory as they are processed. Virtual memory can be held in central, or real, memory; it can also be held in auxiliary memory, or swap space, if it is paged or swapped out, although it cannot be used in processing while it is paged out. Exactly where a page of virtual memory sits is determined by the operating system and not by IBM Integration Bus.
Note that Integration Servers which are configured differently and which have different Integration flows deployed to them will use different amounts of virtual and real memory. There is not one amount of virtual and real memory for every Integration Server.
For processing to take place a subset of the virtual memory allocated to an Integration Server will need to be in real memory so that data can be processed (read and/or updated) with ESQL, Java or Graphical Data Mapping nodes etc.  This set of essential memory is what forms the real memory requirement, or working set, of the Integration Server. The contents of the working set will change over time.
For processing to continue smoothly, all of the working set needs to be in real memory; if required pages have to be moved into real memory, this causes delays in processing. In a heavily loaded system the operating system has to manage the movement of pages very quickly. When the demand for real memory is much higher than what is actually available, processing for one or more components can be slowed down, and in extreme cases systems can end up thrashing and performing no useful work. It is therefore important to ensure that there is sufficient real memory available to allow processing to run smoothly. The first part of this is to make sure that all processing is optimized, which is what the Integration flow coding recommendations are intended to help with.
The real memory requirements of all of the Integration Servers that are running, plus that required by the bipservice, bipbroker and biphttplistener processes, form the total real memory requirement of the Integration Node. As Integration flows and/or Integration Servers are started and stopped this real memory requirement will change. For a given configuration and fixed workload the overall real memory requirement should remain within a narrow band, with only slight fluctuations. If the real memory requirement continues to grow there is most likely a memory leak. This could be in the Integration flow logic or, in some rare cases, in the IBM Integration Bus product itself.
Now that IBM Integration Bus uses 64-bit addressing, there are no virtual memory constraints on the deployment of Integration flows to Integration Servers in the way that there were when 32-bit addressing was the only option. This means that Integration flows can be freely deployed to Integration Servers as required.
The memory requirement of an Integration node is entirely dependent on the processing that needs to be performed. If all Integration Servers were stopped, the real memory requirement would be very small; if 50 Integration Servers were active and running inefficient Integration flows to process large messages or large files, the real memory requirement could be tens of gigabytes. Again, there is no fixed memory requirement for an Integration node. It very much depends on which Integration flows are running and the configuration of the Integration node.
 

Broker Configuration

The following configuration factors affect the memory usage of an integration server.
  • The deployment of Integration flows to an Integration Server
  • The size of the JVM for each Integration Server
  • The number of Integration Servers
We will now look at each of these in more detail.
 
Deployment of Message Flows to Integration Servers
Each Integration flow has a virtual memory requirement that is dependent on the routing/transformation logic that is executed and the messages to be processed. 
Using additional instances of a flow will result in additional memory usage, but not by as much as deploying a second copy of the same flow to another Integration Server.
The more different Integration flows that are deployed to an Integration Server, the more virtual and real memory will be used by that Integration Server.
There will be an initial memory requirement on deployment and a subsequently higher requirement once messages have been processed. Different Integration flows will use different amounts of additional memory. It is not possible to accurately predict the amount of virtual and real memory that will be needed for any Integration flow or message to be processed, so for planning purposes it is best to run and measure the requirement once the Integration flow has processed messages. After some minutes of processing, memory usage should stabilise, provided the mix of messages is not constantly changing (such as the size continually increasing).
If multiple flows have been deployed to the Integration Server, ensure that all of them have processed messages before observing the peak memory usage of the Integration Server.
When looking at the memory usage of an Integration Server, focus on the real memory usage: the RSS value on UNIX systems or the Working Set on Windows. This is the key to understanding how much memory is being used by a process at a point in time. There may be other pages sitting in swap space that were used at some point in the processing, possibly at start-up or when a different part of the message flow was executed, but due to the demand for memory they may well no longer be in real memory.
To understand how much memory processes on the system are using then use the following:
  • AIX - The command ps -e -o vsz=,rss=,comm= will display virtual and real memory usage along with the command for that process
  • Linux - The command ps -e -o vsz,rss,cmd will display virtual and real memory usage along with the command for that process
  • Windows - Run Task Manager then select View -> Select Columns -> Memory (Private Working Set)
 
Integration Server JVM Settings
In IBM Integration Bus V9 the default JVM settings for an Integration Server are a minimum heap of 32 MB and a maximum of 256 MB. For most situations these settings are sufficient. The same defaults apply in WebSphere Message Broker V8.
The amount of Java heap required is dependent on the Integration flows and in particular the amount of Java they use. This includes nodes which are implemented in Java, such as the FileInput, FileOutput, SAPInput and SAPRequest nodes. Given this, different Integration Servers may well require different Java heap settings, so do not expect to always use the same values for every Integration Server.
A larger Java heap may be needed if there is a heavy requirement from the Integration flow logic or the nodes used within it. The best way to determine whether there is sufficient Java heap is to look at the Resource Statistics for the Integration Server and observe the level of Garbage Collection.
For batch processing, low GC overhead is the goal; low would be of the order of 1%.
For real-time processing, low pause times are the goal; low in this context is less than 1 second.
As a guide, a GC overhead of 5-10%, or a pause time of 1-2 seconds, indicates that there is scope for tuning.
The Integration Server JVM heap settings can be changed with the mqsichangeproperties command. For example the command:
mqsichangeproperties PERFBRKR -o ComIbmJVMManager -e IN_OUT -n jvmMaxHeapSize -v 536870912
will increase the maximum JVM heap to a value of 536870912 bytes (512 MB) for the Integration Server IN_OUT in the Integration node PERFBRKR.
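You can check the value afterwards with the corresponding mqsireportproperties command, shown here against the same broker and Integration Server:

mqsireportproperties PERFBRKR -e IN_OUT -o ComIbmJVMManager -n jvmMaxHeapSize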
 
Numbers of Integration Servers
A variable number of Integration Servers is supported for an Integration node; no maximum value is specified by the product. Practical limits, particularly the amount of real memory or swap space, will limit the number that can be used on a particular machine.
Typically, large systems might have tens of Integration Servers. A broker with 100 Integration Servers would be very large, and at that point we would certainly recommend reviewing the policy used to create Integration Servers and assign Integration flows to them.
The number of Integration Servers in use does not affect the amount of virtual or real memory used by an individual Integration Server, but it does have a very direct effect on the amount of real memory required by the Integration node as a whole, so it is good to think about how many Integration Servers are really required.
An Integration Server is able to host many Integration flows; one Integration Server could host hundreds of flows if needed. Think carefully before doing this though: if the Integration Server fails, a high number of applications will be lost for a time, and the restart time for the Integration Server will be significantly longer than for one with only a few Integration flows deployed. If the Integration Server contains Integration flows that are seldom used then this is much less of an issue.
It is a good idea to have a policy controlling the assignment of Integration flows to Integration Servers. There are a number of schemes commonly in use; some examples are:
  • Only have Integration flows for one business application assigned to an individual Integration Server. [Large business applications may require multiple Integration Servers, dependent on the number of message flows for the business area.]
  • Assigning flows that have the same hours of availability together. It is no good assigning flows for the on-line day and flows for the overnight batch to the same Integration Server, as it will not be possible to shut down the Integration Server without interrupting the service of one set of flows.
  • Assign flows for high-profile or high-priority applications/services to their own Integration Servers, and have general utility Integration Servers which host many less important flows.
  • Assign Integration flows across Integration Servers (execution groups) so that the processing volume is spread evenly.
In a multi-node deployment, avoid the tendency to deploy all Integration flows to all nodes; have some nodes provide processing for certain applications only. Otherwise, with everything deployed everywhere, real memory usage will be large.
One additional point: assign Integration flows that are known to be heavy on memory usage to the minimum number of Integration Servers. This will reduce the number of Integration Servers that become large in size and require large amounts of real memory.
Whichever policy you arrive at, think ahead and ensure it will still work if hundreds more services are added. Sometimes people assign one flow or service to one Integration Server when there are only a few services in existence. This is something that you can live with in the early days, but by the time there are another 400 services it simply does not scale, and the demand for real memory becomes an issue.

Monday, July 21, 2014

Backup and Restore Configuration and Profiles in Websphere Application Server

1- Configuration Backup & Restore:

To backup the configuration of a node/profile, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./backupConfig.sh Zip_File_Name

backupConfig Command:

backupConfig command will backup the configuration of a node to a file.

Syntax:
backupConfig.sh backup_file [options]
Options:
-nostop
    Tells the backupConfig command not to stop the servers before backing up the configuration
-password password
    Specifies the password for authentication if security is enabled in the server
-username user_name
    Specifies the user name for authentication if security is enabled in the server
-profileName profile_name
   Defines the profile of the Application Server process in a multiple-profile installation

Eg: backupConfig.sh AppSrv01_Backup_Jul2013.zip -profileName AppSrv01 -nostop

restoreConfig Command:

In a similar way we also have the restoreConfig command, which restores the configuration of a node from a backup taken with the backupConfig command.

To restore the configuration of a node, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./restoreConfig.sh Zip_File_Name

Syntax:
restoreConfig.sh backup_file [options]

Options:
-nostop
    Tells the restoreConfig command not to stop the servers before restoring the configuration
-password password
    Specifies the password for authentication if security is enabled in the server
-username user_name
    Specifies the user name for authentication if security is enabled in the server
-profileName profile_name
   Defines the profile of the Application Server process in a multiple-profile installation

Eg: restoreConfig.sh AppSrv01_Backup_Jul2013.zip -profileName AppSrv01 -nostop

Note: Be aware that if you restore the configuration to a directory that is different from the directory that was backed up when you performed the backupConfig command, you might need to manually update some of the paths in the configuration directory.

2- Profile Backup & Restore:

To backup the profile configuration, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./manageprofiles.sh -backupProfile -profileName AppSrv01 -backupFile /opt/home/user/WAS_Backup/profile_backup/AppSrv01.zip

Syntax:
manageprofiles.sh -backupProfile -profileName profile_name -backupFile file_name

Note: When you back up a profile using the -backupProfile option, you must first stop the server and the running processes for the profile that you want to back up.

Similarly, we also have the option of -restoreProfile which restores the profile from the backup file.

To restore the profile configuration, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./manageprofiles.sh -restoreProfile -backupFile /opt/home/user/WAS_Backup/profile_backup/AppSrv01.zip

Syntax:
manageprofiles.sh -restoreProfile -backupFile file_name

To restore a profile, perform the following steps:

  1. Stop the server and the running processes for the profile that you want to restore.
  2. Manually delete the directory for the profile from the file system.
  3. Run the -validateAndUpdateRegistry option of the manageprofiles command.
  4. Restore the profile by using the -restoreProfile option of the manageprofiles command.

Source: WAS InfoCenter 7.0, Self-Knowledge.

Wednesday, July 16, 2014

Websphere Application Server - Certificate Expiration Monitor and Dynamic Run Time Updates

As should not be surprising, certificates expire. Should a certificate expire, SSL communication using that certificate will be impossible, which will almost certainly result in a system outage. WebSphere Application Server tries hard to prevent these outages, and when it cannot prevent them, it tries to at least warn you before they occur. The certificate expiration monitor task runs on a configurable schedule which, by default, is every 14 days. The Next start date field for the monitor is persistent in the configuration and is updated with a new date each time it runs. It will execute in the deployment manager process in a Network Deployment environment, or -- if standalone -- in the WebSphere Application Server base process.
When executing, the expiration monitor will search through all KeyStore objects configured in the cell, looking for any personal certificates that will expire within the expiration threshold (90 days being the default; this is configurable via a custom property). If it finds any, it will issue a notification warning of the impending expiration. Notifications are always sent to the serious event stream to all registered listeners. By default, this is the admin console and SystemOut.log. Notifications can also be sent via email using an SMTP server.
In addition to notifications, WebSphere Application Server will attempt to replace self-signed certificates before they expire. By default, the expiration monitor will execute the certificate replacement task (mentioned in the previous section) against any self-signed certificates 15 days before expiration (this is configurable). The task creates a new certificate using the certificate information from the old one, and updates every trust store in the cell that contained the old signer with the new signer certificate. By default, the old signer certificate will be deleted.
The expiration monitor marks any SSL configuration as "modified" whenever the monitor changes the key store or trust store referenced by the configuration. The configuration changes are saved once the expiration update task is completed, causing a ripple to occur throughout the runtime. The first thing that happens is the temporary disabling of SSL server authentication (for 90 seconds) to enable these changes to occur without requiring a server restart. In cases where you do not want this to occur, consider disabling the Dynamically update the run time when SSL configuration changes occur option located at the bottom of the SSL certificate and key management panel in the admin console.
Unfortunately, automatically replacing certificates is not a panacea. WebSphere Application Server cannot update certificates in key stores that are not under its control. In particular, this means that a Web server plug-in that is using the previous soon-to-expire signing certificate will stop working when the corresponding personal certificate is replaced. It also means that if WebSphere Application Server was using the personal certificate to authenticate with some other system, the certificate replacement will cause an outage. Keep in mind that this outage would have occurred anyway -- it is just occurring 15 days sooner, and after WebSphere Application Server has sent multiple warnings of this impending outage. WebSphere Application Server is simply doing its best.

It should be obvious that letting WebSphere Application Server automatically change the expiring certificates in a production environment is risky, since it could potentially cause a short- or long-term outage. Instead, you should change certificates manually when you are notified of their impending expiration. Automatic replacement is primarily intended to simplify management for less complex environments, and for development systems where brief outages are acceptable. For most production environments, we recommend that you instead monitor and act on the expiration notification messages and disable automatic replacement of self-signed certificates. Figure 14 shows the configuration panel for the certificate expiration monitor.
Figure 14. Certificate expiration monitor configuration panel

Sunday, July 13, 2014

Memory Management In WMB/IIB9

When considering memory usage within a DataFlowEngine process, there are two sources from which storage is allocated:
1. The DataFlowEngine main memory heap
2. The operating system 

When message flow processing requires some storage, an attempt is first made to allocate the required block from the DataFlowEngine's heap. If there is not a large enough contiguous block of storage on the heap, a request is made to the operating system to allocate more storage to the DataFlowEngine. This leads to the DataFlowEngine's heap growing, and the message flow then uses the extra storage.
When the message flow has completed its processing, it issues a "free" on all of its storage, and these blocks are returned to the DataFlowEngine's heap ready for allocation to any other message flows of this DataFlowEngine. The storage is never released back to the operating system, because there is no programmatic mechanism to perform such an operation; the operating system will not retrieve storage from a process until the process is terminated. Therefore the user will never see the size of the DataFlowEngine process decrease after it has increased.
When the next message flow runs, it makes requests for storage, and these are allocated from the DataFlowEngine heap as before. Storage is therefore re-used within the DataFlowEngine where possible, minimizing the number of times that the operating system needs to allocate additional storage to the DataFlowEngine process. Some growth may still be observed in the DataFlowEngine's memory usage, of the size of the subsequent allocations for message flow processing. Eventually we would expect the storage usage to plateau; this occurs when the DataFlowEngine has a large enough heap that any storage request can be satisfied without having to request more from the operating system.


Memory fragmentation in a DataFlowEngine process

At the end of each message flow iteration, storage is freed back to the DataFlowEngine memory heap ready for re-use by other threads. However, there are objects created within the DataFlowEngine that last the life of the DataFlowEngine and therefore reside at their position in the heap for that time. This leads to what is known as fragmentation, which reduces the size of the contiguous storage blocks available in the DataFlowEngine when an allocation request is made: the process has free memory blocks, but they are too fragmented to satisfy the requests made during message processing. In most cases, requesters of storage require a contiguous chain of blocks in memory. It is therefore possible for a message flow to make a request for storage against a DataFlowEngine heap that has enough free storage in total, but has it fragmented such that the contiguous block does not fit into any of the "gaps". In this situation a request has to be made to the operating system to allocate more storage to the DataFlowEngine so that the block can be allocated.
When unfreed blocks remain on the DataFlowEngine's heap, the heap becomes fragmented, meaning that only smaller contiguous blocks are available on it. If the next storage allocation cannot fit into the fragmented space, the DataFlowEngine's memory heap grows to accommodate the new request.
This is why small increments may be seen in the DataFlowEngine's size even after it has processed thousands of messages. In a multi-threaded environment there will potentially be many threads requesting storage at the same time, making it more difficult for a large block of storage to be allocated.
For example, some message flows implement BLOB domain processing which may result in the concatenation of BLOBs. Depending on how the message flow has been written, this may lead to fragmentation of the memory heap because, when a binary operation such as concatenation takes place, both the source and target variables need to be in scope at the same time.
Consider a message flow that reads in a 1MB BLOB and assigns it to the BLOB domain. For the purposes of demonstration, the following ESQL uses a WHILE loop that repeatedly concatenates this 1MB BLOB to produce a 57MB output message:
DECLARE c, d CHAR;
SET c = CAST(InputRoot.BLOB.BLOB AS CHAR CCSID InputProperties.CodedCharSetId);
SET d = c;

DECLARE i INT 1;
WHILE (i <= 56) DO
  SET c = c || d;
  SET i = i + 1;
END WHILE;

SET OutputRoot.BLOB.BLOB = CAST(c AS BLOB CCSID InputProperties.CodedCharSetId);
As can be seen, the 1MB input message is assigned to a variable c, and this is then also copied to d. The loop then concatenates c and d and assigns the result back to c on each iteration, meaning that c grows by 1MB on every iteration. Since this processing generates a 57MB BLOB, one might expect the message flow to use around 130MB of storage: the main parts being the ~60MB of variables in the Compute node, and the 57MB in the output BLOB parser which is serialised at the MQOutput node.
However this is not the case. This ESQL actually causes significant growth in the DataFlowEngine's storage usage, because it encourages fragmentation in the memory heap: the heap ends up with enough free space in total, but no contiguous blocks that are large enough to satisfy the current request. When dealing with BLOB or CHAR scalar variables in ESQL, the values must be held in contiguous buffers in memory.
Therefore, when the ESQL SET c = c || d; is executed, in memory terms this is not just a case of appending the value of d to the current memory location of c. The concatenation operator takes two operands and assigns the result to another variable, which in this case happens to be one of the input operands. Logically the operation could be written SET c = concatenate(c,d); this is not valid syntax, but it illustrates that the operator is like any other binary operand function. The value contained in c cannot be deleted until the operation is complete, since c is used as input, and the result of the operation needs to be held in temporary storage before it can be assigned to c.

  

Out of Memory Issue

When a DataFlowEngine reaches the JVM heap limit, it typically generates a javacore and a heapdump, along with a Java out-of-memory exception in the execution group's stderr/stdout files.
When the DataFlowEngine runs out of total memory, the DataFlowEngine process may go down or the system may become unresponsive.


MQSI_THREAD_STACK_SIZE

Purpose: For any given message flow, a typical node requires about 2KB of the thread stack space. Therefore, by default, there is a limit of approximately 500 nodes within a single message flow on UNIX platforms and 1000 nodes on Windows. This limit might be higher or lower, depending on the type of processing being performed within each node. If a message flow of a larger magnitude is required, you can raise this limit by setting the MQSI_THREAD_STACK_SIZE environment variable to an appropriate value (the broker must be restarted for the variable to take effect).
This environment variable applies at the broker level, so MQSI_THREAD_STACK_SIZE is used for every thread that is created within a DataFlowEngine process. If the execution group has many message flows assigned to it, and a large MQSI_THREAD_STACK_SIZE is set, this can lead to the DataFlowEngine process requiring a large amount of storage for the stack.
In WMB it is not just the execution of nodes that can cause a build-up on a finite stack; the same principle applies to any processing that leads to a large amount of nested or recursive work. Therefore, you may need to increase the MQSI_THREAD_STACK_SIZE environment variable in the following situations:
a) When processing a large message that has a large number of repetitions or nesting.
b) When executing ESQL that recursively calls the same procedure or function. This can also apply to operators; for example, if the concatenation operator is used a large number of times in one ESQL statement, this can lead to a large stack build-up.
However, note that this environment variable applies to all the message flow threads in all the execution groups, as it is set at the broker level. For example, if there are 30 message flows and this environment variable is set to 2MB, then 60MB would be reserved just for stack processing and thus taken away from the DataFlowEngine memory heap. This could have an adverse effect on the execution group rather than yielding any benefits. Typically, the default of 1 MB is sufficient for most scenarios, so we would advise that this environment variable NOT be set unless absolutely necessary.
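For example, on UNIX the variable could be exported in the broker's environment and the broker restarted (MYBROKER and the 2 MB value are purely illustrative; the value is specified in bytes):

export MQSI_THREAD_STACK_SIZE=2097152   # 2 MB, illustrative value in bytes
mqsistop MYBROKER                       # MYBROKER is an illustrative broker name
mqsistart MYBROKER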

 

System kernel parameters for WMB/IIB

In WMB/IIB there are no suggested kernel settings for tuning the operating system. However, WebSphere MQ and some database products do have such recommendations, and WMB/IIB runs in the same environment as these products. Hence, it is best to check and tune your environment as guided by those applications.


Monitor memory usage on Windows and UNIX

At any given point, you can check the memory usage for processes in the following way:
Windows:
Ctrl-Alt-Delete > Start Task Manager > Processes > Show processes for all users, then go to the process "DataFlowEngine" and look at the field "Memory (Private Working Set)".
If you want to continuously monitor the memory usage, then check the following link for Windows sysinternals for process utilities: http://technet.microsoft.com/en-us/sysinternals/
UNIX:
ps -aelf | grep DataFlowEngine
If you want to continuously monitor the memory usage, then the above command may have to be incorporated into a simple shell script.

CONTROL XPATH CACHING IN WMB/IIB9

The size of the XPath cache is fixed at 100 elements by default. If you use many XPath expressions, this fixed size can become a performance bottleneck, with a single flow invocation completely invalidating the cache.
 
WMB8 and IIB9 allow you to configure the size of the XPath cache, so you can control how many compiled XPath expressions are stored at any one time. The cache can also be disabled so that no compiled XPath expressions are cached; disabling the cache may improve throughput in a highly multi-threaded environment, as it removes thread contention on the cache.

The new property is called compiledXPathCacheSizeEntries and is set on a per-execution-group basis. The property can be set using the following mqsichangeproperties command:
 
 
mqsichangeproperties <broker> -e <execution group> -o ExecutionGroup -n compiledXPathCacheSizeEntries -v <size>
 
 
where <size> is the size of the cache to be set. The size can be set to any value greater than or equal to 100, and a value of 0 disables the cache. The default value is 100.

The configured value can be reported using the following mqsireportproperties command: 
 
mqsireportproperties <broker> -e <execution group> -o ExecutionGroup -n compiledXPathCacheSizeEntries
 
 
and can also be reported as part of the other ExecutionGroup level properties:

  mqsireportproperties <broker> -e <execution group> -o ExecutionGroup -a
 

Monday, April 14, 2014

Global Cache In IBM Integration Bus 9


 Cache topology :

The cache topology is the set of catalog servers, containers, and client connections that collaborate to form a global cache. When using the default policy, the first execution group to start performs the role of catalog server and container server (call this Role 1). The next three execution groups to start perform the role of container servers (Roles 2, 3, and 4). No other execution groups will host catalogs or containers, but all execution groups (including those performing Roles 1 to 4) host client connections to the global cache. When you restart the broker, the execution groups may start in a different order, in which case different execution groups might perform Roles 1 to 4. The cache topology still contains one catalog server, up to four containers, and multiple clients, but the distribution of those roles across your execution groups will vary.


 Global Cache in Message Broker :

A longstanding WebSphere Message Broker requirement has been a mechanism for sharing data between different processes. This requirement is most easily explained in the context of an asynchronous request/reply scenario. In this kind of scenario, a broker acts as intermediary between a number of client applications and a back-end system. Each client application sends, via the broker, request messages that contain correlating information to be included in any subsequent replies. The broker forwards the messages on to the back-end system, and then processes the responses from that system. To complete the round-trip, the broker has to insert the correlating information back into the replies and route them back to the correct clients.
When the flows are contained within a single broker, there are a few options for storing the correlating information, such as a database, or a store queue where an MQGet node is used to retrieve the information later. If you need to scale this solution horizontally and add brokers to handle an increase in throughput, then a database is the only reasonable option.

The WebSphere Message Broker global cache is implemented using embedded WebSphere eXtreme Scale (WebSphere XS) technology. By hosting WebSphere XS components, the JVMs embedded within WebSphere Message Broker execution groups can collaborate to provide a cache. For a description of WebSphere XS, see Product overview in the WebSphere XS information center. Here are some of the key components in the WebSphere XS topology:
Catalog server
Controls placement of data and monitors the health of containers.
Container server
A component embedded in the execution group that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once.
Map
A data structure that maps keys to values. One map is the default map, but the global cache can have several maps.
Each execution group can host a WebSphere XS catalog server, container server, or both. Additionally, each execution group can make a client connection to the cache for use by message flows. The global cache works out of the box, with default settings, and no configuration -- you just switch it on! You do not need to install WebSphere XS alongside the broker, or any other additional components or products.


The default scope of one cache is across one broker. To enable this, switch the broker-level policy property on the GlobalCache tab of Message Broker Explorer to Default and restart. This causes each execution group to assume a role in the cache dynamically on startup. The first execution group to start will be a catalog and container server, using the first four ports from the supplied port range (a port range will have been generated for you, but you can modify this). For more details on the port range, see Frequently asked questions below. The second, third, and fourth execution groups (if present) will be container servers, each using three ports from the range. Any execution groups beyond the fourth one will not host cache components, but will connect as clients to the cache hosted in execution groups 1-4. The diagram below shows the placement of servers, and the client connections, for a single-broker cache with six execution groups:
Single-broker cache with default policy
[Image: a single broker with six execution groups collaborating to provide a cache]
You can extend the cache to multiple brokers by using a cache policy file. Three sample policy files are included in the product install, in the sample/globalcache directory. You can simply alter the policy file to contain all the brokers you want to collaborate in a single cache, then point the broker-level cache policy property at this file. Here is this setting in Message Broker Explorer:
Configuring a cache policy file in Message Broker Explorer
The file lets you nominate each broker to host zero, one, or two catalog servers, and the port range that each broker should use for its cache components. The following diagram shows a two-broker cache, with both brokers configured to contain catalog servers:
Two-broker cache controlled by policy file
[Image: two brokers collaborating to provide a single cache available to both of them]





Message Flow Interaction with Cache :

The message flow has new, simple artifacts for working with the global cache, and is not immediately aware of the underlying WebSphere XS technology or topology. Specifically, the Java Compute node interface has a new MbGlobalMap object, which provides access to the global cache. This object handles client connectivity to the global cache, and provides a number of methods for working with maps in the cache. The methods available are similar to those you would find on regular Java maps. Individual MbGlobalMap objects are created by using a static getter on the MbGlobalMap class, which acts as a factory mechanism. You can work with multiple MbGlobalMap objects at the same time, and create them either anonymously (which uses a predefined default map name under the covers in WebSphere XS), or with any map name of your choice. In the examples below, defaultMap will work with the system-defined default map within the global cache. myMap will work with a map called myMap, and will create this map if it does not already exist in the cache.
Sample MbGlobalMap objects in Java Compute
MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();      // the system-defined default map
MbGlobalMap myMap = MbGlobalMap.getGlobalMap("myMap");    // named map, created if it does not already exist


Download Sample Global Cache Policy Files:

 Policy_multi_instance.xml

Policy_one_broker_ha.xml

Policy_two_brokers_ha.xml

Policy_two_brokers.xml

Thursday, April 10, 2014

WebSphere Application Server Performance Tuning Recommendations

Perform Proper Load Testing

  • Properly load testing your application is the most critical thing you can do to ensure a rock solid runtime in production.
  • Replicating your production environment isn’t always 100% necessary as most times you can get the same bang for your buck with a single representative machine in the environment
    • Calculate expected load across the cluster and divide down to single machine load
    • Drive load and perform the usual tuning loop to resolve the parameter set you need to tweak and tune.
    • Look at the load on the database system, network, and so on, and extrapolate whether it will support the full system's load; if not, or if there are questions, test
  • Performance testing needs to be representative of patterns that your application will actually be executing
  • Proper performance testing keeps track of and records key system level metrics as well as throughput metrics for reference later when changes to hardware or application are needed.
  • Always over stress your system.  Push the hardware and software to the max and find the breaking points. 
  • Only once you have done real world performance testing can you accurately size the complete set of hardware required to execute your application to meet your demand.

Correctly Tune The JVM

  • Correctly tuning the JVM in most cases will get you nearly 80% of the possible max performance of your application  
  • The big area to focus on for JVM tuning is heap size
    • Monitor verbose:gc and target GCing no more than once every 10 seconds with a max GC pause of a second or less.
    • Incremental testing is required to get this area right running with expected customer load on the system
    • Only after you have the above boundary layers met for GC do you want to start to experiment with differing garbage collection policies
  • Beyond the Heap Size settings most other parameters are to extract out max possible performance OR ensure that the JVM cooperates nicely on the system it is running on with other JVMs 
  • The Garbage Collector Memory Visualizer is an excellent tool for diagnosing GC issues or refining JVM performance tuning.
    • Provided as a downloadable plug-in within the IBM Support Assistant
Garbage Collection Memory Visualizer (GCMV)
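As a pointer, verbose GC is typically enabled by adding arguments such as the following to the server's Generic JVM arguments (the log path is illustrative, and -Xverbosegclog is specific to the IBM JDK shipped with WAS); GCMV can then be pointed at the resulting log:

-verbose:gc -Xverbosegclog:/tmp/gc.log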

Ensure Uniform Configuration Across Like Servers

  • Uniform configuration of software parameters and even operating systems is a common stumbling block
  • Most times this manifests itself as a single machine or process that is burning more CPU or memory, or garbage collecting more frequently
  • Easiest way to manage this is to have a “dump configuration” script that runs periodically
  • Store the script's results and, after each configuration change or application upgrade, track the differences
  • Leverage the Visual Configuration Explorer (VCE) tool available within ISA
Visual Configuration Explorer (VCE)

 Create Cells To Group Like Applications

  • Create Cells and Clusters of application servers with an express purpose that groups them in some manner
  • Large cells (400, 500, 1000 members), while supported, for the most part don't make sense
  • Group applications that need to replicate data to each other or talk to each other via RMI, etc and create cells and clusters around those commonalities. 
  • Keeping cell size smaller leads to more efficient resource utilization due to less network traffic for configuration changes, DRS, HAManager, etc.
    • For example, core groups should be limited to no more than 40 to 50 instances
  • Smaller cells and logic grouping make migration forward to newer versions of products easier and more compartmentalized.
  
Tune JDBC Data Sources

  • Correct database connection pool tuning can yield significant gains in performance.
  • This pool is highly contended in heavily multithreaded applications, so ensuring sufficient connections are available in the pool leads to superior performance (a usage sketch appears after this list).
  • Monitor PMI metrics via TPV or other tools to watch for threads waiting on connections to the database, as well as their wait time.
    • If threads are waiting, increase the number of pooled connections in conjunction with your DBA, OR decrease the number of active threads in the system.
    • In some cases, a one-to-one mapping between DB connections and threads may be ideal.
  • Database deadlocks or bottlenecks frequently first manifest themselves as a large number of threads from your thread pool waiting for connections.
  • Always use the latest database driver for the database you are running, as the performance optimizations between driver versions are significant.
  • Tune the prepared statement cache size for each JDBC data source.
    • This can also be monitored via PMI/TPV to determine the ideal value.
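
Below is a minimal sketch of pool-friendly data source usage; the JNDI name jdbc/AppDS and the ORDERS table are hypothetical. Closing the connection promptly returns it to the contended pool, and reusing a PreparedStatement shape lets the statement cache help:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class OrderLookup {
        public int countOrders(int customerId) throws Exception {
            // Look up the WAS-managed data source (hypothetical JNDI name).
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/AppDS");
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT COUNT(*) FROM ORDERS WHERE CUSTOMER_ID = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            } // close() returns the connection to the pool rather than destroying it
        }
    }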

Correctly Tune Thread Pools

  • Thread pools and their corresponding threads control all execution on the hardware threads.
  • Understand which thread pools your application uses and size all of them appropriately, based on the utilization you see in tuning exercises.
    • Thread dumps, PMI metrics, etc. will give you this data.
    • The Thread Dump Memory Analyzer (TDMA) and Tivoli Performance Viewer (TPV) will help in viewing this data.
  • Think of the thread pool as a queuing mechanism that throttles how many active requests run at any one time in your application (illustrated in the sketch below).
    • Apply a funnel-based approach to sizing these pools.
      • Example: IHS (1000) -> WAS (50) -> WAS DB connection pool (30)
      • The thread numbers above vary based on application characteristics.
    • Since you can throttle active threads, you can control concurrency through your codebase.
  • Thread pools need to be sized with the total number of hardware processor cores in mind.
    • If you share a hardware system with other WAS instances, the thread pools have to be tuned with that in mind.
    • You will more than likely need to cut back on the number of active threads to ensure good performance for all applications, because the OS context-switches for every thread in the system.
    • Restricting the maximum number of threads an application can have can sometimes prevent rogue applications from impacting others.
  • The default sizes for WAS thread pools on v6.1 and above are actually a little too high for best performance.
    • A two-to-one ratio (threads to cores) typically yields the best performance, but this varies drastically between applications and access patterns.
[Figure: TPV and TDMA tool snapshots]
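
WAS thread pools are sized through the admin console rather than in code, but the throttling idea can be illustrated with a plain bounded executor; the two-threads-per-core starting point and the queue depth are assumptions to be refined by testing:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class FunnelDemo {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            // Fixed-size pool: at most 2 * cores requests run concurrently;
            // everything else waits in the bounded queue (the "funnel").
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2 * cores, 2 * cores, 30, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1000));
            for (int i = 0; i < 100; i++) {
                final int requestId = i;
                pool.execute(new Runnable() {
                    public void run() {
                        System.out.println("handling request " + requestId);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }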

Minimize HTTP Session Content

  • High-performance data replication for application availability depends on correctly sized session data.
    • Keep it under 1 MB in all cases if possible.
  • You should only be storing information critical to that user’s specific interaction with the server (a sketch appears after this list).
  • If composite data is required, build it progressively as the interaction occurs.
  • Configure session replication in WAS to meet your needs.
    • Use the different configuration options (asynchronous vs. synchronous) to get the availability your application needs without compromising response time.
    • Select the replication topology that works best for you (database, memory-to-memory, memory-to-memory server).
    • Keep replication domains small and/or partition them where possible.
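
As a sketch of keeping session content minimal, assuming a hypothetical CartSummary class: store one small Serializable object, and re-set the attribute after changing it so the session manager sees the update and replicates it:

    import java.io.Serializable;
    import javax.servlet.http.HttpSession;

    public class SessionHelper {
        // Small, serializable summary instead of the full cart object graph.
        static class CartSummary implements Serializable {
            private static final long serialVersionUID = 1L;
            int itemCount;
            long lastUpdatedMillis;
        }

        static void recordItemAdded(HttpSession session) {
            CartSummary cart = (CartSummary) session.getAttribute("cartSummary");
            if (cart == null) {
                cart = new CartSummary();
            }
            cart.itemCount++;
            cart.lastUpdatedMillis = System.currentTimeMillis();
            // Re-setting the attribute signals the session manager that it changed.
            session.setAttribute("cartSummary", cart);
        }
    }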


Understand and Tune Infrastructure (databases & other interactive server systems)

  • WebSphere Application Server and the system it runs on are typically only one part of the datacenter infrastructure, and they rely heavily on the other areas performing properly. Think of your infrastructure as a plumbing system: optimal drain performance only occurs when no pipes are clogged.
  • On the WAS system itself, you need to be very aware of:
    • What other WAS instances (JVMs) are doing, and their CPU/IO profiles
    • How much memory other WAS instances (or other OSs, in a virtualized case) are using
    • Network utilization of other applications coexisting on the same hardware
  • In the supporting infrastructure:
    • Varying network latency can drastically affect split-cell topologies, cross-site data replication and database query latency.
      • Ensure the network infrastructure is repeatable and robust.
      • Don’t take bandwidth or latency for granted before going into production; always test, as lab environments vary.
    • Firewalls can cause issues with data transfer latencies between systems.
  • On the database system:
    • Ensure that proper indexing and tuning is done for the application’s request patterns.
    • Ensure that the database supports the number of connected clients your WAS runtime will have.
    • Understand the CPU load and the impact of other applications (batch, OLTP, etc. all competing with yours).
  • On other application server or interactive server systems:
    • Ensure the performance of connected applications is up to the load the WAS system requests of them.
    • Verify that developers have coded specific handling mechanisms for when connected applications go down; you need to avoid storm-drain scenarios (a sketch of such a mechanism appears below).
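
A minimal sketch of one such handling mechanism, assuming a plain HTTP back end; the URL and timeout values are illustrative. Failing fast keeps WAS threads from queuing behind a dead system:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class BackendClient {
        public int ping() throws IOException {
            HttpURLConnection con = (HttpURLConnection)
                    new URL("http://backend.example.com/health").openConnection();
            con.setConnectTimeout(2000); // fail fast instead of tying up a WAS thread
            con.setReadTimeout(5000);    // bound the wait on a slow or hung back end
            try {
                return con.getResponseCode();
            } finally {
                con.disconnect();
            }
        }
    }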

Keep Application Logging to a Minimum

  • Nothing other than error information should ever be written to SystemOut.log.
  • If you use logging, build your log messages only when they are needed, as in the examples and the sketch below.
  • Good
    • if (loggingEnabled) { String errorMsg = "This is a bad error " + failingObject.printError(); System.out.println(errorMsg); }
  • Bad
    • String errorMsg = "This is a bad error " + failingObject.printError();
      if (loggingEnabled) { System.out.println(errorMsg); }
  • Keep error and log messages to the point and easy to debug.
  • If you use Apache Commons Logging, Log4J or another framework, verify that its performance on your system is as expected.
  • If you must log information for audit or other purposes, ensure you are writing to a fast disk.
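
The same guard pattern expressed with java.util.logging, as a sketch; FailingObject here is a hypothetical type mirroring the example above. The isLoggable() check prevents the message string from ever being built when the level is disabled:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class ErrorReporter {
        private static final Logger LOG = Logger.getLogger(ErrorReporter.class.getName());

        interface FailingObject {   // hypothetical, mirrors the example above
            String printError();
        }

        void report(FailingObject failingObject) {
            if (LOG.isLoggable(Level.FINE)) {
                // String concatenation happens only when FINE is actually enabled.
                LOG.fine("This is a bad error " + failingObject.printError());
            }
        }
    }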

Properly Tune the Operating System
  • The operating system is consistently overlooked for functional as well as performance tuning.
  • Understand the hardware infrastructure backing your OS: processor counts, speeds, shared/unshared, etc.
  • ulimit values need to be set correctly. The main player here is the number of open file handles (ulimit -n); other process size and memory limits may need to be set based on the application.
  • Make sure NICs are set to full duplex and the correct speeds.
  • Large pages need to be enabled to take advantage of the -Xlp JDK parameter.
  • Check the RAS settings on the OS; if they are enabled by default, tune them down.
  • Configure TCP/IP timeouts correctly for your application’s needs.
  • Depending on the load being placed on the system, look into advanced tuning techniques such as pinning WAS processes via RSET or TASKSET, as well as pinning IRQ interrupts (illustrative commands appear below).
[Figure: WAS throughput with processor pinning]
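
Illustrative Linux commands only; the limit, core list and server name are assumptions for this sketch, and AIX uses RSETs for the equivalent pinning:

    ulimit -n 65536                            # raise the open file handle limit before starting the JVM
    taskset -c 0-3 ./startServer.sh server1    # pin a WAS instance to cores 0-3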

Wednesday, February 5, 2014

WebSphere Message Broker - ESQL Utilities


You can find sample ESQL for aggregation, exception handling, email construction, sorting and environment variable storage in the Broker PI (Project Interchange) file attached below.

Download :  ESQLUtilitiesPI