Monday, July 21, 2014

Backup and Restore Configuration and Profiles in Websphere Application Server

1- Configuration Backup & Restore:

To back up the configuration of a node/profile, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./backupConfig.sh Zip_File_Name

backupConfig Command:

The backupConfig command backs up the configuration of a node to a file.

Syntax:
backupConfig.sh backup_file [options]
Options:
-nostop
    Tells the backupConfig command not to stop the servers before backing up the configuration
-password password
    Specifies the password for authentication if security is enabled in the server
-username user_name
    Specifies the user name for authentication if security is enabled in the server
-profileName profile_name
   Defines the profile of the Application Server process in a multiple-profile installation

Eg: backupConfig.sh AppSrv01_Backup_Jul2013.zip -profileName AppSrv01 -nostop

restoreConfig Command:

Similarly, we also have the restoreConfig command, which restores the configuration of a node from a backup created by the backupConfig command.

To restore the configuration of a node, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./restoreConfig.sh Zip_File_Name

Syntax:
restoreConfig.sh backup_file [options]

Options:
-nostop
    Tells the restoreConfig command not to stop the servers before restoring the configuration
-password password
    Specifies the password for authentication if security is enabled in the server
-username user_name
    Specifies the user name for authentication if security is enabled in the server
-profileName profile_name
   Defines the profile of the Application Server process in a multiple-profile installation

Eg: restoreConfig.sh AppSrv01_Backup_Jul2013.zip -profileName AppSrv01 -nostop

Note: If you restore the configuration to a directory that is different from the directory that was backed up when you ran the backupConfig command, you might need to manually update some of the paths in the configuration directory.
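If you need to restore into a different directory, restoreConfig also accepts a -location option. A minimal sketch (the target path below is purely illustrative):

Eg: restoreConfig.sh AppSrv01_Backup_Jul2013.zip -location /opt/IBM/WebSphere/profiles/AppSrv01_new/config -profileName AppSrv01 -nostop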

2- Profile Backup & Restore:

To back up the profile configuration, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./manageprofiles.sh -backupProfile -profileName AppSrv01 -backupFile /opt/home/user/WAS_Backup/profile_backup/AppSrv01.zip

Syntax:
manageprofiles.sh -backupProfile -profileName profile_name -backupFile file_name

Note: When you back up a profile using the -backupProfile option, you must first stop the server and the running processes for the profile that you want to back up.
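For example, a typical backup sequence might look like the following sketch (the server name is illustrative; stop every server that belongs to the profile first):

./stopServer.sh server1 -profileName AppSrv01                                                            # stop the profile's server(s)
./manageprofiles.sh -backupProfile -profileName AppSrv01 -backupFile /opt/home/user/WAS_Backup/profile_backup/AppSrv01.zip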

Similarly, we also have the option of -restoreProfile which restores the profile from the backup file.

To restore the profile configuration, use the following steps:

1) Navigate to the bin directory of the profile, which is usually /opt/IBM/WebSphere/profiles/AppSrv01/bin
2) Run the command ./manageprofiles.sh -restoreProfile -backupFile /opt/home/user/WAS_Backup/profile_backup/AppSrv01.zip

Syntax:
manageprofiles.sh -restoreProfile -backupFile file_name

To restore a profile, perform the following steps:

  1. Stop the server and the running processes for the profile that you want to restore.
  2. Manually delete the directory for the profile from the file system.
  3. Run the -validateAndUpdateRegistry option of the manageprofiles command.
  4. Restore the profile by using the -restoreProfile option of the manageprofiles command.
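Putting these steps together, a restore might look like the following sketch (server name and paths are illustrative; note that manageprofiles.sh has to be run from the installation's bin directory, for example /opt/IBM/WebSphere/AppServer/bin, because the profile's own directory is deleted in step 2):

./stopServer.sh server1 -profileName AppSrv01                                                            # 1. stop the server(s) for the profile
rm -rf /opt/IBM/WebSphere/profiles/AppSrv01                                                              # 2. delete the profile directory
./manageprofiles.sh -validateAndUpdateRegistry                                                           # 3. clean up the profile registry
./manageprofiles.sh -restoreProfile -backupFile /opt/home/user/WAS_Backup/profile_backup/AppSrv01.zip    # 4. restore the profile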

Source: WAS InfoCenter 7.0, Self-Knowledge.

Wednesday, July 16, 2014

Websphere Application Server - Certificate Expiration Monitor and Dynamic Run Time Updates

As should not be surprising, certificates expire. Should a certificate expire, SSL communication using that certificate will be impossible, which will almost certainly result in a system outage. WebSphere Application Server tries hard to prevent these outages, and when it cannot prevent them, it tries to at least warn you before they occur. The certificate expiration monitor task runs on a configurable schedule which, by default, is every 14 days. The Next start date field for the monitor is persistent in the configuration and is updated with a new date each time it runs. It will execute in the deployment manager process in a Network Deployment environment, or -- if standalone -- in the WebSphere Application Server base process.
When executing, the expiration monitor will search through all KeyStore objects configured in the cell, looking for any personal certificates that will expire within the expiration threshold (90 days being the default; this is configurable via a custom property). If it finds any, it will issue a notification warning of the impending expiration. Notifications are always sent to the serious event stream to all registered listeners. By default, this is the admin console and SystemOut.log. Notifications can also be sent via email using an SMTP server.
In addition to notifications, WebSphere Application Server will attempt to replace self-signed certificates before they expire. By default, the expiration monitor will execute the certificate replacement task (mentioned in the previous section) against any self-signed certificates 15 days before expiration (this is configurable). The task creates a new certificate using the certificate information from the old one, and updates every trust store in the cell that contained the old signer with the new signer certificate. By default, the old signer certificate will be deleted.
The expiration monitor marks any SSL configuration as "modified" whenever the monitor changes the key store or trust store referenced by the configuration. The configuration changes are saved once the expiration update task is completed, causing a ripple to occur throughout the runtime. The first thing that happens is the temporary disabling of SSL server authentication (for 90 seconds) to enable these changes to occur without requiring a server restart. In cases where you do not want this to occur, consider disabling the Dynamically update the run time when SSL configuration changes occur option located at the bottom of the SSL certificate and key management panel in the admin console.
Unfortunately, automatically replacing certificates is not a panacea. WebSphere Application Server cannot update certificates in key stores that are not under its control. In particular, this means that a Web server plug-in that is using the previous soon-to-expire signing certificate will stop working when the corresponding personal certificate is replaced. It also means that if WebSphere Application Server was using the personal certificate to authenticate with some other system, the certificate replacement will cause an outage. Keep in mind that this outage would have occurred anyway -- it is just occurring 15 days sooner, and after WebSphere Application Server has sent multiple warnings of this impending outage. WebSphere Application Server is simply doing its best.

It should be obvious that letting WebSphere Application Server automatically change the expiring certificates in a production environment is risky, since it could potentially cause a short- or long-term outage. Instead, you should change certificates manually when you are notified of their impending expiration. Automatic replacement is primarily intended to simplify management for less complex environments, and for development systems where brief outages are acceptable. For most production environments, we recommend that you instead monitor and act on the expiration notification messages and disable automatic replacement of self-signed certificates. Figure 14 shows the configuration panel for the certificate expiration monitor.
Figure 14. Certificate expiration monitor configuration panel

Sunday, July 13, 2014

Memory Management In WMB/IIB9

When considering memory usage within a DataFlowEngine process, there are two sources from which storage is allocated:
1. The DataFlowEngine main memory heap
2. The operating system 

When message flow processing requires storage, an attempt is first made to allocate the required block from the DataFlowEngine's heap. If there is not a large enough contiguous block of storage on the heap, a request is made to the operating system to allocate more storage to the DataFlowEngine. The DataFlowEngine's heap then grows by that amount, and the message flow uses the extra storage.
When the message flow has completed its processing, it frees all of its storage, and the blocks are returned to the DataFlowEngine's heap ready for allocation to any other message flow in that DataFlowEngine. The storage is never released back to the operating system, because there is no programmatic mechanism to perform such an operation; the operating system does not reclaim storage from a process until the process terminates. Therefore the size of the DataFlowEngine process never decreases after it has grown.
When the next message flow runs, its storage requests are again satisfied from the DataFlowEngine heap, so storage is re-used within the DataFlowEngine wherever possible, minimizing the number of times the operating system needs to allocate additional storage to the process. Some growth may still be observed, corresponding to subsequent allocations for message flow processing. Eventually the storage usage should plateau, once the DataFlowEngine's heap is large enough that any storage request can be satisfied without asking the operating system for more.


Memory fragmentation in a DataFlowEngine process

At the end of each message flow iteration, storage is freed back to the DataFlowEngine memory heap, ready for re-use by other threads. However, some objects created within the DataFlowEngine last for the life of the DataFlowEngine and therefore remain at their position in the heap for that time. This leads to what is known as fragmentation, which reduces the size of the contiguous storage blocks available when an allocation request is made: the DataFlowEngine may have enough free memory in total, but it is broken into pieces that are too small for the requests made during message processing. In most cases, a requester of storage needs a contiguous chain of blocks in memory. It is therefore possible for a message flow to make a request that the heap could satisfy in total, but because the free storage is fragmented, the required contiguous block does not fit into any of the "gaps". In this situation, a request has to be made to the operating system to allocate more storage to the DataFlowEngine so that the block can be allocated.
However, when unfreed blocks remain on the DataFlowEngine's heap then this will fragment the heap. This means that there will be smaller contiguous blocks available on the DataFlowEngine's heap. If the next storage allocation cannot fit into the fragmented space, then this will cause the DataFlowEngine's memory heap to grow to accommodate the new request.
This is why small increments may be seen in the DataFlowEngine even after it has processed thousands of messages. In a multi-threaded environment there will be potentially many threads requesting storage at the same time, meaning that it is more difficult for a large block of storage to be allocated.
For example,
Some message flows implement BLOB domain processing that involves concatenating BLOBs. Depending on how the message flow is written, this can fragment the message heap, because when a binary operation such as concatenation takes place, both the source and target variables need to be in scope at the same time.
Consider a message flow that reads in a 1MB BLOB and assigns it to the BLOB domain. For the purposes of demonstration, the ESQL below uses a WHILE loop that repeatedly concatenates this 1MB BLOB to produce a 57MB output message:
-- Read the 1MB input BLOB as a character string
DECLARE c, d CHAR;
SET c = CAST(InputRoot.BLOB.BLOB AS CHAR CCSID InputRoot.Properties.CodedCharSetId);
SET d = c;

-- Concatenate the 1MB string onto c 56 times, growing c by roughly 1MB per iteration
DECLARE i INT 1;
WHILE (i <= 56) DO
  SET c = c || d;
  SET i = i + 1;
END WHILE;

-- Serialise the ~57MB result back into the output BLOB
SET OutputRoot.BLOB.BLOB = CAST(c AS BLOB CCSID InputRoot.Properties.CodedCharSetId);
As can be seen, the 1MB input message is assigned to variable c, which is then copied to d. On each iteration the loop concatenates d onto c and assigns the result back to c, so c grows by 1MB every iteration. Since this processing generates a 57MB BLOB, one might expect the message flow to use around 130MB of storage: roughly 60MB for the variables in the Compute node, plus 57MB in the output BLOB parser, which is serialised on the MQOutput node.
However, this is not the case. This ESQL actually causes significant growth in the DataFlowEngine's storage usage because of the nature of the processing: it encourages fragmentation of the memory heap. In this condition the memory heap has enough free space in total, but no contiguous block large enough to satisfy the current request. When dealing with BLOB or CHAR scalar variables in ESQL, the values must be held in contiguous buffers in memory.
Therefore, when the ESQL SET c = c || d; is executed, in memory terms this is not just a case of appending the value of d to the current memory location of c. The concatenation operator takes two operands and assigns the result to another variable, which in this case happens to be one of the input operands. Logically the operation could be written SET c = concatenate(c,d); this is not valid syntax, but it illustrates that the operator behaves like any other binary function. The value contained in c cannot be freed until the operation is complete, since c is used as an input, and the result of the operation needs to be held in temporary storage before it can be assigned to c.

  

Out of Memory Issue

When a DataFlowEngine reaches the JVM heap limit, it typically generates a javacore and heapdump, along with a Java out-of-memory exception in the execution group's stderr/stdout files.
When the DataFlowEngine runs out of total memory, the DataFlowEngine may go down or the system may become unresponsive.


MQSI_THREAD_STACK_SIZE

Purpose: For any given message flow, a typical node requires about 2KB of thread stack space. Therefore, by default, there is a limit of approximately 500 nodes within a single message flow on UNIX platforms and 1000 nodes on Windows. This limit might be higher or lower, depending on the type of processing performed within each node. If a larger message flow is required, you can increase this limit by setting the MQSI_THREAD_STACK_SIZE environment variable to an appropriate value (the broker must be restarted for the variable to take effect). The setting applies at the broker level, so MQSI_THREAD_STACK_SIZE is used for every thread created within a DataFlowEngine process. If an execution group has many message flows assigned to it and a large MQSI_THREAD_STACK_SIZE is set, the DataFlowEngine process can require a large amount of storage just for stacks. It is not only the execution of nodes that can build up a finite stack; the same applies to any processing that leads to deeply nested or recursive work. You may therefore need to increase MQSI_THREAD_STACK_SIZE in the following situations:
a) When processing a large message that has a large number of repetitions or nesting.
b) When executing ESQL that recursively calls the same procedure or function. This also applies to operators; for example, if the concatenation operator is used a large number of times in one ESQL statement, this can lead to a large stack build-up.
However, note that this environment variable applies to all message flow threads in all execution groups, because it is set at the broker level. For example, if there are 30 message flows and the variable is set to 2MB, then 60MB would be reserved just for stack processing and therefore taken away from the DataFlowEngine memory heap. This could have an adverse effect on the execution group rather than yielding any benefit. Typically the default of 1MB is sufficient for most scenarios, so we advise that this environment variable NOT be set unless absolutely necessary.
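If you do decide to change it, the variable is set in the broker's environment and the broker is then restarted. A minimal sketch, assuming a broker named IB9NODE and a 2MB stack (the value is specified in bytes):

export MQSI_THREAD_STACK_SIZE=2097152    # 2MB per thread
mqsistop IB9NODE
mqsistart IB9NODE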

 

System kernel parameters for WMB/IIB

 In WMB/IIB there are no suggested kernel settings for tuning the operating system. However, WebSphere MQ and some database products do have such recommendations, and WMB/IIB runs in the same environment as these products, so it is best to check and tune your environment as guided by those applications.


Monitor memory usage on Windows and UNIX

At any given point, you can check the memory usage for processes in the following way:
Windows:
Ctrl-Alt-Delete > Start Task Manager > Processes > Show processes for all users, then go to the process "DataFlowEngine" and look at the field "Memory (Private Working Set)".
If you want to continuously monitor the memory usage, then check the following link for Windows sysinternals for process utilities: http://technet.microsoft.com/en-us/sysinternals/
UNIX:
ps -aelf | grep DataFlowEngine
If you want to continuously monitor the memory usage, then the above command may have to be incorporated into a simple shell script.
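For example, a simple polling script might look like this (the interval and log file are arbitrary choices):

#!/bin/sh
# Append a timestamped snapshot of DataFlowEngine memory usage every 60 seconds
while true
do
    date >> /tmp/dfe_memory.log
    ps -aelf | grep DataFlowEngine | grep -v grep >> /tmp/dfe_memory.log
    sleep 60
done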


CONTROL XPATH CACHING IN WMB/IIB9

The size of the XPath cache is fixed at 100 elements by default. If you use many XPath expressions, this fixed size can become a performance bottleneck, with a single flow invocation completely invalidating the cache.
 
WMB8 and IIB9 allow you to configure the size of the XPath cache so you can control how many compiled XPath expressions are stored at any one time. The cache can also be disabled so that no compiled XPath expressions are cached; disabling the cache may improve throughput in a highly multi-threaded environment, as it removes thread contention on the cache.

The property is called compiledXPathCacheSizeEntries and is configured on a per-execution-group basis. It can be set using the following mqsichangeproperties command:
 
 
mqsichangeproperties <broker_name> -e <execution_group_name> -o ExecutionGroup -n compiledXPathCacheSizeEntries -v <size>


where <size> is the size of the cache to be set. The size can be set to any value greater than or equal to 100, and a value of 0 disables the cache. The default value is 100.
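For example, to set a cache of 500 entries on a single execution group (the broker and execution group names below are just placeholders):

mqsichangeproperties IB9NODE -e default -o ExecutionGroup -n compiledXPathCacheSizeEntries -v 500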

The configured value can be reported using the following mqsireportproperties command: 
 
mqsireportproperties <broker_name> -e <execution_group_name> -o ExecutionGroup -n compiledXPathCacheSizeEntries


and can also be reported as part of the other ExecutionGroup-level properties:

  mqsireportproperties <broker_name> -e <execution_group_name> -o ExecutionGroup -a
 

Monday, April 14, 2014

Global Cache In IBM Integration Bus 9


 Cache topology :

Cache topology is the set of catalog servers, containers, and client connections that collaborate to form a global cache. When using the default policy, the first execution group to start performs the role of catalog server and container server (call this Role 1). The next three execution groups to start perform the role of container servers (Roles 2, 3, and 4). No other execution groups host catalogs or containers, but all execution groups (including those performing Roles 1, 2, 3, and 4) host client connections to the global cache. When you restart the broker, the execution groups may start in a different order, in which case different execution groups might perform Roles 1, 2, 3, and 4. The cache topology still contains one catalog server, up to four containers, and multiple clients, but the distribution of those roles across your execution groups will vary.


 Global Cache in Message Broker :

A longstanding WebSphere Message Broker requirement has been a mechanism for sharing data between different processes. This requirement is most easily explained in the context of an asynchronous request/reply scenario. In this kind of scenario, a broker acts as intermediary between a number of client applications and a back-end system. Each client application sends, via the broker, request messages that contain correlating information to be included in any subsequent replies. The broker forwards the messages on to the back-end system, and then processes the responses from that system. To complete the round-trip, the broker has to insert the correlating information back into the replies and route them back to the correct clients.
When the flows are contained within a single broker, there are a few options for storing the correlating information, such as a database, or a store queue where an MQGet node is used to retrieve the information later. If you need to scale this solution horizontally and add brokers to handle an increase in throughput, then a database is the only reasonable option.

The WebSphere Message Broker global cache is implemented using embedded WebSphere eXtreme Scale (WebSphere XS) technology. By hosting WebSphere XS components, the JVMs embedded within WebSphere Message Broker execution groups can collaborate to provide a cache. For a description of WebSphere XS, see Product overview in the WebSphere XS information center. Here are some of the key components in the WebSphere XS topology:
Catalog server
Controls placement of data and monitors the health of containers.
Container server
A component embedded in the execution group that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once.
Map
A data structure that maps keys to values. One map is the default map, but the global cache can have several maps.
Each execution group can host a WebSphere XS catalog server, container server, or both. Additionally, each execution group can make a client connection to the cache for use by message flows. The global cache works out of the box, with default settings, and no configuration -- you just switch it on! You do not need to install WebSphere XS alongside the broker, or any other additional components or products.


The default scope of one cache is across one broker. To enable this, switch the broker-level policy property on the GlobalCache tab of Message Broker Explorer to Default and restart. This causes each execution group to assume a role in the cache dynamically on startup. The first execution group to start will be a catalog and container server, using the first four ports from the supplied port range (a port range will have been generated for you, but you can modify this). For more details on the port range, see Frequently asked questions below. The second, third, and fourth execution groups (if present) will be container servers, each using three ports from the range. Any execution groups beyond the fourth one will not host cache components, but will connect as clients to the cache hosted in execution groups 1-4. The diagram below shows the placement of servers, and the client connections, for a single-broker cache with six execution groups:
Single-broker cache with default policy
You can extend the cache to multiple brokers by using a cache policy file. Three sample policy files are included in the product install, in the sample/globalcache directory. You can simply alter the policy file to contain all the brokers you want to collaborate in a single cache, then point the broker-level cache policy property at this file. Here is this setting in Message Broker Explorer:
Configuring a cache policy file in Message Broker Explorer
The file lets you nominate each broker to host zero, one, or two catalog servers, and the port range that each broker should use for its cache components. The following diagram shows a two-broker cache, with both brokers configured to contain catalog servers:
Two-broker cache controlled by policy file





Message Flow Interaction with Cache :

The message flow has new, simple artifacts for working with the global cache, and is not immediately aware of the underlying WebSphere XS technology or topology. Specifically, the Java Compute node interface has a new MbGlobalMap object, which provides access to the global cache. This object handles client connectivity to the global cache, and provides a number of methods for working with maps in the cache. The methods available are similar to those you would find on regular Java maps. Individual MbGlobalMap objects are created by using a static getter on the MbGlobalMap class, which acts as a factory mechanism. You can work with multiple MbGlobalMap objects at the same time, and create them either anonymously (which uses a predefined default map name under the covers in WebSphere XS), or with any map name of your choice. In the examples below, defaultMap will work with the system-defined default map within the global cache. myMap will work with a map called myMap, and will create this map if it does not already exist in the cache.
Sample MbGlobalMap objects in Java Compute
MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();
MbGlobalMap myMap = MbGlobalMap.getGlobalMap("myMap");


Download Sample Global Cache Policy Files:

 Policy_multi_instance.xml

Policy_one_broker_ha.xml

Policy_two_brokers_ha.xml

Policy_two_brokers.xml

Thursday, April 10, 2014

WebSphere Application Server Performance Tuning Recommendations

Perform Proper Load Testing

  • Properly load testing your application is the most critical thing you can do to ensure a rock solid runtime in production.
  • Replicating your production environment isn’t always 100% necessary as most times you can get the same bang for your buck with a single representative machine in the environment
    • Calculate expected load across the cluster and divide down to single machine load
    • Drive load and perform the usual tuning loop to resolve the parameter set you need to tweak and tune.
    • Look at the load on the database system, network, etc., and extrapolate whether it will support the full system load; if not, or if there are questions, test it
  • Performance testing needs to be representative of patterns that your application will actually be executing
  • Proper performance testing keeps track of and records key system level metrics as well as throughput metrics for reference later when changes to hardware or application are needed.
  • Always over stress your system.  Push the hardware and software to the max and find the breaking points. 
  • Only once you have done real world performance testing can you accurately size the complete set of hardware required to execute your application to meet your demand.

Correctly Tune The JVM

  • Correctly tuning the JVM in most cases will get you nearly 80% of the possible max performance of your application  
  • The big area to focus on for JVM tuning is heap size
    • Monitor verbose:gc and target GCing no more than once every 10 seconds, with a max GC pause of a second or less (see the example at the end of this section)
    • Incremental testing is required to get this area right running with expected customer load on the system
    • Only after you have the above boundary layers met for GC do you want to start to experiment with differing garbage collection policies
  • Beyond the Heap Size settings most other parameters are to extract out max possible performance OR ensure that the JVM cooperates nicely on the system it is running on with other JVMs 
  • The Garbage Collector Memory Visualizer is an excellent tool for diagnosing GC issues or refining JVM performance tuning.
    • Provided as a downloadable plug-in within the IBM Support Assistant
Garbage Collection Memory Visualizer (GCMV)
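As a concrete starting point, verbose GC and a fixed heap can be enabled through the server's Generic JVM arguments; the sizes below are illustrative only and should be refined through the incremental testing described above:

-verbose:gc -Xms2048m -Xmx2048m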

Ensure Uniform Configuration Across Like Servers

  • Uniform configuration of software parameters and even operating systems is a common stumbling block
  • Most times manifests itself as a single machine or process that is burning more CPU, Memory or garbage collecting more frequently
  • Easiest way to manage this is to have a “dump configuration” script that runs periodically
  • Store the scripts results off and after each configuration change or application upgrade track differences
  • Leverage the Visual Configuration Explorer (VCE) tool available within ISA
Visual Configuration Explorer (VCE)

 Create Cells To Group Like Applications

  • Create Cells and Clusters of application servers with an express purpose that groups them in some manner
  • Large cells (400, 500, or 1000 members), while supported, for the most part don't make sense
  • Group applications that need to replicate data to each other or talk to each other via RMI, etc and create cells and clusters around those commonalities. 
  • Keeping cell size smaller leads to more efficient resource utilization due to less network traffic for configuration changes, DRS, HAManager, etc.
    • For example, core groups should be limited to no more than 40 to 50 instances
  • Smaller cells and logic grouping make migration forward to newer versions of products easier and more compartmentalized.
  
Tune JDBC Data Sources

  • Correct database connection pool tuning can yield significant gains in performance
  • This pool is highly contended in heavily multithreaded applications so ensuring significant available connections are in the pool leads to superior performance.
  • Monitor PMI metrics via TPV or other tools to watch for threads waiting on connections to the database, as well as their wait time.
    • If threads are waiting increase the number of pooled connections in conjunction with your DBA OR decrease the number of active threads in the system
    • In some cases, a one-to-one mapping between DB connections and threads may be ideal
  • Frequently database deadlocks or bottlenecks first manifest themselves as a large number of threads from your thread pool waiting for connections
  • Always use the latest database driver for the database you are running as performance optimization in this space between versions are significant
  • Tune the Prepared Statement Cache Size for each JDBC data source
    • Can also be monitored via PMI/TPV to determine ideal value

Correctly Tune Thread Pools

  • Thread pools and their corresponding threads control all execution on the hardware threads.
  • Understand which thread pools your application uses and size all of them appropriately based on utilization you see in tuning exercises
    • Thread dumps, PMI metrics, etc will give you this data 
    • Thread Dump Memory Analyzer and Tivoli Performance viewer (TPV) will help in viewing this data.
  • Think of the thread pool as a queuing mechanism to throttle how many active requests you will have running at any one time in your application.
    • Apply the funnel based approach to sizing these pools
      • Example: IHS (1000) -> WAS (50) -> WAS DB connection pool (30)
      • Thread numbers above vary based on application characteristics
    • Since you can throttle active threads you can control concurrency through your codebase
  • Thread pools needs to be sized with the total number of hardware processor cores in mind
    • If sharing a hardware system with other WAS instances thread pools have to be tuned with that in mind.
    • You need to more than likely cut back on the number of threads active in the system to ensure good performance for all applications due to context switching at OS layer for each thread in the system
    • Sizing or restricting the max number of threads an application can have can sometimes be used to prevent rogue applications from impacting others.
  • Default sizes for WAS thread pools on v6.1 and above are actually a little too high for best performance
    • Two to one ratio (threads to cores) typically yields the best performance but this varies drastically between applications and access patterns
TPV & TDMA tool snapshots

Minimize HTTP Session Content

  • High performance data replication for application availability depends on correctly sized session data
    • Keep it under 1MB in all cases if possible
  • You should only be storing information critical to that user's specific interaction with the server
  • If composite data is required build it progressively as the interaction occurs
    • Configure Session Replication in WAS to meet your needs
    • Use different configuration options (async vs. synch) to give you the availability your application needs without compromising response time.
    • Select the replication topology that works best for you (DB, M2M, M2M Server) 
    • Keep replication domains small and/or partition where possible


Understand and Tune Infrastructure (databases & other interactive server systems)

  • WebSphere Application Server and the system it runs on is typically only one part of the datacenter infrastructure, and it relies heavily on other areas performing properly. Think of your infrastructure as a plumbing system: optimal drain performance only occurs when no pipes are clogged.
  • On the WAS system itself you need to be very aware of
    • What other WAS instances (JVMs) are doing and their CPU / IO profiles
    • How much memory other WAS instances (or other OSes in a virtualized case) are using
    • Network utilization of other applications coexisting on the same hardware
  • In the supporting infrastructure
    • Varying network latency can drastically affect split cell topologies, cross-site data replication, and database query latency
      • Ensure network infrastructure is repeatable and robust
      • Don't take bandwidth or latency for granted before going into production; always test, as labs vary
    • Firewalls can cause issues with data transfer latencies between systems
  • On the database system
    • Ensure that proper indexes and tuning is done for the applications request patterns
    • Ensure that the database supports the number of connected clients your WAS runtime will have
    • Understand the CPU load and impacts of other applications (batch, OLTP, etc all competing with your applications)
  • On other application server systems or interactive server systems
    • Ensure performance of connected applications is up for the load being requested of it by the WAS system
    • Verify that developers have coded specific handling mechanisms for when connected applications go down (You need to avoid storm drain scenarios)

 Keep Application Logging to a Minimum 

  • Information outside of error cases should never be written to SystemOut.log
  • If using logging build your log messages only when needed
  • Good
    • if (loggingEnabled == true) { errorMsg = "This is a bad error" + " " + failingObject.printError(); }
  • Bad
    • errorMsg = "This is a bad error" + " " + failingObject.printError();
      if (loggingEnabled == true) { System.out.println(errorMsg); }
  • Keep error and log messages to the point and easy to debug
  • If using Apache Commons, Log4J, or other frameworks ensure performance on your system is as expected
  • Ensure if you must log information for audit purposes or other reasons that you are writing to a fast disk

Properly Tune the Operating System
  • Operating System is consistently overlooked for functional tuning as well as performance tuning.
  • Understand the hardware infrastructure backing your OS. Processor counts, speed, shared/unshared, etc
  • ulimit values need to be set correctly. The main player here is the number of open file handles (ulimit -n). Other process size and memory limits may need to be set based on the application (see the example at the end of this section)
  • Make sure NICs are set to full duplex and correct speeds
  • Large pages need to be enabled to take advantage of the -Xlp JDK parameter
  • If enabled by default check RAS settings on OS and tune them down
  • Configure TCP/IP timeouts correctly for your applications needs
  • Depending on the load being placed on the system look into advanced tuning techniques such as pinning WAS processes via RSET or TASKSET as well as pinning IRQ interrupts
WAS Throughput with processor pinning
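For example, on most UNIX and Linux systems you can check and raise the open file limit for the user that runs the application server as shown below; 65536 is a commonly used starting point, not a universal recommendation:

ulimit -n           # display the current open file limit
ulimit -n 65536     # raise it for the current shell (make it permanent in /etc/security/limits.conf)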

Wednesday, February 5, 2014

Websphere Message Broker - ESQL Utilities


You can find sample ESQL for aggregation, exception handling, email formation, sorting, and environment variable storage in the Broker PI attached below.

Download :  ESQLUtilitiesPI

Monday, December 30, 2013

How to Delete Process Applications from IBM BPM

As a new feature of the Business Process Manager V7.5.1 platform, process applications can now be deleted from the repository. In the previous versions of BPM, 7.5.0 and 7.5.0.1, users could only archive snapshots within a process application. This did not remove the application; it was merely hidden from the default view and was still stored in the repository and the database.
In the latest version of BPM, users can now actually delete the process application, and in turn, delete all snapshots and instances tied to that application, including deletion of these entries from the database.

To delete a process application, click on the process application that you want to delete and then click on Manage.
 
Next, click on Archive Process App and click on Archive.
 
 
Click on the Process Apps tab of Process Center and then click on Archived.
 
 
Click on the process app that you previously archived and select Delete Process App, click on Delete.
 
 
If you then return to the Process Apps tab in Process center, you will notice that the process app no longer appears in the list.
We can actually confirm that all database entries are also removed as part of the process app deletion. Here we can see that the entry for our application named TestApplication was added to the BPMDB in the table LSW_PROJECT.
 
 
After deleting the process app, this entry is no longer present in the table.  If the application contained snapshots and BPDs, these entries would also be removed from the LSW_SNAPSHOT and LSW_BPD tables respectively.
Please keep in mind that this cleanup only happens in Process Center. Currently, the product does not have the capability to clean up these components on the Process Server side. However, this is an important new feature to help keep your Process Center repository and its database clean and efficient.

Wednesday, December 18, 2013

View mqsisetdbparms settings in IBM Integration Bus

Example:

mqsisetdbparms IB9NODE -n ldap::myldap.com -u ldap01 -p ibm

mqsisetdbparms IB9NODE -n jdbc::mySecurityIdentity -u muthu -p ibm

mqsisetdbparms  Settings Location:
==========================
Windows XP:  C:\Documents and Settings\All Users\Application Data\IBM\MQSI\registry\IB9NODE\CurrentVersion\DSN

Windows 7  : C:\ProgramData\IBM\MQSI\registry\IB9NODE\CurrentVersion\DSN

Unix : /var/mqsi/registry/IB9NODE/DSN/
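Each identity set with mqsisetdbparms appears as an entry under the DSN directory, so listing that directory shows which resource names have credentials configured (the node name below matches the earlier examples):

ls -l /var/mqsi/registry/IB9NODE/DSN/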