Monday, April 14, 2014

Global Cache In IBM Integration Bus 9


Cache topology:

Cache topology is the set of catalog servers, containers, and client connections that collaborate to form a global cache. When using the default policy, the first execution group to start performs the role of catalog server and container server (call this Role 1). The next three execution groups to start perform the role of container servers (Roles 2, 3, and 4). No other execution groups host catalogs or containers, but all execution groups (including those performing Roles 1, 2, 3, and 4) host client connections to the global cache. When you restart the broker, the execution groups may start in a different order, in which case different execution groups might perform Roles 1, 2, 3, and 4. The cache topology still contains one catalog server, up to four containers, and multiple clients, but the distribution of those roles across your execution groups will vary.


Global Cache in Message Broker:

A longstanding WebSphere Message Broker requirement has been a mechanism for sharing data between different processes. This requirement is most easily explained in the context of an asynchronous request/reply scenario. In this kind of scenario, a broker acts as an intermediary between a number of client applications and a back-end system. Each client application sends, via the broker, request messages that contain correlating information to be included in any subsequent replies. The broker forwards the messages on to the back-end system, and then processes the responses from that system. To complete the round-trip, the broker has to insert the correlating information back into the replies and route them back to the correct clients.
When the flows are contained within a single broker, there are a few options for storing the correlating information, such as a database, or a store queue where an MQGet node is used to retrieve the information later. If you need to scale this solution horizontally and add brokers to handle an increase in throughput, then a database is the only reasonable option.

The WebSphere Message Broker global cache is implemented using embedded WebSphere eXtreme Scale (WebSphere XS) technology. By hosting WebSphere XS components, the JVMs embedded within WebSphere Message Broker execution groups can collaborate to provide a cache. For a description of WebSphere XS, see Product overview in the WebSphere XS information center. Here are some of the key components in the WebSphere XS topology:
Catalog server
Controls placement of data and monitors the health of containers.
Container server
A component embedded in the execution group that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once.
Map
A data structure that maps keys to values. The global cache has a default map, but it can contain several maps.
Each execution group can host a WebSphere XS catalog server, container server, or both. Additionally, each execution group can make a client connection to the cache for use by message flows. The global cache works out of the box, with default settings, and no configuration -- you just switch it on! You do not need to install WebSphere XS alongside the broker, or any other additional components or products.


The default scope of one cache is across one broker. To enable this, switch the broker-level policy property on the GlobalCache tab of Message Broker Explorer to Default and restart. This causes each execution group to assume a role in the cache dynamically on startup. The first execution group to start will be a catalog and container server, using the first four ports from the supplied port range (a port range will have been generated for you, but you can modify this). For more details on the port range, see Frequently asked questions below. The second, third, and fourth execution groups (if present) will be container servers, each using three ports from the range. Any execution groups beyond the fourth one will not host cache components, but will connect as clients to the cache hosted in execution groups 1-4. The diagram below shows the placement of servers, and the client connections, for a single-broker cache with six execution groups:
Single-broker cache with default policy
Picture that shows a single broker, with six execution groups collaborating to provide a cache.

You can extend the cache to multiple brokers by using a cache policy file. Three sample policy files are included in the product install, in the sample/globalcache directory. You can simply alter the policy file to contain all the brokers you want to collaborate in a single cache, then point the broker-level cache policy property at this file. Here is this setting in Message Broker Explorer:
Configuring a cache policy file in Message Broker Explorer
The file lets you nominate each broker to host zero, one, or two catalog servers, and the port range that each broker should use for its cache components. The following diagram shows a two-broker cache, with both brokers configured to contain catalog servers:
Two-broker cache controlled by policy file
Picture that shows two brokers collaborating to provide a single cache available to both of them.





Message Flow Interaction with Cache:

The message flow has new, simple artifacts for working with the global cache, and is not immediately aware of the underlying WebSphere XS technology or topology. Specifically, the Java Compute node interface has a new MbGlobalMap object, which provides access to the global cache. This object handles client connectivity to the global cache, and provides a number of methods for working with maps in the cache. The methods available are similar to those you would find on regular Java maps. Individual MbGlobalMap objects are created by using a static getter on the MbGlobalMap class, which acts as a factory mechanism. You can work with multiple MbGlobalMap objects at the same time, and create them either anonymously (which uses a predefined default map name under the covers in WebSphere XS), or with any map name of your choice. In the examples below, defaultMap will work with the system-defined default map within the global cache. myMap will work with a map called myMap, and will create this map if it does not already exist in the cache.
Sample MbGlobalMap objects in Java Compute
MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();
MbGlobalMap myMap = MbGlobalMap.getGlobalMap("myMap");
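
For example, in the asynchronous request/reply scenario described earlier, the request flow can store the reply destination against a correlation ID and the reply flow can look it up again. The snippet below is a minimal sketch for the evaluate() method of a JavaCompute node; the map name, correlation ID, and queue name are illustrative values, and only the get, put, and remove methods of MbGlobalMap are used:

// Request flow: remember where the reply for this correlation ID must go.
MbGlobalMap correlationMap = MbGlobalMap.getGlobalMap("correlationMap");
String msgId = "ABC123";                           // correlation ID taken from the request message
correlationMap.put(msgId, "CLIENT.REPLY.QUEUE");   // illustrative reply destination

// Reply flow: retrieve the stored destination and clean up the entry.
String replyQueue = (String) correlationMap.get(msgId);
correlationMap.remove(msgId);
// ... route the reply message to replyQueue ...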


Download Sample Global Cache Policy Files:

 Policy_multi_instance.xml

Policy_one_broker_ha.xml

Policy_two_brokers_ha.xml

Policy_two_brokers.xml








Thursday, April 10, 2014

WebSphere Application Server Performance Tuning Recommendations

Perform Proper Load Testing

  • Properly load testing your application is the most critical thing you can do to ensure a rock solid runtime in production.
  • Replicating your production environment isn’t always 100% necessary as most times you can get the same bang for your buck with a single representative machine in the environment
    • Calculate expected load across the cluster and divide down to single machine load
    • Drive load and perform the usual tuning loop to resolve the parameter set you need to tweak and tune.
    • Look at the load on the database system, network, and so on, and extrapolate whether they will support the full system's load; if not, or if there are questions, test them
  • Performance testing needs to be representative of patterns that your application will actually be executing
  • Proper performance testing keeps track of and records key system level metrics as well as throughput metrics for reference later when changes to hardware or application are needed.
  • Always over-stress your system. Push the hardware and software to the max and find the breaking points. 
  • Only once you have done real world performance testing can you accurately size the complete set of hardware required to execute your application to meet your demand.

Correctly Tune The JVM

  • Correctly tuning the JVM in most cases will get you nearly 80% of the possible max performance of your application  
  • The big area to focus on for JVM tuning is heap size
    • Monitor verbose:gc and target GCing no more than once every 10 seconds with a max GC pause of a second or less.
    • Incremental testing is required to get this area right running with expected customer load on the system
    • Only after you have the above boundary layers met for GC do you want to start to experiment with differing garbage collection policies
  • Beyond the heap size settings, most other parameters exist to extract maximum possible performance OR to ensure that the JVM cooperates nicely with other JVMs on the system it is running on 
  • The Garbage Collector Memory Visualizer is an excellent tool for diagnosing GC issues or refining JVM performance tuning.
    • Provided as a downloadable plug-in within the IBM Support Assistant
Garbage Collection Memory Visualizer (GCMV)
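
Verbose:gc output (analyzed with GCMV) is the primary way to check the once-every-10-seconds / sub-second-pause targets above, but you can also take a rough programmatic reading of the same counters through the standard JMX beans. A minimal stand-alone sketch (plain Java, not WAS-specific):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcSnapshot {
    public static void main(String[] args) {
        // Print cumulative collection counts and accumulated collection time per collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", totalTimeMs=" + gc.getCollectionTime());
        }
    }
}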

Ensure Uniform Configuration Across Like Servers

  • Non-uniform configuration of software parameters and even operating systems is a common stumbling block
  • Most times this manifests itself as a single machine or process that is burning more CPU or memory, or garbage collecting more frequently
  • Easiest way to manage this is to have a “dump configuration” script that runs periodically
  • Store the script's results, and track differences after each configuration change or application upgrade
  • Leverage the Visual Configuration Explorer (VCE) tool available within ISA
Visual Configuration Explorer (VCE)

 Create Cells To Group Like Applications

  • Create Cells and Clusters of application servers with an express purpose that groups them in some manner
  • Large cells (400, 500, or 1000 members), while supported, don't make sense for the most part
  • Group applications that need to replicate data to each other or talk to each other via RMI, etc and create cells and clusters around those commonalities. 
  • Keeping cell size smaller leads to more efficient resource utilization due to less network traffic for configuration changes, DRS, HAManager, etc.
    • For example, core groups should be limited to no more than 40 to 50 instances
  • Smaller cells and logical grouping make migration forward to newer versions of products easier and more compartmentalized.
  
Tune JDBC Data Sources

  • Correct database connection pool tuning can yield significant gains in performance
  • This pool is highly contended in heavily multithreaded applications so ensuring significant available connections are in the pool leads to superior performance.
  • Monitor PMI metrics via TPV or others tools to watch for threads waiting on connections to the database as well as their wait time.
    • If threads are waiting increase the number of pooled connections in conjunction with your DBA OR decrease the number of active threads in the system
    • In some cases, a one-to-one mapping between DB connections and threads may be ideal
  • Frequently database deadlocks or bottlenecks first manifest themselves as a large number of threads from your thread pool waiting for connections
  • Always use the latest database driver for the database you are running, as performance optimizations in this space between driver versions are significant
  • Tune the Prepared Statement Cache Size for each JDBC data source
    • Can also be monitored via PMI/TPV to determine ideal value
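
For context on why the connection pool and the prepared statement cache matter, the sketch below shows the usual container-managed pattern: look up the pooled data source by its JNDI name and use PreparedStatement, so WAS can reuse both the connection and the prepared statement. The JNDI name, table, and SQL are illustrative only:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderStatusDao {
    public String findStatus(int orderId) throws Exception {
        // "jdbc/OrdersDS" is an illustrative JNDI name for a WAS-defined data source.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/OrdersDS");

        // try-with-resources returns the connection to the (highly contended) pool
        // promptly, so other threads are not left waiting on it.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT STATUS FROM ORDERS WHERE ORDER_ID = ?")) {
            ps.setInt(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}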

Correctly Tune Thread Pools

  • Thread pools and their corresponding threads control all execution on the hardware threads.
  • Understand which thread pools your application uses and size all of them appropriately based on utilization you see in tuning exercises
    • Thread dumps, PMI metrics, etc will give you this data 
    • Thread Dump Memory Analyzer and Tivoli Performance viewer (TPV) will help in viewing this data.
  • Think of the thread pool as a queuing mechanism to throttle how many active requests you will have running at any one time in your application.
    • Apply the funnel based approach to sizing these pools
      • Example: IHS (1000) -> WAS (50) -> WAS DB connection pool (30); see the sketch after this list
      • Thread numbers above vary based on application characteristics
    • Since you can throttle active threads you can control concurrency through your codebase
  • Thread pools need to be sized with the total number of hardware processor cores in mind
    • If sharing a hardware system with other WAS instances, thread pools have to be tuned with that in mind.
    • You more than likely need to cut back on the number of active threads to ensure good performance for all applications, because the OS context-switches for every thread in the system
    • Sizing or restricting the maximum number of threads an application can have can sometimes be used to prevent rogue applications from impacting others.
  • Default sizes for WAS thread pools on v6.1 and above are actually a little too high for best performance
    • Two to one ratio (threads to cores) typically yields the best performance but this varies drastically between applications and access patterns
TPV & TDMA tool snapshots
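
WAS thread pools themselves are sized in the administrative console, not in application code, but the funnel/throttling idea above can be illustrated with a plain Java executor whose pool and queue are deliberately bounded (all of the numbers below are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FunnelDemo {
    public static void main(String[] args) {
        // 50 worker threads play the role of the WAS web container pool; the bounded
        // queue makes excess requests wait instead of flooding downstream resources.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50, 50, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when the queue is full

        for (int i = 0; i < 2000; i++) {
            final int requestId = i;
            pool.execute(() -> System.out.println("Handled request " + requestId));
        }
        pool.shutdown();
    }
}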

Minimize HTTP Session Content

  • High performance data replication for application availability depends on correctly sized session data
    • Keep it under 1MB in all cases if possible
  • Only information critical to that user's specific interaction with the server should be stored (see the sketch after this list)
  • If composite data is required build it progressively as the interaction occurs
    • Configure Session Replication in WAS to meet your needs
    • Use different configuration options (async vs. synch) to give you the availability your application needs without compromising response time.
    • Select the replication topology that works best for you (DB, M2M, M2M Server) 
    • Keep replication domains small and/or partition where possible
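
A small illustration of keeping the session payload minimal, using the standard servlet API; the servlet, attribute name, and summary object are made up for the example:

import java.io.Serializable;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CartServlet extends HttpServlet {
    // Small, serializable holder for the few facts the next request actually needs.
    static class CartSummary implements Serializable {
        private static final long serialVersionUID = 1L;
        int itemCount;
        String lastViewedSku;
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        CartSummary summary = new CartSummary();
        summary.itemCount = 3;
        summary.lastViewedSku = "SKU-1234";

        // Store the compact summary, not a whole catalog or result set, so the
        // session stays well under the 1MB guideline and replicates cheaply.
        HttpSession session = req.getSession();
        session.setAttribute("cartSummary", summary);
    }
}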


Understand and Tune Infrastructure (databases & other interactive server systems)

  • WebSphere Application Server and the system it runs on is typically only one part of the datacenter infrastructure, and it has a good deal of reliance on other areas performing properly. Think of your infrastructure as a plumbing system: optimal drain performance only occurs when no pipes are clogged. 
  • On the WAS system itself you need to be very aware of
    • What other WAS instances (JVMs) are doing and their CPU / IO profiles
    • How much memory other WAS instances (or other OSs in a virtualized case) are using
    • Network utilization of other applications coexisting on the same hardware
  • In the supporting infrastructure
    • Varying network latency can drastically affect split cell topologies, cross site data replication, and database query latency
      • Ensure network infrastructure is repeatable and robust
      • Don't take bandwidth or latency for granted before going into production; always test, as labs vary
    • Firewalls can cause issues with data transfer latencies between systems
  • On the database system
    • Ensure that proper indexing and tuning are done for the application's request patterns
    • Ensure that the database supports the number of connected clients your WAS runtime will have
    • Understand the CPU load and impacts of other applications (batch, OLTP, etc all competing with your applications)
  • On other application server systems or interactive server systems
    • Ensure performance of connected applications is up for the load being requested of it by the WAS system
    • Verify that developers have coded specific handling mechanisms for when connected applications go down (You need to avoid storm drain scenarios)
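
A minimal sketch of the kind of defensive handling meant in the last point above: bound every call to a downstream system with explicit timeouts and a fallback, so a dead backend cannot drain the caller's thread pool. The URL and timeout values are illustrative:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownstreamClient {
    public String fetchQuote() {
        try {
            // Illustrative endpoint; real code would read this from configuration.
            HttpURLConnection con = (HttpURLConnection)
                    new URL("http://pricing.example.com/quote").openConnection();
            con.setConnectTimeout(2000); // fail fast instead of tying a thread up forever
            con.setReadTimeout(5000);
            try (InputStream in = con.getInputStream()) {
                return new String(in.readAllBytes());
            }
        } catch (IOException e) {
            // Degrade gracefully (cached value, default, or error response) rather
            // than letting every request thread hang during a backend outage.
            return null;
        }
    }
}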

 Keep Application Logging to a Minimum 

  • Nothing other than error information should ever be written to SystemOut.log
  • If using logging build your log messages only when needed
  • Good
    • if (loggingEnabled) { errorMsg = "This is a bad error " + failingObject.printError(); System.out.println(errorMsg); }
  • Bad 
    • errorMsg = "This is a bad error" + " " + failingObject.printError();
      if (loggingEnabled) { System.out.println(errorMsg); }
  • Keep error and log messages to the point and easy to debug
  • If using Apache Commons, Log4J, or other frameworks ensure performance on your system is as expected
  • Ensure if you must log information for audit purposes or other reasons that you are writing to a fast disk
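
The same guard pattern expressed with java.util.logging, where isLoggable plays the role of the loggingEnabled flag above; the class, logger, and message are illustrative:

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {
    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    void process(String orderId) {
        // The message string is only built when FINE logging is actually enabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Processing order " + orderId + " at " + System.nanoTime());
        }
        // ... business logic ...
    }
}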

Properly Tune the Operating System
  • The operating system is consistently overlooked for functional tuning as well as performance tuning.
  • Understand the hardware infrastructure backing your OS. Processor counts, speed, shared/unshared, etc
  • ulimit values need to be set correctly. The main player here is the number of open file handles (ulimit -n). Other process-size and memory limits may need to be set based on the application
  • Make sure NICs are set to full duplex and correct speeds
  • Large pages need to be enabled to take advantage of the -Xlp JDK parameter
  • If enabled by default, check the RAS settings on the OS and tune them down
  • Configure TCP/IP timeouts correctly for your application's needs
  • Depending on the load being placed on the system look into advanced tuning techniques such as pinning WAS processes via RSET or TASKSET as well as pinning IRQ interrupts
WAS Throughput with processor pinning

Wednesday, February 5, 2014

Websphere Message Broker - ESQL Utilities


You can find sample ESQL for aggregation, exception handling, email formation, sorting, and environment-variable storage in the Broker PI attached below.

Download :  ESQLUtilitiesPI

Monday, December 30, 2013

How to Delete Process Applications from IBM BPM

As a new feature of the Business Process Manager V7.5.1 platform, process applications can now be deleted from the repository.  In the previous version of BPM, version 7.5.0 & 7.5.0.1, users could only archive snapshots within a process application.  This did not remove the application; rather it was merely hidden from the default view and it was still stored in the repository and the database.
In the latest version of BPM, users can now actually delete the process application, and in turn, delete all snapshots and instances tied to that application, including deletion of these entries from the database.

To delete a process application, click on the process application that you want to delete and then click on Manage.
 
Next, click on Archive Process App and click on Archive.
 
 
Click on the Process Apps tab of Process Center and then click on Archived.
 
 
Click on the process app that you previously archived and select Delete Process App, click on Delete.
 
 
If you then return to the Process Apps tab in Process center, you will notice that the process app no longer appears in the list.
We can actually confirm that all database entries are also removed as part of the process app deletion. Here we can see that the entry for our application named TestApplication was added to the BPMDB in the table LSW_PROJECT.
 
 
After deleting the process app, this entry is no longer present in the table.  If the application contained snapshots and BPDs, these entries would also be removed from the LSW_SNAPSHOT and LSW_BPD tables respectively.
Please keep in mind that this cleanup only happens in Process Center. Currently, the product does not have the capability to clean up these components on the Process Server side. However, this is an important new feature to help keep your Process Center repository and its database clean and efficient.

Wednesday, December 18, 2013

View mqsisetdbparms settings in IBM Integration Bus

Example:

mqsisetdbparms IB9NODE -n ldap::myldap.com -u ldap01 -p ibm

mqsisetdbparms IB9NODE -n jdbc::mySecurityIdentity -u muthu -p ibm

mqsisetdbparms  Settings Location:
==========================
Windows XP:  C:\Documents and Settings\All Users\Application Data\IBM\MQSI\registry\IB9NODE\CurrentVersion\DSN

Windows 7 : C:\ProgramData\IBM\MQSI\registry\IB9NODE\CurrentVersion\DSN


Unix : /var/mqsi/registry/IB9NODE/DSN/

Wednesday, December 4, 2013

Issue: Packages IBM WebSphere Process Server 7.0.0.0 and IBM WebSphere Integration Developer 7.0.0.0 cannot coexist in the same package group

Problem:

The package group name contains only IBM WEBSPHERE INTEGRATION SERVER but not APPLICATION SERVER (we need both in the package group name).

Solution:

1. Open the shortcut "IBM Installation Manager"
2. Click the "Install" icon, select Next, and click the Repository link.
3. It will take you to the repository page.
4. Add the repository.config file from ..\WID7.0\WTE_Disk\repository.
5. Add another repository.config file from C:\program files\ibm\Installation Manager\eclipse
6. Click Test Connection to verify a successful connection.
7. Exit and click the Import icon.
8. Select the default directory location C:\program files\ibm\WID7_WTE\runtimes\bi_v7.
9. Click Next and import it.
10. Now you are able to install the WebSphere Test Environment without any issue.


Friday, November 15, 2013

Restarting Application Servers With Nodeagent – WebSphere Application Server V7,V8

In WebSphere 7 or 8, by default, the Nodeagent will take no action if an application server fails.  To have the Nodeagent intervene and automatically restart a failed application server instance, the ‘monitoring policy’ must be set for that application server.  In the admin console, perform the following:

1. Navigate to the application server, then Java and Process Management -> Monitoring Policy
2. Check the box next to "Automatic Restart"
3. In the "Node Restart State", set the state to "STOPPED"


The actions above will allow the nodeagent to auto-restart a failed or killed application server. It is important that the "Node Restart State" be set to "STOPPED". If set to "RUNNING", not only will the nodeagent restart a failed or killed application server, but it will also autostart the application server upon a nodeagent restart. This may be unwanted in certain environments where application servers are only supposed to run at certain times, or if there is a specific application start order.
If you wish to have the nodeagent automatically start application servers when it comes online, set the state to "RUNNING".

Thursday, August 22, 2013

Stuck messages occur for WebSphere Process Server (WPS) and WebSphere Enterprise Service Bus (WESB) on SCA.SYSTEM.etail_cell01.Bus destinations.

Environment

The problem can occur in WebSphere Process Server or WebSphere Enterprise Service Bus-related modules that are deployed on a server or clustered environment. This problem occurs only for asynchronous messaging on SCA modules.

Diagnosing the problem

To diagnose the problem, monitor the queue destinations for the message counter and review the settings for the forward routing path of the affected queue destinations.
Monitor the queue destinations in an asynchronous communication scenario. If the number of messages on the queue only increases, then the messages are not routed forward. This scenario applies to destinations that do not have a defined forward routing path (except the module destination, which has an applied MDB, which reads the messages from the queue).

Use the Service Integration Bus Explorer to monitor module-related queues or review the number of messages on the queue using the administrative console:

Administrative console > Service integration > Buses > SCA.SYSTEM.cellName.Bus > Destinations > sca/destinationName > Queue points > destinationName@messagingEngineName > Runtime tab > Current message depth

Use the refresh button to get the current value of the message counter to monitor the behavior over time:

Stuck messages on the module destination queue (optional)
If messages remain on the module destination queue, verify that the activation specification of the MDB that is listening on that queue is configured correctly:

Administrative console > Resources > Resource adapters > J2C activation specifications > sca/moduleName/ActivationSpec > J2C activation specification custom properties

Check that busName refers to the correct SCA bus (SCA.SYSTEM.cellName.Bus) and that destinationName refers to the module destination (sca/moduleName).

Resolving the problem

Follow this procedure to resolve the problem:

  1. Identify the destination where messages are stuck.
  2. Set the correct forward routing path on the destination (set the path to the module destination).
  3. Save the master configuration. 
Example :

Destination Name : sca/PubProdMED/component/PubProdMF
ModuleName :  PubProdMED
Routing Path : sca/PubProdMED (the module destination)


The messages should immediately be routed to the specified target destination. You do not need to restart the server.

Tuesday, April 23, 2013

Deleting Records From Failed Event Manager In Websphere Application Server [ WESB & WPS ]

The WebSphere application server stores all exceptions in the failed event manager (FEM). If the FEM is full, then clearing FEM records through the admin console is not possible.

In this situation, we have two options to remove all FEM records.

Option 1:
=======

Delete FEM records from the WPSDB database. Run the queries below in order.

delete from FAILEDEVENTS;
delete from FAILEDEVENTBOTYPES;
delete from FAILEDEVENTDETAIL;


Option 2:
=======

Delete FEM records with a Jython script.

The script must be saved in a file and can be run on a single server profile as well as a
clustered environment (run on deployment manager) using the following command:

1. cd to the deployment manager (or profile) bin directory
2.   Run wsadmin.(bat|sh) -lang jython -f jythonScriptName -user wpsAdminUserName -password wpsAdminPassword

Ex: wsadmin.bat -lang jython -f /opt/Websphere/AppServer/scripts/FEMScript.py -user admin -password admin


FEMScript.py
===========
# lookup the failed event manager
objstr = AdminControl.completeObjectName('WebSphere:*,type=FailedEventManager')
obj = AdminControl.makeObjectName(objstr)

# count the overall number of failed events
fecount = AdminControl.invoke(objstr,"getFailedEventCount")
print "Failed event number before discarding: ", fecount

delnum = 100
fecount = int(fecount)

while (fecount > 0):
    if fecount < 100:
        delnum = fecount
        fecount = 0
    else:
        delnum = 100
        fecount = fecount - 100
      
    # get 100 failed events
    msglist = AdminControl.invoke_jmx(obj,'getAllFailedEvents',[delnum],['int'])

    # discard 100 events in single batch run
    print "Discarding ", delnum, " failed events"
    AdminControl.invoke_jmx(obj,'discardFailedEvents', [msglist],['java.util.List'])

# count the overall number of failed events
fecount2 = AdminControl.invoke(objstr,"getFailedEventCount")
print "Failed event number after discarding: ", fecount2




Tuesday, April 2, 2013

WPS: Unable to install application on server: "Application already exists on server". Module status is unknown in the SCA modules section.

Issue: I am trying to install an application on the server. The installation fails with the exception "already exists on server".
In the admin console, go to --> SCA Modules: my module is listed there, but its state is unknown. I tried to un-deploy it from there, but it cannot be un-deployed. I am also unable to remove the application from the server through the admin console.

Solution :
1. First, stop the server.
2. Remove the particular application from all the folders in the server profile.
3. One more main thing: go to the installation folder -->  \profiles\qbpmaps\config\cells\qcell, open the "cell-core.xml" file, and remove the SCAModules entry for that particular application.
4. Start the server.
Now you can deploy the same application successfully.


OR

You can also force the uninstall with the following command:

<>\bin\wsadmin.bat -user <> -password <> -lang jacl -f <>\ProcessChoreographer\admin\bpcTemplates.jacl -uninstall <> -force