Cache topology:
Cache topology is the set of catalog servers, containers, and client connections that collaborate to form a global cache. When using the default policy, the first execution group to start performs the roles of catalog server and container server (call this Role 1). The next three execution groups to start perform the role of container server (Roles 2, 3, and 4). No other execution groups host catalogs or containers, but all execution groups (including those performing Roles 1, 2, 3, and 4) host client connections to the global cache.
When you restart the broker, the execution groups may start in a different order, in which case different execution groups might perform Roles 1, 2, 3, and 4. The cache topology still contains one catalog server, up to four containers, and multiple clients, but the distribution of those roles across your execution groups will vary.
Global Cache in Message Broker:
A longstanding WebSphere Message Broker requirement has been a mechanism for sharing data between different processes. This requirement is most easily explained in the context of an asynchronous request/reply scenario. In this kind of scenario, a broker acts as an intermediary between a number of client applications and a back-end system. Each client application sends, via the broker, request messages that contain correlating information to be included in any subsequent replies. The broker forwards the messages on to the back-end system, and then processes the responses from that system. To complete the round trip, the broker has to insert the correlating information back into the replies and route them back to the correct clients.
When the flows are contained within a single broker, there are a few options for storing the correlating information, such as a database, or a store queue where an MQGet node is used to retrieve the information later. If you need to scale this solution horizontally and add brokers to handle an increase in throughput, then a database is the only reasonable option.
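To make the scenario concrete, here is a minimal sketch of the cache-based alternative, using the MbGlobalMap API that is introduced later in this article. The map name correlationMap, the class names, and the MQMD field paths are illustrative assumptions, and error handling is omitted.
Storing and retrieving correlating information (illustrative sketch)
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.*;

// Request flow: remember each client's reply destination, keyed by message ID.
public class StoreReplyDetails extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        MbElement root = assembly.getMessage().getRootElement();
        String msgId = root.getFirstElementByPath("MQMD/MsgId").getValueAsString();
        String replyToQ = root.getFirstElementByPath("MQMD/ReplyToQ").getValueAsString();

        // The entry becomes visible to every execution group (and broker)
        // that participates in the same global cache.
        MbGlobalMap correlationMap = MbGlobalMap.getGlobalMap("correlationMap");
        correlationMap.put(msgId, replyToQ);

        getOutputTerminal("out").propagate(assembly);
    }
}

// Reply flow, possibly running in a different execution group or broker:
// look up the stored destination and remove the entry once it has been used.
public class RetrieveReplyDetails extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        MbMessage outMessage = new MbMessage(assembly.getMessage());
        MbMessageAssembly outAssembly = new MbMessageAssembly(assembly, outMessage);
        MbElement root = outMessage.getRootElement();
        String correlId = root.getFirstElementByPath("MQMD/CorrelId").getValueAsString();

        MbGlobalMap correlationMap = MbGlobalMap.getGlobalMap("correlationMap");
        String replyToQ = (String) correlationMap.get(correlId);
        correlationMap.remove(correlId);

        root.getFirstElementByPath("MQMD/ReplyToQ").setValue(replyToQ);
        getOutputTerminal("out").propagate(outAssembly);
    }
}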
The WebSphere Message Broker global cache is implemented using embedded WebSphere eXtreme Scale (WebSphere XS) technology. By hosting WebSphere XS components, the JVMs embedded within WebSphere Message Broker execution groups can collaborate to provide a cache. For a description of WebSphere XS, see Product overview in the WebSphere XS information center. Here are some of the key components in the WebSphere XS topology:
- Catalog server: Controls placement of data and monitors the health of containers.
- Container server: A component embedded in the execution group that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once.
- Map: A data structure that maps keys to values. One map is the default map, but the global cache can have several maps.
By default, the scope of a cache is a single broker. To enable this, switch the broker-level policy property on the GlobalCache tab of Message Broker Explorer to Default and restart. This causes each execution group to assume a role in the cache dynamically on startup. The first execution group to start will be a catalog and container server, using the first four ports from the supplied port range (a port range will have been generated for you, but you can modify it). For more details on the port range, see Frequently asked questions below. The second, third, and fourth execution groups (if present) will be container servers, each using three ports from the range, so a fully populated single-broker cache uses 4 + 3 + 3 + 3 = 13 ports. Any execution groups beyond the fourth will not host cache components, but will connect as clients to the cache hosted in execution groups 1-4. The diagram below shows the placement of servers, and the client connections, for a single-broker cache with six execution groups:
Single-broker cache with default policy
You can extend the cache to multiple brokers by using a cache policy file. Three sample policy files are included in the product install, in the sample/globalcache directory. Alter the policy file to contain all the brokers that you want to collaborate in a single cache, then point the broker-level cache policy property at this file. Here is the setting in Message Broker Explorer:
Configuring a cache policy file in Message Broker Explorer
The file lets you nominate each broker to host zero, one, or two catalog servers, and the port range that each broker should use for its cache components. The following diagram shows a two-broker cache, with both brokers configured to contain catalog servers:
Two-broker cache controlled by policy file
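As a rough guide, the sample policy files follow the shape sketched below. This sketch is based on the supplied samples rather than a normative schema definition; the broker names, listener hosts, and port ranges are placeholders to replace with your own values.
Sketch of a two-broker cache policy file
<?xml version="1.0" encoding="UTF-8"?>
<cachePolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:noNamespaceSchemaLocation="policy.xsd">
  <!-- One catalog server on each broker, for availability -->
  <broker name="BROKER1" listenerHost="host1.example.com">
    <catalogs>1</catalogs>
    <portRange>
      <startPort>2800</startPort>
      <endPort>2819</endPort>
    </portRange>
  </broker>
  <broker name="BROKER2" listenerHost="host2.example.com">
    <catalogs>1</catalogs>
    <portRange>
      <startPort>2820</startPort>
      <endPort>2839</endPort>
    </portRange>
  </broker>
</cachePolicy>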
Message Flow Interaction with Cache:
The message flow has new, simple artifacts for working with the global cache, and is not immediately aware of the underlying WebSphere XS technology or topology. Specifically, the Java Compute node interface has a new MbGlobalMap object, which provides access to the global cache. This object handles client connectivity to the global cache, and provides a number of methods for working with maps in the cache. The methods available are similar to those you would find on regular Java maps. Individual MbGlobalMap objects are created by using a static getter on the MbGlobalMap class, which acts as a factory mechanism. You can work with multiple MbGlobalMap objects at the same time, and create them either anonymously (which uses a predefined default map name under the covers in WebSphere XS) or with any map name of your choice. In the examples below, defaultMap will work with the system-defined default map within the global cache. myMap will work with a map called myMap, and will create this map if it does not already exist in the cache.
Sample MbGlobalMap objects in Java Compute
MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();
MbGlobalMap myMap = MbGlobalMap.getGlobalMap("myMap");
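Once created, the objects can be used much like ordinary Java maps. The keys and values below are illustrative; note that each of these methods can throw MbException, so in a Java Compute node you would normally call them inside evaluate() and let the node handle any exception.
Basic map operations (illustrative)
myMap.put("order-123", "replyQueue01");             // create an entry
if (myMap.containsKey("order-123")) {
    String value = (String) myMap.get("order-123"); // read it back
    myMap.update("order-123", "replyQueue02");      // replace the value
    myMap.remove("order-123");                      // delete the entry
}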
Download Sample Global Cache Policy Files:
Policy_multi_instance.xml
Policy_one_broker_ha.xml
Policy_two_brokers_ha.xml
Policy_two_brokers.xml