HIGH AVAILABILITY
The OpenText Process Suite platform web gateway
As described before, when installing and configuring the OTPS platform for high availability, you install the OTPS platform baseline to run in primary distributed mode. Each OTPS platform instance runs its own web server and OTPS web gateway. Typically in these scenarios, you use an IP (Internet Protocol) load balancer to distribute the workload across the nodes of the cluster. Although both hardware-based and software-based IP load balancers can be applied, a hardware-based load balancer is recommended for its better performance and built-in high-availability features.
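To make the distribution concrete, the sketch below shows the round-robin selection an IP load balancer typically performs; the node addresses are hypothetical, and in practice the balancing is done by the network appliance, not by application code.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Conceptual sketch of round-robin distribution across OTPS gateway nodes. */
public class RoundRobinBalancer {
    private final List<String> nodes;           // hypothetical gateway addresses
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> nodes) {
        this.nodes = nodes;
    }

    /** Returns the node that should receive the next request. */
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), nodes.size());
        return nodes.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("otps-node1:443", "otps-node2:443"));
        for (int k = 0; k < 4; k++) {
            System.out.println("request " + k + " -> " + lb.pick());
        }
    }
}
```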
CARS – Cordys Admin Repository Server
An OTPS cluster consists of two or more instances (nodes) that are installed in primary distributed mode and distribute the load of web service requests between the different nodes of the cluster. Each instance has its own CARS repository, and these repositories are configured to run in multi-master replication mode. This ensures that the contents of the different master CARS repositories are synchronized over time; whenever one of the masters fails, requests can be handled by any of the other master repositories.
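From a client's point of view, this failover can be sketched with plain JNDI: the JDK's LDAP provider accepts a space-separated list of provider URLs and tries them in order, so a lookup can still succeed when one CARS master is down. The host names, bind DN, and password below are hypothetical.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

/** Sketch: connect to whichever CARS master repository is reachable. */
public class CarsFailoverLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Space-separated URLs: the JDK LDAP provider tries each in turn.
        env.put(Context.PROVIDER_URL, "ldap://otps-node1:389 ldap://otps-node2:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=cordys");  // hypothetical DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);  // fails over automatically
        System.out.println("Connected via: " + ctx.getEnvironment().get(Context.PROVIDER_URL));
        ctx.close();
    }
}
```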
Alternatively, multiple OTPS instances can share one and the same CARS repository, but note that this creates a single point of failure: you then have to ensure that the CARS repository runs on a highly available LDAP server, so that its contents are replicated and data loss is prevented. Documentation is available in a separate CARS administrator's guide.
SOA - Service-Oriented Architecture grid
The SOA grid of the OTPS platform is created by defining any number of service containers. A service container is a JVM (Java Virtual Machine) that handles web service requests. Service containers are grouped into service groups. The service container reads the associated native implementation of the web service operation from the CARS repository and forwards it to the associated application connector. The application connector executes the native (source) code, and any result is passed back to the service container, which returns it to the client that requested the web service execution.
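The request flow above can be summarised in a small sketch; the connector interface and the map standing in for the CARS repository are hypothetical simplifications of the platform's internals.

```java
import java.util.Map;

/** Conceptual sketch of how a service container dispatches a web service request. */
public class ServiceContainerSketch {
    /** Stands in for the application connector that runs the native implementation. */
    interface ApplicationConnector {
        String execute(String nativeImplementation, String requestPayload);
    }

    // Hypothetical stand-in for the implementations stored in the CARS repository.
    private final Map<String, String> carsRepository;
    private final ApplicationConnector connector;

    ServiceContainerSketch(Map<String, String> cars, ApplicationConnector connector) {
        this.carsRepository = cars;
        this.connector = connector;
    }

    /** Reads the operation's implementation and forwards it to the connector. */
    String handle(String operation, String payload) {
        String impl = carsRepository.get(operation);
        return connector.execute(impl, payload);   // result flows back to the client
    }

    public static void main(String[] args) {
        ServiceContainerSketch container = new ServiceContainerSketch(
                Map.of("GetOrder", "SELECT * FROM orders WHERE id = ?"),
                (impl, payload) -> "executed [" + impl + "] with " + payload);
        System.out.println(container.handle("GetOrder", "<id>42</id>"));
    }
}
```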
When you define a service container, you start by creating a service group to group similar service containers. At the level of the service group, you associate the web service interfaces (libraries of web service operations). This information is stored in the CARS repository: the service group with its web service interfaces, the web service operations, and their implementations. After you have defined a service container, you can quickly clone it within the same service group and run the clone on another node of the cluster. By cloning service containers, you create multiple service containers that run the same type of web services across the different nodes of the cluster. The load between the different service containers is divided by the load-balancing algorithm specified at the service group level.
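Below is a minimal sketch of group-level balancing, assuming a hypothetical strategy interface; the real platform offers its own set of load-balancing algorithms per service group.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Sketch: group-level selection over cloned service containers on different nodes. */
interface BalancingStrategy {
    String select(List<String> containers);   // returns the chosen container's address
}

public class ServiceGroupDispatch {
    public static void main(String[] args) {
        List<String> clones = List.of("container@node1", "container@node2");
        // Hypothetical "random" algorithm; the platform configures its own per group.
        BalancingStrategy random =
                cs -> cs.get(ThreadLocalRandom.current().nextInt(cs.size()));
        for (int k = 0; k < 3; k++) {
            System.out.println("request " + k + " -> " + random.select(clones));
        }
    }
}
```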
When starting any of the nodes, each node starts the OTPS monitor service, which in turn starts the other service containers of that node that are defined to start automatically. While running, the OTPS monitor service keeps track of the state of the available service containers of that node. This state information, together with any registered problems, is shared and maintained by means of an SSU (State Sync-Up) based framework. You can run the CMC (Cordys Management Console) from any node and use the option “State SyncUp” to explore the currently registered state of all the service containers across the different nodes of the cluster. Whenever the monitor service discovers that a service container has failed, for whatever reason, and the service container is defined to start automatically with cluster node start-up, the monitor tries to restart the service container.
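The restart behaviour can be sketched as a simple watchdog loop; the shared state table and the restart call are hypothetical placeholders for the State Sync-Up framework and the monitor's real restart logic.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch: a node-local watchdog that restarts failed auto-start containers. */
public class MonitorServiceSketch {
    // Hypothetical shared state table, standing in for the State Sync-Up framework.
    private final Map<String, Boolean> running = new ConcurrentHashMap<>();

    void watch(String container, boolean autoStart) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            boolean alive = running.getOrDefault(container, false);
            if (!alive && autoStart) {
                System.out.println("Restarting " + container);
                running.put(container, true);   // placeholder for the real restart call
            }
        }, 0, 10, TimeUnit.SECONDS);            // poll interval chosen for illustration
    }

    public static void main(String[] args) {
        new MonitorServiceSketch().watch("BPM-container-1", true);
    }
}
```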
The OTPS platform instance database(s)
With any OTPS platform instance, one or more databases are used to store the relevant data of the running instance. Typically, these databases should run in high availability mode as well. For the configuration, refer to the documentation of the RDBMS (Relational Database Management System) used: MS SQL Server, Oracle, or MySQL. Notice that, depending on the purpose of the cluster (production, development, test, or acceptance), the OTPS platform instance may use different databases: a system database for storing details on the whole system, and a separate BAM repository for monitoring the running process instances. In the case of a development cluster, the CWS contents can be saved in a separate database to enable backup and recovery of only the development efforts.
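On the client side, most JDBC drivers can already be pointed at a highly available database; for example, MySQL Connector/J accepts a comma-separated host list and the Microsoft SQL Server driver a failover partner. The host names and credentials below are hypothetical, and the exact driver behaviour should be checked against the vendor documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

/** Sketch: JDBC URLs that fail over between database nodes (hypothetical hosts). */
public class HaDatabaseConnect {
    public static void main(String[] args) throws SQLException {
        // MySQL Connector/J: tries db-node1 first, falls back to db-node2.
        String mysqlUrl = "jdbc:mysql://db-node1,db-node2/otps_system";

        // Microsoft SQL Server: mirrored database with a failover partner.
        String mssqlUrl = "jdbc:sqlserver://db-node1;databaseName=otps_system;"
                + "failoverPartner=db-node2";

        try (Connection con = DriverManager.getConnection(mysqlUrl, "otps", "secret")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}
```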
The OTPS platform instance local file system
The local file system of the OTPS platform instance holds all the platform files of the OTPS instance. When deploying applications on the nodes of the cluster, some of the component contents of the application are deployed on the local file system as well. Ensure that the local file system is scalable and that it is included in any backup and recovery scenario. In addition to the local file system, you can also use a shared file system for storing and retrieving any content that is used by the applications.
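For the shared file system, application content is simply read and written through a mount point that all nodes can see; the path below is a hypothetical example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Sketch: storing application content on a shared mount visible to all nodes. */
public class SharedContentStore {
    public static void main(String[] args) throws IOException {
        Path shared = Path.of("/mnt/otps-shared/app-content");  // hypothetical mount
        Files.createDirectories(shared);
        Path file = shared.resolve("template.xml");
        Files.writeString(file, "<template/>");
        System.out.println("Stored: " + Files.readString(file));
    }
}
```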
When configuring the OpenText Process Suite platform for scalability and/or high availability, you have two areas to consider:
The OTPS platform constituents discussed before, such as the OTPS web gateway, the CARS repository, the OTPS system database, and the other configured databases for storing BAM data, CWS workspaces, etc.
The non-OTPS platform parts, such as the file system of the OTPS platform server(s) and the database management system that manages the different data stores used by the OTPS platform.
Make sure that the description of your high availability configuration also includes backup and recovery strategies, for both the OTPS platform constituents and the non-OTPS platform parts mentioned above.
List of abbreviations
Abbreviation | Description |
ANSI | American National Standards Institute |
BAM | Business Activity Monitoring |
BER | Business Event Response |
BPML | Business Process Modeling Language |
BPMN | Business Process Modeling Notation |
BPMS | Business Process Management Suite (or System) |
CAF | Composite Application Framework file extension |
CAL | Composite Application Logging (framework) |
CAP | Cordys / Composite Application Package (file extension) |
CARS | Cordys Admin Repository Server |
CMC | Cordys Management Console |
CRUD | Create, Read, Update and Delete, data manipulation operations with a relational database |
CWS | Collaborative Work Space |
DTAP | Development, Testing, Acceptance and Production |
ESB | Enterprise Service Bus |
HW | HardWare |
IDE | Integrated Development Environment |
IP | Internet Protocol |
JAR | Java ARchive file extension |
JVM | Java Virtual Machine |
KPI | Key Performance Indicator |
LDAP | Lightweight Directory Access Protocol |
OMG | Object Management Group |
OTPS | OpenText Process Suite |
PIM | Process Instance Manager |
PMO | Process Monitoring Object |
RDBMS | Relational DataBase Management System |
SCM | Software Configuration Management |
SCXML | State Chart XML |
SOA | Service-Oriented Architecture |
SOAP | Simple Object Access Protocol |
SQL | Structured Query Language |
SSU | State Sync-Up |
SVN | SubVersioN |
SW | SoftWare |
W3C | World Wide Web Consortium |
WfMC | Workflow Management Coalition |
WSDL | Web Services Description Language |
WSI | Web Service Interface |
WSO | Web Service Operation |
XML | eXtensible Markup Language |
XPDL | XML Process Definition Language |