Sunday, 31 January 2021

JMS Server Concepts

 JMS

The Java Message Service (JMS) API is a Java Message Oriented Middleware (MOM) API for sending messages between two or more clients. JMS is a part of the Java Platform, Enterprise Edition. It is a messaging standard that allows application components based on the Java 2 Platform, Enterprise Edition (J2EE) to create, send, receive, and read messages. It allows the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous. 

The following are JMS elements:

JMS provider

An implementation of the JMS interface for a Message Oriented Middleware (MOM). Providers are implemented as either a Java JMS implementation or an adapter to a non-Java MOM.

JMS client

An application or process that produces and/or receives messages.

JMS producer/publisher

A JMS client that creates and sends messages.

JMS consumer/subscriber

A JMS client that receives messages.

JMS message

An object that contains the data being transferred between JMS clients.

JMS queue

A staging area that contains messages that have been sent and are waiting to be read. Note that, contrary to what the name queue suggests, messages do not have to be delivered in the order sent. A JMS queue guarantees only that each message is processed once.

JMS topic

A distribution mechanism for publishing messages that are delivered to multiple subscribers.

The JMS API supports two models:

  • Point-to-point
  • Publish and subscribe

Point-to-point model:

In the point-to-point model, a sender posts messages to a particular queue and a receiver reads messages from the queue. Here, the sender knows the destination of the message and posts the message directly to the receiver's queue. This model is characterized by the following:

  • Only one consumer gets the message.
  • The producer does not have to be running at the time the consumer consumes the message, nor does the consumer need to be running at the time the message is sent.
  • Every message successfully processed is acknowledged by the consumer.
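These semantics can be sketched with a plain in-memory queue. This is a provider-independent illustration of the point-to-point model, not the JMS API itself:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy point-to-point queue: each message is delivered to exactly one consumer.
public class PtpQueue {
    private final Queue<String> messages = new ArrayDeque<>();

    // The producer need not be running when the consumer reads, and vice versa:
    // the queue stages the message in between.
    public void send(String message) {
        messages.add(message);
    }

    // poll() removes the message, so no second consumer can receive it.
    public String receive() {
        return messages.poll();
    }
}
```

Once one consumer has received a message, a second call to receive() returns nothing, which is exactly the "only one consumer gets the message" property above.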



Publisher/Subscriber Model:

The publish/subscribe model supports publishing messages to a particular message topic. Subscribers may register interest in receiving messages on a particular message topic. In this model, neither the publisher nor the subscriber knows about each other. A good analogy for this is an anonymous bulletin board. The following are characteristics of this model:
  • Multiple consumers (or none) will receive the message.
  • There is a timing dependency between publishers and subscribers. The publisher has to create a message topic to which clients can subscribe. The subscriber has to remain continuously active to receive messages, unless it has established a durable subscription. In that case, messages published while the subscriber is not connected will be redistributed whenever it reconnects.
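The fan-out behavior can be sketched the same way: a toy topic that copies each message into every subscriber's inbox. Again, this illustrates the semantics rather than the JMS API; a durable subscription would additionally persist the inbox across disconnects.

```java
import java.util.ArrayList;
import java.util.List;

// Toy topic: every registered subscriber receives its own copy of each message.
public class ToyTopic {
    private final List<List<String>> inboxes = new ArrayList<>();

    // Subscribing returns the inbox that will collect future messages.
    // Messages published before subscribing are not seen (the timing dependency).
    public List<String> subscribe() {
        List<String> inbox = new ArrayList<>();
        inboxes.add(inbox);
        return inbox;
    }

    public void publish(String message) {
        for (List<String> inbox : inboxes) {
            inbox.add(message);
        }
    }
}
```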
Using Java, JMS provides a way of separating the application from the transport layer that provides the data. The same Java classes can be used to communicate with different JMS providers by using the JNDI information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then use it to populate and send or publish the messages. On the receiving side, the clients then receive or subscribe to the messages.






Distributed Queue:

Many producers can serialize messages to multiple receivers in a queue.

Distributed Topic:

Publishing and subscribing to a topic decouples producers from consumers.



JMS Architecture: 

Step 1) The client looks up a connection factory in JNDI.
Step 2) WebLogic locates the connection factory in the JNDI tree and returns it to the client. The client creates a connection and establishes a session.
Step 3) The client looks up a destination (queue or topic) in WebLogic's JNDI tree.
Step 4) WebLogic returns the destination.
Step 5) Once the destination has been identified, the message is placed on that queue or topic.
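Steps 1 through 4 can be sketched with JNDI. The provider URL, JNDI names, and factory class below are assumptions for illustration; the actual JMS calls in step 5 require the WebLogic/JMS client jar on the classpath. Run outside a container without those classes, the lookup fails with a NamingException, which the sketch simply reports.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JmsLookupSketch {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        // Assumed values: WebLogic's initial context factory and a local admin server.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        try {
            Context ctx = new InitialContext(env);
            Object factory = ctx.lookup("jms/MyConnectionFactory"); // steps 1-2
            Object destination = ctx.lookup("jms/MyQueue");         // steps 3-4
            // Step 5: cast these to javax.jms.ConnectionFactory / Destination,
            // create a connection and session, and send the message.
            System.out.println("Looked up " + factory + " and " + destination);
        } catch (NamingException e) {
            // Without the WebLogic client classes on the classpath, we land here.
            System.out.println("Lookup failed: " + e.getMessage());
        }
    }
}
```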









Load Balancing in a Cluster

 Oracle WebLogic Server clusters provide load balancing support for different types of objects. 

  • Load Balancing for Servlets and JSPs
  • Load Balancing for EJBs and RMI Objects
  • Load Balancing for JMS

Load Balancing for Servlets and JSPs:

You can accomplish load balancing of servlets and JSPs with the built-in load balancing capabilities of a WebLogic proxy plug-in or with separate load balancing hardware.

Load Balancing with a Proxy Plug-in

The WebLogic proxy plug-in maintains a list of WebLogic Server instances that host a clustered servlet or JSP, and forwards HTTP requests to those instances on a round-robin basis. This load balancing method is described in Round-Robin Load Balancing.

The plug-in also provides the logic necessary to locate the replica of a client's HTTP session state if a WebLogic Server instance should fail.

WebLogic Server supports the following Web servers and associated proxy plug-ins:
  • WebLogic Server with the HttpClusterServlet
  • Netscape Enterprise Server with the Netscape (proxy) plug-in
  • Apache with the Apache Server (proxy) plug-in
  • Microsoft Internet Information Server with the Microsoft-IIS (proxy) plug-in
Load Balancing HTTP Sessions with an External Load Balancer

Clusters that employ a hardware load balancing solution can use any load balancing algorithm supported by the hardware. These can include advanced load-based balancing strategies that monitor the utilization of individual machines.

Load Balancer Configuration Requirements

If you choose to use load balancing hardware instead of a proxy plug-in, it must support a compatible passive or active cookie persistence mechanism, and SSL persistence.

Passive Cookie Persistence

Passive cookie persistence enables WebLogic Server to write a cookie containing session parameter information through the load balancer to the client.

Active Cookie Persistence

You can use certain active cookie persistence mechanisms with WebLogic Server clusters, provided the load balancer does not modify the WebLogic Server cookie. WebLogic Server clusters do not support active cookie persistence mechanisms that overwrite or modify the WebLogic HTTP session cookie. If the load balancer's active cookie persistence mechanism works by adding its own cookie to the client session, no additional configuration is required to use the load balancer with a WebLogic Server cluster.

SSL Persistence

When SSL persistence is used, the load balancer performs all encryption and decryption of data between clients and the WebLogic Server cluster. The load balancer then uses the plain text cookie that WebLogic Server inserts on the client to maintain an association between the client and a particular server in the cluster.

Load Balancers and the WebLogic Session Cookie

A load balancer that uses passive cookie persistence can use a string in the WebLogic session cookie to associate a client with the server hosting its primary HTTP session state. The string uniquely identifies a server instance in the cluster. You must configure the load balancer with the offset and length of the string constant. The correct values for the offset and length depend on the format of the session cookie.

The format of a session cookie is:

sessionid!primary_server_id!secondary_server_id

where:
sessionid is a randomly generated identifier of the HTTP session. The length of the value is configured by the IDLength parameter in the <session-descriptor> element in the weblogic.xml file for an application. By default, the sessionid length is 52 bytes.

primary_server_id and secondary_server_id are 10 character identifiers of the primary and secondary hosts for the session.

Note: For sessions using non-replicated memory, cookie, or file-based session persistence, the secondary_server_id is not present. For sessions that use in-memory replication, if the secondary session does not exist, the secondary_server_id is "NONE".
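For example, with the default 52-byte sessionid, the primary server identifier starts at offset 53 (skipping the `!` delimiter) and is 10 characters long; those are the offset and length values a passive-cookie load balancer would be configured with. A small parser makes the arithmetic concrete (class and method names here are illustrative):

```java
public class SessionCookieParser {
    static final int ID_LENGTH = 52;        // default IDLength in weblogic.xml
    static final int SERVER_ID_LENGTH = 10; // primary/secondary ids are 10 characters

    // Primary server id: skip the sessionid and the '!' delimiter.
    public static String primaryServerId(String cookie) {
        int offset = ID_LENGTH + 1;
        return cookie.substring(offset, offset + SERVER_ID_LENGTH);
    }

    // Secondary server id: skip the sessionid, the primary id, and both delimiters.
    public static String secondaryServerId(String cookie) {
        int offset = ID_LENGTH + 1 + SERVER_ID_LENGTH + 1;
        return cookie.substring(offset, offset + SERVER_ID_LENGTH);
    }
}
```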

Load Balancing for EJBs and RMI Objects

WebLogic Server clusters use load balancing algorithms for EJBs and RMI objects. The load balancing algorithm for an object is maintained in the replica-aware stub obtained for a clustered object.

Following are different types of load balancing algorithms available:
  • Round-Robin Load Balancing
  • Weight-Based Load Balancing
  • Random Load Balancing
By default, a WebLogic Server cluster uses the round-robin load balancing algorithm. You can configure a different default load balancing method for a cluster by using the WebLogic Server Administration Console to set weblogic.cluster.defaultLoadAlgorithm. You can also specify the load balancing algorithm for a specific RMI object using the -loadAlgorithm option in rmic, or with the home-load-algorithm or stateless-bean-load-algorithm element in an EJB's deployment descriptor. A load balancing algorithm that you configure for an object overrides the default load balancing algorithm for the cluster.

Round Robin Load Balancing

WebLogic Server uses the round-robin algorithm as the default load balancing strategy for clustered object stubs when no algorithm is specified. This algorithm is supported for RMI objects and EJBs. It is also the method used by WebLogic proxy plug-ins.

The round-robin algorithm cycles through a list of WebLogic Server instances in order. For clustered objects, the server list consists of WebLogic Server instances that host the clustered object. For proxy plug-ins, the list consists of all WebLogic Server instances that host the clustered servlet or JSP.

The advantages of the round-robin algorithm are that it is simple, cheap and very predictable. The primary disadvantage is that there is some chance of convoying. Convoying occurs when one server is significantly slower than the others. Because replica-aware stubs or proxy plug-ins access the servers in the same order, a slow server can cause requests to "synchronize" on the server, then follow other servers in order for future requests.
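The cycling behavior described above is simple to sketch. This is an illustration of the algorithm, not WebLogic's stub code:

```java
import java.util.List;

// Round-robin: cycle through the server list in order, wrapping at the end.
public class RoundRobin {
    private final List<String> servers;
    private int next = 0;

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    // Returns the next server instance; every server gets an equal share of requests.
    public synchronized String choose() {
        String s = servers.get(next);
        next = (next + 1) % servers.size();
        return s;
    }
}
```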

Weight based Load Balancing

This algorithm applies only to EJB and RMI object clustering.

Weight-based load balancing improves on the round-robin algorithm by taking into account a pre-assigned weight for each server. You can use the Server > Configuration > Cluster page in the WebLogic Server Administration Console to assign each server in the cluster a numerical weight between 1 and 100, in the Cluster Weight field. This value determines what proportion of the load the server will bear relative to other servers. If all servers have the same weight, they will each bear an equal proportion of the load. If one server has weight 50 and all other servers have weight 100, the 50-weight server will bear half as much load as any other server. This algorithm makes it possible to apply the advantages of the round-robin algorithm to clusters that are not homogeneous.

If you use the weight-based algorithm, carefully determine the relative weights to assign to each server instance. Factors to consider include:
  • The processing capacity of the server's hardware in relationship to other servers (for example, the number and performance of CPUs dedicated to WebLogic Server).
  • The number of non-clustered ("pinned") objects each server hosts.
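One common way to realize such proportions is a smooth weighted round-robin: each pick goes to the server with the most accumulated "credit", which keeps picks spread out rather than bursty. This sketch illustrates the idea only; it is not WebLogic's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Smooth weighted round-robin: over time each server is picked in
// proportion to its configured weight (1..100, as in the Cluster Weight field).
public class WeightedChooser {
    private final Map<String, Integer> weights;
    private final Map<String, Integer> credit = new LinkedHashMap<>();

    public WeightedChooser(Map<String, Integer> weights) {
        this.weights = new LinkedHashMap<>(weights);
        this.weights.keySet().forEach(s -> credit.put(s, 0));
    }

    public String choose() {
        int total = 0;
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            credit.merge(e.getKey(), e.getValue(), Integer::sum); // earn credit
            total += e.getValue();
        }
        String best = null;
        for (Map.Entry<String, Integer> e : credit.entrySet()) {
            if (best == null || e.getValue() > credit.get(best)) {
                best = e.getKey();
            }
        }
        credit.merge(best, -total, Integer::sum); // spend credit on the pick
        return best;
    }
}
```

With weights A=100 and B=50, A receives two of every three picks, matching the 2:1 weight ratio.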
Random Load Balancing

The random method of load balancing applies only to EJB and RMI object clustering.

In random load balancing, requests are routed to servers at random. Random load balancing is recommended only for homogeneous cluster deployments, where each server instance runs on a similarly configured machine. A random allocation of requests does not allow for differences in processing power among the machines upon which server instances run. If a machine hosting servers in a cluster has significantly less processing power than other machines in the cluster, random load balancing will give the less powerful machine as many requests as it gives more powerful machines.

Random load balancing distributes requests evenly across server instances in the cluster, increasingly so as the cumulative number of requests increases. Over a small number of requests the load may not be balanced exactly evenly.

Disadvantages of random load balancing include the slight processing overhead incurred by generating a random number for each request, and the possibility that the load may not be evenly balanced over a small number of requests.

Server Affinity Load Balancing Algorithms

WebLogic Server provides three load balancing algorithms for RMI objects that provide server affinity. Server affinity turns off load balancing for external client connections; instead, the client considers its existing connections to WebLogic Server instances when choosing the server instance on which to access an object. If an object is configured for server affinity, the client-side stub attempts to choose a server instance to which it is already connected, and continues to use the same server instance for method calls. All stubs on that client attempt to use that server instance. If the server instance becomes unavailable, the stubs fail over, if possible, to a server instance to which the client is already connected.

The purpose of server affinity is to minimize the number of IP sockets opened between external Java clients and server instances in a cluster. WebLogic Server accomplishes this by causing method calls on objects to "stick" to an existing connection, instead of being load balanced among the available server instances. With server affinity algorithms, the less costly server-to-server connections are still load-balanced according to the configured load balancing algorithm; load balancing is disabled only for external client connections.

Server affinity is used in combination with one of the standard load balancing methods: round-robin, weight-based, or random.

Load Balancing for JMS

WebLogic Server JMS supports server affinity for distributed JMS destinations and client connections.

By default, a WebLogic Server cluster uses the round-robin method to load balance objects. To use a load balancing algorithm that provides server affinity for JMS objects, you must configure the desired method for the cluster as a whole. You can configure the load balancing algorithm by using the WebLogic Server Administration Console to set weblogic.cluster.defaultLoadAlgorithm.

Server Affinity for Distributed JMS Destinations

Server affinity is supported for JMS applications that use the distributed destination feature; this feature is not supported for standalone destinations. If you configure server affinity for JMS connection factories, a server instance that is load balancing consumers or producers across multiple members of a distributed destination will first attempt to load balance across any destination members that are also running on the same server instance.

Initial Context Affinity and Server Affinity for Client Connections

A system administrator can establish load balancing of JMS destinations across multiple servers in a cluster by configuring multiple JMS servers and using targets to assign them to the defined WebLogic Servers. Each JMS server is deployed on exactly one WebLogic Server and handles requests for a set of destinations. During the configuration phase, the system administrator enables load balancing by specifying targets for JMS servers. 

A system administrator can establish cluster-wide, transparent access to destinations from any server in the cluster by configuring multiple connection factories and using targets to assign them to WebLogic Servers. Each connection factory can be deployed on multiple WebLogic Servers. 

The application uses the Java Naming and Directory Interface (JNDI) to look up a connection factory and create a connection to establish communication with a JMS server. Each JMS server handles requests for a set of destinations. Requests for destinations not handled by a JMS server are forwarded to the appropriate server.

WebLogic Server provides server affinity for client connections. If an application has a connection to a given server instance, JMS will attempt to establish new JMS connections to the same server instance.

When creating a connection, JMS will try first to achieve initial context affinity. It will attempt to connect to the same server or servers to which a client connected for its initial context, assuming that the server instance is configured for that connection factory. For example, if the connection factory is configured for servers A and B, and the client has an Initial Context on server A, then the connection factory will establish the new connection with server A rather than server B.

If a connection factory cannot achieve initial context affinity, it will try to provide affinity to a server to which the client is already connected. For instance, assume the client has an Initial Context on server A and some other type of connection to server B. If the client then uses a connection factory configured for servers B and C it will not achieve initial context affinity. The connection factory will instead attempt to achieve server affinity by trying to create a connection to server B, to which it already has a connection, rather than server C.

If a connection factory cannot provide either initial context affinity or server affinity, then the connection factory is free to make a connection wherever possible. For instance, assume a client has an initial context on server A, no other connections and a connection factory configured for servers B and C. The connection factory is unable to provide any affinity and is free to attempt new connections to either server B or C.
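The three-step preference described above (initial context affinity, then server affinity, then any target) can be sketched as a selection function. The names and structure here are assumed for illustration; this is not WebLogic's internal logic:

```java
import java.util.List;
import java.util.Set;

public class AffinityChooser {
    /**
     * candidates: servers the connection factory is targeted to.
     * initialContextServer: the server from which the client got its InitialContext.
     * connectedServers: servers the client already has any connection to.
     */
    public static String choose(List<String> candidates,
                                String initialContextServer,
                                Set<String> connectedServers) {
        // 1. Initial context affinity: prefer the initial-context server if targeted.
        if (candidates.contains(initialContextServer)) {
            return initialContextServer;
        }
        // 2. Server affinity: prefer any server the client is already connected to.
        for (String s : candidates) {
            if (connectedServers.contains(s)) {
                return s;
            }
        }
        // 3. No affinity possible: free to connect anywhere (here, the first target).
        return candidates.get(0);
    }
}
```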

Dynamic Clusters

 Dynamic clusters consist of server instances that can be dynamically scaled up to meet the resource needs of your application. A dynamic cluster uses a single server template to define configuration for a specified number of generated (dynamic) server instances.

When you create a dynamic cluster, the dynamic servers are preconfigured and automatically generated for you, enabling you to easily scale up the number of server instances in your dynamic cluster when you need additional server capacity. You can simply start the dynamic servers without having to first manually configure and add them to the cluster.

If you need additional server instances on top of the number you originally specified, you can increase the maximum number of dynamic server instances in the dynamic cluster configuration or manually add configured server instances to the dynamic cluster. A dynamic cluster that contains both dynamic and configured server instances is called a mixed cluster.

The following terms are used when discussing dynamic clusters:

dynamic cluster

A cluster that contains one or more generated (dynamic) server instances that are based on a single shared server template.

configured cluster

A cluster in which you manually configure and add each server instance.

dynamic server

A server instance that is generated by WebLogic Server when creating a dynamic cluster. Configuration is based on a shared server template.

configured server

A server instance for which you manually configure attributes.

mixed cluster

A cluster that contains both dynamic and configured server instances.

server template

A prototype server definition that contains common, non-default settings and attributes that can be assigned to a set of server instances, which then inherit the template configuration. For dynamic clusters, the server template is used to generate the dynamic servers. 

WebLogic Server Data Source Types

 A data source is a pool of database connections that are created when the data source instance is created, which can occur when the data source is deployed, when it is targeted, or when the host WebLogic Server instance is started.

Oracle WebLogic Server provides five types of data sources:

  • Generic data sources: Generic data sources and their connection pools provide connection management processes that help keep your system running efficiently. You can set options in the data source to suit your applications and your environment.
  • Active GridLink data sources: An event-based data source that adaptively responds to state changes in an Oracle RAC instance.
  • Multi data sources: An abstraction around a group of generic data sources that provides load balancing or failover processing.
  • Proxy data sources: Data sources that provide the ability to switch between databases in a WebLogic Server Multitenant environment.
  • Universal Connection Pool (UCP) data sources: Data sources provided as an option for users who wish to use Oracle Universal Connection Pooling (UCP) to connect to Oracle Databases. UCP provides an alternative connection pooling technology to Oracle WebLogic Server connection pooling.
Generic data sources:

Generic data sources and their connection pools provide database access and database connection management processes that help keep your system running efficiently. Each generic data source contains a pool of database connections that are created when the data source is created and at server startup. Applications reserve a database connection from the data source by looking up the data source on the JNDI tree or in the local application context and then calling getConnection(). When finished with the connection, the application should call connection.close() as early as possible, which returns the database connection to the pool for other applications to use.
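The reserve-and-release pattern looks like this. The JNDI name is an assumption; inside a container the lookup would succeed, and try-with-resources guarantees the close() that returns the connection to the pool. Run standalone, the lookup fails and the sketch reports it.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceClient {
    public static void main(String[] args) {
        try {
            InitialContext ctx = new InitialContext();
            // "jdbc/MyDataSource" is an assumed JNDI name for illustration.
            DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource");
            // try-with-resources calls close() promptly, returning the
            // connection to the pool for other applications to use.
            try (Connection conn = ds.getConnection()) {
                // ... run queries via conn.createStatement() ...
            }
        } catch (NamingException | SQLException e) {
            // Outside a container there is no JNDI provider, so we land here.
            System.out.println("Data source unavailable: " + e.getMessage());
        }
    }
}
```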

Active GridLink data sources:

A single Active GridLink (AGL) data source provides connectivity between WebLogic Server and an Oracle Database service, which may include multiple Oracle RAC clusters. An AGL data source uses the Oracle Notification Service (ONS) to adaptively respond to state changes in an Oracle RAC instance. An Oracle Database service represents a workload with common attributes that enables administrators to manage the workload as a single entity. You scale the number of AGL data sources as the number of services increases in the database, independent of the number of nodes in the cluster.

An AGL data source includes the features of generic data sources plus the following support for Oracle RAC:
  • Fast Connection Failover
  • Runtime Connection Load Balancing
  • Graceful Handling for Oracle RAC Outages
  • GridLink Affinity
  • SCAN Addresses
  • Secure Communication using Oracle Wallet
Multi Data Sources:

A multi data source can be regarded as a pool of generic data sources. Multi data sources are best used for failover or load balancing between nodes of a highly available database system, such as redundant databases or Oracle Real Application Clusters (Oracle RAC). A multi data source is bound to the JNDI tree or local application context, in the same way that generic data sources are bound to the JNDI tree. Applications look up a multi data source on the JNDI tree or in the local application context (java:comp/env), just as they do for data sources, and then request a database connection. The multi data source determines the data source to use that can satisfy the request depending upon the algorithm selected in the multi data source configuration: load balancing or failover.
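The two algorithms can be sketched as a selection over member pools: failover always uses the first available member in configured order, while load balancing rotates among them. The names here are assumed for illustration; this is not WebLogic's implementation.

```java
import java.util.List;
import java.util.function.Predicate;

public class MultiDataSource {
    public enum Algorithm { LOAD_BALANCING, FAILOVER }

    private final List<String> pools; // member generic data sources, in configured order
    private final Algorithm algorithm;
    private int next = 0;

    public MultiDataSource(List<String> pools, Algorithm algorithm) {
        this.pools = pools;
        this.algorithm = algorithm;
    }

    // Picks the member pool that should satisfy the next getConnection() request.
    public String pick(Predicate<String> isHealthy) {
        if (algorithm == Algorithm.FAILOVER) {
            // Failover: always the first healthy member in configured order.
            for (String p : pools) {
                if (isHealthy.test(p)) return p;
            }
            throw new IllegalStateException("no member available");
        }
        // Load balancing: round-robin over healthy members.
        for (int i = 0; i < pools.size(); i++) {
            String p = pools.get(next);
            next = (next + 1) % pools.size();
            if (isHealthy.test(p)) return p;
        }
        throw new IllegalStateException("no member available");
    }
}
```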

Proxy Data Source:

Proxy data sources provide the ability to switch between databases in a WebLogic Server Multitenant environment. Proxy data sources simplify the administration of multiple data sources by providing a light-weight mechanism for accessing a data source associated with a partition or tenant. Applications often need to quickly access a data source by name without needing to know the naming conventions, context names (partitions or tenants), and so on. The proxy data source provides the access to the underlying data sources. All of the significant processing happens in the data sources to which it points. That is, the underlying data sources actually handle deployment, management, security, and so on.

Universal Connection Pool Data Sources:

A Universal Connection Pool (UCP) data source enables the use of Oracle Universal Connection Pooling (UCP) for connecting to Oracle Database. A UCP data source is available as an option for using UCP, which is an alternative connection pooling technology to WebLogic Server connection pooling.

Note: Oracle generally recommends the use of Active GridLink data sources, multi data sources, or generic data sources, and also the Oracle WebLogic Server connection pooling included in these data source implementations to establish connectivity with Oracle Database.

The implementations of UCP data sources are loosely coupled, allowing the swapping of the ucp.jar file to support the use of new UCP features by the applications. UCP data sources are not supported in an application-scoped, application-packaged, or standalone module environment. See Using Universal Connection Pool Data Sources in Administering JDBC Data Sources for Oracle WebLogic Server.

JDBC, Data Source, Connection Pool

 JDBC

JDBC is an API for accessing databases in a uniform way.

JDBC Provides:

  • Platform-independent access to databases
  • Location transparency
  • Transparency to proprietary database issues
  • Support for both two-tier and multi-tier models for database access.

JDBC Architecture:


Driver Types:

Type 1: JDBC-ODBC Bridge

This combination provides JDBC access via ODBC drivers. ODBC binary code and in many cases, database client code must be loaded on each client machine that uses a JDBC-ODBC Bridge. A product called SequeLink from Data Direct Technologies provides a driver that supports some ODBC drivers (for example Microsoft Access).

Type one drivers provide JDBC access via one or more Open Database Connectivity (ODBC) drivers. ODBC, which predates JDBC, is widely used by developers to connect to databases in a non-Java environment.

Pros: A good approach for learning JDBC. May be useful for companies that already have ODBC drivers installed on each client machine (typically the case for Windows-based machines running productivity applications). May be the only way to gain access to some low-end desktop databases.

Cons: Not for large-scale applications. Performance suffers because there’s some overhead associated with the translation work to go from JDBC to ODBC. Doesn’t support all the features of Java. User is limited by the functionality of the underlying ODBC driver.



A JDBC/ODBC bridge provides JDBC API access through one or more ODBC drivers. Some ODBC native code and in many cases native database client code must be loaded on each client machine that uses this type of driver.

The advantage for using this type of driver is that it allows access to almost any database since the database ODBC drivers are readily available.

Disadvantages for using this type of driver include the following:
  • Performance is degraded since the JDBC call goes through the bridge to the ODBC driver and then to the native database connectivity interface. The results are then sent back through the reverse process.
  • Limited Java feature set
  • May not be suitable for a large-scale application


Type 2: Partial Java Driver

This type of driver converts JDBC calls into calls on the client API for Oracle, Sybase, Informix, DB2, or other DBMS. Note that, like the bridge driver, this style of driver requires that some binary code be loaded on each client machine.

This type of driver converts the calls that a developer writes to the JDBC application programming interface into calls that connect to the client machine's application programming interface for a specific database, such as DB2, Informix, Oracle, or Sybase.

Pros: Performance is better than that of Type 1, in part because the Type 2 driver contains compiled code that’s optimized for the back-end database server’s operating system.

Cons: User needs to make sure the JDBC driver of the database vendor is loaded onto each client machine. Must have compiled code for every operating system that the application will run on. Best use is for controlled environments, such as an intranet.


A native-API partly Java technology-enabled driver converts JDBC calls into calls on the client API for DBMSs. Like the bridge driver, this style of driver requires that some binary code be loaded on each client machine. An example of this type of driver is the Oracle Thick Driver, which is also called OCI.

Advantages for using this type of driver include the following:
  • Offers significantly better performance than the JDBC/ODBC Bridge
  • Uses the database vendor's native client API
Disadvantages for using this type of driver include the following:
  • The applicable client library must be installed on each client machine
  • Limited Java feature set
  • A Type 2 driver shows lower performance than a Type 3 or Type 4 driver

Type 3: Pure Java Driver for Database Middleware

This style of driver translates JDBC calls into the middleware vendor’s protocol, which is then translated to a DBMS protocol by a middleware server. The middleware provides connectivity to many different databases.

This driver translates JDBC calls into the middleware vendor’s protocol, which is then converted to a database-specific protocol by the middleware server software.

Pros: Better performance than Types 1 and 2. Can be used when a company has multiple databases and wants to use a single JDBC driver to connect to all of them. Server-based, so no need for JDBC driver code on client machine. For performance reasons, the back-end server component is optimized for the operating system on which the database is running.

Cons: Needs some database-specific code on the middleware server. If the middleware must run on different platforms, a Type 4 driver might be more effective.

A net-protocol fully Java-enabled driver translates JDBC API calls into a DBMS-independent net protocol which is then translated to a DBMS protocol by a server. This net server middleware is able to connect all of its Java technology-based clients to many different databases. Many mainframe legacy non-relational databases use this kind of driver.

Advantages for using this type of driver include the following:
  • Allows a single driver to provide access to many different databases through the middleware server
  • Offers significantly better performance than the JDBC/ODBC Bridge and Type 2 Drivers
  • Advanced Java feature set
  • Scalable
  • Caching
  • Advanced system administration
  • Does not require applicable database client libraries
The disadvantage for using this type of driver is that it requires a separate JDBC middleware server to translate requests into each database's native connectivity interface.

Type 4: Direct to Database pure Java Driver

This style of driver converts JDBC calls into a network protocol that sends the converted packets in a proprietary format to be used directly by DBMSs, thus allowing a direct call from the client machine to the DBMS server and providing a practical solution for intranet access. This type of driver has become very popular recently and is supported by most database software vendors. All JDBC drivers from Data Direct Technologies (driver vendor) are Type 4 drivers.

Pros: Better performance than Types 1 and 2. No need to install special software on client or server.

Cons: Not optimized for server operating system, so the driver can’t take advantage of operating system features. (The driver is optimized for the database and can take advantage of the database vendor’s functionality.) User needs a different driver for each different database.


A native-protocol fully Java technology-enabled driver converts JDBC technology calls into the network protocol used by DBMSs directly. This allows a direct call from the client machine to the DBMS server.

Advantages for using this type of driver include the following:

  • Allows a direct call from the client machine to the DBMS server, with no middleware
  • Superior performance
  • Advanced Java feature set
  • Scalable
  • Caching
  • Advanced system administration
  • Does not require applicable database client libraries

The disadvantage for using this type of driver is that each database requires its own driver.
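As a rough sketch of what "direct to database" means in practice, the following builds a Type 4 JDBC URL and shows where the direct connection would be opened. The PostgreSQL subprotocol, host, and database name are hypothetical examples; each vendor's Type 4 driver defines its own URL format.

```java
public class Type4Sketch {
    // A Type 4 driver is addressed purely through a JDBC URL; no ODBC bridge,
    // middleware server, or native client library is involved. The subprotocol
    // ("postgresql" here, a hypothetical example) selects the vendor's driver
    // on the classpath.
    static String jdbcUrl(String host, int port, String database) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("dbhost", 5432, "sampledb"));
        // With the vendor's driver jar on the classpath, the direct
        // client-to-DBMS connection would then be opened with:
        //   java.sql.DriverManager.getConnection(url, user, password)
    }
}
```

Because the driver speaks the database's native wire protocol itself, nothing needs to be installed on the client beyond the driver jar.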


JDBC Architecture Types:

The JDBC API supports both two-tier and three-tier processing models for database access.

Two-Tier Architecture

In the two-tier model, a Java application talks directly to the data source. This requires a JDBC driver that can communicate with the particular data source being accessed. A user's commands are delivered to the database or other data source, and the results of those statements are sent back to the user. The data source may be located on another machine to which the user is connected via a network. This is referred to as a client/server configuration, with the user's machine as the client, and the machine housing the data source as the server. The network can be an intranet, which, for example, connects employees within a corporation, or it can be the Internet.




Multi-Tier Architecture

In the three-tier model, commands are sent to a "middle tier" of services, which then sends the commands to the data source. The data source processes the commands and sends the results back to the middle tier, which then sends them to the user. MIS directors find the three-tier model very attractive because the middle tier makes it possible to maintain control over access and the kinds of updates that can be made to corporate data. Another advantage is that it simplifies the deployment of applications. Finally, in many cases, the three-tier architecture can provide performance advantages.



Until recently, the middle tier has often been written in languages such as C or C++, which offer fast performance. However, with the introduction of optimizing compilers that translate Java bytecode into efficient machine-specific code and technologies such as Enterprise JavaBeans, the Java platform is fast becoming the standard platform for middle-tier development. This is a big plus, making it possible to take advantage of Java's robustness, multithreading, and security features.

With enterprises increasingly using the Java programming language for writing server code, the JDBC API is being used more and more in the middle tier of a three-tier architecture. Some of the features that make JDBC a server technology are its support for connection pooling, distributed transactions, and disconnected row sets. The JDBC API is also what allows access to a data source from a Java middle tier.

Data Source 

What is a Data Source?

A Data Source object provides a way for a JDBC client to obtain a database connection from a connection pool.

A Data Source:
  • Is stored in the WLS JNDI tree
  • Can support transactions
  • Is associated with a connection pool
What is a Connection Pool?

A connection pool is a group of ready-to-use database connections associated with a Data Source.

Connection pools:
  • Are created at WebLogic Server startup
  • Can be administered using the Admin Console
  • Can be dynamically resized to accommodate increasing load
Benefits of Data Sources + Connection Pools:
  • Time and overhead are saved by using an existing database connection
  • Connection information is managed in one location in the Admin Console
  • The number of connections to a database can be controlled
  • The DBMS can be changed without the application developer having to modify underlying code.
  • A Connection pool allows an application to borrow a DBMS connection. 

Data Source Architecture 




How Data Sources are used

Step 1) The client looks up the Data Source through JNDI.
Step 2) WebLogic Server locates the Data Source in its JNDI tree and returns it.
Step 3) Once the Data Source is returned to the client, the client borrows a connection from the connection pool using the getConnection() method.
Step 4) The connection pool returns a connection to the client.
Step 5) The client accesses the database using the connection returned from the connection pool.
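Inside the server, the steps above look roughly like this minimal sketch. The JNDI name jdbc/MyDS is a hypothetical example; use the name configured for your Data Source in the Admin Console. Run standalone (outside WebLogic), the lookup fails because no JNDI provider is available, which the sketch reports.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static void main(String[] args) {
        try {
            InitialContext ctx = new InitialContext();             // Step 1: look up via JNDI
            DataSource ds = (DataSource) ctx.lookup("jdbc/MyDS");  // Step 2: server returns the Data Source
            try (Connection conn = ds.getConnection()) {           // Steps 3-4: borrow from the pool
                // Step 5: use the connection; closing it returns it to the pool
                System.out.println("autoCommit=" + conn.getAutoCommit());
            }
        } catch (NamingException | SQLException e) {
            // Outside a running server there is no JNDI provider, so the
            // lookup fails; inside WebLogic the lookup succeeds.
            System.out.println("lookup failed: no JNDI provider available");
        }
    }
}
```

Note that the application never opens a physical database connection itself; closing the connection simply hands it back to the pool for reuse.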




Multi Data Source 

An abstraction around a group of generic data sources that provides load balancing or failover processing.



Different Types of WebSphere Profiles

 A profile defines the runtime environment. The profile includes all the files that the server processes in the runtime environment and that you can change.

You can create a runtime environment either through the manageprofiles command or the Profile Management Tool graphical user interface. You can use the Profile Management Tool to enter most of the parameters. Some parameters, however, require you to use the manageprofiles command. You must use the manageprofiles command to delete a profile, for instance, because the Profile Management Tool does not provide a deletion function. You can use either the Profile Management Tool or the manageprofiles command to create a cell profile. The Profile Management Tool creates the cell in a single step, whereas the manageprofiles command requires two separate invocations.

Core product files

The core product files are the shared product binary files, which are shared by all profiles.

The directory structure for the product has the following two major divisions of files in the installation root directory for the product:

The core product files are shared product binary files that do not change unless you install a refresh pack, a fix pack, or an interim fix. Some log information is also updated.

The following list shows default installation locations for root users on supported platforms:

  • [AIX]/usr/IBM/WebSphere/AppServer
  • [Linux][Solaris][HP-UX]/opt/IBM/WebSphere/AppServer
  • [Windows]C:\Program Files\IBM\WebSphere\AppServer

The app_server_root/profiles directory is the default directory for creating profiles.

When you want binary files at different service levels, you must use a separate installation of the product for each service level.

The configuration for every defined application server process is within the profiles directory unless you specify a new directory when you create a profile. These files change as often as you create a new profile, reconfigure an existing profile, or delete a profile.

Except for the profiles directory and a few others, such as the logs directory and the properties directory, the folders do not change unless you install service fixes. The profiles directory, however, changes each time you add, change, or delete a profile. The profiles directory is the default repository for profiles. However, you can put a profile anywhere on the machine or system, provided enough disk space is available.

If you create a profile in another existing folder in the installation root directory, then a risk exists that the profile might be affected by the installation of a service fix that applies maintenance to the folder. Use a directory outside of the installation root directory when using a directory other than the profiles directory for creating profiles.

Why and when to create a profile

The manageprofiles command-line tool defines each profile for the product.

Run the Profile Management Tool or the manageprofiles command each time that you want to create a profile. A need for more than one profile on a machine is common.

Administration is greatly enhanced when using profiles instead of multiple product installations. Not only is disk space saved, but updating the product is simplified when you maintain a single set of product core files. Also, creating new profiles is more efficient and less prone to error than full product installations, allowing a developer to create separate profiles of the product for development and testing.

You can run the Profile Management Tool or the command-line tool to create a new profile on the same machine as an existing profile. Define unique characteristics, such as profile name and node name, for the new profile. Each profile shares all runtime scripts, libraries, the Java SE Runtime Environment (JRE) environment, and other core product files.

Profile types

Templates for each profile are located in the app_server_root/profileTemplates directory.

Multiple directories exist within this directory, which correspond to different profile types and vary with the type of product that is installed. The directories are the paths that you indicate while using the manageprofiles command with the -templatePath option. You can also specify profile templates that exist outside the profileTemplates directory, if you have any.

See the -templatePath parameter description in the manageprofiles command topic for more information.

The manageprofiles command in the WebSphere Application Server Network Deployment product can create the following types of profiles:

Management profile with a deployment manager server

The basic function of the deployment manager is to deploy applications to a cell of application servers, which it manages. Each application server that belongs to the cell is a managed node.

You can create the management profile with a deployment manager server using the Profile Management Tool or the manageprofiles command. If you create the profile with the manageprofiles command, specify app_server_root/profileTemplates/management for the -templatePath parameter and DEPLOYMENT_MANAGER for the -serverType parameter.

Management profile with an administrative agent server

The basic function of the administrative agent is to provide a single interface to administer multiple unfederated application servers.

You can create the profile using the Profile Management Tool or the manageprofiles command. If you create the profile with the manageprofiles command, specify app_server_root/profileTemplates/management for the -templatePath parameter and ADMIN_AGENT for the -serverType parameter to create this type of management profile.

Management profile with a job manager server

The basic function of the job manager is to provide a single console to administer multiple base servers, multiple deployment managers, and do asynchronous job submission.

You can create the profile using the Profile Management Tool or the manageprofiles command. If you create the profile with the manageprofiles command, specify app_server_root/profileTemplates/management for the -templatePath parameter and JOB_MANAGER for the -serverType parameter to create this type of management profile.

Application server profile

Use the application server to make applications available to the Internet or to an intranet.

An important product feature is the ability to scale up a standalone application server profile by adding the application server node into a deployment manager cell. Multiple application server processes in a cell can deploy an application that is in demand. You can also remove an application server node from a cell to return the node to the status of a standalone application server.

Each standalone application server can optionally have its own administrative console application, which you use to manage the application server. You can also use the wsadmin scripting facility to perform every function that is available in the administrative console application.

No node agent process is available for a standalone application server node unless you decide to add the application server node to a deployment manager cell. Adding the application server node to a cell is known as federation. Federation changes the standalone application server node into a managed node. You use the administrative console of the deployment manager to manage the node. If you remove the node from the deployment manager cell, then use the administrative console and the scripting interface of the standalone application server node to manage the process.

 You can create the profile using the Profile Management Tool or the manageprofiles command. If you create the profile with the manageprofiles command, specify app_server_root/profileTemplates/default for the -templatePath parameter to create this type of profile.

Cell profile

Use the cell profile to make applications available to the Internet or to an intranet under the management of the deployment manager.

Creation of a cell profile generates a deployment manager and a federated node in one iteration through the Profile Management Tool. The result is a fully functional cell on a given system.

To create a cell profile using the manageprofiles command, you must create two portions of the profile: the cell deployment manager portion and the cell node portion. Additionally, you can have only one cell deployment manager and one cell node associated with each other when you create a cell. The initial cell profile that you create with the manageprofiles command is equivalent to the cell profile you create with the Profile Management Tool. After you create the initial cell profile, you can create custom profiles or standalone profiles and federate the profiles into the deployment manager.

On the manageprofiles command, specify app_server_root/profileTemplates/cell/dmgr for the -templatePath parameter for the deployment manager and app_server_root/profileTemplates/cell/default for the -templatePath parameter for the cell node.

Avoid trouble

When you create cell profiles, ensure that each cell name is unique. Duplicate cell names might cause serious problems when there is interaction between cells.

After you create the two portions that make up the cell profile, you have a deployment manager and federated node. The federated node contains an application server and the default application, which contains the snoop servlet, the HitCount application, and the HelloHTML servlet.

Custom profile

Use the custom profile, which belongs to a deployment manager cell, to make applications available to the Internet or to an intranet under the management of the deployment manager.

The deployment manager converts a custom profile to a managed node by adding the node into the cell. The deployment manager also converts an application server node into a managed node when you add an application server node into a cell. When either node is added to a cell, the node becomes a managed node. The node agent process is then instantiated on the managed node. The node agent acts on behalf of the deployment manager to control application server processes on the managed node. The node agent can start or stop application servers, for example.

A deployment manager can create multiple application servers on a managed node so long as the node agent process is running. Processes on the managed node can include cluster members that the deployment manager uses to balance the workload for heavily used applications.

Use the administrative console of the deployment manager to control all of the nodes that the deployment manager manages. You can also use the wsadmin scripting facility of the deployment manager to control any of the managed nodes. A custom profile does not have its own administrative console or scripting interface. You cannot manage the node directly with the wsadmin scripting facility.

A custom profile does not include default applications or a default server like the application server profile includes. A custom profile is an empty node. Add the node to the deployment manager cell. Then, you can use the administrative interface of the deployment manager to customize the managed node by creating clusters and application servers.

You can create the profile using the Profile Management Tool or the manageprofiles command. If you create the profile with the manageprofiles command, specify app_server_root/profileTemplates/managed for the -templatePath parameter to create this type of profile.

Secure proxy profile

Use the secure proxy server to take requests from the Internet and forward them to application servers. The secure proxy server resides in the DMZ.

Default profiles

Profiles use the concept of a default profile when more than one profile exists. The default profile is set to be the default target for scripts that do not specify a profile. You can use the -profileName parameter with most of the scripts to enable the scripts to act on a profile other than the default profile.

The default profile name is <profile_type><profile_number>:

<profile_type> is a value of AppSrv, Dmgr, Custom, AdminAgent, JobMgr, or SecureProxySrv.

<profile_number> is a sequential number that is used to create a unique profile name.
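The naming convention can be illustrated with a small sketch that composes the type and the zero-padded sequential number, matching directory names such as AppSrv01 and AppSrv02:

```java
public class ProfileNames {
    // Compose a default profile name from its type and a sequential number,
    // zero-padded to two digits (e.g. AppSrv + 1 -> AppSrv01).
    static String defaultProfileName(String profileType, int profileNumber) {
        return String.format("%s%02d", profileType, profileNumber);
    }

    public static void main(String[] args) {
        System.out.println(defaultProfileName("AppSrv", 1));
        System.out.println(defaultProfileName("Dmgr", 1));
    }
}
```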

Tip

When multiple profiles exist on a machine, certain commands require that you specify the -profileName parameter if the profile is not the default profile. In those cases, it might be easier to use the commands that are in the bin directory of each profile. When you issue one of these commands within the bin directory of a profile, the command acts on that profile unless the -profileName parameter specifies a different profile.

Security policy for application server profiles

In environments where you plan to have multiple standalone application servers, the security policy of each application server profile is independent of the others. Changes to the security policy in one application server profile are not synchronized with the other profiles.

Installed file set

You decide where to install the files that define a profile.

The default location is in the profiles directory in the installation root directory. You can change the location on the Profile Management Tool or in a parameter when using the command-line tool. For example, assume that you create two profiles on a Linux® platform with host name devhost1. The profile directories resemble the following example if you do not relocate them:

/opt/IBM/WebSphere/AppServer/profiles/AppSrv01

/opt/IBM/WebSphere/AppServer/profiles/AppSrv02

You can specify a different directory, such as /opt/profiles for the profile directory using the manageprofiles command. For example:

manageprofiles.sh -profileName AppSrv01 -profilePath /opt/profiles

manageprofiles.sh -profileName AppSrv02 -profilePath /opt/profiles

Then the profile directories resemble the directories shown in the following example:

/opt/profiles/AppSrv01

/opt/profiles/AppSrv02

The following directories exist within a typical profile. This example assumes that the profile, AppSrv01, exists:

app_server_root/profiles/AppSrv01/bin

app_server_root/profiles/AppSrv01/config

app_server_root/profiles/AppSrv01/configuration

app_server_root/profiles/AppSrv01/etc

app_server_root/profiles/AppSrv01/firststeps

app_server_root/profiles/AppSrv01/installableApps

app_server_root/profiles/AppSrv01/installedApps

app_server_root/profiles/AppSrv01/installedConnectors

app_server_root/profiles/AppSrv01/installedFilters

app_server_root/profiles/AppSrv01/logs

app_server_root/profiles/AppSrv01/properties

app_server_root/profiles/AppSrv01/temp

app_server_root/profiles/AppSrv01/wstemp

WebSphere Application Server Concepts





 Profile:

A profile is the set of files that define the runtime environment for a WebSphere Application Server.

A need for more than one profile on a single server is common. For example, you might want to create separate profiles of the application server for development and testing.

There are advantages of creating profiles instead of multiple installations of the WebSphere Application Server product. Not only is disk-space saved, but updating the application server is simplified when you maintain a single set of core files. Also, creating new profiles is more efficient and less prone to errors than full installations of the WebSphere Application Server.

Creating a new profile on the same server as an existing one defines unique characteristics, such as profile name and node name. Each profile shares all runtime scripts, libraries, the Java runtime environment, and other core server files. However, each profile has its own administrative console and administrative scripting interface.

There are different types of profiles available in WebSphere Application Server:

  • Management Profile
  • Application Server Profile
  • Cell Profile
  • Custom Profile
  • Secure proxy profile
  • Default Profile

Node:

A node is a logical group of one or more application servers on a physical computer. The node name is unique within the cell. A node name usually is identical to the host name for the computer. That is, a node usually corresponds to a physical computer system with a distinct IP address.

Node Agent:

Each node has a Node Agent, which is a service that is used to communicate with the Deployment Manager.

Node Groups:

Nodes that you organize into a node group need to be similar in terms of installed software, available resources, and configuration to enable servers on those nodes to host the same applications as part of a server cluster. The deployment manager does no validation to guarantee that nodes in a given node group have anything in common.

Node groups are optional and are established at the discretion of the WebSphere Application Server administrator. However, a node must be a member of a node group. Initially, all Application Server nodes are members of the default DefaultNodeGroup node group.

A node can be a member of more than one node group.

To delete a node group, the node group must be empty. The default node group cannot be deleted.

Cell:

The administrative domain that a Deployment Manager manages. A cell is a logical grouping of nodes that enables common administrative activities in a WebSphere Application Server distributed environment. A cell can have one or many clusters.

Deployment Manager:

The Deployment Manager is a service that manages all nodes in the cell. It communicates with the node agents of the cell that it administers to manage the applications within each node.

Application server

The application server is the primary component of WebSphere. The server runs a Java virtual machine, providing the runtime environment for the application's code. The application server provides containers that specialize in enabling the execution of specific Java application components.

Cluster

A logical grouping of one or more functionally identical application server processes. A cluster provides ease of deployment, configuration, workload balancing, and failover redundancy. A cluster is a collection of servers working together as a single system to ensure that mission-critical applications and resources remain available to clients.

Clusters provide scalability. For more information, refer to additional documentation that customer support may provide that describes vertical and horizontal clustering in the WebSphere Application Server distributed environment.

Cluster member

An instance of a WebSphere Application Server in a cluster.

How to setup SSL on nginx

Step 1) Install nginx

[root@master ~]# yum install epel-release -y

[root@master ~]# yum install nginx -y

Step 2) Enable nginx.service

[root@master ~]# systemctl enable nginx

Step 3) Start nginx service

[root@master ~]# systemctl start nginx

Step 4) Test http connection

[root@master ~]# curl -Ik http://master

HTTP/1.1 200 OK
Server: nginx/1.14.1
Date: Sun, 31 Jan 2021 00:22:17 GMT
Content-Type: text/html
Content-Length: 4057
Last-Modified: Mon, 07 Oct 2019 21:16:24 GMT
Connection: keep-alive
ETag: "5d9bab28-fd9"
Accept-Ranges: bytes

Step 5) Generate Self Signed Certificates

[root@master ~]# mkdir /etc/nginx/ssl
[root@master ~]# cd /etc/nginx/ssl

[root@master ssl]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/server.key -out /etc/nginx/ssl/server.crt
Generating a RSA private key
............................................................................................+++++
....................................+++++
writing new private key to '/etc/nginx/ssl/server.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:NC
Locality Name (eg, city) [Default City]:Charlotte
Organization Name (eg, company) [Default Company Ltd]:PAVAN, Inc
Organizational Unit Name (eg, section) []:PAVAN
Common Name (eg, your name or your server's hostname) []:master
Email Address []:pavan@abc.com
[root@master ssl]#

Step 6) Configure SSL

Uncomment the SSL server section in /etc/nginx/nginx.conf and update it as follows:

    server {
        listen       443 ssl http2 default_server;
        listen       [::]:443 ssl http2 default_server;
        server_name  master;
        root         /usr/share/nginx/html;

        ssl_certificate "/etc/nginx/ssl/server.crt";
        ssl_certificate_key "/etc/nginx/ssl/server.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }


Step 7) http to https redirection

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  master;
        root         /usr/share/nginx/html;

        return 301 https://$server_name$request_uri;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }


Step 8) Verify the syntax

[root@master ssl]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Step 9) Restart nginx

[root@master ssl]# systemctl restart nginx

Step 10) Test that HTTP requests are redirected to HTTPS.

[root@master ssl]# curl -Ik http://master

HTTP/1.1 301 Moved Permanently
Server: nginx/1.14.1
Date: Sun, 31 Jan 2021 01:13:09 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://master/

The HTTP request is now redirected to HTTPS, and SSL is working.

Installation log is available at: https://github.com/pavanbandaru/webserver/blob/main/nginx-install-configure-ssl

Step 11) You can access the URL from the web browser. 

https://master






Saturday, 30 January 2021

WebSphere Application Server Administration

WebSphere Application Server V9.0




WebSphere Application Server Architecture



Applications:

WebSphere Application Server can run the following types of applications:

  • Java EE Applications
  • Portlet Applications
  • Session Initiation Protocol Applications
  • OSGI Applications
  • Batch Applications
  • Business-level Applications

Containers:

Containers provide runtime support for applications. They are specialized code in the application server that runs specific types of applications. Containers can interact with other containers by sharing session management, security, and other attributes. 

Web Container:

The web container is the part of the application server in which web application components run. Web applications are composed of one or more related servlets, JSPs, and HTML files that you can manage as a unit. The web container processes servlets, JSP files, and other types of server-side includes. Each application server runtime has one logical web container, which can be modified, but not created or removed. Each web container provides the following:

  • Web container transport chains
  • Servlet processing
  • HTML and other static content processing
  • Session Management
  • SIP applications and their container
  • Portlet applications and their container

EJB Container:

The EJB container provides all of the runtime services needed to deploy and manage enterprise beans. It is a server process that handles requests for both session and entity beans.

EJBs are Java components that typically implement the business logic of Java EE applications, as well as accessing data. The enterprise beans, packaged in EJB modules, installed in an application server do not communicate directly with the server. Instead, the EJB container is an interface between EJB components and the application server. 

The container provides many low-level services, including threading and transaction support. From an administrative perspective, the container handles data access for the contained beans. A single container can host more than one EJB Java archive file.

Batch container: 

The batch container is where the job scheduler runs jobs that are written in XML job control language (xJCL).

The batch container provides an execution environment for batch applications that are based on Java EE.

Batch applications are deployed as EAR files and follow either the transactional batch or compute-intensive programming models.

Application Server:

The application server is the platform where Java EE applications can run. It provides services that can be used by business applications, such as database connectivity, threading, and workload management. 

Client Applications and other types of clients:

In a client-server environment, clients communicate with applications running on the server. 

1) Client applications and their containers: The client container is installed separately from the application server, on the client machine. It enables the client to run applications in an EJB-compatible Java EE environment. 

2) Web clients or web browser clients: The web client makes a request to the web container of the application server. A web client or web browser client runs in a web browser, and typically is a web application.

3) Web services client: Web services clients are another kind of client that might exist in your application servicing environment.

4) Administrative clients: A scripting client or the administrative console.

Web Services engine:

Web services are self-contained, modular applications that can be described, published, located, and invoked over a network. They implement a service-oriented architecture (SOA), which supports the connecting or sharing of resources and data in a flexible and standardized manner.

Service Component Architecture (SCA):

SCA composites consist of components that implement business functions in the form of services.

Data access, messaging, and Java EE resources:

Data access resources:

Connection management for access to enterprise information systems (EIS) in the application server is based on the Java EE Connector Architecture (JCA). JCA services help an application access a database in which the application retrieves and persists data.

Connections to an EIS are made through EIS-provided resource adapters, which are plugged into the application server. The architecture specifies the connection management, transaction management, and security contracts between the application server and the EIS.

The connection manager in the application server pools and manages connections. It can manage connections obtained through both resource adapters defined by the JCA specification and data sources defined by the JDBC 2.0 Extension specification.

JDBC resources: JDBC resources are a type of Java EE resources used by applications to access data. 

JCA resource adapters: JCA resource adapters are another type of Java EE resource used by applications. The JCA defines the standard architecture for connecting the Java EE platform to an EIS.

Messaging resources and messaging engines:

JMS support enables applications to exchange messages asynchronously with other JMS clients by using JMS destinations (queues or topics). Applications can use message-driven beans to automatically retrieve messages from JMS destinations and JCA endpoints without explicitly polling for messages.

The messaging engine supports the following types of message providers:

Default messaging provider (service integration bus): 

The default messaging provider uses the service integration bus for transport. It provides point-to-point functions as well as publish-and-subscribe functions. With this provider you define JMS connection factories and destinations that correspond to service integration bus destinations.

WebSphere MQ provider:

WebSphere MQ can be used as the external JMS provider. The application server provides the JMS client classes and administration interface, while WebSphere MQ provides the queue-based messaging system.

Generic JMS provider:

You can use another messaging provider as long as it implements the ASF component of the JMS 1.0.2 specification. JMS resources for a generic provider cannot be configured using the administrative console.

Security:

The product provides security infrastructure and mechanisms to protect sensitive Java EE resources and administrative resources and to address enterprise end-to-end security requirements on authentication, resource access control, data integrity, confidentiality, privacy, and secure interoperability.

Security infrastructure and mechanisms protect Java Platform, Enterprise Edition (Java EE) resources and administrative resources, addressing your enterprise security requirements. In turn, the security infrastructure of this product works with the existing security infrastructure of your multiple-tier enterprise computing framework. Based on an open architecture, the product provides many plug-in points to integrate with enterprise software components to provide end-to-end security.

The security infrastructure involves both a programming model and elements of the product architecture that are independent of the application type.

Additional Services for use by applications:

  • Naming and directory service (JNDI)
  • Object Request Broker (ORB)
  • Transactions

WebSphere extensions:

WebSphere programming model extensions are the programming model benefits you gain by purchasing the product. 

  • Application profiling
  • Dynamic query
  • Dynamic cache
  • Activity Sessions
  • Web services
  • Asynchronous beans
  • Startup beans
  • Object pools
  • Internationalization
  • Scheduler
  • Work areas



Friday, 29 January 2021

script to test remote tcp connection on multiple hosts

Use the following scripts to validate TCP connections to remote hosts.

Step 1) Generate ssh keys on your local machine

run command: ssh-keygen -t rsa 
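ssh-keygen prompts interactively by default; the same step can be done non-interactively, sketched here with a throwaway key directory (the path is just an example):

```shell
# Generate an RSA key pair without prompts:
# -q silences output, -N "" sets an empty passphrase, -f names the key file.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"   # id_rsa (private key) and id_rsa.pub (public key)
```

The public key in id_rsa.pub is what ssh-copy-id distributes to the remote servers in the next step.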

Step 2) Use the following script to set up passwordless ssh to all remote servers.

#!/bin/ksh

## Author: Pavan Bandaru

## List the hostnames, one per line, in host.txt

printf "Enter your password: "
stty -echo
read SSHPASS
stty echo
echo
export SSHPASS

for host in $(cat host.txt)
do
    sshpass -e ssh-copy-id -f -o StrictHostKeyChecking=no "${host}"
done

unset SSHPASS


Step 3) Use the following script to check TCP connections.


inputfile.txt should contain one entry per line, with three comma-separated fields:
hostname,remotehostname,port
hostname,remotehostname,port
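Each input line is split on commas with awk, exactly as the script does; a standalone sketch with made-up host names:

```shell
# Split one input line into its three fields (example values only).
line="web01,db01,5432"
hostname=$(echo "$line" | awk -F"," '{print $1}')
remotehost=$(echo "$line" | awk -F"," '{print $2}')
remoteport=$(echo "$line" | awk -F"," '{print $3}')
echo "$hostname $remotehost $remoteport"   # → web01 db01 5432
```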


#!/bin/ksh

## Author: Pavan Bandaru

display_usage() {
  echo "This script must be run with an input file."
  printf "\nUsage:\n%s inputfile.txt\n\n" "$0"
}

if [ $# -eq 0 ]; then
    display_usage
    exit 1
fi

inputfile=$1

echo "------------------------------------------"

for host in $(cat "$inputfile")
do
    hostname=$(echo "$host" | awk -F"," '{print $1}')
    remotehost=$(echo "$host" | awk -F"," '{print $2}')
    remoteport=$(echo "$host" | awk -F"," '{print $3}')

    echo "$hostname"
    echo "$remotehost"
    echo "$remoteport"

    # $remotehost and $remoteport are expanded locally before the
    # commands are sent; timeout and bash must exist on the remote host.
    ssh -T "$hostname" << EOSSH
timeout 2 bash -c 'echo >/dev/tcp/$remotehost/$remoteport' && echo "connection status: success" || echo "connection status: failed"
EOSSH

    echo "-----------------------------------------"
done
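The check inside the heredoc relies on bash's /dev/tcp pseudo-device: redirecting output to /dev/tcp/HOST/PORT attempts a TCP connection to that host and port. A minimal local sketch (port 1 is used here only because it is almost always closed):

```shell
# Report success or failure of a TCP connection attempt, in the same
# form as the script above.
check_tcp() {
  if timeout 2 bash -c "echo >/dev/tcp/$1/$2" 2>/dev/null; then
    echo "connection status: success"
  else
    echo "connection status: failed"
  fi
}

check_tcp 127.0.0.1 1   # a closed port prints: connection status: failed
```

Note that /dev/tcp is a bash feature, not a real device file, which is why the script invokes bash explicitly even though it runs under ksh.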