Thursday, December 17, 2009

Web server plug-in routing to SAME application in DIFFERENT clusters

Question
If I install the same Web application into more than one WebSphere Application Server cluster, is it possible to configure the Web server plug-in to properly route requests to the application in both clusters?

Cause

The most common use of the WebSphere Application Server Web server plug-in is to load balance requests for an application installed to a single cluster. For that environment you should not use the instructions in this technote.

In some rare cases, you might want to install the exact same application into multiple clusters. The purpose of this technote is to describe how to configure the WebSphere Application Server Web server Plug-in to work properly in that specific case.

Answer

Note: The Web server plug-in does not support load balancing or fail-over between multiple clusters. Also the configuration described below requires manually changing the plugin-cfg.xml file, so you should turn off automatic propagation in the WebSphere Administrative Console so that the plugin-cfg.xml file does not get automatically overwritten.

Yes, it is possible for a single Web server plug-in to properly route requests when the same application is installed into more than one WebSphere Application Server cluster. To make this work you will need to use different hostnames or port numbers for each of the clusters. You will also need to do some manual cut and paste of information in the plugin-cfg.xml file.

The example below shows exactly how to accomplish this.

* The IBM HTTP Server machine is called ihsbox.
* cluster1 has two members, cl1_member1 and cl1_member2.
* cluster2 has two members, cl2_member1 and cl2_member2.
* Both of the member1 appservers are on a machine called was1.
* Both of the member2 appservers are on a machine called was2.


So, for this simple example, it would look like the following:

        /------ was1 --- cl1_member1
       /           \---- cl2_member1
ihsbox
       \
        \------ was2 --- cl1_member2
                   \---- cl2_member2

If I install my snoop application (context-root /snoop) into both clusters, how is the plug-in supposed to distinguish which ServerCluster to use?

In the plug-in, there are only 3 things that can distinguish between requests:

* hostname
* port number
* URI


For example, these URLs are unique requests that can be routed independently of each other:

http://host1/snoop
http://host1:83/snoop
http://host2/snoop
http://host2:81/snoop

In each of these examples, the URI part /snoop remains the same. It is the hostname or port number that makes the difference.

Back to the example: in the WebSphere administrative console, you would create a virtual host called "vhost1" which would have a host alias of host1:80. You would also need to include other host aliases for the internal ports used by the appservers in cluster1 (for example: ports 9080, 9081, 9443, 9444). You would use this virtual host (vhost1) in all of the members of cluster1 (cl1_member1 and cl1_member2).

In addition, you would create a virtual host called "vhost2" which would have a host alias of host2:80. You would need to include other host aliases for the internal ports used by the appservers in cluster2. You would use this virtual host (vhost2) in all of the members of cluster2 (cl2_member1 and cl2_member2).

In order to maintain session affinity it is essential to use different affinity cookie names for each different cluster. For example, the appservers in cluster1 can use the cookie name "JSESSIONIDC1". And the appservers in cluster2 can use the cookie name "JSESSIONIDC2". By using different cookie names for the different clusters, session affinity will be preserved within each cluster. For information about how to change the cookie names, see Cookie settings in the Information Center.

You must map the application modules to the newly created virtual hosts. Since the same application is installed to both clusters, you will need to map the application modules to both vhosts. However, there currently is a limitation in the Application Server administrative console in that it only allows the application modules to be mapped to a single vhost. Consequently, you must use a trick to map the modules twice and manually copy and paste the configs into a single plugin-cfg.xml file.

Here are the steps to use:

1. Map the application modules to the first vhost (for example: vhost1).

2. Generate the plug-in.

3. From the plugin-cfg.xml file, manually copy the VirtualHostGroup and UriGroup and Route that correspond to vhost1.

4. Map the application modules to the second vhost (for example: vhost2).

5. Generate the plug-in.

6. In the new plugin-cfg.xml file you will see that the VirtualHostGroup and UriGroup for vhost1 are gone, and there are new VirtualHostGroup and UriGroup for vhost2.

7. Manually paste the VirtualHostGroup and UriGroup and Route for vhost1 back into the plugin-cfg.xml file.

8. Save the plugin-cfg.xml file and propagate it to the Web server.


The plugin-cfg.xml file should now have a VirtualHostGroup and UriGroup for vhost1 with a Route that points to cluster1. Also there should be a VirtualHostGroup and UriGroup for vhost2 with a Route that points to cluster2.
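As a sketch, the merged plugin-cfg.xml would contain sections along these lines for the vhost1/cluster1 side, with a mirror-image set of elements for vhost2/cluster2 (the server names, ports, and affinity cookie shown here are illustrative placeholders, not values from a generated file):

```
<VirtualHostGroup Name="vhost1">
   <VirtualHost Name="host1:80"/>
   <VirtualHost Name="*:9080"/>
</VirtualHostGroup>
<ServerCluster Name="cluster1" LoadBalance="Round Robin">
   <Server Name="cl1_member1">
      <Transport Hostname="was1" Port="9080" Protocol="http"/>
   </Server>
   <Server Name="cl1_member2">
      <Transport Hostname="was2" Port="9080" Protocol="http"/>
   </Server>
</ServerCluster>
<UriGroup Name="vhost1_cluster1_URIs">
   <Uri AffinityCookie="JSESSIONIDC1" Name="/snoop/*"/>
</UriGroup>
<Route ServerCluster="cluster1" UriGroup="vhost1_cluster1_URIs" VirtualHostGroup="vhost1"/>
```

The AffinityCookie attribute on the Uri element is where the per-cluster cookie name from the previous step shows up in the plug-in configuration.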

You need to account for these new hostnames in your IBM HTTP Server configuration (httpd.conf). The ServerName for IBM HTTP Server is ihsbox. Create a VirtualHost in IBM HTTP Server to account for the other valid hostnames, like this:

<VirtualHost *:80>
ServerName ihsbox
ServerAlias host1
ServerAlias host2
</VirtualHost>
Add host1 and host2 into your DNS configuration so that they resolve to the IP address of ihsbox.
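For a quick test without touching DNS, the same effect can be had with hosts-file entries on a client machine (192.0.2.10 is a documentation placeholder for ihsbox's real address):

```
# /etc/hosts on a client machine, assuming ihsbox's address is 192.0.2.10
192.0.2.10   ihsbox host1 host2
```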

Now, this URL http://host1/snoop will go to the snoop application in cluster1.

And, this URL http://host2/snoop will go to the snoop application in cluster2.

If you want to use different port numbers instead of different hostnames, the same idea applies there as well.

IBM URL

Wednesday, December 2, 2009

MustGather: Performance, hang, or high CPU issues on AIX

If you are experiencing performance degradation, hang, no response, hung threads, CPU starvation, high CPU utilization, network delays, or deadlocks, this MustGather will assist you in collecting the critical data that is needed to troubleshoot your issue.

Click here

Monitoring performance with Tivoli Performance Viewer (TPV)

Tivoli Performance Viewer (TPV) enables administrators and programmers to monitor the overall health of WebSphere Application Server from within the administrative console.
Click here for more info

High availability manager

WebSphere Application Server includes a high availability manager component. The services that the high availability manager provides are only available to WebSphere Application Server components.
A high availability manager provides several features that allow other WebSphere Application Server components to make themselves highly available. A high availability manager provides:

* A framework that allows singleton services to make themselves highly available. Examples of singleton services that use this framework include the transaction managers for cluster members, and the default IBM® messaging provider, also known as the service integration bus.
* A mechanism that allows servers to easily exchange state data. This mechanism is commonly referred to as the bulletin board.
* A specialized framework for high speed and reliable messaging between processes. This framework is used by the data replication service when WebSphere Application Server is configured for memory-to-memory replication.


Click here for more info

Configuring the hang detection policy

The hang detection option for WebSphere Application Server is turned on by default. You can configure a hang detection policy to accommodate your applications and environment so that potential hangs can be reported, providing earlier detection of failing servers. When a hung thread is detected, WebSphere Application Server notifies you so that you can troubleshoot the problem.
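The policy is tuned through JVM custom properties on the application server. The property names below are from the WebSphere documentation; the values shown are the commonly documented defaults, so verify them in the Information Center for your release:

```
# Hang detection JVM custom properties (seconds, except the alarm count)
com.ibm.websphere.threadmonitor.interval=180
com.ibm.websphere.threadmonitor.threshold=600
com.ibm.websphere.threadmonitor.false.alarm.threshold=100
```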

Click here for more info

IBM HTTP Server Performance Tuning

Click here for detailed information about performance tuning.

Tuning Web servers


Tuning IBM HTTP Server to maximize the number of client connections to WebSphere Application Server

DMZ - Demilitarized Zone

In computer networking, a DMZ is a firewall configuration for securing local area networks (LANs).

In a DMZ configuration, most computers on the LAN run behind a firewall connected to a public network like the Internet. One or more computers also run outside the firewall, in the DMZ. Those computers on the outside intercept traffic and broker requests for the rest of the LAN, adding an extra layer of protection for computers behind the firewall.

More Info on DMZ

Advantages of IBM HTTP Server over the default HTTP transport in WebSphere Application Server 6.1

1. IBM HTTP Server is a full-function HTTP server that, using the WAS plug-in, can efficiently load balance traffic to one or more application servers in a cluster. It is a critical component of a high availability architecture. It also allows you to serve static content from the web servers and reduce load on the application servers.

2. From a security point of view, you can deploy HTTP servers in the DMZ, keeping the application servers behind firewalls.

3. The default HTTP port is 80. On many operating systems you must be root (administrator) to bind to this port, so the application server itself would have to run as root. This is not desirable, because you do not want application code running with root permissions.

Tuesday, November 10, 2009

Capacity Planning

Capacity planning is the process of determining the production capacity needed by an organization to meet changing demands for its products.[1] In the context of capacity planning, "capacity" is the maximum amount of work that an organization is capable of completing in a given period of time.

A discrepancy between the capacity of an organization and the demands of its customers results in inefficiency, either in under-utilized resources or unfulfilled customers. The goal of capacity planning is to minimize this discrepancy. Demand for an organization's capacity varies based on changes in production output, such as increasing or decreasing the production quantity of an existing product, or producing new products. Better utilization of existing capacity can be accomplished through improvements in overall equipment effectiveness (OEE). Capacity can be increased through introducing new techniques, equipment and materials, increasing the number of workers or machines, increasing the number of shifts, or acquiring additional production facilities.

Capacity is calculated as: (number of machines or workers) × (number of shifts) × (utilization) × (efficiency).
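Plugging hypothetical numbers into that formula (10 machines, 2 shifts, 80% utilization, 90% efficiency) gives:

```shell
# Worked example of the capacity formula; the inputs are made up.
# capacity = machines x shifts x utilization x efficiency
capacity=$(LC_ALL=C awk 'BEGIN { printf "%.1f", 10 * 2 * 0.80 * 0.90 }')
echo "$capacity"   # prints 14.4
```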

The broad classes of capacity planning are lead strategy, lag strategy, and match strategy.

* Lead strategy is adding capacity in anticipation of an increase in demand. Lead strategy is an aggressive strategy with the goal of luring customers away from the company's competitors. The possible disadvantage to this strategy is that it often results in excess inventory, which is costly and often wasteful.
* Lag strategy refers to adding capacity only after the organization is running at full capacity or beyond due to increase in demand (North Carolina State University, 2006). This is a more conservative strategy. It decreases the risk of waste, but it may result in the loss of possible customers.
* Match strategy is adding capacity in small amounts in response to changing demand in the market. This is a more moderate strategy.

In the context of systems engineering, capacity planning is used during system design and system performance monitoring.

Capacity planning is a long-term decision that establishes a firm's overall level of resources. It extends over a time horizon long enough to obtain those resources. Capacity decisions affect production lead time, customer responsiveness, operating cost, and the company's ability to compete. Inadequate capacity planning can lose customers and business, while excess capacity can drain the company's resources and prevent investment in more lucrative ventures. When capacity should be increased, and by how much, are the critical decisions.

Click here for a capacity planning white paper.

Monday, November 9, 2009

WebSphere Automation Tool (WASIC)

WASIC is a WebSphere Application Server installation and configuration automation tool.

Overview:-

WASIC is a tool to install, configure, and administer WebSphere Application Server Version 6.1/7.0.

Features of WASIC:

1. Deployment Manager installation.
2. Node installation.
3. Federate nodes to the Deployment Manager.
4. Create profiles.
5. Install the Update Installer.
6. Apply fix packs on the base installation.
7. Change Deployment Manager ports.
8. Check whether ports are already in use.
9. Change and check node agent ports.
10. Web server installation and configuration with the application server.
11. Cluster and server creation and server configuration.
12. Create clusters.
13. Create servers and add them to clusters.
14. Create virtual hosts.
15. Set maximum and minimum heap sizes.
16. Set debug arguments.
17. Configure HTTP transports.
18. Set the boot classpath.
19. Set the classpath.
20. Create server-level variables.
21. Set JVM arguments.
22. Change server ports.
23. JDBC and data source creation.
24. Create JDBC providers at cluster-level scope.
25. Create data sources at cluster-level scope.
26. Create J2C authentication aliases.
27. Create connection pool settings.
28. Test database connections.
29. Cluster stop and start.
30. Application installation.

Pre-requisites:-


These prerequisites must be in place before installing this tool.

1. Create one user to be used as the administrator.
2. ssh for this user should work on all UNIX boxes.
3. The directories under which the deployment manager, nodes, and web server are to be installed should be owned by this user.
4. The directory in which we place this tool should be mounted on all UNIX boxes where WebSphere Application Server will be installed.
5. Download the WebSphere Application Server and HTTP Server binaries.

Description:-

This tool performs the base installation of WebSphere Application Server version 6.1. First it checks whether the WAS base installation has already been done on the UNIX machine; if not, it installs the WAS base binaries. The location it checks depends on the values specified in the configure.properties file and the response file. It then installs the Update Installer and applies the fix packs. After installing the base binaries, it installs the deployment manager using the manageprofiles utility, the values specified in the property files, and the ports defined in the ports properties file. It also checks whether those ports are already in use; if any are, it sends out the list of ports in use. It also changes the SOAP timeout value to 6000, since the default of 180 is very low.

It then installs the number of nodes specified in the property file on the hosts specified there, installing the base binaries first if they are not already present on a UNIX box. It federates the nodes to the deployment manager, changes the node agent ports, and starts the nodes. It also changes the SOAP timeout for the nodes.
*It can also install a node and federate it with the Deployment Manager in an existing environment.

After node installation is complete, it installs web servers on the hosts specified in the property files, configures each web server with the deployment manager, generates the plug-in, propagates the plug-in back to the web server, and starts the admin and HTTP servers. The HTTP server listens on the port given in the property file.



It then creates a cluster, creates servers, adds the servers to the cluster, and configures all of the following on each server:-
o Create virtual hosts.
o Maximum and minimum heap size.
o Web container thread pool settings.
o Debug arguments.
o HTTP transports.
o Boot classpath.
o Classpath.
o Create server-level variables.
o JVM arguments.
o Change server ports.

All these values are picked up from the property files.
• Other settings can also be configured as required, and the scripts can be modified to accommodate those changes.

After cluster and server creation, it creates the JDBC provider, data source, J2C authentication alias, and connection pool settings at cluster-level scope.

It then installs the application on the cluster that was created, starts the cluster, and sends out an email with the Deployment Manager console login.

So a single script execution performs a full-fledged environment creation, removing as much manual intervention as possible; the only script we have to execute is WASIC.sh.

All these environment-creation steps can run as one single pass, or they can be executed separately as individual steps depending on need, and new features can be added as required.


NOTE: ALL VALUES ARE PULLED FROM PROPERTY FILES, SO EVERYTHING CAN BE VERSION CONTROLLED IN CLEARCASE.


This tool also works in these scenarios:

1. If we have to create just the Deployment Manager.
2. If we have to create a standalone application server.
3. If we have to add a new node to an existing environment.
4. If we have to add a new cluster to an existing environment.
5. If we have to add a new server to an existing cluster and configure it.
6. If we have to modify existing JDBC and data source connection settings.
7. For day-to-day application deployments, cluster stops, and cluster starts.
8. Create a new web server.
9. Create a new web server and add it to an existing environment.
10. It will generate the plug-in after application installation or any configuration change and propagate the plug-in to the web server.

For all this environment creation to work, all we have to do is copy an existing environment in the WASIC configuration, rename it to the new environment we want to create, change the values in the properties files, change the names of the scripts, and run WASIC.sh with the functional area name and the environment name.

All these steps can also be integrated as part of Build Forge.

FUTURE ENHANCEMENTS:

1. Create a graphical user interface for the WASIC tool.
2. Add new features for V7.0: installation of the admin agent and job manager, and registering nodes to the admin agent and job manager.
3. Configure global security, LDAP, LTPA, and SSO.
4. Configure SSL: creating a self-signed certificate, replacing an existing self-signed certificate, creating certificate authority requests, receiving a certificate issued by a certificate authority, retrieving a signer certificate from a remote SSL port, and adding a signer certificate to a keystore.
5. Installation of WebSphere Portal Server and WebSphere Process Server.
6. The tool can be modified as per requirements, and more features can be added.
7. The scripts can be broken into smaller scripts, so that a single configuration item on a server can be changed on its own.
8. The scripts can also be integrated with Build Forge.

Release management

Release management defines the mechanisms of building and releasing software, and is included as a component of the Service Support Set in ITIL. The practice of Release Management continues to evolve while being applied to complex distributed software services such as in the SOA realm.

Release Management is proactive technical support focused on the planning and preparation of new services. Some of the benefits are:-

* The opportunity to plan expenditure and resource requirements in advance.
* A structured approach to rolling out all new software or hardware, which is efficient and effective.
* Changes to software are ‘bundled’ together for one release, which minimises the impact of changes on users.
* Testing before rollout, which minimises incidents affecting users and requires less reactive support.

The process

The release management meta process consists of several steps.

Gathering and description

When a new release is prepared, requirements are gathered. These are for example improvements that are needed to fix the current product. A parallel step is to look at the dependencies. Programs often consist of many modules that depend on each other to work. Changing one will affect the other. Once the requirements and dependencies are known, the next release process can be planned. This planning consists of what steps need to be taken, the time constraints etc. In figure 1 the entire process is visualized, using the meta modeling technique.

Many IT shops, especially those with extensive Microsoft platform deployments have developed elaborate processes for Patch Management to the Production environment. Their scope usually includes operating systems software, database upgrade, and even firmware upgrades to hardware components (e.g. storage arrays and network switches).

Release Building

When the requirements, dependencies and planning are known, the building process of the new release begins. The first step is to design the new release. This can be done with the use of various software development techniques for example UML. The design is worked out into code in a programming language (e.g. Java, C#, C++). The pieces of code, classes etc. are joined together, compiled into working subsections, and finally put together in a working program, a build.

Acceptance Test

When the build is ready, it is sent to a testing department for further acceptance testing, checking the build against the testing standards. The program is reviewed to verify if it works correctly and lives up to the requirements and dependencies. During this time the entire process is documented to serve as a future knowledge base. After a final verification of the program the testing standards are updated.

Release Preparation

Once a correct, verified release has been achieved, it is prepared for release to the production environment. The release is packaged, meaning preparing a final product to be sent to a specific customer. This can be an Internet download, a CD, or a specific language etc. A final step is the verification of this package against the requirements resulting in audit reports. These audit reports give a last verification before the entire package is released.

Release Deployment

The deployment itself is getting the release to the customer and implementing it.

Software development process

A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

Software development activities

Planning

The important task in creating a software product is extracting the requirements or requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not what software should do. Incomplete, ambiguous, or even contradictory requirements are recognized by skilled and experienced software engineers at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.

Once the general requirements are gleaned from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document.

Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document so that if there are ever disputes, any ambiguity of what was promised to the client can be clarified.

Design

Domain Analysis is often the first step in attempting to design a new piece of software, whether it be an addition to an existing software, a new application, a new subsystem or a whole new system. Assuming that the developers (including the analysts) are not sufficiently knowledgeable in the subject area of the new software, the first task is to investigate the so-called "domain" of the software. The more knowledgeable they are about the domain already, the less work required. Another objective of this work is to make the analysts, who will later try to elicit and gather the requirements from the area experts, speak with them in the domain's own terminology, facilitating a better understanding of what is being said by these experts. If the analyst does not use the proper terminology it is likely that they will not be taken seriously, thus this phase is an important prelude to extracting and gathering the requirements. If an analyst hasn't done the appropriate work confusion may ensue: "I know you believe you understood what you think I said, but I am not sure you realize what you heard is not what I meant."[1]

Specification

Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable. A good way to determine whether the specifications are sufficiently precise is to have a third party review the documents, making sure that the requirements and use cases (a use case, in software engineering and systems engineering, is a description of a system's behavior as it responds to a request that originates from outside of that system) are logically sound.

Architecture

The architecture of a software system or software architecture refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.

Detailed Design

Detailed design translates the architecture into a concrete operational guide. It may cover programming-specific designs, UI and validations, and database structure, and may also embed design patterns and best practices.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program the code for the project.

Software testing is an integral and important part of the software development process. This part of the process ensures that bugs are recognized as early as possible.

Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the authoring of an API, be it external or internal.

Deployment and maintenance

Deployment starts after the code is appropriately tested, is approved for release and sold or otherwise distributed into a production environment.

Software Training and Support is important because a large percentage of software projects fail because the developers fail to realize that it doesn't matter how much time and planning a development team puts into creating software if nobody in an organization ends up using it. People are often resistant to change and avoid venturing into an unfamiliar area, so as a part of the deployment phase, it is very important to have training classes for new clients of your software.

Maintenance and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. It may be necessary to add code that does not fit the original design to correct an unforeseen problem or it may be that a customer is requesting more functionality and code can be added to accommodate their requests. It is during this phase that customer calls come in and you see whether your testing was extensive enough to uncover the problems before customers do. If the labor cost of the maintenance phase exceeds 25% of the prior-phases' labor cost, then it is likely that the overall quality, of at least one prior phase, is poor. In that case, management should consider the option of rebuilding the system (or portions) before maintenance cost is out of control.

Bug Tracking System tools are often deployed at this stage of the process to allow development teams to interface with customer/field teams testing the software to identify any real or perceived issues. These software tools, both open source and commercially licensed, provide a customizable process to acquire, review, acknowledge, and respond to reported issues.

Build Automation

Build automation is the act of scripting or automating a wide variety of tasks that software developers do in their day-to-day activities including things like:

* compiling computer source code into binary code
* packaging binary code
* running tests
* deployment to production systems
* creating documentation and/or release notes

Types

* On-Demand automation such as a user running a script at the command line
* Scheduled automation such as a continuous integration server running a nightly build
* Triggered automation such as a continuous integration server running a build on every commit to a version control system
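As a minimal sketch, an on-demand build script simply chains these tasks and stops on the first failure. The step bodies below are stand-ins for real commands (a compiler, a test runner, a packaging tool), not an actual build:

```shell
#!/bin/sh
# Minimal on-demand build automation sketch; each function is a
# placeholder for a real command (e.g. javac, a test runner, jar/tar).
set -e                               # stop on the first failing step

compile()   { echo "compiling"; }    # stand-in for the compile step
run_tests() { echo "testing"; }      # stand-in for the test step
make_pkg()  { echo "packaging"; }    # stand-in for the packaging step

compile
run_tests
make_pkg
echo "BUILD OK"
```

With `set -e`, a non-zero exit from any step aborts the run, so "BUILD OK" is only printed when every step succeeded.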

Advantages

* Improve product quality
* Accelerate the compile and link processing
* Eliminate redundant tasks
* Minimize "bad builds"
* Eliminate dependencies on key personnel
* Have history of builds and releases in order to investigate issues
* Save time and money - because of the reasons listed above.[6]


Requirements of a build system

Basic requirements:

1. Frequent or overnight builds to catch problems early.
2. Support for Source Code Dependency Management
3. Incremental build processing
4. Reporting that traces source to binary matching
5. Build acceleration
6. Extraction and reporting on build compile and link usage

Optional requirements:

1. Generate release notes and other documentation such as help pages
2. Build status reporting
3. Test pass or fail reporting
4. Summary of the features added/modified/deleted with each new build

Script to check the status of application and send email notification if application is down

This script checks the status of an application and sends an email notification if the application is down. Define the application URLs in url.properties.
#!/usr/bin/bash

## Written by Charanjeet Singh ##

PATH=${PATH}:/opt/csw/bin; export PATH

cd /home/charan
date=`date '+%Y%m%d%H%M'`
mdate=`date '+%b-%d(%A)-%Y_%H.%M%p'`
home=/home/charan
logdir=$home/logs/application/$1.app.log$date
emaillist=

if [ $# -ne 1 ]; then
    echo
    echo "enter env name for the script to execute, or pass ALL as argument to check the status of all environments specified in url.properties"
    echo
    exit 0
fi

echo " Checking application status of $1 Server "
envlist=" DEV SIT UAT "

if [[ $1 == ALL ]]; then
    for envvar in $envlist; do
        PropsPath="/home/ccbuild/charan/url.properties"
        URL=`grep "$envvar"= $PropsPath | awk -F= '{print $2}'`
        echo " $envvar URL --> $URL "
        wget -q -O $logdir $URL
        if [ ! -s $logdir ]; then
            mailx -s "$envvar Application is DOWN" $emaillist < $logdir
            rm -rf $logdir
        fi
    done
fi

for env in $envlist; do
    if [[ $env == $1 ]]; then
        PropsPath="/home/charan/url.properties"
        URL=`grep "$env"= $PropsPath | awk -F= '{print $2}'`
        echo " $env URL --> $URL "
        wget -q -O $logdir $URL
        if [ ! -s $logdir ]; then
            mailx -s "$env Application is DOWN" $emaillist < $logdir
            rm -rf $logdir
        fi
    fi
done

rm -rf $logdir

--------------------------------------------------------------

DEV=
SIT=
UAT=
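To illustrate the grep/awk lookup the script relies on, here is a self-contained run against a throwaway properties file (the hostnames and URLs are made-up examples):

```shell
# Create a throwaway url.properties and extract one environment's URL
# the same way the monitoring script does.
props=$(mktemp)
cat > "$props" <<'EOF'
DEV=http://devhost/snoop
SIT=http://sithost/snoop
UAT=http://uathost/snoop
EOF
url=$(grep "SIT=" "$props" | awk -F= '{print $2}')
echo "$url"   # prints http://sithost/snoop
rm -f "$props"
```

Note that `awk -F= '{print $2}'` keeps only the second `=`-separated field, so a URL containing an `=` sign (for example, in a query string) would be truncated; `cut -d= -f2-` is a safer variant in that case.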

Automation Using Scripting

There are a number of ways in which we can automate tasks on Windows, .NET, and UNIX platforms:-

1. PowerShell
2. Batch files
3. WMI (Windows Management Instrumentation)
4. NAnt
5. Ant
6. Perl
7. Shell scripting
8. VBScript
9. System administration automation

NAnt

Why NAnt?

NAnt is different. Instead of a model where it is extended with shell-based commands, NAnt is extended using task classes. Instead of writing shell commands, the configuration files are XML-based, calling out a target tree where various tasks get executed. Each task is run by an object that implements a particular Task interface.

Granted, this removes some of the expressive power that is inherent in being able to construct a shell command such as `find . -name foo -exec rm {}`, but it gives you the ability to be cross-platform, to work anywhere and everywhere. And hey, if you really need to execute a shell command, NAnt has an <exec> task that allows different commands to be executed based on the OS it is executing on.

Click here for more information about NAnt

ANT

Introduction

Apache Ant is a Java-based build tool. In theory, it is kind of like make, without make's wrinkles.

Why?

Why another build tool when there is already make, gnumake, nmake, jam, and others? Because all those tools have limitations that Ant's original author couldn't live with when developing software across multiple platforms. Make-like tools are inherently shell-based: they evaluate a set of dependencies, then execute commands not unlike what you would issue on a shell. This means that you can easily extend these tools by using or writing any program for the OS that you are working on; however, this also means that you limit yourself to the OS, or at least the OS type, such as Unix, that you are working on.

Makefiles are inherently evil as well. Anybody who has worked on them for any time has run into the dreaded tab problem. "Is my command not executing because I have a space in front of my tab?!!" said the original author of Ant way too many times. Tools like Jam took care of this to a great degree, but still have yet another format to use and remember.

Ant is different. Instead of a model where it is extended with shell-based commands, Ant is extended using Java classes. Instead of writing shell commands, the configuration files are XML-based, calling out a target tree where various tasks get executed. Each task is run by an object that implements a particular Task interface.

Granted, this removes some of the expressive power that is inherent in being able to construct a shell command such as `find . -name foo -exec rm {}`, but it gives you the ability to be cross-platform, to work anywhere and everywhere. And hey, if you really need to execute a shell command, Ant has an <exec> task that allows different commands to be executed based on the OS it is executing on.

Click here to learn Ant
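As a minimal sketch of the XML target tree described above (the project name, paths and jar name here are invented for the example, not taken from any real project):

```xml
<project name="sample" default="dist" basedir=".">
    <!-- compile the Java sources into build/classes -->
    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"/>
    </target>
    <!-- package the classes; depends= makes Ant run compile first -->
    <target name="dist" depends="compile">
        <mkdir dir="dist"/>
        <jar destfile="dist/sample.jar" basedir="build/classes"/>
    </target>
</project>
```

Running `ant` with no arguments executes the `dist` target, and the `depends` attribute ensures `compile` runs before it.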

Windows Installer

Windows Installer is the latest Microsoft technology for deploying applications. It offers a format for packaging an application and an engine to unpack and install an application. MSI packages are used in place of proprietary installation systems, allowing your installer to be run on any Windows platform from Windows 95 to XP and higher.

You can use Advanced Installer without knowing all the details and intricacies of the Windows Installer - Advanced Installer creates an abstraction over the underlying technology. However, to truly understand what is going on, to access complex features or to troubleshoot, Windows Installer knowledge is strongly recommended. All the information you need is included in the Windows SDK from Microsoft, available using the link below.
Download the latest Windows Installer SDK.

Please click here for detailed information about Windows Installer.

ADVANCED INSTALLER
Advanced Installer is a Windows Installer authoring tool. It offers a friendly and easy to use Graphical User Interface for creating and maintaining installation packages (EXE, MSI, etc.) based on the Windows Installer installation technology.
A trial version of Advanced Installer can be downloaded from here.
Click here for Advanced Installer tutorials

VBScript

What Is VBScript?

Microsoft Visual Basic Scripting Edition brings active scripting to a wide variety of environments, including Web client scripting in Microsoft Internet Explorer and Web server scripting in Microsoft Internet Information Services.

Click here for more information.

Microsoft Operations Manager (MOM)

MOM 2005 provides comprehensive event and performance management, proactive monitoring and alerting, reporting and trend analysis, and system- and application-specific knowledge and tasks to improve the manageability of Windows-based servers and applications.
MOM 2005 Features Overview

The following are features of MOM 2005.

Security

MOM implements a security model that enables staff and components to work with accounts that have lower privilege levels.

Speed and ease of deployment

By combining automation and wizards, it is possible, depending on the scale of the deployment, to deploy MOM in a matter of hours rather than weeks.

Low-bandwidth or unreliable networks

MOM's use of agents ensures that data collection on managed entities continues even if there is a temporary network outage.

Extended problem diagnostics

Because MOM retains operational data in its own database, analysts have a longer time to engage in diagnostics.

Data volume

MOM's multiple views, refined health model, and intelligent monitoring enable customers to filter and reduce large volumes of alert data.

Flexible, robust, and secure reporting

MOM Reporting uses Microsoft SQL Server and SQL Server Reporting Services to support long-term storage, report customization, dynamic reports, data exports, auditing, planning, and report security.

High availability

MOM's management model enables you to add management servers so you can implement failover and eliminate a single point of failure.

Scalability

MOM's design is such that you can manage thousands of entities.

High level of integration

MOM provides the MOM Connector Framework (MCF) and extensible APIs that enable you to integrate MOM with virtually any kind of management system or application.

Click here for detailed Information about MOM

Service Control Manager

Introduction
Service Control Manager is a program that gives you full control over services installed on your computer, and on remote computers as well.

Using this tool you can:

* Install and uninstall services.
* Start, stop and pause services.
* View a list of all installed services and their properties.
* Change service properties.
* Do all of the above on a REMOTE computer!

The only difference between local and remote operations is speed: remote operations are usually slower than local ones.

Click here for more Information

Script to create a cluster and server and configure them fully

This script does the following tasks:

1. Creates the cluster and server and makes the server a member of the cluster.
2. If the cluster exists, it creates the server and makes it a member of the cluster.
3. If the cluster and server exist but the server is not a member of the cluster, it makes that server a member.

It then configures:

4. Debug arguments
5. HTTP transports
6. Ports: SOAP, DRS and bootstrap ports
7. JVM heap size
8. Virtual host
9. AppServer classpath
10. AppServer boot classpath
11. Generic JVM arguments
12. Cluster-level and server-level variables
13. WebContainer thread pool settings



Written by Charanjeet Singh


import sys, java
from java.util import Properties
from java.io import FileInputStream

lineSep = java.lang.System.getProperty('line.separator')
sep = " " + "-" * 89 + " "


def clusterServer(node, cluster, server, http, http1, host, bootstrap, soap, orb,
                  csiv2_multi, csiv2_server, adminhost, adminhost1, dcs, sib, sib1,
                  sib_mq, sib_mq1, sip, sip1, sas, maxHS, minHS, varname1, varvalue1,
                  varname2, varvalue2, varname3, varvalue3, JvmArguments, maxtp,
                  mintp, debug):

    global AdminApp, AdminConfig, AdminControl, AdminTask

    print sep

    ## Get the config ID of the cell
    cell = AdminControl.getCell()
    cellid = AdminConfig.getid('/Cell:' + cell)

    ## Check for the existence of the node
    nodeid = AdminConfig.getid('/Cell:' + cell + '/Node:' + node)
    print " Checking for existence of node --> " + node
    if len(nodeid) == 0:
        print " Error -- node not found for name --> " + node
        return
    print " Node exists --> " + node
    print sep

    ## Check for the existence of the cluster and create it if it does not exist
    Clusterid = AdminConfig.getid('/ServerCluster:' + cluster + '/')
    print " Checking for existence of cluster --> " + cluster
    if len(Clusterid) == 0:
        print " Cluster does not exist; creating cluster --> " + cluster
        attrs = [["name", cluster],
                 ["description", cluster + " cluster"],
                 ["preferLocal", "true"],
                 ["stateManagement", [["initialState", "STOP"]]]]
        AdminConfig.create('ServerCluster', cellid, attrs)
        AdminConfig.save()
        ## Verify that the cluster was created successfully
        Clusterid = AdminConfig.getid('/ServerCluster:' + cluster + '/')
        if len(Clusterid) == 0:
            print " Failed to create cluster --> " + cluster
            return
        print " Cluster --> " + cluster + " created successfully "
    else:
        print " Cluster exists --> " + cluster
    print sep

    ## Check for the existence of the server and create it as a cluster member if needed
    Serverid = AdminConfig.getid('/Cell:' + cell + '/Node:' + node + '/Server:' + server)
    print " Checking for existence of server --> " + server
    if len(Serverid) == 0:
        print " Creating --> " + server + " and making it a member of cluster --> " + cluster
        AdminConfig.createClusterMember(Clusterid, nodeid, [['memberName', server]])
        AdminConfig.save()
        Serverid = AdminConfig.getid('/Cell:' + cell + '/Node:' + node + '/Server:' + server)
        if len(Serverid) > 0:
            print " Server --> " + server + " created and added as a member of --> " + cluster
    else:
        print " Server already exists with this name --> " + server + ", please provide a different name in the property file "
    print sep

    ## Configure properties for the server, variables and virtual host
    if len(Clusterid) > 0 and len(Serverid) > 0:
        print " The following properties will be configured for --> " + server
        print "  1. Debug arguments "
        print "  2. HTTP transports "
        print "  3. Ports --> SOAP, DRS and bootstrap ports "
        print "  4. JVM heap size "
        print "  5. Virtual host "
        print "  6. Classpath "
        print "  7. Boot classpath "
        print "  8. Generic JVM arguments "
        print "  9. Server-level variables "
        print " 10. WebContainer thread pool settings "
        print sep

        ## Configure JVM debug arguments (debug mode itself stays off)
        jvm = AdminConfig.list('JavaVirtualMachine', Serverid)
        AdminConfig.modify(jvm, [['debugMode', 'false'], ['debugArgs', debug]])
        AdminConfig.save()
        print " JVM debug arguments configured "
        print sep

        ## Configure the HTTP transports and all other server ports.
        ## The third tuple element flags endpoints shared between transport
        ## chains, which modifyServerPort only changes when modifyShared is set.
        ports = [('WC_defaulthost', http, 0),
                 ('WC_defaulthost_secure', http1, 0),
                 ('BOOTSTRAP_ADDRESS', bootstrap, 0),
                 ('SOAP_CONNECTOR_ADDRESS', soap, 0),
                 ('CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS', csiv2_multi, 0),
                 ('CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS', csiv2_server, 0),
                 ('ORB_LISTENER_ADDRESS', orb, 0),
                 ('SAS_SSL_SERVERAUTH_LISTENER_ADDRESS', sas, 0),
                 ('WC_adminhost', adminhost, 0),
                 ('WC_adminhost_secure', adminhost1, 0),
                 ('DCS_UNICAST_ADDRESS', dcs, 1),
                 ('SIB_ENDPOINT_ADDRESS', sib, 0),
                 ('SIB_ENDPOINT_SECURE_ADDRESS', sib1, 0),
                 ('SIB_MQ_ENDPOINT_ADDRESS', sib_mq, 0),
                 ('SIB_MQ_ENDPOINT_SECURE_ADDRESS', sib_mq1, 0),
                 ('SIP_DEFAULTHOST', sip, 1),
                 ('SIP_DEFAULTHOST_SECURE', sip1, 1)]
        for (endPoint, port, shared) in ports:
            portsDict = {}
            portsDict["nodeName"] = node
            portsDict["endPointName"] = endPoint
            portsDict["host"] = host
            portsDict["port"] = port
            if shared:
                portsDict["modifyShared"] = "true"
            AdminTask.modifyServerPort(server,
                ["-%s %s" % (key, value) for key, value in portsDict.items()])
            print " " + endPoint + " configured "
        print " All ports configured "
        print sep

        ## Configure the JVM heap size
        AdminConfig.modify(jvm, [['initialHeapSize', minHS], ['maximumHeapSize', maxHS]])
        AdminConfig.save()
        print " JVM heap size configured "
        print sep

        ## Configure the WebContainer thread pool size
        tpList = AdminConfig.list('ThreadPool', Serverid).split(lineSep)
        for tp in tpList:
            ## Only resize the WebContainer pool, not every pool on the server
            if AdminConfig.showAttribute(tp, 'name') == 'WebContainer':
                AdminConfig.modify(tp, [['maximumSize', maxtp], ['minimumSize', mintp]])
        AdminConfig.save()
        print " WebContainer thread pool size configured "
        print sep

        ## Create the virtual host (one per cluster) with aliases for both HTTP ports
        hostAlias1 = [["hostname", "*"], ["port", http]]
        hostAlias2 = [["hostname", "*"], ["port", http1]]
        vh = AdminConfig.getid("/VirtualHost:" + cluster + "/")
        if len(vh) == 0:
            vtempl = "default_host(templates/default|virtualhosts.xml#VirtualHost_1)"
            vh = AdminConfig.createUsingTemplate('VirtualHost', cellid, [['name', cluster]], vtempl)
        AdminConfig.create('HostAlias', vh, hostAlias1)
        AdminConfig.create('HostAlias', vh, hostAlias2)
        AdminConfig.save()
        print " Virtual host created "
        print sep

        ## Create server-level variables, replacing any existing variable map
        varMapserver = AdminConfig.getid('/Cell:' + cell + '/Node:' + node + '/Server:' + server + '/VariableMap:/')
        if len(varMapserver) > 0:
            AdminConfig.remove(varMapserver)
            AdminConfig.save()
        AdminConfig.create('VariableMap', Serverid, [])
        varMapserver = AdminConfig.getid('/Cell:' + cell + '/Node:' + node + '/Server:' + server + '/VariableMap:/')
        attrs1 = [[['symbolicName', varname1], ['value', varvalue1], ['description', varname1]],
                  [['symbolicName', varname2], ['value', varvalue2], ['description', varname2]],
                  [['symbolicName', varname3], ['value', varvalue3 + '/' + server], ['description', varname3]]]
        AdminConfig.modify(varMapserver, [['entries', attrs1]])
        AdminConfig.save()
        print " Server-level variables configured "
        print sep

        ## Configure the classpath for the AppServer; each modify appends one entry
        cpEntries = ['${' + varname2 + '}/properties.jar',
                     '${' + varname1 + '}/ojdbc14.jar',
                     '${' + varname1 + '}/log4j-1.2.4.jar',
                     '${' + varname1 + '}/poi-2.5.1-final-20040804.jar',
                     '${' + varname1 + '}/jdom.jar',
                     '${' + varname1 + '}/idssecl.jar',
                     '${' + varname1 + '}/com.ibm.mq.jar',
                     '${' + varname1 + '}/com.ibm.mqbind.jar',
                     '${' + varname1 + '}/com.ibm.mqjms.jar',
                     '${' + varname1 + '}/connector.jar',
                     '${' + varname1 + '}/jms.jar',
                     '${' + varname1 + '}/jta.jar',
                     '${' + varname1 + '}/mqji.jar',
                     '${' + varname1 + '}/idsclie.jar',
                     '${' + varname2 + '}/' + server]
        for entry in cpEntries:
            AdminConfig.modify(jvm, [['classpath', entry]])
        AdminConfig.save()
        print " Classpath for AppServer configured "
        print sep

        ## Configure the boot classpath for the AppServer
        AdminConfig.modify(jvm, [['bootClasspath', '${' + varname1 + '}/cal141.jar']])
        AdminConfig.save()
        print " Boot classpath for AppServer configured "
        print sep

        ## Configure the generic JVM arguments for the AppServer
        AdminConfig.modify(jvm, [['genericJvmArguments', JvmArguments]])
        AdminConfig.save()
        print " Generic JVM arguments for AppServer configured "
        print sep

    ## Full resynchronization of all managed nodes
    print " Fully resynchronizing nodes ... "
    nodelist = AdminTask.listManagedNodes().split(lineSep)
    for nodename in nodelist:
        ## Identify the ConfigRepository MBean, refresh the repository epoch,
        ## then drive a node sync so the changes reach the node agents
        repo = AdminControl.completeObjectName('type=ConfigRepository,process=nodeagent,node=' + nodename + ',*')
        print AdminControl.invoke(repo, 'refreshRepositoryEpoch')
        sync = AdminControl.completeObjectName('cell=' + cell + ',node=' + nodename + ',type=NodeSync,*')
        print AdminControl.invoke(sync, 'sync')
    print " Full resynchronization completed "
    print sep

########################################################## Main Program #############################################################

arglen=len(sys.argv)

num_exp_args=11

if (arglen != num_exp_args):

print "eleven arguments are required. This argument should be a properties file and Server level Variables."

print " ----------------------------------------------------------------------------------------- "

sys.exit(-1)

propFile=sys.argv[0]

properties=Properties();

try:

properties.load(FileInputStream(propFile))

print " ----------------------------------------------------------------------------------------- "

print "Succesfully read property file "+propFile

print " ----------------------------------------------------------------------------------------- "

except:

print "Cannot read property file "+propFile
sys.exit(-1)

print " ----------------------------------------------------------------------------------------- "

# Read the target node, cluster, and server from the command line,
# and the port values for each endpoint from the properties file
node = sys.argv[7]
cluster = sys.argv[8]
server = sys.argv[9]
host = sys.argv[10]

http = int(properties.getProperty("WC_defaulthost"))
http1 = int(properties.getProperty("WC_defaulthost_secure"))
bootstrap = int(properties.getProperty("BOOTSTRAP_ADDRESS"))
soap = int(properties.getProperty("SOAP_CONNECTOR_ADDRESS"))
orb = int(properties.getProperty("ORB_LISTENER_ADDRESS"))
csiv2_multi = int(properties.getProperty("CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS"))
csiv2_server = int(properties.getProperty("CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS"))
adminhost = int(properties.getProperty("WC_adminhost"))
adminhost1 = int(properties.getProperty("WC_adminhost_secure"))
dcs = int(properties.getProperty("DCS_UNICAST_ADDRESS"))
sib = int(properties.getProperty("SIB_ENDPOINT_ADDRESS"))
sib1 = int(properties.getProperty("SIB_ENDPOINT_SECURE_ADDRESS"))
sib_mq = int(properties.getProperty("SIB_MQ_ENDPOINT_ADDRESS"))
sib_mq1 = int(properties.getProperty("SIB_MQ_ENDPOINT_SECURE_ADDRESS"))
sip = int(properties.getProperty("SIP_DEFAULTHOST"))
sip1 = int(properties.getProperty("SIP_DEFAULTHOST_SECURE"))
sas = int(properties.getProperty("SAS_SSL_SERVERAUTH_LISTENER_ADDRESS"))
maxHS = int(properties.getProperty("MAXIMUM_HEAP_SIZE"))
minHS = int(properties.getProperty("MINIMUM_HEAP_SIZE"))

varname1 = sys.argv[1]
varvalue1 = sys.argv[2]
varname2 = sys.argv[3]
varvalue2 = sys.argv[4]
varname3 = sys.argv[5]
varvalue3 = sys.argv[6]

JvmArguments = str(properties.getProperty("GenericJvmArguments"))
maxtp = str(properties.getProperty("MAXIMUM_THREAD_POOL"))
mintp = str(properties.getProperty("MINIMUM_THREAD_POOL"))
debug = str(properties.getProperty("DebugArguments"))

clusterServer(node,cluster,server,http,http1,host,bootstrap,soap,orb,csiv2_multi,csiv2_server,adminhost,adminhost1,dcs,sib,sib1,sib_mq,sib_mq1,sip,sip1,sas,maxHS,minHS,varname1,varvalue1,varname2,varvalue2,varname3,varvalue3,JvmArguments,maxtp,mintp,debug)
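The script above reads all of its port and JVM values from the properties file passed as the first argument. As a sketch, a matching file would contain entries like the following (the key names come from the script; every value shown here is an illustrative placeholder, not from the original post):

```properties
WC_defaulthost=9081
WC_defaulthost_secure=9444
BOOTSTRAP_ADDRESS=2810
SOAP_CONNECTOR_ADDRESS=8879
ORB_LISTENER_ADDRESS=9101
CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS=9402
CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS=9403
WC_adminhost=9061
WC_adminhost_secure=9044
DCS_UNICAST_ADDRESS=9353
SIB_ENDPOINT_ADDRESS=7276
SIB_ENDPOINT_SECURE_ADDRESS=7286
SIB_MQ_ENDPOINT_ADDRESS=5558
SIB_MQ_ENDPOINT_SECURE_ADDRESS=5578
SIP_DEFAULTHOST=5060
SIP_DEFAULTHOST_SECURE=5061
SAS_SSL_SERVERAUTH_LISTENER_ADDRESS=9401
MAXIMUM_HEAP_SIZE=1024
MINIMUM_HEAP_SIZE=256
GenericJvmArguments=-Dcom.example.flag=true
MAXIMUM_THREAD_POOL=50
MINIMUM_THREAD_POOL=10
DebugArguments=
```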

############################################################################################################################

Script to configure a JDBC provider, datasource, J2C authentication alias, and connection pool settings at cluster-level scope

WRITTEN BY CHARANJEET SINGH

This script configures: 1. JDBC Provider 2. DataSource 3. JAASAuthData 4. Connection pool settings


import sys,java
from java.util import Properties
from java.io import FileInputStream
from org.python.modules import time
lineSep = java.lang.System.getProperty('line.separator')

def datasource(cluster,user,password,url,env,jdbc_driver,timeOut,maxConn,minConn,reapTime,unusdTimeout,agedTimeout):

    # Declare global variables
    global AdminConfig
    global AdminControl
    global AdminTask

    ## JDBCProvider ##
    name = "jdbcOracle" + env
    print " Name of JDBC Provider which will be created ---> " + name
    print " ----------------------------------------------------------------------------------------- "

    # Get the name of the cell
    cell = AdminControl.getCell()
    cellid = AdminConfig.getid('/Cell:'+ cell +'/')
    print " ----------------------------------------------------------------------------------------- "

    # Check for the existence of the cluster
    Serverid = AdminConfig.getid('/Cell:'+ cell +'/ServerCluster:'+ cluster +'/')
    print " Checking for existence of cluster :" + cluster
    if len(Serverid) == 0:
        print "Cluster does not exist "
    else:
        print "Cluster exists :"+ cluster
    print " ----------------------------------------------------------------------------------------- "


    ## Remove the old JDBC provider, if any, before creating a new one
    print " Checking for the existence of JDBC Provider :"+ name
    s2 = AdminConfig.getid('/Cell:'+ cell +'/ServerCluster:'+ cluster +'/JDBCProvider:'+ name)
    if len(s2) > 0:
        print " JDBC Provider exists with name :"+ name
        print " Removing JDBC Provider with name :"+ name
        AdminConfig.remove(s2)
        print " JDBC Provider removed "
        AdminConfig.save()
        print " Saving Configuration "
    print " ----------------------------------------------------------------------------------------- "

    ## Create the new JDBC provider ##
    print " Creating New JDBC Provider :"+ name
    n1 = ["name" , name ]
    desc = ["description" , "Oracle JDBC Driver"]
    impn = ["implementationClassName" , "oracle.jdbc.pool.OracleConnectionPoolDataSource"]
    classpath = ["classpath" , jdbc_driver ]
    attrs1 = [n1 , impn , desc , classpath]
    jdbc = AdminConfig.create('JDBCProvider' , Serverid , attrs1)
    print " New JDBC Provider created :"+ name
    AdminConfig.save()
    print " Saving Configuration "
    print " ----------------------------------------------------------------------------------------- "

    ## Check for an existing JAASAuthData entry; create it if absent,
    ## otherwise remove the matching entry and re-create it
    node = AdminControl.getNode()
    alias1 = node +"/"+ env
    print " Checking for the existence of JAASAuthData :"+ alias1
    jaasAuthDataList = AdminConfig.list("JAASAuthData")
    if len(jaasAuthDataList) == 0:
        print " Creating New JAASAuthData with Alias name :"+ alias1
        sec = AdminConfig.getid('/Cell:'+ cell +'/Security:/')
        alias_attr = ["alias" , alias1]
        desc_attr = ["description" , "alias"]
        userid_attr = ["userId" , user ]
        password_attr = ["password" , password]
        attrs = [alias_attr , desc_attr , userid_attr , password_attr ]
        authdata = AdminConfig.create('JAASAuthData' , sec , attrs)
        print " Created new JAASAuthData with Alias name :"+ alias1
        AdminConfig.save()
        print " Saving Configuration "
        print " ----------------------------------------------------------------------------------------- "
    else :
        matchFound = 0
        jaasAuthDataList = AdminConfig.list("JAASAuthData").split(lineSep)
        for jaasAuthId in jaasAuthDataList:
            getAlias = AdminConfig.showAttribute(jaasAuthId, "alias")
            if (cmp(getAlias,alias1) == 0):
                print " JAASAuthData exists with name :"+ alias1
                print " Removing JAASAuthData with name :"+ alias1
                AdminConfig.remove(jaasAuthId)
                print " JAASAuthData removed "
                AdminConfig.save()
                print " Saving Configuration "
                matchFound = 1
                break
        if (matchFound == 0):
            print " No match was found for the given JAASAuthData : "+ alias1
        #endIf
        print " ----------------------------------------------------------------------------------------- "

        ## J2C authentication entry: re-create the alias ##
        print " Creating New JAASAuthData with Alias name :"+ alias1
        sec = AdminConfig.getid('/Cell:'+ cell +'/Security:/')
        alias_attr = ["alias" , alias1]
        desc_attr = ["description" , "alias"]
        userid_attr = ["userId" , user ]
        password_attr = ["password" , password]
        attrs = [alias_attr , desc_attr , userid_attr , password_attr ]
        authdata = AdminConfig.create('JAASAuthData' , sec , attrs)
        print " Created new JAASAuthData with Alias name :"+ alias1
        AdminConfig.save()
        print " Saving Configuration "
    print " ----------------------------------------------------------------------------------------- "

    ## DataSource ##
    datasource = "DataSource"+ env
    print " Name of datasource which will be created on JDBC Provider :"+ name +" is :"+ datasource
    ds = AdminConfig.getid('/Cell:'+ cell +'/ServerCluster:'+ cluster +'/JDBCProvider:'+ name)
    name1 = ["name" , datasource]
    jndi = ["jndiName" , "jdbc/tiers3DS"]
    authentication = ["authDataAlias" , alias1]
    st_cachesize = ["statementCacheSize" , "150"]
    ds_hlpclass = ["datasourceHelperClassname" , "com.ibm.websphere.rsadapter.Oracle10gDataStoreHelper"]
    map_configalias_attr = ["mappingConfigAlias", "DefaultPrincipalMapping"]
    map_attrs = [authentication , map_configalias_attr]
    mapping_attr = ["mapping", map_attrs]
    ds_attr = [name1 , jndi , authentication , st_cachesize , ds_hlpclass , mapping_attr ]
    newds = AdminConfig.create('DataSource' , ds , ds_attr)
    print " New DataSource created with name :"+ datasource
    AdminConfig.save()
    print " Saving Configuration "
    print " ----------------------------------------------------------------------------------------- "

    ## Set the custom properties (here, the connection URL) for the datasource ##
    print " Setting the properties for DataSource :"+ datasource
    newds1 = AdminConfig.getid('/Cell:'+ cell +'/ServerCluster:'+ cluster +'/JDBCProvider:'+ name +'/DataSource:'+ datasource)
    propSet = AdminConfig.create('J2EEResourcePropertySet' , newds1 , "")
    name3 = ["name" , "URL"]
    type = ["type" , "java.lang.String"]
    required = ["required" , "true"]
    value = ["value" , url]
    rpAttrs = [name3 , type , required , value]
    jrp = AdminConfig.create('J2EEResourceProperty' , propSet , rpAttrs)
    print " Properties created for DataSource :"+ datasource
    AdminConfig.save()
    print " Saving Configuration "
    print " ----------------------------------------------------------------------------------------- "

    # Create an associated connection pool for the new DataSource #
    print " Creating Connection Pool Setting for DataSource :"+ datasource
    timeout = ["connectionTimeout" , timeOut]
    maxconn = ["maxConnections" , maxConn]
    minconn = ["minConnections" , minConn]
    reaptime = ["reapTime" , reapTime]
    unusedtimeout = ["unusedTimeout" , unusdTimeout]
    agedtimeout = ["agedTimeout" , agedTimeout]
    purgepolicy = ["purgePolicy" , "EntirePool"]
    connPoolAttrs = [timeout , maxconn , minconn , reaptime , unusedtimeout , agedtimeout , purgepolicy]
    AdminConfig.create("ConnectionPool", newds , connPoolAttrs)
    print " Connection Pool Setting created for DataSource :"+ datasource
    AdminConfig.save()
    print " Saving Configuration "
    print " ----------------------------------------------------------------------------------------- "


    ## Full synchronization ##
    print " Synchronizing configuration with Master Repository "
    nodelist = AdminTask.listManagedNodes().split(lineSep)
    for nodename in nodelist :
        print " Doing Full Resynchronization of node.......... "
        # Identify the ConfigRepository MBean and assign it to a variable
        repo = AdminControl.completeObjectName('type=ConfigRepository,process=nodeagent,node='+ nodename +',*')
        print AdminControl.invoke(repo, 'refreshRepositoryEpoch')
        sync = AdminControl.completeObjectName('cell='+ cell +',node='+ nodename +',type=NodeSync,*')
        print AdminControl.invoke(sync , 'sync')
        #time.sleep(20)
        print " ----------------------------------------------------------------------------------------- "
        print " Full Resynchronization completed "
        print " ----------------------------------------------------------------------------------------- "

    ## Restart each node agent so the changes take effect ##
    nodelist = AdminTask.listManagedNodes().split(lineSep)
    for nodename in nodelist :
        print " Restarting Nodeagent of "+ nodename +" node "
        na = AdminControl.queryNames('type=NodeAgent,node='+ nodename +',*')
        AdminControl.invoke(na,'restart','true true')
        print " ----------------------------------------------------------------------------------------- "
    time.sleep(30)

    ## Test the database connection ##
    dsid = AdminConfig.getid('/ServerCluster:'+ cluster +'/JDBCProvider:'+ name +'/DataSource:'+ datasource +'/')
    print " Testing Database Connection"
    print AdminControl.testConnection(dsid)
    print " ----------------------------------------------------------------------------------------- "
####################################################################################################################
####################################################################################################################

#main program starts here

arglen = len(sys.argv)
num_exp_args = 2
if (arglen != num_exp_args):
    print "Two arguments are required: a properties file and an environment name"
    print " ----------------------------------------------------------------------------------------- "
    sys.exit(-1)

propFile = sys.argv[0]
properties = Properties()
try:
    properties.load(FileInputStream(propFile))
    print " ----------------------------------------------------------------------------------------- "
    print "Successfully read property file " + propFile
    print " ----------------------------------------------------------------------------------------- "
except:
    print "Cannot read property file " + propFile
    sys.exit(-1)

print " ----------------------------------------------------------------------------------------- "

cluster = str(properties.getProperty("CLUSTER_NAME"))
env = sys.argv[1]
user = str(properties.getProperty("dbms.userId"))
password = str(properties.getProperty("dbms.password"))
url = str(properties.getProperty("dbms.url"))
jdbc_driver = str(properties.getProperty("JDBC_DRIVER_PATH"))
timeOut = int(properties.getProperty("TIMEOUT"))
maxConn = int(properties.getProperty("MAXCONN"))
minConn = int(properties.getProperty("MINCONN"))
reapTime = int(properties.getProperty("REAPTIME"))
unusdTimeout = int(properties.getProperty("UNUSEDTIMEOUT"))
agedTimeout = int(properties.getProperty("AGEDTIMEOUT"))

datasource(cluster,user,password,url,env,jdbc_driver,timeOut,maxConn,minConn,reapTime,unusdTimeout,agedTimeout)
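The main program above expects a properties file as its first argument and an environment name as its second. A sketch of such a file follows; the key names are the ones the script reads, and every value is an illustrative placeholder, not from the original post:

```properties
CLUSTER_NAME=cluster1
dbms.userId=appuser
dbms.password=secret
dbms.url=jdbc:oracle:thin:@dbhost:1521:ORCL
JDBC_DRIVER_PATH=/opt/oracle/ojdbc.jar
TIMEOUT=180
MAXCONN=30
MINCONN=5
REAPTIME=180
UNUSEDTIMEOUT=1800
AGEDTIMEOUT=0
```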

Script to get the cell name

This script prints the name of the cell:

import sys,java
from java.util import Properties
from org.python.modules import time
from java.io import FileInputStream

lineSep = java.lang.System.getProperty('line.separator')

global AdminApp
global AdminConfig
global AdminControl


# Getting config ID of cell

cell = AdminControl.getCell()

print " cell="+cell

Cluster Start and Stop Scripts

This script will start a cluster on WebSphere Application Server:
Script to start a cluster and check for the existence of an application
Written By Charanjeet Singh

import sys,java
from java.util import Properties
from java.io import FileInputStream
from org.python.modules import time
lineSep = java.lang.System.getProperty('line.separator')


def startcluster(cluster,appfile):

    global AdminApp
    global AdminConfig
    global AdminControl

    cell = AdminControl.getCell()
    print " Cell name is --> "+ cell

    # If the cluster is already running, ripple-start it; otherwise start it
    Cluster = AdminControl.completeObjectName('cell='+ cell +',type=Cluster,name='+ cluster +',*')
    state = AdminControl.getAttribute(Cluster, 'state')
    if (state == 'websphere.cluster.running'):
        print "Cluster --> " + cluster + " is running .......... "
        print "Ripple starting cluster ............."
        clusterMgr = AdminControl.completeObjectName('cell='+ cell +',type=ClusterMgr,*')
        print AdminControl.invoke(clusterMgr, 'retrieveClusters')
        Cluster = AdminControl.completeObjectName('cell='+ cell +',type=Cluster,name='+ cluster +',*')
        print AdminControl.invoke(Cluster ,'rippleStart')
    else:
        print "Cluster --> " + cluster + " is stopped "
        print "Starting cluster ............... "
        clusterMgr = AdminControl.completeObjectName('cell='+ cell +',type=ClusterMgr,*')
        AdminControl.invoke(clusterMgr, 'retrieveClusters')
        Cluster = AdminControl.completeObjectName('cell='+ cell +',type=Cluster,name='+ cluster +',*')
        print AdminControl.invoke(Cluster ,'start')

    print " ---------------------------------------------------------------------------------------------- "

    application = AdminConfig.getid("/Deployment:"+ appfile +"/")
    if len(application) > 0:
        print " Deployment completed successfully ........... "


arglen = len(sys.argv)
num_exp_args = 1
if (arglen != num_exp_args):
    print "One argument is required. This argument should be a properties file."
    print " ----------------------------------------------------------------------------------------- "
    sys.exit(-1)

propFile = sys.argv[0]
properties = Properties()
try:
    properties.load(FileInputStream(propFile))
    print " ----------------------------------------------------------------------------------------- "
    print "Successfully read property file " + propFile
    print " ----------------------------------------------------------------------------------------- "
except:
    print "Cannot read property file " + propFile
    sys.exit(-1)

print " ----------------------------------------------------------------------------------------- "


appfile = str(properties.getProperty("APPLICATION_NAME"))

cluster = str(properties.getProperty("CLUSTER_NAME"))

startcluster(cluster,appfile)

This script will stop a cluster:

Script to stop a cluster
Written By Charanjeet Singh

import sys,java
from java.util import Properties
from java.io import FileInputStream
from org.python.modules import time
lineSep = java.lang.System.getProperty('line.separator')


def stopcluster(cluster):

    global AdminApp
    global AdminConfig
    global AdminControl

    cell = AdminControl.getCell()
    print " Cell name is --> "+ cell

    # wsadmin returns the cluster members as a bracketed, space-separated list
    Serverid = AdminConfig.getid('/Cell:'+ cell +'/ServerCluster:'+ cluster +'/')
    memberlist = AdminConfig.showAttribute(Serverid, "members" )
    print " Cluster members are :"+ memberlist
    members = memberlist[1:len(memberlist)-1]

    for member in members.split():
        node = AdminConfig.showAttribute(member, "nodeName" )
        server = AdminConfig.showAttribute(member, "memberName" )
        serverId = AdminConfig.getid("/Cell:"+ cell +"/Node:"+ node +"/Server:"+ server +"/")
        s1 = AdminControl.completeObjectName('cell='+ cell +',node='+ node +',name='+ server +',type=Server,*')
        print " Checking for the running MBean of server :"+ server
        if len(s1) > 0:
            print " Server : "+ server +" is running"
            print " Stopping Server :"+ server
            AdminControl.stopServer(server, node, 'immediate' )
            print " Server : "+ server +" stopped"
        else :
            print " Server : "+ server +" is already stopped "

arglen = len(sys.argv)
num_exp_args = 1
if (arglen != num_exp_args):
    print "One argument is required. This argument should be a properties file."
    print " ----------------------------------------------------------------------------------------- "
    sys.exit(-1)

propFile = sys.argv[0]
properties = Properties()
try:
    properties.load(FileInputStream(propFile))
    print " ----------------------------------------------------------------------------------------- "
    print "Successfully read property file " + propFile
    print " ----------------------------------------------------------------------------------------- "
except:
    print "Cannot read property file " + propFile
    sys.exit(-1)

print " ----------------------------------------------------------------------------------------- "



cluster = str(properties.getProperty("CLUSTER_NAME"))

stopcluster(cluster)
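The stop script relies on `AdminConfig.showAttribute` returning the cluster members as a single bracketed, space-separated string. A minimal standalone sketch of just that parsing step, using a made-up member list (the config IDs are hypothetical):

```python
# Hypothetical value in the format wsadmin returns for a list attribute:
# "[item1 item2 ...]"
memberlist = "[cluster1(cells/myCell|cluster.xml#Member_1) cluster1(cells/myCell|cluster.xml#Member_2)]"

# Strip the surrounding brackets, then split on whitespace to get one
# config ID per cluster member
members = memberlist[1:len(memberlist) - 1]
ids = members.split()
print(len(ids))
```

Each element of `ids` can then be passed back to `showAttribute` to look up the member's node and server names, as the script does.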

Changing Node Agent Ports

This script will change the ports of the node agent:

WRITTEN BY CHARANJEET SINGH

This script changes the node agent ports, picking the port values from portsFile.properties

import sys,java
from java.util import Properties
from java.io import FileInputStream
from org.python.modules import time
lineSep = java.lang.System.getProperty('line.separator')

def change_port(bootstrap,orb,csiv2_multi,csiv2_server,dcs,drs,nda,nma1,nma2,sas,soap,host,node):

    global AdminApp
    global AdminConfig
    global AdminControl
    global AdminTask

    cell = AdminControl.getCell()

    # Map each node agent endpoint to its new port (note: the mutual-auth and
    # server-auth CSIv2 ports are paired with the matching endpoint names)
    endpoints = [
        ("BOOTSTRAP_ADDRESS" , bootstrap),
        ("ORB_LISTENER_ADDRESS" , orb),
        ("CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS" , csiv2_multi),
        ("CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS" , csiv2_server),
        ("DCS_UNICAST_ADDRESS" , dcs),
        ("DRS_CLIENT_ADDRESS" , drs),
        ("NODE_DISCOVERY_ADDRESS" , nda),
        ("NODE_IPV6_MULTICAST_DISCOVERY_ADDRESS" , nma1),
        ("NODE_MULTICAST_DISCOVERY_ADDRESS" , nma2),
        ("SAS_SSL_SERVERAUTH_LISTENER_ADDRESS" , sas),
        ("SOAP_CONNECTOR_ADDRESS" , soap),
    ]

    # Apply each endpoint/port pair to the node agent
    for endPointName, port in endpoints:
        portsDict = {}
        portsDict["nodeName"] = node
        portsDict["endPointName"] = endPointName
        portsDict["host"] = host
        portsDict["port"] = port
        portsDict["modifyShared"] = "true"
        AdminTask.modifyServerPort('nodeagent',
            ["-%s %s" % (key, value) for key, value in portsDict.items()])

    #--Saving Configuration--#
    AdminConfig.save()

    ## Synchronize each managed node so the new ports reach the node agents
    nodelist = AdminTask.listManagedNodes().split(lineSep)
    for nodename in nodelist :
        print " Synchronizing node.......... "
        # Identify the ConfigRepository MBean and assign it to a variable
        repo = AdminControl.completeObjectName('type=ConfigRepository,process=nodeagent,node='+ nodename +',*')
        print AdminControl.invoke(repo, 'refreshRepositoryEpoch')
        sync = AdminControl.completeObjectName('cell='+ cell +',node='+ nodename +',type=NodeSync,*')
        print AdminControl.invoke(sync , 'sync')
        print " ----------------------------------------------------------------------------------------- "
        print " Full Resynchronization completed "
        print " ----------------------------------------------------------------------------------------- "

if (len(sys.argv) != 13):

    print "You did not supply the correct number of arguments (13 expected)"

else:

    bootstrap = sys.argv[0]
    orb = sys.argv[1]
    csiv2_multi = sys.argv[2]
    csiv2_server = sys.argv[3]
    dcs = sys.argv[4]
    drs = sys.argv[5]
    nda = sys.argv[6]
    nma1 = sys.argv[7]
    nma2 = sys.argv[8]
    sas = sys.argv[9]
    soap = sys.argv[10]
    host = sys.argv[11]
    node = sys.argv[12]

    change_port(bootstrap,orb,csiv2_multi,csiv2_server,dcs,drs,nda,nma1,nma2,sas,soap,host,node)
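The `modifyServerPort` calls above build their argument list from a dict, turning each key/value pair into a "-name value" string. A minimal standalone sketch of just that list-building step, with made-up values:

```python
# Build "-name value" argument strings from a dict, as the script does
# before passing them to AdminTask.modifyServerPort
# (node name, host, and port here are illustrative)
portsDict = {}
portsDict["nodeName"] = "node01"
portsDict["endPointName"] = "SOAP_CONNECTOR_ADDRESS"
portsDict["host"] = "was1"
portsDict["port"] = 8879
portsDict["modifyShared"] = "true"

args = ["-%s %s" % (key, value) for key, value in portsDict.items()]
print(args)
```

Each dict entry becomes one element such as `-port 8879`, so the resulting list carries the same information as a flat command-line argument string.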