Wednesday, 28 November 2018

Configure Mule Management Console Backend as MS-SQL Database

Mule Management Console, also known as MMC, centralizes management and monitoring functions for all our on-premises Mule ESB Enterprise deployments, including instances running standalone, as a cluster, or embedded in application servers.

MMC, as an enterprise management and monitoring tool, is designed specifically for Mule on-premises instances. It provides all the functionality for managing and monitoring running on-premises Mule servers, Mule clusters, the applications deployed to those servers, and the flows within those applications. Another important feature is that it also provides ways of looking at specific transactions through pre-defined business events, as well as transactions in flight.

By default, MMC uses an internal Derby database to persist environment and transaction data. If we start our MMC application under an Apache Tomcat server, a folder named mmc-data is created under the bin folder, where MMC stores all the data needed to support its functionality.


If we enter that folder, we will see that the Derby database has been created alongside the other folders.


MMC version 3.4.2 and later supports an external database, which means we can connect to an external remote database instead of using the default internal Derby database. It is very easy to connect our MMC to an external database, and in this post we are going to demonstrate it in 5 simple steps.

Currently it can be connected to a number of external databases other than Derby.

Externalizing with MS-SQL Server:-

So, here we will see how to connect our MMC to an external MS-SQL Server.

Before we start, we can delete the mmc-data folder from the <MMC_HOME>/bin folder after taking a backup.

<MMC_HOME> is the directory where MMC is installed.

Step 1:- The first step is simple. Since we are connecting MMC to MS-SQL Server, we need the JDBC driver, which we copy into the <Mule install path>/apps/mmc/webapps/mmc/WEB-INF/lib folder.

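For example, assuming the Microsoft JDBC driver jar is named sqljdbc4.jar (the actual file name depends on the driver version you download):

$ cp sqljdbc4.jar <Mule install path>/apps/mmc/webapps/mmc/WEB-INF/lib/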

Step 2:- In the folder <MMC_HOME>/WEB-INF we need to edit web.xml. By default, the spring.profiles.active context parameter contains the string env-derby, which points to the default Derby configuration.

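By way of illustration, the relevant context parameter in a stock web.xml looks roughly like this (the exact surrounding content may vary by MMC version):

<context-param>
    <param-name>spring.profiles.active</param-name>
    <param-value>tracking-h2,env-derby</param-value>
</context-param>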

We need to replace env-derby with env-mssql. We also need to replace the string tracking-h2 with tracking-mssql so that transaction data is persisted to MS-SQL Server as well; the edited parameter is sketched below.
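
Assuming the same stock file as above, after both edits the parameter should read:

<context-param>
    <param-name>spring.profiles.active</param-name>
    <param-value>tracking-mssql,env-mssql</param-value>
</context-param>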

Step 3:- We create a database, say MMC_DB_3.8, in our MS-SQL Server, which MMC will connect to.

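This can be done from SQL Server Management Studio, or with a minimal T-SQL statement (the brackets are needed because of the dot in the database name):

CREATE DATABASE [MMC_DB_3.8];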

Step 4:- In the directory <MMC_HOME>/WEB-INF/classes/META-INF/databases, we need to locate the file mmc-mssql.properties and edit env.username, env.password, env.host, env.port and env.dbschema.

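The property names come from the stock file; the values below are placeholders for illustration only (1433 is the default SQL Server port):

env.username=sa
env.password=Passw0rd
env.host=localhost
env.port=1433
env.dbschema=MMC_DB_3.8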

Step 5:- For the final step, in the directory <MMC_HOME>/WEB-INF/classes/quartz there is a SQL file called tables_sqlServer.sql, and we execute it against our SQL Server database.

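One way to run it, assuming the placeholder credentials and database name from Step 4, is the sqlcmd utility:

$ sqlcmd -S localhost -U sa -P Passw0rd -d MMC_DB_3.8 -i tables_sqlServer.sql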

This will create the required tables in our database.

And that's it! We are done here.

If we start our Apache Tomcat server and open the MMC portal in a browser, we will see that the application has started successfully.


And now if we go back to the bin folder, we will see that the mmc-data folder has been created again, containing all the MMC-related data.


If we browse into that folder, we will find that no internal database folder has been created, since we have configured MMC with our MS-SQL Server; only the repository and workspace folders remain.


So, we can see how easy it is to configure the MMC application with an external MS-SQL Server database.

Hope you like the post, and please do share your feedback and experiences in the comments section below.

Thursday, 31 May 2018

DB2 Federation between two DB2 databases on the same instance

Step 1:
 
Go to db2 prompt
 
db2=>
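
Note: federation must be enabled on the instance, otherwise the CREATE SERVER and nickname steps below will fail. It can be switched on from the db2 prompt (an instance restart with db2stop/db2start is required afterwards):

update dbm cfg using FEDERATED YES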

Step 2: (Optional) Create database if necessary
 
create database Tom
 
create database Jerry

Step 3: Connect to the first database (Tom) and create a table (optional)
 
connect to tom
 
create table db2inst1.tab1 (cod1 int)
 
insert into db2inst1.tab1 values (1)

Step 4: Connect to the second database (Jerry) and create a table
 
connect to jerry
 
create table db2inst1.tab2 (name1 char)
 
insert into db2inst1.tab2 values ('a')

Step 5: Create a server definition and user mapping in the first database (Tom) to access the second database (Jerry)
connect to tom
 

create server fedserver TYPE DB2/UDB VERSION 10.5 WRAPPER drda AUTHORIZATION "db2inst1" PASSWORD "password" OPTIONS (DBNAME 'jerry')
 
create user mapping for db2inst1 SERVER fedserver OPTIONS (REMOTE_AUTHID 'db2inst1', REMOTE_PASSWORD 'password')

create nickname mytable for fedserver.db2inst1.tab2

(Note: the nickname points at Jerry's table tab2, since that is the remote table we want to query from Tom.)

Step 6: Execute a SELECT statement in the first database (Tom) to query data from the second database (Jerry)
 
select * from mytable
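
If everything is wired up correctly, this returns the row inserted into Jerry earlier, along the lines of:

NAME1
-----
a

  1 record(s) selected.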

--
Regards
Sandeep C

Wednesday, 24 January 2018

Users from LDAP nested groups don't appear in BPM Process Admin Console

BPM uses the WebSphere UserRegistry.getUsersForGroup() call to retrieve user members of nested groups.

In order to get nested members from the API call getUsersForGroup(), you need to add/set a custom property:

"com.ibm.ws.wim.adapter.ldap.returnNestedNonGroupMembers" with value "true".

In order to set this property, do the following:

1) Stop all the servers and node agents.

2) From the deployment manager bin directory, launch wsadmin:

# ./wsadmin.sh

Run the commands below:

$AdminTask setIdMgrCustomProperty { -id my_Ldap_Repository_Id -name com.ibm.ws.wim.adapter.ldap.returnNestedNonGroupMembers -value true}

$AdminConfig save
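
If you are unsure which repository id to pass as -id, you can list the configured repositories first with the standard IdMgr command:

$AdminTask listIdMgrRepositories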

3) Sync your nodes as needed.

4) Start your servers.


Thursday, 27 July 2017

Using a value file in a parameter set in Information Server DataStage

Question

How do I create and use a value file within a parameter set in DataStage?

Answer

Using a value file in a parameter set allows you to set values of a parameter dynamically. For Example:

  • Job A updates the value file. Job B uses a parameter set that points to that value file.
  • When moving jobs from development to test or production you can update the value file to reflect different values for the different environments without having to recompile the job.

To create a new Parameter Set, select File, New and select "Create new Parameter Set".

This will launch a dialog.

Fill out the appropriate information on the General tab and then proceed to the Parameters tab:




In this tab, enter the Parameters you wish to include in this Parameter Set. Note that you can also add existing environment variables.



The last tab, Values, allows you to specify a Value File name. This is the name of the file that will automatically be created on the Engine tier. This tab also allows you to view/edit values located in the value file.

Click OK to save the Parameter set.

Once the Parameter Set is created, you can view or edit the value file on the Engine tier. The value file can be found in the following location: ../Projects/<project name>/ParameterSets/<Parameter Set Name>. For example:
$ pwd
/opt/IBM/InformationServer/Server/Projects/PROJECT_NAME/ParameterSets/Param_test
$ ls
Param_test_data
$ more Param_test_data
Database=SYSIBM
Table=SYSTABLES
$
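
Because the value file is just a plain name=value text file, Job A (or any deployment script) can rewrite it before Job B runs. A minimal sketch, using the example project above and the placeholder values MYDB and MYTABLE:

$ cd /opt/IBM/InformationServer/Server/Projects/PROJECT_NAME/ParameterSets/Param_test
$ printf "Database=MYDB\nTable=MYTABLE\n" > Param_test_data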

Any changes made to the value file will be populated to the Parameter Set automatically.


A DataStage job does not use the new value that is put in the Parameter set.

Problem(Abstract)

A DataStage job does not use the new value that is put in the Parameter set.

Cause

If you make any changes to a parameter set object, these changes will be reflected in job designs that use this object up until the time the job is compiled. The parameters that a job is compiled with are the ones that will be available when the job is run (although if you change the design after compilation the job will once again link to the current version of the parameter set).

Diagnosing the problem

Examine the log entry "Environment variable settings" for parameter sets. If the parameter set specifies the value "(As predefined)", the parameter set is using the value that was used during the last compile.

Resolving the problem

If the value of the parameter set may be changed, you should specify a Value File for the parameters or set the parameters in the parameter set (including encrypted parameters) to $PROJDEF.


How to set default values for Environment Variables without re-compiling DataStage jobs

Question

Is it possible to set/change the default values for an environment variable without re-compiling the DataStage job?

Answer

Yes, it is possible to set/change the default values for an environment variable without recompiling the job.

You can manage all of your environment variables from the DataStage Administrator client. To do this follow these steps:

  1. Open the Administrator client and select the project you are working in and click Properties.
  2. On the General tab click Environment.
  3. Create a new environment variable under the "User Defined" section or update the variable if it already exists. Set the value of the variable to what you want the DataStage job to inherit.
  4. Once you do this, close the Administrator Client so the variable is saved.
  5. Next, open the DataStage Designer client and navigate to the Job Properties.
  6. Add the environment variable name that you just created in the DataStage Administrator Client.
  7. Set the value of the new variable to $PROJDEF. This will inherit whatever default value you have set in the Administrator client. This will also allow you to update that default value in the Administrator client without having to recompile the job.

Wednesday, 26 July 2017

How to fix missing and underreplicated blocks - HDFS

$ su - <$hdfs_user> 

$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files

$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 3 $hdfsfile; done

In the above command, 3 is the target replication factor; if you are running a single datanode, it must be 1. Note that this only fixes under-replicated blocks; blocks that are truly missing (no surviving replica) cannot be recovered by changing the replication factor.
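
Separately, fsck can list the files that still have corrupt or missing blocks, which setrep cannot repair:

$ hdfs fsck / -list-corruptfileblocks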