Thursday 13 April 2017

Regedit: Find and export keys from the command prompt, or export an ODBC Data Source from the registry

Open a command prompt and run the following commands to export the odbc.ini registry key. The first exports the 64-bit DSNs, the second the 32-bit DSNs (under Wow6432Node); write each to its own file, otherwise the second export overwrites the first:

regedit /e c:\output64.reg "HKEY_LOCAL_MACHINE\Software\ODBC\ODBC.INI"
regedit /e c:\output32.reg "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI"


Wednesday 12 April 2017

Install Log.io (to monitor logs in a GUI) on RHEL 7

Configure EPEL repository:

We will use npm to install Log.io, so let's first install npm and node.js, which are available in the EPEL repository. Install the EPEL RPM to set up the repository on RHEL 7.

rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm


Install Log.io:

Install npm and node.js using the yum command.

# yum install npm nodejs


npm is the package manager for JavaScript; it manages dependencies for applications and lets users install node.js applications from the npm registry. You must supply a username for the installation; in this example I used the "root" user.

# npm install -g log.io --user "root"


Configure Log.io:

Log.io's install directory is ~/.log.io, a hidden directory in the home directory of the user used in the previous step for the installation. It contains three configuration files that control how Log.io works.

harvester.conf

This is the configuration file for the harvester, which is essentially a log forwarder: it keeps watching log files for changes and sends new log lines to the server. Here we configure the node name, which logs to watch, and where to send them.

Edit the harvester file and set the node name. By default, the harvester is set to monitor only the Apache logs; we will modify it to monitor the IBM InfoSphere logs instead. If the server host is defined as 0.0.0.0, the harvester broadcasts logs to every listening Log.io server, so it is recommended to set either 127.0.0.1 (if the same machine acts as the Log.io server) or the IP address of the remote Log.io server.




# vi ~/.log.io/harvester.conf

exports.config = {
  nodeName: "iisserver",
  logStreams: {
    iisaudit: [
      "/opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/ISauditLog_0.log"
    ],
    igclog: [
      "/opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/igc.log"
    ],
    systemout: [
      "/opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/server1/SystemOut.log"
    ]
  },
  server: {
    host: '35.161.33.163',
    port: 28777
  }
}


log_server.conf


This is the configuration file of the Log.io server; it tells the server which IP address to listen on. By default, it listens on all interfaces for receiving logs from clients. The listening address can be changed by modifying the host string.

# vi  ~/.log.io/log_server.conf


exports.config = {
  host: '0.0.0.0', // Listen on all interfaces for receiving logs
  port: 28777
}


web_server.conf

This is the configuration file of the web interface; it alters the behavior of the web portal. By default, the web portal is accessible on port 28778 on all interfaces. This file offers ways to increase security: HTTP authentication, securing the web interface with SSL, disallowing logs from specific IP addresses, and restricting web interface access to specific IPs.

# vi  ~/.log.io/web_server.conf


exports.config = {
  host: '0.0.0.0', // Listen on all interfaces for web interface requests
  port: 28778,

  auth: {
    user: "admin",
    pass: "1234"
  },

  /*
  // Restrict access to websocket (socket.io)
  // Uses socket.io 'origins' syntax
  restrictSocket: '*:*',
  */

  /*
  // Restrict access to http server (express)
  restrictHTTP: [
    "192.168.29.39",
    "10.0.*"
  ]
  */
}


Start the services

log.io-server &
log.io-harvester &

Stop the services
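The post does not list the stop commands. Since both processes were started in the background, one common way to stop them, as a sketch (assuming pkill is available and the process names match the commands above), is:

```shell
# Stop the Log.io background processes started above.
# "|| true" keeps the commands from failing when no matching process exists.
pkill -f log.io-server || true
pkill -f log.io-harvester || true
```

If you prefer, capture the PIDs when starting (log.io-server & echo $! > server.pid) and use kill on those instead.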


Monitor server logs:

Open your web browser and visit http://your-ip-address:28778. You will see a page like the following, with the logs streaming in.


[Screenshot: Log.io monitoring logs]





Tuesday 11 April 2017

Configure IS Auditing log to get user login and logout info

Open the config file in the below location.

vim /opt/IBM/InformationServer/ASBServer/apps/deploy/ISauditing.properties

Update the value from "ALL" to "INFO" for the two parameters below:

com.ibm.iis.isf.audit.event.LOGIN = INFO

com.ibm.iis.isf.audit.event.LOGOUT = INFO
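If you prefer to script the change, a sed helper can flip both parameters in place. This is a sketch: the property names and file path come from the post, but back up the file and verify the exact key spelling in your installation first.

```shell
# Set the LOGIN/LOGOUT audit parameters to INFO in a properties file.
# Pass the properties file path as $1.
set_audit_info() {
  sed -i \
    -e 's/^\(com\.ibm\.iis\.isf\.audit\.event\.LOGIN\)[[:space:]]*=.*/\1 = INFO/' \
    -e 's/^\(com\.ibm\.iis\.isf\.audit\.event\.LOGOUT\)[[:space:]]*=.*/\1 = INFO/' \
    "$1"
}

# Usage (path from the post):
# set_audit_info /opt/IBM/InformationServer/ASBServer/apps/deploy/ISauditing.properties
```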


Restart the IIS services

Now we can see the user info in ISauditLog_0.log, present in the log location <IS installation location>/WebSphere/AppServer/profiles/InfoSphere/logs



Monday 10 April 2017

Back up a DB2 database and redirect the restore to a different location

If you want to clone a database on the same server or on another compatible server, the fastest way is the backup & restore method.

However, when using the plain restore command without further options, you may run into problems such as non-existent directories, invalid file systems, or an existing database with the same name. In cases like this you can use RESTORE ... REDIRECT.

Let's look at a use case:

The DBA needs to clone the ATOP_STG database, creating an ATOP_TST database as a clone of the first. So the first step is to perform a backup:

$ db2 backup db ATOP_STG to /db/db2data/instdevp/backups compress
Backup successful. The timestamp for this backup image is : 20090201091813

In the example above we used the "to" option so we could specify a file system with enough storage available for this backup; we also chose the compress option to save storage space.

Now that we have the backup ready for use, let's start the restore. A redirected restore is best done as a 3-step process:

1. Create the restore script using the RESTORE's generate script option

db2 "restore db atop_stg 
     from /db/db2data/instdevp/backups 
     taken at 20090131 
     into atop_tst 
     redirect 
     generate script new_db.txt"

In the example above we asked for:

  • Restore ATOP_STG's backup image (db2 restore db atop_stg)
  • The backup image is located at /db/db2data/instdevp/backups (from /db/db2data/instdevp/backups)
  • The backup timestamp used was 20090131 (taken at 20090131)
  • The backup image should be restored as ATOP_TST (into atop_tst)
  • To restore the ATOP_STG database's files to a location other than the original, we requested a REDIRECT (redirect)
  • To ease the process, DB2 will create a script called new_db.txt, which we can edit and run later on (generate script new_db.txt)

 

2. Edit the new_db.txt script using vi or another text editor, depending on your operating system.
If you are not familiar with vi commands, I suggest downloading the file to your machine, editing it there, and sending it back.

The file's sample content will look like this one below:

RESTORE DATABASE ATOP_STG 
-- USER <username> 
-- USING '<password>' 
FROM '/db/db2data/instdevp/backups' 
TAKEN AT 20090131070258 
-- ON '/db/db2data/instdevp' 
-- DBPATH ON '<target-directory>' 
INTO ATOP_TST 
-- NEWLOGPATH '/db/db2data/instdevp/instdevp/NODE0000/SQL00002/SQLOGDIR/'
-- WITH <num-buff> BUFFERS 
-- BUFFER <buffer-size> 
-- REPLACE HISTORY FILE 
-- REPLACE EXISTING 
REDIRECT 
-- PARALLELISM <n> 
-- COMPRLIB '<lib-name>' 
-- COMPROPTS '<options-string>' 
WITHOUT ROLLING FORWARD 
-- WITHOUT PROMPTING 
; 
...
SET TABLESPACE CONTAINERS FOR 3 
USING ( PATH 
'/db/db2data/instdevp/instdevp/NODE0000/ATOP_STG/TS/SYSTOOLSPACE'
); 
...
RESTORE DATABASE ATOP_STG CONTINUE; 

The options below should be changed wherever needed, for example:

RESTORE DATABASE ATOP_STG
-- Indicates where the backup image file is located
FROM '/db/db2data/instdevp/backups'
-- Indicates the backup image's timestamp
TAKEN AT 20090131070258
-- Indicates the file system where the database will be created
ON '/db/db2data/instdevp'
-- Indicates the database name to be restored as
INTO ATOP_TST
-- Indicates the log file location to be used for the new database
NEWLOGPATH '/db/db2data/instdevp/instdevp/NODE0000/SQL00001/SQLOGDIR/'
-- Indicates the existing database should be replaced by the restore
REPLACE EXISTING
-- Indicates the tablespaces will be created in a different location
REDIRECT;
..
SET TABLESPACE CONTAINERS FOR 3
USING ( PATH
-- Specify a new tablespace location
'/db/db2data/instdevp/instdevp/NODE0000/ATOP_TST/TS/SYSTOOLSPACE');

If you are replacing an existing database and want to keep the existing log path for that database, just check the log path of the target database using
db2 get db cfg for DB_NAME | grep LOG.
Example: db2 get db cfg for ATOP_TST | grep LOG

 

3. After your script has been changed and saved, you just need to run it:
db2 -tvf new_db.txt


 

Some problems are well known and simple to fix:

Problem:

SQL5099N 
The value "/db/db2data/instdevp/instdevp/NODE0000/SQL00001/SQLOGDIR/"
indicated by the database configuration parameter "NEWLOGPATH" is
not valid, reason code "4". SQLSTATE=08004

Solution: Go to the directory specified in the error message, and delete all existing log files.

-- Abort the restore:
db2 restore database ATOP_STG abort

-- Go to the specified directory and remove the files:
cd /db/db2data/instdevp/instdevp/NODE0000/SQL00001/SQLOGDIR/
rm *

-- Go back and restart the restore:
cd /db/db2data/instdevp/backups
db2 -tvf new_db.txt


 
 

Problem:

SQL1035N The database is currently in use. SQLSTATE=57019

Solution: Force all users off from the target database.

Example:

  1. First find out which users are connected to the target database:

     db2 list applications for db ATOP_TST

  2. The output of the command above will look like this:

     Auth Id  Application Name  Appl. Handle  Application Id   DB Name   # of Agents
     -------- ----------------  ------------  ---------------  --------  -----------
     INSTDEVP Winsql.exe        5497          9.8.0.135.19972  ATOP_TST  1

  3. Now force the user off:

     db2 "force application (5497)"

     Be aware that you can use db2 force application all; however, this forces off all connections in the instance, not only those for that database.

  4. Resubmit your script:

     db2 -tvf new_db.txt

If you want to follow up on the restore, just open another session and use the command db2 list utilities show detail. This command can be used for both backup and restore operations, among other utilities such as LOAD.

After the restore has finished, whether it took a few minutes or hours, please double-check the connection to the database. In this case I always add a connect command at the end of the script: CONNECT TO ATOP_TST



--
Regards
Sandeep C