Thursday 27 July 2017

Using a value file in a parameter set in Information Server DataStage

Question

How do I create and use a value file within a parameter set in DataStage?

Answer

Using a value file in a parameter set allows you to set parameter values dynamically. For example:

  • Job A updates the value file. Job B uses a parameter set that points to that value file.
  • When moving jobs from development to test or production, you can update the value file to reflect different values for the different environments without having to recompile the job, as sketched in the example below.
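If you drive jobs from the command line, the value file (or individual values) can typically be passed at run time with dsjob. A minimal sketch, assuming a job named MyJob in project PROJECT_NAME that uses the parameter set Param_test shown later in this post (names are illustrative, and option syntax can vary slightly between Information Server versions):

$ dsjob -run -param Param_test=Param_test_data -jobstatus PROJECT_NAME MyJob

Here Param_test_data is the value file name, so the job picks up whatever values that file contains at run time.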

To create a new Parameter Set, select File > New and choose "Create new Parameter Set".

This will launch a dialog.

Fill out the appropriate information on the General tab and then proceed to the Parameters tab:




In this tab, enter the parameters you wish to include in this Parameter Set. Note that you can also add existing environment variables.



The last tab, Values, allows you to specify a Value File name. This is the name of the file that will automatically be created on the Engine tier. This tab also allows you to view/edit values located in the value file.

Click OK to save the Parameter set.

Once the Parameter Set is created, you can view or edit the Value file on the Engine tier. The value file can be found in the following location ../Projects/<project name>/ParameterSets/<Parameter Set Name>. For example:
$ pwd
/opt/IBM/InformationServer/Server/Projects/PROJECT_NAME/ParameterSets/Param_test
$ ls
Param_test_data
$ more Param_test_data
Database=SYSIBM
Table=SYSTABLES
$

Any changes made to the value file are automatically reflected in the Parameter Set.


A DataStage job does not use the new value that is put in the Parameter set.

Problem(Abstract)

A DataStage job does not use the new value that is put in the Parameter set.

Cause

If you make any changes to a parameter set object, these changes will be reflected in job designs that use this object up until the time the job is compiled. The parameters that a job is compiled with are the ones that will be available when the job is run (although if you change the design after compilation the job will once again link to the current version of the parameter set).

Diagnosing the problem

Examine the log entry "Environment variable settings" for parameter sets. If the parameter set specifies the value "(As predefined)", the parameter set is using the value that was used during the last compile.

Resolving the problem

If the values in the parameter set may change, specify a Value File for the parameters, or set the parameters in the parameter set (including encrypted parameters) to $PROJDEF.


How to set default values for Environment Variables without re-compiling DataStage jobs

Question

Is it possible to set/change the default values for an environment variable without re-compiling the DataStage job?

Answer

Yes, it is possible to set or change the default value of an environment variable without recompiling the job.

You can manage all of your environment variables from the DataStage Administrator client. To do this follow these steps:

  1. Open the Administrator client and select the project you are working in and click Properties.
  2. On the General tab click Environment.
  3. Create a new environment variable under the "User Defined" section, or update the variable if it already exists. Set the value of the variable to what you want the DataStage job to inherit (see the command-line sketch after these steps).
  4. Once you do this, close the Administrator Client so the variable is saved.
  5. Next, open the DataStage Designer client and navigate to the Job Properties.
  6. Add the environment variable name that you just created in the DataStage Administrator Client.
  7. Set the value of the new variable to $PROJDEF. This will inherit whatever default value you have set in the Administrator client. This will also allow you to update that default value in the Administrator client without having to recompile the job.
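If you prefer to script this instead of using the Administrator GUI, the dsadmin command can usually set the same project-level defaults. A minimal sketch, assuming a user-defined variable named MY_VAR in project PROJECT_NAME (names and values are illustrative; option syntax may vary by Information Server version):

$ cd /opt/IBM/InformationServer/Server/DSEngine
$ . ./dsenv
$ ./bin/dsadmin -envadd MY_VAR -type STRING -prompt "My variable" -value dev_value PROJECT_NAME
$ ./bin/dsadmin -envset MY_VAR -value test_value PROJECT_NAME

Jobs that reference MY_VAR with a default of $PROJDEF will pick up the new project default the next time they run, without recompilation.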

Wednesday 26 July 2017

How to fix missing and underreplicated blocks - HDFS

$ su - <$hdfs_user> 

$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files

$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 3 $hdfsfile; done

In the above command, 3 is the replication factor. If you are running a single DataNode, it must be 1.
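The commands above handle under-replicated blocks, where at least one healthy copy still exists. Missing or corrupt blocks have no copy left to replicate from, so the options are to restore the affected files from a backup or, as a last resort, delete them. A hedged sketch for locating (and, if unavoidable, removing) such files:

$ hdfs fsck / -list-corruptfileblocks
$ hdfs fsck /path/to/file -files -blocks -locations
$ hdfs fsck / -delete    # deletes the corrupt files - this is permanent data loss, use only as a last resort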


 




Friday 2 June 2017

How to Get Started With Cloudera Docker

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu 16.04 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

But first, let's update the package database:

  • sudo apt-get update

Now let's install Docker. Add the GPG key for the official Docker repository to the system:
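For example (a hedged sketch; the key ID shown is the commonly documented signing key for the legacy docker-engine packages and should be verified against Docker's documentation):

  • sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D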

Add the Docker repository to APT sources:
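For example (a hedged sketch; this assumes add-apt-repository is available, and uses the apt.dockerproject.org repository that appears in the apt-cache output further down):

  • sudo add-apt-repository "deb https://apt.dockerproject.org/repo ubuntu-xenial main"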


Update the package database with the Docker packages from the newly added repo:

  • sudo apt-get update

Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:

  • apt-cache policy docker-engine


You should see output similar to the following:

Output of apt-cache policy docker-engine
docker-engine:
  Installed: (none)
  Candidate: 1.11.1-0~xenial
  Version table:
     1.11.1-0~xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
     1.11.0-0~xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages

Notice that docker-engine is not installed, but the candidate for installation is from the Docker repository for Ubuntu 16.04. The docker-engine version number might be different.

Finally, install Docker:

  • sudo apt-get install -y docker-engine


Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

  • sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:



Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.


Step 2 — Pull the Cloudera Docker image

docker pull cloudera/quickstart:latest






Step 3 — Start or run the Cloudera Docker image


docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888:8888 -p 80:80 cloudera/quickstart /usr/bin/docker-quickstart



Step 4 — Browse Hue at http://localhost:8888 and optionally start additional services.
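To get a shell inside the running container (for example, to start additional services or inspect logs), the standard Docker commands below can be used; the container ID comes from docker ps:

docker ps
docker exec -it <container id> bash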

Thursday 1 June 2017

Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again

  • Add the EPEL repo and update:

rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum update

If yum complains that

Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again

Then try to run following command to fix it:

yum upgrade ca-certificates --disablerepo=epel
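If the error still appears, another commonly used workaround is to switch the EPEL repo definition from the https metalink to plain http; a hedged sketch that edits /etc/yum.repos.d/epel.repo in place:

sed -i "s/mirrorlist=https/mirrorlist=http/" /etc/yum.repos.d/epel.repo
yum clean all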

Wednesday 17 May 2017

MySQL root user initial password reset

grep 'temporary password' /var/log/mysqld.log
Sort the output by date and use the newest entry.

You may see something like this:

[root@SERVER ~]# grep 'temporary password' /var/log/mysqld.log
2016-01-16T18:07:29.688164Z 1 [Note] A temporary password is generated for root@localhost: O,k5.marHfFu
2016-01-22T13:14:17.974391Z 1 [Note] A temporary password is generated for root@localhost: b5nvIu!jh6ql
2016-01-22T15:35:48.496812Z 1 [Note] A temporary password is generated for root@localhost: (B*=T!uWJ7ws
2016-01-22T15:52:21.088610Z 1 [Note] A temporary password is generated for root@localhost: %tJXK7sytMJV
2016-01-22T16:24:41.384205Z 1 [Note] A temporary password is generated for root@localhost: lslQDvgwr3/S
2016-01-22T22:11:24.772275Z 1 [Note] A temporary password is generated for root@localhost: S4u+J,Rce_0t
[root@SERVER ~]# mysql_secure_installation

Securing the MySQL server deployment.

Enter password for user root:
The existing password for the user account root has expired. Please set a new password.
New password:
Re-enter new password:

If you see:

 ... Failed! Error: Your password does not satisfy the current policy requirements

it means the new password must satisfy the validate_password policy: mix upper case, lower case, digits, and special characters such as ! . , # well.

New password:
Re-enter new password:
The 'validate_password' plugin is installed on the server. The subsequent steps will run with the existing configuration of the plugin.
Using existing password for root.
Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : Y
New password:
Re-enter new password:
Estimated strength of the password: 100
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : Y

By default, a MySQL installation has an anonymous user, allowing anyone to log into MySQL without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Y
Success.

Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : Y
Success.

By default, MySQL comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Y
 - Dropping test database... Success.
 - Removing privileges on test database... Success.

Reloading the privilege tables will ensure that all changes made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Y
Success.

All done!

[root@SERVER ~]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.7.10 MySQL Community Server (GPL)
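If you prefer to reset the expired root password directly from the mysql client instead of running mysql_secure_installation, a minimal sketch for MySQL 5.7 (the password shown is only an illustration and must satisfy the validate_password policy):

[root@SERVER ~]# mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'N3w-Str0ng!Pass';
mysql> FLUSH PRIVILEGES;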

Configuring Postfix as a Gmail Relay on CentOS



This tutorial should work on any distro based on RedHat, but I have only tested it on CentOS 6.4.
You will need to run the commands as root.

I found a ton of how-tos and tutorials on how to set up Postfix as a Gmail relay, but most of them required making a client certificate or were incomplete. After fighting with getting Postfix set up on CentOS 6.1, and browsing the internet for many, many days, I finally got it working. Part of the configuration is based on the pages found here and here.

Installing Postfix

Installing Postfix is easy, just run this command as root:

yum install postfix mailx cyrus-sasl-plain

Thanks to Jonathan for pointing that out.

Configuring

Basically, you need to create a password file so that Postfix can authenticate to Gmail's servers. You do this by creating a file named sasl_passwd in /etc/postfix. Replace smtp_user and smtp_passwd with their respective values.

echo "smtp.gmail.com    smtp_user:smtp_passwd" > /etc/postfix/sasl_passwd

You then hash that file so that the password is not stored in clear text. This command will create a file named sasl_passwd.db in the /etc/postfix/ directory.

postmap hash:/etc/postfix/sasl_passwd
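Since sasl_passwd and the generated sasl_passwd.db contain your Gmail credentials, it is also worth restricting their permissions; this extra hardening step is not strictly required for the relay to work:

chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db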

After that is done, add these to the bottom of /etc/postfix/main.cf. This is assuming that your root certificates installed from openssl are located in /etc/pki/tls/certs/ca-bundle.crt.

smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
# Secure channel TLS with exact nexthop name match.
smtp_tls_security_level = secure
smtp_tls_mandatory_protocols = TLSv1
smtp_tls_mandatory_ciphers = high
smtp_tls_secure_cert_match = nexthop
smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
relayhost = smtp.gmail.com:587

After that is done, restart postfix:

service postfix restart

Now test it to make sure it is working. Run this:

mail email@domain

Fill in the subject, put something in the body and then type a . and hit enter.

If all went well, you should get an email at the email address you entered. If you do, you can delete the file that has the password.

rm /etc/postfix/sasl_passwd

If it did not work, check the log to see what happened.

tail /var/log/maillog

Everything should be good after you get everything set up, so enjoy your new SMTP relay!


Monday 15 May 2017

How to unlock an Oracle user account when it expires and set the password lifetime to unlimited

I have written an article on how to unlock a user account when it expires; the unlock itself boils down to a single statement, sketched below for reference.

But every 180 days you need to repeat the same action. If you are NOT concerned with strict security rules for your database, you can take the approach described in the rest of this post to set user accounts to never expire.
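A minimal sketch of the unlock, run as a DBA in SQL*Plus (the user name and new password are illustrative):

SQL> ALTER USER OIM_OIM IDENTIFIED BY New_Passw0rd ACCOUNT UNLOCK;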

What Profile Is Used by a User Account?

A profile is a database object: a named set of resource limits. Using a profile, you can enforce limits on resource utilization via resource limit parameters, and you can also maintain database security by using the password management features. Here is the SQL command you can use to query which profile is used by each user account:

SQL>  SELECT USERNAME, PROFILE FROM DBA_USERS;

USERNAME                       PROFILE
—————————— ——————————

OAM_OAM                        DEFAULT
OAM_IAU_APPEND                 DEFAULT
OAM_IAU_VIEWER                 DEFAULT
OAM_IAU                        DEFAULT
OIM_SOAINFRA                   DEFAULT
OIM_ORASDPM                    DEFAULT
OIM_MDS                        DEFAULT
OIM_OIM                        DEFAULT
As shown above, both the OAM and OIM user accounts use the "DEFAULT" profile.

What Limits Set with a Profile?

We are only interested in the "DEFAULT" profile and resources of the PASSWORD type. To query the limits imposed by the "DEFAULT" profile, run the following query:
SQL> select resource_name, limit from dba_profiles where profile='DEFAULT' and resource_type='PASSWORD';

OR

SQL> select resource_name, limit from dba_profiles;

RESOURCE_NAME                    LIMIT
——————————– —————————————-
FAILED_LOGIN_ATTEMPTS            10
PASSWORD_LIFE_TIME               180
PASSWORD_REUSE_TIME              UNLIMITED
PASSWORD_REUSE_MAX               UNLIMITED
PASSWORD_VERIFY_FUNCTION         NULL
PASSWORD_LOCK_TIME               1
PASSWORD_GRACE_TIME              7

As shown above, the passwords for all our OAM and OIM user accounts will expire after 180 days. However, we would like them to never expire.

How to Set User Password to Never Expire?

Here is the alter statement that you can use:
SQL> ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
Profile altered.

The above command sets the password lifetime associated with the "DEFAULT" profile to unlimited. You can verify the setting with:

SQL> select resource_name, limit from dba_profiles where profile='DEFAULT' and resource_type='PASSWORD';

RESOURCE_NAME                    LIMIT
——————————– —————————————-
FAILED_LOGIN_ATTEMPTS            10
PASSWORD_LIFE_TIME               UNLIMITED
PASSWORD_REUSE_TIME              UNLIMITED
PASSWORD_REUSE_MAX               UNLIMITED
PASSWORD_VERIFY_FUNCTION         NULL
PASSWORD_LOCK_TIME               1
PASSWORD_GRACE_TIME              7
7 rows selected.


Wednesday 3 May 2017

The application experienced an internal error loading the SSL libraries


Document ID:  TEC1504286

Last Modified Date:  09/29/2016 

Problem: 

When downloading a Microsoft document attachment on ServiceDesk SSL system, the following error pops up:
"The application experienced an internal error loading the SSL libraries"


Environment:  

ServiceDesk 12.x-14.x
Microsoft Windows 7 - 8
Microsoft Internet Explorer

Cause: 

The error is generated by Microsoft and is caused by a Microsoft Windows update.

Resolution/Workaround:

Microsoft has addressed this error.

Please review one of the following documents:

1. https://social.technet.microsoft.com/Forums/Sharepoint/en-US/78f64ac7-e68c-4171-b940-98f68e27279c/the-application-experienced-an-internal-error-loading-the-ssl-libraries-word-2013?forum=officeitpro

2. https://blogs.technet.microsoft.com/office_integration__sharepoint/2015/04/20/office-2013-error-certificate-error-the-application-experienced-an-internal-error-loading-the-ssl-libraries-when-opening-files/

Additional Information:

 

The error does not occur when using Firefox or Chrome.


Thursday 13 April 2017

Regedit: Find and export keys from command prompt or Export an ODBC Data Source from the registry

Open a command prompt and issue the following commands to export the ODBC data sources from the registry. Use different output file names so the second export does not overwrite the first; on 64-bit Windows, the Wow6432Node key holds the 32-bit ODBC data sources.

regedit /e c:\odbc_64bit.reg "HKEY_LOCAL_MACHINE\Software\ODBC\ODBC.INI"
regedit /e c:\odbc_32bit.reg "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI"
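To load an exported file back into the registry on another machine, regedit can import it silently from the command prompt; a minimal sketch using the 64-bit file from the export above:

regedit /s c:\odbc_64bit.reg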


Wednesday 12 April 2017

Install Log.io (to monitor logs in a GUI) on RHEL 7

Configure EPEL repository:

We will use npm to install Log.io, so let's install npm and node.js, which are available in the EPEL repository. Install the EPEL rpm to set up the repository on RHEL 7.

rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm


Install Log.io:

Install npm and node.js using yum command.

# yum install npm nodejs


npm is the package manager for JavaScript; it manages application dependencies and lets users install node.js applications from the npm registry. You must supply a username for the installation; for example, here I used the "root" user.

# npm install -g log.io --user "root"


Configure Log.io:

Log.io's install directory is ~/.log.io, a hidden directory in the home directory of the user used in the previous step for the installation. It has three configuration files that control how it works.

harvester.conf

This is the configuration file for the harvester. The harvester is a log forwarder that watches log files for changes and sends new log entries to the server. Here we can configure the node name, which logs to watch, and where to send them.

Edit the harvester file and set the node name. By default, the harvester is set to monitor only the Apache log; in the example below we modify it to monitor the InfoSphere/WebSphere logs instead. If the server host is defined as 0.0.0.0, the harvester broadcasts logs to every listening Log.io server, so it is recommended to set it to either 127.0.0.1 (if the same machine acts as the Log.io server) or the IP address of the remote Log.io server.




# vi ~/.log.io/harvester.conf

exports.config = {
  nodeName: "iisserver",
  logStreams: {
    iisaudit: [
      "/opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/ISauditLog_0.log"
    ],
    igclog: [
      "/opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/igc.log"
    ],
    systemout: [
      "/opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/server1/SystemOut.log"
    ]
  },
  server: {
    host: '35.161.33.163',
    port: 28777
  }
}


log_server.conf


This is the configuration file of the Log.io server; it tells the server which IP address to listen on. By default, it listens on all interfaces for receiving logs from clients. The listening IP address can be changed by modifying the host string.

# vi ~/.log.io/log_server.conf

exports.config = {
  host: '0.0.0.0', // listen on all interfaces for receiving logs
  port: 28777
}


web_server.conf

This is the configuration file of the web interface; it controls the behavior of the web portal. By default, the web portal is accessible on port 28778 on all interfaces. This file offers ways to increase security: HTTP authentication, securing the web interface with SSL, disallowing logs from specific IP addresses, and restricting web interface access to specific IPs.

# vi ~/.log.io/web_server.conf

exports.config = {
  host: '0.0.0.0', // listen on all interfaces for web interface requests
  port: 28778,

  auth: {
    user: "admin",
    pass: "1234"
  },

  /*
  // Restrict access to websocket (socket.io)
  // Uses socket.io 'origins' syntax
  restrictSocket: '*:*',
  */

  /*
  // Restrict access to http server (express)
  restrictHTTP: [
    "192.168.29.39",
    "10.0.*"
  ]
  */
}


Start the services:

log.io-server &
log.io-harvester &

Stop the services:
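One way to stop them, assuming the processes were started in the background as above (a hedged sketch):

pkill -f log.io-server
pkill -f log.io-harvester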


Monitor server logs:

Open up your web browser and visit http://your-ip-address:28778. You will get the following page with logs.


(Screenshot: Log.io monitoring logs)





Tuesday 11 April 2017

Configure IS Auditing log to get user login and logout info

Open the config file in the below location.

vim /opt/IBM/InformationServer/ASBServer/apps/deploy/ISauditing.properties

Update the value from "ALL" to "INFO" for the two parameters below:

1)com.ibm.iis.isf.audit.event.LOGIN = INFO

2)com.ibm.iis.isf.audit.event.LOGOUT = INFO


Restart the IIS services

Now we can see the user login/logout info in ISauditLog_0.log, located in <IS installation location>/WebSphere/AppServer/profiles/InfoSphere/logs.
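To watch the login and logout events as they are written, something like the following can be used (the full path matches the one referenced in the Log.io post above; adjust it to your installation):

tail -f /opt/IBM/WebSphere/AppServer/profiles/InfoSphere/logs/ISauditLog_0.log | grep -iE 'LOGIN|LOGOUT'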



Monday 10 April 2017

Backup a DB2 database and restore redirect it to a different location

If you want to clone a database on the same server or on another compatible server, the fastest way is the backup & restore method.

However, when using the plain restore command without further options, you may run into problems such as non-existent directories, invalid file systems, or an existing database with the same name. In cases like this you can use RESTORE with the REDIRECT option.

Let's look at a use case:

The DBA needs to clone the ATOP_STG database and create an ATOP_TST database as a clone of the first one. So the first step is to perform a backup:

$ db2 backup db ATOP_STG to /db/db2data/instdevp/backups compress
Backup successful. The timestamp for this backup image is : 20090201091813

In the example above we used the "to" option so we could specify a file system with enough storage available for this backup. We also chose the compress option to save storage space.

Now that we have the backup ready for use, let's start the restore. When doing a redirected restore, the best approach is a three-step process:

1. Create the restore script using the RESTORE's generate script option

db2 "restore db atop_stg 
     from /db/db2data/instdevp/backups 
     taken at 20090131 
     into atop_tst 
     redirect 
     generate script new_db.txt"

In the example above we asked for:

  • Restore of ATOP_STG's backup image (db2 restore db atop_stg)
  • The backup image is located at /db/db2data/instdevp/backups (from /db/db2data/instdevp/backups)
  • The backup timestamp used was 20090131. (taken at 20090131)
  • The backup image should be restored into ATOP_TST. (into atop_tst)
  • To restore the ATOP_STG database's files to a location other than the original, we requested a REDIRECT. (redirect)
  • And to ease our process, DB2 will create a script called new_db.txt, which we can edit and run later on. (generate script new_db.txt)

 

2. Edit the new_db.txt script using vi, or another text editor depending on your operating system.
If you are not familiar with vi commands, I suggest downloading the file to your machine, editing it there, and sending it back.

The file's sample content will look like this one below:

RESTORE DATABASE ATOP_STG 
-- USER <username> 
-- USING '<password>' 
FROM '/db/db2data/instdevp/backups' 
TAKEN AT 20090131070258 
-- ON '/db/db2data/instdevp' 
-- DBPATH ON '<target-directory>' 
INTO ATOP_TST 
-- NEWLOGPATH '/db/db2data/instdevp/instdevp/NODE0000/SQL00002/SQLOGDIR/'
-- WITH <num-buff> BUFFERS 
-- BUFFER <buffer-size> 
-- REPLACE HISTORY FILE 
-- REPLACE EXISTING 
REDIRECT 
-- PARALLELISM <n> 
-- COMPRLIB '<lib-name>' 
-- COMPROPTS '<options-string>' 
WITHOUT ROLLING FORWARD 
-- WITHOUT PROMPTING 
; 
...
SET TABLESPACE CONTAINERS FOR 3 
USING ( PATH 
'/db/db2data/instdevp/instdevp/NODE0000/ATOP_STG/TS/SYSTOOLSPACE'
); 
...
RESTORE DATABASE ATOP_STG CONTINUE; 

The highlighted values in the script above should be changed wherever needed, for example:

RESTORE DATABASE ATOP_STG 
-- Indicates where the backup image file is located
FROM '/db/db2data/instdevp/backups'
-– Indicates the backup image's timestamp
TAKEN AT 20090131070258
-– Indicates the file system where the database will be created
ON '/db/db2data/instdevp'
-– Indicates the database name to be restored as
INTO ATOP_TST 
-– Indicates the log files location to be used for the new database
NEWLOGPATH '/db/db2data/instdevp/instdevp/NODE0000/SQL00001/SQLOGDIR/' 
-– Indicates the existing database should be replaced by the restore
REPLACE EXISTING 
-– Indicates the tablespaces will be created in a different location 
REDIRECT; 
.. 
SET TABLESPACE CONTAINERS FOR 3 
USING ( PATH 
-– Specify a new tablespace location 
'/db/db2data/instdevp/instdevp/NODE0000/ATOP_TST/TS/SYSTOOLSPACE');

If you are replacing an existing database and want to keep using its existing log path, just check the log path for the target database with
db2 get db cfg for DB_NAME | grep LOG
Example: db2 get db cfg for ATOP_TST | grep LOG

 

3. After your script has been changed and saved, you just need to run it:
db2 -tvf new_db.txt


 

Some problems are well known and simple to fix:

Problem:

SQL5099N 
The value "/db/db2data/instdevp/instdevp/NODE0000/SQL00001/SQLOGDIR/"
indicated by the database configuration parameter "NEWLOGPATH" is
not valid, reason code "4". SQLSTATE=08004

Solution: Go to the directory specified in the error message, and delete all existing log files.

-- Abort the restore:
db2 restore database ATOP_STG abort



-- Go to the specified directory and remove the files:
cd /db/db2data/instdevp/instdevp/NODE0000/SQL00001/SQLOGDIR/
rm *



-- Go back and restart the restore
cd /db/db2data/instdevp/backups
db2 -tvf new_db.txt


 
 

Problem:

SQL1035N  The database is currently in use.  SQLSTATE=57019

Solution: Force all users off from the target database.

Example:

  1. First find out which users are connected to the target database:
     db2 list applications for db ATOP_TST

     The output for the command above will look like this:
     Auth Id   Application Name   Appl. Handle   Application Id    DB Name    # of Agents
     --------  -----------------  -------------  ----------------  ---------  -----------
     INSTDEVP  Winsql.exe         5497           9.8.0.135.19972   ATOP_TST   1

  2. Now you need to force the user off:
     db2 "force application (5497)"

     Be aware that you may use db2 "force application all"; however, that forces off all connections in the instance, not only the ones to that database.

  3. Resubmit your script:
     db2 -tvf new_db.txt

If you want to follow the progress of the restore, just open another session and use the command db2 list utilities show detail. This command can be used for both backup and restore operations, among other utilities such as LOAD.

After the restore has finished, whether it took a few minutes or hours, please double-check the connection to the database. I always add a connect statement at the end of the script in this case: CONNECT TO ATOP_TST



--
Regards
Sandeep C