Wednesday 24 September 2014

Cognos BI integration with BigInsights using BigSQL: Use-cases and How to



In this blog we’ll develop Cognos BI reports using BigInsights (IBM’s Hadoop distribution) along with warehouse data sources. “Data warehouse augmentation” is a big data use case of huge importance to the traditional analytics industry (see http://www.ibm.com/developerworks/library/ba-augment-data-warehouse1/index.html for more). To explore and implement a big data project, you can augment an existing data warehouse environment by introducing one or more of the use cases given below, as the business requires.


This blog addresses case 3 directly; however, Big SQL can be used effectively in all three cases. In my previous blogs we discussed Cognos BI in detail, so if you are new to it you can check the details here - http://www.ibm.com/software/products/en/business-intelligence. Below is a brief description of BigInsights and Big SQL before we start the integration steps.
IBM InfoSphere BigInsights (http://www.ibm.com/software/data/infosphere/biginsights/) combines Apache Hadoop (including the MapReduce framework and the Hadoop Distributed File System) with unique, enterprise-ready technologies and capabilities from across IBM, including Big SQL, built-in analytics, visualization, BigSheets, and security. InfoSphere BigInsights is a single platform to manage all of this data, and it offers many benefits:
  • Provides flexible, enterprise-class support for processing large volumes of data by using streams and MapReduce
  • Enables applications to work with thousands of nodes and petabytes of data in a highly parallel, cost effective manner
  • Applies advanced analytics to information in its native form to enable ad hoc analysis
  • Integrates with enterprise software
Big SQL (http://www.ibm.com/developerworks/library/bd-bigsql/) provides SQL access to data stored in InfoSphere BigInsights through JDBC, ODBC, and other connections. Big SQL supports large ad hoc queries with IBM SQL/PL support, SQL stored procedures, SQL functions, and IBM Data Server drivers, and its low-latency queries return results quickly to reduce response time and improve access to data. Big SQL offers simplicity, performance and security for SQL on Hadoop, providing a single point of access and view across all big data, exactly where it lives.


OK, with this background we are ready to start the implementation tasks using Big SQL. If you are interested in Hive-based work instead, please refer to http://www.ibm.com/developerworks/library/ba-cognosbi10-infospherebiginsights/index.html. We’ll complete 3 tasks here –

1)      Setting up the environment with BigInsights 3.0, DB2 10.5 Warehouse (BLU) and Cognos BI 10.2.1 FP 3
2)      Prepare data sources: create tables and load data in the warehouse and Hadoop environments.
3)      Create Cognos data sources, a meta-data model and a sample report.

Task 1 - Setting up the environment with BigInsights 3.0, DB2 10.5 (BLU) and Cognos BI 10.2.1 FP 3

In my case, all the software below is installed on Red Hat Enterprise Linux 6.3. In your environment, the components can be on different machines as well.

·         For the Cognos BI 10.2.1 setup, you can either download the free developer edition for Windows from the IBM website (http://www.ibm.com/developerworks/downloads/im/cognosbi/) or, if you have the software for Linux, use the installation steps given in my previous blog (http://vmanoria.blogspot.in/2014/08/ibm-cognos-bi-installation.html).

·         If you don’t have a licensed version of DB2 10.5, please download and install DB2 10.5 Express-C (http://www-01.ibm.com/software/data/db2/express-c/download.html). Installation steps for Windows are shown here: https://www.youtube.com/watch?v=2AtSEHC6iAQ

·         For the BigInsights 3.0 setup, you can either download the free QuickStart Edition images from the IBM website (http://www.ibm.com/developerworks/downloads/im/biginsightsquick/) or, if you have the software, use the installation steps given in my previous blog (http://vmanoria.blogspot.in/2014/08/infosphere-biginsight-30-installation.html). If you are not using the images, you also need to follow the steps below.

·         Copy the Big SQL JDBC driver into the Cognos library folder and restart the Cognos BI services.

cp /opt/ibm/biginsights/bigsql/bigsql1/jdbc/bigsql-jdbc-driver.jar /opt/ibm/cognos/c10_64/webapps/p2pd/WEB-INF/lib/
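The restart step can be scripted as well. A minimal sketch, assuming the default install path used in this post and cogconfig.sh's -stop and -s (silent start) options; adjust COGNOS_HOME if your layout differs:

```shell
#!/bin/sh
# Restart Cognos BI so the newly copied Big SQL JDBC driver is picked up.
# COGNOS_HOME is an assumption -- adjust to your installation path.
COGNOS_HOME=${COGNOS_HOME:-/opt/ibm/cognos/c10_64}

restart_cognos() {
  if [ -x "$COGNOS_HOME/bin64/cogconfig.sh" ]; then
    "$COGNOS_HOME/bin64/cogconfig.sh" -stop   # stop the running service
    "$COGNOS_HOME/bin64/cogconfig.sh" -s      # start it again (silent mode)
  else
    echo "Cognos not found under $COGNOS_HOME"
  fi
}
restart_cognos
```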




Task 2 - Prepare data sources: create tables and load data in the warehouse and Hadoop environments

To keep things simple, we are going to work with 3 tables – 1) Student, 2) Student_Details and 3) Student_Facts. The first two tables are created in the DB2 environment, and Student_Facts is created in the BigInsights HDFS environment using Big SQL. After that, we’ll also create a Student_Details table in the HDFS environment and load its data from the DB2 database using the JDBC driver.

In DB2 BLU, let’s create the tables 1) Student and 2) Student_Details and load data from CSV files. The commands below are run in an RHEL shell.

[root@scekvm1 sample]# su db2inst1

[db2inst1@scekvm1 sample]$ ls
ER.jpg  Exam.csv  Old  Performance.csv  QBank.csv  Student.csv  Student_Details.csv  StuFact.csv

[db2inst1@scekvm1 sample]$ db2 connect to gs_db

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.5.3
 SQL authorization ID   = DB2INST1
 Local database alias   = GS_DB

[db2inst1@scekvm1 sample]$ db2 -tvf db2ddl.sql
CREATE TABLE DB2INST1.STUDENT ( STUDENT_ID INTEGER NOT NULL, STUDENT_NAME VARCHAR (30)NOT NULL, YEAR_OF_ADMISSION INTEGER NOT NULL, SCHOOL VARCHAR (30)NOT NULL, CLASS VARCHAR (10)NOT NULL, SECTION VARCHAR (3) NOT NULL, HOSTELER VARCHAR (3) NOT NULL )
DB20000I  The SQL command completed successfully.

CREATE TABLE DB2INST1.STUDENT_DETAILS ( STUDENT_ID INTEGER NOT NULL, DOB DATE NOT NULL, GENDER VARCHAR (2) NOT NULL, HOME_CITY VARCHAR (15) NOT NULL, HOME_STATE VARCHAR (3) NOT NULL, ADMISSION_CATEGORY VARCHAR (15) NOT NULL, SOCIAL_CATEGORY VARCHAR (15) NOT NULL, SCHOOL_CATEGORY VARCHAR (15) NOT NULL, NATIONALITY VARCHAR (15) NOT NULL, RELIGION VARCHAR (15) NOT NULL )
DB20000I  The SQL command completed successfully.

[db2inst1@scekvm1 sample]$ db2 import from Student.csv of del messages msg.txt insert into student

Number of rows read         = 1000
Number of rows skipped      = 0
Number of rows inserted     = 1000
Number of rows updated      = 0
Number of rows rejected     = 0
Number of rows committed    = 1000

[db2inst1@scekvm1 sample]$ db2 import from Student_Details.csv of del messages msg.txt insert into student_details

Number of rows read         = 1000
Number of rows skipped      = 0
Number of rows inserted     = 1000
Number of rows updated      = 0
Number of rows rejected     = 0
Number of rows committed    = 1000

SQL3107W  At least one warning message was encountered during LOAD processing.



Now in BigInsights, let us create Hadoop tables for Student_Details and Student_Facts. After that, we’ll load data into Student_Details from DB2 and into Student_Facts from a CSV file. Before we start, please make sure BigInsights is running; if not, start it by running /opt/ibm/biginsights/bin/start-all.sh


Here we’ll use JSqsh. BigInsights supports a command-line interface for Big SQL through the Java SQL Shell (JSqsh, pronounced “jay-skwish”), an open source project for querying JDBC databases. You may find it handy to become familiar with basic JSqsh capabilities, particularly if you don’t expect to have access to an Eclipse environment at all times for your work. The commands below are run in an RHEL shell.

[root@scekvm1 sample]# su biadmin

[biadmin@scekvm1 sample]$ cd /opt/ibm/biginsights/jsqsh/bin/

[biadmin@scekvm1 bin]$ ls
jsqsh  jsqsh.bat

[biadmin@scekvm1 bin]$ ./jsqsh bigsql
Password:********
WARN [State:      ][Code: 0]: Statement processing was successful.. SQLCODE=0, SQLSTATE=     , DRIVER=3.67.33
JSqsh Release 2.1.2, Copyright (C) 2007-2014, Scott C. Gray
Type \help for available help topics. Using JLine.

[scekvm1.iicbang.ibm.com][biadmin] 1> 

Just copy and paste the commands below at the JSqsh prompt to create the tables -

CREATE HADOOP TABLE IF NOT EXISTS STUDENT_FACTS (
 STUDENT_ID INTEGER NOT NULL,
 ATTENDANCE INTEGER NOT NULL,
 FEE_COLLECTED INTEGER NOT NULL,
 FEE_BALANCE INTEGER NOT NULL,
 MARKS INTEGER NOT NULL
 )
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '\t'
 LINES TERMINATED BY '\n'
 STORED AS TEXTFILE
 ;


 CREATE HADOOP TABLE IF NOT EXISTS STUDENT_DETAILS (
 STUDENT_ID INTEGER NOT NULL,
 DOB DATE NOT NULL,
 GENDER VARCHAR (2) NOT NULL,
 HOME_CITY VARCHAR (15) NOT NULL,
 HOME_STATE VARCHAR (3) NOT NULL,
 ADMISSION_CATEGORY VARCHAR (15) NOT NULL,
 SOCIAL_CATEGORY VARCHAR (15) NOT NULL,
 SCHOOL_CATEGORY VARCHAR (15) NOT NULL,
 NATIONALITY VARCHAR (15) NOT NULL,
 RELIGION VARCHAR (15) NOT NULL
 )
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '\t'
 LINES TERMINATED BY '\n'
 STORED AS TEXTFILE; 


You'll get a response like this -

0 rows affected (total: 0.49s)

Now let's load the data into the STUDENT_FACTS table from the CSV file. Here's the command -

LOAD HADOOP USING FILE URL  
'file:///images/vmanoria/engagements/sample/StuFact.csv'
WITH SOURCE PROPERTIES ('field.delimiter'=',')
INTO TABLE STUDENT_FACTS OVERWRITE;


On successful completion, you'll get a response like this -

WARN [State:      ][Code: 5108]: The LOAD HADOOP statement completed. Number of rows loaded into the Hadoop table: "1000".  Total number of  source records: "1000".  If the source is a file, number of lines skipped: "0".  Number of source records that were rejected: "0".  Job identifier: "job_201409242156_0009".. SQLCODE=5108, SQLSTATE=     , DRIVER=3.67.33
0 rows affected (total: 21.26s)

Now let's load data into the STUDENT_DETAILS table from the DB2 table we created earlier.

LOAD HADOOP USING JDBC CONNECTION URL
'jdbc:db2://scekvm1:50000/GS_DB'
 WITH PARAMETERS (user = 'db2inst1',password='db2inst1')
FROM TABLE STUDENT_DETAILS SPLIT COLUMN STUDENT_ID
INTO TABLE STUDENT_DETAILS APPEND
WITH LOAD PROPERTIES ( 'num.map.tasks' = 1);
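Once both loads complete, it is worth a quick sanity check from the JSqsh prompt. A hedged sketch using the tables defined above (the expected counts assume the 1000-row sample files used in this post):

```sql
-- Row counts after the two loads
SELECT COUNT(*) FROM STUDENT_FACTS;    -- expect 1000
SELECT COUNT(*) FROM STUDENT_DETAILS;  -- expect 1000

-- A quick join across the two Hadoop tables
SELECT D.HOME_STATE, SUM(F.FEE_COLLECTED) AS TOTAL_FEE
FROM STUDENT_DETAILS D
JOIN STUDENT_FACTS F ON F.STUDENT_ID = D.STUDENT_ID
GROUP BY D.HOME_STATE;
```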




Task 3 - Create Cognos data sources, a meta-data model and a sample report

Let’s quickly create two data source connections from the Cognos Administration interface: one to the DB2 database (GS_DB in our case) and another JDBC connection using the “IBM InfoSphere BigInsights (Big SQL)” type, as shown below. Provide valid sign-on details and test both connections.


Now let’s open Framework Manager and pull in the tables from their respective sources. Create relationships between them and set the query item properties correctly. For example, in my case I changed the usage property of ‘Student_ID’ to ‘Identifier’; it had defaulted to ‘Fact’ because of its integer data type. Before you create and publish the package, test whether the aggregate data comes out correctly. Then create a package and publish it to Cognos Connection.


Now you are ready to create your report using Report Studio.


 
References:

Cognos Business Intelligence 10.2 reporting on InfoSphere BigInsights (Using Hive)

Big data and data warehouse augmentation

Use big data technologies as a landing zone for source data

Use big data technology for an active archive

Use big data technologies for initial data exploration

What's the big deal about Big SQL?

Wednesday 27 August 2014

InfoSphere BigInsights 3.0 Installation and Configuration on Red Hat Linux



InfoSphere BigInsights is IBM’s big data offering to help organizations discover and analyze business insights hidden in large volumes of diverse data – data that’s often ignored or discarded because it’s too big, impractical or difficult to process using traditional means. Examples include log records, click streams, social media data, news feeds, emails, electronic sensor output, and even transactional data.

BigInsights brings the power of the open source Apache Hadoop project to the enterprise. In addition, a number of IBM value-add components make up this enterprise analytics platform, in the areas of analysis and discovery, security, enterprise software integration, and administrative and platform enhancements. For more details, please visit the BigInsights page (http://www.ibm.com/software/data/infosphere/biginsights/).


You can also download the no-charge QuickStart Edition of IBM InfoSphere BigInsights.

In this blog we’ll see the steps involved in BigInsights installation and configuration on RHEL. There are three major parts to it.

1)      Meet the pre-requisites (Hardware & Software)
2)      Complete pre-installation activities
3)      Install BigInsights 3.0

Meet the pre-requisites (Hardware & Software)

Let’s start with step 1. You can go through the standard supported environment specifications on the IBM site (http://www-01.ibm.com/support/docview.wss?uid=swg27027565). Here I am going to install single-node BigInsights 3.0 on an RHEL 6.4 system with the specification shown in the screenshot below.


We need to verify or install the Expect, Numactl, and Ksh Linux packages. One way to get these libraries is to download them independently from various Linux websites and install them. The other, and probably better, way is to use your OS disk or .ISO image (RHEL 6.4 in this case). I am going to use the second option here. First I copied the “RHEL6.4-20130130.0-Server-x86_64-DVD1.iso” file into the newly created /data folder, then mounted it as /media and updated the repository.

mount -oloop RHEL6.4-20130130.0-Server-x86_64-DVD1.iso /media
vi /etc/yum.repos.d/server.repo
rpm --import /media/*GPG*
yum clean all
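The contents of server.repo aren't shown in the post. A minimal sketch of what it might look like (the repo id and name are arbitrary; the baseurl assumes the DVD image is mounted at /media):

```
[rhel-dvd]
name=RHEL 6.4 DVD
baseurl=file:///media/
enabled=1
gpgcheck=1
```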


The next step is to verify that the Expect, Numactl and Ksh packages are installed.

rpm -qa | grep expect
rpm -qa | grep numactl
rpm -qa | grep ksh

If the packages are not installed, then run the following command to install them.

yum install expect
yum install numactl
yum install ksh

Now we are ready for step-2.

Complete pre-installation activities

In addition to product prerequisites, there are tasks common to all InfoSphere BigInsights installation and upgrade paths. You must complete these common tasks before you start an installation or upgrade.

Task – 1) Ensure that adequate disk space exists for these directories - / (10GB), /tmp (5GB), /opt (15GB), /var (5GB) & /home (5GB).

df -h
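To avoid eyeballing the df output, the check can be scripted. A small sketch, assuming a POSIX df; the per-directory minimums are the ones listed above, not an official sizing guide:

```shell
#!/bin/sh
# Check each mount point from the post against its suggested minimum free
# space in GB.
check_space() {
  for req in /:10 /tmp:5 /opt:15 /var:5 /home:5; do
    dir=${req%:*}; min=${req#*:}
    # df -P keeps each filesystem on one line; field 4 is available 1K-blocks
    avail_kb=$(df -P "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
    if [ -z "$avail_kb" ]; then
      echo "$dir LOW (not found, need ${min}G)"
      continue
    fi
    avail_gb=$((avail_kb / 1024 / 1024))
    if [ "$avail_gb" -ge "$min" ]; then
      echo "$dir OK (${avail_gb}G free, need ${min}G)"
    else
      echo "$dir LOW (${avail_gb}G free, need ${min}G)"
    fi
  done
}
check_space
```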

Task – 2) Check that all devices have a Universally Unique Identifier (UUID) and that the devices are mapped to their mount points.

sudo blkid

vi /etc/fstab

Before you edit /etc/fstab, save a copy of the original file.
            

 Task – 3) Create the biadmin user and group.

// Add the biadmin group.
groupadd -g 123 biadmin  

// Add the biadmin user to the biadmin group.
useradd -g biadmin -u 123 biadmin 

//Set the password for the biadmin user.
            passwd biadmin

//add the biadmin user to the sudoers group.
     sudo visudo -f /etc/sudoers

Find the line below and, if it is not already commented, add ‘#’ at the beginning to comment it out:
            # Defaults requiretty

Also add these lines just below the “# %wheel ALL=(ALL) NOPASSWD: ALL” line:
biadmin ALL=(ALL) NOPASSWD:ALL
root ALL=(ALL) NOPASSWD:ALL

Open the /etc/security/limits.d/90-nproc.conf file and add below lines.

@biadmin  soft nofile    65536
@biadmin  soft nproc     65536
@root     soft nofile    65536
@root     soft nproc     unlimited

Open the /etc/security/limits.conf file and add below lines.
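The post doesn't list the limits.conf entries to add. As a hedged guess, mirroring the nofile/nproc values used in 90-nproc.conf above (verify against the BigInsights installation guide for your release):

```
biadmin soft nofile 65536
biadmin hard nofile 65536
biadmin soft nproc  65536
biadmin hard nproc  65536
```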

Task – 4) Configure your network.

Edit /etc/hosts to include the IP address, fully qualified domain name, and short name. The format is IP_address domain_name short_name. For example,

127.0.0.1 localhost.localdomain localhost
172.21.6.151 bda.iicbang.ibm.com bda

Edit the /etc/resolv.conf to include the nameservers

domain iicbang.ibm.com
search iicbang.ibm.com
nameserver 172.21.4.40

Save your changes and then restart your network.
service network restart

We need to configure passwordless SSH for the root and biadmin users.
su root
ssh-keygen -t rsa (When asked select the default file storage location and leave the password blank.)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@bda.iicbang.ibm.com

Ensure that you can log in to the remote server without a password.
ssh root@bda.iicbang.ibm.com
exit

Repeat this SSH setting process for biadmin user also.

Run the following commands in succession to disable the firewall.
service iptables save
service iptables stop
chkconfig iptables off

Now disable IPv6 –

echo "install ipv6 /bin/true" >> /etc/modprobe.d/disable-ipv6.conf

Edit the /etc/sysconfig/network file and append the following lines.
NETWORKING=yes
NETWORKING_IPV6=no

Edit /etc/sysconfig/network-scripts/ifcfg-eth0 (assuming eth0 is used for networking) and add these lines –

IPV6INIT=no

Append following lines at the end of /etc/sysctl.conf file.
net.ipv6.conf.all.disable_ipv6 = 1
kernel.pid_max = 4194303
net.ipv4.ip_local_port_range = 1024   64000

Restart your machine.
reboot

Verify that IPv6 is disabled.
ifconfig
IPv6 is disabled if no lines containing inet6 appear in the output.
 
Task – 5) Synchronize the clocks of all servers using a Network Time Protocol (NTP) source.

Add below line in /etc/ntp.conf
server 172.21.4.40 iburst

 Update the NTPD service with the time servers that you specified.
chkconfig --add ntpd

 Start the NTPD service.
service ntpd start

 Verify that the clocks are synchronized with a time server.
ntpstat


Task – 6) Run the pre-installation checker utility to verify your Linux environment’s readiness

I have copied the BigInsights software into the /data folder. Let’s unzip it.

tar -xvf IS_BigInsights_EE_30_LNX64.tar.gz
            cd IS_BigInsights_EE_30_LNX64/installer/hdm/bin
     ls bi-prechecker.sh

We must run bi-prechecker.sh and pass all of its checks before starting the BigInsights installation. First, let’s create a file containing your host name.

echo "bda.iicbang.ibm.com" > hostlist.txt
            ./bi-prechecker.sh -m ENTERPRISE -f hostlist.txt -u biadmin

If all the checks are [ OK ], we are ready for the next step. If there are [FAILED] entries, go through the log file the utility creates in the same folder and correct the issues.

Install BigInsights 3.0

Let’s start the installation steps, which are pretty easy if the previous steps completed successfully.

Navigate to the directory where you extracted the BigInsights package.
            cd /data/IS_BigInsights_EE_30_LNX64/
Run the start.sh script.
     ./start.sh

The script starts WebSphere Application Server Community Edition on port 8300. The script provides you with a URL to the installation wizard. In my case I received -

http://172.21.6.151:8300/Install/

Open it in the browser. On the License Agreement panel, accept the license agreement and then click Next.
 

On the Installation Type panel, select Cluster installation, select the check box to create a response file and save your selections without completing an installation, and then click Next.

On the File System panel, enter a name for your cluster (BICluster is the default), select Install Hadoop Distributed File System (HDFS), enter the mount point where you want to install HDFS, and then click Next. You can also choose another file system.
 

On the 'Secure Shell' panel, select the user (root in my case) that you want to install with, enter any required information, and then click Next.

On the 'Nodes' panel, click your node to use for HDFS. I can see bda.ibm.com listed here.

Next, on the 'Components 1' screen, set whatever passwords you want for the ‘catalog’ and ‘bigsql’ users.


 Click Next on the remaining panels until you reach the Summary panel. On the Summary panel, click Create response file. The installation program displays the location where your response file is saved. Take note of this location so that you can easily locate your response file after you install HDFS and are ready to install InfoSphere BigInsights.
 

Make sure you can see all the services running on your node on ‘results’ panel.
 

Next, it takes you to the BigInsights Console screen, which shows that your installation completed successfully. You can browse information from the Welcome tab and decide your next action.



Now, if you want to add more nodes to the cluster, prepare them and add them from the Cluster Status tab.

To stop all the services, run the commands below -
            cd /opt/ibm/biginsights/bin/
            ./stop-all.sh
Similarly, there is ./start-all.sh to start all the services.


We also need to install the IBM InfoSphere BigInsights Eclipse tools for developing and deploying applications to the BigInsights server and writing programs using Java MapReduce, JAQL, Pig, Hive and Big SQL. First, download Eclipse 4.3+ from www.eclipse.org. Then add the http://<server>:<port>/updatesite/ URL to your Eclipse Software Updater (Help Menu -> Install) as shown below. Select the location and all entries under the IBM InfoSphere BigInsights category, then simply follow the steps to install the InfoSphere BigInsights plugins.
 

References:

Planning to install InfoSphere BigInsights 3.0

http://www-01.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.install.doc/doc/c0057867.html?cp=SSPT3X

Preparing to install InfoSphere BigInsights 3.0

http://www-01.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.install.doc/doc/bi_install_prep_overview.html

Installing InfoSphere BigInsights 3.0


BigInsights 3.0 Tutorials


Monday 25 August 2014

IBM Cognos BI Installation & Configuration on Red Hat Linux


Those who normally work with Cognos BI on Windows Server often find it difficult to install and configure it on Linux. In this blog we’ll see the steps involved in this installation and configuration on RHEL. There are three parts to it.

1)      Meet the pre-requisites (Hardware & Software)
2)      Install and configure Cognos BI Server components
3)      Install and configure HTTP server

Cognos Framework Manager and Transformer are client tools and must be installed on Windows.

Meet the pre-requisites (Hardware & Software)

Let’s start with step 1. You can go through the standard supported environment specifications on the IBM site (http://www-01.ibm.com/support/docview.wss?uid=swg27037784). Here I am going to install Cognos BI V10.2.1 on an RHEL 6.4 system with the specification shown in the screenshot below.


We need to ensure the required packages are installed before we start the Cognos installation:

  • glibc-2.12-1.80.el6 (both ppc and ppc64 packages) - 32 and 64 bit glibc libraries
  • libstdc++-4.4.6-4.el6 (both ppc and ppc64 packages) - 32 and 64 bit libstdc++ libraries
  • nspr-4.9-1.el6 (both ppc and ppc64 packages) - 32 and 64 bit nspr library for CAM ldap provider
  • nss-3.13.3-6.el6 (both ppc and ppc64 packages) - 32 and 64 bit nss library for CAM ldap provider
  • openmotif-2.3.3-4.el6 (both ppc and ppc64 packages) - 32 and 64 bit openmotif libraries

One way to get these libraries is to download them independently from various Linux websites and install them. The other, and probably better, way is to use your OS disk or .ISO image (RHEL 6.4 in this case). I am going to use the second option here. First I copied the “RHEL6.4-20130130.0-Server-x86_64-DVD1.iso” file into the newly created /data folder, then mounted it as /media and updated the repository.

mount -oloop RHEL6.4-20130130.0-Server-x86_64-DVD1.iso /media
vi /etc/yum.repos.d/server.repo
rpm --import /media/*GPG*
yum clean all
  

Now, to check whether the glibc package is already installed, use the command below:

rpm -qa | grep glibc

If the package is installed you’ll get a list of package names (ending with .x86_64 or .i686) in return; otherwise we need to install it, including dependencies, using the commands below:

 yum install glibc.i686    // For 32-bit
 yum install glibc.x86_64  // For 64-bit

Repeat the same process for libstdc++, nspr, nss, openmotif.
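Instead of grepping for each package separately, the whole prerequisite check can run in one pass. A minimal sketch (the package names come from the list above; the yum hint in the output is only a suggestion):

```shell
#!/bin/sh
# Check all five prerequisite packages in one loop; print an install hint
# for any that are missing.
check_pkgs() {
  for pkg in glibc libstdc++ nspr nss openmotif; do
    if command -v rpm >/dev/null 2>&1 && rpm -qa | grep -q "^$pkg"; then
      echo "$pkg: installed"
    else
      echo "$pkg: missing (try: yum install $pkg.i686 $pkg.x86_64)"
    fi
  done
}
check_pkgs
```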

We also need JDK 7 installed as a prerequisite. I downloaded IBM JDK 7 for the 64-bit Linux environment from the IBM site (http://www.ibm.com/developerworks/java/jdk/linux/download.html) and installed it as shown below.

./ibm-java-x86_64-sdk-7.1-1.1.bin

It is installed in /opt/ibm/java-x86_64-71. Now we are ready for step-2.

Install and configure Cognos BI Server components

As shown in the snapshot below, I have copied these 5 server components –
·         Cognos BI Server 10.2.1
·         Cognos BI Samples (optional)
·         Cognos SDK (optional)
·         Cognos Mobile (optional)
·         Cognos Dynamic Query Analyzer (optional)



First unzip the package using the commands below –

tar -xvf bi_svr_10.2.1_l86_ml.tar.gz
     cd linuxi38664h
     ./issetup

It opens a GUI-based installation wizard, as shown below -


From here the steps are self-explanatory. I am selecting all four components on the ‘Component Selection’ screen, as I want everything on my single server. By default, ‘Cognos Content Database’ is not selected; if you plan to create the content store elsewhere, you can proceed without it.


Once the installation is over, you can go ahead with the Cognos Samples, SDK, Mobile and other server components. Then add the bcprov-jdk14-145.jar from /opt/ibm/cognos/c10_64/bin64/jre/7.0/lib/ext/ to the $JAVA_HOME/lib/ext path:

 cp /opt/ibm/cognos/c10_64/bin64/jre/7.0/lib/ext/bcprov-jdk14-145.jar /opt/ibm/java-x86_64/jre/lib/ext

We also need to add JAVA_HOME in cogconfig.sh file before opening Cognos Configuration tool.

vi /opt/ibm/cognos/c10_64/bin64/cogconfig.sh

And add the line below as the first executable command of the cogconfig.sh file.

 export JAVA_HOME=/opt/ibm/java-x86_64/jre

Save it, then run the script to start the Cognos service.

/opt/ibm/cognos/c10_64/bin64/cogconfig.sh



Test whether the Cognos BI service is running by opening “http://localhost:9300/p2pd/servlet” in a browser.
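The same check can be done from the shell with curl. A sketch, assuming the default dispatcher host and port from this post (a 200 status means the service is up; 000 means no connection):

```shell
#!/bin/sh
# Probe the Cognos dispatcher URL and print the HTTP status code.
probe_dispatcher() {
  url="http://${1:-localhost}:9300/p2pd/servlet"
  if command -v curl >/dev/null 2>&1; then
    # -w prints only the status code; curl prints 000 if it cannot connect
    curl -s -o /dev/null -w '%{http_code}' "$url" || true
  else
    echo "curl not installed"
  fi
}
echo "Dispatcher status: $(probe_dispatcher)"
```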



Install and configure HTTP server

You can use an HTTP server of your choice here. I am using IBM HTTP Server (32-bit), which is no-charge and can be downloaded from the IBM site.

Unzip the downloaded package, update JAVA_HOME in the “IHS/install” file, and run it.

tar -xvf ihs.7000.linux.ia32.tar
cd IHS
vi install
./install

It’ll open a GUI wizard for the installation.


After a successful installation, we’ll add the necessary virtual directories to the configuration file.

 vi /opt/IBM/HTTPServer/conf/httpd.conf

Add below lines in httpd.conf

# Cognos #
# Load Cognos Apache 2.2 Module
LoadModule cognos_module "/opt/ibm/cognos/c10_64/cgi-bin/lib/mod2_2_cognos.so"

# Add WebDAV lock directory, make sure the LoadModule dav_module modules/mod_dav.so and LoadModule dav_fs_module modules/mod_dav_fs.so are uncommented
DAVLockDB /tmp/DBLOCK

# Alias for the cgi scripts
ScriptAlias /ibmcognos/cgi-bin "/opt/ibm/cognos/c10_64/cgi-bin/"

<Directory "/opt/ibm/cognos/c10_64/cgi-bin">

AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>

# Alias for the Cognos webcontent folder
Alias /ibmcognos "/opt/ibm/cognos/c10_64/webcontent"

<Directory "/opt/ibm/cognos/c10_64/webcontent">
Options Indexes FollowSymLinks MultiViews IncludesNoExec
AddOutputFilter Includes html
AllowOverride None
Order allow,deny
Allow from all
DAV on
</Directory>

Find the lines below and add ‘#’ at the beginning to comment them out.

# LoadModule was_ap22_module /opt/IBM/HTTPServer/Plugins/bin/32bits/mod_was_ap22_http.so
# WebSpherePluginConfig /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml

We can save it now. Next, we’ll create two files – one to start the server and another to stop it.

Create startIHS.sh with below code –
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ibm/cognos/c10_64/cgi-bin:/opt/ibm/cognos/c10_64/cgi-bin/lib
./apachectl start

And stopIHS.sh with –
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ibm/cognos/c10_64/cgi-bin:/opt/ibm/cognos/c10_64/cgi-bin/lib
./apachectl stop

Copy both files in /opt/IBM/HTTPServer/bin and run startIHS.sh to start the server.
  cd /opt/IBM/HTTPServer/bin
./startIHS.sh

Here’s your Cognos BI server, ready for use. Open “http://localhost:80/ibmcognos” on the local machine, or use the IP address/hostname instead of ‘localhost’. In my case, the HTTP server is running on the default port, 80.


If you want to upgrade it with Fix Pack 3, which is the latest, please get it from the IBM site and install it.

References:

IBM Cognos 10.2.1 official documentation (Knowledge Center)

Business Intelligence Installation and Configuration Guide

Business Intelligence Architecture and Deployment Guide