Friday, June 29, 2012

Invoking Web Services from OBIEE

This blog post explains how to invoke Web services from OBIEE 11g using Actions.

 

Requirements

OBIEE 11g installed and running

Sample app for OBIEE 11g

SOAP UI

Note: The machine on which you have installed OBIEE 11g needs internet access.

 

Web Service

A Web service is a Web resource that can be invoked to retrieve some information. There are 2 different implementations of Web services:

1. SOAP: Simple Object Access Protocol

2. REST: Representational State Transfer.

Today we are going to see a demo based on SOAP-based Web services. SOAP-based Web services are described in an XML format known as the Web Service Description Language (WSDL). The WSDL specifies which operations the Web service contains, the inputs and outputs for each operation, where the service is located, and so on.
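To make this concrete, here is a hypothetical, heavily trimmed WSDL fragment and a few stdlib Python lines that list the operations it declares (real WSDL files also carry type, binding, and service-location sections; the operation names here are illustrative):

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed WSDL fragment; a real WSDL is far more verbose.
wsdl = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <portType name="USZipSoap">
    <operation name="GetInfoByZIP"/>
    <operation name="GetInfoByCity"/>
  </portType>
</definitions>"""

ns = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}
root = ET.fromstring(wsdl)
# Each <operation> element under a portType is something a client can invoke.
ops = [op.get("name") for op in root.findall(".//wsdl:operation", ns)]
print(ops)  # ['GetInfoByZIP', 'GetInfoByCity']
```

Tools like SOAP UI do essentially this parsing for you when you point them at a WSDL URL.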

The Web service that I am going to use for the demo is called the “zip code web service”. This service has an operation that takes a ZIP code as an input parameter and returns the city name, state name, time zone, phone area code, etc. as output.

The web service is located at http://www.webservicex.net/uszip.asmx

The WSDL is located at http://www.webservicex.net/uszip.asmx?wsdl

Note: Please note that I am in no way affiliated with this Web service.
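Under the hood, invoking this service just means POSTing a SOAP envelope over HTTP. Here is a minimal sketch of such an envelope; the namespace and element names follow the service's WSDL as I recall it, and the service itself may no longer be online, so treat this as illustrative rather than authoritative:

```python
# Sketch of the SOAP request body for the GetInfoByZIP operation.
# The web: namespace URI and element names are assumptions based on the
# service's WSDL; verify against the live WSDL before relying on them.
SOAP_TEMPLATE = """<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:web="http://www.webserviceX.NET">
  <soap:Body>
    <web:GetInfoByZIP>
      <web:USZip>{zip_code}</web:USZip>
    </web:GetInfoByZIP>
  </soap:Body>
</soap:Envelope>"""

def build_request(zip_code):
    # Fill the single input parameter the operation expects.
    return SOAP_TEMPLATE.format(zip_code=zip_code)

envelope = build_request("60563")
print(envelope)
```

SOAP UI generates exactly this kind of envelope for you, which is what we will see in the next section.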

Understanding the Web service

Let us see how Web services work.

1. Download the SOAP UI Standalone version and unzip it.

2. Go to the bin folder and double-click soapui.bat on Windows or run soapui.sh on Linux

3. It opens a GUI window

4. Click on the File menu and select New soapUI Project

5. It should open a New soapUI Project window. Give the project a name (I named it Test), enter “http://www.webservicex.net/uszip.asmx?wsdl” in the Initial WSDL/WADL field, leave the default settings, and click OK

6. SOAP UI creates the necessary stubs and you should see a new project (Test in my case) with a list of services and operations

7. Expand USZipSoap12, expand the GetInfoByZip operation, and double-click Request1. You should see a new split window: the left side represents the input message for the operation and the right side represents the output. On the left-hand side, notice the <web:GetInfoByZip> tag just below the Body tag; this indicates that you are about to invoke the GetInfoByZip operation. The <web:USZip> element under the operation tag is the parameter that needs to be passed before executing the operation. Replace the “?” with some ZIP code and click the green play (run) button located at the top. You need internet access for this step to work

8. You can see that the output side is filled with data. I entered 60563 and got back the city as Naperville, the state as IL, the time zone as C, and the area code as 630

9. In OBIEE, you get this information using XPath. For example:

a. To get the XPath for City, you could use //CITY (since there is only one CITY element). I would suggest using //GetInfoByZIPResult//Table/CITY.

b. Similarly, I would use

i. //GetInfoByZIPResult//Table/STATE for State

ii. //GetInfoByZIPResult//Table/TIME_ZONE for Time Zone

iii. //GetInfoByZIPResult//Table/AREA_CODE for Area Code

Note: Please remember these XPath expressions, as I am going to use them in the next section.
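Since the response is plain XML, these XPath expressions can be sanity-checked outside OBIEE. Below is a small Python sketch; the sample payload is a hand-trimmed approximation of the service's response (not a captured one), and the paths mirror the expressions above:

```python
import xml.etree.ElementTree as ET

# Hand-trimmed approximation of the payload inside the SOAP response
# for ZIP 60563; element names follow the XPath expressions above.
sample = """<GetInfoByZIPResult>
  <NewDataSet>
    <Table>
      <CITY>Naperville</CITY>
      <STATE>IL</STATE>
      <ZIP>60563</ZIP>
      <AREA_CODE>630</AREA_CODE>
      <TIME_ZONE>C</TIME_ZONE>
    </Table>
  </NewDataSet>
</GetInfoByZIPResult>"""

root = ET.fromstring(sample)
# ElementTree supports a subset of XPath; ".//Table/CITY" plays the same
# role as //GetInfoByZIPResult//Table/CITY in the Action setup.
city = root.findtext(".//Table/CITY")
state = root.findtext(".//Table/STATE")
time_zone = root.findtext(".//Table/TIME_ZONE")
area_code = root.findtext(".//Table/AREA_CODE")
print(city, state, time_zone, area_code)  # Naperville IL C 630
```

If an expression returns None here, it would also come back empty in the OBIEE Action Results dialog.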

Create Action:

1. Let us create an action first. Log in to OBIEE 11g and go to New > Action

2. It pops up a dialog; select the Invoke a Web Service link

3. You should see the “Select Web Service Operation” window. In the WSDL URL field, enter http://www.webservicex.net/uszip.asmx?wsdl and click the Open button

4. Expand USZip > USZipSoap12, select the GetInfoByZip operation, and click OK

5. It opens the New Action window. Notice that it defines only one parameter, since our Web service operation takes only one parameter as input. Click the “Options…” button located next to the Help button

6. In the Invoke Action tab, check the Dialog Title box and enter “Get Geographic Information for Zip” in the field beside it

7. In the Action Results Tab,

a. Enter “The Geographic information is” in the Dialog Title field

b. Enter the following in the Dialog Text area


The City is @{1}
The State is @{2}
The Time Zone is @{3}
The Area Code is @{4}


c. In the XPath Results section, add 4 names as follows (use the green plus button to add additional parameters)

i. 1 as //GetInfoByZIPResult//Table/CITY

ii. 2 as //GetInfoByZIPResult//Table/STATE

iii. 3 as //GetInfoByZIPResult//Table/TIME_ZONE

iv. 4 as //GetInfoByZIPResult//Table/AREA_CODE


8. Click OK and then the Save Action button.

9. This finishes the action part; let us go and create an analysis that uses this action.

Note: I could have given the demo with just an action link, but I thought it better to create a demo with an analysis.

Create Analysis

I have the sample app for OBIEE 11g installed and I am going to use it for the demo.

1. Create an analysis from the Sample Sales subject area with Postal Code (Customer), Product, Revenue, and Country (Customer) as columns

2. Since the zip code Web service only works for the United States, add a filter on Country with value equal to United States

3. On the Results tab, let us exclude Country from the display

4. On the Criteria tab, open the column properties of the Postal Code column

5. Go to the Interaction tab of the column properties and set the Primary Interaction for Value to Action Links

6. Add an action link by clicking the green plus symbol

a. Enter the link text as Get Geographic Info

b. Click the Select existing action button

c. Navigate to the place where you saved the action (mentioned in the previous section; I saved it as ZipToGeoAction), click the action, and click the OK button.

d. You should see a pop-up “Edit Parameter Mapping” window. In the Value column, click the drop-down button to expand it and select the “Column Value” option

7. You should see that the values are populated as the list of columns present in your analysis. Select Postal Code as the value. By doing this, you are saying: pass the value of Postal Code as a parameter to the Web service.

8. Click OK to close the Edit Parameter Mapping window, OK to close the Edit Action Link window, and OK to close Column Properties.

9. Go to the Results tab and click any value in the Postal Code column; it should pop up a Get Geographic Info menu

10. Click on the Get Geographic Info menu item; you should see the “Get Geographic Information for Zip” dialog configured earlier

11. Click the Execute button and you should see the result dialog with the geographic information

12. Note: it might take a little while, as it goes over the internet to get this information.

If you have any questions or comments, please drop a note below.

Tuesday, June 26, 2012

How to setup Hadoop for Development

If you are enthusiastic about or interested in the code of Apache Hadoop, then perform the following steps to build and view the code in either the NetBeans or Eclipse IDE.


Requirements:

  1. Linux box

  2. subversion

  3. Java 1.6

  4. Apache Maven

  5. Apache Ant


Checkout Code:

You need to check out the code from the trunk of Apache Hadoop:


svn co http://svn.apache.org/repos/asf/hadoop/common/trunk/ Hadoop


Protocol Buffers:

Hadoop uses Google's Protocol Buffers for serialization, so you need to install it:

  1. Download Protocol Buffers from Google Code (search for the link). I downloaded protobuf-2.4.1.tar.gz

  2. Extract the downloaded file, navigate to the extracted folder on the command line, and run the following steps:



./configure
make
sudo make install


Note: Make sure that every step passes before proceeding to the next step

Note: Make sure you run step 3 (make install) as the super user or with sudo

  3. Test your Protocol Buffers installation by running protoc on the command line; you should get something like “Missing input file”


Setup:

Navigate to the folder where you checked out Hadoop and run the following commands. It is just a basic Maven build:

[sourcecode language="shell"]
mvn clean
mvn install -DskipTests
[/sourcecode]


*Note: The build might fail; if so, rerun the two commands. It should work the second time.

At this point you are good to go.

*If you use NetBeans, then loading Maven projects is a piece of cake. Navigate to the Hadoop folder and NetBeans should automatically recognize the Maven project. Click Open to open the project.

*If you use Eclipse, then run this additional command after step 2 in the Setup section:

[sourcecode language="shell"]
mvn eclipse:eclipse
[/sourcecode]


If you are interested in sources and Javadocs:

[sourcecode language="shell"]
mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
[/sourcecode]


Now open Eclipse and go to Import > Existing Projects into Workspace > Next, navigate to the Hadoop folder, and click Finish.

Please go to Apache Hadoop's site for more details.

Monday, June 25, 2012

Install Apache Hadoop in a Single Node mode

Requirements:

You need the following software before you can proceed:

  1. Linux Machine (I am using Ubuntu)

  2. Java - JDK 1.6 (Oracle's version is preferred)

  3. *Hadoop latest distribution (1.0.3 in my case)

  4. Install an SSH server on the Linux machine (on Ubuntu: sudo apt-get install openssh-server)

  5. Install rsync (on Ubuntu: sudo apt-get install rsync)

  6. Setup passphraseless ssh

    1. If you type ssh localhost and SSH asks for a password, then perform these steps:

    2. ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
      cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    3. Type ssh localhost again and it should not ask for a password.




*It is better to download the zipped or tar.gz version instead of the .deb or .rpm package.

Brief Overview on Hadoop:

Before we install Hadoop, you should know a few things about it. I will provide a brief overview of each topic; please refer to Apache Hadoop's site for more details.

Hadoop has 3 important pieces

  1. Distributed File System

  2. Map Reduce Framework

  3. Hadoop Common


Distributed File System

DFS (Distributed File System) has 2 important pieces i.e., Name Node and Data Node.

  1. Name Node: The Name Node is like a controller in DFS. Its primary responsibilities include

    • dividing files into chunks (blocks) and distributing them across Data Nodes

    • maintaining reports on where blocks are present

    • making sure that the number of replications of a file is always met, etc.



  2. Data Node: This node's primary responsibility is to store the data blocks and send the Name Node a periodic report on which blocks it is maintaining.


Map Reduce Framework

The MR (Map Reduce) Framework is used to create and run jobs that work on the data stored in DFS. Jobs are submitted using a job client. There are 2 important pieces, the Job Tracker and the Task Tracker. The Job Tracker is responsible for handling a job, dividing it, and distributing the work among Task Trackers. I will go into the details in another blog post.
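This is not Hadoop API code, but the division of labor can be sketched in plain Python with the classic word-count example: the map phase emits (key, value) pairs and the reduce phase aggregates them per key, which is conceptually the work the framework distributes across Task Trackers:

```python
from collections import defaultdict

# Conceptual Map Reduce sketch, not actual Hadoop API code.
def map_phase(lines):
    # Each mapper emits a (word, 1) pair for every word in its input chunk.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # The framework groups pairs by key; each reducer sums the counts.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

result = reduce_phase(map_phase(["hello hadoop", "hello world"]))
print(result)  # {'hello': 2, 'hadoop': 1, 'world': 1}
```

In real Hadoop, many mappers and reducers run in parallel on different nodes, and the grouping-by-key (the shuffle) happens between the two phases.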

Hadoop Common

Hadoop Common is a set of utilities used by HDFS, the Map Reduce Framework, and various other subprojects

Setup:

  1. Unzip the Hadoop*.tar.gz to a location and let us create an environment variable called HADOOP_HOME pointing to that location

  2. Open the file HADOOP_HOME/conf/hadoop-env.sh, remove the # in front of “export JAVA_HOME=”, and set it to the place where you have installed Java

  3. Open the file HADOOP_HOME/conf/core-site.xml and replace the <configuration> tag with



<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>


  4. Open the file HADOOP_HOME/conf/hdfs-site.xml and replace the <configuration> tag with



<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


  5. Open the file HADOOP_HOME/conf/mapred-site.xml and replace the <configuration> tag with



<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>


Installation:

  1. We need to set up the Name Node

    1. cd $HADOOP_HOME

    2. bin/hadoop namenode -format

    3. The above command creates the folder /tmp/hadoop-userName, which is where Hadoop holds all its files (userName is the name of the user who performed step 2)



  2. Start Hadoop by running the following commands

    1. cd $HADOOP_HOME

    2. bin/start-all.sh

    3. This starts Hadoop

    4. Try running jps from the command line and you should get something like the following





3884 NameNode
4825 Jps
4437 SecondaryNameNode
4525 JobTracker
4192 DataNode
4774 TaskTracker


Congratulations, you have successfully installed Hadoop. If you want to play a bit more, then open the following URLs in a web browser:

  1. http://localhost:50030 for job tracker

  2. http://localhost:50060 for task tracker

  3. http://localhost:50070 for dfs


I will go into the details of how to add files, run map reduce jobs, etc. in another blog post.

Please go to Apache Hadoop's site for more details.

Thursday, June 21, 2012

Understanding Master Details Events in OBIEE 11g

Master Detail events help us trigger events on one view based on a user's interactions with the same or a different view. For example, let's say there are two views in your analysis (I am referring to the sample app that comes with OBIEE 11g):

  1. A simple table with columns Products, Regions, Revenue

  2. A graph with Products as the graph prompt, Regions on the X axis, and Revenue on the Y axis




If you would like to update the graph based on user interactions on the table (the user clicking on a column), then Master Detail events are your answer.

There are basically two terms that you need to understand

  1. Master View

  2. Details View


Master View

  1. The master view is responsible for generating events, which are listened to by details views

  2. A view becomes a master view if you set up the interaction (on the Interaction tab of column properties) of a column to send Master-Detail events using a channel (a name which you assign), and the column for which you performed the setup becomes the Master Column

  3. The channel acts as a mediator between the master view and details views by passing the events. All you do with a channel is assign it a name

  4. The master column can be any column, i.e., hierarchy, attribute, or measure, and should be present in the body (the master column cannot be in the selection section, graph prompts, or page edge)

  5. There is no rule on where the master view and details view should be, i.e., the master view and details view can be from different analyses

  6. Only the following views can be master views

    1. Table

    2. Pivot Table

    3. Graph

    4. Gauge

    5. Funnel Graph

    6. Map




Details View

  1. A details view listens to the channel specified by the master view. When an event is fired by the master view, the details view captures the event and responds accordingly

  2. A view becomes a details view if you set up the view to listen to the Master-Detail event's channel (it should be the same channel name mentioned in your master view) for a column in its properties section. This column becomes the Details Column

  3. A details view can listen to any number of channels. These channels may come from the same analysis or from different analyses

  4. A details view cannot be a master view

  5. The details column must be displayed on the page edge, graph prompts, or selection section

  6. Only following views can be details view

    1. Table

    2. Pivot Table

    3. Graph

    4. Gauge

    5. Funnel Graph




Example

Create a Master view


    1. Create a new analysis with some columns; I have created a 3-column analysis






    2. Create 2 views on the Results tab, 1 as a table and 2 as a graph






    3. Now navigate to the Criteria tab and open the column properties of Product

    4. Click on the Interaction tab, select Primary Interaction as Send Master-Detail Events, and specify the channel name (“ProductSelectedChannel” in my case)





    5. Click OK. This creates the master view.


Create Details View


    1. For the details view, click the edit button (pencil button) on the graph






    2. Click on the properties of the graph






    3. On the General tab, in the Zoom and Scroll section, check Listen to Master-Detail Events and enter the channel name (in my case “ProductSelectedChannel”; refer to step 4 in the previous section)





    4. Click the OK button and click Done. This creates the details view


Now try clicking any product value in the table; this automatically updates the graph by selecting the same product in the drop-down list. Have fun :).

Wednesday, June 20, 2012

Hadoop JAVA_HOME is not set error

Most first-time users of Hadoop face this problem, so I thought it better to write a note on the topic. JAVA_HOME is an environment variable that Hadoop needs, and the phrase “environment variable” confuses many Linux users here. Linux environment variables can be set in many ways (.bashrc, .bash_profile, etc.), but Hadoop does not look there. There is a file at hadoop_installation_path/conf/hadoop-env.sh. Open it in edit mode and you should see a line starting with # export JAVA_HOME=.... Change its value to the location where you have installed Java and remove the # in front of the line. You should now be able to run Hadoop. Drop a comment if you still face an issue.
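If you prefer to script the edit, here is a small sketch; the default comment text and the Java location are illustrative examples, so adjust them to your installation:

```python
import re

def set_java_home(conf_text, java_home):
    # Replace the first "# export JAVA_HOME=..." line (commented or not)
    # with an uncommented export pointing at the given Java installation.
    return re.sub(r"^#?\s*export JAVA_HOME=.*$",
                  "export JAVA_HOME=%s" % java_home,
                  conf_text, count=1, flags=re.MULTILINE)

# Example contents of conf/hadoop-env.sh; the paths are placeholders.
before = "# The java implementation to use.\n# export JAVA_HOME=/usr/lib/j2sdk1.5-sun\n"
after = set_java_home(before, "/usr/lib/jvm/java-6-oracle")
print(after)
```

The same one-line substitution can of course be done by hand in any text editor, which is all Hadoop actually requires.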

Tuesday, June 19, 2012

Variables in Oracle OBIEE 11g



There are basically 4 different types of variables in OBIEE 11g.

  1. Session Variables

  2. Repository Variables

  3. Presentation Variables

  4. Request Variables.


Session Variables:

  1. As the name suggests, session variables are created during the creation of a session, i.e., as soon as a user logs into the BI Server. So every login has its own session variables.

  2. There are two types of session variables

    1. System (which are defined by OBIEE and are reserved)

    2. Non-System (which are defined by developers)

  3. Session variables can be created only through the Oracle BI Administration Tool.




Referencing session variable:

For displaying session variables, we should use @{biServer.variables['NQ_SESSION.VariableName']}.

For using session variables in an expression, we should use VALUEOF(NQ_SESSION."VariableName").

Repository Variables:

  1.  A repository variable is a variable that has a single value at any point in time.

  2. There are two types of repository variables

    1. Static (whose value changes only if an admin or developer changes it)

    2. Dynamic (whose value is refreshed using a query)

  3. Repository variables can be created only through the Oracle BI Administration Tool.




Referencing repository variable:

For displaying repository variables, we should use @{biServer.variables.VariableName} or @{biServer.variables['VariableName']}.

For using repository variables in an expression, we should use VALUEOF("VariableName") for a static variable and VALUEOF("Dynamic Initialization Block Name"."VariableName") for a dynamic variable.

Presentation Variable:

  1. A presentation variable is a variable that can be created as part of the creation of a dashboard prompt. The prompt must be either a Column Prompt or a Variable Prompt.

  2. The value of presentation variable is set by the prompt for which it is created (upon user selection).


Referencing presentation variable:

For displaying presentation variables, we should use either

  1. @{variables.VariableName}[Format]{DefaultValue} or

  2. @{scope.variables['VariableName']}

Notes:

    1. Format and DefaultValue are optional.

    2. Format is useful to format the data, e.g., for a date the format can be MM/DD/YYYY. Note: the default value is not formatted.

    3. Scope should be used if you create variables with the same name. Scope can be analyses, dashboards, etc. The order of precedence is analyses, dashboard pages, dashboards.






For using presentation variables in an expression, we should use @{"VariableName"}{DefaultValue}. The default value is optional.

Request Variable:

  1. A request variable is used to overwrite the value of a session variable, and this happens only during request initiation to the database from a column prompt.

  2. It can be created only during the creation of a column prompt.


Referencing request variable:

Same as for presentation variables.

 

References:

Please refer to Oracle site for more details.