EssCS – Essbase LCM Utility - 27-Mar-2017 13:22 - Sarah Zumbrum

About a week and a half ago I wrote at a high level about 2 Essbase Cloud Service (EssCS) command line scripts, the Export Utility and Command Line Tool. Another utility in the mix that I didn’t talk about (due to environment constraints) was the Essbase LCM Utility. I finally got my environment set up to use the tool and played around with an export and import today.

On the EssCS homepage, you will see a green icon named “Utilities”. I clicked on this to download the utility.

In the list, it will be the third item. I clicked the down arrow next to “Life Cycle Management”.

After I downloaded and extracted the tool to my favorite utility location, I read the “README.txt”. Below I have highlighted some items I thought were key:

· The supported on-premises releases of Essbase, and 12c EssCS.

· The command and variables to use when exporting a cube.

· The command and variables to use when importing a cube.

Note that there are also details about how to deal with scenarios (workflow and/or sandbox) and partitions. I’m not using these in my example cube, so I won’t be addressing them in this blog post.

The first thing I want to do is export my on-premises (OP) cube to a zip file. I have highlighted that syntax below in the window. Also, I have supplied it below so you can copy and paste it for your environment.

EssbaseLCM export -server {servername}:1423 -user {username} -password {password} -application {appname} -zipFile {FileName}.zip

If all goes successfully, you should get a line for each artifact in your application. You can also choose not to export data by adding “-nodata” at the end of the export command.

Note: The parameters given in the command can be in any order, not just the one shown!

Because I’m curious, I opened the zip file to see how it was organized.

I can see there are folders for the various artifacts.

Now to import the application… I tried the import command for a new cube first, but because I already have the cube in my environment, I was given an error and told to use “-overwrite” in my parameters. So, that’s what I did (shown with the yellow box). The command I used for the import was:

EssbaseLCM import -server {servername}:1423 -user {username} -password {password} -application {appname} -zipFile {FileName}.zip -overwrite

To double-check that it worked, I logged into EssCS, chose my database name, and clicked “Settings”.

In the Properties tab under Basic, I can see that the expected number of members were loaded as well as data. Perfect!

The data is not aggregated, so I thought it would be a good chance to see if my calc scripts came in…Yep! From here I could execute the scripts if I desired.

And I’m done! That was easy enough…

And to take it a step further, this is easily something that could be run routinely via a batch (or shell) script. This would be great for backing up your environment, versioning, or simply moving cubes from one environment to another. …And if you were wondering whether you can only download from OP: nope, cloud is an option as shown below:

Command Line:
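The routine-backup idea above can be sketched as a small shell script. The server, credentials, and application names below are placeholders, and the leading `echo` keeps it a dry run until you substitute real values:

```shell
#!/bin/sh
# Routine LCM export for backup/versioning -- a sketch, not a finished tool.
# Replace the placeholder server, user, and application names for your site.
APPS="Sample Demo"                    # hypothetical application names
STAMP=$(date +%Y%m%d_%H%M%S)          # timestamp each backup for versioning
BACKUP_DIR=./lcm_backups

mkdir -p "$BACKUP_DIR"
for APP in $APPS; do
  ZIP="$BACKUP_DIR/${APP}_${STAMP}.zip"
  # Drop the leading 'echo' to actually run the export:
  echo EssbaseLCM export -server essbase-host:1423 -user admin \
       -password secret -application "$APP" -zipFile "$ZIP"
done
```

Scheduled via cron (or Task Scheduler on Windows), this gives you dated zip files per application.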


Tips for Using Essbase in Data Visualization - 23-Mar-2017 15:40 - Sarah Zumbrum

Last Friday I wrote a blog on how to connect Essbase and Essbase Cloud Service (EssCS) to Data Visualization (DV). I’ve decided to put together a couple tips to help you start using Essbase/EssCS as a data source in DV.

Let’s start…

If you followed the steps and have Essbase or EssCS as a DV data source, you might find yourself annoyed that the hierarchy names came in as Generations. Here is one of my examples from last week:

If you administer a cube, you might know what each generation refers to in the dimension. For example, “Gen2, Location” might be country, “Gen3, Location” might be state, and “Gen4, Location” might be city. However, if you are the end user, you might get confused with these details.

Doesn’t this look easier to navigate?

So, how can you give the generations a clearer definition of the hierarchy granularity? It’s actually pretty simple! I’ll show you how to do this in EssCS and then in Essbase, but it’s really the same logic, just different steps.


There are two ways we can define generations for EssCS. You can choose which one you like best.

The first way is to log into EssCS, choose your cube, and click “Settings”.

In the “Dimensions” section, you will see each dimension listed with details.

For demo purposes, I’m going to update Years.

Note that the generation names are “Gen1” and “Gen2”.

Let’s update “Gen1” to be “All Years” and “Gen2” to be “Year”. This is done by clicking the name (“Gen1” or “Gen2”) and typing the name you would like to use.

Once I have finished naming my generations, I click “Save”.

Now, when I go into DV and connect to this cube, I have an updated data element name.



The second way to do this in EssCS is to use the Cube Designer workbook. On the “Cube.Generations” tab, enter the names of the generations for the dimensions you want updated.

From the Cube Designer ribbon, choose “Build Cube”.

Choose to “Update Cube – Retain All Data” and click “Run”. You will likely not want to load the data sheets, but you can if you would like to.

Confirm that you want to update the listed cube.

Choose Yes to see the job run.

Once the job finishes, you should have generation names available in DV.


If you want to name the generations in Essbase, choose “Generations” from the right-click menu of the dimension you want to update:

Enter the names of the generations for that dimension and click “OK”.

When I go to DV, I see my generations updated for Department.

Note: To show I wasn’t pulling any trickery, this screenshot is from my OP Essbase cube, not my cloud Essbase cube. …The Years were not updated in the OP cube!

Now we can build more meaningful visualizations in DV!

Visualizations in DV

Unlike in Smart View or HFR, you are NOT required to specify a member from every dimension in DV. Using the below as an example, I built a Row Expander visualization that is very similar to something we would normally build in Smart View, for comparison. To the right, I used a stacked bar graph with just a few data elements. I’ve modified the visualization to show the currency amounts for the total the government paid toward employees’ Basic Benefit and Thrift Savings Plans (no, it’s not real data!).

BIAPPS on PAAS – Backup and Restore - Introduction (Part1) - 23-Mar-2017 06:51 - Oracle

BI Applications (BIApps) is an integrated application involving multiple components. A backup of BIApps therefore means a backup of the integrated application, not just the data warehouse or the database.

The high-level architecture of BIAPPS on PAAS is shown below:

There are at least four different cloud service instances involved when using BIAPPS on PAAS. The following table shows the components/software installed on each:

- DBCS (Database Cloud Service): database holding the ODI repository, BIACM repository, SDS schemas, and the data warehouse schema

- BICS (BI Cloud Service): RPD, Webcat, Jazn file

- Compute Cloud: WebLogic Server; ODI Server; BIAPPS shiphome; customer data stored as files (e.g. source files, Universal Adaptor files); optionally, if installed: Corente VPN, and dev tools like SQL Developer, a browser, or ODI Studio and their associated files

- Storage Service: backups for Compute/DB

You will need to back up all of the above instances to be able to successfully restore BI Applications. Each of the cloud services provides its own backup mechanism. Relevant information for backing up each of these cloud services is available in the Oracle Cloud documentation and will be detailed in subsequent blogs in this backup series. Customers may also want to look at Oracle Database Backup Cloud Service, a separate optional cloud service for taking Oracle Database backups.

However, is it adequate to just back up each cloud instance independently? The following section details some considerations related to this question by drawing on a few examples.

Example 1: The database was backed up on Saturday 11th March at 11pm. The Compute Cloud instance was backed up earlier the same day, at 10am. Configuration done on WebLogic (JDBC data sources, memory settings, etc.) is not stored in the database, so any configuration done between 10am and 11pm on 11th March would be lost. In that sense, the backup and restore does not truly reflect the state of the integrated BIAPPS environment as it was at 11pm on 11th March.

Example 2: BICS was backed up at 10pm on Sunday 12th March. Continuing from Example 1, we have the database backup from 11pm on 11th March and the BICS backup from 10pm on Sunday 12th March. If any changes were made to the database (like adding a new table or column) followed by a change in the RPD, then there is a chance that when we restore the database and the BICS instance, we get a failure because they are no longer in sync.

As you can see from the above examples, backing up the cloud instances at different points in time can cause problems. That said, there is no single button that can be clicked to back up all the instances at exactly the same time. However, with the right process in place, it is still easy enough to back up and restore the BIAPPS application. Following are some best practices/guidelines the customer can adopt to avoid the above issues:

  1. The most volatile of the BIAPPS components is the database. That said, the database changes primarily when the ETL is run, so it is a good idea to back up the database outside the ETL window and as frequently as possible. Configuration changes done in BIACM also reside in the database, but these are less likely to occur once the initial configuration is done. Similarly, ODI repository changes reside in the database, but in a production instance these should happen only during limited, controlled windows rather than every day.
  2. The BICS RPD is tightly coupled with the database, so the customer could restrict RPD changes to certain limited days in a month and ensure there is a proper database backup along with an accompanying BICS backup outside the change period. In other words, have a quiet period for making RPD changes and ensure that BICS and the DB are backed up in that quiet period.
  3. Most of the configuration required for WebLogic is done during the initial setup. So after the full load, ensure there is a blackout period when you back up all the cloud instances. Subsequently, similar to the BICS quiet periods, ensure that changes to WebLogic and other domains on the Compute instance are made only on certain days and that they are backed up during the quiet periods.
  4. Most of the cloud services have either an API or a command line utility to back up the instance. You could consider using those to automate the backup of all the instances. Better still, you can have that script kicked off automatically at the end of the ETL load plan.
  5. When restoring the system from a backup, consider the impact of any extract from source systems. Most source systems have incremental extracts. If the last extracted date is stored in the database, then that date will also be restored as part of the database restore. However, if the extract date is stored outside the BIAPPS database (e.g. Fusion, or any on-premises source you replicate via a replication tool), then after the database restore you will need to reset the extract dates to match the data in the database and also clear any pending data in transit (such as in UCM).
  6. A full reset of the SDS and the warehouse, followed by a full load, will fix any issues with the SDS/warehouse. However, full loads are expensive, and certain source systems restrict how much data can be extracted in a day (e.g. Taleo). Further, you can potentially lose existing snapshot data when resetting the warehouse (if the snapshot data is not available in the source).
  7. When you restore a database, you restore all the tables in all the schemas; it is not easy to restore a single table. It is therefore best to keep activities that impact different schemas separate. E.g. if doing major configuration in BIACM, do it when no ETL is running and take an ad hoc backup before and after those changes. Similarly, when promoting code to the ODI production repository, do it outside the ETL window at a time when no BIACM changes are happening, and take a backup before and after those changes. This ensures you can use the DB backup to restore the database to the point in time before those changes without worrying about the impact on other schemas. For the same reasons, if you are making a change to a single warehouse table, keep a backup of that table (and other dependent tables) in the warehouse schema along with the data, so that you can use those to restore the table rather than the complete database backup.
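Point 4 above can be sketched as a sequenced wrapper script. The three backup steps below are hypothetical placeholders for whatever API or CLI call each cloud service actually exposes; only the ordering, logging, and fail-fast logic is the point:

```shell
#!/bin/sh
# Quiet-period backup sequence: run each service backup in order, log it,
# and stop on the first failure so the set of backups stays consistent.
LOG="backup_$(date +%Y%m%d).log"
: > "$LOG"

run_step() {
  STEP="$1"; shift
  echo "$(date '+%H:%M:%S') starting: $STEP" >> "$LOG"
  "$@" >> "$LOG" 2>&1 || { echo "FAILED: $STEP" >> "$LOG"; exit 1; }
  echo "$(date '+%H:%M:%S') finished: $STEP" >> "$LOG"
}

# Placeholders -- substitute the real DBCS/Compute/BICS backup commands:
run_step "database (DBCS)" echo "dbcs backup would run here"
run_step "compute volumes" echo "compute backup would run here"
run_step "BICS snapshot"   echo "bics backup would run here"
```

Kicking a script like this off from the last step of the ETL load plan keeps all the backups inside the quiet window automatically.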

There are other components involved in the BIAPPS on PAAS solution that also need to be included in the backup strategy. These include, but are not limited to:

  1. Source systems: the systems from which BIAPPS gets its data. Backups of those systems are also required when considering the entire application; however, those are typically taken care of by the source system administrators and hence are not detailed here.
  2. Replication tools: if you are not using a VPN to connect to the on-premises source systems, then you likely have some kind of replication mechanism to transfer data from the on-premises source system to the SDS. Your backup strategy ought to cover those as well.
  3. Identity domain / users & roles: these are usually maintained from the Service Admin Console (SAC). Refer to the SAC documentation on how to back them up.
  4. Any network/security rules you set up between these various instances.

The customer ought therefore to understand the entire BIAPPS architecture and design the backup strategy accordingly. The customer will also likely have Dev/Test/Prod environments, each of which is a complete BIAPPS application in itself, and will have to ensure the backup strategy covers all of them. Special care should also be taken if the customer has a T2P (Test to Production) process and one of the environments needs to be restored.

The subsequent blogs in this series will drill into the relevant backup functionality for the individual components that make up the BIAPPS on PAAS solution. Below are a few links to the backup documentation for the relevant cloud services:

Backing up Deployments on Database Cloud Service

About Database Backup Cloud Service (Optional cloud service that can be used to backup Oracle databases)

Backing up and Restoring Storage Volumes - Compute

Backing up and Restoring BICS

Disclaimer: Refer to the latest BIAPPS and Oracle Cloud Documentation as things might have changed since this blog was written.

All blogs related to BIAPPS on PAAS

BIAPPS on PAAS Backup Blog Series

Finding an Open Slip - 22-Mar-2017 09:01 - Red Pill Analytics

Let’s pretend that you need to change ports within your OBIEE 12c instance due to new applications that natively install into ports that OBIEE is running. Or maybe you want to have the ports be the same as the OBIEE 11g ports. Whatever the reason, how do we go about changing the ports within the instance? Below is a quick guide for doing just that.

Log into your OBIEE host and open a command prompt or terminal window with rights to the OBIEE instance. Check the instance status, and shut down the OBIEE environment if it is up.

Navigate to the <ORACLE_HOME>/bi/config/fmwconfig/bienv/core directory.

In this directory there is a file that controls all of the ports for your current environment. Convenient, right? Let’s edit it. Open the bienv-components.xml file.

You’ll now see a list of all of the services that you have configured for your environment. Each of these will have an xml tag called <port> with the port number in it.
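As an illustration, an entry might look something like the fragment below. Treat the surrounding element names as placeholders, since the exact structure comes from your own bienv-components.xml; the <port> value is what you change:

```xml
<!-- Illustrative fragment only: element names and values are placeholders. -->
<component>
  <name>OBIPS</name>
  <port>9502</port>  <!-- edit this value, e.g. to an 11g-style port -->
</component>
```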

Before we edit…

Simply edit the file to change the port number to your new port number. Save and exit.


…and after!

Restart your OBIEE environment.

Remember to do this process while the environment is down as the ports will only be refreshed once it is started. Also, remember to check that there are no other services running on the port that you are changing to.
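A quick way to do that last check is from the shell. This sketch uses `ss` and assumes a Linux host; the port number is just an example:

```shell
#!/bin/sh
# Check whether anything is already listening on the intended new port.
PORT=9704   # example target port -- substitute the one you plan to use
if ss -ltn 2>/dev/null | grep -q ":$PORT "; then
  RESULT="in use"
else
  RESULT="free"
fi
echo "port $PORT is $RESULT"
```

If the port shows as in use, pick another or stop the conflicting service before editing bienv-components.xml.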

Check out our collection of tip posts here. Subscribe to have them delivered to your inbox here.

Finding an Open Slip - 22-Mar-2017 09:01 - Kevin McGinley
Photo credit: Joshua Stannard

Changing Ports in OBIEE 12c

Finding an Open Slip was originally published in Red Pill Analytics on Medium, where people are continuing the conversation by highlighting and responding to this story.

One of the really nice new features in Oracle 12c Release 2 is the ability to create an Oracle Data Guard standby database using DBCA (Database Configuration Assistant). This really does simplify the process of creating a standby database, and it automates a number of steps in the creation process that were previously performed manually.

In this example we will see how a Data Guard environment is created via DBCA and then Data Guard Broker (DGMGRL).

The source database is called salesdb and the standby database DB_UNIQUE_NAME will be salesdb_sb.

Primary database host name is host01 and the Standby database host name is host02.

The syntax is:

dbca -createDuplicateDB
    -gdbName global_database_name
    -primaryDBConnectionString easy_connect_string_to_primary
    -sid database_system_identifier
    [-createAsStandby
        [-dbUniqueName db_unique_name_for_standby]]

We will run the command from the standby host host02 as shown below.

[oracle@host02 ~]$ dbca -silent -createDuplicateDB -gdbName salesdb -primaryDBConnectionString host01:1521/salesdb -sid salesdb -createAsStandby -dbUniqueName salesdb_sb
Enter SYS user password:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/salesdb_sb/salesdb.log" for further details.

Connect to the Standby Database and verify the role of the database


Note that the SPFILE and password file for the standby database have been automatically created

[oracle@host02 dbs]$ ls -l sp*
-rw-r-----. 1 oracle dba 5632 Mar 22 09:40 spfilesalesdb.ora

[oracle@host02 dbs]$ ls -l ora*
-rw-r-----. 1 oracle dba 3584 Mar 17 14:38 orapwsalesdb


Add the required entries to the tnsnames.ora file


Continue with the Data Guard Standby Database creation using the Data Guard Broker

SQL> alter system set dg_broker_start=true scope=both;

System altered.

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release - 64bit Production
[oracle@host01 archivelog]$ dgmgrl
DGMGRL for Linux: Release - Production on Fri Mar 17 14:47:27 2017

connect /
Connected to "salesdb"
Connected as SYSDG.
DGMGRL> create configuration 'salesdb_dg'
> as primary database is 'salesdb'
> connect identifier is 'salesdb';
Configuration "salesdb_dg" created with primary database "salesdb"

DGMGRL> add database 'salesdb_sb' as connect identifier is 'salesdb_sb';
Database "salesdb_sb" added
DGMGRL> enable configuration;


Create the Standby Redo Log Files on the primary database


SQL> select member from v$logfile;


SQL> select bytes/1048576 from v$log;


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

Create the Standby Redo Log Files on the standby database


DGMGRL> connect /
Connected to "salesdb"
Connected as SYSDG.

DGMGRL> edit database 'salesdb_sb' set state='APPLY-OFF';

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1174405120 bytes
Fixed Size          8619984 bytes
Variable Size           436209712 bytes
Database Buffers   721420288 bytes
Redo Buffers              8155136 bytes
Database mounted.

SQL>  alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

SQL> alter database open;

Database altered.


Verify the Data Guard Configuration

DGMGRL> edit database 'salesdb_sb' set state='APPLY-ON';

DGMGRL> show configuration;

Configuration - salesdb_dg

 Protection Mode: MaxPerformance

 salesdb    - Primary database
   salesdb_sb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 8 seconds ago)

Set the property StaticConnectIdentifier to prevent errors during switchover operations

DGMGRL> edit database 'salesdb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host01.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';
DGMGRL> edit database 'salesdb_sb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host02.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_sb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';

Edit listener.ora on primary database host and add the lines shown below. Reload the listener.

    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_DGMGRL)
      (SID_NAME = salesdb)
    )

Edit listener.ora on standby database host and add the lines shown below. Reload the listener.

    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_sb_DGMGRL)
      (SID_NAME = salesdb)
    )

Migrating an On-Premises Essbase Cube to the Cloud - 21-Mar-2017 11:15 - Sarah Zumbrum

With Oracle Analytics Cloud (OAC) having been released last Friday (luck of the Irish!), I made it a priority to examine the level of difficulty (or ease) of migrating an on-premises (OP) Essbase cube to the cloud. Overall the process is simple, but there are a few gotchas and caveats I found along the way.

And since I post my failures on my blog, you will see what I tried that failed so you won’t make the same mistakes I did!

Below is my OP Essbase cube. It is a fairly standard workforce planning/analysis cube that I built using the US’s Office of Personnel Management (OPM) structure for General Schedules (the pay and locality pay matrix for the different jobs in the US government). This is a BSO cube.

Because I truly wanted to test the export/import process, I added Typed Measures to my OP cube to see if they came over. Here, you can see the different localities in the GS schedule.

To show I don’t have just one or two rows of data, here are the cube statistics. There’s a chunk of data in there…comparable to some cubes I’ve built in the past.

The following screen shows the outline structure a bit. In each period block, I have the base level rolling up through Department to “GS Positions”. I wanted to highlight that the Typed Measures for Locality show at the base level, or Detroit in this example.

I’ve also added the option to enter and calculate pay and locality pay percentage increases for the different Budget versions.

As you can see, it’s not a wimpy cube to export, data included. Using the tool I wrote about last Friday that exports Essbase cubes, “dbxtool”, I enter the details required to connect to Essbase and download the desired cube, USG_WF.Emp_Dets.

I get a visual cue from the tool that my cube is being exported (so much better than a simple “Success”, IMHO).

When I go to the export’s folder, I see I have 2 items. I was expecting just one – the Excel file – so I want to investigate. The zip file contains my data because the data set is too large for Excel. I understand that reasoning.

When I open the Excel file, I see many of the cube details laid out for me in “Cube Designer” format (notice this is also the tab that opens when I open this file). I see my application name, database name, and dimensions listed.

On the Cube.Settings tab, I see the details about my cube. Everything looks correct to me.

The Cube.Generations tab is empty.

Next, the different dimensions are listed out on their own tab. I see the dimension name and hierarchy under “Members”. In the Definitions section, I see that the file and rule names have been provided for me.

Since I expected the dbxtool to export the OP cube in ready format for cloud load, I decide to load it to OAC as is. I need to set my Essbase cloud connection, and I did that by clicking “Connections”.

I entered the URL for Essbase.

Next, I clicked “Transform Data” to load the workbook.

You can have the system choose the cube name, or you can modify it yourself. I did not make any modifications and clicked “Run”.

The tool asks if I want to build the cube…yes, Run.

Once the job has been started, you can choose to go to Job Viewer to see the progress.

I see my cube load is running. I can click “Refresh” to see the updates.

Real World OBIEE: Demystification of Variables Pt. 2 - 21-Mar-2017 10:00 - Rittman Mead Consulting

In part one of this blog series, I went over using bins and presentation variables to dynamically create groups and switch between them in a report and on a dashboard. In part two, I am going to talk about making reports dynamic for periods of time using repository, system and presentation variables. Before I dive into an example, there are a couple of things I would like to cover first.


The sysdate function returns the current datetime set by the system where the database resides. Sysdate is a really useful function for creating repository variables for use with date dimensions. If I go into SQL Developer, I can write a query to return the current sysdate:

select sysdate from dual;


The current_date function returns the current datetime set by the system where the BI Server resides. This datetime may differ from sysdate depending on the geographical location of the database vs. the system that OBIEE resides on. I can write a query using SQL Developer to return the datetime using the current_date function:

select current_date from dual;

Since my database and OBIEE instance are on the same system, sysdate and current_date are the same.


When using sysdate or current_date to create repository variables for dates (which I am going to show in an upcoming example), you have to keep something in mind. While the date may match, the time may not. To show an example of this, I am going to join one of my date columns with sysdate.

select sysdate, dim_date_key
from gcbc_pef.dim_date
where sysdate = dim_date_key;

If I run this query, I don't get an error but I get no results.

Why? To answer this, I need to write a query to inspect my date column.

select dim_date_key from gcbc_pef.dim_date;

As you can see by the results of my query, the DIM_DATE_KEY column does have the same format as sysdate but all the times are set to 00:00:00 (or midnight). To further demonstrate the difference between my date column and sysdate, I am going to write a new query and use the TRUNC (or TRUNCATE) function.

select sysdate, dim_date_key
from gcbc_pef.dim_date
where trunc(sysdate) = dim_date_key;

As you can see, the query runs successfully but notice how sysdate and DIM_DATE_KEY still have different times. How is the join possible? Because I used the truncate function in the where clause in my query for sysdate. Without going into too much detail, using truncate on a date function without any formatting (which I will cover later) will set (or truncate) the datetime to the start (or midnight) of the current day. For example, if I run another query that just selects the truncated sysdate from dual, I get this result.

select trunc(sysdate) from dual;

Now, let's dive into an example.

Note: For all of the examples in this blog series I am using OBIEE.

The Scenario

In this example, I have been asked to create a report that will reside on a products dashboard. It needs to have the same product grouping as the report I used in part one of this series, needs to contain Gross Rev $, Net Rev $ and # of Orders, and needs a prompt that can select between the first and current day of the month and every day in between. The person who requested the report wants the prompt to change dynamically with each month and does not want users to be able to select future dates.

There are two foreseeable challenges with this report. The first, and probably the most obvious, is how to make the date prompt for the current month and have it change dynamically with each month. The second is how to pass the dates into the report.

There is one more challenge I will have to tackle: there is a gap in the data loads for # of Orders. Data does not update until the 2nd or 3rd of each new month. This wouldn't be a big deal except that the person who requested the report wants a summary of the previous month's # of Orders shown until the data is updated for the current month.

Fortunately, by using Repository, System and Presentation Variables, I can accomplish all of the requirements of this report.

The Example

For this example, I am going to start by creating Repository Variables to use with my date column in order to make the dates dynamic. There are other ways to make dates dynamic using functions within Answers but they are a little bit trickier to use and are less common. I am going to go over some of those functions in part three of this blog series.

Repository Variables are created using the Admin Tool. By launching the Admin Tool and opening my RPD in online mode (can also be created offline), I can go to Manage > Variables to start creating my first Repository Variable.

From the Variable Manager window, I can create a Repository Variable by selecting Action > New > Repository > Variable.

I am going to start by creating the Repository Variable for the current date. Since this variable will be dynamic, I need to make sure I select the option 'Dynamic' and I am going to give it the name USCurDate.

Now I need to create a new init block. I can do this by clicking New...

Once in the Repository Variable Initialization Block screen, I need to give the init block a name, set the schedule for when the variable (or variables) will be refreshed, then click Edit Data Source to define the connection pool the init block will use, as well as the initialization string (query) the init block will use to populate the Repository Variable.

In the data source window, I am going to set my connection pool to one I have created just for my init blocks and then type in the following into the initialization string window:

select TRUNC(sysdate) from dual;

If I click Test, the query will execute and will return a result.

Notice how the result is the same as the query I ran using SQL Developer earlier.

Now I need to create a Repository Variable for the first day of every month. I am going to use the same method as before and name it USMoBeginDate. The query I am going to use is slightly different from the previous query. I still need to use the TRUNC function but I also need to apply formatting so that it truncates to the start of the month. I am going to enter the following into the initialization string window:

select TRUNC(sysdate, 'MM') from dual;

Some other useful queries I can use are:

First Day of the Current Year

select TRUNC(sysdate, 'YY') from dual;

Last Day of the Previous Year

select TRUNC(sysdate, 'YY') -1 from dual;

Previous Year Date

select TRUNC(ADD_MONTHS(sysdate, -12)) from dual;

Now I need to create a Repository Variable for the previous month to use with my # of Orders measure column. Upon inspection, I discover that the column I need to use is called Calendar Year Month and is a VARCHAR (character) type. If I go into Answers and pull in the Calendar Year Month column, I can see the format is 'YYYYMM'.

To create the Repository Variable, I am going to use the same method as with the current date and first day of the current month Repository Variables and issue a new query. Because the Calendar Year Month column is a VARCHAR, I need to use the to_char function to change sysdate from a date type to a character type, use some formatting syntax and use some basic arithmetic. The query is as follows:

select to_char(to_number(to_char(sysdate, 'YYYY')) * 100 + to_number(to_char(sysdate, 'MM')) - 1) from dual;

To break down each part of this query, let's start with the year. In order to use the 'YYYY' format I must first cast sysdate to a character (to_char(sysdate, 'YYYY')). Then I need to cast that result back to a number so that I can multiply it by 100, which gives the result 201500. The reason for this is that when I add the month number to the year multiplied by 100, there will always be a leading 0 for month numbers 1-9. To get the previous month number, I have to cast sysdate to a character using the 'MM' format, cast it back to a number and subtract 1 (to_number(to_char(sysdate, 'MM')) - 1). Finally, I cast the entire statement back to a character type so that it matches the type of the Calendar Year Month column. When I run the query, I get this result.
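One thing to watch out for with this arithmetic: subtracting 1 from the month number breaks in January, where it yields month 0 (e.g. 201500 rather than the December value of the previous year). A simpler alternative, assuming the same 'YYYYMM' target format, is to let ADD_MONTHS handle the year rollover:

```sql
-- Previous month in 'YYYYMM' format; ADD_MONTHS rolls the year back
-- correctly in January (December of the prior year).
select to_char(add_months(sysdate, -1), 'YYYYMM') from dual;
```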

Now that I have my three repository variables (USCurDate, USMoBeginDate and Prev_Month) I can start to create the report.

I'm going to fast-forward a little bit to the part of the report creation process where I use the Repository Variables I created with the Admin Tool. Since I am using virtually the same report as in part one of this blog series, please refer back to that post for how to create custom groups using bins, presentation variables and custom value prompts.

Because of the delay in the data load for the # of Orders at the beginning of the month, I cannot use a global report filter. Instead, I am going to have to use something called a Filter Expression within each measure column formula.

About Filter Expressions

Unlike global report filters, column formula level filter expressions are used when you need to specify a particular constraint within the column formula itself. Because the filter is at the column formula level, it is independent of any subsequent column filters.

Note: When using a column formula filter for a measure, you cannot add a global filter on the same data subject on top of it. For example, if I am using a column-level filter for a particular year and month, I cannot add a global filter for a particular year. The two filters contradict each other and the result will be null.
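As a quick sketch of how such a contradiction arises (the presentation table name for the measure is my assumption; the other names follow the examples later in this post):

```sql
-- Measure pinned to the previous month inside the column formula:
FILTER("Sales - Fact Sales"."Facts"."# of Orders"
  USING ("Periods"."Calendar Year Month" = VALUEOF("Prev_Month")))
-- If a global report filter then restricts the same subject to the
-- current month, no rows can satisfy both constraints at once and the
-- measure comes back null.
```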

To add a filter in the column formula, go to Edit formula, make sure the column syntax is highlighted and click Filter.

From here the Insert Filter window will pop up and I can select the attribute column to filter the measure by. Here, I want to use the column Day Date to filter Gross Rev $ by the day.

I can add a column by double clicking it in the Subject Areas pane. When a column is added, I will be prompted with a New Filter window and from here, everything is exactly the same process as adding a global report filter.

Here I need to define the operator as is between since we are dealing with date ranges. I could call my Repository Variables for current_date and first day of the month here but, because the request is for a prompt to select between date ranges, I am going to have to call Presentation Variables and use the prompt to populate the actual values.

Note: If you are unsure about the functionality of Presentation Variables, see part one of this blog series.

To add Presentation Variables to the filter expression, click Add More Options and select Presentation Variable from the dropdown.

When a Presentation Variable is added to the filter, two new text boxes appear. The Variable Expr box is where you define the variable to be used and the (default) box is used to add a default value. The default value is optional but, when defining a Presentation Variable within a filter, you have to specify a default value in order to get any results. The reason is that, when the report is run, the issued query will use the Presentation Variable placeholder as defined unless a default value is specified. In other words, the default value will always be used unless the Presentation Variable is populated with a value or a list of values.

Because I want the users to be able to specify a date range, I need to define two Presentation Variables: one for the start date and one for the end date. I can add another place for a Presentation Variable by simply clicking Add More Options again and selecting Presentation Variable.

Now I need to add both my start and end date Presentation Variables in the Variable Expr boxes. I’m going to call my start date presentation variable pv_start_dt and my end date presentation variable pv_end_dt. I am also going to specify a default date range from the beginning of the current month (10/01/2015) to yesterday's date (10/15/2015).

If I click OK, I will be taken back to the Insert Filter screen where I can see the filter expression previously defined.

Clicking OK again will return me to Edit Column Formula which shows the column formula with the filter expression defined in the previous steps.
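The resulting column formula looks roughly like the following (a sketch: the presentation table name for the measure is my assumption, and the default dates are the ones used in this example):

```sql
FILTER("Sales - Fact Sales"."Facts"."Gross Rev $"
  USING ("Periods"."Day Date" BETWEEN
    @{pv_start_dt}{date '2015-10-01'} AND
    @{pv_end_dt}{date '2015-10-15'}))
```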

Now I have to do the exact same thing for the Net Rev $ column. Since the filter expression is identical, I can simply copy and paste the column formula for Gross Rev $ and replace the column name in the expression.

Now I need to take care of the # of Orders column. This column is tricky because of the gap between the 1st and the 2nd or 3rd of every month. I could use a filter expression that defaults to the previous month by using the previous month repository variable I created in a previous step, but this alone wouldn’t switch over when the data became available.

So how can we fulfill the requirement of the report if we don’t know the exact date in which the data will be available? This can be accomplished by using a CASE statement as shown previously in part one of this series. We can break the Case statement down into two parts or two conditions:

1. When the day for the current month is less than or equal to 2 OR if # of Orders is null, then filter # of Orders by Calendar Year Month using the value of the Prev_Month Repository Variable.

2. When condition one is not true, then filter # of Orders by Day Date between the values of the pv_start_dt and pv_end_dt Presentation Variables.

Putting both conditions together and using the correct syntax for Column Formula results in the following formula:
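A sketch of what that column formula might look like (the measure and column names are assumed to match the rest of this example; this is not a screenshot-exact reproduction):

```sql
CASE
  WHEN DAY(CURRENT_DATE) <= 2
    OR "Sales - Fact Sales"."Facts"."# of Orders" IS NULL
  THEN FILTER("Sales - Fact Sales"."Facts"."# of Orders"
    USING ("Periods"."Calendar Year Month" = VALUEOF("Prev_Month")))
  ELSE FILTER("Sales - Fact Sales"."Facts"."# of Orders"
    USING ("Periods"."Day Date" BETWEEN
      @{pv_start_dt}{date '2015-10-01'} AND
      @{pv_end_dt}{date '2015-10-15'}))
END
```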

Note that I am using CURRENT_DATE in my column formula. In this case, I am extracting the day number from the current date by using the extract day function (DAY(CURRENT_DATE)). I am going to talk about this in further detail when I cover using built-in functions in Answers to make reports dynamic in part 3 of this series.

Now I need to create my dashboard prompt. I am going to start by clicking on New > Dashboard Prompt.

I need to create two prompts: One for the start date and one for the end date. Because I am using presentation variables as placeholders for the date between values, I have to use a Variable Prompt instead of a Column Prompt. Variable Prompts allow us to define a presentation variable and then define a list of values for the users to select from.

To create a Variable Prompt for Start Date, I can click on the new prompt icon and select Variable Prompt.

There a few things I need to do in order to make this prompt function for the report. First, I have to define the same presentation variable name (pv_start_dt) that I used in the filter expressions for the Gross Rev $, Net Rev $ and # of Orders columns.

Because this is not a column prompt, I have to manually specify the values I want the user to be able to select from. Rather than typing in each value, I can use the SQL Results option from the Choice List Values dropdown and use a SQL statement to select the exact values that I want.

This may seem daunting at first but there is a very straightforward way to accomplish this. Rather than manually writing out a SQL query, we can make use of the Advanced Tab within a new report.

I’m going to start by clicking New > Analysis and selecting the column that I want values for: Day Date.

I need to add a filter to Day Date so that it returns only the values I want the user to select from.

Now I need to set the operator to is between and add the two Repository Variables I have set up: one for the first date of the current month and one for the current date.

If I go to results, I can see the data returned with the filter I have specified.

As you can see, the Day Date column only contains the values from the first of the month to the current date (October 16th, 2015 in this example).

Now for the good stuff. I can navigate to the Advanced Tab and copy the SQL statement used to generate these values and paste them into the SQL Results text box in my prompt.

You will notice that within the SQL statement generated by OBIEE, there are numbers and s_# aliases between the SELECT and the Day Date column, another alias after the Day Date column, and an ORDER BY clause that uses the column position number "2". Without going into too much detail, this is what OBIEE uses to make the query more efficient when retrieving results from the database. In order to allow the values to populate the prompt, these have to be removed in OBIEE 12c and the ORDER BY clause has to be rewritten to reference the column itself.


SELECT
   0 s_0,
   "Sales - Fact Sales"."Periods"."Day Date" s_1
FROM "Sales - Fact Sales"
WHERE ("Periods"."Day Date" BETWEEN VALUEOF("USMoBeginDate") AND VALUEOF("USCurDate"))
ORDER BY 2

Changed to this

SELECT
   "Sales - Fact Sales"."Periods"."Day Date"
FROM "Sales - Fact Sales"
WHERE ("Periods"."Day Date" BETWEEN VALUEOF("USMoBeginDate") AND VALUEOF("USCurDate"))
ORDER BY "Periods"."Day Date" ASC

This can be a bit confusing if you are not very familiar with SQL but just remember:

When populating a prompt using a SQL statement in OBIEE 12c, remove the number and the s_# alias between the SELECT and the first column, remove the aliases after any subsequent columns, and make sure the ORDER BY clause references the actual name of the column you want to order by.

Note: If you do not require any values to be in order, you can omit the ORDER BY clause altogether.

If I expand Options in the Edit Prompt window, I can add a default selection or a default value that the prompt will start with. I can use the USMoBeginDate here as well so that the prompt always starts with the first date of every month as the start date.

Note: You will notice that under Options in the Edit Prompt window there is a Variable Data Type option with a dropdown selector. This can be used if the data type needs to be specified to something other than the default which is ‘text’ or character type. If you are getting an error when running the report that says “Selected value does not match datatype. Expected [this value] but got [this value]” you need to change the Variable Data Type to the datatype of the column you are prompting on. In this example, we are prompting a date datatype so therefore it needs to be set to date.

If I click OK, I can check the values in the display window by clicking the dropdown for the Start Date prompt I just created.

The blue checkmark indicates the value that is selected which, because the default value was set using the USMoBeginDate Repository Variable, defaults to the first date of the current month (October 1st, 2015 in this example).

Now I need to create another Variable Prompt for the End Date. The SQL statement used for Start Date can be reused for the values, as we want the exact same values to be available for selection. I am going to name the presentation variable pv_end_dt, and set the default value to the USCurDate Repository Variable so that the End Date prompt always defaults to the current date.

Now all that’s left to do is put the prompt and report on the Dashboard. Here is the result.

So that concludes part 2 of Demystification of Variables. Please feel free to ask questions or leave me a comment! In part 3, I am going to talk about using built-in front-end functions and presentation variables to make reports dynamic for any series of time. Until next time.
