Your Three Must-Know EPM and Analytics Themes from OpenWorld 2017 - 18-Oct-2017 05:38 - Performance Architects

Author: Kirby Lunger, Performance Architects

The enterprise performance management (EPM) and enterprise analytics (business analytics, BI, data visualization) arenas may have seemed like an afterthought at Oracle OpenWorld this year, since the conference focused on autonomous (self-learning) databases, blockchain, and cloud infrastructure. Never fear! I’ve distilled the top three themes from these areas to bring you up to speed as quickly as possible.

  1. Everyday Low Prices: Walmart Pricing and (Less) Discounting Now Applies

One of the most attractive things about buying cloud solutions is that we can purchase “by the drop,” meaning that we can expand capacity (seats, CPUs, whatever) as we need them, at an affordable price.

For the last several years, Oracle was dinged by analyst firms like Gartner for maintaining a sales model that wasn’t optimized for the cloud world (aka: a very large field sales force used to long, protracted negotiations with a lot of off-the-list discounting).

The beast finally awoke!  Right around the time of OpenWorld, Oracle quietly slashed the prices of Oracle Analytics Cloud (OAC), their next-generation analytics platform combining BI, data visualization and analytics (Essbase!) capabilities, by 50%.  We believe this will lead to much clearer sales incentives and less required discounting on Oracle’s part moving forward…which should frankly make your life (and our experience as a partner) much easier.  As far as we know, this only applies to the PaaS arena at the moment, although we fully expect this operating model to edge into the SaaS and IaaS areas over the coming fiscal year.

Oracle has also spent a lot of time over the last several months clarifying and narrowing down the list of offerings in each of its SaaS, PaaS, and IaaS focus areas.  You can go to the EPM arena, for example, and clearly see the list of modules and their prices with a “Purchase Now” button.  The days of trying to track down one of your 15 Oracle sales reps to get a quote are on their way out!

As part of this, product naming and grouping/bundling has been simplified.  You’ll notice the “Service” is getting dropped from most of the cloud product names (e.g., Enterprise Planning and Budgeting Cloud Service or EPBCS is now often referred to as “Enterprise Planning Cloud” or “EPC” on the Oracle website).

  2. The Dashboard is Dead: Information Before You Know You Need It Is the Name of the Game

Think about all of the notifications you receive from shopping sites based on your past and predicted shopping behavior.  You don’t painstakingly create a dashboard to analyze and report on your behavior…this just happens based on the data they collect on you and alerts are pushed to you on an “as-needed” basis.

And where do you consume your information?  On your mobile phone.  As of November 2016, mobile web traffic overtook computer web traffic, and 66% of all emails are now opened on a mobile phone!

We’ve all been saying this for years…and now it’s finally happening.  Using adaptive intelligence (Oracle’s term for what a bunch of other folks used to call “artificial intelligence”) and machine learning, Oracle is betting the ranch on proactive alerting throughout their solution stack to address this mobile-first, information-push world.

In the enterprise analytics field, they’re investing heavily in mobile enhancements to the visualization capabilities in OAC and to their mobile applications (Day-by-Day and Synopsis), while in EPM they continue to focus on predictive modeling enhancements to the EPM product suite.

  3. Long Live the Data Lake: “Directionally Correct” is All the Rage

First of all, EPM folks, don’t freak out.  This doesn’t mean transactional systems are going away…we’ll be in a world where “hybrid” data storage models rule for many years to come. It just means that the world is moving to a “directionally correct” orientation instead of a “precisely wrong” focus.

What does this mean?  Rather than waiting for complete, tried-and-true, cause-and-effect historical relationships to be established in a traditional relational database structure that is then reported out in a dashboard, Oracle is using data lake technologies (which store data in its raw format until it is needed for analysis) to provide pointers to possible trends that could predict an outcome.

A centerpiece of this is the new OAC – Data Lake edition (which we think signals a shift away from the Oracle “Big Data Discovery” messaging of the past few years), which launched the week of OpenWorld.

That said…Oracle did announce a significant update to a more traditional EPM data management solution for you “structured data” fans: the Enterprise Data Management Cloud Service (also known as Enterprise Data Management Cloud).  This is the totally rearchitected cloud version of the on-premise Oracle Hyperion Data Relationship Management (DRM) product, and is due out by the end of the 2017 calendar year.  It should make cloud-based enterprise data management a lot easier.

Want to learn more about the evolution of Oracle’s EPM and enterprise analytics solutions and what this means for you?  Contact us at communications@performancearchitects.com and we’d be happy to set up a time to talk.

Patching Essbase in OAC to 17.3.3-926 with Oracle 11g RDBMS - 18-Oct-2017 05:21 - Performance Architects

Author: Andy Tauro, Performance Architects

I recently applied the 17.3.3-926 patch to Oracle Essbase Cloud Service (ESSCS) in Oracle Analytics Cloud (OAC). If you have not patched OAC yet, the good news is that this process is as close to “one-click” as it can get, with the capability to perform a “pre-check” of the instance (and while we have yet to see this fail, it is good to know that we can check to see if any known conditions for failure exist).

When the patching is initiated, it first performs a backup of the instance, just in case a rollback is needed. Once the backup is successful, the patching process begins. In our experience, this process takes about 30 minutes, including the backup. However, I fully expect that will vary depending on how much content is housed within the instance.

When I restarted the instance after the patch, I found that Essbase would not start. Digging through the log files on the instance, I found an error message that indicated that a “schema update” failed with the error message: “ORA-02000: missing ( keyword.”

Since the patching process in ESSCS is hidden from the customer, we contacted Oracle Support. After consulting with the product development team, we found out that the issue was because we were using Oracle Database 11g as our relational data store, a.k.a. DBaaS (Database-as-a-Service). By default, the update assumes that the latest version of the Oracle database (12c) is being used. To overcome this, Oracle Support provided a fix that required an update to a server script. The script being updated (setDomainEnv.sh) is found at the location “/u01/data/domains/esscs/bin” and Oracle Support advises backing up this script before making changes.

To change this script, one needs to switch to the “oracle” user with the command “sudo su - oracle”. Find the following string in the script:

“-Dessbase.datasource=BIPlatformDatasource”

and replace it with:

“-Dessbase.datasource=BIPlatformDatasource -Dessbase.schema.update.disabled=true”
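
If you have more than one instance to fix, the same substitution can be scripted. Here is a minimal sketch (our own illustration, not anything Oracle supplies; the file path and strings are the ones above, and in practice a quick edit in vi as the oracle user works just as well):

```python
# Illustration only: apply the workaround to setDomainEnv.sh.
# Run as the oracle user (sudo su - oracle); a backup is taken first, as advised.
import shutil

path = "/u01/data/domains/esscs/bin/setDomainEnv.sh"
old = "-Dessbase.datasource=BIPlatformDatasource"
new = old + " -Dessbase.schema.update.disabled=true"

shutil.copy2(path, path + ".bak")  # keep a copy of the original script

with open(path) as f:
    text = f.read()

if new in text:
    print("Flag already present; nothing to do")
elif old in text:
    with open(path, "w") as f:
        f.write(text.replace(old, new))
    print("setDomainEnv.sh updated")
else:
    print("Datasource setting not found; check the file manually")
```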

Once the changes are made, log out of the instance and restart it using the “My Services” console. The change tells Essbase to skip the “Schema Update” step. As long as there isn’t anything else broken with the instance, this should allow Essbase to start up.

Oracle confirms that there is no functionality lost in using Oracle Database 11g with ESSCS, and this issue will be fixed in patch 17.4, which is expected out in a couple of months. However, this stresses the importance of having a pre-production instance to verify that deployed patches will not introduce issues with your setup. Since a PaaS like OAC allows for a lot of flexibility in how one uses one’s environment, it is important that such system changes are tested before they are released.

While the included pre-checks can validate conditions on the Essbase server itself, not every possible condition can be checked for once you consider the ways an instance can be connected to other services, whether in the Oracle Cloud or outside it, such as on-premise systems. That is because the tools included with OAC are robust and flexible enough to allow for solutions limited only by one’s imagination.

ODTUG Elections – Vote Now! - 16-Oct-2017 11:19 - ODTUG
Elections for the 2018-19 ODTUG Board of Directors are underway — vote now! Exercise your right as an ODTUG member and vote for the board. This may be the most important thing you can do for ODTUG.
FDMEE - building Essbase dimensions - Part 1 - 16-Oct-2017 02:50 - John Goodwin
As you are probably aware, FDMEE is great at processing and loading data but not so good when it comes to metadata; currently the only way to load metadata without customisation is with a supported ERP source system, and even then the functionality is pretty limited.

In the past I wrote about a way to handle Planning metadata through FDMEE using a custom jython script, so I thought it was time to turn to Essbase and look at a potential solution to building dimensions.

In the last post I demonstrated how easy it is in FDMEE to interact with the Essbase Java API using jython; continuing that theme, the method I will go through in this post will also use the Java API.

I am going to take a different approach than I did with loading Planning metadata, where it was all controlled by a custom script; this time I am going to create a custom application which will allow the metadata to be loaded into FDMEE before loading to an Essbase database.

In summary, the process flow will be to load a text file containing the Essbase dimension information to FDMEE, map the metadata, export to a text file and then build the dimension using an Essbase load rule.

As usual I am going to try and keep it as simple as possible; the aim here is not to provide a complex step-by-step guide but to plant ideas, and then the rest is up to you.

So let us get on with it. I have put together a source comma-separated file which is in parent/child format; the idea is to load it to FDMEE, map, export and then perform a dimension build against the existing product dimension in everybody’s favourite Sample Basic database.
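
As a purely made-up illustration of the shape of such a file (the member names below are examples, not the ones used in the actual build), it could look something like this:

```
Parent,Child,Alias,DataStorage,Consolidation
Product,500,New Drinks,Store Data,+
500,500-10,Lemon Fizz,Store Data,+
500,500-20,Berry Blast,Store Data,+
```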


In FDMEE a new custom target application is created.


New dimensions are added to the custom application to match the source file. I understand that in this scenario they are not really dimension names but dimension build properties; usually you would be loading data by dimension to a target application, but as this is a custom application and solution, the concept of a dimension can be ignored and thought of more as a property.


The source file is in the format of parent member, child member, alias, data storage and consolidation operator, so the dimensions are added to reflect this. If there were additional columns in the source file, they could easily be added into the custom application; even properties that are required in the dimension build but are not in the source file could be generated in FDMEE.

One of the properties needs to be assigned a target class of Account for the solution to work; it is not important which one, and the rest can be set as Generic.

It is important to note that when working with a custom application, the order of the data written to the output file is defined by the order of “Data Table Column Name”: Account, Entity, UD1, UD2 through UD20, then AMOUNT.

So in my example the output file will be in the order of ACCOUNT, UD1, UD2, UD3, UD4 which maps to Parent, Child, Alias, DataStorage, Consolidation.

On to the import format: the source is set as a comma-delimited file and the target is the custom application that has just been created.


The source columns and column numbers from the file are mapped to the target. You will notice that there is a target amount column which is added by default; I am not interested in this column and it is not present in the source, but it needs to exist, so I just map the source field to 1 and the value to 1, which will become apparent later.

There is not really anything to say about the location, as it just uses the default values with the import format selected.


A new data load rule is created, the import format and source file name are selected and I uploaded the source file to the FDMEE inbox.


In the rule target options the property value has been set to enable the export to a file, and the column delimiter will be a comma; the export file is required, as it will then be used for the dimension build using an Essbase load rule.


In the custom options for the rule I have added some integration options; they basically define the Essbase application, database, dimension and rule name. It will be clearer how they are used later when I go through the jython script.


I have kept the data load mappings extremely simple and in the main they are like-for-like mappings, though this is where you could get as complex and creative as you like depending on how your source file differs from the target dimension build file.


I did add explicit mappings for the data storage member property as the source file contains a more meaningful name than the property values required for an Essbase dimension build.


The Essbase administrator documentation has a table containing all of the property codes and their descriptions.
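
As a rough guide, and assuming source values like “Store Data” or “Never Share” (the names in your file may differ), the explicit data storage mappings end up along these lines; double-check the property code table in your version of the documentation:

```
Store Data     ->  S
Never Share    ->  N
Label Only     ->  O
Dynamic Calc   ->  X
```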

At this point I can run a data load to import the source file, map and then export.


From the workbench, you can see the full import to export process has been successful.

The source to target mappings can be viewed and you will also notice there is an amount column which I fixed to a value of 1 back in the import format.

The output data file name will be generated based on <target_application_name>_<process_id>.dat and will be written to <application_root_folder>\outbox directory.


The output file is ready for a dimension build using an Essbase load rule.


I am not going to go through the process of how to build a load rule in the EAS console but here is the completed version.


As the file has a header record, this has been set to be skipped in the load rule, and the amount column has been ignored in the field properties of the rule.

The rule is named the same as the integration option value which was defined earlier in the FDMEE load rule.

The dimension could now be built using the rule and file but we are going to get FDMEE to do that using a jython script.

If you look in the FDMEE log for the process that was just executed you will see reference to jython event scripts that are called at different stages throughout the process.

For example, after the export file has been created there will be the following in the log:

INFO  [AIF]: Executing the following script: <application_root_folder>/data/scripts/event/AftExportToDat.py

The scripts are not there by default, so you may get a warning saying the script does not exist; if they don’t exist, it is just a matter of creating the script and it will be executed the next time the process is run.

Please be aware that if event scripts have been enabled and the script exists, it will always be executed, so you need to code it so that only the logic you are interested in is triggered for this process.
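
A minimal skeleton for that guard might look like the following; the target application name is a made-up example and the fdmContext key is indicative, so adjust both to your own setup:

```python
# AftExportToDat.py - only act when the export comes from our custom Essbase application.
# fdmContext is supplied by FDMEE to event scripts; the key name below is indicative.
if fdmContext["TARGETAPPNAME"] == "ESSBASE_DIM_BUILD":
    fdmAPI.logInfo("Starting Essbase dimension build for the custom application")
    # ... dimension build logic goes here (see the fuller sketch further down) ...
else:
    # Any other application or process: do nothing, so normal data loads are unaffected
    pass
```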

I am going to use the above event script to carry out the dimension build using the Essbase Java API.

Now, I am not going to go through every single line of the jython script I have written and will only stick to the important sections; the script does contain comments, so hopefully it provides you with enough information.

In summary, the Essbase classes that are required to perform a dimension build are imported.

The target application name and process ID are stored in variables.

The values from the integration options in the FDMEE load rule are retrieved using the API method “getRuleDetails”; these are held in “RULE_ATTRx”.

The target Essbase application and database name are then generated from the retrieved values.

The full path to the exported text file and dimension build error file are generated.


The next section is where the Essbase Java API comes into play: a login to the Essbase server is made using a single sign-on token, so no clear-text passwords are stored.

A custom function is called which adds some additional logging to the process logs, which I will show later; it is not actually necessary to do this.

The dimension build is run using the “buildDimension” method passing in the stored rule name, load and error file.

If an error file is generated it is read and the errors are added to the process log.
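
To make the flow above concrete, here is a stripped-down sketch of what such an event script can look like. This is not the full script from this post: the application name, fdmContext keys, getRuleDetails() usage, RULE_ATTR ordering, connection placeholders and the buildDimension() argument list are all assumptions for illustration, so verify them against your FDMEE and Essbase Java API versions.

```python
# Sketch only - a trimmed AftExportToDat.py dimension build, not the complete script.
import os
from com.essbase.api.session import IEssbase

if fdmContext["TARGETAPPNAME"] == "ESSBASE_DIM_BUILD":      # our custom app (example name)
    appName = fdmContext["TARGETAPPNAME"]
    loadId  = fdmContext["LOADID"]                           # FDMEE process id (indicative key)
    outbox  = fdmContext["OUTBOXDIR"]

    # Integration options from the load rule: Essbase app, database, dimension, rule.
    # Assumed to come back in RULE_ATTR1..4 in that order - adjust to your rule.
    rule    = fdmAPI.getRuleDetails()
    essApp  = rule["RULE_ATTR1"]
    essDb   = rule["RULE_ATTR2"]
    essRule = rule["RULE_ATTR4"]

    dataFile  = "%s/%s_%s.dat" % (outbox, appName, loadId)   # exported build file
    errorFile = "%s/%s_%s.err" % (outbox, appName, loadId)   # dimension build rejections

    # Connection placeholders - supply your own user, server, provider URL and SSO token
    essUser, essServer = "svc_fdmee", "essbase-server"
    providerUrl = "http://essbase-server:9000/aps/JAPI"
    ssoToken = "<sso-token>"                                  # obtained elsewhere, not shown

    ess = IEssbase.Home.create(IEssbase.JAPI_VERSION)
    try:
        # Third argument flags the password as an SSO token, so nothing is in clear text
        dom  = ess.signOn(essUser, ssoToken, True, None, providerUrl)
        olap = dom.getOlapServer(essServer)
        olap.connect()
        cube = olap.getApplication(essApp).getCube(essDb)

        # Method name as described above; the argument list shown is an assumption -
        # check the IEssCube javadoc for your JAPI release.
        cube.buildDimension(essRule, dataFile, errorFile)

        # Surface any rejected records in the FDMEE process log
        if os.path.exists(errorFile):
            errFile = open(errorFile)
            for line in errFile:
                fdmAPI.logWarn(line.strip())
            errFile.close()
        fdmAPI.logInfo("Dimension build completed for %s.%s" % (essApp, essDb))
    finally:
        if ess.isSignedOn():
            ess.signOff()
```

Substitute your own application name, connection details and token handling; the point is the overall shape of the script rather than the exact calls.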


Now that the jython is in place the export stage of the FDMEE load rule can be run again.


The process details confirm that the export and dimension build were successful; the dimension build file can also be downloaded.


The process steps include the additional custom logging I was referring to earlier.


Opening the outline in the EAS console shows the new members and properties have been successfully created in the product dimension.


Let me demonstrate what happens when dimension build errors occur.


This time I have added an invalid record to the source file (highlighted above); the full data load process is then executed again.

Instead of a green tick, process details displays a warning icon which was generated using the custom logging function in the jython event script.


The process log contains the full path to the dimension build error file and also includes the rejected records.


Now we have the option to load data and metadata to a target Essbase database.

You don’t have to use the custom application method; if the source file does not need any kind of mapping, or you don’t require visibility of what is being loaded through the dimension build, then it could all be done with a single FDMEE custom script, which would be practically along the same lines as the code in the event script.

So what if your source is not a file but a relational database? What if you want to be able to run incremental builds for multiple dimensions? What if you don’t want to create an export file and instead want an Essbase SQL dimension build? Well, look out for part 2, where all of these questions will be answered.

Smart View Query Designer Gets Emotional - 15-Oct-2017 12:26 - Gary Adashek
Smart View and Query Designer

I can't recall ever seeing this 'Warning Box' before while using Smart View and creating Query Designer Worksheets, etc. 

So, here is a little blog post about it.

And it is a Sunday GAMEDAY with less than an hour before kick-off, so I better complete this short but loving post in time..

I was creating a few Ad-Hoc Essbase queries and wanted to take one of my analysis sheets and turn that into a nice dynamic Query Designer report.

So, I followed the steps that I normally do. I usually start with an Ad-Hoc Query first. Then let Smart View and the Query Designer do a bit of the heavy lifting and transform that sheet into a 'Query'. I applied the query and got the desired results that I wanted.

A part of my analysis required me to start over and so I went ahead and started deleting the worksheets from the workbook. When I was done, I went to save my work. Clicked Save. And then the error appeared..

Sometimes when working with Smart View, I have to admit, I am reminded of this wonderful Disney Movie, Inside Out and all of the emotions.

Good news is that it looks to only be a warning-suggestion. But it feels a bit sad or even angry that the Query Designer worksheets were missing. Sorry to upset you Smart View.

Borderline anger?

I went ahead and clicked 'OK'. Closed the workbook, re-opened and everything worked just fine.



Glad to see that Smart View was only maybe giving me a suggestion or trying to remind me :)



Well, phew, glad this was a short post. Now time for me to grab a bite to eat and watch the game!