Deliver Reports to Document Cloud Services! - 28-Apr-2017 17:32 - Oracle BI Publisher

Greetings!

In release 12.2.1.1, BI Publisher added a new feature: delivery to Oracle Document Cloud Services (ODCS). Around the same time, BI Publisher was also certified against JCS 12.2.1.x, so if you host your BI Publisher instance on JCS today, we recommend Oracle Document Cloud Services as the delivery channel. There are several reasons for this:

  1. Easy to configure and manage ODCS in BI Publisher on Oracle Public Cloud. No port or firewall issues.
  2. ODCS offers a scalable, robust and secure document storage solution on cloud.
  3. ODCS offers document versioning and document metadata support, similar to any content management server.
  4. It supports all the business document file formats relevant to BI Publisher.

When to use ODCS?

ODCS can be used in any scenario where a document needs to be stored securely on a server and retained for any duration. Such scenarios include:

  • Bursting documents to multiple customers at the same time.
    • Invoices to customers
    • HR payroll reports to employees
    • Financial Statements
  • Storing large or extremely large reports for offline printing
    • End of the Month/Year Statements for Financial Institutions
    • Consolidated department reports
    • Batch reports for Operational data
  • Regulatory Data Archival
    • Generating PDF/A-1b or PDF/A-2 format documents

How to Configure ODCS in BI Publisher?

Configuring ODCS in BI Publisher requires the URI, username, and password. The username is expected to have access to the folder to which the files are to be delivered.
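
As a rough sketch (the original post showed the configuration screen; the field labels below are recalled from the delivery-configuration page and may differ by release, and all values are hypothetical), the ODCS server entry looks something like this:

Server Name : ODCS_Server1          (the label you later pick when scheduling or bursting)
URI         : https://mydomain.documents.us2.oraclecloud.com
Username    : delivery_user         (must have access to the destination folder)
Password    : ********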



How to Schedule and Deliver to ODCS?

Delivery to ODCS can be managed through both a Normal Scheduled Job and a Bursting Job.

A Normal Scheduled Job allows the end user to select the ODCS server and destination folder from a list of values.



In the case of a Bursting Job, the ODCS delivery information is provided in the bursting query, as sketched below.
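
The original post showed this as a screenshot. As a hedged sketch, assuming the standard bursting-control columns and that the ODCS server name and destination folder are passed as delivery parameters (the table name, server label, and folder here are hypothetical; check the 12.2.1.x documentation for the exact DEL_CHANNEL value and parameter order), the query might look like:

select customer_id               as "KEY",
       'Invoice'                 as TEMPLATE,
       'RTF'                     as TEMPLATE_FORMAT,
       'en-US'                   as LOCALE,
       'PDF'                     as OUTPUT_FORMAT,
       'invoice_' || customer_id as OUTPUT_NAME,
       'DOCUMENTCLOUD'           as DEL_CHANNEL,  -- deliver to Document Cloud Services
       'ODCS_Server1'            as PARAMETER1,   -- ODCS server name as configured above (hypothetical)
       '/Invoices'               as PARAMETER2    -- destination folder (hypothetical)
from   customer_invoices                          -- hypothetical source table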


Accessing Documents in ODCS

Once the documents are delivered to ODCS, users can access them based on their access to the folder, much like FTP or WebDAV access.

That's all for now. Stay tuned for more updates!

The Case for ETL in the Cloud - CAPEX vs OPEX - 27-Apr-2017 12:12 - Rittman Mead Consulting

Recently Oracle announced a new cloud service for Oracle Data Integrator. Because I was helping our sales team with some estimates and statements of work, I was already thinking about costs, ROI, use cases, and the questions behind a decision to move to the cloud. I want to explore the business case for using or switching to ODICS.

Oracle Data Integration Cloud Services

First, let me briefly describe what Oracle Data Integration Cloud Services is. ODICS is ODI version 12.2.1.2 available on Oracle’s Java Cloud Service, known as JCS. Several posts cover the implementation, migration, and technical aspects of using ODI in the cloud. Instead of covering the ‘how’, I want to talk about the ‘when’ and ‘why’.

Use Cases

What use cases are there for ODICS?
1. You have or soon plan to have your data warehouse in Oracle’s Cloud. In this situation, you can now have your ODI J2EE agent in the same cloud network, removing network hops and improving performance.
2. If you currently have an ODI license on-premises, you are allowed to install that license on Oracle’s JCS at the JCS prices. See here for more information about installing on JCS.
3. If you currently have an ODI license on-premises, and you don't need the full functionality of ODI JEE agents, you can also use standalone ODI agents in the Oracle Compute Cloud. These use cases are described in a webinar posted in the PM Webcast Archive.

When and Why?

So when would it make sense to move towards using ODICS? These are the scenarios I imagine being the most likely:
1. A new customer or project. If a business doesn’t already have ODI, this allows them to decide between an all on-premises solution or a complete solution in Oracle’s cloud. With monthly and metered costs, the standard large start-up costs for hardware and licenses are avoided, making this solution available for more small to medium businesses.
2. An existing business with ODI already and considering moving their DW to the cloud. In this scenario, a possible solution would be to move the current license of ODI to JCS (or Compute Cloud) and begin using that to move data, all while tracking JCS costs. When the time comes to review licensing obligations for ODI, compare the calculation for a license to the calculation of expected usage for ODICS and see which one makes the most sense (cents?). For a more detailed explanation of this point, let’s talk CAPEX and OPEX!

CAPEX vs. OPEX

CAPEX and OPEX are short for Capital Expense and Operational Expense, respectively. From a finance and budgeting perspective, the two show up very differently on financial reports, and this often has tax considerations for businesses. Traditionally, a data warehouse project was a very large initial capital expenditure, covering hardware, licenses, and project costs, which landed it very solidly in CAPEX. Over the last several years, sponsorship for these projects has shifted from CIOs and IT Directors to CFOs and Business Directors. With this shift, many businesses would rather budget for these expenses monthly as an operating expense than face large capital expenses every few years, putting these projects into OPEX instead.

Conclusion

Having monthly and metered service costs in the cloud that are fixed or predictable is appealing. As a bonus, this style of service is highly flexible and can scale up (or down) as demand changes. If you are, or soon will be, planning for your future business analytics needs, we provide expert services, assessments, accelerators, and executive consultations to assist with these kinds of decisions. When it is time to talk about actual numbers, your Oracle Sales Representative will have the best prices. Please get in touch for more information.

OUG Ireland Meetup 11th May - 27-Apr-2017 09:23 - Brendan Tierney

The next OUG Ireland Meetup is happening on 11th May in the Bank of Ireland, Grand Canal Dock. This is a free event and is open to everyone. You don't have to be a member to attend.

Following on from a very successful two-day OUG Ireland Conference with over 250 attendees, we have organised our next Meetup. It was mentioned during the opening session of the conference.


We typically have two presentations at each Meetup, and on 11th May we have:

1. Oracle Analytics Cloud Service.

Oracle Analytics Cloud Service was only released a few weeks ago, and we have some local people who have been working with the beta and early-adopter releases. They will be giving us some insights into this new product and how it compares with other analytics products like Oracle Data Visualization and OBIEE.

Running Oracle DataGuard on RAC on Oracle 12c

The second presentation will be on using Oracle DataGuard on RAC on Oracle 12c. We have a very experienced DBA talking about his experiences of using these products, how to work around some key bugs, and situations to be aware of for administration purposes. Lots of valuable information to be gained.

Check out the full agenda and register for the Meetup by clicking on this link.

There will be some food and refreshments available for you to enjoy.

The Meetup will be in the Bank of Ireland, Grand Canal Dock. This venue is a very popular location for Meetups in Dublin.


Here is an overview of a few sessions Track Lead Tiffany Briseno is most looking forward to at ODTUG Kscope17 and why she will be attending them:
Here is an overview of Kscope17 sessions Kevin McGinley is most looking forward to and his thoughts on why you should attend them, too:
Setting up Oracle Database on Docker - 21-Apr-2017 06:05 - Brendan Tierney

A couple of days ago it was announced that several Oracle images were available on the Docker Store.

This is by far the easiest Oracle Database install I have ever done!

You simply have no excuse now for not installing and using an Oracle Database. Just go and do it now!

The following steps outline what I did to get an Oracle 12.1 Database running.

1. Download and Install Docker

There isn't much to say here. Just go to the Docker website, select the version of Docker for your OS, and install it.

You will probably need to create an account with Docker.


After Docker is installed it will automatically start, and it will be placed in your system tray etc. so that it starts each time you restart your laptop/PC.

2. Adjust the memory allocation

From the system tray, open the Docker application. In the Advanced section, allocate a bit more memory. This will make things run a bit smoother. Be careful about how much you allocate.


In the General section, check the tick-box for automatically backing up Docker VMs. This assumes you have backups set up, for example with Time Machine or something similar.

3. Download & Edit the Oracle Docker environment File

On the Oracle Database download page on the Docker Store, click on the Get Content button.


You will have to enter some details like your name, company, job title and phone number, then tick the check-box, before clicking on the Get Content button. All of this is necessary for the Oracle license agreement.

The next screen lists the Docker Services and Partner Services that you have signed up for.


Click on the Setup button to go to the webpage that contains some of the setup instructions.


The first thing you need to do is copy the sample environment file. Create a new file on your laptop/desktop and paste the environment file contents into it. There are a few edits you need to make to this file. The following is the edited/modified environment file that I created and used; the changes are to DB_SID, DB_PASSWD and DB_DOMAIN.


####################################################################
## Copyright(c) Oracle Corporation 1998,2016. All rights reserved.##
## ##
## Docker OL7 db12c dat file ##
## ##
####################################################################

##------------------------------------------------------------------
## Specify the basic DB parameters
##------------------------------------------------------------------

## db sid (name)
## default : ORCL
## cannot be longer than 8 characters

DB_SID=ORCL

## db passwd
## default : Oracle

DB_PASSWD=oracle

## db domain
## default : localdomain

DB_DOMAIN=localdomain

## db bundle
## default : basic
## valid : basic / high / extreme
## (high and extreme are only available for enterprise edition)

DB_BUNDLE=basic

## end

I called this file 'docker_ora_db.txt'.

4. Download and Configure Oracle Database for Docker

The following command will download and configure the Docker image:

$ docker run -d --env-file ./docker_ora_db.txt -p 1527:1521 -p 5507:5500 -it --name dockerDB121 --shm-size="8g" store/oracle/database-enterprise:12.1.0.2

This command will create a container called 'dockerDB121'. The 121 at the end indicates the version number of the Oracle Database; if you end up with a number of containers running different versions of the Oracle Database, you need some way of distinguishing them.

Take note of the port mapping in the above command, as you will need this information later.
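
If you want to double-check the mapping later, the docker port command will report it. A quick sketch (the container name is the one chosen above); it should print something like:

$ docker port dockerDB121
1521/tcp -> 0.0.0.0:1527
5500/tcp -> 0.0.0.0:5507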

When you run this command, the image will be downloaded from the Docker Store and unzipped, and the container will be set up, ready to run.


5. Log-in and Finish the configuration

Although the Docker container has been set up, there is still some database configuration to complete. At this point you should see the new container listed in Docker.


To complete the Database setup, you will need to log into the Docker container.


docker exec -it dockerDB121 /bin/bash

Then run the Oracle Database setup and startup script (as the root user).


/bin/bash /home/oracle/setup/dockerInit.sh

This script can take a few minutes to run. On my laptop it took about 2 minutes.

When this has finished, this terminal session will remain occupied, as the script goes into a loop.

To run any other commands in the container you will need to open another terminal session and connect to the Docker container. So go open one now.

6. Log into the Database in Docker

In a new terminal window, connect to the Docker container and then switch to the oracle user.


docker exec -it dockerDB121 /bin/bash
su - oracle

Check that the Oracle Database processes are running (ps -ef) and then connect as SYSDBA.


sqlplus / as sysdba

Let's check out the Database.


SQL> select name, DB_UNIQUE_NAME from v$database;

NAME      DB_UNIQUE_NAME
--------- ------------------------------
ORCL      ORCL


SQL> SELECT v.name, v.open_mode, NVL(v.restricted, 'n/a') "RESTRICTED", d.status
     FROM v$pdbs v, dba_pdbs d
     WHERE v.guid = d.guid
     ORDER BY v.create_scn;

NAME                           OPEN_MODE  RES STATUS
------------------------------ ---------- --- ---------
PDB$SEED                       READ ONLY  NO  NORMAL
PDB1                           READ WRITE NO  NORMAL

And the tnsnames.ora file contains the following:


ORCL = (DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL.localdomain) ) )

PDB1 = (DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PDB1.localdomain) ) )

You are now up and running with a Docker container running an Oracle 12.1 Database.

7. Configure SQL Developer (on the Client) to Access the Oracle Database on Docker

You can now use your client tools to connect to the Oracle Database in the Docker container. Here is a connection set up in SQL Developer.


Remember the port-number mapping I mentioned in step 4 above? In the SQL Developer connection, the port number to use is 1527, not the default 1521.
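
The same mapping works from any client tool. For instance, with an Oracle client installed, an EZConnect string would look like this (a sketch, assuming the SYSTEM account uses the DB_PASSWD value from the environment file above):

sqlplus system/oracle@//localhost:1527/ORCL.localdomain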


That's it. How easy is that? You now have a fully configured Oracle 12.1 Enterprise Edition Database to play with, to have fun with, and to explore all the wonderful features of the Oracle Database.

BI-dentity Crisis - 20-Apr-2017 09:48 - Red Pill Analytics

BI-dentity Crisis - Figuring Out That Data is Fluid

I had just gotten out of a discovery meeting with a client when she said “I’m sure we’re the worst you’ve seen”. The goal of the meeting was to better understand the process they were using, the data source, and how they were consuming that data. It turns out that data was entered into SharePoint forms, fed into MS SQL Server, dumped into Access, cleaned up with VBA, and accessed with Tableau and Excel. I’ve seen plenty of scenarios like this in my time as a consultant, and in a previous life, I helped build a solution similar to this. These scenarios are not uncommon; usually, the business has a problem that needs to be resolved and they figure out a way to do it with the resources they have on hand. Personally, I have no beef with these types of systems. I even like seeing them because it shows how people can problem-solve with minimal resources; ingenuity can be a beautiful thing. Sure, like most people in the BI/DI/analytics/(insert all other related buzzwords here) space, I like to come into a client where everything is clean and segmented, but I suppose one of the fun things about this job is unraveling the ball of yarn and untangling the knots.

Now, back to what one of the clients said to me: “I’m sure we’re the worst you’ve seen…” I find it troubling to hear that kind of self-deprecation on such a regular basis. Why? For a few reasons: Business Intelligence and Analytics (BIA) is a spectrum, data is necessary for modern businesses, and most businesses are not like the ones seen in the latest blogs or sporting the sexiest, newest tech. There are plenty of clients I have worked with who think that since they do not fit into these boxes, they are worst in class. Most need help, which is why I am there in the first place, but even the “best in class” usually need help in more ways than just a technical solution.

The BIA Spectrum

I think that part of the issue is based on semantics. To most people, BI and Analytics are defined like this:

  1. Business Intelligence: An analysis in which data is viewed post business activity to assess the business via metrics and indicators.
  2. Analytics: An analysis which uses past data to make projections as to what could happen.

The biggest difference here is that BI is backward-looking (into the past), and Analytics is forward-looking (into the future). This is a Boolean point of view, and frankly, unnuanced.

It also does not take into account the “unmentionable”: Operational Reporting. Yuck, right? Who the hell wants to do that? And that is (usually) the end of the discussion. The unsexiness of Operational Reporting means that it is forever pushed to the side, resulting in belabored sighs from clients claiming that they are “the worst you’ve seen”, just because they still have stuff running with VBA code to make sure the lights stay on and the orders are filled.

I think the duality of the current definition of BIA is wrong. Here is how I perceive the BIA spectrum:

In the wild, things are much more fluid than this.

This captures the entirety of what is happening in the business with reporting data: data that is being used to solve problems. Whether those problems are in the present, past or future is irrelevant, because if we want higher-quality data, higher-quality decisions and higher data accuracy, why should we only care about two-thirds of the use case for curated data?

Why the Modern Business Needs Nuance

I think that we can boil down the definitions of those concepts into three simple questions.

Operational Reporting: What is happening?

Business Intelligence: How are we performing?

Analytics: Why do we care?

Here is an example to illustrate the above concept. Let’s pretend that a company called Company A manufactures charging equipment for smartphones, and that it has two product lines: Android and iOS. When Company A sees that shipments from the warehouse are starting to slow because the line workers headed out to the food truck for lunch, that is Operational Reporting. When Company A sees that shipment numbers are down compared to last month and customers are not getting orders by the promised date, that is Business Intelligence. When Company A determines that shipments are down because a growing number of customers are no longer buying Android smartphone chargers, that is Analytics.

You may argue that this is all semantics, and you may be right. But I think that semantics matter. And, I think this example highlights another thing that is not usually discussed: if operational data can be used with curated data sets seen in BI and Analytic settings, the possibility for insight is greater. With the advent of Internet of Things connectivity, streaming with services like Kafka, and the ability to use nontraditional data types like JSON, integrating this data is a no brainer.

Is the Future Now?

Many times I come into organizations and see a mindset that what “we really need [is] a new tool”, and that this new technology will magically fix everything. While technological changes do need to happen, and many organizations need to adapt to current technological standards and methods, the tools themselves do not change the underlying issues (more on that later).

Technologists love the mantra that the future is now. Better yet, they love to say that the future arrives in waves (meaning that the future arrives at different times in different places). That may be true, but I think that simplistic mindset perpetuates a sense of urgency to be an early adopter of the latest technology. Buying and building a foundation for these concepts (or even BIA) takes considerable time, money and effort. This isn’t like heading down to the Apple store and buying a new iPhone every year. These initiatives are usually million dollar efforts that require multiple teams across multiple years. For comparison, how long does it take to buy an iPhone? An afternoon? Even by percentage terms, the amount of effort is not even comparable between these two tasks. Let’s stop pretending that buying a tool will fix the fundamental issue at hand, because in most cases it will not.

Another concept that we should take a good, hard look at is the “data lake”. I mentioned above that Internet of Things data can be streamed, captured and analyzed to benefit the business. But just because we can stream it and hold on to it, does that mean it has value? I hear many industry talking heads talk about data lakes, big data, algorithms and the like, but what is the use if there is no use case? Most clients I have worked with don’t understand what data they have or how to use it — is it really feasible to think that these clients will directly benefit from a solution like this? Long term, perhaps they will. Short term, there are better initiatives that can bring higher benefits for lower cost and effort.

A better approach is to focus on how we can use, and most importantly think about, data. I dare you to go to HBR (a favorite business publication of mine, so I’m not hating here) and search for the term “analytics”. When I conducted this experiment, I had 1,521 results returned. So, clearly, people are talking about this topic. What are they saying? Here are some of the titles: “Figuring Out How IT, Analytics, and Operations Should Work Together” (Berkooz), “How an Analytics Mindset Changes Marketing Culture” (Sweetwood), “The Reason So Many Analytics Efforts Fall Short” (McShea, Oakley, Mazzei), and… I could keep going with examples. Each of these articles, and many others, makes great points about the mistakes that were made. I think that many of these mistakes continue to be made because businesses refuse to change their thinking about analytics, reporting, BI or whatever you’d like to call it. Einstein’s definition of insanity is apt here: “doing the same thing over and over and expecting different results”. I would be remiss if I placed the blame squarely on businesses; consulting partners also have a role to play in this. Being a technical consultant does not necessarily mean only focusing on a technical solution; it is important to advise on the implications of these solutions and how the business should be thinking and acting.

Doctor, It Hurts When I Do This

These attitudes towards data lead to many situations I see with clients. Many times, I feel as though I am the one who needs to broach the topic, akin to someone who needs to tell a friend that s/he really needs an Altoid after that falafel pita they had for lunch. Because businesses are so focused on getting things done, they rarely have the time for the self-reflection that would lead them to recognize these issues. These are difficult discussions, but would we rather not have them and let people drown in a data quagmire? The items below are the roadblocks that I most commonly see as deterrents to accepting a new mindset around data. Until these fundamental data hurdles are cleared, it will be hard for any organization to overcome an old world view and embrace a new one for a new world.

  1. Data is inaccessible. Many times, the people who need the data do not have ready access to it. This may be because data is difficult to extract (maybe they are accountants who do not have database access). Or maybe it is hoarded (someone has access but only sends limited amounts of information). Perhaps the data is disparate (meaning that it is spread thinly throughout the organization, typically in Excel files). These issues prevent people from making decisions, and they spend many, many hours finding loopholes and workarounds and writing their own “underground” code and databases to compile this data, when they should be making decisions or correcting it in a different system.
  2. Data is indecipherable. Sometimes data is accessible but means different things to different people. For example, Jimmy may calculate “On Time Order Percentage” as (Number of On Time Orders / Total Number of Orders) * 100, where Jane may calculate “On Time Order Percentage” as (Number of On Time Orders / (Total Number of Orders - Number of Cancelled Orders)) * 100. Who is right? In some respects, they both are; in some respects, they are both wrong as well. Because no one can clearly understand what is happening, the data loses its meaning. It always needs a qualifier, and thus its value is decreased because no one fully trusts it. This is also an issue when the data set is complex. If someone needs to understand different codes that represent business processes (100 = order placed, 102 = order packed, 103 = order ready for shipping, etc.), or if the data does not represent the business process, the common language between the end user and the data is destroyed, and not only does the data become useless, it becomes meaningless.
  3. Data lacks vision. Many times I see companies that “just want reports on X”. X could be order management, or accounting, or purchase orders. However, rarely do I see companies create and execute a vision for their corporate data. This requires thinking of data within several tracks. Operational reporting keeps the lights on; what do I need to do right now to keep us moving forward? Business intelligence provides the business with goals, key performance indicators and metrics to track performance over time. Analytics answer the hard questions regarding what is happening in the business and in the general marketplace; these are usually open ended and have a grey area in terms of what the answer is. The lack of vision is a massive detriment to many companies because it means they move disjointedly when it comes to strategy and execution, particularly when it comes to internal resources.

For those of you who are in consulting, I am sure you have more points to add to the list. However, the point isn’t about making a list, it is about recognizing the blind spots many companies have in regards to data. It’s hard for any organization to get on board with data investments when it can’t overcome obstacles like the ones above. It is important for all of us to speak out and help transform the landscape of data from a Boolean point of view into a varchar point of view; it may need some error handling, but at least you can get what you want out of it… most of the time.

An Ending, but not The End

All of the above is great food for thought, but how do we implement a plan to combat the mistakes of the past? How can we start a movement that changes how we think about and interact with data? I’ll be following up this blog with strategies and ideas for winning these battles. Until then, remember that the mindset we bring to the table when talking about data matters as much as the problem we are trying to solve.


In an earlier post I mentioned that one of the new features in Oracle Database 12.2 is the ability to set SGA and PGA memory-related parameters at the individual PDB level. This enables us to limit or define the resources a particular PDB can use, and allows more efficient management of resources in a multitenant environment.

In Oracle 12c Release 2 we can now go further and limit the operations that can be performed within a particular PDB, as well as restrict the features that can be used or enabled, all at the individual PDB level. We can also limit the network connectivity a PDB can have by enabling or disabling the use of network-related packages like UTL_SMTP, UTL_HTTP and UTL_TCP at the PDB level.

This is done via the new 12.2 feature called Lockdown Profiles.

We create lockdown profiles via the CREATE LOCKDOWN PROFILE statement while connected to the root CDB. After the lockdown profile has been created, we add the restrictions or limits we would like to enforce via the ALTER LOCKDOWN PROFILE statement.

To assign a lockdown profile to a particular PDB, we set the PDB_LOCKDOWN initialization parameter to the name of the lockdown profile we created earlier.

If we set the PDB_LOCKDOWN parameter at the CDB level, it applies to all the PDBs in the CDB. We can also set the PDB_LOCKDOWN parameter at the PDB level, with different values for different PDBs, as we will see in the example below.

Let us have a look at an example of PDB Lockdown Profiles at work.

In our CDB, we have two pluggable databases, PDB1 and PDB2. We want to limit certain operations depending on the PDB involved.

Our requirements are the following:

  • We want to ensure that in PDB1 the value of SGA_TARGET cannot be altered, so even a privileged user cannot allocate additional memory to the PDB. However, if memory is available, the PGA allocation can still be altered.
  • PDB1 can be shut down only when connected to the root container, not from within the pluggable database itself.
  • The Partitioning feature is not available in PDB2 (see the sketch after this list).
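
As a rough sketch of the approach, assuming the documented 12.2 lockdown-profile syntax (the profile names are hypothetical, and the statement for the shutdown restriction is omitted here; its exact clause options should be checked against the documentation):

-- connected to the root container (CDB$ROOT) as a privileged user
CREATE LOCKDOWN PROFILE pdb1_profile;

-- stop ALTER SYSTEM SET SGA_TARGET from being issued inside PDB1
ALTER LOCKDOWN PROFILE pdb1_profile
  DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET') OPTION = ('SGA_TARGET');

CREATE LOCKDOWN PROFILE pdb2_profile;

-- make the Partitioning feature unavailable in PDB2
ALTER LOCKDOWN PROFILE pdb2_profile
  DISABLE OPTION = ('PARTITIONING');

-- network packages such as UTL_HTTP can be blocked similarly,
-- e.g. DISABLE FEATURE = ('NETWORK_ACCESS')

-- assign a different profile to each PDB
ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM SET PDB_LOCKDOWN = 'PDB1_PROFILE';

ALTER SESSION SET CONTAINER = pdb2;
ALTER SYSTEM SET PDB_LOCKDOWN = 'PDB2_PROFILE';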
Jorge Rimblas, the APEX track lead for ODTUG Kscope17, shares his recommended “don’t miss sessions” at ODTUG Kscope17: