Updating dbaascli in Oracle Cloud - 19-Mar-2018 19:13 - Skant Gupta

Normally, you use the dbaascli utility to update the cloud tooling on Database Cloud Service database deployments hosting a single-instance database or an Oracle Data Guard configuration. However, older deployments don't support the two dbaascli subcommands you would use:

dbaascli dbpatchm --run -list_tools
dbaascli dbpatchm --run -toolsinst

On such deployments, you update the cloud tooling manually by downloading and installing the dbaastools RPM, as the following steps show.

  • Connect as the opc user to the compute node.
  • Start a root-user command shell:

[opc@Student015-skant ~]$ sudo -s
[root@Student015-skant opc]#

  • Navigate to the /tmp directory:

# cd /tmp

  • Download the RPM file containing the latest version of the cloud tooling:

[root@Student015-skant opc]# wget <storage URL of dbaastools.rpm>
--2018-03-19 22:00:15--  <storage URL of dbaastools.rpm>
Connecting to <storage host>:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 109750444 (105M) [application/octet-stream]
Saving to: “dbaastools.rpm”

100%[=============================================================================================================================>] 109,750,444 103M/s in 1.0s

2018-03-19 22:00:16 (103 MB/s) - “dbaastools.rpm” saved [109750444/109750444]

[root@Student015-skant opc]#

  • Get information about the cloud tooling in the downloaded RPM file:

[root@Student015-skant opc]# rpm -qpi ./dbaastools.rpm
Name : dbaastools                       Relocations: (not relocatable)
Version : 1.0                            Vendor: Oracle
Release : 1+       Build Date: Tue 13 Mar 2018 01:19:55 AM UTC
Install Date: (not installed)            Build Host:
Group : Applications/Administrative      Source RPM: dbaastools-1.0-1+
Size : 114935799                         License: Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
Signature : (none)
Packager : mmoteka
Summary : DBAAS Tools
Description :
DBAAS Tools for Oracle Public Cloud
[root@Student015-skant opc]#

  • Get information about the installed cloud tooling:

[root@Student015-skant opc]# rpm -qa|grep -i dbaastools

  • Install the cloud tooling in the downloaded RPM file:

[root@Student015-skant opc]# rpm -Uvh ./dbaastools.rpm
Preparing... ########################################### [100%]
package dbaastools-1.0-1+ is already installed
[root@Student015-skant opc]#

  • Exit the root-user command shell and disconnect from the compute node:

# exit
$ exit

Stay tuned for More articles on Oracle Cloud

Thank you for giving your valuable time to read the above information.

If you want to be updated with all our articles send us the Invitation or Follow us:

Skant Gupta’s LinkedIn:

Joel Perez’s LinkedIn: Joel Perez’s Profile

Anuradha’s LinkedIn: Anuradha’s Profile

LinkedIn Group: Oracle Cloud DBAAS

Facebook Page: OracleHelp

The post Updating dbaascli in Oracle Cloud appeared first on ORACLE-HELP.

RESTful API in Oracle Cloud - 19-Mar-2018 17:34 - Skant Gupta

A RESTful API is an application program interface (API) that uses HTTP requests to GET, PUT, POST and DELETE data.

A RESTful API — also referred to as a RESTful web service — is based on representational state transfer (REST) technology, an architectural style and approach to communications often used in web services development.

Oracle REST Data Services (ORDS) makes it easy to develop modern REST interfaces for relational data in the Oracle Database, Oracle Database 12c JSON Document Store, and Oracle NoSQL Database. A mid-tier Java application, ORDS maps HTTP(S) verbs (GET, POST, PUT, DELETE, etc.) to database transactions and returns any results formatted using JSON.
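To make that mapping concrete, here is a minimal sketch of calling an ORDS-enabled REST endpoint with curl. The host, port, schema alias and table below are invented for the illustration and are not part of any real deployment:

# Sketch only: assumes ORDS is running on ords-host:8080 and that a REST endpoint
# has been enabled for a hypothetical HR.EMPLOYEES table.
curl -i -X GET "http://ords-host:8080/ords/hr/employees/" -H "Accept: application/json"

# A POST maps to an INSERT against the same (hypothetical) resource:
curl -i -X POST "http://ords-host:8080/ords/hr/employees/" \
     -H "Content-Type: application/json" \
     -d '{"employee_id": 4711, "last_name": "Smith"}'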

URL Structure

Access the REST API through URLs of the following form:

https://rest-server/endpoint-path

where:
  • rest-server is the REST server to contact for your identity domain. To learn the name of this server:
  1. Sign in to the My Services dashboard for your identity domain.
  2. In the list of services, locate the entry for Oracle Database Cloud Service and then click Details.
  3. The REST Endpoint field shows the REST server to contact for your identity domain.
  • endpoint-path is the relative path that defines the endpoint; an example request is shown below.
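For example, a request against the Database Cloud Service REST API looks roughly like the sketch below. The server name, identity domain and credentials are placeholders, and the endpoint path simply illustrates the documented pattern for listing database deployments; check the REST API reference for the exact path you need:

# All values below are placeholders - substitute your own REST server, identity domain and user.
REST_SERVER="dbaas.example.oraclecloud.com"
IDENTITY_DOMAIN="myidentitydomain"

curl -i -X GET \
     -u "" \
     -H "X-ID-TENANT-NAME: ${IDENTITY_DOMAIN}" \
     "https://${REST_SERVER}/paas/service/dbcs/api/v1.1/instances/${IDENTITY_DOMAIN}"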

Supported Methods

The endpoints support the standard HTTP methods mentioned above: GET, PUT, POST and DELETE.

Status Codes

Responses use standard HTTP status codes to indicate the success or failure of each request.

Stay tuned for More articles on Oracle Cloud and REST APIs

Thank you for giving your valuable time to read the above information.

If you want to be updated with all our articles send us the Invitation or Follow us:

Skant Gupta’s LinkedIn:

Joel Perez’s LinkedIn: Joel Perez’s Profile

Anuradha’s LinkedIn: Anuradha’s Profile

LinkedIn Group: Oracle Cloud DBAAS

Facebook Page: OracleHelp

The post RESTful API in Oracle Cloud appeared first on ORACLE-HELP.

Reference Costs - 19-Mar-2018 03:44 - Jonathan Lewis

The partitioning option “partition by reference” is a very convenient option which keeps acquiring more cute little features, such as cascading truncates and cascading splits, as time passes – but what does it cost, and would you use it if you don’t really need to?

When reference partitioning came into existence many years ago, I had already seen several performance disasters created by people’s enthusiasm for surrogate keys and the difficulties this introduced for partition elimination; so my first thought was that this was a mechanism that would have a hugely beneficial effect on systems which (in 20:20 – or 6:6 if you’re European – hindsight) had been badly designed and would otherwise need a lot of re-engineering to use partitioning effectively.

(Side note: Imagine you have partitioned an orders table on colX which is a column in the real (business-oriented) candidate key, but you’ve created a surrogate key which is used as the target for a foreign key from the order_lines tables – how do you get partition-wise joins between orders and order_lines if you haven’t got the partitioning column in the order_lines table ?)

So ref partitioning was a good way to work around a big existing problem and, whatever overheads it introduced, the benefit was potentially so huge that you wouldn’t care (or, probably, notice) that your system was less efficient than it ought to be. But what if you’re working on a new project and still have control of the physical design – how does that change the cost/benefit analysis?

It’s actually taken me several years to get round to producing a little demonstration to highlight one of the obvious costs of reference partitioning – even though it’s a very simple demo raising the obvious question: ‘how much work does Oracle have to do to find the right partition when inserting a “child” row ?’ If you chose to implement reference partitioning without asking that simple question you may be using a lot more machine resources than you really need to, although you may not actually be heading for a disastrous performance problem.

As a demonstration of the issue I’m going to set up something that approximates an order/order_lines model in two ways, one using reference partitioning and one using a copied column, to see what differences show up when you start loading data.

rem     Script:         pt_ref.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2018
rem     Purpose:
rem     Last tested

create table orders (
        id              number(10,0) not null,
        id_cust         number(10,0) not null,
        date_ordered    date not null,
        padding         varchar2(150)
)
partition by range (date_ordered)
(
        partition p201801       values less than (to_date('01-Feb-2018')),
        partition p201802       values less than (to_date('01-Mar-2018')),
        partition p201803       values less than (to_date('01-Apr-2018')),
        partition p201804       values less than (to_date('01-May-2018')),
        partition p201805       values less than (to_date('01-Jun-2018')),
        partition p201806       values less than (to_date('01-Jul-2018')),
        partition p201807       values less than (to_date('01-Aug-2018'))
)
;

create unique index ord_pk on orders (id);
alter table orders add constraint ord_pk primary key(id);

create table order_lines (
        id_ord          number(10,0) not null,
        line_number     number(4,0)  not null,
        id_product      number(6,0)  not null,
        qty             number(6,0)  not null,
        value           number(10,2) not null,
        padding         varchar2(150),
        constraint orl_fk_ord foreign key (id_ord) references orders
                on delete cascade
)
partition by reference (orl_fk_ord)
;

create unique index orl_pk on order_lines (id_ord, line_number);
alter table order_lines add constraint orl_pk primary key (id_ord, line_number);

create table order_lines_2 (
        date_ordered    date,
        id_ord          number(10,0) not null,
        line_number     number(4,0)  not null,
        id_product      number(6,0)  not null,
        qty             number(6,0)  not null,
        value           number(10,2) not null,
        padding         varchar2(150),
        constraint orl2_fk_ord foreign key (id_ord) references orders
                on delete cascade
)
partition by range (date_ordered)
(
        partition p201801       values less than (to_date('01-Feb-2018')),
        partition p201802       values less than (to_date('01-Mar-2018')),
        partition p201803       values less than (to_date('01-Apr-2018')),
        partition p201804       values less than (to_date('01-May-2018')),
        partition p201805       values less than (to_date('01-Jun-2018')),
        partition p201806       values less than (to_date('01-Jul-2018')),
        partition p201807       values less than (to_date('01-Aug-2018'))
)
;

create unique index orl2_pk on order_lines_2 (id_ord, line_number);
alter table order_lines_2 add constraint orl2_pk primary key (id_ord, line_number);

It’s a bit of a bodge job as far as modelling goes, but that’s to keep workload comparisons easy and make a point without writing too much code. All I’ve got is an orders table partitioned by date and an order_lines table that I want partitioned the same way. I’ve handled the requirement for partitioning order_lines in two ways, one is partition by reference and the other is to copy down the partitioning column from the orders table. (In my view the “real” key for an orders table should be (customer identifier, order date, counter) and if I thought efficient partitioning was going to be a necessary feature for scalability I would copy down all three columns. Depending on the nature of the business I would compress the primary key index on orders on one or two of the columns, and the foreign key index on order_lines on one, two, or three of its columns)
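As a rough sketch of what that compression comment could translate into (the connection, index name and column choices below are just an illustration of the principle against the demo schema, not a recommendation):

sqlplus -s test_user/test_pass <<'EOF'
-- Sketch only: a three-column "business" key for orders, with the two repetitive
-- leading columns compressed out of the index leaf entries.
create unique index ord_business_pk on orders (id_cust, date_ordered, id) compress 2;
EOF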

Now all I have to do is load some data into the tables. First the orders table:

insert into orders(
        id, id_cust, date_ordered, padding
)
with g as (
        select rownum id from dual
        connect by level <= 1e4
)
select
        rownum                                  id,
        trunc(dbms_random.value(10000,20000))   id_cust,
        to_date('01-Jan-2018') +
                trunc((rownum-1)/100)           date_ordered,
        rpad('x',40)                            padding
from
        g, g
where
        rownum <= 2e4
;

execute dbms_stats.gather_table_stats(user,'orders')

This produces 100 orders per day, for 200 days which fits within the seven months of pre-declared partitions. I’ve gathered table stats on the table because that’s probably the best way to deal with any requirements for block cleanout after the insert. (Note: I’m avoiding interval partitioning in this example because that’s just another complication to add to the comparison and, as I reported a few days ago, introduces another massive inefficiency on data loading.)

Now I’ll insert some order_lines rows at 5 lines per order into the two versions of the order_lines table. One of them, of course, has to have a date generated using the same algorithm that I used for the orders table. Note that I’ve made a call to dbms_random.seed(0) before each insert to guarantee that the same “random” values will be inserted in both tables.

execute dbms_random.seed(0)

insert into order_lines_2(
        date_ordered, id_ord, line_number, id_product, qty, value, padding
)
with g as (
        select rownum id from dual
        connect by level <= 1e4
)
select
        to_date('01-Jan-2018') +
                trunc((rownum-1)/500)           date_ordered,
        1 + trunc((rownum-1)/5)                 id_ord,
        1 + mod(rownum,5)                       line_number,
        trunc(dbms_random.value(10000,20000))   id_product,
        1 qty,
        1 value,
        rpad('x',80)                            padding
from
        g, g
where
        rownum <= 10e4
;


execute dbms_random.seed(0)

insert into order_lines(
        id_ord, line_number, id_product, qty, value, padding
)
with g as (
        select rownum id from dual
        connect by level <= 1e4
)
select
        1 + trunc((rownum-1)/5)                 id_ord,
        1 + mod(rownum,5)                       line_number,
        trunc(dbms_random.value(10000,20000))   id_product,
        1 qty,
        1 value,
        rpad('x',80)                            padding
from
        g, g
where
        rownum <= 10e4
;


What I haven’t shown in the code is the snapshot wrapping I used to check the session stats, system latch activity and system rowcache activity – which I thought would give me the best indication of any variation in workload. In fact, of course, the first and simplest variation was the elapsed time: 4.5 seconds for the ref partitioned table, 2.5 seconds for the explicitly created table (regardless of which insert I did first), and it was nearly all pure CPU time.

It turned out that the rowcache stats showed virtually no variation, and the latch stats only showed significant variation in the latches that I could have predicted from the session stats, and here are the most significant session stats that highlight and explain the difference in times:

Explicitly Created
CPU used by this session                                                   231
DB time                                                                    242
db block gets                                                          219,471
db block changes                                                        27,190
redo entries                                                            15,483
redo size                                                           24,790,224
HSC Heap Segment Block Changes                                           2,944

Ref partitioned
CPU used by this session                                                   515
DB time                                                                    532
db block gets                                                          615,979
db block changes                                                       418,025
redo entries                                                           209,918
redo size                                                           70,043,676
HSC Heap Segment Block Changes                                         100,048

These results were from one release of Oracle, but the figures from the other releases I tested were similar, though the CPU time dropped as the version number went up: what you’re seeing is the effect of turning an array insert (for the precreated table) into single row processing for the ref partitioned table. Basically it seems that for every row inserted Oracle has to do something to work out which partition the row should go into, and while it does that work it releases any pins of buffers it would have been holding from the previous row’s insert; in other words, various optimisations relating to array inserts are not taking place.

Looking in more detail at the figures for the ref partition insert:

  • The 100,000 “HSC Heap Segment Block Changes” equate to the 100,000 rows inserted into the table.
  • Add the single row index updates to the primary key and you get 200,000 redo entries.
  • For every individual row inserted Oracle has to do a current mode (db block gets) check against the primary key of the orders table – but when array processing the root block can be pinned.

We can get a closer look at the differences by taking snapshots of v$segstat (or v$segment_statistics), to see the following (pre-created table on the left):

ORD_PK                                |   ORD_PK
  logical reads          199,440      |     logical reads          300,432
ORDER_LINES_2 - P201801               |   ORDER_LINES - P201801
  logical reads            2,112      |     logical reads           16,960
  db block changes         1,280      |     db block changes        16,944
ORDER_LINES_2 - P201802               |   ORDER_LINES - P201802
  logical reads            2,256      |     logical reads           16,144
  db block changes         1,248      |     db block changes        15,088
ORDER_LINES_2 - P201803               |   ORDER_LINES - P201803
  logical reads            2,288      |     logical reads           17,264
  db block changes         1,376      |     db block changes        16,560
ORDER_LINES_2 - P201804               |   ORDER_LINES - P201804
  logical reads            2,672      |     logical reads           16,768
  db block changes         1,280      |     db block changes        16,144
ORDER_LINES_2 - P201805               |   ORDER_LINES - P201805
  logical reads            2,224      |     logical reads           17,472
  db block changes         1,264      |     db block changes        16,528
ORDER_LINES_2 - P201806               |   ORDER_LINES - P201806
  logical reads            2,624      |     logical reads           16,800
  db block changes         1,328      |     db block changes        16,160
ORDER_LINES_2 - P201807               |   ORDER_LINES - P201807
  logical reads            1,376      |     logical reads           10,368
  db block changes           864      |     db block changes        10,752
ORL2_PK                               |   ORL_PK
  logical reads           10,640      |     logical reads          206,352
  db block changes         7,024      |     db block changes       104,656

The right hand data set does an extra 100,000 logical reads on the ORD_PK index (top set of lines) which I think are the 100,000 gets on the root block that was pinned for the table on the left – the numbers don’t quite add up, so there’s some extra complexity that I haven’t guessed correctly.

The insert into the ORL[2]_PK index (lines) is single row processed for the right hand table – with, I think, the logical reads recording two current gets per insert.

Every partition of the table, except the last, shows 15,000 db block changes, totalling a difference of about 100,000 db block changes corresponding to the single rows being inserted. Then ORL[2]_PK shows another 100,000 db block changes, giving us the 200,000 we saw as redo entries and 400,000 (when doubled up to allow for the undo) db block changes that we saw in total.

Finally we need to explain the difference of 400,000 db block gets between the two sets of session stats – and I think this is the extra 100,000 for ORD_PK, the 100,000 for the table inserts, and 200,000 for the ORL[2]_PK index, which I think might be explained as 100,000 as a current get that checks for “duplicate key” and 100,000 gets to do the actual insert.

Bottom Line, though – if you use reference partitioning every array insert seems to turn into single row processing with the attendant increase in buffer gets, undo and redo generated,  latch activity, and CPU used as Oracle checks for every single row which partition it should go into: and there doesn’t seem to be any optimisation that caters for “this row belongs in the same partition as the previous row”. You may decide that this extra cost due to reference partitioning is worth it for the benefits that reference partitioning supplies – it’s all down to what your application does, especially in terms of aging data perhaps – but it’s nice to know that this cost is there so that you can do a better cost/benefit analysis.


Interested readers might like to extend this test to a multi-layered set of ref-partitioned tables to see if the increase in overheads is linear or geometric.
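As a starting point for that experiment, a hypothetical third level could be reference partitioned against order_lines. Everything below (the connection, table name and columns) is an invented sketch layered on the demo above:

sqlplus -s test_user/test_pass <<'EOF'
-- Hypothetical grandchild table, reference partitioned against order_lines,
-- which is itself reference partitioned against orders.
create table order_line_details (
        id_ord          number(10,0)  not null,
        line_number     number(4,0)   not null,
        detail_seq      number(4,0)   not null,
        padding         varchar2(150),
        constraint old_fk_orl foreign key (id_ord, line_number)
                references order_lines
                on delete cascade
)
partition by reference (old_fk_orl);
EOF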


Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller

The requirement is simple: a Node JS application that receives HTTP requests and forwards (some of) them to other hosts, and subsequently returns the responses it receives to the original caller.


This can be used in many situations – to ensure all resources loaded in a web application come from the same host (one way to handle CORS), to have content in IFRAMEs loaded from the same host as the surrounding application or to allow connection between systems that cannot directly reach each other. Of course, the proxy component does not have to be the dumb and mute intermediary – it can add headers, handle faults, perform validation and keep track of the traffic. Before you know it, it becomes an API Gateway…

In this article, a very simple example of a proxy that I want to use for the following purpose: I create a Rich Web Client application (Angular, React, Oracle JET) – and some of the components used are owned and maintained by an external party. Instead of adding the sources to the server that serves the static sources of the web application, I use the proxy to retrieve these specific sources from their real origin (either a live application, a web server or even a Git repository). This allows me to have the latest sources of these components at any time, without redeploying my own application.

The proxy component is of course very simple and straightforward. And I am sure it can be much improved upon. For my current purposes, it is good enough.

The Node application consists of file www that is initialized with npm start through package.json. This file does some generic initialization of Express (such as defining the port on which to listen). Then it defers to app.js for all request handling. In app.js, a static file server is configured to serve files from the local /public subdirectory (using express.static).


var app = require('../app');
var debug = require('debug')('jet-on-node:server');
var http = require('http');

var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

var server = http.createServer(app);
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

function normalizePort(val) {
var port = parseInt(val, 10);

if (isNaN(port)) {
// named pipe
return val;
}

if (port >= 0) {
// port number
return port;
}

return false;
}

function onError(error) {
if (error.syscall !== 'listen') {
throw error;
}

var bind = typeof port === 'string'
? 'Pipe ' + port
: 'Port ' + port;

// handle specific listen errors with friendly messages
switch (error.code) {
case 'EACCES':
console.error(bind + ' requires elevated privileges');
process.exit(1);
break;
case 'EADDRINUSE':
console.error(bind + ' is already in use');
process.exit(1);
break;
default:
throw error;
}
}

function onListening() {
var addr = server.address();
var bind = typeof addr === 'string'
? 'pipe ' + addr
: 'port ' + addr.port;
debug('Listening on ' + bind);
}


"name": "jet-on-node",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "node ./bin/www"
"dependencies": {
"body-parser": "~1.18.2",
"cookie-parser": "~1.4.3",
"debug": "~2.6.9",
"express": "~4.15.5",
"morgan": "~1.9.0",
"pug": "2.0.0-beta11",
"request": "^2.85.0",
"serve-favicon": "~2.4.5"


var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

const http = require('http');
const url = require('url');
const fs = require('fs');
const request = require('request');

var app = express();
// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(bodyParser.urlencoded({ extended: false }));

// define static resource server from local directory public (for any request not otherwise handled)
app.use(express.static(path.join(__dirname, 'public')));

app.use(function (req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});

// catch 404 and forward to error handler
app.use(function (req, res, next) {
var err = new Error('Not Found');
err.status = 404;
next(err);
});

// error handler
app.use(function (err, req, res, next) {
// set locals, only providing error in development
res.locals.message = err.message;
res.locals.error = req.app.get('env') === 'development' ? err : {};

// render the error page (returned here as a JSON object)
res.status(err.status || 500);
res.json({
message: err.message,
error: err
});
});

module.exports = app;

Then the interesting bit: requests for URL /js/jet-composites/* are intercepted: instead of having those requests also handled by serving local resources (from directory public/js/jet-composites/*), the requests are interpreted and routed to an external host. The responses from that host are returned to the requester. To the requesting browser, there is no distinction between resources served locally as static artifacts from the local file system and resources retrieved through these redirected requests.

// any request at /js/jet-composites (for resources in that folder)
// should be intercepted and redirected
var compositeBasePath = '/js/jet-composites/'
app.get(compositeBasePath + '*', function (req, res) {
var requestedResource = req.url.substr(compositeBasePath.length)
// parse URL
const parsedUrl = url.parse(requestedResource);
// extract URL path
let pathname = `${parsedUrl.pathname}`;
// maps file extension to MIME types
const mimeType = {
'.ico': 'image/x-icon',
'.html': 'text/html',
'.js': 'text/javascript',
'.json': 'application/json',
'.css': 'text/css',
'.png': 'image/png',
'.jpg': 'image/jpeg',
'.wav': 'audio/wav',
'.mp3': 'audio/mpeg',
'.svg': 'image/svg+xml',
'.pdf': 'application/pdf',
'.doc': 'application/msword',
'.eot': 'application/',
'.ttf': 'application/font-sfnt'
};

handleResourceFromCompositesServer(res, mimeType, pathname)
})

async function handleResourceFromCompositesServer(res, mimeType, requestedResource) {
var reqUrl = "http://yourhost:theport/applicationURL/" + requestedResource
// fetch resource and return
var options = url.parse(reqUrl);
options.method = "GET";
options.agent = false;

// options.headers['host'] =;
http.get(reqUrl, function (serverResponse) {
console.log('<== Received res for', serverResponse.statusCode, reqUrl); console.log('\t-> Request Headers: ', options);
console.log(' ');
console.log('\t-> Response Headers: ', serverResponse.headers);


serverResponse.headers['access-control-allow-origin'] = '*';

switch (serverResponse.statusCode) {
// pass through. we're not too smart here...
case 200: case 201: case 202: case 203: case 204: case 205: case 206:
case 304:
case 400: case 401: case 402: case 403: case 404: case 405:
case 406: case 407: case 408: case 409: case 410: case 411:
case 412: case 413: case 414: case 415: case 416: case 417: case 418:
res.writeHeader(serverResponse.statusCode, serverResponse.headers);
serverResponse.pipe(res, { end: true });
break;

// fix host and pass through.
case 301:
case 302:
case 303:
serverResponse.statusCode = 303;
serverResponse.headers['location'] = 'http://localhost:' + PORT + '/' + serverResponse.headers['location'];
console.log('\t-> Redirecting to ', serverResponse.headers['location']);
res.writeHeader(serverResponse.statusCode, serverResponse.headers);
serverResponse.pipe(res, { end: true });
break;

// error everything else
default:
var stringifiedHeaders = JSON.stringify(serverResponse.headers, null, 4);
res.writeHeader(500, {
'content-type': 'text/plain'
});
res.end(process.argv.join(' ') + ':\n\nError ' + serverResponse.statusCode + '\n' + stringifiedHeaders);
}
});
}

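To see the proxy in action once the application is running, a request below /js/jet-composites/ can be sent to the local server and is then served from the configured remote origin. The port and resource name in this sketch are only examples:

# Start the application (uses the npm start script from package.json).
npm start &

# Request a composite resource through the proxy; the response comes from the remote host,
# yet to the browser it looks like a local static file.
curl -i "http://localhost:3000/js/jet-composites/some-component/loader.js"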


Express Tutorial Part 2: Creating a skeleton website

Building a Node.js static file server (files over HTTP) using ES6+

How To Combine REST API calls with JavaScript Promises in node.js or OpenWhisk

Node script to forward all http requests to another server and return the response with an access-control-allow-origin header. Follows redirects.

5 Ways to Make HTTP Requests in Node.js

The post Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller appeared first on AMIS Oracle and Java Blog.

Create a Node JS application for Downloading sources from GitHub

My objective: create a Node application to download sources from a repository on GitHub. I want to use this application to read a simple package.json-like file (that describes which reusable components (from which GitHub repositories) the application has dependencies on) and download all required resources from GitHub and store them in the local file system. This by itself may not seem very useful. However, this is a stepping stone on the road to a facility to trigger run time updates of application components triggered by GitHub WebHook triggers.

I am making use of the Octokit Node JS library to interact with the REST APIs of GitHub. The code I have created will:

  • fetch the meta-data for all items in the root folder of a GitHub Repo (at the tip of a specific branch, or at a specific tag or commit identifier)
  • iterate over all items:
    • download the contents of the item if it is a file and create a local file with the content (and cater for large files and for binary files)
    • create a local directory for each item in the GitHub repo that is a directory, then recursively process the contents of the directory on GitHub

An example of the code in action:

A randomly selected GitHub repo (the one configured in the github variable in the code below):


The local target directory is empty at the beginning of the action:


Run the code:


And the content is downloaded and written locally:


Note: the code could easily provide an execution report with details such as file size, download, last change date etc. It is currently very straightforward. Note: the gitToken is something you need to get hold of yourself in the GitHub dashboard (under Settings, Developer settings, Personal access tokens). Without a token, the code will still work, but you will be bound to the GitHub rate limit (of about 60 requests per hour).

const octokit = require('@octokit/rest')() 
const fs = require('fs');

var gitToken = "YourToken"

octokit.authenticate({
    type: 'token',
    token: gitToken
})

var targetProjectRoot = "C:/data/target/" 
var github = { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" }

downloadGitHubRepo(github, targetProjectRoot)

async function downloadGitHubRepo(github, targetDirectory) {
    console.log(`Installing GitHub Repo ${github.owner}\\${github.repo}`)
    var repo = github.repo;
    var path = ''
    var owner = github.owner
    var ref = github.commit ? github.commit : (github.tag ? github.tag : (github.branch ? github.branch : 'master'))
    processGithubDirectory(owner, repo, ref, path, path, targetDirectory)
}

// let's assume that if the name ends with one of these extensions, we are dealing with a binary file:
const binaryExtensions = ['png', 'jpg', 'tiff', 'wav', 'mp3', 'doc', 'pdf']
var maxSize = 1000000;
function processGithubDirectory(owner, repo, ref, path, sourceRoot, targetRoot) {
    octokit.repos.getContent({ "owner": owner, "repo": repo, "path": path, "ref": ref })
        .then(result => {
            var targetDir = targetRoot + path
            // check if targetDir exists and create it when missing
            checkDirectorySync(targetDir)
   => {
                if (item.type == "dir") {
                    processGithubDirectory(owner, repo, ref, item.path, sourceRoot, targetRoot)
                } // if directory
                if (item.type == "file") {
                    if (item.size > maxSize) {
                        var sha = item.sha
                        octokit.gitdata.getBlob({ "owner": owner, "repo": repo, "sha": item.sha }
                        ).then(result => {
                            var target = `${targetRoot + item.path}`
                            fs.writeFile(target
                                , Buffer.from(, 'base64').toString('utf8'), function (err, data) { })
                        })
                            .catch((error) => { console.log("ERROR BIGGA" + error) })
                    }// if bigga
                    octokit.repos.getContent({ "owner": owner, "repo": repo, "path": item.path, "ref": ref })
                        .then(result => {
                            var target = `${targetRoot + item.path}`
                            if (binaryExtensions.includes(item.path.slice(-3))) {
                                fs.writeFile(target
                                    , Buffer.from(, 'base64'), function (err, data) { reportFile(item, target) })
                            } else {
                                fs.writeFile(target
                                    , Buffer.from(, 'base64').toString('utf8'), function (err, data) { if (!err) reportFile(item, target); else console.log('Fuotje ' + err) })
                            }
                        })
                        .catch((error) => { console.log("ERROR " + error) })
                }// if file
            })
        }).catch((error) => { console.log("ERROR XXX" + error) })
}

function reportFile(item, target) {
    console.log(`- installed ${} (${item.size} bytes) in ${target}`)
}

function checkDirectorySync(directory) {
    try {
        fs.statSync(directory);
    } catch (e) {
        fs.mkdirSync(directory);
        console.log("Created directory: " + directory)
    }
}

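To try the code, something along these lines should do; the script file name and the install step are assumptions, and you still need to plug your own values into gitToken, github and targetProjectRoot in the script itself:

# Install the GitHub REST client used by the script (fs is built in).
npm install @octokit/rest

# Run the script (assumed here to be saved as download-github-repo.js) and watch the
# per-file report produced by reportFile().
node download-github-repo.js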

Octokit REST API Node JS library: 

API Documentation for Octokit:

The post Create a Node JS application for Downloading sources from GitHub appeared first on AMIS Oracle and Java Blog.

Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu

Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’. A Spring Boot application has an embedded servlet engine making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command-line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples on how to get started with different OSs and different JDKs in Docker. I’ll finish with an example on how to build a Docker image with a Spring Boot application in it.
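For example, building and running such a self-contained JAR locally is a two-step affair; the artifact name below is a placeholder for whatever your own build produces:

# Build the Spring Boot fat JAR using the Spring Boot Maven plugin.
mvn clean package

# Run it with nothing but a JDK/JRE - no application server required.
java -jar target/my-spring-boot-app-0.0.1-SNAPSHOT.jar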

Getting started with Docker

Installing Docker

Of course you need a Docker installation. I’ll not get into details here, but:

Oracle Linux 7

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum install docker-engine
systemctl start docker
systemctl enable docker


Ubuntu

curl -fsSL | sudo apt-key add -
add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

You can add a user to the docker group or give it sudo docker rights. Both options do allow the user to become root on the host OS though.
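For example, assuming a user called developer (the name is only an illustration):

# Allow the user to run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker developer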

Running a Docker container

See below for commands you can execute to start containers in the foreground or background and access them. For ‘mycontainer’ in the below examples, you can fill in a name you like. The name of the image can be found in the description further below. This can be, for example, an Oracle Linux 7 image when using the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

  • go to the Oracle Container Registry and enable your OTN account to be used
  • go to the product you want to use and accept the license agreement
  • do docker login -u username -p password

If you are using the Docker Store, you first need to

  • go to the Docker Store and create an account
  • find the image you want to use. Click Get Content and accept the license agreement
  • do docker login -u username -p password

To start a container in the foreground

docker run --name mycontainer -it imagename /bin/sh

To start a container in the background

docker run --name mycontainer -d imagename tail -f /dev/null

To ‘enter’ a running container:

docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to start a ‘bare OS’ container with no other running processes to keep it running. A suggestion from here.

Cleaning up

Good to know is how to clean up your images/containers after having played around with them. See here.

# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

Options for JDK

Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used.

Oracle JDK on Oracle Linux

When you’re running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Good to know is that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK since the Server JRE has a smaller attack surface. Read more here. For questions about support and roadmap, read the following blog.

The steps to obtain Docker images for Oracle JDK / Oracle Linux from are as follows:

Create an account on the Docker Store. Go to the store/oracle/serverjre page. Click Get Content. Accept the agreement and you’re ready to login, pull and run.

#use the username and password
docker login -u yourusername -p yourpassword
docker pull store/oracle/serverjre:8

To start in the foreground:

docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

You can also use the Server JRE image from the Oracle Container Registry. First, same as for just running the OS, enable your OTN account and login.

#use your OTN username and password
docker login -u yourusername -p yourpassword

docker pull

#To start in the foreground:
docker run --name jre8 -it /bin/bash

OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is being used quite often. There can be some thread related challenges with Alpine Linux though. See for example here and here.

Running OpenJDK in Alpine Linux in a Docker container is easier than you might think. You don’t require any specific accounts for this and also no login.

When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

docker pull openjdk:8-jdk-alpine

Next you can do

docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh

Zulu on Ubuntu Linux

You can also consider OpenJDK based JDK’s like Azul’s Zulu. This works mostly the same only the image name is something like ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu based.

Do it yourself

Of course you can create your own image with a JDK. See for example here. This requires you download the JDK code and build the image yourself. This is quite easy though.
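A minimal sketch of that do-it-yourself approach is shown below. The tarball name and the directory it extracts to are assumptions; adjust them to the JDK or Server JRE archive you actually downloaded:

# Write a tiny Dockerfile next to the downloaded archive.
cat > Dockerfile <<'EOF'
FROM oraclelinux:7-slim
# ADD extracts the local tar.gz into /opt/jdk/
ADD server-jre-8u161-linux-x64.tar.gz /opt/jdk/
ENV JAVA_HOME=/opt/jdk/jdk1.8.0_161 \
    PATH=/opt/jdk/jdk1.8.0_161/bin:$PATH
EOF

# Build and tag the image.
docker build -t my/serverjre:8 .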

Spring Boot in a Docker container

Creating a container with a Spring Boot application based on an image which already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","","-jar","/app.jar"]

The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

And add a dependency to com.spotify.dockerfile-maven-plugin and some configuration to your pom.xml file to automate building the Dockerfile once you have the Spring Boot JAR file. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.


To actually build the Docker image, which allows using it locally, you can do:

mvn install dockerfile:build

If you want to distribute it (allow others to easily pull and run it), you can push it with

mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets and only for Docker Hub (for this example). The below screenshot is after having pushed the image to Docker Hub. You can find it there since it is public.

You can then do something like

docker run -t maartensmeets/accs-cache-sample:latest

The post Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu appeared first on AMIS Oracle and Java Blog.

Oracle Database 18c Schema Only Accounts - 16-Mar-2018 09:46 - Pythian

One nice new feature of Oracle Database 18c is that schemas can be created without a password. These are called “Schema Only Accounts“. This new functionality is pretty simple and is a nice step forward that can allow administrators to further secure their databases and protect their applications.


Preliminary Questions

Question: Why do we want to have these “schema only accounts” or schemas without passwords?
Answer: This is applicable for schemas which hold application objects (i.e. tables, PL/SQL units, etc). I call those “application schemas”. We shouldn’t want to ever be able to connect to application schemas for any reason.

Question: Why don’t we want to connect to application schemas and hence have them have passwords?
Answer: To ensure with certainty that nobody can connect to them. Because if we can connect to the application schema then we bypass all security and have unrestricted access to the schema objects. All data within the schema objects can be viewed, manipulated, and/or compromised. Plus maybe a password rotation is then required with turnover of staff with knowledge of this password. So having application schemas never connected to is a good design principle.

Question: But maybe we do need to connect to them sometimes – for example when creating the schema or schema objects during application setup or upgrades, things like that?
Answer: Actually I don’t agree with that regardless of the version of the Oracle Database being used. We can use an alternative approach which I’ll discuss below.


Basic Setup and Usage

The basic setup and usage of a schema only account is really simple. We just use the new Oracle 18c DDL syntax “NO AUTHENTICATION” when creating or altering a user. For example:


SQL> create user scott no authentication;

User created.


With some basic checks we see:

SQL> select username, password, password_versions, account_status, authentication_type
  2  from dba_users where username = 'SCOTT';

USERNAME             PASSWORD             PASSWORD_VERSIONS ACCOUNT_STATUS       AUTHENTICATION_TYPE
-------------------- -------------------- ----------------- -------------------- --------------------
SCOTT                                                       OPEN                 NONE


Interestingly, the ACCOUNT_STATUS column still says “OPEN”. The PASSWORD column is NULL in DBA_USERS in previous versions as well but now the PASSWORD_VERSIONS column is also NULL. But the proper way to identify that this is a schema only account is via AUTHENTICATION_TYPE=NONE.
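A quick way to list every such account in a database is to filter on that column; a simple sketch:

sqlplus -s / as sysdba <<'EOF'
-- Schema only (password-less) accounts are the ones reported with AUTHENTICATION_TYPE = 'NONE'.
select username, account_status
from   dba_users
where  authentication_type = 'NONE'
order  by username;
EOF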

Out of interest, checking in some other familiar locations, we can see that we do still seem to have some hashed values in the SYS.USER$ table :

SQL> select name, spare4 from sys.user$ where name = 'SCOTT';

NAME                 SPARE4
-------------------- --------------------------------------------------------------------------------
SCOTT                S:0000000000000000000000000000000000000000C39C2BC7429D5EB08A12;T:000000000000000


If we try to connect to our newly created “NO AUTHENTICATION” user we get the standard ORA-01017 error. For example:

$ echo "show user" | sqlplus -s scott/tiger
ORA-01017: invalid username/password; logon denied

And finally, we can still of course extract the account’s DDL using DBMS_METADATA and it shows the NO AUTHENTICATION clause as we’d expect:

SQL> select dbms_metadata.get_ddl('USER','SCOTT') from dual;






We can easily switch back and forth between a “schema only account” and a normal account using the familiar syntax:

SQL> alter user scott identified by tiger;

SQL> alter user scott no authentication;


One caveat to this is regarding “administrative privileges” (i.e. the SYSDBA, SYSOPER, SYSRAC, etc. privileges stored in password files and viewable via v$pwfile_users).

Schema only accounts can be assigned all normal database roles and privileges but not the administrative privileges. If the user is assigned an administrative privilege it cannot be converted into a schema only account. And similarly schema only accounts cannot be granted the administrative privileges:

SQL> grant sysdba to scott;
grant sysdba to scott
ERROR at line 1:
ORA-40366: Administrative privilege cannot be granted to this user.

SQL> alter user scott identified by tiger;

User altered.

SQL> grant sysdba to scott;

Grant succeeded.

SQL> select * from v$pwfile_users;

-------------------- ----- ----- ----- ----- ----- ----- ------------------------------ --------------------
------------------------ --------- --------- -------------------- -------- --- ----------
                                                                  PASSWORD YES          0

                                                                  PASSWORD YES          0

                                                                  PASSWORD YES          0

                                                                  PASSWORD YES          0

                                                                  PASSWORD NO           3

SQL> alter user scott no authentication;
alter user scott no authentication
ERROR at line 1:
ORA-40367: An Administrative user cannot be altered to have no authentication type.

SQL> revoke sysdba from scott;

Revoke succeeded.

SQL> alter user scott no authentication;

User altered.



SESSION_USER for Schema Only Accounts

If we can’t connect to the database using the application schema “how can we fully develop/populate it?” is the obvious question. We can do some things like create tables in other schemas quite easily. But other more advanced objects such as database links are more tricky to create in other schemas.

So it may seem from the previous sections that the solution is to temporarily add a password, connect, do what’s required, and then change it back to being schema only. However, that actually is not the intended workflow.

Rather the proper way is to make a “Proxy Authenticated Connection”. This feature was first introduced with Oracle Database 10g, has many other advantages, and is the proper way to change your SESSION_USER to the schema only account you want to create objects in.

In this simple example I create a DBA privileged (normal) account called “SIMON_DBA” and then use it to proxy into the schema only SCOTT account created previously.

SQL> create user simon_dba identified by testing;

User created.

SQL> grant dba to simon_dba;

Grant succeeded.

SQL> alter user scott grant connect through simon_dba;

User altered.

SQL> connect simon_dba[scott]/testing
SQL> select sys_context('USERENV','SESSION_USER') as session_user,
  2         sys_context('USERENV','SESSION_SCHEMA') as session_schema,
  3         sys_context('USERENV','PROXY_USER') as proxy_id,
  4         user
  5    from dual;

SESSION_USER   SESSION_SCHEMA PROXY_ID       USER
-------------- -------------- -------------- --------------
SCOTT          SCOTT          SIMON_DBA      SCOTT


As you can see I effectively became the schema only account SCOTT meaning I can do whatever schema work is required but I actually logged in using the SIMON_DBA credentials. Incidentally this is how we should do maintenance on all application accounts regardless of the version of Oracle being used (as long as it’s at least 10g).

For more details on proxy authenticated connections, refer to my earlier blog post.


Alternatives from Earlier Releases

In releases prior to Oracle Database 18c we can effectively add somewhat similar functionality by either:

  1. Locking the schema account
  2. Setting an impossible password
  3. Creating the user with external authorization using a certificate

Locking is not ideal as locked accounts may be unlocked (i.e. during refreshes, application/schema upgrades or modifications, etc). So it runs the risk of being unlocked and hence accessed.

Additionally, a locked schema gives a bad actor information that the schema does indeed exist (and hence may be worth trying to brute force attack). Therefore effectively locking by setting an impossible password may be a bit better; a further discussion of this can be found elsewhere.

And finally, we can create database users with external authentication via a certificate. This is probably the closest match in functionality with the minimum risk of accidental or inadvertent modifications permitting password based schema access.
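For reference, minimal sketches of the locking and certificate-based alternatives are shown below. The user names and distinguished name are illustrative only, the external authentication option additionally needs the usual SSL/wallet configuration to be usable, and the “impossible password” variant is version dependent so it is not shown:

sqlplus -s / as sysdba <<'EOF'
-- Alternative 1: lock the application schema (it can be unlocked again, so not bullet-proof).
alter user app_owner account lock;

-- Alternative 3: create the schema with external (certificate) authentication,
-- so that no database password exists at all.
create user app_owner2 identified externally as 'CN=app_owner,OU=example';
EOF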


Additional Thoughts

Another logical question might be whether container database “common accounts” can be created as “schema only”? The answer is “yes”:

SQL> create user c##tester no authentication;

User created.

SQL> select username, authentication_type, password_versions from dba_users where username = 'C##TESTER';

USERNAME             AUTHENTICATION_TYPE  PASSWORD_VERSIONS
-------------------- -------------------- -----------------
C##TESTER            NONE

SQL> select con_id, username, authentication_type, password_versions from cdb_users where username = 'C##TESTER';

    CON_ID USERNAME             AUTHENTICATION_TYPE  PASSWORD_VERSIONS
---------- -------------------- -------------------- -----------------
         1 C##TESTER            NONE
         3 C##TESTER            NONE

SQL> alter session set container = TEST1;

Session altered.

SQL> select username, authentication_type, password_versions from dba_users where username = 'C##TESTER';

USERNAME             AUTHENTICATION_TYPE  PASSWORD_VERSIONS
-------------------- -------------------- -----------------
C##TESTER            NONE


So really there is no difference whether the user is common or local in the multitenant architecture.

And one final point: schema only accounts are only applicable to RDBMS databases, not ASM databases.



Conclusion

This isn’t the most extravagant new feature of Oracle Database 18c. But it’s simple and adds some nice new functionality. It makes the ability to create truly password-less application schemas trivially simple. There were similar techniques that could be used with previous database releases but those have some limitations or risks – this approach seems simpler and better.

Paying someone to learn for you? - 16-Mar-2018 07:45 - Tim Hall

In my previous post called Nobody is an expert at Oracle Database 18c I said,

“There will be lots of people and companies who will happily take your money to learn the new tech, but that’s probably the subject of another post…”

Here is that post. 🙂

I see two distinct scenarios, and have two very different attitudes to them.

The Good

When something is new to the market, there is nobody that has the skills you need, but there are people/companies with a wealth of experience and a proven track record of picking up new skills and delivering good results.

I totally understand someone wanting to pay these types of people/companies to help them get to their goal. This is mutually beneficial as the customer gets to their goal in a shorter space of time than if they did it all themselves, and the person/company doing the investigation and work gets new skills and real world experience at the new tech.

The important point is both sides need to clearly understand this situation before work gets underway, as it might affect timescales and cost of the work.

The Bad and the Ugly

There are a lot of people and companies that are using their clients to improve their skills without the customer being aware of the relationship they are in. To me this is robbery. They have represented themselves as something they are not and that is a problem.

In some cases they might still deliver a reasonable product in a reasonable timescale, but in some cases not. 🙁 In some ways the result is not really the issue; it’s the deceit that is the problem. It’s usually really easy to spot. As the saying goes, you don’t have to know how to swim to recognise when someone is drowning.


I’m surprised how many people don’t ask for reference customers, or if they do, they never follow up on them properly. You can tell a lot by what is said and what is not said if you are asking the right questions. If the person/company doesn’t have a track record of delivering the thing you are asking for, do you really want them?

I emphasised the thing you are asking for, because it is a really important point. A person/company might be awesome at some related stuff, which might not be exactly what you are looking for, but might fool you into thinking they know what they are doing. To use myself as an example, I know a bit about Oracle databases and WebLogic, but does that mean I’m the right person to work on OBIEE that uses Oracle databases and WebLogic? I can tell you the answer to that. It’s NO! If I were an expert at eBusiness Suite does that mean I’m the right person to work on Oracle Cloud Applications? I can tell you the answer to that too. It’s NO!

As I mentioned before, if both parties understand what they are getting into I totally understand why you would want to pay someone to learn some new stuff for you. If you as a customer don’t know this is what is happening, that’s a problem!



Paying someone to learn for you? was first posted on March 16, 2018 at 12:45 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Converting Normal Disk Groups to Flex Disk Groups in Oracle 18c

In this post, you will see how to convert a conventional disk group (i.e. one created before Oracle ASM 18c) to an Oracle ASM flex disk group without using the restrictive mount (MOUNTED RESTRICTED) option.

Migrating (converting) to a flex disk group can be performed for a NORMAL or HIGH redundancy disk group with the CONVERT REDUNDANCY TO FLEX clause of the ALTER DISKGROUP SQL statement. You cannot migrate an EXTERNAL redundancy disk group to a FLEX redundancy disk group.
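In its simplest form the conversion is a single statement against a mounted NORMAL or HIGH redundancy disk group, along these lines (DATA is just an example disk group name):

sqlplus -s / as sysasm <<'EOF'
-- Convert an existing NORMAL (or HIGH) redundancy disk group to FLEX redundancy.
alter diskgroup DATA convert redundancy to flex;

-- Check the result.
select name, type from v$asm_diskgroup where name = 'DATA';
EOF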

Below are steps to convert Normal Redundancy to Flex.

[oracle@RAC18C ~]$ sqlplus / as sysasm

SQL*Plus: Release Production on Fri Mar 16 03:00:59 2018

Copyright (c) 1982, 2017, Oracle. All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release - Production

SQL> CREATE DISKGROUP TEST_NORMAL_TO_FLEX NORMAL REDUNDANCY DISK '/dev/oracleasm/disks/DISK4','/dev/oracleasm/disks/DISK5','/dev/oracleasm/disks/DISK6' ATTRIBUTE 'au_size'='4M', 'compatible.asm' = '18.0', 'compatible.rdbms' = '18.0','compatible.advm' = '18.0';

Diskgroup created.


SQL> select name, type from v$asm_diskgroup;

NAME                           TYPE
------------------------------ ------
OCR                            EXTERN
TEST_NORMAL_TO_FLEX            NORMAL


SQL> alter diskgroup TEST_NORMAL_TO_FLEX convert redundancy to flex;

Diskgroup altered.


SQL> select name, type from v$asm_diskgroup;

NAME                           TYPE
------------------------------ ------
OCR                            EXTERN
TEST_NORMAL_TO_FLEX            FLEX


Stay tuned for More articles on Oracle 18c 

Thank you for giving your valuable time to read the above information.

If you want to be updated with all our articles send us the Invitation or Follow us:

Skant Gupta’s LinkedIn:

Joel Perez’s LinkedIn: Joel Perez’s Profile

Anuradha’s LinkedIn: Anuradha’s Profile

LinkedIn Group: Oracle Cloud DBAAS

Facebook Page: OracleHelp

The post Converting Normal Disk Groups to Flex Disk Groups in Oracle 18c appeared first on ORACLE-HELP.

New look of ASMCA in Oracle 18c - 15-Mar-2018 16:14 - Skant Gupta

Oracle ASM Configuration Assistant (ASMCA) supports installing and configuring Oracle ASM instances, disk groups, volumes, and Oracle Automatic Storage Management Cluster File System (Oracle ACFS). In addition, you can use the ASMCA command-line interface as a non-GUI utility.

In the Oracle 18c database, ASMCA comes with a new GUI interface which looks cool and is easy to use. Mainly, we can navigate to all functionality from the sidebar.

Let’s see how it looks in Oracle 18c: invoke asmca after setting the Oracle Grid home.
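For example, something like the following, where the grid home path is only an illustration for your own environment:

# Point the environment at the Grid Infrastructure home, then launch the GUI.
export ORACLE_HOME=/u01/app/18.0.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
asmca &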

As we can see, the welcome page contains a sidebar to navigate to the different functionality.

On clicking ASM Instances, we are able to see which instances are running.

On clicking Disk Groups, we are able to see disk group details. By right-clicking the respective disk group, we can edit and modify the settings related to it.

On clicking the ACFS tab, we can get details on the ACFS file systems.

On clicking Settings, we can insert the root credentials, so that asmca automatically runs the scripts that require root.

Stay tuned for More articles on Oracle 18c 

Thank you for giving your valuable time to read the above information.

If you want to be updated with all our articles send us the Invitation or Follow us:

Skant Gupta’s LinkedIn:

Joel Perez’s LinkedIn: Joel Perez’s Profile

Anuradha’s LinkedIn: Anuradha’s Profile

LinkedIn Group: Oracle Cloud DBAAS

Facebook Page: OracleHelp

The post New look of ASMCA in Oracle 18c appeared first on ORACLE-HELP.