
At the earliest levels of athletics, coaches are asked to teach players the basic tenets of teamwork and sportsmanship and to develop their fundamentals. At the highest levels, however, coaches are tasked with leading a team to success, often defined as a national or professional championship. As participants gain experience and their skill sets become more refined, the coach’s role changes. In the beginning, we are teaching new fundamentals and skills, trying to stoke a passion for the sport in a young, impressionable mind. At the highest levels, the coach must rely on experience and expertise to put the team and its players in a position to succeed.

A coach at the college or professional level must constantly survey what is happening and anticipate what will happen next based on experience. In the professional world, the best project managers lead each day in the same way that collegiate and professional athletic coaches do. We have a team with diverse backgrounds, skill sets, experiences, and motivations, all working toward the same goal. We remove obstacles, anticipate changes, understand who is on the team and what they are capable of, and apply a firm yet flexible management approach to set the entire project up for success. At CheckPoint Consulting, we have developed a methodology that allows our PMs to do just that, based on our decades of experience and success. Our methodology is rooted in the Project Management Institute’s (PMI) processes and knowledge areas and adapted to fit the ever-changing needs of our industry, providing a best-in-class experience to our clients.

CheckPoint’s consultants, technicians, and project managers come to each project with unmatched expertise and experience, allowing them to anticipate issues ahead of time, manage risk, control costs, and deliver on time. With some of the most knowledgeable and well-respected people in the industry, we will work side by side with you throughout the project life cycle and train you to use the product confidently the moment you go live.

Contact us to learn more about how we can help coach your next project.

Fun with Oracle Analytics Cloud (OAC): Creating Essbase Cubes - 16-Aug-2017 05:20 - Performance Architects

Author: Andrew Tauro, Performance Architects

Just like there are multiple ways to skin a cat, there’s more than one way to create an Essbase cube in Oracle Analytics Cloud (OAC). While the best way to migrate on-premises Essbase cubes to OAC is the standalone “EssbaseLCMUtility” tool, there are three ways I have used so far to create cubes from scratch: use the Web UI; build an Application Workbook by hand (or from a template); or use the Cube Designer. The latter two are the focus of this blog.

The Application Workbook is essentially a Microsoft Excel workbook that contains a predefined set of tabs, with the contents arranged in a predetermined manner. What that means is the workbook has a bunch of tabs like this:

Each of these tabs serves a particular purpose, but from what I can tell only the first two are a “must” when creating the application:

The “Essbase.Cube” worksheet defines the application and database names, which are required information when creating a cube. In addition, this sheet is used to define the cube dimensions:

“Cube.Settings” and “Cube.Generations” define properties of the Essbase database. The former defines some crucial cube information, such as whether it is going to be a block storage option (BSO) or aggregate storage option (ASO) cube, and if it will allow for duplicate member names.

The remaining tabs populate the dimensions (“Dim” tabs), data (“Data” tabs) and/or define calculation scripts (“Calc” tabs) for the cube. If you are familiar with building Essbase dimensions or data files and/or writing calc scripts, these will look very familiar.
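To make the tab structure a little more concrete, here is a rough Python (openpyxl) sketch that lays out a skeleton workbook with those tabs. The tab names come from the workbook described above, but the cell contents and positions are purely illustrative; a real application workbook should be built from Oracle’s template so the values land exactly where the import expects them.

    # Skeleton application workbook built with openpyxl; tab names follow the
    # post, but the cell layout below is illustrative, not Oracle's template.
    from openpyxl import Workbook

    wb = Workbook()

    # Essbase.Cube: application/database names plus the dimension list
    cube = wb.active
    cube.title = "Essbase.Cube"
    cube.append(["Application Name", "Sample"])   # hypothetical application
    cube.append(["Database Name", "Basic"])       # hypothetical cube
    cube.append([])
    cube.append(["Dimensions"])
    for dim in ["Year", "Measures", "Product", "Market", "Scenario"]:
        cube.append([dim])

    # Cube.Settings / Cube.Generations: storage option, duplicate members, etc.
    settings = wb.create_sheet("Cube.Settings")
    settings.append(["Storage Option", "BSO"])               # or "ASO"
    settings.append(["Allow Duplicate Member Names", "No"])
    wb.create_sheet("Cube.Generations")

    # Dim / Data / Calc tabs: one per dimension build, data file, or calc script
    for name in ["Dim.Product", "Dim.Market", "Data.Basic", "Calc.CalcAll"]:
        wb.create_sheet(name)

    wb.save("sample_application_workbook.xlsx")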

For those of you who are not familiar with these items, there is the option of using the Cube Designer.

This is an add-in for Microsoft Excel that you can download via Smart View from your OAC instance.

The “Cube Designer” menu item provides tabbed screens for creating the application workbook. Walking through the tabs allows you to set up the application workbook, and the “To Sheet” and “From Sheet” options facilitate reading from, and pushing to, the active workbook:

Once complete, the cube can be created via the web user interface as an import.

This has greatly reduced the complexity of creating Essbase cubes, and is just one of the ways that OAC is redefining the way we perform analytics using Essbase.

As we explore the capabilities of OAC, we will continue to share our thoughts with you, so stay tuned. While you take this journey with us, if you have any questions on this, feel free to send us a note at communications@performancearchitects.com and we will be in touch.

OAC Series: Patching OAC - 15-Aug-2017 16:30 - Opal Alapat
Last night we decided to patch our OAC instance to the awaited 106 release for Essbase Cloud. There were actually 2 patches made available, one for Essbase Cloud and one for BI Cloud. I installed them consecutively (although you can run all patches at once since they hit different VMs – we haven’t tried this …

Kim Kardashian can get my Essbase server updates - 15-Aug-2017 10:11 - Robert Gideon

I had the great pleasure of presenting at Kscope17 on the power of Essbase CDFs.  At the end of my CDF presentation this year, I gave a live demonstration of a little CDF that is designed to spark the imagination.

In 2009, Matt Milella presented on CDFs at Kaleidoscope and talked about the top 5 CDFs that his team had created.  At the end, he showed a very cool demonstration of how his Essbase server could send out a tweet using a CDF. This was an amazing display and really inspired me to figure out how to create CDFs.

So, as an homage to Matt’s blog post about how Ashton Kutcher can get his Essbase server updates, I have created an updated version of the Twitter CDF. As Matt states, he used JTwitter back in 2009.  Unfortunately for me, Twitter has long since changed their authentication to use OAuth for security which means that JTwitter doesn’t work anymore.

I did some searching and found Twitter4J, an unofficial Java library for the Twitter API. This library handles the OAuth authentication and allows submitting new status updates, sending direct messages, searching tweets, and more. Between Matt’s original Twitter code, the Twitter4J sample code, and some trial and error, I was able to get the library set up and create a Java class that could send my tweets.

  1. The first step was to download the Twitter4J library.  I added the twitter4j-core-4.0.3.jar file into my lib folder in JDeveloper and added it to my classpath.
  2. Next, I had to set up a new Twitter account (EssbaseServer2).
  3. Then, I went to http://twitter.com/oauth_clients/new and set up my application to get the OAuth keys needed for my code to authenticate.
    TwitterApp
  4. Once I gathered the keys, I put them into a .properties file called “EssbasTweet.properties” (see the sketch after this list). This file will be placed on my Essbase server in the %EOH%/products/Essbase/EssbaseServer/java/udf directory. Placing the file into the …/java/udf directory puts it into Essbase’s Java classpath, and Essbase will be able to access the file when it’s needed.
    propertiesFile
  5. Next, I wrote my code (based heavily on Twitter4J’s sample code), compiled it, deployed the code to a JAR and placed the JAR on the Essbase server.
    SourceCode
  6. I registered the CDF manually in EAS.
    RegisterCDF
  7. I was able to pretty much reuse Matt’s original calc script as he had it back in 2009 with the exception of using an @CalcMgr function instead of one of the older data functions.
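For reference, a Twitter4J properties file typically looks something like the sketch below. These are Twitter4J’s standard twitter4j.oauth property names; the author’s own class reads “EssbasTweet.properties” and may well expect different keys, so treat this purely as an illustration of the idea.

    # EssbasTweet.properties (illustrative) - Twitter4J's standard key names;
    # the keys the author's custom loader expects may differ.
    twitter4j.oauth.consumerKey=your-consumer-key
    twitter4j.oauth.consumerSecret=your-consumer-secret
    twitter4j.oauth.accessToken=your-access-token
    twitter4j.oauth.accessTokenSecret=your-access-token-secret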

Does it work? Well, go and check out the @EssbaseServer2 account for yourself.

While publicly tweeting your data might not be the best idea, hopefully this serves as a spark to ignite your imagination of the power of CDFs. Anything you can do in Java can be implemented in an Essbase calculation. Some attendees of my presentation were pretty excited about the possibilities of communicating with their users by submitting messages using Slack or updating a status on a SharePoint site. The possibilities are limited only by your imagination.

Thanks again to Matt for presenting on CDFs eight years ago. It definitely inspired me to learn more and hopefully this will inspire others to do the same.

There has been some uncertainty about the fate of CDFs with OAC and the Essbase cloud service, but never fear: CDFs are supported, though they are limited to local CDFs. More on that in the future.


FDMEE - Custom Scheduler - Part 2 - 15-Aug-2017 04:01 - John Goodwin
In the last part I went through an example solution for building a custom scheduler that can be accessed through the FDMEE UI. It was created using a combination of Jython and the ODI Java API. The post covered in detail the process of creating a new batch schedule, but the approach could easily be adapted to schedule data and metadata load rules or custom scripts.

In this post I want to continue where I left off and move on to updating, deleting, and viewing active schedules, and also look at the possibilities of running these tasks outside of the FDMEE UI.

Let us start off with updating an existing schedule. A custom script was registered with parameters defined to select the schedule, set the start and end dates, and set the repetition in minutes or hours. These parameters are displayed when the custom script is executed and then passed into the Jython script, which updates the schedule.


The parameter “SCHEDULE_ID” has been defined to call a query called “Batch Info”. This SQL query runs against the ODI repository, which holds the scheduling information, and has been defined to return an understandable description and the ODI internal ID for the schedule. This ID is important, as it will be used with the ODI Java API to access the selected schedule.


You will see the query in action when I execute the custom script, which now appears in the UI under the “Custom Batch Scheduler” group it has been assigned to.


Executing the “Update Batch Schedule” script displays a window with the parameters that were defined in the script registration.


Now if I select the schedule to update, the SQL query comes into action and the script name, start and end dates, repetition, and schedule ID are displayed ready for selection.


It would be nice if there were functionality to base a SQL query on the results of another SQL query, but unfortunately that is not yet available in FDMEE, so the remaining input fields need to be entered manually. As the schedule is being updated, this is not much of an issue because you would need to enter new values anyway.


Now that the new schedule information has been entered, the script can be run, and a custom message is displayed if the update was successful.


The process logs also display the status of executing the custom script.


The FDMEE log associated with the process contains the updated schedule information.


To prove the schedule has been updated I can switch over to the ODI console where it should have a new start date of 20th of August at 8am.


As I explained in the last part, the end date has to be calculated and set as a maximum cycle duration. The execution cycle includes the interval between repetitions, which correctly matches the value entered when running the custom script.


To prove the scheduling information had been updated correctly, I could have also just run the update schedule custom script again and selected the option to return the active schedules; this would have returned the updated values from the ODI repository.


What you do need to watch out for when populating parameter value fields is that there is currently a 50-character limit. The SQL query will return longer values with no problem, for example:


As the above text is over 50 characters, after selecting it you would be hit with an unfriendly ADF error.


The magic to update the schedule is all handled in the Jython script. The script follows the same concept as what I went through in the last part, so there is no need to show the whole script again.

The parameter values are stored, and the schedule duration is calculated by finding the difference in minutes between the start and end dates.
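Since the full script is not reproduced here, a minimal Jython-style sketch of just that duration logic might look like the following, with an assumed date format and hard-coded sample values standing in for the real parameter values.

    # Minimal sketch of the duration calculation, not the author's actual code.
    # The date format and sample values are assumptions for illustration.
    from datetime import datetime

    date_format = "%d-%m-%Y %H:%M"                      # assumed input format
    start_date = datetime.strptime("20-08-2017 08:00", date_format)
    end_date = datetime.strptime("27-08-2017 08:00", date_format)

    # Maximum cycle duration expressed as whole minutes between the two dates
    delta = end_date - start_date
    duration_minutes = delta.days * 24 * 60 + delta.seconds // 60
    print(duration_minutes)                             # 10080 for a one-week window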


A connection is made to the ODI repository and then the schedule is returned by searching based on the ID that was passed into the script.

The values of the schedule are then updated to the ones defined when running the script; these values are then committed and the agent schedule is updated.


So that is creating and updating schedules covered off; on to deleting a schedule.

Once again a custom script is registered. This time there is only a requirement for one parameter, which is to select the schedule to delete; the query is exactly the same as the one used in the update script.


After the script has been registered, it is available in script execution under the group which it was registered to.


When the script is executed there is one parameter available to select the schedule to delete.


Selecting the magnifying glass will run the SQL query to return all the active batch schedules.


I selected the first schedule in the list to be deleted.


A custom message is displayed if the delete was successful.


If I run the script again and view the schedules it confirms the schedule has definitely been deleted.


The Jython script is much simpler this time, as all that is required is to retrieve the ID of the schedule from the custom script parameter values, connect to the ODI repository, find the schedule by ID, delete the schedule, commit, and then update the ODI agent scheduler.


Just to recap, that means with the custom scheduler solution we have the ability to create, update, and delete schedules.

So how about viewing the active schedules? Well, you can certainly run the custom scripts as I have already shown to display the schedules, but personally I don’t feel that is the nicest solution. What I did to overcome this was to create a report in FDMEE.

Before creating the report, a SQL query was created and registered; the query was similar to the one used in the update/delete scripts.


The query produces the following output that will be used in the FDMEE report.


Once the query validated, the Generate XML button was selected; this produces an XML file which can be loaded into Word using BI Publisher Desktop to generate a template file.

Now that everything is in place, a new report definition was created; the query, template, and group were assigned to the definition. There was no need for any parameters for this report.


The report is available against the report group it has been assigned to.


When executing the report you have the option to generate it in either PDF, HTML, or Excel (XLSX) format.


I selected XLSX; depending on browser settings, it will either open directly in Excel or you will have the option to download it and then open it in Excel.


So this report can be generated at any time to provide up-to-date batch schedule information.

Finally, on to running activities such as creating a new schedule outside of the FDMEE UI. If you are not aware, from 11.1.2.4.210 it is possible to use a REST resource to run tasks such as data load rules, batches, custom scripts, and reports; I have covered the functionality in previous blogs if you would like to understand it in more detail.

To run a custom script you can make a POST request to the following URL format:

http(s)://<webserver>:<port>/aif/rest/V1/jobs/

The body of the POST will need to include, in JSON format, the job type (which will be REPORT), the report format type (which will be SCRIPT), the job name (which will be the name of the custom script to run), and the parameter display names and values.

An example to run the script to create a new batch schedule using a REST client would be:
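The original post shows this as a REST-client screenshot; as a rough stand-in, here is a Python (requests) sketch of that kind of POST. The host, credentials, JSON key names, and parameter string are assumptions based on the description above rather than the official documentation.

    # Rough sketch of the POST described above using the requests library.
    # Host, credentials, JSON keys, and the parameter string are assumptions
    # based on this post's description, not confirmed FDMEE REST API details.
    import requests

    base_url = "http://webserver:19000/aif/rest/V1/jobs"        # placeholder host/port
    payload = {
        "jobType": "REPORT",
        "reportFormatType": "SCRIPT",
        "jobName": "Create Batch Schedule",                     # the custom script name
        "parameters": "Batch Name=DAILY_LOAD;Start Date=20-08-2017 08:00"   # illustrative
    }

    response = requests.post(base_url, json=payload,
                             auth=("admin", "password"), verify=False)
    print(response.status_code, response.json())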


After the post has been made a response will be returned with FDMEE process information such as ID and status.


If the status is in a running state a GET request can be made to keep checking the status.
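A simple polling loop along those lines might look like the sketch below; the endpoint, credentials, and status field names are again assumptions used purely for illustration.

    # Sketch of polling a returned process ID until the job finishes. The URL,
    # credentials, and response field names are assumptions for illustration.
    import time
    import requests

    base_url = "http://webserver:19000/aif/rest/V1/jobs"
    auth = ("admin", "password")
    job_id = 123                            # process ID returned by the POST above

    while True:
        job = requests.get("%s/%s" % (base_url, job_id), auth=auth, verify=False).json()
        if job.get("status") != "RUNNING":  # assumed status representation
            break
        time.sleep(10)

    print(job)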


You could then convert this into a script which could be called by an automated process with parameters passed in, or, say, run from a user’s client machine instead of having to log into Workspace.

I created the following example script using PowerShell, which uses the REST resource to run the custom script to create a new schedule. I know it requires error trapping, but it is just to give an idea.



When the script is run, it takes user input to define the parameters of the schedule, converts them into JSON, makes a POST to the REST resource, and then keeps checking the status until the process has successfully completed.


Running the report again to view the scheduled batches confirms that the new schedule has been created.


Job done: a full solution to create, update, delete, and view schedules.