Friday, 25 September 2015

Compiling Oracle code from Atom text editor

Just yesterday I was watching a webinar titled "Fill the Glass: Measuring Software Performance", which featured Jorge Rimblas. You can check out the video on Vimeo at the following URL: https://vimeo.com/140068961. It gives an insight into a project Jorge is working on, with various tips here and there - so if you have a minute (rather, an hour) to spare, go and check it out.

One of the sections that particularly caught my attention was the fact that Jorge was able to compile packages from his editor of choice, Sublime Text (this happens at around the 18:20 mark). Because I like free (and open source) software, I actually use the Atom text editor. Since Atom is also centred around a plugin ecosystem, I was curious how he was able to do this - and whether I'd be able to accomplish the same in Atom.

So, first of all I did some hunting to see how it was done in Sublime. This led me to a great post by Tim St. Hilaire - http://wphilltech.com/sublime-text-for-oracle-apex-developers/ - which covers all the plugins he uses with Sublime. The section of interest is titled "Building Code" - where I discovered Sublime uses a build system embedded in the application itself. Tim also provides some sample scripts that he uses for the actual compilation, so I grabbed them (it's actually a zip file with a couple of examples) as a reference point.

The next thing was to find out how to build code in Atom. A search for "atom editor build system" immediately led me to the build package - https://atom.io/packages/build. I installed that and began hacking away to get this build process working! As per the documentation, the files in your project directory determine which build process gets initiated. If your desired build process isn't built in (which it isn't for Oracle), you can add a file named .atom-build.json to your project.

Now, because I'm on Ubuntu, my script is a shell script, and I'm going to place it in /usr/local/bin - and, it goes without saying, this file should be executable. Ideally, this will all be packaged into a plugin that extends the build package, but that's for a future date.

It's worth mentioning here that you will, of course, need an Oracle client set up on your system.

So, my .atom-build.json, in the root of my project folder, will look like this:

{
  "cmd": "/usr/local/bin/atom_compileOracle.sh",
  "name": "Build Oracle code file",
  "args": [ "${host}", "${port}", "${sid}", "${user}", "${password}", "{FILE_ACTIVE}" ],
  "sh": true,
  "cwd": "{FILE_ACTIVE_PATH}",
  "env": {
    "user": "hr",
    "password" : "hr",
    "host" : "example.com",
    "port" : "1521",
    "sid" : "XE"
  },
  "errorMatch": [
  ],
  "keymap": "",
  "targets": {
  }
}
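I left errorMatch empty above, as I haven't wired up error navigation yet. For reference, the build package matches compiler output using regular expressions with named capture groups (file, line, col). As a rough, untested sketch against the "ERROR at line n" text SQL*Plus prints (note SQL*Plus doesn't print the file name, so jumping straight to the file would need more work):

```json
"errorMatch": [
  "ERROR at line (?<line>\\d+)"
]
```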



So, for each project, you will just update the environment settings accordingly. To support different environments (e.g. dev and prod), I would guess the targets could be used - I haven't got that far yet.

Then, the actual script would look like the following (again, based on Tim St. Hilaire's examples):

#!/bin/bash
echo "Running"
echo "host: $1"
echo "port: $2"
echo "sid: $3"
echo "user: $4"
# echo "password: $5"
echo "Compiling file: $6"

sqlplus -S -L "$4/$5@$1:$2/$3" << EOF

    set linesize 200
    set tab off
    set serveroutput on

    COLUMN MY_USER FORMAT A20
    COLUMN DB FORMAT A20
    COLUMN SID FORMAT A10
    COLUMN NOW FORMAT A35
    --Show the details of the connection for confirmation
    select
        user as MY_USER
      , ora_database_name as DB
      --, '$3' as SID
      , systimestamp as NOW
    from dual;

    @$6

    COLUMN MY_USER  CLEAR
    COLUMN DB      CLEAR
    COLUMN SID      CLEAR
    COLUMN NOW      CLEAR

    show error

EOF


So, in SQL Developer I confirm I have no packages:



Then, I have the config file for database settings:


So, this is a good template that can be copied and updated for your other Oracle projects, so you can compile directly. You would of course likely want to exclude it from version control, since the password is included.

Now, I can create a code file and compile it with the keyboard shortcut ctrl+alt+b (cmd+alt+b on Mac).
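For a quick test, the code file can be something minimal - for example, a throwaway package spec (demo_pkg is just an illustrative name):

```sql
create or replace package demo_pkg
as
    procedure say_hello;
end demo_pkg;
/
```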


And, just to verify, in SQL Developer I refresh the package list and see my code file was in fact compiled to the database.


Saturday, 19 September 2015

Making connections to the Oracle Database from Golang

I posted the other day about getting going with Golang[1], and as mentioned, am planning to do a series of posts on various aspects of the language - as a means to help with the learning process.

Being an Oracle developer, it seemed logical I would want to be able to make connections to the Oracle database. Built into the language core is an SQL interface (for issuing queries and statements), provided through the package "database/sql". The other side of it is that you need a valid driver. A list of available drivers is on the Go wiki[2]. You will see that there are two listed for Oracle. How do I judge which project to use? One has considerably more stars, and the online commentary I've seen also tends to suggest that same package: https://github.com/mattn/go-oci8.

Driver setup

Before we get to the driver, you need to make sure you have an Oracle client and the SDK installed on your system. For this, I followed the steps as per the Ubuntu wiki[3] - where you would at minimum want to install the instant client and the SDK. Once the installation is complete, you should end up with an $ORACLE_HOME that points to /usr/lib/oracle/11.2/client64 or something similar, depending on your system's architecture and the version of Oracle you installed.

Within $ORACLE_HOME, you should have 3 folders:
  1. bin
  2. include
  3. lib
At this point, if you try to install the aforementioned driver, you will get an error:

$ go get github.com/mattn/go-oci8
# pkg-config --cflags oci8
Package oci8 was not found in the pkg-config search path.
Perhaps you should add the directory containing `oci8.pc'
to the PKG_CONFIG_PATH environment variable
No package 'oci8' found
exit status 1

So, before you install the driver, you need to do the pkg-config set up. This application should already be installed on your system, but if not you can do so with sudo apt-get install pkg-config. This program is just a way to provide the necessary details for compiling and linking a program against a library[4].

The project itself does provide an example of a package config file for oci8, however its example is for Windows, so the config file I came up with was:

prefixdir=/usr/lib/oracle/11.2/client64
libdir=${prefixdir}/lib
includedir=${prefixdir}/include

Name: OCI
Description: Oracle database driver
Version: 11.2
Libs: -L${libdir} -lclntsh
Cflags: -I${includedir}

I just grabbed the path to $ORACLE_HOME and placed it in the variable prefixdir, since the config file doesn't know about environment variables.

Before installing the driver, you need to make this file available in a place that pkg-config knows about. There are two options here. First, you can place it in the system-wide pkg-config directory, /usr/lib/pkgconfig, and the system will automatically find it. The other option, if you have it in some other location, is to export that location in the PKG_CONFIG_PATH environment variable. Once that is set up, you should be able to successfully install the driver with: go get github.com/mattn/go-oci8

You will now find both the compiled version and the source for that package in your GOPATH (in the pkg and src folders respectively).

Connecting and querying database

Now that we have the driver available, we can begin with our program. Most of the aspects I describe here are also documented on the go wiki[5].

First, we want to open our connection. This is done with the sql.Open function[6], where you will pass in the name of the driver, and the connection string.

db, err := sql.Open("oci8", "hr/hr@example.com:1521/xe")
if err != nil {
    fmt.Println(err)
    return
}
defer db.Close()

Opening the connection is actually deferred until you start issuing statements - so, for example, if you enter an incorrect password, you will only find out when you attempt to query the database. In this case, the Ping function[7] is useful to test that the connection is OK.

err = db.Ping()
if err != nil {
    fmt.Printf("Error connecting to the database: %s\n", err)
    return
}

Then, to the actual querying. Here we have the functions Query[8] and QueryRow[9], depending on whether you want to return a row set or a single row respectively. You will more than likely want to use bind variables in your queries - the documentation suggests the use of `?` as a placeholder, and I've also seen examples of `$1`. However, using either of those methods seemed to return the error:

sql: statement expects 0 inputs; got 1

What I've found works is what you would be used to as an Oracle developer - a number prefixed with a colon (:1). I'm assuming that's just specific to this driver's implementation. Note: if you repeat the same bind number in your query, don't expect it to be re-used - the number is just the position of the bind in the query, matched to the respective parameter index. For the result set, you'll then want to declare variables of suitable data types to store the data in, which is fetched with the Scan function.

rows,err := db.Query("select employee_id, first_name from employees where employee_id < :1", 105)
if err != nil {
    fmt.Println("Error fetching employees")
    fmt.Println(err)
    return
}
defer rows.Close()

for rows.Next() {

    var emp_id int
    var first_name string
    rows.Scan(&emp_id, &first_name)
    println(emp_id, first_name)

}

var last_name string
err = db.QueryRow("select last_name from employees where employee_id = :1", 101).Scan(&last_name)
if err != nil {
    fmt.Println("Error fetching row")
    fmt.Println(err)
}

fmt.Printf("Last name is %s\n", last_name)


If you are running a statement that doesn't return any rows (such as an insert or create statement), you would typically use the Exec[10] function. And further, if you are repeating a statement, the Prepare[11] function.

Full program (main.go):

package main

import (
    "fmt"
    "database/sql"
    _ "github.com/mattn/go-oci8"
)

func main(){


    db, err := sql.Open("oci8", "hr/hr@example.com:1521/xe")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer db.Close()
    
    
    if err = db.Ping(); err != nil {
        fmt.Printf("Error connecting to the database: %s\n", err)
        return
    }

    rows,err := db.Query("select employee_id, first_name from employees where employee_id < :1", 105)
    if err != nil {
        fmt.Println("Error fetching employees")
        fmt.Println(err)
        return
    }
    defer rows.Close()

    for rows.Next() {

        var emp_id int
        var first_name string
        rows.Scan(&emp_id, &first_name)
        println(emp_id, first_name)

    }
    
    var last_name string
    err = db.QueryRow("select last_name from employees where employee_id = :1", 101).Scan(&last_name)
    if err != nil {
        fmt.Println("Error fetching row")
        fmt.Println(err)
    }
    
    fmt.Printf("Last name is %s\n", last_name)

}

1: http://tschf.github.io/2015/09/18/getting-going-with-golang/
2: https://github.com/golang/go/wiki/SQLDrivers
3: https://help.ubuntu.com/community/Oracle%20Instant%20Client
4: http://people.freedesktop.org/~dbn/pkg-config-guide.html
5: https://github.com/golang/go/wiki/SQLInterface
6: http://golang.org/pkg/database/sql/#Open
7: http://golang.org/pkg/database/sql/#DB.Ping
8: http://golang.org/pkg/database/sql/#DB.Query
9: http://golang.org/pkg/database/sql/#DB.QueryRow
10: http://golang.org/pkg/database/sql/#DB.Exec
11: http://golang.org/pkg/database/sql/#DB.Prepare

Wednesday, 16 September 2015

Oracle developer choice awards

From June 21, Oracle opened up nominations for what they have dubbed the Oracle Developer Choice Awards, in 5 broad categories:

  1. SQL
  2. PL/SQL
  3. ORDS
  4. APEX
  5. Database Design
A panel has narrowed down the nominations in each category, and now voting has opened - until the 15th of October.

You are able to vote for more than one person in each category - the rules don't specify how many votes you get per category - but I would encourage selecting a few of the people you think are most deserving in the community and giving them an up-vote.

Note that the system doesn't allow you to undo an up-vote - the only way to reverse it is to then down-vote. So unless you feel very strongly that a person shouldn't be awarded, and wish to down-vote them, I implore you to choose wisely.

The APEX candidates

Jari Laine

Very active on the OTN APEX forum, since well before I even started working with APEX. The time he has spent assisting the community is really something to be admired. He has an APEX blog - http://jaris.blogsite.org/apex/f?p=BLOG:HOME - that he built in APEX, and open sourced! If you prefer to look at samples, go to one of his sample applications: https://apex.oracle.com/pls/apex/f?p=39006 or http://actionet.homelinux.net/apex/f?p=100, both with countless examples.


Juergen Schuster

Juergen is really active promoting APEX, so much so that he created an APEX (apeks) sticker that many people are now rocking on their laptops (the word on the street is that he even sends them to people all over the world, at no charge!). He also recently started a talk back show speaking with prominent figures in the APEX community including the APEX development team - you can find existing podcasts listed here: http://apex.press/talkshow


Paul MacMillan (fac586)

Also very active on the OTN forums, and in my opinion he will be the first to bring anyone up on poor practices that people new to APEX might be using in their systems. This can only be a good thing! He is constantly helping people get their problems resolved - and definitely deserves some recognition for all his efforts!


Morten Braten

Have you ever heard of the Alexandria PL/SQL utility library? If you haven't, it's a large collection of PL/SQL utilities in one spot! And Morten is the one behind this awesome project. If that's not a contribution to the APEX community, I don't know what is! The project was recently moved over to GitHub - https://github.com/mortenbra/alexandria-plsql-utils - so if you haven't heard of it, go and check it out.


Kiran Pawar

Yet another member of the community who has become very active on the APEX forums of late (as well as other forums, such as the ORDS one). Between Kiran, Jari and Paul, I think you should be pretty well covered with any questions you may ask on the forums!


Karen Cannell

What better way of giving back to the community than (co-)authoring an APEX-related book? Karen co-authored Agile Oracle Application Express, which is good supplementary material on how to leverage Application Express with agile methodologies.


Trent Schafer

Oh hi there, that's me! I started working with APEX about 7 years ago now, and I am a firm believer in giving back to the community, so I began answering where I could on the forums, and learning from the other posts I would come across. I honestly believe APEX has a great community! I started a blog as a means to share my knowledge and hopefully create a good reference for people. If you think I am worthy, I'd appreciate any votes you send my way!

Prototyping usage of an OAUTH API

There are many services out there that offer an OAUTH2 API - whereby you get an access token, associated with your account, and you use that with requests rather than logging in for each request. It's a solid design because, for one, you as a user can opt to grant permission to only parts of the system, and for another, you can easily revoke access for individual applications.

One problem? It is fairly unlikely there is a client library built for PL/SQL or APEX use. For instance, Google provides libraries for[1]:
  1. Java
  2. JavaScript
  3. .NET
  4. PHP
  5. Python
  6. Objective-C
Another example, Instagram provides[2]:
  1. Python
  2. Ruby
It's not too hard to build a client in PL/SQL once you understand the flow of OAUTH. I started working on one for Google services, which is hosted over on my GitHub profile. Before you get too far with that though, you may like to test the requests you are attempting to work with. The providers I have seen tend to offer an API test tool, however you may like to create templates that you can use down the track without relying on their test tools.

So here, I'd like to introduce you to a neat little tool called Postman. I'm sure there are similar apps out there, but I'll be focusing on this particular tool. This is an app that is available through the Chrome Web Store[3]. And because it's offered through the Chrome Web Store, it is cross platform - you can just launch it from your system launcher; you need not even know it's a Chrome app!



Since I already have a Google library, I'll focus on the Instagram API. The business case I have is that I've posted a comment on someone's post, and I want to track it down with their API (my goal was to delete it, but Instagram only offers access to additional scopes for applications that have been submitted for review[4] - you can get basic access to Instagram though). The API doesn't offer an endpoint for comments you've posted, but you can list comments on particular media objects, to locate it that way.

The first step with any OAUTH project is to set up a client, where you can give it a name and in turn get a client id and client secret, which are needed for fetching the access token that is sent with each request. Note: a lot of the Instagram requests simply require the client id, but for the purposes of this example, I will be fetching the access token and using that in the requests.



As you can see with the redirect URI, you can use a URL suggested by Postman so that you can successfully fetch the token in the app. The other bit of information you need before requesting a token is what scopes you would like access to. You can review the available scopes for Instagram by looking at the authentication documentation[5].

So, on your request builder, within the authorization tab you can select which authentication you need. If you change that to OAuth 2.0, you will see a button "Get access token". Click that, and fill out all the required fields. If you review the authentication documentation, you will see that the authorization base URL is https://api.instagram.com/oauth/authorize/ and the access token URL is https://api.instagram.com/oauth/access_token. So, we populate that information and specify the scope "basic comments". If all goes well, you should get a success result along with the token.



Once you save the token with a convenient name, it will appear in the authorization tab, where you can either add it to the URL or as a request header. The Instagram API expects the token as a URL parameter, so you will need to add it there. Once you have the request URL defined (e.g. https://api.instagram.com/v1/users/self/), ensure the "Add token to URL" radio button is selected and click the recently saved token, "Instagram token".


So now to prototype the actual task you're attempting to perform. By looking at the list of available endpoints[6] you can start to figure out which endpoints you might need to perform the desired task. Looking at the comments endpoint, you will notice it has a DEL operation, which accepts two parameters:
  1. media id
  2. comment id
So, from here you will need to find these two identifiers. If you know the shortcode for the media, you can perform a request using the /media/shortcode/[code] endpoint. Or if you know the user, you can search to find their user id using the /users/search endpoint, then list their recent media, and get the id of the media that way.

Once you have the media id, you need to figure out the comment id. This can be done through the /media/[media-id]/comments endpoint. Though, it is worth noting that Instagram seems to only return 150 of the most recent comments. 

Once you find the two bits of information, you can then delete the comment (obviously, you can only delete comments you have authored or comments on your own posts). The good thing about Postman is that it allows you to save requests in set collections, so you can easily repeat the process again.

For simplicity, I'll go with the shortcode method of determining the media ID. I attached a comment to media with shortcode: 54-EW8IaHO, giving us the first request URL of: https://api.instagram.com/v1/media/shortcode/54-EW8IaHO. Here, I find that the media id is: 1042856292129022414_528817151.



The next step is to get the comments. I compile a new request, to https://api.instagram.com/v1/media/1042856292129022414_528817151/comments. From there, scrolling to the end of the response, I can see my comment has the ID 1074977853463962032. In reality, you would need to provide some means of iterating over the returned comments.



Finally, we can compile the DELETE request using the aforementioned identifiers, giving us the request: https://api.instagram.com/v1/media/1042856292129022414_528817151/comments/1074977853463962032


As previously mentioned, this specific request isn't possible without Instagram first reviewing your application - but the general idea is there.

Since you can save the request templates, you can easily go back in to repeat any processes you set up, as an extra means of debugging when your developed solution isn't working as expected - which could be useful if there are any API changes. I set up a collection called Instagram, and added all the related requests in there.





Once you have the flow prototyped, it should be more straightforward to convert it into functions/procedures in your Oracle application (or another system for which no client libraries are available).

Sunday, 13 September 2015

My experience as an Oracle developer running Ubuntu

If you have been following my blog, you may have noticed the desktop environment I use is the Ubuntu distribution of Linux. Growing up, I tried Linux here and there, but never stuck with it - it was more just to try it out. As with anything, if you are going to use something new, you really have to commit to it - pick a period of time you are willing to use it for, and see how you go. If you still don't like it at the end, go back to your old ways.

(I will add that I did once upon a time own a Mac. Actually, I purchased a Powerbook right before Apple made the switch to the Intel CPU architecture. I know Mac OS X seems quite popular in the development community nowadays, however I still prefer to be able to select my hardware, and I believe I get that freedom and also get a bit of Software freedom by running Linux.)

Around 2008 - actually, soon after I started working with APEX - I had a couple of colleagues who were running Ubuntu. Since I started developing with Oracle, I didn't have any reliance on Windows for work, which made the switch possible. I fully committed - both my work workstation and home PC were now running Ubuntu. I believe the first release I used would have been Karmic Koala (9.10). A couple of releases later, I decided that I would just stick to long term support (LTS) releases, which come out every 2 years.

The installer supports creating different partitions for folders on your Linux system, and I have followed the general methodology of having 2 partitions: one for /home and another for / (root). /home is akin to "C:/Documents and Settings" or C:/Users in Windows. What this means is that every time I install a new release, I can opt not to format /home - which is what I do. All my personal files (and application settings) remain, and the installer just blows away everything else to install afresh. Note, you can do a system upgrade instead, but I prefer to start fresh on a new installation.

These days Ubuntu uses its own Unity user interface, which seems to face a bit of criticism. Being Linux, you have lots of choice - you are not restricted to one distribution or user interface - but I personally have no problem with the UI.

So, what tools do I have in my repertoire?

Terminal

If you are going to use Linux, you will want to know how to use the terminal. Whilst you can probably get away with using just GUI tools, there are likely going to be times when you need the terminal. If you connect to your development server over SSH, this is where you will be doing it - and there's a very good chance your server will not have a GUI available to you.

A terminal will most definitely come pre-installed on your system; the specific command is gnome-terminal. There are of course other terminals you can use, with extended or more basic feature sets.

Oracle SQL Developer

Well, this post being about Oracle development, you will most likely want SQL Developer. Luckily, this is a cross-platform application, supported on Windows, Mac and Linux - just so long as you have Java available. In Ubuntu, there is OpenJDK in the repositories; however, I personally prefer to use the official Oracle Java JDK. This version had to be removed from the repositories due to licensing changes. There is a PPA you can add to your system which will make the installation quite easy - see this article: http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html. However, what I tend to do is follow the instructions on askubuntu.com: http://askubuntu.com/questions/56104/how-can-i-install-sun-oracles-proprietary-java-jdk-6-7-8-or-jre

The next challenge with SQL Developer is that they don't provide an installation file in Debian package format (Ubuntu is based on Debian). They provide 2 options for Linux:

1. RPM (for Redhat based distributions)
2. tar.gz (an archive (or tarball if you will) of all the SQL Developer files)

What I tend to do is grab the RPM file, and use a tool called alien to convert it into a Debian installation file:

sudo apt-get install alien
sudo alien --to-deb -c sqldeveloper-*.noarch.rpm
#sqldeveloper_4.1.1.19.59-2_all.deb generated

That will give you an installation file Ubuntu knows how to handle, and all is pretty straightforward from there on in. Just double click the generated file and follow the prompts.

Google Chrome

For APEX development, you will obviously want a web browser. Ubuntu comes with Firefox, but I personally prefer Chrome, and wanting to be on the cutting edge I tend to grab the beta release. All the release channels can be seen here - https://www.chromium.org/getting-involved/dev-channel - but the specific link to download is: https://www.google.com/chrome/browser/beta.html?platform=linux

Code Editor

Ubuntu by default comes with gedit, which is a nice basic editor that will get you out of trouble. The main editor I have been using lately is the Atom text editor, an open source text editor released by GitHub. The most common comparison I have seen is that it's quite similar to Sublime Text. You can download the installation media from: https://atom.io/. It also comes with a whole raft of packages (extensions). For instance, if your version control system is git, you will likely want to use the git-plus package - https://atom.io/packages/git-plus - which supports a range of git operations.

For instance, if you want to see the git diff, you can type `diff` in the command palette (ctrl+shift+p).



Code Diffs

A great tool I've found for comparing code files is Meld. It supports up to 3-way comparisons, of either files (or blank panes, where you can paste code in) or directories. To give a sample of its UI, I will add a tweak to the following code:

function sayHello(){
   
    console.log("hello, world");
}




As you can see, it highlights both differences on the same line of code, and additional lines of code.

This is easy to install - just run sudo apt-get install meld on the command line.

Lately, when I've been using SQL Developer's database diff tool, it has been coming back with differences in some packages. I've found it a little hard in SQL Developer's interface to see what the differences between the two actually are, so I have been double-checking with Meld what differences the packages had (if any), as packages were being reported as differing even after a fresh sync.

FTP transfers

For the occasional time I need to do a transfer over FTP, FileZilla is my go-to application. It's in the repos, so it's easy to install: sudo apt-get install filezilla.

This program is supported on all platforms, so there's a good chance you are already using it, and have some familiarity.

BI Publisher reports / Unavailable apps

Ok - this is the only problem area. The report builder is built for Microsoft Word, and it's probably no surprise to you that there is no Microsoft Word for Linux. Some folks will probably tell you that they use Wine, but I personally never use it. Instead, I have a Windows-based virtual machine running through VirtualBox. To install VirtualBox on Linux, it's best to follow the "Debian-based Linux distributions" instructions here: https://www.virtualbox.org/wiki/Linux_Downloads

I also have it set up to share my home folder, so that it is dead simple to move files between the two environments - it is added as a mount point in My Computer.



Nothing much else to say here - other than try not to rely on this for your software! Find native alternatives where possible. Ubuntu/Linux has LibreOffice/OpenOffice. Outside of the report building phase, most of the documents I make these days are done in Google Drive - I don't actually use LibreOffice too frequently myself.


Expanding abbreviations (for less typing)

I've heard a couple of mentions of this in the community. TextExpander is the name of an OS X application, and the basic concept is that you define abbreviations for words and the system will automatically expand them once you've typed them. One such blog post about the app, by Jorge Rimblas, can be found here: http://rimblas.com/blog/2013/07/textexpander-and-apex/

When I saw this post, I went exploring to see if there was anything similar for Ubuntu, and what I found was AutoKey. Basically, if you just google <name of app> ubuntu, you should get some results about a similar program (usually as a question on askubuntu.com).

Once you find out the name, the next step is to search the repository:

$ apt-cache search autokey
autokey-common - desktop automation utility - common data
autokey-gtk - desktop automation utility - GTK+ version
autokey-qt - desktop automation utility - KDE version

Then, you can go ahead and install the relevant version. For me, that's the gtk version.

sudo apt-get install autokey-gtk

This application works on phrases. So you first create a phrase, and then set up an abbreviation for it. Following on from Jorge's demo, I set up a phrase for an APEX URL: "f?p=AppId:PageId:Session:Request:Debug:ItemNames:ItemValues".

Then set up the abbreviation as: "aurl" 


Now, save the phrase, and wherever you type "aurl" followed by, for example, a space, the abbreviation will be replaced with the phrase.

That's of course only a basic example, but if you feel this might be useful to you I encourage you to give it a shot!



What are your toolsets?

Friday, 19 June 2015

Building an Oracle 11g XE Server with APEX 5, ORDS and SQLcl in 30 minutes

Pre-requisites

1. You have VirtualBox installed

2. You have the necessary Oracle installation files (oracle-xe, apex5, ords3 and sqlcl)




3. You have the installation image for CentOS - I just grabbed the Minimal image from a local mirror - http://centos.mirror.digitalpacific.com.au/7/isos/x86_64/

Set up the virtual machine

Give the virtual machine an appropriate name, and since CentOS is based on Red Hat, specify the type as Red Hat (64-bit).



Allocate 1GB RAM



Create a new virtual hard drive (for this demo, I will go with a 10 GB disk size).






Your machine and virtual hard drive will now have been created, and you should see the machine in your machine list in VirtualBox. The next step is to go into settings to apply some configuration.

Firstly, you will want to verify your network configuration - for this demo, I will just be using a bridged network connection.



You will also want to load in the Linux disk image. This is done through the storage tab, by selecting the empty disk node and selecting the image - mine has been used before, so I can select it without locating a new disk.



Now we can start up the machine and install the operating system.

Before you begin the installation process, you will want to go into the network configuration and turn on the device, as well as give the machine a host name.



Once the installation begins, you will want to set the root password and create a personal user account. Here, I also make myself an administrator - this will allow us to run commands as root using the sudo command.



Once that process has finished, you can reboot the machine to get into your newly installed system.

Transferring the Oracle files to your machine

Before getting started, we need to have all the installation media on our new server. I have already downloaded these files onto my local machine, so first I'll create a folder on the server to transfer them to.

mkdir ~/orafiles

Then, to transfer the files across, I'll use FileZilla (an FTP program). We can connect using SFTP (which runs over the SSH port, 22).
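If you prefer the command line, scp (which likewise runs over SSH) does the same job - something like scp ~/Downloads/*.zip trent@192.168.1.10:~/orafiles/ (user, host and paths here are examples from this walkthrough). Either way, it's worth checksumming the archives on both ends to confirm they arrived intact - a small sketch, using a stand-in file since the real zips live on your machines:

```shell
# Stand-in archive so the verification step can be demonstrated; on the real
# workstation and server you would checksum the actual installer zips instead.
mkdir -p ~/orafiles
printf 'example' > ~/orafiles/example.zip

# Run this on both machines and compare the hashes line by line
cd ~/orafiles && sha256sum *.zip
```

If the hashes differ between the two machines, re-transfer the affected file before starting the install.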



Installing Oracle XE, APEX, ORDS and SQLcl

The next step is to install Oracle. There is a project on GitHub with scripts that automate basically the whole installation process. For this, we first need to install git.

sudo yum install git

The project is located at: https://github.com/OraOpenSource/oraclexe-apex

So, we need to clone the project onto our server, which is done with the following command.

git clone https://github.com/OraOpenSource/oraclexe-apex.git

This will clone all the project files into a new folder named oraclexe-apex at the current location. The next step is to change into that directory:

cd oraclexe-apex

In this folder, there is a config file that needs updating - config.properties. One editor that comes installed on minimal CentOS is vi, but you can install another if you prefer - a popular choice is nano.

vi config.properties

The important properties you need to add a value for are:

  • OOS_ORACLE_FILE_URL 
  • OOS_APEX_FILE_URL
  • OOS_ORDS_FILE_URL
  • OOS_SQLCL_FILE_URL

These all relate to the Oracle installation media that we copied over to the server in the previous step. So the top of the config file should end up looking something like:

#To download Oracle XE, go to: http://www.oracle.com/technetwork/database/database-technologies/express-edition/overview/index.html
OOS_ORACLE_FILE_URL=file:///home/trent/orafiles/oracle-xe-11.2.0-1.0.x86_64.rpm.zip

#To download APEX, go to: http://download.oracleapex.com
OOS_APEX_FILE_URL=file:///home/trent/orafiles/apex_5.0_en.zip

#To download ORDS, go to: http://www.oracle.com/technetwork/developer-tools/rest-data-services/overview/index.html
#Note, for now ORDS 2 is in production. ORDS 3 specific scripts have also been included and need to be finalized once its out of beta
OOS_ORDS_FILE_URL=file:///home/trent/orafiles/ords.3.0.0.121.10.23.zip

#To download SQLcl, go to: http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
#Note: at time of writing SQLcl is still in beta
#No default value in here since only if you want to add it in (not required)
OOS_SQLCL_FILE_URL=file:///home/trent/orafiles/sqlcl-4.2.0.15.121.1046-no-jre.zip


The path starts with file:///home/trent/orafiles because my user account is named trent, and I placed all the files in a new folder named orafiles.
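A typo in one of those file:// URLs only surfaces partway through the build, so a quick sanity check first can save a restart. A sketch, assuming the same directory and filenames as the example config above:

```shell
# Check that each installer referenced in config.properties actually exists;
# filenames match the example config above - adjust them to your downloads.
for f in oracle-xe-11.2.0-1.0.x86_64.rpm.zip \
         apex_5.0_en.zip \
         ords.3.0.0.121.10.23.zip \
         sqlcl-4.2.0.15.121.1046-no-jre.zip; do
    if [ -f "$HOME/orafiles/$f" ]; then
        echo "OK:      file://$HOME/orafiles/$f"
    else
        echo "MISSING: $HOME/orafiles/$f"
    fi
done
```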

The rest of the parameters are quite self-explanatory - mostly relating to the ports to use for various services.

The script also creates an APEX workspace during the installation. If you don't wish to create a workspace/user during the installation, you can set the property OOS_ORACLE_CREATE_USER_YN to N. Otherwise, you can update the property OOS_APEX_USER_WORKSPACE to your preferred workspace name.

I'd also suggest updating OOS_APEX_ADMIN_EMAIL to something other than admin@domain.com (default).
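If you're scripting the whole setup, these property edits can also be made non-interactively with sed rather than vi. A sketch against a stand-in file - on the real server you would target oraclexe-apex/config.properties, and me@example.com is just a placeholder address:

```shell
# Stand-in config.properties so the edit can be shown in isolation
printf 'OOS_APEX_ADMIN_EMAIL=admin@domain.com\n' > config.properties

# Swap the default admin email; the same pattern works for the OOS_*_FILE_URL lines
sed -i 's|^OOS_APEX_ADMIN_EMAIL=.*|OOS_APEX_ADMIN_EMAIL=me@example.com|' config.properties
grep OOS_APEX_ADMIN_EMAIL config.properties
```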

Now, we can start the installation process.

sudo ./build.sh

This installation process should take around 20 minutes to complete. Once it completes, the script reboots the machine - so ensure you save any unsaved work before running the above.

Once it has completed and successfully rebooted, you should be able to access APEX on port 80 - e.g. if your machine's IP address is 192.168.1.10, you would access it at: http://192.168.1.10. The default login details for the internal workspace are admin/Oracle1! (if you didn't update those configuration parameters).

It's also worth noting that the default configuration doesn't leave port 1521 open for connecting to the database, so you will need to set up SSH port forwarding on your local machine in order to connect (or open that port on the server). SQL Developer has support for this from within the software.
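One way to keep such a tunnel handy is an entry in ~/.ssh/config on your workstation - a sketch, where the alias xe-tunnel is made up, and the address and user are the example values from this post:

```
# ~/.ssh/config - forward local port 1521 to the XE listener on the server
Host xe-tunnel
    HostName 192.168.1.10
    User trent
    LocalForward 1521 localhost:1521
```

With that in place, ssh -N xe-tunnel holds the tunnel open, and SQL Developer (or any other client) can connect to localhost:1521 as though the database were local.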

Monday, 18 May 2015

Building ORDS plugins

Oracle REST Data Services (ORDS) released version 3 recently. If you didn't notice, this version allows you to develop plugins (in Java) that can be imported into your ORDS instance to provide extended functionality.

You want to refer to the Oracle REST Data Services Plugin API, located here: http://download.oracle.com/otndocs/java/javadocs/3.0/plugin-api/index.html, which itself includes a couple of nice documents:
  1. Getting started guide
  2. Developer guide
This post just re-iterates a lot of what has already been covered in the getting started guide, with a focus on using IntelliJ IDEA.

Setting the project up in IntelliJ IDEA

So the first step is to set up a new project. First, however, you will want to verify which version of Java you are running on your server (assuming you are developing on a separate workstation). This can be verified with the command java -version.

[user@host ~]# java -version
java version "1.7.0_79"
OpenJDK Runtime Environment (rhel-2.5.5.1.el7_1-x86_64 u79-b14)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

So, in my instance, I'm running on Java 1.7, so I'll want to target that same JVM when setting up the new project.



After setting up the project, you will want to include the library files required for developing plugins. The libraries can be found in ORDS_INSTALL_FOLDER/examples/plugins/lib. This is done in IDEA by opening the Project Structure tool window (File --> Project Structure... or Ctrl+Alt+Shift+S). Select the Libraries nodes and add all the specified files.




Now that the libraries are available to our project, we can start coding our plugin (or, in this example's case, just add in the Demo java file from ORDS_INSTALL_FOLDER/examples/plugins/plugin-demo/src/PluginDemo.java).

The next step is to get the build process to generate the jar file. In IDEA, this is typically done by setting up an Artifact (again through the Project Structure tool window). 

The only problem is that this method on its own doesn't seem to generate the oracle.dbtools.plugin.api.di.providers file (inside the META-INF folder), which contains a list of classes that define a Provider. The @Provider annotation in the source file in effect advertises the class to the DI framework.

However, building through Ant does include this necessary file. Here, you can make a copy of the build file in the ORDS_INSTALL_FOLDER/examples/plugins/plugin-demo folder and tweak it as necessary. This requires you to know where the libraries are stored on your file system (since IDEA doesn't actually copy them into your project folder, but references their location when you set them up).

The other strategy is to get IDEA to generate the Ant build file for you. You would still set up the Artifact, so that the generated build file includes the build jar task. After the artifact has been set up, generate the build file.

On the build menu, select the option Generate Ant Build...



Once the build file has been generated, you need to set it up as an ant build script. This is done by opening the Ant tools window (View --> Tool Windows --> Ant Build), and then adding that build script.

Then, we can generate our jar file by running the all Ant target, which will clean, build and generate the artifact, output to PROJECT_FOLDER/out/artifacts/ARTIFACT_NAME (by default).

Loading the plugin into ords.war

Now that we have a built plugin file (the jar), the next step is to embed it into the ords.war file. This is done with the plugin command. 

java -jar ords.war plugin Demo.jar

Where Demo.jar is the filename of the jar file that was generated from the java program. You will see output like:

May 17, 2015 5:14:43 PM oracle.dbtools.cmdline.plugin.UpdatePlugin execute
INFO: The plugin jar named: Demo.jar was added to or updated in ords.war

What this in effect does is load your plugin into the ords.war/WEB-INF/lib folder. If a library with the same name already exists, it will simply be overwritten. So, if you need to remove your plugin (I'm unaware of any official method), you should be able to do so with the command: zip -d ords.war WEB-INF/lib/<plugin_name.jar> (I say that cautiously, as it's not based on any ORDS documentation).

Testing the plugin

This demo in effect connects to the schema named by the first part of the URL path after the ords portion, issues a query, and then outputs a hello string combining the schema name and the who query string parameter.

First, set up a schema that you would like to test with, granting it the connect and resource privileges, as well as enabling ORDS access to the schema.

SQL> create user demo_user identified by demo_user;

User created.

SQL> grant connect,resource to demo_user;

Grant succeeded.

SQL> connect demo_user
Enter password: 
Connected.

SQL> exec ords.enable_schema;

PL/SQL procedure successfully completed.

Then, in the browser, if you go to the path: /ords/demo_user/plugin/demo?who=thomas (as defined in our plugin code), you will see the message:

DEMO_USER says hello to: thomas

In the above example, where we enabled the schema, a URL mapping pattern is created with the same name as the schema (with the default parameters). However, this can be customised to another URL pattern by setting the parameter p_url_mapping_pattern in the call to ords.enable_schema (refer to the ORDS PL/SQL package documentation for more information). E.g. if we want to map it to demo_plugin instead of demo_user:

exec ords.enable_schema(p_url_mapping_pattern => 'demo_plugin');

The URL mapping pattern is important, because if you have an APEX workspace with the same name as the URL mapping pattern, you will get a clash, and the workspace will generally take priority.

Wednesday, 13 May 2015

Building dynamic forms

How do I build a dynamic form? Well, first you need to have your questions stored somewhere, so we set up the following (basic) model:

CREATE TABLE DYNAMIC_QS(
    ID NUMBER PRIMARY KEY,
    QUESTION_KEY VARCHAR2(20) NOT NULL,
    QUESTION_TYPE VARCHAR2(20) NOT NULL,
    QUESTION_SEQ NUMBER NOT NULL
);

CREATE TABLE DYNAMIC_ANS(
    ID NUMBER PRIMARY KEY,
    QUESTION_KEY VARCHAR2(20) NOT NULL,
    QUESTION_ANS VARCHAR2(4000) NOT NULL
);

create sequence dynamic_qs_seq;

create or replace trigger BI_DYNAMIC_ANS
before insert on DYNAMIC_ANS
for each row
begin
    :NEW.ID := dynamic_qs_seq.nextval;
end BI_DYNAMIC_ANS;
/

insert into DYNAMIC_QS values (1, 'NAME', 'TEXT', 10);
insert into DYNAMIC_QS values (2, 'AGE', 'TEXT', 20);
insert into DYNAMIC_QS values (3, 'ADDRESS', 'TEXTAREA', 30);

create or replace view v_dynamic_qs as
select 
    dynamic_qs.id
  , dynamic_qs.question_key
  , dynamic_qs.question_type
  , dynamic_ans.question_ans
  , row_number() over (order by dynamic_qs.question_Seq) f_index
from 
    dynamic_qs
    left outer join dynamic_ans on (dynamic_qs.question_key = dynamic_ans.question_key);


There are two strategies that come to mind, to achieve a dynamic form.
  1. Utilising the APEX_ITEM API
  2. Custom jQuery POST using the g_f01 and g_f02 arrays (for item keys and item values)


Utilising the APEX_ITEM API


The first is to make use of the apex_item API for rendering different types of fields, and the apex_application API for reading the submitted data. So first off, we set up a view over our data with an additional column that starts at 1 and increments by 1 for each question (I used f_index in my view).


Then, on the page, we set up a dynamic PL/SQL region with the following code:

begin

    for i in (
        select *
        from v_dynamic_qs
    )
    LOOP
        if i.question_type = 'TEXT'
        then
            htp.p(
                apex_item.text(
                    p_idx => i.f_index,
                    p_Value => i.question_ans
                )
            );
        elsif i.question_type = 'TEXTAREA'
        then
            htp.p(
                apex_item.textarea(
                    p_idx => i.f_index,
                    p_Value => i.question_ans
                )
            );
        end if;    
        htp.p('<br /><br />');        
    END LOOP;

end;


That allows us to render the form and display any saved values, should they exist. The next part is an on-submit process that will save the new values. For this, we can re-use the same view, but we will need some dynamic SQL in order to get the values from the apex_application.g_f0x arrays.


declare
    l_sql varchar2(4000);
begin
    --get rid of old values before saving the data again
    delete from dynamic_ans;

    for i in (
        select
            id
          , question_key
          , to_char(f_index, 'FM09') f_index
        from v_dynamic_qs
    )
    loop
        l_sql := '
            begin
                insert into dynamic_ans (question_key, question_ans)
                values (:1, apex_application.g_f' || i.f_index || '(1));
            end;';

        execute immediate l_sql using i.question_key;
    end loop;
end;


Custom jQuery POST using the g_f01 and g_f02 arrays


Another approach is to build the HTML form controls yourself, and do an AJAX post to an on-demand process. To do this, you need to parse the form controls and use the apex_application.g_f01 and apex_application.g_f02 arrays in your on-demand process.

So, similar to above, set up a dynamic region which will render the HTML. Unlike the previous example, which depended on a sequence ID, this example uses the question_key field (which will be submitted as an array in f01). At run time, the text "QS_" is stripped from the HTML elements' ID attributes and mapped into an array. Similarly, the item values are mapped to an array and submitted in f02.

begin

    for i in (
        select *
        from v_dynamic_qs
    )
    loop
        if i.question_type = 'TEXT'
        then
            htp.prn('<input class="dynamicQs" id="QS_' || i.question_key || '" type="text" value="' || i.question_ans || '" />');
        elsif i.question_type = 'TEXTAREA'
        then
            htp.prn('<textarea class="dynamicQs" id="QS_' || i.question_key || '">' || i.question_ans || '</textarea>');
        end if;    
        htp.p('<br /><br />');     
    end loop;

end;


Then, set up an AJAX Callback process on your page.

begin
    --get rid of old values before saving the data again
    delete from dynamic_ans;
    
    for i in 1..apex_application.g_f01.COUNT
    LOOP
        insert into dynamic_ans (question_key, question_ans) values (apex_application.g_f01(i), apex_application.g_f02(i));
    END LOOP;
    
end;


Finally, add a submit button and tie it up to dynamic action that executes JavaScript code:

var _f01 = $.map($('.dynamicQs'), function(el) { return $(el).attr('id').replace('QS_', ''); } );
var _f02 = $.map($('.dynamicQs'), function(el) { return $(el).val(); } );

$.post(
    'wwv_flow.show', 
    {
        "p_request" : "APPLICATION_PROCESS=SAVE_RESPONSES", 
        "p_flow_id" :  $v('pFlowId'), 
        "p_flow_step_id" : $v('pFlowStepId'), 
        "p_instance": $v('pInstance'),
        f01: _f01,
        f02: _f02
    }, 
    function success(data) { 
        //todo
    }
);

Again, these are obviously simplified versions, but hopefully you get the gist of how to accomplish a dynamic form.

Friday, 8 May 2015

Deploying individual components

The scenario: you have two (or more) environments; some changes are ready in the development environment and others aren't, so you can't deploy the whole application, because some components are broken. Yes, you could employ build options to prevent any issues, only enabling components when they are ready, but what other options are there?

Well, APEX has functionality to export pages and individual components. Say you've been good and kept the schema and workspace names consistent, and have also maintained the same application ID amongst environments.

So, you do a page export from one environment and try to import it into the other, but you receive the following error:

This page was exported from a different application or from an application in different workspace. Page cannot be installed in this application.

These components were exported from a different application or from an application in a different workspace. The components cannot be installed in this application.

For a page or component export respectively.

The missing link?

You need to have consistent workspace IDs amongst environments!

Once you have consistent workspace IDs amongst environments, when importing components you will find you have much better success!



Great, now you know what to do in the future. What if you haven't done this for existing systems? Don't despair, you still have options, albeit, a bit more lengthy!

You need to export the application that has all the new enhancements in it, and import it into the same workspace as the target application - but don't replace the existing application; instead, just auto-assign a new ID. In this case, instead of doing page exports and imports, you need to use the copy function that comes with APEX.

So go to your target application, and in the page designer go to the Create toolbar and select Page as Copy.


Then specify Page in another application



From here, specify the application you just imported, and then the page that contains the enhancements you are after. Follow the steps, and you will find the page from the other application in your target application.

Not as elegant as if you had consistent workspace IDs in your environments - so from here on in.. consistent workspace IDs ;-)




Sunday, 22 March 2015

APEX 5 Give me Interactive Reports

One of the statements of direction for APEX 5 is multiple interactive reports on a single page.

Multiple Interactive Reports – Allow any number of Interactive Reports to be defined on a single page.

So, time to explore that functionality.

One pattern I've used in the past is loading particular saved reports (rather than the default primary report) - you may have some pre-defined filters, a row highlight, or specific columns. So for demonstration, I've built an IR on the emp table.

Then I go and apply some filters, and save an alternate report:




Then in the page designer assign an alias to that report.





As you can see, that allows me to enter the report alias in the request portion of the URL: IR_SAL_TOTAL. However, if we add a second IR to the page and try to access that saved report, we receive the error "Report does not exist". This is because it tries to find that report name in both IRs.

To get around this, you MUST set a static ID property on your IR regions. Then the request portion of the URL becomes: IR[<static ID>]_<ALIAS>. So I gave my region a static ID of empIr; thus the request portion becomes IR[empIr]_SAL_TOTAL, to link to the report I previously saved.
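Putting that together into a full URL (with placeholder application, page and session values), a link straight to the saved report would look something like:

```
f?p=101:1:12313123:IR[empIr]_SAL_TOTAL
```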



The same concept applies to filtering your IR in the URL. The supported URL operations can be reviewed in the (beta) docs:

  • C = Contains
  • EQ = Equals (this is the default)
  • GTE = Greater than or equal to
  • GT = Greater Than
  • LIKE = SQL Like operator
  • LT = Less than
  • LTE = Less than or equal to
  • N = Null
  • NC = Not Contains
  • NEQ = Not Equals
  • NLIKE = Not Like
  • NN = Not Null
  • NIN = Not In (escape the comma separated values with backslash, \)
  • IN = In (escape the comma separated values with backslash, \)
  • ROWFILTER = Row Text Contains (this searches all columns displayed in the report with type STRING or NUMBER)

So, in the itemNames portion of the URL, you specify IR[<static id>]<operator as above>_<column>. For example, if we wanted employees with a salary greater than 2000, we would use IR[empIr]GT_SAL, giving us a URL like so:

f?p=101:1:12313123::::IR[empIr]GT_SAL:2000

Friday, 20 March 2015

Identifying functions and procedures without arguments

I wanted a report on all procedures in my schema that accept zero arguments.

There are two views that are useful here:
  1. USER_PROCEDURES
  2. USER_ARGUMENTS
With USER_PROCEDURES, if the row refers to a subprogram in a package, object_name is the package name and procedure_name is the name of the subprogram. For any subprogram outside the context of a package, object_name is the name of the subprogram and procedure_name is NULL.

With USER_ARGUMENTS, object_name becomes the name of the subprogram, with package_name being NULL when you are not dealing with a package's subprogram. 

In the case of parameterless subprograms outside a package, no rows at all are returned in the USER_ARGUMENTS view. That differs from a parameterless subprogram in a package - there you get a row, but with argument_name set to NULL. For a procedure, you will never get a row with a NULL argument_name if there is at least one argument.

In the case of functions, you will get an additional row with argument_name set to NULL, which relates to the return type. 

So with that information in mind, we can identify procedures like:


select user_procedures.object_name
from 
    user_procedures
    left outer join user_Arguments proc_arguments on (user_procedures.object_name = proc_arguments.object_name )
where 
    proc_arguments.object_id IS NULL
    and user_procedures.object_Type = 'PROCEDURE'
    

Then, we can get functions:

with func_arg_count as (
select func_args.*, count(*) over (partition by func_args.object_id) arg_count
from 
    user_procedures
    left outer join user_Arguments func_args on (user_procedures.object_name = func_args.object_name )
where 
    user_procedures.procedure_name  IS NULL and func_args.package_name is null
    and user_procedures.object_Type = 'FUNCTION'
)
select object_name
from func_arg_count
where argument_name is null 
and data_Type is not null
and arg_count = 1
    

Giving us a consolidated query for functions and procedures of:

with func_arg_count as (
    select 
        user_procedures.object_name
      , user_procedures.object_id  
      , user_procedures.object_Type
      , func_args.argument_name
      , func_args.data_Type
       --func_args.*
      , count(case when func_args.object_id is not null then 1 end) over (partition by user_procedures.object_id) arg_count
    from 
        user_procedures
        left outer join user_Arguments func_args on (user_procedures.object_name = func_args.object_name )
    where 
        user_procedures.procedure_name  IS NULL and func_args.package_name is null
)
select object_name
from func_arg_count
where (
        ( object_Type = 'FUNCTION' and argument_name IS NULL and ARG_COUNT = 1)
        or 
        ( object_Type = 'PROCEDURE' and arg_count = 0 )
    )    

Then we can do the packages like so:

with pkgs as (
    select 
        user_procedures.object_name
      , user_procedures.object_id  
      , user_procedures.object_Type
      , user_procedures.procedure_name
      , func_args.in_out
      , func_args.argument_name
      , func_args.data_Type
      , func_args.subprogram_id
    , count(*) over (partition by user_procedures.object_id, func_args.subprogram_id) arg_count
    from 
        user_procedures
        join user_Arguments func_args on (user_procedures.object_name = func_args.package_name and user_procedures.procedure_name = func_args.object_name)
    where 
        user_procedures.object_Type = 'PACKAGE'
)
select object_name pkg, procedure_name 
from pkgs
where argument_name is NULL and arg_count = 1

Monday, 9 February 2015

Reset an Interactive Report (IR)

To reset an IR back to the default settings, you may know you can go to the actions menu, and hit reset:




If you inspect the apply button you will see it's calling gReport.reset()


And there are a bunch of examples using this gReport object both for resetting the report and other IR functions.

The problem? This is not documented, and with APEX 5 supporting multiple IRs, this will no longer work. In your console, if you enter gReport, you will see that the object no longer exists.

The other technique you can use is the clear cache portion of the URL. According to the docs:

To reset an interactive report in a link, use the string "RIR" in the Clear-Cache section of a URL. This is equivalent to the end user choosing the Reset option from the interactive report actions menu on the target page. The report is returned to the default report settings specified by the developer or saved by the user.

Just testing this on a 4.2 instance, this only resets the PRIMARY report, and takes you back to the PRIMARY report - i.e. any filters added on saved reports will remain. The reset button in the actions menu, on the other hand, just clears the filters added on the saved report, and you remain on that saved report.

So, what's the best way to programmatically reset an IR?

First, I suggest giving your IR a region static ID so you can easily query the apex dictionary.

Then, if we look at the APEX_IR API, we can see it has the procedure RESET_REPORT, which accepts three parameters - p_page_id, p_region_id and p_report_id. It is acceptable to pass NULL for p_report_id, and it will just use the last viewed report for that region - otherwise, you'd need to make use of APEX_IR.GET_LAST_VIEWED_REPORT_ID.

Then, we can have a dynamic action with 2 true actions: Execute PL/SQL code and Refresh region, with the PL/SQL code looking something like this:


declare
    l_region_id apex_application_page_regions.region_id%type;
begin

    select region_id into l_region_id
    from apex_application_page_regions
    where application_id = :APP_ID
    and page_id = :APP_PAGE_ID
    and static_id = 'region-static-id';--update this accordingly
    
    apex_ir.reset_report(
        p_page_id => :APP_PAGE_ID
      , p_region_id => l_region_id
      , p_report_id => NULL
    );
    
end;

This re-iterates the importance of only using documented functions.

APEX 5 API changes

Based on the current beta API docs: https://docs.oracle.com/cd/E59726_01/doc.50/e39149/toc.htm, here is what's changed.

APEX_APPLICATION_INSTALL
  • Function GET_AUTO_INSTALL_SUP_OBJ added 
  • Procedure SET_AUTO_INSTALL_SUP_OBJ added

APEX_CUSTOM_AUTH
  • LOGOUT procedure deprecated

APEX_ESCAPE
  • Function JSON added
  • Function REGEXP added

APEX_INSTANCE_ADMIN
  • Procedure CREATE_SCHEMA_EXCEPTION added
  • Procedure FREE_WORKSPACE_APP_IDS added
  • Function GET_WORKSPACE_PARAMETER added
  • Procedure REMOVE_SCHEMA_EXCEPTION added
  • Procedure REMOVE_SCHEMA_EXCEPTIONS added
  • Procedure REMOVE_WORKSPACE_EXCEPTIONS added
  • Procedure RESERVE_WORKSPACE_APP_IDS added
  • Procedure RESTRICT_SCHEMA added
  • Procedure SET_WORKSPACE_PARAMETER added
  • Procedure UNRESTRICT_SCHEMA added

APEX_IR
  • Procedure CHANGE_SUBSCRIPTION_EMAIL added
  • Procedure CHANGE_REPORT_OWNER added

APEX_LDAP
  • Function SEARCH added

APEX_PLUGIN_UTIL
  • Function GET_ATTRIBUTE_AS_NUMBER added

APEX_UTIL
  • Procedure CLOSE_OPEN_DB_LINKS added
  • Procedure IR_CLEAR deprecated
  • Procedure IR_DELETE_REPORT deprecated
  • Procedure IR_DELETE_SUBSCRIPTION deprecated
  • Procedure IR_FILTER deprecated
  • Procedure IR_RESET deprecated
  • Procedure PUBLIC_CHECK_AUTHORIZATION deprecated
  • Procedure RESET_AUTHORIZATIONS deprecated
  • Procedure SET_GROUP_GROUP_GRANTS added
  • Procedure SET_GROUP_USER_GRANTS added

APEX_WEB_SERVICE
  • Function MAKE_REST_REQUEST_B added

Packages
  • Package APEX_SPATIAL added
  • Package APEX_ZIP added
  • Package APEX_JSON added

JavaScript APIS

New namespaces
  • apex.da  
  • apex.debug 
  • apex.lang 
  • apex.util 

apex.navigation additions
  • apex.navigation.dialog
  • apex.navigation.dialog.cancel
  • apex.navigation.dialog.close
  • apex.navigation.dialog.fireCloseHandler
  • apex.navigation.dialog.registerCloseHandler
  • apex.navigation.dialog.title
  • apex.navigation.openInNewWindow
  • apex.navigation.popup
  • apex.navigation.redirect

apex.server additions
  • apex.server.url


Saturday, 7 February 2015

APEX 5 creating regions on an existing page

In APEX 4.2, it's nice and simple. Right click on the body node of the page tree view, and click create:


After clicking create, you would be presented with all the possible region types:


The rest is just a matter of following the steps.

APEX 5 has a new page designer, and it took me a minute or so to figure out how to add a form region, since there seem to be a couple of options for adding regions!

Firstly, in the centre column, down the bottom of the page, you have a grid view of (almost) all the region types you can add. You can either drag them onto the visual layout of your page above, or right click and select where to add them.


The other option is on the left hand pane, right click and then select Create Region.



This will create a region with the type set as: "Static Content". 

You then need to change the type on the right hand pane, and then fill out any region specific settings.


At this point, it's probably also worth mentioning that the right-hand pane might not be displaying all of the region's settings. There are two options in the toolbar:

1. Show common
2. Show all

E.g. region static ID is only displayed if you have `Show all` mode enabled:




Some regions should be created through the new page wizard, which you will notice if you try to select Tabular Form:



And the form region is not listed there - well, technically it's not a region type, so that makes sense. At the top of your page, you will notice a plus icon button, and this is how we can add form regions to our page.



Which gives us all the wizard options we've come to expect:





disclaimer: This is based on the current early adopter (3) instance of apex