Tuesday, 19 September 2017

Setting up Oracle XE in Docker - Lessons Learned

For a little side project I'm working on, I had the need to set up a Docker instance of Oracle. I had read about Docker a while back and understand the general concept - but up until now, I hadn't had much experience with it (other than running the basic hello-world Docker example).

So, since I'm using the free version of the Oracle Database (Oracle XE) at the moment, I specifically wanted to build an image for that. Now, I know that Oracle has a GitHub repository with some sample Docker files - unfortunately, I didn't have success with the Oracle XE version, so it seemed like a good time to go and build my own.

Here are some things I learnt:

1. The default storage location on my Ubuntu system is: /var/lib/docker

This is a problem as I'm developing, since the way my disk is set up, the root partition soon fills up.

2. You can change the location where images are stored by creating a JSON file: /etc/docker/daemon.json.
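A minimal sketch of that file might look like the following - the path is just an example location on a larger partition, and the data-root and debug entries are explained below:

{
    "data-root": "/data/docker",
    "debug": true
}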

In here, you will want an entry "data-root" with the path to your desired location. The Docker daemon runs as a service, which is effectively the `dockerd` binary. So, whilst testing your config, it's a good idea to stop that (systemctl stop docker), then update your config and run `dockerd` directly. There is also a `debug` property (true|false) that you can set to get additional output. The first time I ran the daemon with the config file in place, my system had to set up a storage pool:

DEBU[0001] devmapper: Pool doesn't exist. Creating it.

This seemed to bring my system to a halt, but after leaving it for a while and coming back, it had made progress - so keep in mind this isn't a quick process!

Something curious is that this config file also allows you to specify the storage driver. Now, I don't know much about the Docker storage drivers at this stage, but without the config file it uses the `aufs` storage driver. However, if I try to set that in my config file, I get an error about the driver not being available - so it's possible I'm missing some other corresponding entry.

Once all looks good, switch back to using the daemon as a service: systemctl start docker

3. The CMD argument in your Dockerfile is best as a single executable

The CMD argument is what gets executed whenever you run a new instance of your image. For the image I built, my command was to start the oracle-xe service, so I started with CMD ["/etc/init.d/oracle-xe", "start"]. This works, but the problem is that when running in detached mode, once the CMD completes, the container also exits.

After doing a bit of research and looking at Oracle's example Dockerfile, it is a good idea to watch a file - the Oracle one was monitoring a log file, but another example I saw online suggested just watching /dev/null. So effectively I needed to run two commands:

1. oracle-xe start
2. tail -f /dev/null

When I tried chaining these in the CMD property of the Dockerfile, the container threw errors.

Again, after some more research and looking at the example on Oracle's repository, if you have more than one program to run, you should place them in a simple shell script, and pass that into the CMD property.

So, I end up with a script:

#!/bin/bash
/etc/init.d/oracle-xe start
tail -f /dev/null

Then the Dockerfile references it:

CMD ["/root/install/runXe.sh"]

4. The images are generally very minimal and need to have context set up

If you run the Ubuntu image:

docker run -it ubuntu:16.04

You will see there are very few environment variables set up:

PWD=/
SHLVL=1
HOME=/root
_=/usr/bin/env
root@b7535b34011e:/# pwd
/


So, it's not a bad idea to set up some basic environment, such as the user:

USER root
WORKDIR /root
ENV HOME /root 
ENV USER root


When copying files over, at least now we are placing them within the user's home (/root) rather than the root of the file system.

5. It's a good idea to run interactively whilst developing

Docker supports two methods for running an image:

-i, --interactive=true|false
    Keep STDIN open even if not attached. The default is false.

-d, --detach=true|false
    Detached mode: run the container in the background and print the new container ID. The default is false.

I like to run interactively most of the time to see everything is behaving as expected and to see any output. So the full command would be something like:

docker run -it <image_tag>

Replace i with d when all looks good!

If your CMD property isn't giving the desired result, you can override what is set in the image by passing the --entrypoint command line argument with the value /bin/bash:

docker run -it --entrypoint=/bin/bash <image_tag>

6. You need to set the shared memory size in the command line argument

XE requires at least 1 GB of shared memory, so I am a generous guy and give 2 GB. This needs to be set through an argument when running the image. So the previous example becomes:

docker run --shm-size=2g -it --entrypoint=/bin/bash <image_tag>

7. Avoid anything that depends on a hostname

When installing XE, the generated listener.ora will reference the hostname. Since each time you run an image in a new container the hostname will be different, it's a good idea to update this at startup so that you will be able to connect to each corresponding container.

See the example of the listener.ora from the Oracle repository here: https://github.com/oracle/docker-images/blob/master/OracleDatabase/dockerfiles/11.2.0.2/runOracle.sh#L98-L115
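As a rough sketch of the idea (not necessarily Oracle's exact approach), the start script could rewrite the HOST entry before starting the database - the listener.ora path below assumes a default XE 11.2 installation:

# in runXe.sh, before /etc/init.d/oracle-xe start
LISTENER_ORA=/u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
sed -i "s/(HOST = [^)]*)/(HOST = $(hostname))/" "$LISTENER_ORA"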

8. Finally, some useful commands

docker build -t <image_tag> .
docker run --shm-size=2g -it <image_tag>

# Stop running containers
docker stop $(docker ps -q -a)
# Remove all containers
docker rm $(docker ps -a -q)
# Delete images with no tag/name
docker rmi $(docker images -f "dangling=true" -q)
# Delete all images
docker rmi $(docker images -q)

Friday, 18 August 2017

Bulk loading a template APEX application in a single workspace

I recently had a need to import the same application multiple times - 30+ - so each person accessing the workspace could use their own application to play around with. The user list was supplied to me in the format of an Excel spreadsheet, so in the Excel sheet I first had to make a new column to derive some sort of unique value - what better than the first letter of the first name followed by the surname, e.g. John Smith -> JSMITH.

So, first I made a new column with the value as a formula, which was: =LEFT(A2,1). That would give me "J" for "John".

Then, for the actual account ID, I came up with the formula =UPPER(CONCAT(C2, B2)), where C2 = J and B2 = SMITH, giving me JSMITH.

With that list, I can copy it over to my trusty text editor in order to manipulate it into an array.

The first part I wanted to do was create all the users, so for that I can use raw PL/SQL, and come up with a block of code like so:

declare

    type t_users is table of varchar2(200)
        index by PLS_INTEGER;
    l_all_employees t_users;
    l_ws_id NUMBER;

    procedure append_list(
        p_emps in out t_users,
        p_new_person in varchar2
    )
    as
    begin
        p_emps(p_emps.count + 1) := p_new_person;
    end append_list;

begin

    l_ws_id := apex_util.find_security_group_id('WORKSPACE_NAME');
    apex_util.set_security_group_id(l_ws_id);

    append_list(l_all_employees, 'USER1');
    append_list(l_all_employees, 'USER2');

    for i in 1..l_all_employees.count
    loop

        apex_util.create_user(
            p_user_name => l_all_employees(i),
            p_web_password => l_all_employees(i),
            p_developer_privs => 'CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL',
            p_change_password_on_first_use => 'N',
            p_allow_app_building_yn => 'Y',
            p_allow_sql_workshop_yn => 'Y',
            p_allow_team_development_yn => 'Y'
        );
    end loop;

end;
/

Run that, and you should find all the users now have accounts in the workspace - I just made the password the same as the username to keep things simple.

So, the next part was loading the applications. APEX comes with the API APEX_APPLICATION_INSTALL, so if we look at the documentation, we can see a simple import script as:

begin
  apex_application_install.generate_application_id;
  apex_application_install.generate_offset;
  apex_application_install.set_application_alias( 'F' || apex_application_install.get_application_id );
end;
/
 
@f645.sql

So, we need to run an anonymous block, and then the actual application export script - which we can't put inside an anonymous block.

So, in comes the scripting functionality of SQLcl. It supports running JavaScript, so as in the first step, I can make an array of all the users, but this time in JavaScript. Then we can just loop over that structure and set the statements to run - two separate statements: 1. the anonymous block; and 2. the application import script:

function runCmd(cmd){
    sqlcl.setStmt(cmd);
    sqlcl.run();
}

var people = [
    'USER1',
    'USER2'
];

for (var i = 0; i < people.length; i++){

    print ('Loading application for: ' + people[i]);

    runCmd([
            "declare",
            "    l_ws_id number;",
            "begin",
            "    l_ws_id := apex_util.find_security_group_id('WS_NAME');",
            "    apex_util.set_security_group_id(l_ws_id);",
            "    apex_application_install.generate_application_id;",
            "    apex_application_install.generate_offset;",
            "    apex_application_install.set_application_alias( 'F' || apex_application_install.get_application_id);",
            "    apex_application_install.set_application_name( '" + people[i] + " - Workshop Application' );",
            "end;",
            "/"
        ].join("\r\n")
    );

    runCmd('@f237.sql');

    print ('......next ->');
}

Then, I just connect to my database in SQLcl and run that script (named appLoader.js):

conn user/pass@DEV
script appLoader.js

note: I could have just run everything (creating the user account and importing the app) in the SQLcl script, but this is just the process I followed as I was trying to figure things out :)

Thursday, 13 July 2017

Report row buttons firing a dynamic action

A lot of the time on reports of data, we'll define a link column, and specify an image which is the beloved pencil icon.


Which is good and all, but looking in the Sample Database Application (packaged application), they have taken a different approach to styling links - a button that stretches the width of its containing cell. So, the UI ends up looking like so:



So, how is this done?

Well first, you define a new column for the placeholder. Suppose I want my link column to be Report; I define an extra column in my query:

, 'Report' report_link

Then I need to go to my new column's attributes, and change the column type to a link.


And, in the link attributes, set the text to what you want displayed. Here, I'm just going to display the value of the new column and point to my desired page. So I set the target page, then also the Link text as #REPORT_LINK#.

At this point the report will just show link text, as we expect.


So, then to replicate the same style, we just need to apply the same classes, which happen to be:


  • t-Button 
  • t-Button--simple 
  • t-Button--hot 
  • t-Button--stretch
So, in the link, set the link attributes as:

class="t-Button t-Button--simple t-Button--hot t-Button--stretch".

Now when we run the page, we get the desired behaviour:


How about for a dynamic action?

Well, in the columns link, there is no option for "Defined by Dynamic Action" as we usually see in regular page buttons. We just get page or URL options.


Typically, you will assign a special class to your element so that you can specify a jQuery selector for your dynamic action to fire on, and optionally a data attribute used to store the ID of the row (such as data-order-id="xx"). More on this later.

So, back to the column action. A typical pattern here is to specify URL, with a target of #, javascript:void(0);, or javascript:;.

The # is usually a trick used when you want to jump to a position in the same page, so this isn't using href as it was designed to be used.

void is a JavaScript operator that evaluates an expression and returns undefined, and the other one is obviously just an empty JavaScript expression!

When using void, it is important to pass an expression (such as 0) to the operator, or it's likely you will receive the following error:

Uncaught SyntaxError: Unexpected token )

With all that said, I think it is better to avoid these (hacky?) solutions. It is not a "link", so the href attribute should ideally not be there. If you want a row button that will fire a dynamic action, instead you should set the column type to plain text and then set an HTML expression on the column.

Since it is a button we want, it is a button we shall use. We will end up with the following markup, which we add into the HTML expression:

<button class="orderReportButton
           t-Button
           t-Button--simple
           t-Button--hot
           t-Button--stretch" data-order-id="#ORDER_ID#" type="button">
    <span class="t-Button-label">Report</span>
</button>

There we have it.



Now, we just need to finish off our dynamic action. We create a click dynamic action based on a jQuery selector.



Then, in our true action, we can just reference the expression "this.triggeringElement.getAttribute('data-order-id')" or the jQuery equivalent "$(this.triggeringElement).data('order-id')".
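For example, a true action of type Execute JavaScript Code could pick up the ID and do something with it - P3_ORDER_ID here is just a hypothetical page item:

// this.triggeringElement is the clicked button, thanks to the jQuery selector
var orderId = this.triggeringElement.getAttribute('data-order-id');

// e.g. copy it into a (hypothetical) page item for use by other true actions
apex.item('P3_ORDER_ID').setValue(orderId);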


Monday, 10 July 2017

Custom workflow to download or upload specific Google Drive files

I had an interest in downloading and uploading a specific file into my Google Drive at certain timing points. For example's sake, let's call the file "Designs.cad". There is a command line project on GitHub which is Go based, so if you have Go installed, it is just a matter of running:

go get github.com/prasmussen/gdrive


This will build it from source, so if you don't want to do this, or if you don't have Go installed (you should - there are lots of great Go projects), you can find a list of releases on the project page - https://github.com/prasmussen/gdrive - which you can download and place somewhere on your system, with the command becoming "gdrive".

When you first install it, you will need to authenticate your account. Running any command will prompt you to go to a URL to get an authorization code and paste it back into your console. So for example, if I run:

gdrive list

Which is designed to list files in my Drive. After running the command, if you haven't previously authenticated, you will be prompted to go to a URL and paste back the authorization code. Sample output:

trent@birroth:~$ gdrive list
Authentication needed
Go to the following url in your browser:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id...

Enter verification code: xxx

Where xxx is the verification code I pasted back in.
If successful, you will end up with a file at $HOME/.gdrive/token_v2.json, containing the necessary authorization tokens so the program can continue to function for as long as the authorization exists for this application.

So, looking at the help, there are two commands that will be useful - download and update.

We can find out the program usage with the help command, like so for the download operation:

$ gdrive help download
Download file or directory
gdrive [global] download [options] 

global:
  -c, --config          Application path, default: /home/trent/.gdrive
  --refresh-token       Oauth refresh token...
  --access-token        Oauth access token...
  --service-account     Oauth service account filename...

options:
  -f, --force           Overwrite existing file
  -s, --skip            Skip existing files
  -r, --recursive       Download directory recursively...
  --path                Download path
  --delete              Delete remote file when download is successful
  --no-progress         Hide progress
  --stdout              Write file content to stdout
  --timeout             Set timeout in seconds...


To use this, I need to find the file ID, which I can do by using the list command, or by looking at the web interface of Google Drive. The filename will be the same as it is on Google Drive. So, with that information, I end up with the command:

gdrive download --path /home/trent/ --force 1GUG3Y3Ce56Pa0k2

Which gives output like so:

Downloading Designs.cad -> /home/trent/Designs.cad
Downloaded 1GUG3Y3Ce56Pa0k2 at 18.2 KB/s, total 18.2 KB

Similarly, to send the file in the other direction, we need to look at the usage for the update command:

$ gdrive help update
Update file, this creates a new revision of the file
gdrive [global] update [options]  

global:
  -c, --config           Application path, default: /home/trent/.gdrive
  --refresh-token        Oauth refresh token...
  --access-token         Oauth access token...
  --service-account      Oauth service account filename...

options:
  -p, --parent           Parent id...
  --name                 Filename
  --description          File description
  --no-progress          Hide progress
  --mime                 Force mime type
  --timeout              Set timeout in seconds...
  --chunksize            Set chunk size in bytes...

That means our command becomes:

gdrive update 1GUG3Y3Ce56Pa0k2 /home/trent/Designs.cad
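Putting the two together, a simple wrapper for the workflow might look like the following sketch - the file ID and paths are just the examples used above:

#!/bin/bash
# pull the latest copy down, overwriting the local file
gdrive download --path /home/trent/ --force 1GUG3Y3Ce56Pa0k2

# ... work on Designs.cad ...

# push the changes back up as a new revision
gdrive update 1GUG3Y3Ce56Pa0k2 /home/trent/Designs.cad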


Sunday, 9 July 2017

Converting a physical machine for use in VirtualBox

I recently had an interest in converting a physical machine (Windows) I have into a virtualised environment. For my personal virtualisation needs, I tend to use and favour VirtualBox - so my end goal is to have the system running in VirtualBox.

A quick google with these keywords reveals this VirtualBox article: https://www.virtualbox.org/wiki/Migrate_Windows. However, this talks about Windows XP, and one of the software tools it references is 10 years old. For some extra context, Windows XP was first released in 2001, and its last release (SP3) was in 2008. OK, so I decided not to go down that path.

Putting aside the previous article, I happen to know there are two solutions that support converting a physical machine:
  • VMWare
  • Microsoft (Disk2vhd)
So, I will go with the VMWare option for this article/guide. The software is found over at: https://www.vmware.com/ca/products/converter.html. So, I download and install the software (note: you will need a VMWare account in order to download it).

The software includes official documentation in the form of a PDF document, so if you want to review that, head over to: https://www.vmware.com/pdf/convsa_61_guide.pdf - chapter 6 is the relevant chapter.

Firstly, you will want to start the program as Administrator to ensure you have full privileges. Search for VMWare and right click the relevant entry, and then click "Run as administrator".  Once it's opened, click on the "Convert machine" toolbar button - this will begin the wizard.

For the source, it's pretty safe to leave the default options


On the second step, Destination system, I will opt to target VMWare Player 7.x. And I think it goes without saying that you should be targeting this to be saved on an external disk. So, the destination configuration ends up looking like:


After this step, you will be presented with all the system options that you may want to update - such as reducing the memory, since your virtual environment would probably have far less memory than your physical machine. I also change the number of virtual cores down to 2 (from 4).

With that all done, we can run the conversion. Click Finish and the process will begin. After the job is submitted, the process will take some time, so you can re-visit it later (it offers a time estimate - for me, it estimated 1 hour, but that jumped up and down periodically).

In the meantime, we can go and grab VMWare Workstation Player, if we don't already have anything to verify the conversion in VMWare. This can be found here: https://my.vmware.com/en/web/vmware/free#desktop_end_user_computing/vmware_workstation_player/12_0 (note: this is free for non-commercial use). This is just if you want to verify the conversion in VMWare - if you prefer, or that licence doesn't suit, you can skip this step and grab the ovftool directly over here: https://www.vmware.com/support/developer/ovf/

With the conversion finished, you should end up with two files: a vmdk and vmx file. Here, we can test it works by opening it up in VMWare Player - for performance, you may want to copy the files over to your host computer, space permitting. Aside from that, all should be working.

Now to convert this for use in VirtualBox. You can either just point at the vmdk file and create the machine manually or create an appliance - I will go forward with the latter approach, though pointing at the existing storage file (vmdk) will get you up and running much quicker!

VMWare Player comes with a command line program named ovftool, which supports converting from "VMX" to "OVF". (This tool can also be downloaded standalone: https://www.vmware.com/support/developer/ovf/ - the next step assumes you installed VMWare Player.) So, go to the folder where Workstation Player was installed - on my system, it is: "C:\Program Files (x86)\VMware\VMware Player". You should see a folder named "OVFTool" - right click that whilst holding the shift key, and click "Open PowerShell Window Here".


Then run (adjusting to where your VMWare VM was saved to and where you want the VirtualBox appliance saved to) the following:

ovftool.exe "F:\VMWareConverted\WindowsVM\WindowsVM.vmx" "F:\VBAppliance\WindowsVM.ovf"

This is another fairly long process, so sit back!

Once that finishes, head over to VirtualBox, and import the appliance we just created - the ovf file.
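(If you prefer the command line, the import can presumably also be done with VBoxManage - something along these lines, using my example paths and a made-up VM name:)

VBoxManage import "F:\VBAppliance\WindowsVM.ovf" --vsys 0 --vmname "ConvertedPC"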



The name comes through as "vm", so you'll probably want to update that. I also changed the Guest OS Type property to Windows to match my system.


I also disabled all the network adapters (4) that came through in the settings, so I can manually add them back later - I encountered issues where the network adapters weren't being detected properly in the guest, so adding the network interfaces manually after import is probably the safer option.

Click import to create the VM within VirtualBox.

Once that finishes, when attempting to run the machine, you will find you're not out of the woods yet. So, here are some extra steps you'll want to do before starting the virtual machine.

  • Enable EFI (if your computer is recent and EFI based)
  • Move the HDD to the SATA controller 
  • Add an optical disk (so we can install the guest additions)
  • Enable the network
  • Switch to Intel HD Audio
  • Install guest additions


I found that when first starting the system, I would receive an error.

Enable EFI

After doing a bit of digging, I can see in the VirtualBox settings that EFI is not enabled, but my system is fairly recent, so EFI is in use and needs to be enabled. So head into the configuration and make sure it is enabled.



So, this time when trying to boot, we get another error. Progress!



Move the HDD to the SATA controller 

Looking in the VM configuration, I see the storage device is attached to a SAS controller. Just on a hunch, I try moving the device onto the SATA controller (delete and re-add the disk from the storage section in the machine settings).

So, remove the disk "WindowsVM-disk1.vmdk" from the SAS controller.



Then re-attach it to the SATA controller. When prompted, specify Choose existing disk and navigate to where the disk is stored. This will be a path similar to: $HOME/VirtualBox VMs/ConvertedPC/WindowsVM-disk1.vmdk.



Add an optical disk

In the same settings area (Storage), click on the optical disk icon that is on the same line as the controller, in order to attach an optical drive to the system.



When prompted, choose to leave it empty.


Add a network interface

Since we disabled all network interfaces when importing the appliance, head over to network settings and add a NAT adapter so you can access the internet.



Switch to Intel HD Audio

When I first booted, I found the audio controller wasn't working so I switched it over to Intel HD Audio, and all worked well after that.



And after doing all this, we are able to successfully boot into our PC that has now been virtualised.


Install guest additions

Log in, and install guest additions. This is done by clicking on the Devices menu and selecting Insert Guest Additions CD image...


Follow the steps in the VM. After the process completes it will want to reboot your system - after which point you have your fully functional converted PC as a VM in VirtualBox.

Sunday, 25 June 2017

Using the Oracle node driver on Windows

I have a pretty fresh installation of Windows, and I wanted to get going with the Node driver on that system. I know from memory that it requires the Oracle client, so the first thing I do is go and install the Oracle Instant Client.

So, I head over to: http://www.oracle.com/technetwork/topics/winx64soft-089540.html

And what I normally stick to is three packages:
  1. Basic (Light)
  2. SQL*Plus
  3. SDK

With those downloaded, I then unzip them all - as long as you downloaded them to the same directory, each should unzip into a single folder: instantclient_12_2 (depending on the version you grabbed, of course). After unzipping them all, I move the instantclient folder to a more convenient location. So, I just place it at: C:\instantclient_12_2

The next step after that is to set up some environment variables.

First, you will want to update your path so that SQL*Plus is on it. So, add a new entry on your path pointing to: C:\instantclient_12_2

Usually, you will want to set your ORACLE_HOME environment variable, so I also do that.

So, PATH should have a new entry:


And, similarly, ORACLE_HOME:


A good test to check this is working: does SQL*Plus run?



Yep, that looks good.

I already have node and npm installed, so now, without consulting the node oracledb install documentation, I'll just try installing that package.

Still in the command prompt, in a new folder somewhere (I'm testing in a tmp folder), I run: npm install oracledb.



Ok, it looks like the system is missing some dependencies. Probably a good time to turn to the install documentation. The README says we need the following:


  • Python 2.7
  • C compiler with support for C++ 11
  • Oracle client (done)
  • Set OCI_LIB_DIR and OCI_INC_DIR

My thought process: Python 2.7 is easy, I can head to the Python website and download it - but a C compiler? Where is the best place to get that? And OCI_LIB_DIR and OCI_INC_DIR... what paths should these be set to? The README doesn't expand on that.

For the first two requirements - there is a convenient node package, windows-build-tools, which just so happens to include python 2.7 and a C compiler. So, let's first install that.

Just to err on the safe side, I launch Command Prompt as Administrator and run: npm install -g windows-build-tools


Great - that's looking good. 

How about the OCI_LIB_DIR and OCI_INC_DIR environment variables? Well, the documentation also links to an INSTALL document which contains more detail. It states:

set OCI_LIB_DIR=C:\oracle\instantclient\sdk\lib\msvc
set OCI_INC_DIR=C:\oracle\instantclient\sdk\include

This is a slightly different install path, but the same concept applies. Head to your environment variables area and add those in - except, if you followed my general install path for the Instant Client, C:\oracle\instantclient would be replaced with C:\instantclient_12_2.
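So, adjusted for my install location, the values become:

set OCI_LIB_DIR=C:\instantclient_12_2\sdk\lib\msvc
set OCI_INC_DIR=C:\instantclient_12_2\sdk\include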

Your environment variables page would look like:



Now with that all done, we should be able to successfully install the node Oracle driver. Start a new command prompt window to make sure all your variables are re-evaluated.




Now, if all that was done correctly, you should be able to install the driver. I verified by again running npm install oracledb:



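If you want to go one step further than the install, a tiny connection test can confirm the driver actually talks to a database - the credentials and connect string below are placeholders for your own environment:

// saved as testConnection.js and run with: node testConnection.js
var oracledb = require('oracledb');

oracledb.getConnection(
    {
        user: 'demo_user',             // placeholder credentials
        password: 'demo_password',
        connectString: 'localhost/XE'  // placeholder connect string
    },
    function (err, connection) {
        if (err) { console.error(err.message); return; }

        connection.execute('select sysdate from dual', function (err, result) {
            if (err) { console.error(err.message); return; }

            console.log(result.rows);

            // give the connection back when done
            connection.release(function (err) {
                if (err) { console.error(err.message); }
            });
        });
    }
);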
If you have to use Windows, at least you have a path to get up and running! Some would encourage OS X, but I'll leave you by encouraging you to make a switch to Linux ;-)

Sunday, 21 May 2017

SQLcl scripting and npm modules

Node, or rather npm, has a huge collection of submitted modules! And if you've been using SQLcl to connect to the database, you should know it is scriptable - you can, for example, run JavaScript code and call Java APIs. So the question is: how do we get a script to pick up node modules?

During my research, I found a few suggested methods:
  1. J2V8
  2. Project Avatar
  3. jvm-npm
I didn't try the first two methods, but have had success using the third (jvm-npm). This is a library with the source code hosted over on GitHub: https://github.com/nodyn/jvm-npm.

At this point, it's worth noting - not everything will work, as it does not include the full NodeJS API. 

So, the first step is that you'll probably want to host that library somewhere in your project - in my example, which I will provide the link to at the end of the article, I place it at ./lib/jvm-npm.js. Worth mentioning: you can also source this over HTTP in your script, but it's probably best to keep a local copy.

For this example project, I'm going to focus on the lodash node module. So, the next step will be to set up a package.json file where you list any node dependencies - this is purely so I can run `npm install` to install any dependencies my script requires. So go ahead and run `npm init` in your project directory. After completing the wizard, add your dependencies. You should end up with something like:

{
  "name": "sqlcl-npm-demo",
  "version": "1.0.0",
  "description": "Example of using node modules in an SQLcl script",
  "main": "lodashExample.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Trent Schafer",
  "license": "Apache",
  "dependencies": {
    "lodash": "^4.17.4"
  }
}


Now, after that is done, run: `npm install`.

trent@birroth:~/Projects/sqlcl-npm-demo$ npm install
sqlcl-npm-demo@1.0.0 /home/trent/Projects/sqlcl-npm-demo
└── lodash@4.17.4
trent@birroth:~/Projects/sqlcl-npm-demo$ ls node_modules/
lodash


Ok, so we have all of our dependencies installed. Now, how do we get this to work in our SQLcl script?

In a typical node app, you would have something like:

var _ = require('lodash');

If you try this out now, you will get an error like so:

SQL> script lodashExample
javax.script.ScriptException: ReferenceError: "require" is not defined in  at line number 1
<stack trace trimmed>


So, in order to get `require` available to us, we need to make it so our script has the jvm-npm library loaded - the one we placed in our project earlier. If you're not aware, you can load other scripts into a Nashorn script with the `load` function. Read more over at this article: https://wiki.openjdk.java.net/display/Nashorn/Nashorn+extensions.

So, at the start of our script, add the following call:

load('./lib/jvm-npm.js');

Then try re-running your script, and you should notice no errors this time around. So, it looks like we are good to go! Just to verify, let's take something out of the lodash API to see if all is working as expected. Lodash has an `nth` function that retrieves the element at the specified index of the passed-in array - with one little enhancement: it supports negative indexes to search from the end - so let's try that out.

My full script becomes:

load('./lib/jvm-npm.js');
var _ = require('lodash');

var testArray = [
    55,
    12,
    99,
    65,
    164,
    32
];

var secondLast = _.nth(testArray, -2);

print("The second last item was", secondLast);


Then, the test output:

trent@birroth:~/Projects/sqlcl-npm-demo$ sql vmtest/vmtest@//192.168.1.116/xe

SQLcl: Release 4.2.0 Production on Sun May 21 19:40:07 2017

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production


SQL> script lodashExample.js
The second last item was 164


Which all looks correct. Yay, success.

As I mentioned, it doesn't include the full node API, so you will probably need to be cautious going forward if you use this - but it just offers that little bit more.

Working files from this example: https://github.com/tschf/sqlcl-npm-demo