Friday, February 12, 2016

Using Oracle PIVOT Function When the Number of Columns Returned Is Dynamic (APEX)

I've recently dabbled in Oracle's PIVOT function and found it to be quite nice - if you know at runtime which values you expect to pivot on.  What if the list of values you want to pivot is dynamic?

According to the documentation, the subclauses of the pivot_clause provide the following support:


  • the pivot_for_clause names the column (or columns) whose values you are pivoting on.

  • the pivot_in_clause is where the pivot values go. You must list them explicitly; generating them dynamically from a subquery is only supported if you want XML as output.

In my case the list of values varied based on user input within an APEX 5 application, and I did not want XML as output, so I was stuck with explicitly listing the values in the pivot_in_clause.
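
To make that concrete, here is what a static pivot looks like with a hard-coded value list (the table and meter names below are made up purely to show where the explicit IN list lives):

SELECT *
  FROM (SELECT meter, local_interval_time, kw_value
          FROM meter_readings)
 PIVOT (SUM (kw_value) AS kw
        FOR meter IN ('METER_A' AS meter_a,
                      'METER_B' AS meter_b,
                      'METER_C' AS meter_c))
ORDER BY local_interval_time;

Every value you want as a column has to appear in that IN list, which is exactly the problem when the list isn't known until runtime.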

The application's purpose was to query electric meters by a single transformer and date range, allow users to pick all or a subset of the meters fed by that transformer, and display their aggregated 15-minute kW graphically.  This lets the end user see whether a transformer's load is under its rated capacity.

Transformer Load Based on Meter Aggregation


Graphical representation was not enough, as they wanted tabular data that they could interact with and download to CSV as well.  It was simple enough to add an interactive report below the chart to meet this requirement.



The last request was to allow the user to view the un-aggregated tabular data as a pivot table (on the same page), so that individual meter names became columns and their kW for a time interval became the row values.  APEX 5 introduced the ability for users to pivot interactive reports; however, you have to explicitly select the columns to include.  This is a limitation of the underlying PIVOT function, unless you want your output as XML.

Using the APEX 5 out-of-the-box pivot capability would have been fine, except that the next time different inputs were entered the returned meters would be different, the pivot configuration would be broken, and it would have to be redone.  I needed a better way to handle a dynamic list of returned meters.

I decided to use a classic report region for this task, since classic reports let you generate a report based on the SQL returned by a PL/SQL function.  In other words, they let you generate your SQL on the fly.


What this allowed me to do is capture the meter names selected by the end user and dynamically build the pivot value list (the IN list of the pivot_in_clause) at runtime.  Here's my function body:

DECLARE
   --a place to store the final sql
   l_retval           VARCHAR2 (32767);
   --a place to store the PIVOT clause columns based on user input
   l_csv_meter_list   VARCHAR2 (32767);
BEGIN
   l_csv_meter_list := '''' || REPLACE (:P1_METERS, ':', ''',''') || '''';   --List of meters the users selected

   l_retval :=
         'SELECT *
            FROM (SELECT meter.udc_id meter,
                         lp.local_interval_time time,
                         lp.lp_value * 4 kw_value
                    FROM emapp.lookup_mv_sdps sdp,
                         emapp.lookup_mv_meters meter,
                         emapp.lookup_mv_channels1 channel,
                         emapp.lp_intervals lp
                   WHERE sdp.premise_id IN (SELECT premid
                                              FROM amfc.prem_transformer_rel
                                             WHERE xfmrswitchpoint = :P1_TRANSFORMER)
                     AND sdp.cfg_type_cd = ''Electric''
                     AND sdp.meter_id = meter.meter_id
                     AND sdp.sdp_id = channel.sdp_id
                     AND meter.meter_id = channel.meter_id
                     AND channel.channel_type = ''Interval Data''
                     AND channel.logical_channel_no = 1
                     AND channel.channel_id = lp.channel_id
                     AND lp.utc_interval_time BETWEEN amfc.to_utc_time (sdp.start_date)
                                                  AND amfc.to_utc_time (NVL (sdp.end_date, SYSDATE))
                     AND lp.utc_interval_time BETWEEN amfc.to_utc_time (channel.start_date)
                                                  AND amfc.to_utc_time (NVL (channel.end_date, SYSDATE))
                     AND lp.utc_interval_time BETWEEN amfc.to_utc_time (:P1_START_DATE)
                                                  AND amfc.to_utc_time (:P1_END_DATE)
                     AND INSTR (:P1_METERS, meter.udc_id) > 0)
           PIVOT (SUM (kw_value) AS kw FOR (meter) IN ('
      || l_csv_meter_list   --APPEND DYNAMIC PIVOT CLAUSE HERE
      || '))';

   RETURN l_retval;   --RETURN dynamic sql statement to classic report
END;
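
For example, if :P1_METERS comes in from the page item as a colon-delimited string such as METER_A:METER_B:METER_C (made-up names), the REPLACE turns it into 'METER_A','METER_B','METER_C', and the tail end of the generated statement becomes:

PIVOT (SUM (kw_value) AS kw FOR (meter) IN ('METER_A','METER_B','METER_C'))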

Here's the classic report result:

Pivot Result


Here's the final page result:

Final Result - aggregated graph, aggregated tabular, and raw pivoted




Monday, September 15, 2014

Passing Item Values that Contain Colons to an Apex Page

One of the most helpful features of Oracle Application Express (APEX) that I've been able to exploit is the ability to drill down from a chart item.  I've been doing this in applications without issue for a couple of years, UNTIL I tried to drill down by passing a #LABEL# value that is a DATE containing a time component (and a colon!).

We are currently using APEX 4.2, and according to the Oracle docs the URL syntax for linking to pages is as follows:

f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues:PrinterFriendly

I want to focus on the itemNames and itemValues components of the URL.


itemNames
Comma-delimited list of item names used to set session state with a URL.
itemValues
List of item values used to set session state within a URL. Item values cannot include colons, but can contain commas if enclosed with backslashes. To pass a comma in an item value, enclose the characters with backslashes. For example:
\123,45\

In my case, I need to pass a premise (99999), service type (Electric), channel (1), start date (24-AUG-2014 00:00), and end date (25-AUG-2014 00:00) value in the URL.

For example:

f?p=108:4:112608898900151::NO::P4_PREMISE,P4_SERVICE_TYPE,P4_CHANNEL,P4_START_DATE,P4_END_DATE:99999,Electric,1,24-AUG-2014%2000:00,25-AUG-2014%2000:00

You'll notice that APEX encodes the spaces with %20.

The problem is that any colon inside the itemValues portion of the URL is treated as a positional delimiter!  So the first date value of 24-AUG-2014 00:00 gets passed to page 4 as 24-AUG-2014 (no time), and the rest of the itemValues are ignored (25-AUG-2014 00:00 is not passed at all).

Hmm.  So APEX escapes the spaces but not the "non-delimiting" colons.  So I tried manually manipulating the URL to encode the colons as %3A:

f?p=108:4:112608898900151::NO::P4_PREMISE,P4_SERVICE_TYPE,P4_CHANNEL,P4_START_DATE,P4_END_DATE:47000,Electric,1,24-AUG-2014%2000%3A00,25-AUG-2014%2000%3A00

No dice - the same result as before, with the escaped colon still being treated as a delimiter.  This means that using utl_url.escape on the second half of the URL would not work either.  As a matter of fact it was worse, since I would pass escaped characters to APEX when building my URL and APEX would re-escape them on transmission (escaping the % signs, for example).

I did some googling and saw lots of activity on the subject, with some folks hinting at substituting placeholder text for the colons and spaces before passing the values to the new page, then swapping the placeholders back to colons and spaces on the other side.  So I decided to write a function to do this.

CREATE OR REPLACE FUNCTION AMFC.escape_url (V_URL_BASE      VARCHAR2,
                                            V_URL_SUFFIX    VARCHAR2)
   RETURN VARCHAR2
IS
   encoded_url   VARCHAR2 (4000);
BEGIN
   encoded_url :=
         v_url_base
      || replace(replace (v_url_suffix,' ','_SPACE_'),':','_COLON_');
   RETURN (encoded_url);
END escape_url;
/

I then call this function from my series SQL to populate the link column.
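
The call from the chart's series SQL looks roughly like this - a sketch using the literal values from the example above; in the real query the second argument is built up from the row's columns:

SELECT amfc.escape_url (
          'f?p=108:4:' || :APP_SESSION || '::NO::'
          || 'P4_PREMISE,P4_SERVICE_TYPE,P4_CHANNEL,P4_START_DATE,P4_END_DATE:',
          '99999,Electric,1,24-AUG-2014 00:00,25-AUG-2014 00:00') AS link
  FROM DUAL;

Only the second argument (the item values) gets the colon and space substitution, so the URL's own delimiters are left alone.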



Here is what the URL that gets passed to page 4 looks like now.  You'll notice that, as a result of the function call, the colons and spaces have been replaced only in the second part of the URL (the item values).

f?p=108:4:112608898900151::NO::P4_PREMISE,P4_SERVICE_TYPE,P4_CHANNEL,P4_START_DATE,P4_END_DATE:47000,Electric,1,28-AUG-2014_SPACE_00_COLON_00,29-AUG-2014_SPACE_00_COLON_00

Since there are no longer any colons being treated as delimiters, these values make it to their destination items on page 4.

The next step was to decode _SPACE_ and _COLON_ back to their original characters before the regions on page 4 load.

I created two computations to handle this task:


Each computation is the same with the exception of the item referenced in the code.
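
For P4_START_DATE, the computation body boils down to reversing the substitution, along these lines (a sketch of the expression; the computation fires before the page 4 regions render, per the step above):

REPLACE (REPLACE (:P4_START_DATE, '_SPACE_', ' '), '_COLON_', ':')

The P4_END_DATE computation is identical except for the item name.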


Success!



Thursday, May 29, 2014

ACFS default size for /cloudfs on Oracle Database Appliance X4-2

We recently deployed an Oracle Database Appliance X4-2 to support some large databases.  Overall the ODA is really performing well.

The only thing I didn't like about the deployment is that just about ALL of the storage is presented to ASM as part of the install, leaving little for the OS on each node.  While I know you can expand your storage capabilities using NFS and other solutions, I focused on expanding the /cloudfs mount point that was set up as part of our ODA deployment and is visible to both nodes.

The default size of this volume is 50G, and it turns out it was pretty simple to resize it to 200G at the cost of stealing a bit of space from our +RECO ASM disk group.


[grid@uoda2 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM2
The Oracle base has been set to /u01/app/grid
[grid@uoda2 ~]$ /sbin/acfsutil size +161061273600 /cloudfs
acfsutil size: new file system size: 214748364800 (204800MB)
[grid@uoda2 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G  3.1G   25G  12% /
/dev/md0              487M   36M  426M   8% /boot
/dev/mapper/VolGroupSys-LogVolOpt
                       59G  8.7G   47G  16% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       97G   38G   55G  41% /u01
tmpfs                 127G  223M  126G   1% /dev/shm
/dev/asm/acfsvol-154  200G   33G  168G  17% /cloudfs

Here's how I found out it was using space from the RECO group:

[grid@uoda2 ~]$ asmcmd
ASMCMD> volinfo -a
Diskgroup Name: RECO

         Volume Name: ACFSVOL
         Volume Device: /dev/asm/acfsvol-154
         State: ENABLED
         Size (MB): 204800
         Resize Unit (MB): 32
         Redundancy: HIGH
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /cloudfs
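
If you want to sanity-check how much usable space the RECO disk group has left before (or after) growing the volume, asmcmd can show the disk group totals - look at the Free_MB and Usable_file_MB columns:

[grid@uoda2 ~]$ asmcmd lsdg RECO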





Thursday, May 9, 2013

Edit Links On Page Disappear After Upgrading to APEX 4.2

I recently upgraded from APEX 4.1 to 4.2.  All went well except I could no longer get edit links to show up on pages launched from the APEX Administration Application.

After a few days of back and forth with support I was provided this little snippet that fixed the problem.  I'm not sure how or why it works, but I'm looking into it and will update this post with any new info.

This is a result of bug#16769040 - DEVELOPER TOOLBAR IS MISSING AFTER UPGRADING TO 4.2.2

I was told to ONLY run this if you are on 4.2.2.00.11.  Use at your own risk!

Run the following code as SYS or APEX_040200


begin
apex_040200.wwv_flow_security.set_internal_cookie_name;
commit;
end;
/



Thursday, January 5, 2012

DBMS_DATAPUMP - What Happened to the CONTENT=METADATA_ONLY Parameter?

I've recently taken a step toward learning the Data Pump API to explore new scripting possibilities.  After reviewing the dbms_datapump reference for 11gR2, I noticed a very close relationship to the output of expdp help=y.  I found it quite easy to map the command-line parameters to the package options...except for one item, the CONTENT command-line parameter.

The CONTENT command-line parameter allows you to specify METADATA_ONLY (like rows=n in the original export/import utilities), ALL (the default), or DATA_ONLY (rows only, no metadata).

When browsing the alphabetical list of options in the dbms_datapump package there is no CONTENT parameter, nor is there a CONTENT option for the dbms_datapump.set_parameter() procedure.

After digging for a bit I found that the dbms_datapump.data_filter() procedure does the job:


DBMS_DATAPUMP.DATA_FILTER(
   handle      IN NUMBER,
   name        IN VARCHAR2,
   value       IN NUMBER,
   table_name  IN VARCHAR2 DEFAULT NULL,
   schema_name IN VARCHAR2 DEFAULT NULL);

(data_filter is overloaded; the NUMBER overload shown here is the one that applies to the INCLUDE_ROWS filter used below.)



DBMS_DATAPUMP.DATA_FILTER(h1,'INCLUDE_ROWS',0,NULL,NULL);

I've left the table name and schema name NULL to include all tables in the job.  Setting value = 0 in conjunction with the INCLUDE_ROWS filter disables the unloading of data by Data Pump (metadata only!).

The documentation's description of the INCLUDE_ROWS data filter:

If nonzero, this filter specifies that user data for the specified table should be included in the job. The default is 1.
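
Putting it together, here is a minimal sketch of a metadata-only schema export driven entirely through the API (the schema name, directory object, and file names are made up for illustration):

DECLARE
   h1            NUMBER;
   l_job_state   VARCHAR2 (30);
BEGIN
   --create a schema-mode export job
   h1 := DBMS_DATAPUMP.OPEN (operation => 'EXPORT', job_mode => 'SCHEMA');

   --dump file and log file go to a directory object (hypothetical names)
   DBMS_DATAPUMP.ADD_FILE (h1, 'scott_meta.dmp', 'DATA_PUMP_DIR');
   DBMS_DATAPUMP.ADD_FILE (h1,
                           'scott_meta.log',
                           'DATA_PUMP_DIR',
                           filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

   --limit the job to a single schema
   DBMS_DATAPUMP.METADATA_FILTER (h1, 'SCHEMA_EXPR', 'IN (''SCOTT'')');

   --the dbms_datapump equivalent of CONTENT=METADATA_ONLY: unload no rows
   DBMS_DATAPUMP.DATA_FILTER (h1, 'INCLUDE_ROWS', 0, NULL, NULL);

   DBMS_DATAPUMP.START_JOB (h1);
   DBMS_DATAPUMP.WAIT_FOR_JOB (h1, l_job_state);
   DBMS_OUTPUT.PUT_LINE ('Job finished with state: ' || l_job_state);

   DBMS_DATAPUMP.DETACH (h1);
END;
/

Drop the data_filter call (or set the value back to the default of 1) and you are back to a full export.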



My next task will be to write some robust data pump job monitoring scripts...