Carts and SQL Developer Command Line Interface

Oracle SQL Developer is a great tool and has evolved quite a bit.  Two features have really helped me keep track of all the code that is generated in the database.

The first is the cart tool, which generates code for selected objects.  The only drawback is having to click or drag objects into the cart.  To solve this I wrote a package that produces a cart file.  The other useful feature is SDCLI, the command-line interface, which can produce the DDL specified in a cart.  Jeff Smith wrote a great article about that recently.

But regenerating a cart as new objects are created can be a pain.  The package below takes a schema and generates cart entries for each table, view, function, procedure, package and package body.  I generate the DDL only, not the data as in the article, though that change could be added.  I have a scheduled job running every night to generate the cart for the desired schema, and a shell script that uses sdcli to add the code to an SVN repository.  The script deletes the target directory just before generating the files so the files are not renamed with a number.

The code uses the user UTILS and a connection known as UTILS.

 

--------------------------------------------------------
-- DDL for Package CART_TOOL
--------------------------------------------------------

CREATE OR REPLACE PACKAGE CART_TOOL AS 

 PROCEDURE do_schema( pSchema VARCHAR2, pStartdate date default null);

END CART_TOOL;

--------------------------------------------------------
-- DDL for Package Body CART_TOOL
--------------------------------------------------------

CREATE OR REPLACE PACKAGE BODY "CART_TOOL"
AS
  FUNCTION fillObj(
    pOwner VARCHAR2,
    pType  VARCHAR2,
    pName  VARCHAR2)
    RETURN VARCHAR2
  AS
    vRet VARCHAR2(32000);
    crlf VARCHAR2(4) := chr(10);
  BEGIN
    vRet := '<dbobject>'
      || crlf || ' <objectconnection><![CDATA[IdeConnections%23UTILS]]></objectconnection>'
      || crlf || ' <objecttype><![CDATA['|| pType ||']]></objecttype>'
      || crlf || ' <objectschema><![CDATA['|| pOwner ||']]></objectschema>'
      || crlf || ' <objectname><![CDATA['|| pName ||']]></objectname>'
      || crlf || ' <include><![CDATA[true]]></include>'
      || crlf || ' <meta><![CDATA[true]]></meta>'
      || crlf || ' <data><![CDATA[false]]></data>'
      || crlf || ' <filter><![CDATA[]]></filter>'
      || crlf || '</dbobject>';
    RETURN vRet;
  END;
 
 
  PROCEDURE do_schema(
    pSchema VARCHAR2, pStartdate DATE DEFAULT NULL )
  AS
    vHeader    VARCHAR2(32000);
    crlf       VARCHAR2(4) := chr(10);
    vFooter    VARCHAR2(32000) := ' </objects>' || crlf || '</cart>';
    fh         utl_file.file_type;
    l_filename VARCHAR2(256) := pSchema || '.sdcart';
  BEGIN
    fh := utl_file.fopen( 'OUTPUT', l_filename, 'w', 32767 );

    vHeader := '<?xml version="1.0" encoding="UTF-8" ?>'
      || crlf || '<cart>'
      || crlf || ' <connections>'
      || crlf || '  <connection>'
      || crlf || '   <displayname><![CDATA[UTILS]]></displayname>'
      || crlf || '   <connectionname><![CDATA[IdeConnections%23UTILS]]></connectionname>'
      || crlf || '   <connectionuser><![CDATA[UTILS]]></connectionuser>'
      || crlf || '  </connection>'
      || crlf || ' </connections>'
      || crlf || ' <objects>';
    utl_file.put_line( fh, vHeader );
    FOR rec IN
      ( SELECT owner
              ,object_type
              ,object_name
          FROM all_objects
         WHERE object_type IN ( 'FUNCTION','PACKAGE','PACKAGE BODY','PROCEDURE','TABLE','VIEW' )
           AND last_ddl_time >= COALESCE( pStartdate, TO_DATE('20150701','YYYYMMDD') ) -- Date of db creation
           AND DECODE ( pSchema, NULL,1, owner,1, 0 ) = 1
         ORDER BY
           CASE object_type
             WHEN 'TABLE'     THEN 1
             WHEN 'VIEW'      THEN 2
             WHEN 'PACKAGE'   THEN 3
             WHEN 'FUNCTION'  THEN 4
             WHEN 'PROCEDURE' THEN 5
             ELSE 6
           END )
    LOOP
      utl_file.put_line( fh, fillObj( rec.owner, rec.object_type, rec.object_name ));
    END LOOP;
    utl_file.put_line( fh, vFooter );
    utl_file.fclose( fh );
  END do_schema;
 
 
END CART_TOOL;

/
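With the package in place, generating a cart file is a single call.  A minimal sketch; it assumes the OUTPUT directory object already exists and is writable by UTILS, and HR is just a sample schema name:

```sql
-- Assumed one-time setup (as a DBA):
--   CREATE DIRECTORY output AS '/OUTPUT';
--   GRANT read, write ON DIRECTORY output TO utils;

BEGIN
  cart_tool.do_schema( 'HR' );               -- everything since the default date
  cart_tool.do_schema( 'HR', SYSDATE - 7 );  -- only objects changed this week
END;
/
```

Each call writes HR.sdcart to the OUTPUT directory, ready for the sdcli export below.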


Script snippet
for f in /OUTPUT/*.sdcart
do
  d=${f##*/}
  d=${d%.*}
  echo "Exporting $d"
  d=$base_dir$d
  rm -rf "$d"

  sdcli cart export -cart "$f" -tgt "$d" -cfg "$base_dir/export_tool.xml"
  rm -f "$f"
done


Error logging

Often logging is done by writing to a database table.  This leads to a problem; a lot of problems, really.  I'll also ask this: if logging to the database makes sense, why does Oracle log to the file system?

Can’t connect to the database to see the failure

Something failed, a helpdesk person calls in the middle of the night and reports an error, but cannot tell you more because they have no access to the db.  Or the db is so trashed your normal users cannot log in to look at the logs.

Logging a failure you want to rollback

So something happened worthy of logging, but you want to roll back.  If you log to the db you then have to use autonomous transactions, and I would rather not use them in production code.  That is why I came up with this method.

Log tables grow and never go away.

There is so much data in the log table you cannot wade through it.  It grows and is never cleaned up.  Not a big deal, but who needs the clutter?

Data logged is inconsistent

You have created a table for logging but the usage is inconsistent.  Each developer inserts directly into the table instead of using the nice APIs you have made for them.  The API is there, but the developers know what they are doing, so they just write to the table.

Logging errors from an application

Some developers will even create a connection to the db just to log an error.

So what to do?  Log to the OS file system.  But, you say, then I can't access the logs from within the database!  Well, use the EXTERNAL TABLE feature and have it all.  All will be revealed, but let's start with the package that produces the file.  I build this in a separate utility schema.

CREATE OR REPLACE PACKAGE dev_log AS
    PROCEDURE debug(   loc VARCHAR2,  msg VARCHAR2);
    PROCEDURE debug(   loc VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) ;
    PROCEDURE info(    loc VARCHAR2,  msg VARCHAR2);
    PROCEDURE info(    loc VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) ;
    PROCEDURE warning( loc VARCHAR2, err VARCHAR2,  msg VARCHAR2);
    PROCEDURE warning( loc VARCHAR2, err VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) ;
    PROCEDURE error(   loc VARCHAR2, err VARCHAR2,  msg VARCHAR2);
    PROCEDURE error(   loc VARCHAR2, err VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) ;
    PROCEDURE daily_reset;
END ;

-- Also create the needed SEQ 
CREATE SEQUENCE  LOG_RUN#;

There are four overloaded APIs and one utility API. More flexibility could be exposed, but experience has shown this is a good set to give developers. The parameters are:

  • loc – the module making the call
  • msg – a free-form message suitable for the call
  • err – the Oracle error value
  • run_no – used when a set of messages is sent so they can be tracked together
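Typical calls against this spec look like the following; the module name and messages are just illustrations:

```sql
DECLARE
  my_run NUMBER := NULL;  -- dev_log assigns a run number on first use
BEGIN
  dev_log.debug(   'LOAD_CUSTOMERS', 'starting nightly load' );
  dev_log.info(    'LOAD_CUSTOMERS', 'staging table loaded' );
  dev_log.warning( 'LOAD_CUSTOMERS', 'ORA-01403', 'no rows for region 7' );
  dev_log.error(   'LOAD_CUSTOMERS', 'ORA-00001', 'duplicate key', my_run );
END;
/
```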

The first three should be straightforward, but the use of run_no might need a bit more explanation. Suppose you were writing records in a loop and wanted to track each time you committed.

cntr  NUMBER := 1;
myRun NUMBER := NULL;
...
LOOP
    -- ... do the work, increment cntr ...
    IF MOD( cntr, 100 ) = 0 THEN
        COMMIT;
        dev_log.info( 'MyBatch', 'committed '|| cntr ||' records so far', myRun );
    END IF;
END LOOP;

If NULL is passed in, a value is assigned and returned. The first time through the loop myRun is NULL, so dev_log assigns a number; in subsequent iterations the same value is used.   Of course you can always pass in zero or a value of your choosing.

Now for the code for the body.

CREATE OR REPLACE PACKAGE BODY dev_log as
   PROCEDURE write_log( filename VARCHAR2, buf VARCHAR2, write_flag CHAR DEFAULT 'A') AS
      F1 UTL_FILE.FILE_TYPE;
   BEGIN
      F1 := Utl_file.Fopen('LOGDIR',Filename,write_flag,32767);
      Utl_file.Put(F1, Buf );
      Utl_file.Fclose( F1 );
   END;

    PROCEDURE any_log( sev VARCHAR2, loc VARCHAR2 ,
     err VARCHAR2, msg VARCHAR2, run_no IN OUT NUMBER )  IS

      filename VARCHAR2( 64 );
      buf      VARCHAR2(32000);
      dt       VARCHAR2(32);
      st       VARCHAR2(32) := to_char(SYSDATE, 'yyyy-mm-dd hh24:MI:SS');
      tab      VARCHAR2(2)  := chr(9);
      ai       VARCHAR2(64) := sys_context('USERENV', 'CLIENT_INFO' );
      sid      VARCHAR2(64) := sys_context('USERENV', 'SID' );
      ser      VARCHAR2(64) := sys_context('USERENV', 'SESSIONID' );
      instance NUMBER        := sys_context('USERENV', 'INSTANCE');
      sev_i    VARCHAR2(32) := upper( sev );
      pdir     VARCHAR2(32) := 'LO' || instance ;
      nl       VARCHAR2(2)  := chr(10); -- LF, so records match DELIMITED BY NEWLINE
   BEGIN
     IF run_no IS NULL THEN
     run_no := log_run#.NEXTVAL;
     END IF;
    SELECT to_char( SYSDATE, 'Dy') INTO dt FROM dual;
    filename := 'dev_util.log.'||dt;
    Buf := Sev_i  || tab ||
       St  || tab ||
      Loc  || tab ||
       Run_no  || tab ||
      Err  || tab ||
       Ai  || tab ||
      Sid  || tab ||
      Ser  || tab ||
      Msg  || nl ;

       write_log( Filename, buf ) ;
    END;



    PROCEDURE ensure_exists AS
       filename VARCHAR2( 64 );
       dt VARCHAR2(32);
    BEGIN
      FOR cntr IN 1..6 LOOP
 SELECT to_char( SYSDATE-cntr, 'Dy') INTO dt FROM dual;
 filename := 'dev_util.log.'||dt;
 write_log( Filename, NULL );

      END LOOP;
    END;


    PROCEDURE daily_reset AS
       F1 UTL_FILE.FILE_TYPE;
       filename VARCHAR2( 64 );
       buf VARCHAR2(32000);
       dt VARCHAR2(32);
       st VARCHAR2(32) := to_char(SYSDATE, 'yyyy-mm-dd hh24:MI:SS');
       tab  VARCHAR2(4) := chr(9);
       ai varchar2(64) := sys_context('USERENV', 'CLIENT_INFO' );
       sid varchar2(64) := sys_context('USERENV', 'SID' );
       ser varchar2(64) := sys_context('USERENV', 'SESSIONID' );
       instance NUMBER := sys_context('USERENV', 'INSTANCE');
       sev_i VARCHAR2(32) := 'RESET';
       nl CHAR(2) := chr(10); -- LF, so records match DELIMITED BY NEWLINE
    BEGIN
       ensure_exists;
     SELECT to_char( SYSDATE, 'Dy') INTO dt FROM dual;
     filename := 'dev_util.log.'||dt;
     Buf := Sev_i         || tab ||
            St            || tab ||
            'RESET'       || tab ||
            0             || tab ||
            'ORA-00000'   || tab ||
            Ai            || tab ||
            Sid           || tab ||
            Ser           || tab ||
            'Daily Reset' || nl;
     write_log( Filename, buf, 'W' );
    END;



    PROCEDURE debug(   loc VARCHAR2,  msg VARCHAR2) AS
       run_no NUMBER :=0;
    BEGIN
       debug( loc, msg, run_no);
    END;
    PROCEDURE debug(   loc VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) AS
    BEGIN
       any_log( 'DEBUG', loc, 'ORA-00000', msg,run_no );
    END;



    PROCEDURE info(    loc VARCHAR2,  msg VARCHAR2) AS
       run_no NUMBER :=0;
    BEGIN
       info( loc, msg,run_no);
    END;
    PROCEDURE info(loc VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) AS
    BEGIN
       any_log( 'INFO', loc, 'ORA-00000', msg,run_no );
    END;


    PROCEDURE warning( loc VARCHAR2, err VARCHAR2,  msg VARCHAR2) AS
       run_no NUMBER :=0;
    BEGIN
       warning( loc, err,  msg, run_no );
    END;
    PROCEDURE warning( loc VARCHAR2, err VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) AS
    BEGIN
       any_log( 'WARNING', loc, err, msg,run_no );
    END;


    PROCEDURE error(   loc VARCHAR2, err VARCHAR2,  msg VARCHAR2) AS
       run_no NUMBER :=0;
    BEGIN
       error(loc, err, msg, run_no );
    END;
    PROCEDURE error(   loc VARCHAR2, err VARCHAR2,  msg VARCHAR2, run_no IN OUT NUMBER ) AS
    BEGIN
       any_log( 'ERROR', loc, err, msg,run_no );
    END;
END ;

 

 

So there are several functions that are not exposed:

  • write_log
  • any_log
  • daily_reset
  • ensure_exists

write_log does just that: it takes a filename and a string and writes the string to the file.

any_log is called by all the exposed APIs; it formats the data as a tab-delimited string and calls write_log.  It also captures some environmental data using the SYS_CONTEXT function.  If the application has set its application info, that is captured, as are the SID and serial.

daily_reset creates an entry early in the day and is called from a job.

ensure_exists is called by daily_reset and writes an empty record to each of the week's files.  This makes sure the files exist; if one is missing, queries on the EXTERNAL table fail.

So, to create a default set of files and schedule the daily reset:

var jobno number
exec dbms_job.submit( :jobno, 'begin dev_log.daily_reset; end;' , trunc(sysdate+1), 'trunc(sysdate+1)' ) ;
commit; 
exec dbms_job.run( :jobno ) ; 
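On recent releases the same schedule can be set up with DBMS_SCHEDULER instead of DBMS_JOB; a sketch, with a job name of my own choosing:

```sql
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'DEV_LOG_DAILY_RESET',            -- assumed name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dev_log.daily_reset; END;',
    start_date      => TRUNC(SYSDATE + 1),               -- first run at midnight
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE );
END;
/
```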

Now create the table, so the data can be seen in the database.

CREATE TABLE db_logs ( 
    sev varchar2(24), 
    time_log date, 
    module varchar2(128), 
    run_no NUMBER  , -- Sequence 
    err_no varchar2(16), -- Oracle error like ORA-06502 
    ai varchar2(64),     -- additional info 
    sid number, 
    serial number, 
    msg varchar2(4000) )  ORGANIZATION EXTERNAL( 
       TYPE ORACLE_LOADER 
       DEFAULT DIRECTORY LOGDIR 
              ACCESS PARAMETERS 
       ( 
         records delimited by newline 
         badfile LOGDIR:'dev_logt%a_%p.bad' 
         logfile LOGDIR:'dev_log%a_%p.log' 
         fields terminated BY X'09' 
         missing field values are null 
         ( sev, time_log char date_format date mask "yyyy-mm-dd hh24:MI:SS", module, run_no, err_no, ai, sid, serial, msg ) 
       ) 
       location ( 
 'dev_util.log.Mon', 
 'dev_util.log.Tue',
 'dev_util.log.Wed',
 'dev_util.log.Thu',
 'dev_util.log.Fri',
 'dev_util.log.Sat',
 'dev_util.log.Sun' )  ) 
PARALLEL 
     REJECT LIMIT UNLIMITED 
/

Now add the calls to your code, something like the following:

BEGIN
...
   dev_log.info( 'My Module', 'This is a test' );
...
EXCEPTION
   WHEN OTHERS THEN  -- use of WHEN OTHERS is not encouraged
      dev_log.warning( 'My Module', sqlcode, sqlerrm );
      RAISE;
END;

Create the appropriate synonyms, grant to PUBLIC or to individual schemas, and you're good to go.  This is a method I have used since 9i and it has served me well.  This is just a start on logging; there may be more data points your organization needs to track.  Maybe you want longer-term retention: you could add code to daily_reset to permanently save some records based on some criteria.  Add an API in your Java code and log all the Java errors too.
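Once the external table is in place, the log is just another table to query.  For example, to pull today's problems (columns as defined in db_logs above):

```sql
-- Show today's warnings and errors, newest first.
SELECT time_log, module, run_no, err_no, msg
  FROM db_logs
 WHERE sev IN ( 'WARNING', 'ERROR' )
   AND time_log >= TRUNC( SYSDATE )
 ORDER BY time_log DESC;
```

And because the files live on the OS, the helpdesk can tail them even when the database is unreachable.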

 

 

Taking STOCK

A feature of Oracle that I use a lot is EXTERNAL tables.   An external table leverages SQL*Loader to read a file and present it as a table.   In early versions the file needed to be a regular file; current releases add a PREPROCESSOR option, which allows converters like unzip to extract a file first.   I will show how to pull stock quotes into an external table.

In this example I will use a shell script that pulls quotes from Yahoo Finance with curl, a tool to transfer data from or to a server.   I have only tested on Linux, but it should work on any Unix-like system.  If you do not have curl, install it with your package manager.

The first thing needed is a directory for external tables to work with.  This is done with CREATE DIRECTORY.  I am going to create two for this example: one for normal operations and one for executables.  As SYS or SYSTEM do:

CREATE DIRECTORY logdir as '/tmp' ;
CREATE DIRECTORY exec_dir  as '/usr/local/bin' ;

GRANT read,write ON DIRECTORY logdir TO stocks; 
GRANT read,execute ON DIRECTORY  exec_dir TO stocks;

This creates two directories, logdir and exec_dir, and assigns appropriate access to each: read and write on logdir, and read and execute on exec_dir, to the existing user ‘stocks’ (your user may be different).   I use two directories because I don’t want to write to /usr/local/bin, for two reasons.  The first is clutter: various log files are generated with external tables.  The other is that I do not want to open the permissions on that directory to allow writing by the process running Oracle.   I have logdir for those files and for files I want to read in common usage of external tables.  It points to /tmp, so the OS will clean the files up at some point.   We also need a table to hold the stock symbols we wish to track.  Log in as the user stocks and create the needed table.

CREATE TABLE stocks ( symbol VARCHAR2( 32 ) ) ; 

INSERT INTO stocks ( symbol ) VALUES ( 'ORCL' ) ;

This is a trivial table you can expand depending on your needs.   You can add additional symbols if you would like.
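For example, a couple more rows (any valid ticker symbols will do; these are just illustrations):

```sql
INSERT INTO stocks ( symbol ) VALUES ( 'IBM' );
INSERT INTO stocks ( symbol ) VALUES ( 'MSFT' );
COMMIT;
```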

Now create the script that will read the symbol list and pull the quotes; save it as yahoo_quote.sh.  So do the following:

sudo vi /usr/local/bin/yahoo_quote.sh

Go into insert mode with an i, then paste in the following.  Change the location of sqlplus if yours is different; also, if you have a different schema password, change that too.

#!/bin/bash 
CURL=/usr/bin/curl
TR=/usr/bin/tr

## Make sure this is where sqlplus resides on you system! 
SQLPLUS=/u01/app/oracle/product/11.2.0/xe/bin/sqlplus

## These are octal values for their namesake
SPACE="\40"
QUOTE="\42"
TAB="\11"

## Change the user/pass if needed
lst=$( ${SQLPLUS} -s stocks/stocks  << EOF 
set head off
set feed off
set timing off
select listagg(symbol,',') WITHIN GROUP (ORDER BY symbol) FROM stocks;
exit
EOF
)

base_url='http://download.finance.yahoo.com/d/quotes.csv?s='
params='&f=sl1d1t1c1ghvpp2wern'

endpoint=$(echo ${base_url}${lst}${params}|${TR} -d  $SPACE )

${CURL} -s  "${endpoint}"|${TR} -d $QUOTE | ${TR} ',' $TAB

Save and exit (‘Esc’ then :wq), then change the permissions and test.  A brief explanation of the script: I establish names for the fully qualified executables I will need, then set some octal values used to format the output.  Next I pull the list of stocks using sqlplus and a query, then concatenate the parts of the URL, using tr to strip any spaces.  Lastly I call curl to pull the data, stripping quotes and converting commas to tabs.

$> sudo chmod 755 /usr/local/bin/yahoo_quote.sh
$> yahoo_quote.sh

If you have done all the steps you should see something like:

ORCL    34.86    1/11/2013    4:00pm    -0.05    34.78    35.00    15105158    34.91    -0.14%    25.33 - 35.00    2.127    16.41    Oracle Corporatio

Now we’re ready to put it into the database; in sqlplus paste the following.

drop table  yahoo_quotes_ext 
/

CREATE TABLE yahoo_quotes_ext (
   symbol varchar2(16) ,
   last_trade varchar2(32), 
   last_trade_dt VARCHAR2(20),
   last_trade_tm VARCHAR2(20),
   change VARCHAR2(20),
   day_lo VARCHAR2(20),
   day_hi VARCHAR2(20),
   volume VARCHAR2(20),
   previous VARCHAR2(20),
   change_pc VARCHAR2(20),
   Year_range  varchar2(32),
   eps   number ,
   pe    number ,
   co_name  varchar2( 24 ) )  
ORGANIZATION EXTERNAL(
 TYPE ORACLE_LOADER
  DEFAULT DIRECTORY LOGDIR
 ACCESS PARAMETERS(
         records delimited BY newline
 preprocessor exec_dir:'yahoo_quote.sh' 
         FIELDS terminated BY X'09'
            MISSING FIELD VALUES ARE NULL)
                          LOCATION (exec_dir:'yahoo_quote.sh')
                          )
   PARALLEL
   REJECT LIMIT UNLIMITED
/

To briefly explain the create table statement: the first part is like any other, giving names and data types.   Then ORGANIZATION EXTERNAL describes the format, in this case the name and location of the preprocessor to run; each record is on its own line, with fields separated by a tab character.

If all went well we can now get the quotes with a select statement:

SELECT    SYMBOL ,
      LAST_TRADE,
      to_date( LAST_TRADE_DT
         ||LAST_TRADE_TM, 'MM/DD/YYYYHH:MIpm' ) last_trade_dt ,
      CHANGE ,
      CASE
         WHEN DAY_HI ='N/A'
         THEN LAST_TRADE
         ELSE DAY_HI
      END AS day_hi,
      CASE
         WHEN DAY_LO = 'N/A'
         THEN last_trade
         ELSE day_lo
      END AS day_lo ,
      VOLUME ,
      PREVIOUS ,
      CHANGE_PC ,
      YEAR_RANGE ,
      EPS ,
      PE ,
      CO_NAME
 FROM yahoo_quotes_ext ;

I convert any ‘N/A’ values that may come through to keep the data consistent with the numeric types I want to store.

 

 

A new world ORDER

Often I have seen sites where you need to select a country for some reason or other, and of course you have to wade through a big list to find yours; a list like this one from iso.org.  Some will put the U.S. or the home country at the top, but is there a way to make this easy to maintain?   Generally what they do is add a sort column to the table.   That is just cruft that becomes another source of maintenance.

So what to do?   Add a table that lets the sort order be defined, with different sorts for different needs.    First the tables:

CREATE TABLE COUNTRY_CODES (
                COUNTRY_CODE CHAR(2) NOT NULL,
                COUNTRY_CODE# NUMBER(22) NOT NULL,
                COUNTRY_NAME VARCHAR2(64),
                CONSTRAINT COUNTRY_CODES_PK PRIMARY KEY (COUNTRY_CODE)
);

CREATE TABLE COUNTRY_SORT (
                USAGE VARCHAR2(16) NOT NULL,
                COUNTRY_CODE CHAR(2) NOT NULL,
                POS NUMBER(22) NOT NULL,
                CONSTRAINT COUNTRY_SORT_PK PRIMARY KEY (USAGE, COUNTRY_CODE, POS)
);

ALTER TABLE COUNTRY_SORT ADD CONSTRAINT COUNTRY_SORT_COUNTRY_CODE_FK1
FOREIGN KEY (COUNTRY_CODE)
REFERENCES COUNTRY_CODES (COUNTRY_CODE);

I then populated country_codes from the CSV in the link above, using a sequence for country_code#.   Then I inserted the following into the country_sort table:
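The population step looked roughly like this; the sequence name and the staging table are my own choices for illustration (I assume the ISO CSV was first loaded into a staging table):

```sql
-- Hypothetical load; assumes the ISO CSV was loaded into a staging
-- table COUNTRY_STAGE ( code CHAR(2), name VARCHAR2(64) ).
CREATE SEQUENCE country_code_seq;

INSERT INTO country_codes ( country_code, country_code#, country_name )
  SELECT code, country_code_seq.NEXTVAL, name
    FROM country_stage;
COMMIT;
```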

REM INSERTING into COUNTRY_SORT
SET DEFINE OFF;
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('AT',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('BE',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('BG',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('CY',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('CZ',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('DE',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('DK',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('EE',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('ES',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('FI',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('FR',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('GB',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('GR',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('HU',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('IE',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('IT',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('LT',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('LU',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('LV',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('MT',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('NL',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('PL',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('PT',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('RO',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('SE',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('SI',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('SK',27,'EU');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('US',1,'North America');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('CA',2,'North America');
Insert into COUNTRY_SORT (COUNTRY_CODE,POS,USAGE) values ('MX',3,'North America');

Now I have two sets of sorting for the country_codes table and have not had to touch the base table.  I can change the order of the results just by changing a variable, not by changing the tables or the order by:

SQL> var loc varchar2(32) 
SQL> exec :loc:='North America' 
SQL> select c.*, ( select pos from country_sort s 
                    where usage = :loc 
                      and s.country_code = c.country_code ) pos 
       from country_codes c 
      order by pos, country_code# ;

COUNTRY_CODE# COUNTRY_NAME                          CO       POS
------------- ------------------------------------- -- ------------
      236 UNITED STATES                           US         1
       40 CANADA                                  CA         2
      144 MEXICO                                  MX         3
        1 AFGHANISTAN                             AF
        2 ÅLAND ISLANDS                           AX
        3 ALBANIA                                 AL
        4 ALGERIA                                 DZ
...........
 249  ZIMBABWE                                    ZW 

SQL> exec :loc:='EU' 
SQL> select c.*, ( select pos from country_sort s 
 where usage = :loc 
 and s.country_code = c.country_code ) pos 
 from country_codes c
    order by pos, country_code# ;

COUNTRY_CODE# COUNTRY_NAME                          CO       POS
------------- ------------------------------------- -- ------------
       15 AUSTRIA                                   AT        27
       22 BELGIUM                                   BE        27
       35 BULGARIA                                  BG        27
       59 CYPRUS                                    CY        27
       60 CZECH REPUBLIC                            CZ        27
       61 DENMARK                                   DK        27
       70 ESTONIA                                   EE        27
       75 FINLAND                                   FI        27
       76 FRANCE                                    FR        27
       83 GERMANY                                   DE        27
       86 GREECE                                    GR        27
      101 HUNGARY                                   HU        27
      107 IRELAND                                   IE        27
      110 ITALY                                     IT        27
      123 LATVIA                                    LV        27
      129 LITHUANIA                                 LT        27
      130 LUXEMBOURG                                LU        27
      138 MALTA                                     MT        27
      157 NETHERLANDS                               NL        27
      177 POLAND                                    PL        27
      178 PORTUGAL                                  PT        27
      182 ROMANIA                                   RO        27
      202 SLOVAKIA                                  SK        27
      203 SLOVENIA                                  SI        27
      209 SPAIN                                     ES        27
      215 SWEDEN                                    SE        27
      235 UNITED KINGDOM                            GB        27
        1 AFGHANISTAN                               AF
...........
     249  ZIMBABWE                                  ZW

So that is cool: change a variable and change the order.  But looking at the output of the second select, I don’t want the UK so far down the list:

insert into country_sort ( country_code, pos, usage )
   ( select country_code, 27, 'EU GB First' 
       from country_sort 
      where usage = 'EU' );
update country_sort set pos = 1 where country_code='GB' 
                                  and usage = 'EU GB First' ;

/* now we can get */
exec :loc:='EU GB First'

SQL> select c.*, ( select pos from country_sort s 
                    where usage = :loc 
                      and s.country_code = c.country_code ) pos 
      from country_codes c
    order by pos, country_code# ;

COUNTRY_CODE# COUNTRY_NAME                          CO       POS
------------- ------------------------------------- -- ------------
      235 UNITED KINGDOM                            GB         1
       15 AUSTRIA                                   AT        27
       22 BELGIUM                                   BE        27
....

Now using what we learned about pipelined functions, in the previous article:

CREATE TYPE country_t AS TABLE OF VARCHAR2(64);
/
CREATE OR REPLACE FUNCTION ordered_country_list( pOrder VARCHAR2 )
   RETURN country_t DETERMINISTIC PIPELINED AS
BEGIN
     FOR rec IN ( SELECT c.*, ( SELECT pos 
                                  FROM country_sort s
                                 WHERE usage = pOrder
                                   AND s.country_code = c.country_code ) pos
                    FROM country_codes c
                   ORDER BY pos, country_code# )
      LOOP
         PIPE ROW (rec.country_name);
      END LOOP;
      RETURN;
END;
/

Calling the ordered_country_list function and passing in the parameter will list the results in the  requested order:

select column_value as country_name 
  from table(  ordered_country_list( 'North America' ) ) ;

This is a real-world example.  I have also used this technique for things like prescriptions, where brand names are listed above the generic, or just below.   Control is in the sort table, and any number of sort orders can be added and used.

 

 

Put that in your PIPE

One feature of PL/SQL that is often ignored in development is PIPELINED functions.  Pipelined functions are a way to return results before all values are processed; from the documentation, “pipelined functions return a row to its invoker immediately after processing that row and continue to process rows”.   Their use can speed up processing, particularly when a subset of the data is all you need.    In this article I give a simple but practical example of a pipelined function.  Mainly I want to show the syntax and the features without getting carried away trying to do too much.  Then I will give an example and some background on one technique that I use regularly.

Pipelined functions return a collection type, and that type must be globally known to the schema or defined in the package.    Here the PACKAGE spec is simple: a declaration of a TABLE OF NUMBER type, numset_t, and two function signatures returning that type.  Notice that the functions are declared DETERMINISTIC.  This informs Oracle that for any given input the output is always the same; an example is the square root function: if the input is 25, the result is always 5.   Lastly the functions are declared PIPELINED, indicating that the result is a pipeline:

CREATE OR REPLACE 
PACKAGE  unit_pivot AS
    TYPE numset_t IS TABLE OF NUMBER;
    FUNCTION pivot( p_unit VARCHAR2 ) RETURN numset_t DETERMINISTIC PIPELINED;
    FUNCTION pivot( p_unit NUMBER ) RETURN numset_t DETERMINISTIC PIPELINED;
END;

The pivot function taking a VARCHAR2 relies on a table defined as below and does the same as the pivot function taking a NUMBER; it just gets the value for the range from a select.   The table contains common name-value pairs for units, like 'HOURS', 24 or 'MONTH', 31.

CREATE TABLE UNITS
  (    NAME VARCHAR2(32),
       VALUE NUMBER 
  );
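A couple of sample rows, taken from the name-value pairs mentioned above:

```sql
INSERT INTO units ( name, value ) VALUES ( 'HOURS', 24 );
INSERT INTO units ( name, value ) VALUES ( 'MONTH', 31 );
COMMIT;
```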

The BODY implements the functions, which are mostly alike.  The function with the NUMBER parameter uses it as the upper bound of a LOOP, and for each value does a PIPE ROW of that value.  Notice that the RETURN statement does not have a value, as it would in a normal function.  The function with the VARCHAR2 parameter looks up the value for the named unit in the table and calls the other function.

CREATE OR REPLACE
PACKAGE BODY unit_pivot AS
FUNCTION pivot( p_unit VARCHAR2 )
   RETURN numset_t DETERMINISTIC PIPELINED AS
   lRet NUMBER ;
   BEGIN
      SELECT VALUE INTO lRet 
        FROM units
       WHERE NAME = p_unit;
      FOR rec IN ( SELECT column_value FROM TABLE( pivot( lRet ) ) )
      LOOP
         PIPE ROW( REC.COLUMN_VALUE );
      END LOOP;
      RETURN;
   END;
FUNCTION pivot( p_unit NUMBER )
      RETURN numset_t DETERMINISTIC PIPELINED AS
   BEGIN 
      FOR i IN 1..p_unit LOOP
         PIPE ROW( i );
      END LOOP;
      RETURN;
   END;
END;

So why would you use this?   It produces the same result as

SELECT level FROM dual CONNECT BY LEVEL <= :p_unit;

Would you use UNIT_PIVOT?  Maybe not, since the way I used it here was mainly to demonstrate pipelined functions.  I do use UNIT_PIVOT in place of CONNECT BY LEVEL, mostly as a way of promoting good usage.    I find developers think they can use CONNECT BY LEVEL to get N values; that only works when the result set has only one row to begin with.  Having a standalone function separates the usage, making the code consistent.

Now an example of using the package:

/* file pipecal.sql */
WITH x AS (
SELECT * FROM (
 select trunc(SYSDATE, 'mm')+column_value-1  dy ,
   to_char( trunc(SYSDATE, 'mm')+column_value-1, 'dd')  dm ,
   to_char( trunc(SYSDATE, 'mm')+column_value-1, 'mm')  curr_mth ,
   to_number(to_char( trunc(SYSDATE, 'mm')+column_value-1, 'd')) dw ,
/* CASE below prevents DEC from placing its last week in next year */
   to_number(to_char( trunc(SYSDATE, 'mm')+column_value -
      CASE to_char(SYSDATE, 'mm' ) WHEN '12' THEN 8 
                                   ELSE 1 END, 'iyiw'))  wk,
   to_char(SYSDATE, 'mm') mth
   from TABLE(  unit_pivot.pivot( 31  ) ) 
   ) WHERE curr_mth = mth
) , 
z AS ( SELECT /* This puts SUNDAY on the same week as MONDAY */
      dw, 
      dm, 
      CASE dw WHEN 1 THEN wk+1 ELSE wk END  wk FROM x
)
SELECT 
       MAX( CASE dw WHEN 1 THEN dm END ) Su,
       MAX( CASE dw WHEN 2 THEN dm END ) Mo,
       MAX( CASE dw WHEN 3 THEN dm END ) Tu,
       MAX( CASE dw WHEN 4 THEN dm END ) We,
       MAX( CASE dw WHEN 5 THEN dm END ) Th,
       MAX( CASE dw WHEN 6 THEN dm END ) Fr,
       MAX( CASE dw WHEN 7 THEN dm END ) Sa
  FROM z
 GROUP BY wk ORDER BY wk 
/

The output prints the current month:

 SQL> @pipecal 

SU MO TU WE TH FR SA
-- -- -- -- -- -- --
                  01
02 03 04 05 06 07 08
09 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30 31

6 rows selected.

So I hope you now know a little more about pipelined functions and find uses for it in your projects in the future.

Where is the duplicate?

I have often said, 'One test is worth a thousand speculations.' I never assume that one method is right, or even right in every case; I look at what might work and then test. There is a forum I follow where there was a question on how to delete duplicates. The general consensus was the way I have done it in the past:

DELETE
  FROM tab
 WHERE ROWID NOT IN
    (SELECT   MIN (ROWID)
       FROM tab
      GROUP BY <dup column list> )

But there were also some ideas, new to me at least, on alternative solutions. The example was how to delete rows with duplicate firstname and lastname values in the emp table. The first suggestion was a variation on the template above:

/* del_by_group.sql */

DELETE FROM emp WHERE ROWID NOT IN (
SELECT MIN (ROWID) FROM emp GROUP BY firstname , lastname)

The next was using an analytic function:

/* del_by_analytic.sql */ 
delete emp where rowid in  (
select lag(rowid) 
 over(partition by firstname, lastname order by empid) from emp ) 
/

The last was also inspired by analytics, using Top-N logic:

/* del_by_rownumber.sql */
DELETE
  FROM EMP t
 WHERE t.ROWID IN
    ( SELECT   dupl.RID
       FROM ( SELECT row_number() 
               OVER (PARTITION BY FirstName, 
                                  LastName 
                      ORDER BY EmpID) AS ROWNR,
            FirstName  ,
            LastName   ,
            ROWID AS RID
       FROM EMP
   ORDER BY FirstName,
            LastName ) dupl
  WHERE dupl.ROWNR > 1 )
/

So I created an emp table with empid, firstname, and lastname columns, building the data from all_objects, then tested the various methods. Here is the script to create the test data:

/* del_dup_test */
truncate table emp;
drop table emp;
create table emp as ( select object_id as empid, OWNER firstname , OBJECT_NAME lastname from all_objects ) ;
insert into  emp  ( select object_id+26000,  initcap( owner ) ,initcap( OBJECT_NAME ) from all_objects ) ;
insert into  emp  ( select object_id+52000,  initcap( owner ) , ( OBJECT_NAME ) from all_objects ) ;
insert into  emp  ( select object_id+78000,  initcap( owner ) , ( OBJECT_NAME ) from all_objects where owner <> 'SYS'  ) ;
insert into  emp  ( select object_id+104000,  initcap( owner ) , initcap( OBJECT_NAME ) from all_objects where owner <> 'SYS'  ) ;
commit;
select count(*) from emp ; 
@del_by_group
select count(*) from emp ;
roll
/
@del_by_analytic
select count(*) from emp ;
roll
/
@del_by_rownumber
select count(*) from emp ;
roll
/
exec dbms_output.put_line( ' del_by_group');
set autotrace traceonly exp stat ;
@del_by_group
SET autotrace off ;
roll
exec dbms_output.put_line( ' del_by_analytic');
set autotrace traceonly exp stat ;
@del_by_analytic
SET autotrace off ;
roll
exec dbms_output.put_line( ' del_by_rownumber')
set autotrace traceonly exp stat ;
@del_by_rownumber
SET autotrace off ;
roll

If you try this you may need to change the multiple of 26000 to prevent duplicate empids. I create three rows for every row in all_objects: the base row, a row where both first and last names are initcap'ed, and a third row where only the first is initcap'ed. Then I create duplicates of those extra rows, excluding the SYS schema.
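To check whether the 26000 offset is big enough on your database before running the script, a quick sanity-check query is:

```sql
-- the offsets must exceed the largest object_id or empids will collide
SELECT MAX( object_id ) FROM all_objects;
```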

The rest of the script calls the other three scripts, first to show they all produce the same result, and then calls them again with tracing on (there are other ways this could be evaluated, but this gives you the idea).

Now here’s the spooled output from running the script, with feedback and timing off to reduce the clutter. The times were all sub-second.

SQL> @del_dup_test

    COUNT(*)
------------
       87865

    COUNT(*)
------------
       58239

    COUNT(*)
------------
       87865

    COUNT(*)
------------
       58239

    COUNT(*)
------------
       87865

    COUNT(*)
------------
       58239

    COUNT(*)
------------
       87865
 del_by_group

Execution Plan
----------------------------------------------------------
Plan hash value: 3239269824

------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |          | 46243 |  1083K|       |   366   (3)| 00:00:05 |
|   1 |  DELETE               | EMP      |       |       |       |            |          |
|*  2 |   HASH JOIN ANTI      |          | 46243 |  1083K|  1088K|   366   (3)| 00:00:05 |
|   3 |    TABLE ACCESS FULL  | EMP      | 46243 |   541K|       |   127   (1)| 00:00:02 |
|   4 |    VIEW               | VW_NSO_1 | 46243 |   541K|       |   131   (4)| 00:00:02 |
|   5 |     SORT GROUP BY     |          | 46243 |  2077K|       |   131   (4)| 00:00:02 |
|   6 |      TABLE ACCESS FULL| EMP      | 46243 |  2077K|       |   127   (1)| 00:00:02 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(ROWID="MIN(ROWID)")

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
         36  recursive calls
      30884  db block gets
        940  consistent gets
          0  physical reads
    8308824  redo size
       1141  bytes sent via SQL*Net to client
       1346  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      29626  rows processed

 del_by_analytic

Execution Plan
----------------------------------------------------------
Plan hash value: 1245347917

-------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |          |    38M|   873M|       |  2501  (15)| 00:00:31 |
|   1 |  DELETE                | EMP      |       |       |       |            |          |
|*  2 |   HASH JOIN            |          |    38M|   873M|  1448K|  2501  (15)| 00:00:31 |
|   3 |    VIEW                | VW_NSO_1 | 61779 |   723K|       |  1892   (1)| 00:00:23 |
|   4 |     SORT UNIQUE        |          | 61779 |  3559K|  4384K|  1892   (1)| 00:00:23 |
|   5 |      WINDOW SORT       |          | 61779 |  3559K|  4384K|  1892   (1)| 00:00:23 |
|   6 |       TABLE ACCESS FULL| EMP      | 61779 |  3559K|       |   127   (1)| 00:00:02 |
|   7 |    TABLE ACCESS FULL   | EMP      | 61779 |   723K|       |   127   (1)| 00:00:02 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(ROWID="$kkqu_col_1")

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
         36  recursive calls
      30863  db block gets
        940  consistent gets
          0  physical reads
    8303216  redo size
       1141  bytes sent via SQL*Net to client
       1362  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      29626  rows processed

 del_by_rownumber

Execution Plan
----------------------------------------------------------
Plan hash value: 3671588261

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT             |          |     1 |    24 |   342   (2)| 00:00:05 |
|   1 |  DELETE                      | EMP      |       |       |            |          |
|   2 |   NESTED LOOPS               |          |     1 |    24 |   342   (2)| 00:00:05 |
|   3 |    VIEW                      | VW_NSO_1 | 46251 |   542K|   127   (1)| 00:00:02 |
|   4 |     SORT UNIQUE              |          |     1 |  1129K|            |          |
|*  5 |      VIEW                    |          | 46251 |  1129K|   127   (1)| 00:00:02 |
|   6 |       WINDOW SORT            |          | 46251 |  2664K|   127   (1)| 00:00:02 |
|   7 |        TABLE ACCESS FULL     | EMP      | 46251 |  2664K|   127   (1)| 00:00:02 |
|   8 |    TABLE ACCESS BY USER ROWID| EMP      |     1 |    12 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("DUPL"."ROWNR">1)

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
         36  recursive calls
      30883  db block gets
        940  consistent gets
          0  physical reads
    8309716  redo size
       1141  bytes sent via SQL*Net to client
       1499  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      29626  rows processed

So there were some surprises for me: first, that the tried and true way did so well, but also that the one that looked the most complex produced the best trace. Even so, I’ll stick with the way I have always deleted dupes, since I can remember it off the top of my head and my databases do not have duplicates very often. Regarding the GROUP BY version, changing MIN to MAX will just as easily delete the other entries. This is also not a complete test: the duplicates are based on only two columns, and the percentage of duplicates is fairly high. Sometimes there are duplicates where other columns have a bearing on which records are to be removed.

afiedt.buf, what's with that?

The file afiedt.buf is the default buffer for editing files from SQL*Plus. It is a throwback to long before SQL*Plus existed. Before SQL*Plus there was the User Friendly Interface, and the edit buffer was ufiedt.buf. An early name for SQL*Plus was the Advanced User Friendly Interface, and the edit buffer became afiedt.buf. AUFI got changed to SQL*Plus, but the buffer name stayed the same.

There are several problems with the name afiedt.buf. There is only one name, and the file ends up in the current working directory. The long SQL you worked on for a new project gets overwritten when you start a new session and forget to save. You cannot write to the file when another session has it open. And your editor does not know the file should be treated as SQL. So what to do?

-- Part of login.sql
define _EDITOR = "vim -c 'colorscheme evening'  "

--  make the name of the edit buffer a .sql so the editor knows to highlight it
define efile=sqltmp1.SQL

-- now we make it unique to the session and user in /tmp for system cleanup
column file_name new_value efile
SELECT '/tmp/sql' || SYS_CONTEXT ('USERENV', 'SESSION_USER' ) 
                  || SYS_CONTEXT ('USERENV', 'SESSIONID' ) 
                  || '.sql' file_name  from dual;
set editfile '&efile'

The first define sets the editor to vim (I use SlickEdit myself, but that's another blog; any Linux system will have vi or a derivative). The next define just sets a placeholder value for the variable efile. The column command lets efile be the target of a select, and the select populates efile with the string /tmp/sqlUSER#####.sql. Lastly, the set command assigns editfile the value of efile. Now each session gets its own file, and the SQL statement from five sessions ago is still there (at least until /tmp gets cleaned up; I actually use /var/sql).

I hope this makes your experience in SQL*Plus a little better.

Function or Procedure

My biggest peeve is the misuse of functions, and when I say function it can be standalone or part of a package. The point is that functions and procedures have different purposes.

First, my simple rules on FUNCTION and PROCEDURE:

  • Functions return a single meaningful value.
  • Functions do not have side-effects.
  • Procedures perform work; they do something.
  • Procedures have side-effects.

These rules date back to Ada, the language PL/SQL is derived from. They still hold; one reason is that a function can be used in a SQL statement, but if it contains DML the statement will throw an error (ORA-14551).

Functions are used to return a value that will be used by the program: things like the sum of a set of values, the square root of a number, or the amount of tax to be applied.

/* Example FUNCTION */ 
CREATE OR REPLACE FUNCTION 
     amount_of_tax( pTaxable NUMBER, pZipcode VARCHAR2 )
RETURN NUMBER IS
   lRet NUMBER;
BEGIN
    SELECT tax_rate * pTaxable INTO lRet 
      FROM tax_rates 
     WHERE zipcode = pZipcode;
   RETURN lRet; 
END;

Simple and meaningful: that is what functions should be. But I have seen code like this:

CREATE OR REPLACE FUNCTION myFunc
             RETURN NUMBER IS
BEGIN
    UPDATE employees 
       SET phone_number = '123.456.7890' 
     WHERE employee_id = 107;
    RETURN 1;
EXCEPTION
WHEN OTHERS THEN
    RETURN 0;
END;

/* And called like one of these */

SELECT myFunc FROM dual;
--
var foo NUMBER 
EXEC foo := myFunc ;

This is wrong in so many ways.

  • Returns useless data.
  • Invalidates the exception handling mechanism.
  • Returns zero for failure; as a long-time C programmer, to me non-zero signals the error.
  • Callers need a variable to capture the return value.

Correctly, this should be a procedure:

CREATE OR REPLACE PROCEDURE myProc IS
BEGIN
    UPDATE employees 
       SET phone_number = '123.456.7890' 
     WHERE employee_id = 107;
END; 
-- Called from SQL Prompt
exec myProc

So this is just a quick reminder.  Functions return values you use and Procedures do things for you.

This is for you


plsql.me is for you. I have worked with PL/SQL for over 10 years, including supporting PL/SQL at Oracle, and this site is for you to benefit from that experience. Working at Oracle, I saw lots of less-than-optimal usage of Oracle and PL/SQL, and I found lots of ways to solve those problems. There were also cutting-edge users who pushed me to learn even more.

It is my hope that I can share some tips with you, and in turn be pushed by you to help solve real-world situations.