Visit the ORACLE PL/SQL Fundamentals course recordings page
So, single-row queries: we use INTO with EXECUTE IMMEDIATE for queries that return a single row. Then placeholder binding: the placeholder is :1, or you can say :my_var, which is more readable, to pass parameters dynamically. Then dynamic query safety: always validate user input to prevent SQL injection when constructing dynamic queries. Our use case here is fetching data dynamically based on runtime parameters and creating flexible queries for reusable programs. Next we will try multi-row queries. Let me know if you are good to go, thank you.
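As a quick illustration of the points above, here is a minimal sketch of a single-row dynamic query with a bind variable; the employees table and column names are assumptions for the example.

```sql
DECLARE
  v_sql    VARCHAR2(200);
  v_salary NUMBER;
BEGIN
  -- Single-row dynamic query: EXECUTE IMMEDIATE ... INTO.
  -- The :emp_id placeholder is bound at run time, which avoids
  -- concatenating user input into the SQL text (SQL injection safety).
  v_sql := 'SELECT salary FROM employees WHERE employee_id = :emp_id';
  EXECUTE IMMEDIATE v_sql INTO v_salary USING 101;
  DBMS_OUTPUT.PUT_LINE('Salary: ' || v_salary);
END;
/
```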
Okay, let's move on to 371, which is encapsulating a dynamic SQL SELECT in a procedure that allows us to specify complex querying logic. So let's do 371, thank you. It's better that we are able to code and try it: the data is there, and we also have test cases, so it provides examples that we can use.
So here we are engaging with the use case of a dynamic report generator. We create a procedure with just a simple SELECT for the report generator, so we can code it like the procedure we generated before.
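A minimal sketch of what such a report procedure could look like; the employees table, its columns, and the procedure name are assumptions, and the department id is passed as a bind variable rather than concatenated.

```sql
CREATE OR REPLACE PROCEDURE emp_report (p_dept_id IN NUMBER) IS
  v_sql  VARCHAR2(400);
  v_name employees.last_name%TYPE;
  v_sal  employees.salary%TYPE;
  c      SYS_REFCURSOR;
BEGIN
  -- Encapsulate the dynamic SELECT; the caller supplies the parameter.
  v_sql := 'SELECT last_name, salary FROM employees WHERE department_id = :d';
  OPEN c FOR v_sql USING p_dept_id;
  LOOP
    FETCH c INTO v_name, v_sal;
    EXIT WHEN c%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_name || ': ' || v_sal);
  END LOOP;
  CLOSE c;
END;
/
```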
Then we will try using files. Files can be read or written on the server where the Oracle database is running, in paths configured as Oracle directory objects.
So let's try slide 375. Last two minutes, on the right side.
Now, common errors and solutions. INVALID_PATH: ensure that the directory object points to a valid path on the server. WRITE_ERROR: check that you have sufficient permissions and disk space. End of file: always handle NO_DATA_FOUND when reading files. The benefits of file handling are data export, generating logs or exporting data for reporting; data import, loading data into the database for processing; and automation, automating tasks like backups or data transfers.
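To make the end-of-file point concrete, here is a hedged sketch of a read loop that handles NO_DATA_FOUND and still closes the handle on failure; the directory name REPORT_DIR and the file name are illustrative assumptions.

```sql
DECLARE
  f      UTL_FILE.FILE_TYPE;
  v_line VARCHAR2(32767);
BEGIN
  f := UTL_FILE.FOPEN('REPORT_DIR', 'example.txt', 'R');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, v_line);  -- raises NO_DATA_FOUND at end of file
      DBMS_OUTPUT.PUT_LINE(v_line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;  -- normal end-of-file condition
    END;
  END LOOP;
  UTL_FILE.FCLOSE(f);
EXCEPTION
  WHEN UTL_FILE.INVALID_PATH THEN
    DBMS_OUTPUT.PUT_LINE('Check the directory object path.');
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(f) THEN UTL_FILE.FCLOSE(f); END IF;
    RAISE;
END;
/
```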
Okay, let me know if you're good and you can move on to the next slide. Go ahead, yeah, no problem. Sir Adrian, are you good, sir? Did you do it, sir? Ah, okay. Yes, sir, yes, sir. Right, it will automatically create the data. Yes, sir. Okay, so is it a permission issue, or... Last five minutes. I think you can upload it. Okay, let's move on to the next slide.
Yeah, so let's move on. The UTL_FILE package provides procedures for working with files: FOPEN to open a file, PUT_LINE to write, GET_LINE to read, and FCLOSE to close the file handle. We declare the file handle as UTL_FILE.FILE_TYPE. Then we open example.txt, we PUT_LINE, then we close, and we output what we have written in the file. We can also append to a file: in this case we open example.txt with mode 'A' for append, PUT_LINE appends the line, then we close, and it reports that the file operation was successful. Then here we have an example of reading example.txt: we loop over the lines in the file and print the content, with an exception handler so that when NO_DATA_FOUND is raised it closes the file handle and reports that the read is complete.
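A condensed sketch of the write and append steps just described; the directory name REPORT_DIR is an assumption, since the actual directory object used on the slides isn't shown here.

```sql
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- Mode 'W' creates or truncates the file, then writes a line.
  f := UTL_FILE.FOPEN('REPORT_DIR', 'example.txt', 'W');
  UTL_FILE.PUT_LINE(f, 'First line');
  UTL_FILE.FCLOSE(f);

  -- Mode 'A' appends to the end of the existing file.
  f := UTL_FILE.FOPEN('REPORT_DIR', 'example.txt', 'A');
  UTL_FILE.PUT_LINE(f, 'Appended line');
  UTL_FILE.FCLOSE(f);
  DBMS_OUTPUT.PUT_LINE('File write successful');
END;
/
```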
Next, we log employee salary changes to a file. We have a trigger condition, AFTER UPDATE OF salary, which executes the trigger after any update operation on the salary column of the employees table. The file mode is 'A', used to append data to the log file. The fields logged are the employee ID taken from :NEW, the new salary, and a timestamp. Then exception handling: we should ensure that the file is closed properly in case of errors. Again, the benefits of that are automated logging, automatically capturing salary changes for audit purposes; traceability, ensuring that every salary update is recorded with precise details; and centralized logs, since logs stored in a single file facilitate easier monitoring and troubleshooting.

Now let's look at the DBMS_ALERT package for notifications. DBMS_ALERT provides a mechanism for signaling and receiving notifications across database sessions. It is typically used to notify one session of changes or actions in another session. The steps in creating a signal alert: first, register the alert. The purpose of REGISTER is to register the session to listen for the specified alert, which in our case is salary_update; it's a prerequisite to listening for alerts. Then SIGNAL: its purpose is to send a notification for the registered alert, and the message parameter passes a custom message, 'employee salary update', along with the signal. Then for listening we have WAITONE, which listens for the specified alert salary_update and retrieves the message. Let's try that next. So let's go with slide 380.
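A minimal two-session sketch of the register/signal/waitone flow just described; the alert name salary_update follows the slide, the rest is illustrative. Note that SIGNAL is only delivered when the signaling session commits.

```sql
-- Session 1: register, then block waiting for the alert (60 s timeout).
DECLARE
  v_message VARCHAR2(1800);
  v_status  INTEGER;  -- 0 = alert received, 1 = timeout
BEGIN
  DBMS_ALERT.REGISTER('salary_update');
  DBMS_ALERT.WAITONE('salary_update', v_message, v_status, 60);
  IF v_status = 0 THEN
    DBMS_OUTPUT.PUT_LINE('Received: ' || v_message);
  END IF;
END;
/

-- Session 2: signal the alert; the notification is sent on COMMIT.
BEGIN
  DBMS_ALERT.SIGNAL('salary_update', 'employee salary update');
  COMMIT;
END;
/
```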
So now we'll try the DBMS_ALERT package. Then, our key points in task scheduling: automatic execution, tasks run automatically without manual intervention; custom intervals, so you adjust the interval to suit specific needs, for example hourly or weekly; error handling, jobs that fail repeatedly are marked as broken in USER_JOBS, or use DBMS_JOB.BROKEN to manually mark a job broken or re-enable it; and job removal, to remove a job just call DBMS_JOB.REMOVE with the job ID.
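A hedged sketch of submitting, breaking, and removing a job with the legacy DBMS_JOB API mentioned above; the procedure name my_task is an assumption (newer code would typically use DBMS_SCHEDULER instead).

```sql
DECLARE
  v_job BINARY_INTEGER;
BEGIN
  -- Submit a job that runs my_task now and then every hour.
  DBMS_JOB.SUBMIT(
    job       => v_job,
    what      => 'my_task;',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24'
  );
  COMMIT;  -- DBMS_JOB changes take effect on commit
  DBMS_OUTPUT.PUT_LINE('Job id: ' || v_job);

  -- Mark the job broken (disabled), then remove it entirely.
  DBMS_JOB.BROKEN(v_job, TRUE);
  DBMS_JOB.REMOVE(v_job);
  COMMIT;
END;
/
```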
Yes. Okay. Yes, but... Yes. You're not? Okay. Yes, yes. Okay, yeah, thank you. Thank you.
Yeah, so we can create that, and once it's done you can copy it, put it somewhere, or export it to compile. Then, for dynamic directory management: you can create a directory and use it for UTL_FILE operations. So here we create or replace a directory so you can use it. Then of course we have to make sure that we have the permission, and we grant access to make the directory accessible to the user.
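The directory steps above, sketched; the directory name, path, and grantee are illustrative assumptions.

```sql
-- Create (or replace) a directory object pointing at a server-side path,
-- then grant the user access so UTL_FILE can read and write through it.
CREATE OR REPLACE DIRECTORY report_dir AS '/u01/app/reports';
GRANT READ, WRITE ON DIRECTORY report_dir TO hr_user;
```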
So, next: collections. Collections in PL/SQL are data structures that can hold multiple elements of the same data type. They enable efficient handling of lists and records. There are three types of collections: associative arrays, formerly known as index-by tables; nested tables, which can store an unbounded collection of elements, suited for scenarios requiring multi-level data; and varrays, or variable-size arrays, which have a fixed upper bound and are useful for small collections of known size.
Yeah, so again, a collection is a data structure, first of all, and it holds multiple values of the same data type. So before we move on, let's try slide 385. I put more data there, so actually there's more information, but yeah, you can try the example. If you know Java, nested tables are comparable to the List interface, and associative arrays to the Map interface, like HashMap; I also gave you an analogy there. I have more samples in the next slides, but let's take 10 to 15 minutes to try slide 385.
So, the Java collection framework hierarchy as an analogy: the Iterable interface, where all collection classes implement Iterable, enabling iteration over elements; then Collection, which can be iterated using loops like the for loop. Arrays are similar.
What are the benefits of using PL/SQL collections? Efficient data processing: it reduces context switching between SQL and PL/SQL. Dynamic manipulation, because they handle unknown or dynamic data structures effectively. Bulk operations as well: you can perform bulk fetches using BULK COLLECT and FORALL, which we will have samples of later on. They also provide code organization, because they encapsulate complex logic in reusable data structures. And they enhance performance, since in-memory operations are fast compared to database-level processing.
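A short sketch of the bulk operations just mentioned; the employees table and the 5% raise are assumptions for the example.

```sql
DECLARE
  TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
  v_ids t_ids;
BEGIN
  -- BULK COLLECT: one context switch fetches all rows into the collection.
  SELECT employee_id BULK COLLECT INTO v_ids
    FROM employees
   WHERE department_id = 10;

  -- FORALL: one context switch sends all the updates to the SQL engine.
  FORALL i IN 1 .. v_ids.COUNT
    UPDATE employees
       SET salary = salary * 1.05
     WHERE employee_id = v_ids(i);
END;
/
```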
What are the best practices when using collections? Use the right collection: use associative arrays for sparse data and lookup tables; use nested tables for bulk operations; use varrays for ordered, fixed-size data sets. Optimize for performance: use BULK COLLECT with LIMIT when fetching large data sets. Manage memory by trimming and deleting unused elements. Also avoid complexity: keep collection operations simple and predictable to avoid performance degradation. And again, use exception handlers: always handle exceptions like NO_DATA_FOUND or VALUE_ERROR when working with collections.
PL/SQL collections are powerful tools for managing data efficiently within Oracle databases. By leveraging these structures, developers can implement complex data operations, optimize performance, and maintain clean code. Understanding the hierarchy and the usage of collections is essential for building robust PL/SQL. Let me know if you're good to proceed. So this is the example of the TYPE declarations, then here are the associative arrays, and the nested table.
So, record variables. A record variable is a custom composite data type that can group related fields of various data types, similar to a row in a table. In this case, the advantages are: ease of use, since it simplifies handling multiple fields related to an entity, allowing you to fetch an entire row into a single variable, which I think we discussed yesterday; flexibility, because it can dynamically adapt to changes in the table structure when %ROWTYPE is used, and custom records can be defined to handle data that spans multiple tables; and improved readability, since grouping related fields into logical structures makes code easier to design and read. For example, in this case you have the employees table, and the output is the fetched row.
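The %ROWTYPE point above, sketched; the employees table and the employee id are assumptions.

```sql
DECLARE
  -- %ROWTYPE adapts automatically if the table structure changes.
  v_emp employees%ROWTYPE;
BEGIN
  SELECT * INTO v_emp FROM employees WHERE employee_id = 101;
  DBMS_OUTPUT.PUT_LINE(v_emp.last_name || ' earns ' || v_emp.salary);
END;
/
```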
Now associative arrays, which we called index-by tables before, where data is stored using numeric or string-based indexes. The main features: the index type can be numeric (BINARY_INTEGER) or string-based (VARCHAR2); they have no predefined size requirement; since indexes can be sparse, they support non-contiguous indexes, for example skipping indexes; and they give efficient lookups, best suited for lookups or caches. So here in the declaration, we declare salaries as an associative array type of NUMBER with numeric indexes. Then assignments: values are assigned to indexes 1 and 5, demonstrating the ability to skip indexes. Dynamic behavior: no need to predefine the size of the array. Output: values are retrieved using the respective indexes and displayed using DBMS_OUTPUT.PUT_LINE. Our real-world use cases are caching data, where we access data from the cache for performance; temporary storage, in scenarios where intermediate values are not persisted; and lookup tables, used as in-memory tables to keep key-value pairs.
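The declaration, sparse assignment, and output steps just described, in one small sketch; the salary values are illustrative.

```sql
DECLARE
  TYPE t_salaries IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  v_salaries t_salaries;
BEGIN
  -- Sparse assignment: indexes 1 and 5, skipping 2 through 4.
  v_salaries(1) := 50000;
  v_salaries(5) := 72000;
  DBMS_OUTPUT.PUT_LINE('Index 1: ' || v_salaries(1));
  DBMS_OUTPUT.PUT_LINE('Index 5: ' || v_salaries(5));
END;
/
```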
So, nested tables. The main idea is an unbounded collection, stored in the database or manipulated in memory. Indexing: again, it's indexed sequentially starting from 1. Dense and sparse: initially dense, but deletion can create gaps, so it may become a sparse array. Storage: it can be stored in a database column or used in memory in a PL/SQL block. One of the features as well for nested tables is dynamic expansion, so new elements can be added at runtime. Here in the declaration, num_table is declared as a nested table type of NUMBER. It is initialized with values 1 to 3, and we have the expansion: the EXTEND method adds a new slot to the table. Then sparse behavior: the DELETE method removes the value at a specific index, creating a gap, and attempting to access that index results in a NO_DATA_FOUND error.
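The EXTEND/DELETE behavior above, sketched in a few lines; the values are illustrative.

```sql
DECLARE
  TYPE t_numbers IS TABLE OF NUMBER;
  v_nums t_numbers := t_numbers(1, 2, 3);
BEGIN
  v_nums.EXTEND;       -- add a new slot at the end
  v_nums(4) := 4;
  v_nums.DELETE(2);    -- create a gap at index 2
  BEGIN
    DBMS_OUTPUT.PUT_LINE(v_nums(2));  -- referencing the deleted element
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('Index 2 was deleted');
  END;
END;
/
```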
So here's the sample of varrays. Varrays, variable-size arrays, are collections in PL/SQL with a fixed upper bound. Unlike nested tables or associative arrays, they must be dense, and elements are stored in a specific order. The key features: bounded size, so the maximum size is specified during creation and cannot be exceeded; dense structure, so all elements are contiguous, meaning there are no gaps or sparsity; ordered storage, so elements retain their order and are accessed by integer subscript. Ideal usage: they are useful when the number of elements is known and does not vary significantly. So here we declare varray_type as a type defined with a maximum size of five. Fruits is initialized with three elements: apple, banana, and cherry. Elements are accessed using the index, starting from 1. Dynamic expansion: we can call the EXTEND method to add a slot for a new element, up to the maximum size; an exception is raised if the size is exceeded. Iteration: we can use a FOR loop from 1 to fruits.COUNT to iterate through all elements.
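The varray declaration, extension, and iteration steps above, as a small sketch; 'mango' is an illustrative value.

```sql
DECLARE
  TYPE varray_type IS VARRAY(5) OF VARCHAR2(20);
  fruits varray_type := varray_type('apple', 'banana', 'cherry');
BEGIN
  fruits.EXTEND;        -- add a fourth slot (the maximum is 5)
  fruits(4) := 'mango';
  FOR i IN 1 .. fruits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(i || ': ' || fruits(i));
  END LOOP;
END;
/
```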
The main use cases: fixed-length data storage, for small fixed sets like categories or short enumerations; data transfer, passing compact collections between PL/SQL blocks or out to applications; and predefined groups, used for cases where the number of elements is predictable, such as a list of weekdays or months. The advantages are compact design, ideal for small data sets; order preservation, ensuring consistent ordering of elements; and ease of use, very simple and straightforward for predictable data sizes. The limitations: fixed size, they cannot dynamically resize beyond the declared limit; and dense only, they do not allow gaps or sparse data. So let's have an example.
So lastly, index-by tables, also known as associative arrays, are powerful PL/SQL structures used for temporary in-memory data processing. They allow you to store key-value pairs with numeric or string-based keys, making them ideal for lookup tables or caching operations. Some features: dynamic size, they automatically grow as elements are added; flexible indexing, with support for numeric (BINARY_INTEGER) or string (VARCHAR2) indexes; sparse data, so they allow gaps in the indexes, making them suitable for data sets with non-contiguous keys; and not persistent, since data is stored only in memory and does not persist across sessions. Here in the declaration, price_table is defined as an associative array with VARCHAR2 keys and NUMBER values. We initialize the table: it is dynamically populated with the keys apple, banana, and cherry, as before. Then we can access the data using the key to fetch the associated value, for example the price of apple. The advantages are flexible indexing, efficient memory use, and dynamic sizing. The limitations: not persistent, data is not stored in the database; indexing constraints, only numeric and string indexes are supported; and limited size, restricted by available memory. So let's try it.
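The string-keyed table just described, sketched; the prices are illustrative values.

```sql
DECLARE
  TYPE t_prices IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
  price t_prices;
BEGIN
  -- Populate the in-memory lookup table with string keys.
  price('apple')  := 1.50;
  price('banana') := 0.75;
  price('cherry') := 3.25;
  -- Fetch a value by its key.
  DBMS_OUTPUT.PUT_LINE('apple costs ' || price('apple'));
END;
/
```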
But before that, note that we can have a direct assignment like this; then, using a loop, we can assign directly based on the index as we iterate. Then let's try accessing a non-existing element, which raises an exception: please do slide 396, thank you. I'll give you five minutes; use test case three. Okay, slide 397: allowing rows of similar data types to be looped over together.