14 videos 📅 2024-12-10 08:00:00 Asia/Brunei
12:04
2024-12-10 13:02:57
6:53
2024-12-10 13:18:23
3:49:58
2024-12-10 13:29:28
9:46
2024-12-11 07:45:15
5:40:21
2024-12-11 09:08:49
5:22
2024-12-11 17:54:14
1:42
2024-12-11 18:04:44
2:05:58
2024-12-12 09:00:08
1:01:06
2024-12-12 12:26:38
3:59
2024-12-12 15:09:36
51:24
2024-12-12 15:24:00
8:06
2024-12-12 16:34:52
2:35
2024-12-12 17:17:14
3:05
2024-12-12 17:26:54

Visit the ORACLE PL/SQL Fundamentals course recordings page

WEBVTT

00:15:49.400 --> 00:15:56.720
So single-row queries: use INTO with EXECUTE IMMEDIATE for queries that return a single

00:15:56.720 --> 00:15:57.120
row.

00:15:57.380 --> 00:16:05.140
Then for placeholder binding, the placeholder is :1, or you can say :my_var, which

00:16:05.140 --> 00:16:11.540
is more readable, to pass parameters dynamically and securely. Then dynamic query safety:

00:16:11.540 --> 00:16:18.160
so always validate user input to prevent SQL injection when constructing dynamic queries.

00:16:19.060 --> 00:16:25.600
Then our use case here is fetching data dynamically based on runtime parameters and creating flexible

00:16:25.600 --> 00:16:27.860
queries for reusable programs.
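
For reference, a minimal runnable sketch of what was just described (the employees table and the variable names are illustrative assumptions):

DECLARE
  v_emp_id employees.employee_id%TYPE := 100;  -- illustrative id
  v_salary employees.salary%TYPE;
BEGIN
  -- Single-row dynamic query: INTO receives the one row returned,
  -- and the :my_var placeholder is bound safely via USING.
  EXECUTE IMMEDIATE
    'SELECT salary FROM employees WHERE employee_id = :my_var'
    INTO v_salary
    USING v_emp_id;
  DBMS_OUTPUT.PUT_LINE('Salary: ' || v_salary);
END;
/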

00:16:28.800 --> 00:16:35.620
Next we will try multi-row queries; let me know if you are good to go, thank you.

00:16:52.720 --> 00:17:01.400
Okay, let's move on to 371, which is encapsulating a dynamic SQL SELECT in a procedure that

00:17:01.400 --> 00:17:11.760
allows us to reuse and specify complex querying logic, so let's do 371, slide 371, thank you.

00:17:13.240 --> 00:17:22.840
Okay, let's move on to 371. So here, it's better that we are able to code and try it,

00:17:23.320 --> 00:17:31.500
so the data is in place and also we have test cases, so it provides examples that

00:17:31.500 --> 00:17:32.600
we can use.

00:17:33.480 --> 00:17:48.780
So here we are engaging with the use case of a dynamic report generator, so here we create a procedure

00:17:48.780 --> 00:17:57.100
with just a simple SELECT, then the report generator, so we can call the procedure we created before.
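
For reference, a rough sketch of such a report-generator procedure (the name, parameter, and the DBMS_ASSERT guard are assumptions, not the exact code from slide 371):

CREATE OR REPLACE PROCEDURE report_generator (
  p_table_name IN VARCHAR2   -- assumed parameter: which table to report on
) IS
  v_count NUMBER;
BEGIN
  -- Build the SELECT at runtime; DBMS_ASSERT.SIMPLE_SQL_NAME guards the
  -- concatenated identifier against SQL injection.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name)
    INTO v_count;
  DBMS_OUTPUT.PUT_LINE('Rows in ' || p_table_name || ': ' || v_count);
END;
/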

00:17:59.760 --> 00:18:07.100
Then we will try using files, so files can be read or written on the server where the

00:18:07.100 --> 00:18:12.620
Oracle database is running, provided the location is configured as an Oracle directory object.

00:18:12.620 --> 00:18:13.660
So

00:18:13.660 --> 00:18:29.900
so let's try slide 375.

00:19:11.200 --> 00:19:12.600
Last two minutes on the right side.

00:19:21.700 --> 00:19:22.300
So

00:19:22.300 --> 00:19:28.540
common errors now and solutions: invalid path, so ensure that the directory object

00:19:28.540 --> 00:19:34.840
points to a valid path on the server; also write error, so check if you have

00:19:34.840 --> 00:19:40.740
sufficient permission and enough disk space; also end of file, so always handle the end-of-file condition

00:19:41.380 --> 00:19:47.420
when reading files. So the benefits of file handling are: data export, so

00:19:47.420 --> 00:19:54.600
generate logs or export data for reporting; also data import, so we bring data into

00:19:54.600 --> 00:20:00.460
the database for processing; and automation, so automate tasks like backups or data transfers.

00:20:00.580 --> 00:20:05.440
Okay, let me know if you're good, you can move on to the next slide. Go ahead, yeah, no problem.

00:20:24.980 --> 00:20:28.620
Sir Adrian, are you good, sir?

00:20:30.060 --> 00:20:32.940
So you do it, sir?

00:20:34.800 --> 00:20:36.260
Ah okay.

00:21:25.940 --> 00:21:28.480
Yes, sir, yes, sir.

00:21:30.440 --> 00:21:34.300
Right, it will automatically create data.

00:22:18.100 --> 00:22:19.640
Yes, sir.

00:22:24.000 --> 00:22:24.740
Yes, sir.

00:24:14.120 --> 00:24:15.720
Yes, sir.

00:24:15.980 --> 00:24:18.020
Okay, so it is permission or

00:24:25.380 --> 00:24:26.760
last five minutes.

00:24:38.780 --> 00:24:50.380
I think you upload.

00:24:57.400 --> 00:25:00.020
Okay, let's move on to the next slide.

00:25:06.480 --> 00:25:07.160
Oh

00:25:15.780 --> 00:25:23.460
yeah, so let's move on, so benefit

00:25:23.460 --> 00:25:23.820
of

00:25:30.160 --> 00:25:37.440
so here the UTL_FILE package provides procedures; it's a package providing

00:25:37.440 --> 00:25:44.000
procedures for working with files: PUT_LINE to write and GET_LINE to read, and

00:25:44.000 --> 00:25:51.440
FCLOSE to close the file handles. In this case we declare the file handle

00:25:51.440 --> 00:26:01.700
as UTL_FILE.FILE_TYPE, okay, then we open example.txt, we PUT_LINE, then we close, and

00:26:01.700 --> 00:26:10.240
we output what we have written to the file. Also we can append to a file, so in this case we

00:26:10.240 --> 00:26:17.840
open example.txt, we PUT_LINE 'Appending this line', passing 'A' to append, then close,

00:26:17.840 --> 00:26:26.560
then it will say the append was successful. Then here we have an example to read the example

00:26:26.560 --> 00:26:37.040
text file, so we loop over the lines in the file, then output the content, with an exception

00:26:37.040 --> 00:26:42.860
when NO_DATA_FOUND: it will close the file handle and say the file read is complete.
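
For reference, a runnable sketch of the three operations just described (the directory name MY_DIR and the file name example.txt are assumptions):

DECLARE
  v_file UTL_FILE.FILE_TYPE;
  v_line VARCHAR2(32767);
BEGIN
  -- Write: open in 'W' mode, write a line, close.
  v_file := UTL_FILE.FOPEN('MY_DIR', 'example.txt', 'W');
  UTL_FILE.PUT_LINE(v_file, 'Hello from PL/SQL');
  UTL_FILE.FCLOSE(v_file);

  -- Append: 'A' mode adds to the end of the file.
  v_file := UTL_FILE.FOPEN('MY_DIR', 'example.txt', 'A');
  UTL_FILE.PUT_LINE(v_file, 'Appending this line');
  UTL_FILE.FCLOSE(v_file);

  -- Read: loop with GET_LINE until NO_DATA_FOUND marks end of file.
  v_file := UTL_FILE.FOPEN('MY_DIR', 'example.txt', 'R');
  BEGIN
    LOOP
      UTL_FILE.GET_LINE(v_file, v_line);
      DBMS_OUTPUT.PUT_LINE(v_line);
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      UTL_FILE.FCLOSE(v_file);
      DBMS_OUTPUT.PUT_LINE('File read complete.');
  END;
END;
/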

00:26:43.660 --> 00:26:51.100
So next, we log employee salary changes to a file:

00:26:52.500 --> 00:26:59.540
so we have a trigger condition, AFTER UPDATE OF salary, to execute the trigger after any update operation

00:27:00.380 --> 00:27:08.240
on the salary column of the employees table. So the file mode is 'A'; it's used to append data to the log file.

00:27:09.460 --> 00:27:16.240
Then the fields logged are the employee ID (the :NEW employee ID), the new salary, and a timestamp;

00:27:17.780 --> 00:27:24.620
then exception handling: we should ensure that the file

00:27:24.620 --> 00:27:31.640
is closed properly in case of errors. So again, the benefit of that is automated logging,

00:27:31.900 --> 00:27:35.340
so it automates the process of capturing salary changes for audit purposes.
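
For reference, a sketch of the trigger as described (the directory MY_DIR and the file salary_log.txt are assumed names):

CREATE OR REPLACE TRIGGER trg_log_salary_change
AFTER UPDATE OF salary ON employees
FOR EACH ROW
DECLARE
  v_file UTL_FILE.FILE_TYPE;
BEGIN
  -- 'A' mode appends each change to the log file.
  v_file := UTL_FILE.FOPEN('MY_DIR', 'salary_log.txt', 'A');
  UTL_FILE.PUT_LINE(v_file,
    :NEW.employee_id || ',' || :NEW.salary || ',' ||
    TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
  UTL_FILE.FCLOSE(v_file);
EXCEPTION
  WHEN OTHERS THEN
    -- Ensure the file handle is closed even on error.
    IF UTL_FILE.IS_OPEN(v_file) THEN
      UTL_FILE.FCLOSE(v_file);
    END IF;
    RAISE;
END;
/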

00:27:35.340 --> 00:27:42.400
Then traceability ensures that every salary update is recorded with precise details; and centralized

00:27:42.400 --> 00:27:49.500
logs, so logs stored in a single file facilitate easier monitoring and troubleshooting. So now let's continue

00:27:49.500 --> 00:27:57.860
with the DBMS_ALERT package for local notifications. So the DBMS_ALERT package provides a mechanism for

00:27:57.860 --> 00:28:03.480
signaling and receiving notifications across database sessions. It is typically used to notify

00:28:03.480 --> 00:28:11.800
one session of changes or actions in another session. So the steps in creating a signaled alert:

00:28:11.800 --> 00:28:21.040
first, register and signal the alert. So here we REGISTER; the purpose of that is to register

00:28:21.040 --> 00:28:27.040
the session to listen for the specified alert, which in our case is 'salary_update'. So it's

00:28:27.040 --> 00:28:32.560
a prerequisite to listening for the alert. Then SIGNAL; the purpose of that is to send a

00:28:32.560 --> 00:28:39.140
notification for the registered alert. The message parameter passes a custom message, 'employee salary

00:28:39.140 --> 00:28:46.640
updated', along with the signal. Then for listening for the alert we have WAITONE,

00:28:49.260 --> 00:28:54.800
which listens for the specified alert 'salary_update' and retrieves the

00:28:55.160 --> 00:28:58.600
message. So let's try that on the next

00:29:00.300 --> 00:29:04.000
slide. So let's go with slide 380.
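
For reference, a sketch of both sides of the alert (the alert name and message follow the description above; everything else is illustrative):

-- Session 1: register for the alert, then wait for it.
DECLARE
  v_message VARCHAR2(1800);
  v_status  INTEGER;  -- 0 = alert received, 1 = timed out
BEGIN
  DBMS_ALERT.REGISTER('salary_update');
  DBMS_ALERT.WAITONE('salary_update', v_message, v_status, 60);
  IF v_status = 0 THEN
    DBMS_OUTPUT.PUT_LINE('Received: ' || v_message);
  END IF;
END;
/

-- Session 2: signal the alert; the notification is delivered on COMMIT.
BEGIN
  DBMS_ALERT.SIGNAL('salary_update', 'employee salary updated');
  COMMIT;
END;
/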

00:33:42.580 --> 00:33:43.680
So

00:33:43.680 --> 00:33:50.600
so now we try the DBMS_ALERT package. So here we're going to cover

00:34:51.890 --> 00:34:58.970
our key points in task scheduling. It's automatic execution: tasks run automatically without manual

00:34:59.770 --> 00:35:05.850
intervention. Custom intervals: adjust the interval to suit specific needs, for example

00:35:05.850 --> 00:35:12.430
hourly or weekly. Then error handling: jobs that fail repeatedly are marked as broken

00:35:12.430 --> 00:35:19.250
in USER_JOBS, or use DBMS_JOB.BROKEN to flag them until manually fixed or disabled.

00:35:20.130 --> 00:35:28.210
Then job removal: to remove a job, just call DBMS_JOB.REMOVE with the job ID.
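
For reference, a sketch using the classic DBMS_JOB API (the job body is illustrative; newer code would typically use DBMS_SCHEDULER instead):

DECLARE
  v_job BINARY_INTEGER;
BEGIN
  -- Submit a job that runs hourly (SYSDATE + 1/24).
  DBMS_JOB.SUBMIT(
    job       => v_job,
    what      => 'BEGIN NULL; END;',  -- illustrative job body
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');
  COMMIT;

  -- Mark the job broken (TRUE) or fixed (FALSE) manually.
  DBMS_JOB.BROKEN(v_job, TRUE);

  -- Remove the job by its id.
  DBMS_JOB.REMOVE(v_job);
  COMMIT;
END;
/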

00:35:31.710 --> 00:35:48.430
Yes, yes.

00:35:56.030 --> 00:35:57.430
okay

00:36:19.250 --> 00:36:20.950
Yes, but

00:36:25.190 --> 00:36:26.770
Yes, but

00:36:29.610 --> 00:36:30.390
Yes

00:36:50.830 --> 00:36:52.230
Oh

00:36:54.830 --> 00:36:56.230
You're not okay

00:36:56.230 --> 00:36:57.430
Yes, yes

00:37:00.670 --> 00:37:03.350
Okay, yeah, thank you. Thank you

00:37:04.530 --> 00:37:05.070
so

00:37:05.070 --> 00:37:11.070
Yeah, so we can create the table and then, once the command is done, you can copy it and create

00:37:12.730 --> 00:37:16.490
put it somewhere, or I can export it to compile.

00:37:20.090 --> 00:37:23.130
Then for the dynamic directory management

00:37:29.290 --> 00:37:30.370
So we're

00:37:30.370 --> 00:37:33.810
You can create the directory and use it for

00:37:35.110 --> 00:37:37.550
UTL_FILE operations.

00:37:37.550 --> 00:37:41.130
So here we CREATE OR REPLACE DIRECTORY,

00:37:41.650 --> 00:37:43.650
So you can use it

00:37:50.470 --> 00:37:56.070
Then of course we have to ensure that we have the

00:37:56.070 --> 00:37:58.250
permission, and we grant access to

00:37:58.590 --> 00:38:02.330
make the directory accessible to the user.
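
For reference, a short sketch (the path and grantee are assumptions):

-- Run as a privileged user (requires the CREATE ANY DIRECTORY privilege).
CREATE OR REPLACE DIRECTORY my_dir AS '/u01/app/files';

-- Make the directory accessible to the application user.
GRANT READ, WRITE ON DIRECTORY my_dir TO app_user;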

00:38:04.150 --> 00:38:07.630
So next one: collections. Let's dive in.

00:38:08.310 --> 00:38:15.150
Okay, so collections in PL/SQL are data structures that can hold multiple elements of the same data type.

00:38:15.950 --> 00:38:18.190
They enable efficient handling

00:38:18.190 --> 00:38:19.210
of lists and

00:38:21.130 --> 00:38:25.050
records. So there are three types of collections:

00:38:25.050 --> 00:38:27.150
associative arrays,

00:38:27.150 --> 00:38:35.390
formerly known as index-by tables; then nested tables, which can store an unbounded collection of elements,

00:38:36.110 --> 00:38:43.430
suited for scenarios requiring multi-level data; and VARRAYs, or variable-size arrays,

00:38:43.430 --> 00:38:48.330
which have a fixed maximum size, useful for small collections of known size.
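
For reference, a side-by-side sketch of declaring the three collection types (all names are illustrative):

DECLARE
  -- Associative array (index-by table): sparse, keyed lookups.
  TYPE assoc_t  IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  -- Nested table: unbounded, can also be stored in a database column.
  TYPE nested_t IS TABLE OF NUMBER;
  -- VARRAY: ordered, with a fixed upper bound of 5 elements.
  TYPE varray_t IS VARRAY(5) OF NUMBER;

  v_assoc  assoc_t;
  v_nested nested_t := nested_t(10, 20, 30);
  v_varray varray_t := varray_t(1, 2, 3);
BEGIN
  v_assoc(100) := 1;  -- associative arrays need no initialization
  DBMS_OUTPUT.PUT_LINE(v_nested.COUNT || ' / ' || v_varray.COUNT);
END;
/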

00:38:48.330 --> 00:38:55.550
Yeah, so

00:39:00.230 --> 00:39:05.450
Again, it's a collection: a data structure, and it holds

00:39:05.450 --> 00:39:08.210
multiple values of the same data type

00:39:08.210 --> 00:39:11.810
So, to make it concrete, let's try slide 385.

00:39:11.810 --> 00:39:15.130
I put more data there

00:39:16.330 --> 00:39:21.150
So actually it's more information, but yeah, you can try the example

00:39:23.630 --> 00:39:31.230
So in Java, a HashSet can quickly be tied to a nested table, and the Map interface

00:39:31.870 --> 00:39:33.590
to associative arrays.

00:39:34.190 --> 00:39:37.490
And I also give you an exact analogy

00:39:38.670 --> 00:39:39.590
So

00:39:41.170 --> 00:39:44.150
I have more samples

00:39:44.990 --> 00:39:51.130
in the next, um, slides, but yeah, let's take 10 to 15 minutes to try it,

00:39:51.150 --> 00:39:53.230
slide 385.

00:39:54.510 --> 00:40:00.690
So here's the hierarchy, the Java collection framework hierarchy, as an Iterable interface analogy.

00:40:01.550 --> 00:40:07.490
So all collection classes implement the Iterable interface, enabling iteration over elements.

00:40:08.110 --> 00:40:09.530
then collection

00:40:09.530 --> 00:40:11.930
Collections can be iterated using

00:40:13.230 --> 00:40:15.110
loops, like the FOR loop.

00:40:15.110 --> 00:40:15.850
Then

00:40:15.850 --> 00:40:23.450
Which is

00:40:26.310 --> 00:40:27.890
So other arrays

00:40:34.050 --> 00:40:37.710
It's a similar

00:40:45.110 --> 00:40:45.610
Um

00:40:50.870 --> 00:40:57.930
What are the benefits of using PL/SQL collections? Efficient data processing:

00:40:57.930 --> 00:41:05.750
it reduces context switching between SQL and PL/SQL. Also dynamic manipulation, because it handles unknown or dynamic

00:41:06.530 --> 00:41:08.110
data structures effectively.

00:41:08.110 --> 00:41:12.310
Um, bulk operations as well: perform bulk fetches using

00:41:12.600 --> 00:41:19.940
BULK COLLECT and FORALL, which we will cover later on. Also it provides code organization,

00:41:20.600 --> 00:41:24.840
because it encapsulates complex logic in reusable data structures.

00:41:25.480 --> 00:41:31.560
Also, it enhances performance: in-memory operations are faster compared to database-level processing.

00:41:32.440 --> 00:41:37.440
So what are the best practices in using collections? Use the right collection type:

00:41:37.960 --> 00:41:42.200
use associative arrays for sparse data and lookup tables,

00:41:42.600 --> 00:41:46.220
use nested tables for bulk operations,

00:41:46.220 --> 00:41:49.880
use VARRAYs for ordered, fixed-size data sets.

00:41:50.840 --> 00:41:56.680
Then, optimize for performance: use BULK COLLECT and LIMIT for fetching large data sets.

00:41:57.600 --> 00:42:04.520
Manage memory by trimming unused elements, and also avoid complexity by keeping collection operations simple

00:42:04.520 --> 00:42:06.860
and predictable, avoiding performance

00:42:06.860 --> 00:42:08.180
degradation.

00:42:09.100 --> 00:42:15.260
And again, use exception handlers: always handle exceptions like NO_DATA_FOUND or VALUE_ERROR

00:42:15.260 --> 00:42:17.740
when working with

00:42:17.740 --> 00:42:18.940
collections.

00:42:20.600 --> 00:42:26.620
PL/SQL collections are powerful tools for managing data efficiently within Oracle databases.

00:42:27.500 --> 00:42:32.400
So by leveraging these structures, developers can implement complex data operations,

00:42:33.180 --> 00:42:36.360
optimize performance, and maintain clean and modular code.

00:42:37.200 --> 00:42:39.140
Understanding the hierarchy

00:42:39.900 --> 00:42:45.740
is very important, and the usage of collections is essential for building robust PL/SQL.

00:42:45.740 --> 00:42:48.720
Let me know if you're good to proceed.

00:43:02.540 --> 00:43:03.440
Type

00:43:03.440 --> 00:43:15.180
So this is the example of the TYPE, so

00:43:22.040 --> 00:43:27.260
Then here are the associative arrays, so

00:43:32.540 --> 00:43:33.720
Nested table

00:43:39.920 --> 00:43:43.980
So record variables: a record variable is a custom composite data

00:43:44.640 --> 00:43:47.900
type that can group related fields of various data types,

00:43:48.840 --> 00:43:52.680
similar to a row in a table. So for example,

00:43:53.260 --> 00:43:58.000
in this case, um, the advantage of that is the ease of use: it simplifies

00:43:58.880 --> 00:44:05.860
having multiple fields related to an entity, allowing fetching the entire row into a single variable,

00:44:05.860 --> 00:44:07.240
which we

00:44:07.240 --> 00:44:08.980
I think discussed yesterday

00:44:08.980 --> 00:44:12.560
Also, the flexibility, because it can dynamically adapt to

00:44:13.100 --> 00:44:16.640
changes in the table structure when %ROWTYPE is used,

00:44:17.260 --> 00:44:21.440
and custom records can be defined to handle data that spans multiple tables.

00:44:22.080 --> 00:44:25.400
It also improves readability: grouping related

00:44:26.120 --> 00:44:32.740
fields into a logical structure makes code easier to understand; for example, in this case you have the employees table,

00:44:33.320 --> 00:44:35.500
and the output is displayed.
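
For reference, a minimal record-variable sketch (the employees table and id 100 are illustrative):

DECLARE
  -- %ROWTYPE adapts automatically if the table structure changes.
  v_emp employees%ROWTYPE;
BEGIN
  SELECT * INTO v_emp FROM employees WHERE employee_id = 100;
  DBMS_OUTPUT.PUT_LINE(v_emp.last_name || ' earns ' || v_emp.salary);
END;
/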

00:44:35.500 --> 00:44:40.280
So associative arrays, formerly known as index-by tables,

00:44:41.060 --> 00:44:43.400
where data is stored using numeric

00:44:44.080 --> 00:44:46.360
or string-based indexes.

00:44:47.280 --> 00:44:49.240
So, on features:

00:44:50.180 --> 00:44:55.480
index type, of course, is numeric BINARY_INTEGER or string-based VARCHAR2.

00:44:56.440 --> 00:44:59.880
They have dynamic allocation, so no predefined size is required.

00:45:00.880 --> 00:45:04.960
Since they can be sparse, they support non-contiguous indexes,

00:45:05.480 --> 00:45:07.860
for example, skipping indexes.

00:45:07.860 --> 00:45:13.580
Then efficient lookups: best suited for lookups by key. So here we have the declaration:

00:45:13.580 --> 00:45:19.500
so we declare salaries as an associative array type of NUMBER with numeric indexes.

00:45:20.160 --> 00:45:22.300
Then assignments: values are assigned

00:45:22.920 --> 00:45:26.860
to indexes 1 and 5, demonstrating the ability to skip indexes,

00:45:28.000 --> 00:45:31.260
and dynamic behavior: no need to pre-define the size of the array.

00:45:32.340 --> 00:45:35.420
Output: values are retrieved using the respective

00:45:36.280 --> 00:45:38.300
indexes and displayed using DBMS

00:45:39.200 --> 00:45:39.840
_OUTPUT

00:45:39.840 --> 00:45:41.080
.PUT_LINE.

00:45:41.080 --> 00:45:46.640
So our real-world use cases: caching data, so frequently accessed data is served from the cache

00:45:47.660 --> 00:45:51.600
for performance; temporary storage, for scenarios where

00:45:52.740 --> 00:45:58.640
intermediate values need not be persisted; and lookup tables, used as in-memory tables to keep

00:45:59.240 --> 00:46:01.320
key-value pairs.
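
For reference, a minimal sketch matching the salaries walkthrough above (the values are illustrative):

DECLARE
  TYPE salaries_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  salaries salaries_t;
BEGIN
  -- Sparse assignment: indexes 1 and 5, skipping 2 through 4.
  salaries(1) := 5000;
  salaries(5) := 7500;
  DBMS_OUTPUT.PUT_LINE('salaries(1) = ' || salaries(1));
  DBMS_OUTPUT.PUT_LINE('salaries(5) = ' || salaries(5));
END;
/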

00:46:02.380 --> 00:46:08.400
So nested tables: an unbounded collection stored in the database or manipulated in memory.

00:46:09.620 --> 00:46:14.400
So indexing: again, it's indexed by integer, basically starting from one.

00:46:15.160 --> 00:46:18.960
Dense and sparse: initially dense, but deletion can create

00:46:19.560 --> 00:46:20.960
gaps, so

00:46:20.960 --> 00:46:24.000
it becomes what's called a sparse collection. So storage:

00:46:24.560 --> 00:46:29.260
can be stored in a database column or used in memory in PL/SQL.

00:46:30.160 --> 00:46:35.920
One of the features as well for nested tables is dynamic expansion, so new elements can be added

00:46:36.200 --> 00:46:43.800
at runtime. Here we have the declaration: num_table is declared as a nested TABLE OF NUMBER type.

00:46:44.440 --> 00:46:46.440
So num_table is initialized

00:46:46.960 --> 00:46:51.960
with values one to three, and we have dynamic expansion: the EXTEND

00:46:53.140 --> 00:46:56.560
method adds a new slot to the table.

00:46:57.780 --> 00:47:03.400
Then sparse behavior: the DELETE method removes the value at a specific index,

00:47:04.100 --> 00:47:05.220
creating a gap

00:47:06.260 --> 00:47:12.060
So attempting to access the deleted index results in the NO_DATA_FOUND error.
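
For reference, a runnable sketch of that behavior (the num_table name follows the walkthrough; values are illustrative):

DECLARE
  TYPE num_table_t IS TABLE OF NUMBER;
  num_table num_table_t := num_table_t(1, 2, 3);  -- initialized dense
BEGIN
  num_table.EXTEND;       -- dynamic expansion: adds a slot at index 4
  num_table(4) := 4;
  num_table.DELETE(2);    -- removes index 2, creating a gap
  BEGIN
    DBMS_OUTPUT.PUT_LINE(num_table(2));  -- the deleted element
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('Index 2 was deleted.');
  END;
END;
/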

00:47:19.680 --> 00:47:23.880
So here's the sample of the VARRAYs. So VARRAYs

00:47:24.580 --> 00:47:30.040
are variable-size array collections in PL/SQL with a fixed upper bound.

00:47:30.620 --> 00:47:34.360
Unlike nested tables or associative arrays, they must be dense,

00:47:35.240 --> 00:47:38.760
and elements are stored in a specific order.

00:47:39.480 --> 00:47:45.560
So key features now: bounded size, so the maximum size is specified during creation; it cannot exceed

00:47:46.240 --> 00:47:47.620
this limit

00:47:48.200 --> 00:47:51.020
Then dense structure, so all elements are contiguous,

00:47:51.540 --> 00:47:54.540
meaning there are no gaps or sparsity.

00:47:55.860 --> 00:48:01.020
Ordered storage: elements retain order and are accessed by integer subscript.

00:48:01.980 --> 00:48:08.700
Ideal usage: they are useful when the number of elements is known and does not vary significantly.

00:48:10.280 --> 00:48:18.060
So here we declare varray_type: the type is defined with a maximum size of five.

00:48:18.860 --> 00:48:20.900
So fruits is

00:48:20.900 --> 00:48:24.840
initialized with three elements: apple, banana, and cherry.

00:48:25.500 --> 00:48:29.580
So elements are accessed using the index starting from one

00:48:30.540 --> 00:48:36.000
So dynamic expansion: we can call the EXTEND method to add a slot for a new element,

00:48:36.540 --> 00:48:38.740
up to the maximum size.

00:48:38.740 --> 00:48:40.280
An exception is raised if the maximum

00:48:40.280 --> 00:48:41.380
size is exceeded.

00:48:41.900 --> 00:48:44.620
So for iteration, we can use a FOR loop

00:48:45.200 --> 00:48:46.560
with the fruits

00:48:46.560 --> 00:48:49.980
.COUNT bound to iterate through all elements.
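
For reference, a runnable VARRAY sketch matching the walkthrough (element values are illustrative):

DECLARE
  TYPE varray_type IS VARRAY(5) OF VARCHAR2(20);
  fruits varray_type := varray_type('apple', 'banana', 'cherry');
BEGIN
  fruits.EXTEND;        -- add a slot (now 4 of the maximum 5)
  fruits(4) := 'mango';
  -- VARRAYs are dense, so 1 .. fruits.COUNT covers every element.
  FOR i IN 1 .. fruits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(i || ': ' || fruits(i));
  END LOOP;
  -- Extending past the declared limit of 5 would raise
  -- the SUBSCRIPT_OUTSIDE_LIMIT exception.
END;
/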

00:48:52.000 --> 00:48:57.360
Um, on use cases: fixed-length data storage, so it suits small,

00:48:57.360 --> 00:49:05.980
fixed lists like categories or short enumerations; data transfer, so passing compact collections

00:49:06.960 --> 00:49:11.540
from PL/SQL blocks out to applications; and also predefined groups,

00:49:11.540 --> 00:49:13.260
used for cases where

00:49:13.260 --> 00:49:19.800
the number of elements is predictable, such as a list of weekdays or months.

00:49:20.640 --> 00:49:25.420
So the advantages: compact design, ideal for small data sets;

00:49:26.220 --> 00:49:31.880
then order preservation, so we ensure consistent ordering of elements;

00:49:31.880 --> 00:49:35.900
then ease of use: it's very simple and straightforward for

00:49:36.880 --> 00:49:42.220
predictable data sizes. Limitations: fixed size, it cannot dynamically resize

00:49:42.780 --> 00:49:48.400
beyond the declared limit; and dense only, it does not allow gaps or sparse data.

00:49:49.340 --> 00:49:52.020
So let's have an example

00:49:52.020 --> 00:49:52.700
Um

00:49:54.420 --> 00:49:57.620
So lastly, the associative arrays,

00:49:58.620 --> 00:49:59.980
also known as index-by tables:

00:50:00.820 --> 00:50:03.300
powerful PL/SQL structures used

00:50:03.300 --> 00:50:04.960
for temporary,

00:50:05.700 --> 00:50:12.580
in-memory data processing. They allow you to store key-value pairs with numeric or string-based keys,

00:50:13.200 --> 00:50:16.360
Making them ideal for lookup tables or caching operations

00:50:17.470 --> 00:50:22.510
Some features now: dynamic size, it automatically grows as elements are added;

00:50:23.510 --> 00:50:28.550
flexible indexing, with support for numeric BINARY_INTEGER or string-based

00:50:28.550 --> 00:50:30.490
VARCHAR2

00:50:30.490 --> 00:50:31.830
indexes;

00:50:31.830 --> 00:50:34.190
then sparse data, so it allows gaps

00:50:34.770 --> 00:50:36.790
in indexes

00:50:36.790 --> 00:50:41.230
making it suitable for data sets with non-contiguous keys.

00:50:41.230 --> 00:50:45.550
Then not persistent: data is stored only in memory and does not persist across

00:50:46.110 --> 00:50:46.650
sessions.

00:50:47.830 --> 00:50:54.890
So here we declare: um, price_table is defined as an associative array with keys

00:50:54.890 --> 00:50:57.730
of VARCHAR2 and values of NUMBER.

00:50:58.510 --> 00:51:04.830
So we initialize the table. Um, the table is dynamically populated with keys apple, banana, and

00:51:06.610 --> 00:51:08.670
cherry, for example.

00:51:09.310 --> 00:51:14.770
Then we can access the data using the key to fetch the associated value, for example the price

00:51:14.770 --> 00:51:16.810
of apple. So advantages:

00:51:17.910 --> 00:51:21.990
flexible indexing, efficient memory use, and dynamic sizing.

00:51:22.530 --> 00:51:26.610
And limitations: not persistent, data is not stored in the database;

00:51:27.710 --> 00:51:31.730
indexing constraints, only numeric and string indexes are supported;

00:51:32.970 --> 00:51:36.770
limited size, restricted by available memory. So let's try the

00:51:39.070 --> 00:51:41.790
But before that we

00:51:41.790 --> 00:51:45.710
know that we can have a direct assignment, like this.

00:51:46.330 --> 00:51:52.150
Then using a loop, we can do assignments based on the index as we loop, as in the sketch below.
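
For reference, a runnable sketch of the direct assignments, the key loop, and the missing-key exception (names and prices are illustrative):

DECLARE
  TYPE price_table_t IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
  price_table price_table_t;
  v_key VARCHAR2(20);
BEGIN
  -- Direct assignment with string keys.
  price_table('apple')  := 1.50;
  price_table('banana') := 0.75;
  price_table('cherry') := 3.25;

  -- Walk the keys with FIRST/NEXT.
  v_key := price_table.FIRST;
  WHILE v_key IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(v_key || ' = ' || price_table(v_key));
    v_key := price_table.NEXT(v_key);
  END LOOP;

  -- Accessing a non-existing key raises NO_DATA_FOUND.
  BEGIN
    DBMS_OUTPUT.PUT_LINE(price_table('mango'));
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('No such key.');
  END;
END;
/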

00:51:53.530 --> 00:51:58.910
So let's try accessing non-existing elements, um; please do

00:51:59.710 --> 00:52:06.310
slide 396, please do the 396. Thank you. Then, simply, a non-existing element, um, really raises an exception.

00:52:08.090 --> 00:52:10.110
I'll give you five minutes

00:52:11.790 --> 00:52:13.830
So use the test case three

00:52:18.370 --> 00:52:18.970
Okay

00:52:28.070 --> 00:52:29.270
Slide 397

00:52:34.310 --> 00:52:37.290
Allowing rows of similar data types to be looped together
