WEBVTT

00:00:02.520 --> 00:00:03.040
I

00:00:03.040 --> 00:00:07.320
Didn't get far, no worries. You want to kind of talk me through

00:00:10.200 --> 00:00:13.020
How are you going to accomplish this

00:00:14.460 --> 00:00:20.200
Yeah, so I just started off with converting the CSVs to JSON and

00:00:21.340 --> 00:00:27.160
Regardless of whether it was a failure or success (since you would get a failure if it was already a JSON), I had them go

00:00:28.320 --> 00:00:36.960
Through another conversion where it just made the JSON files pretty and then basically merged those into one

00:00:38.080 --> 00:00:38.740
and

00:00:38.740 --> 00:00:45.080
Then the part where it was ExecuteScript was where I just... I don't really have experience doing Python,

00:00:45.080 --> 00:00:47.320
So I wasn't sure how to go about that
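
(For reference, since ExecuteScript came up: a minimal sketch of what a Python, strictly Jython, ExecuteScript body for that pretty-printing step could look like. The session, REL_SUCCESS, and REL_FAILURE names are bound by the processor itself; the rest is illustrative, not the class solution.)

    import json
    from org.apache.commons.io import IOUtils
    from java.nio.charset import StandardCharsets
    from org.apache.nifi.processor.io import StreamCallback

    # Rewrites the flow file content as indented ("pretty") JSON.
    class PrettyPrintJson(StreamCallback):
        def process(self, inputStream, outputStream):
            text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
            pretty = json.dumps(json.loads(text), indent=2)
            outputStream.write(pretty.encode('utf-8'))

    flowFile = session.get()
    if flowFile is not None:
        try:
            flowFile = session.write(flowFile, PrettyPrintJson())
            session.transfer(flowFile, REL_SUCCESS)
        except Exception:
            session.transfer(flowFile, REL_FAILURE)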

00:00:48.240 --> 00:00:50.360
Okay. Okay. No worries

00:00:51.560 --> 00:00:57.300
Yeah, and like I said, there's a couple of different ways, you know, you can handle this.

00:00:57.320 --> 00:01:02.520
You could have, you know, used ExecuteScript or SQL.

00:01:03.500 --> 00:01:05.360
I'd be careful

00:01:05.360 --> 00:01:14.440
picking up files and moving files, you know, through a processor that may not be needed, just because, you know,

00:01:14.440 --> 00:01:19.980
To get the best performance out of the system you want that processor to

00:01:20.700 --> 00:01:25.020
you know, work on data it knows about, or do that single

00:01:26.940 --> 00:01:32.260
task that it wants to do. So for the files that you know may fail,

00:01:34.320 --> 00:01:38.980
send those a different direction; have another processor handle things like that.
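
(One way to do that fork up front, as a sketch: a RouteOnAttribute processor with one Expression Language test per branch. The property names below are illustrative; each one becomes its own relationship to route on.)

    is_json    ${filename:endsWith('.json')}
    is_csv     ${filename:endsWith('.csv')}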

00:01:38.980 --> 00:01:42.980
But overall, you know, great job. And again, you know,

00:01:42.980 --> 00:01:49.220
we have a mixed audience of different skill sets and those types of things. So I didn't expect

00:01:50.040 --> 00:01:55.080
everyone to get finished. I didn't actually expect anyone really to get as far as some folks did.

00:01:55.100 --> 00:01:59.500
But, you know, being able to explain this, explain how you're building out your flow,

00:02:00.180 --> 00:02:06.300
It's critical. Um, you know, just keep in mind also, you know name your processor something a little bit more

00:02:07.060 --> 00:02:08.520
readable when you can

00:02:08.520 --> 00:02:10.740
Add some labels those types of things

00:02:11.480 --> 00:02:16.780
You know, if you were to set this up and have a very extensive data flow,

00:02:17.440 --> 00:02:20.900
it would just help visually to, you know, do that:

00:02:21.960 --> 00:02:27.040
recognize where those are. And we'll go into some of the other visual aspects of the

00:02:27.040 --> 00:02:32.300
UI because, you know, when we get to some of the security settings. But no, great job.

00:02:32.880 --> 00:02:38.440
Thanks for walking me through it and any questions that you know, you have so far on any of the

00:02:39.060 --> 00:02:41.420
components in the canvas or any of that stuff.

00:02:44.880 --> 00:02:46.740
All right, who's next

00:02:49.960 --> 00:02:51.060
This is really nice

00:02:53.660 --> 00:03:01.520
Thank you. Morning, Tyler. You want to kind of walk me through your flow, what you're thinking, how far you got,

00:03:01.520 --> 00:03:03.580
You know any questions?

00:03:04.580 --> 00:03:12.680
Yeah, just ingest the data

00:03:17.560 --> 00:03:19.000
To the conversion

00:03:19.000 --> 00:03:25.900
I was just trying to work with this QueryRecord.

00:03:27.340 --> 00:03:33.060
So this QueryRecord is doing the aggregation; I'm having some issues with the date column right now,

00:03:33.940 --> 00:03:34.540
and

00:03:35.220 --> 00:03:37.580
then this QueryRecord is

00:03:39.260 --> 00:03:43.500
pulling out the... right now I just have a wind speed warning, so that

00:03:43.500 --> 00:03:44.340
I

00:03:45.000 --> 00:03:51.700
didn't really flesh out these two paths, but these would be going to a merge for that. I

00:03:51.700 --> 00:03:56.180
Guess this path would be going to an aggregation report and then this path out

00:03:57.060 --> 00:04:02.980
I'm taking, it's building all of those records and then binning them by either day. Nice, nice.
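
(For reference, that wind speed branch in a QueryRecord is just a dynamic property whose value is SQL over the incoming records; FLOWFILE is the table name the processor expects, and the column name here is hypothetical.)

    warnings    SELECT * FROM FLOWFILE WHERE wind_speed > 40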

00:04:04.340 --> 00:04:04.980
so

00:04:04.980 --> 00:04:09.820
You know, another instance where we're using a QueryRecord.

00:04:10.540 --> 00:04:12.680
And you're using EvaluateJsonPath.

00:04:13.200 --> 00:04:18.740
You know there's tons of different processors. I really like your layout

00:04:18.740 --> 00:04:21.420
You know, it's nice clean

00:04:21.420 --> 00:04:28.940
You can kind of follow the path in the life of a packet of data. So, you know overall, you know, great job

00:04:28.940 --> 00:04:31.240
Any questions I can answer?

00:04:32.540 --> 00:04:34.800
any issues or anything else I

00:04:34.800 --> 00:04:36.760
Didn't really have many issues

00:04:37.680 --> 00:04:45.180
The only issue I had was with the date, but I think it's just because in the schema it's pulling it in as a

00:04:45.180 --> 00:04:47.540
String string. Yeah

00:04:48.220 --> 00:04:50.560
Yeah, yeah, they can be you know

00:04:52.580 --> 00:04:56.640
We've seen that earlier as well, yesterday; they can be a little finicky, but

00:04:57.700 --> 00:05:01.880
Yeah, do your best. If you work on it and you have any other questions,

00:05:01.880 --> 00:05:04.200
Let me know I can

00:05:04.760 --> 00:05:06.320
if you want to

00:05:06.320 --> 00:05:12.240
take a look at that when we're on break as well, to see if there's a way we can quickly fix that date.
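
(One likely shape of that date fix, assuming an Avro schema behind the record reader: either leave the field a string and parse it downstream, or declare it with Avro's date logical type and set the reader's Date Format property to match the incoming text. Illustrative, not what was done in class.)

    {"name": "date", "type": {"type": "int", "logicalType": "date"}}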

00:05:16.280 --> 00:05:17.720
Good morning, I

00:05:20.760 --> 00:05:21.920
Can I can

00:05:24.660 --> 00:05:27.160
Okay, so we didn't get to talk much yesterday

00:05:37.340 --> 00:05:40.980
Converting the CSV to JSON, two different

00:05:41.520 --> 00:05:46.740
GetFiles: whether it was CSV or JSON, pick it up.

00:05:47.880 --> 00:05:51.720
But then started into the flow at a different point

00:05:54.880 --> 00:05:58.000
My question is: is there a processor that

00:05:59.060 --> 00:06:06.300
allows you to launch an external application? Like, if I already have a Java application, a jar file, sitting on the server,

00:06:06.860 --> 00:06:10.960
Can I launch that from here pass it the input file and have that do the work?

00:06:11.680 --> 00:06:18.960
You can. And so, say you have, like, a jar file that has all the logic built in:

00:06:22.800 --> 00:06:28.160
you know, pass the flow file to that and, you know, get the results.

00:06:36.320 --> 00:06:37.480
Processing here

00:06:47.380 --> 00:06:55.280
So you can execute a process, and that could be, like, a shell script.

00:06:56.520 --> 00:06:58.720
You know we use this for

00:06:59.220 --> 00:07:03.140
executing, like, Linux commands, you know, those types of things,

00:07:03.140 --> 00:07:04.880
The arguments

00:07:07.800 --> 00:07:13.060
The thing is, when you take a piece of data outside of NiFi,

00:07:13.880 --> 00:07:17.940
And any kind of processing that happens there you're going to lose

00:07:18.480 --> 00:07:20.720
You know that data governance part

00:07:20.720 --> 00:07:24.700
so we're going to see in the lineage that you sent it to the process and

00:07:25.400 --> 00:07:28.240
We will see in the lineage when we get the data back

00:07:28.240 --> 00:07:35.100
But any processing or anything that happens outside of that, you know, we will not be able to

00:07:35.100 --> 00:07:40.720
you know, capture that, you know, in the provenance events. Now, with that being said, right,

00:07:40.760 --> 00:07:45.040
You know, sometimes we have these external applications that you know do

00:07:45.700 --> 00:07:49.440
you know, things like this very well, and we call upon it, right, we use it.

00:07:50.020 --> 00:07:54.880
There's ways, and, you know, if you don't want to rewrite your whole... well,

00:07:54.880 --> 00:08:00.560
it's already Java, so I bet converting it to a processor would be pretty easy.

00:08:02.600 --> 00:08:06.000
But, you know, there's ways as well to,

00:08:06.900 --> 00:08:14.160
you know, keep the NiFi ecosystem alive in your separate process, you know, being able to,

00:08:15.260 --> 00:08:16.980
You know save those attributes

00:08:17.960 --> 00:08:20.600
And then there's callbacks to NiFi you can

00:08:20.870 --> 00:08:26.850
Use to say, you know, here's the attributes associated with this flow file, you know those types of things

00:08:26.850 --> 00:08:30.750
But yeah, you do have a processor to execute a process.

00:08:31.510 --> 00:08:37.910
And if that's, you know, executing java -jar and your parameters, you know, have that.
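
(A note on the mechanics: ExecuteProcess takes no incoming flow files, so for "pass it the input file" the usual choice is ExecuteStreamCommand, which pipes the flow file content to the command's stdin and replaces it with whatever comes back on stdout. A hypothetical Python stand-in for the jar, just to show the contract; a java -jar invocation would follow the same stdin/stdout shape.)

    #!/usr/bin/env python3
    # Stand-in for the external application: ExecuteStreamCommand sends the
    # flow file content to stdin and takes this process's stdout as the new
    # content. Assumes the content is a JSON array of records.
    import sys
    import json

    records = json.load(sys.stdin)
    for record in records:
        record["processed"] = True   # whatever the external logic would do
    json.dump(records, sys.stdout)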

00:08:41.470 --> 00:08:48.410
Yep, and like I said, you know, I would love to be able to go through each individual processor

00:08:49.410 --> 00:08:55.650
and do that. You know, that's a different way of handling this; you can execute, you know, a script even,

00:08:56.530 --> 00:09:00.930
and, you know, do some of that stuff. But probably in your case,

00:09:00.930 --> 00:09:04.010
you would just use an ExecuteProcess, execute that Java.

00:09:04.610 --> 00:09:11.590
You know, just make sure that that external application gets shipped with them with NAFA

00:09:11.590 --> 00:09:14.890
just because, you know, you deploy this and cluster it;

00:09:15.770 --> 00:09:20.450
You'll need to make sure you have access to that binary, you know across all the machines

00:09:21.170 --> 00:09:28.130
But great question. And actually, you know, that's why we go through some of these: because nobody's even asked that question yet.

00:09:30.490 --> 00:09:33.870
If you have any other questions or issues, let me know and I'm happy to take a look.

00:09:35.090 --> 00:09:37.630
It was great hearing from you today. All right, Ben.

00:09:42.110 --> 00:09:42.990
Good morning

00:09:45.690 --> 00:09:50.270
A little bit. Yeah, it's looking a little better, a little less spider-webby.

00:09:57.190 --> 00:10:05.230
Okay, you just kind of walk me through your flow how far you got and what issues you have

00:10:16.070 --> 00:10:19.330
What we did before which is you know, get the source file

00:10:23.870 --> 00:10:30.570
If it is a JSON, put it in my work folder; if it's a CSV, do the conversion.

00:10:32.170 --> 00:10:33.950
You know, there's failures

00:10:35.690 --> 00:10:38.710
funnel, and then the workflow in the upper area

00:10:40.370 --> 00:10:43.610
Is that's the merge of the

00:10:43.610 --> 00:10:48.050
File but also backing up of the original

00:10:50.830 --> 00:10:59.430
The other stuff like making an attribute all those things my brain doesn't work in that space, you know, it's an alien to me

00:10:59.430 --> 00:11:07.690
No, no worries, but you did something that I would highly recommend and you know when you were going through

00:11:08.430 --> 00:11:11.750
Designing your data flow started laying these things out

00:11:11.750 --> 00:11:15.770
One of the things I noticed is you would put your original file back

00:11:15.770 --> 00:11:21.530
Or you would you know take a copy of the file and save it somewhere else those types of things

00:11:21.530 --> 00:11:28.310
You know when you're building your data flow and in some of those precautions and in some of that

00:11:28.310 --> 00:11:35.650
you know, safety mechanisms to ensure that, you know, if you are writing a file to a database, for instance,

00:11:35.650 --> 00:11:41.090
You know, you're writing the values you may run it to a file first just so you can see, you know

00:11:41.090 --> 00:11:45.790
does this look exactly like I want it to go in? And, you know,

00:11:45.790 --> 00:11:52.070
the beauty of a processor is you can branch off, you know, hundreds of successes, right?

00:11:52.350 --> 00:11:57.610
And so if it's a success, you can send that same success to another file.

00:11:57.610 --> 00:12:03.510
And so when you're building your flows, building in some of those safety mechanisms, I feel like it really helps.

00:12:03.510 --> 00:12:08.630
And then when you're done you've got it tested you're ready to start shipping this out

00:12:09.410 --> 00:12:13.350
You know go in and look and get rid of the redundancy, you know go in

00:12:13.950 --> 00:12:20.730
You know get rid of some of those safety mechanisms. So, you know that way the flow can perform as best as possible

00:12:21.490 --> 00:12:22.010
but

00:12:33.590 --> 00:12:34.330
Oh

00:12:47.450 --> 00:12:57.950
Well, and I think it's funny how it's being used

00:12:58.810 --> 00:13:02.610
so that kind of tells me who's used this before and

00:13:02.610 --> 00:13:08.210
you know, who's experimented, things like that. So, no, great job on putting that in.

00:13:10.030 --> 00:13:15.150
I like how... the only other thing I would do is just, you know, again, back to the labeling,

00:13:15.150 --> 00:13:17.470
beautification and make it easier to read

00:13:17.470 --> 00:13:22.530
But usually that's at the end and when you're ready to start shipping that data flow out, you know

00:13:22.530 --> 00:13:24.910
You start doing those types of things. So

00:13:24.910 --> 00:13:28.650
No, this looks great. I get what you're trying to get at and

00:13:28.650 --> 00:13:31.050
and I understand where you're going with your flow.

00:13:31.050 --> 00:13:38.370
If you, you know, don't have the skill set to write code, for instance, you know, that's fine,

00:13:38.370 --> 00:13:41.430
so long as you get close.

00:13:43.370 --> 00:13:46.130
Thank you, let's look at Amanda

00:13:53.030 --> 00:13:56.490
Oh, yeah, she did not want to do training today

00:14:03.780 --> 00:14:04.580
I

00:14:06.220 --> 00:14:09.760
Actually, that's who I was pulling up. Okay, Ali or Alyssa

00:14:15.020 --> 00:14:15.820
I

00:14:15.820 --> 00:14:21.280
Didn't get too much further. I was just basically looking at the

00:14:22.600 --> 00:14:27.520
processors and what they did. I mean, I did a little improvement, like,

00:14:27.520 --> 00:14:33.180
all my JSON now looks the same, because I added a FlattenJson.

00:14:35.260 --> 00:14:39.240
Oh, yeah, but basically I'm just picking up the files

00:14:40.380 --> 00:14:46.080
And then routing by type and then I didn't

00:14:48.320 --> 00:14:50.920
But I believe it was so much

00:14:51.860 --> 00:14:53.920
Can you open up your

00:14:57.180 --> 00:15:02.200
Okay, no worries

00:15:04.100 --> 00:15:08.080
You did the one thing, though, that, you know,

00:15:08.080 --> 00:15:15.120
I mentioned: you changed the Destination from flow file content to flow file attribute.

00:15:15.120 --> 00:15:17.720
So that's what you needed to do

00:15:17.720 --> 00:15:21.400
The reason being is, you can, with this processor... and

00:15:22.500 --> 00:15:27.560
Some of these things even being a committer, you know, some of these things, you know

00:15:27.560 --> 00:15:33.140
confuse me, why we're doing it this way. But, you know, if you had flow file content,

00:15:33.140 --> 00:15:38.460
It's only gonna let you extract, you know one element out of that JSON document

00:15:38.460 --> 00:15:46.160
But if you do flow file attribute, you can go through the whole JSON tree and start extracting,

00:15:46.160 --> 00:15:52.920
you know, every value out of there, and then having that as an attribute. So I'm glad you changed that.
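
(For reference: with Destination set to flowfile-attribute, EvaluateJsonPath takes one dynamic property per value you want to pull, and each property name becomes an attribute name. The names and paths below are just illustrative.)

    humidity         $.humidity
    precipitation    $.precipitation
    station          $.station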

00:15:54.000 --> 00:15:58.960
You know, I think, you know, if you would have had a chance to go further,

00:15:58.960 --> 00:16:05.580
you're really close, because, you know, once you start looking at the EvaluateJson, and you can do,

00:16:05.580 --> 00:16:07.200
You know some of the same things with CSVs

00:16:08.020 --> 00:16:12.620
you know, I think your merging and other things later would have been a lot easier.

00:16:13.460 --> 00:16:13.940
so

00:16:15.360 --> 00:16:16.940
Any questions, concerns

00:16:19.000 --> 00:16:20.860
that I can help you with immediately?

00:16:23.400 --> 00:16:24.600
No, okay

00:16:25.520 --> 00:16:27.540
Is there anyone else in the room?

00:16:29.000 --> 00:16:32.420
'Cause I know you guys are sharing a room. Is there anyone else in the room?

00:16:34.480 --> 00:16:36.220
Let's look at Brett.

00:16:39.380 --> 00:16:43.480
Okay, Brett, how far... Looks nice. Looks real nice.

00:16:44.920 --> 00:16:53.460
Good morning. Thank you. So I switched... I didn't get much further than I did yesterday when I was at this, but I

00:16:53.460 --> 00:16:58.540
switched, like, halfway through the way I was doing it. I was trying to use, I think it was, SplitJson. Mm-hmm.

00:16:59.880 --> 00:17:00.220
I

00:17:02.120 --> 00:17:03.220
didn't I'm I

00:17:03.220 --> 00:17:08.460
wasn't getting... I was able to break things up into different files, because I thought that was the way to do it.

00:17:08.460 --> 00:17:10.480
But then I switched to this Evaluate

00:17:11.120 --> 00:17:11.540
JSON.

00:17:13.560 --> 00:17:15.780
So I get the inventory file

00:17:15.780 --> 00:17:26.560
to JSON; if it's JSON, I just go to this EvaluateJson, and then I was able to get

00:17:29.940 --> 00:17:31.300
The attributes

00:17:31.300 --> 00:17:37.960
Extracted before yeah extracted into the thing. Oh, I think I showed this yesterday. Oh, yes

00:17:37.960 --> 00:17:40.240
so like our humidity

00:17:41.020 --> 00:17:43.640
Precipitation. Mm-hmm the station and all that stuff

00:17:45.900 --> 00:17:46.460
and

00:17:46.460 --> 00:17:47.180
then

00:17:47.980 --> 00:17:50.000
The plan was to just feed this

00:17:50.880 --> 00:17:56.420
Converted CSV converted into JSON into that same thing. You mentioned yesterday that I might have to do a separate evaluate

00:17:56.420 --> 00:17:59.840
And I think I did because the second one I got

00:18:01.020 --> 00:18:03.940
Didn't parse correctly. Mm-hmm

00:18:03.940 --> 00:18:10.460
So I'll probably have to do just a separate EvaluateJson for that CSV to get that to work. I...

00:18:10.460 --> 00:18:11.880
absolutely or

00:18:11.880 --> 00:18:14.020
you may want to

00:18:14.020 --> 00:18:20.620
just, you know how you're parsing your JSON, parse your CSV and have it as an attribute, and then,

00:18:21.140 --> 00:18:23.840
You know with both of those as an attribute

00:18:24.460 --> 00:18:29.200
you can put processors on down the line that would, you know, write

00:18:30.320 --> 00:18:32.820
a single JSON document and

00:18:32.820 --> 00:18:34.540
All of it would be the same

00:18:34.540 --> 00:18:39.340
So there's a couple different ways. I like the path you're on.

00:18:39.340 --> 00:18:44.340
I always like using, you know, record readers and record writers, just because

00:18:45.300 --> 00:18:48.240
They're reusable components. They

00:18:48.240 --> 00:18:53.580
You know, you can add some logic and schema and some intelligence behind it
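
(The typical shape of that, as a sketch: a ConvertRecord processor wired to two controller services, with the schema living on the reader rather than in the flow itself.)

    ConvertRecord
      Record Reader:  CSVReader            (Schema Access Strategy: Use 'Schema Text' Property)
      Record Writer:  JsonRecordSetWriter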

00:18:54.280 --> 00:18:57.580
But I think you would have got, you know pretty close

00:18:58.360 --> 00:19:00.420
If you would have had the time

00:19:01.080 --> 00:19:06.140
Any issues or concerns or any questions you had about the overall scenario or flow?

00:19:11.720 --> 00:19:13.500
Perfect perfect perfect

00:19:15.020 --> 00:19:18.660
All right, Pedro, let's look at yours. Okay.

00:19:20.380 --> 00:19:21.900
How we doing

00:19:29.520 --> 00:19:30.160
So

00:19:30.860 --> 00:19:36.660
My approach was to like I put a filter on CSV files so I could make it into JSON

00:19:36.660 --> 00:19:39.000
Well, I think I got that working

00:19:43.680 --> 00:19:52.180
Yeah, right there. So I got those guys going. Oh, nice, nice. And then I guess after that I was,

00:19:52.180 --> 00:19:58.700
Okay, then just go in and do the JSONs and then merge them and then do whatever you have to do

00:20:00.560 --> 00:20:04.140
Okay, yeah if you were able to have time

00:20:05.700 --> 00:20:06.400
You

00:20:07.320 --> 00:20:08.800
In your queue

00:20:08.800 --> 00:20:15.620
And you notice, you know, with those 10,000 files in the queue, it's basically halted, right?

00:20:16.040 --> 00:20:19.440
The reason it's halted is that error LogMessage processor;

00:20:19.440 --> 00:20:26.080
if you were to start that, it would clear the queue for the ExtractText, and then the

00:20:26.080 --> 00:20:30.820
'Process JSON files' can send its queue to the ExtractText.
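
(What's at work here is back pressure: each connection has per-connection thresholds, and once a queue hits one, the processor feeding it stops being scheduled until something downstream drains the queue.)

    Back Pressure Object Threshold:     10000   (default)
    Back Pressure Data Size Threshold:  1 GB    (default)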

00:20:30.820 --> 00:20:32.840
You know, so

00:20:32.840 --> 00:20:37.480
You know, I'm glad it's there, just so we can point it out; it's a learning opportunity.

00:20:38.120 --> 00:20:43.960
But yeah, you know, that would have helped clear the queue. I think you're on a good path. Just keep in mind,

00:20:43.960 --> 00:20:46.120
you know, there's...

00:20:46.760 --> 00:20:50.580
You know, you want to reduce the amount of processors you use in a data flow

00:20:50.580 --> 00:20:57.500
So if possible, right you can pick all the files up do some filtering and sorting

00:20:58.340 --> 00:21:02.820
As soon as possible and then you know start sending it off to its own process

00:21:02.820 --> 00:21:07.060
It's all its own flow and then you could merge those at the end as well

00:21:07.780 --> 00:21:10.900
You know, so, just tips and tricks to keep in mind.

00:21:11.840 --> 00:21:14.780
But it looks great. I like the labeling

00:21:15.580 --> 00:21:18.780
And those things that you've got accomplished, though. Great job.

00:21:21.470 --> 00:21:22.130
Okay

00:21:22.130 --> 00:21:23.730
Shawn good morning

00:21:24.890 --> 00:21:28.610
All right, can you walk me through your data flow?

00:21:28.610 --> 00:21:34.930
Also, I didn't get too much done since we talked about it yesterday. Okay

00:21:35.970 --> 00:21:42.110
But I definitely learned a few lessons, so I would change how I had done it today.

00:21:42.110 --> 00:21:44.490
But yeah, it's just picking up the CSVs

00:21:46.240 --> 00:21:54.240
I was applying the schema, turning it into JSON, and then writing it again. And then this one's just, I was just picking back up the already-written files from

00:21:55.080 --> 00:21:57.200
the same output directory, okay

00:21:58.960 --> 00:22:04.940
Which that is one part I would do differently if I was starting over from scratch, okay

00:22:06.260 --> 00:22:14.160
And then I was just messing with this MergeContent one a little bit while you were going through with the other people this morning.

00:22:15.140 --> 00:22:21.720
Oh, nice. But we're just picking the files that were written back up,

00:22:21.720 --> 00:22:27.420
to merge them into a single merged JSON, and then

00:22:27.420 --> 00:22:29.400
I was gonna do the

00:22:29.400 --> 00:22:32.840
SQL statistics on it. Mm-hmm. I

00:22:32.840 --> 00:22:36.540
I like how, you know... I like how, you know,

00:22:36.540 --> 00:22:43.660
folks were just extracting it, using an Avro schema for the CSV, those types of things.

00:22:43.660 --> 00:22:47.460
And then, you know, there was different approaches, such as using the SQL.

00:22:47.460 --> 00:22:54.820
So that was really nice. All of the... you know, you have a MergeContent and a MergeRecord.

00:22:56.780 --> 00:23:01.500
All of the standard processors have documentation

00:23:04.740 --> 00:23:09.460
which is pretty helpful. Yeah, you see it bolded: it's a required field; if it's not,

00:23:10.560 --> 00:23:14.800
you know, they're optional. But yeah: merges a group of flow files together

00:23:15.800 --> 00:23:18.020
based on a user-defined strategy.

00:23:19.560 --> 00:23:27.280
I think you would have got all of those JSONs merged; you could have, you know,

00:23:27.280 --> 00:23:29.080
extracted a few things from them and

00:23:29.600 --> 00:23:34.380
set up some alert, you know, extract the wind speed or something, and you would have been finished.
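
(As a sketch, the kind of MergeContent settings that would bin those JSONs into one flow file; values illustrative. The Header, Footer, and Demarcator properties can be set to [, ], and a comma so the merged content stays a valid JSON array.)

    Merge Strategy:             Bin-Packing Algorithm
    Merge Format:               Binary Concatenation
    Minimum Number of Entries:  1000
    Max Bin Age:                5 min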

00:23:34.380 --> 00:23:36.580
So great job and

00:23:36.580 --> 00:23:38.400
any questions

00:23:40.000 --> 00:23:40.560
Again

00:23:43.520 --> 00:23:48.620
But I think like you mentioned earlier it's good practice for this

00:23:48.620 --> 00:23:55.700
Having all those safety steps in there. It is, it is. And a lot of people like to just... well,

00:23:55.700 --> 00:24:00.900
I don't want to add those processors because I have to go back and delete them or I may leave them in

00:24:00.900 --> 00:24:05.580
You know as this gets deployed, you know those types of things

00:24:05.580 --> 00:24:09.400
So, you know, it's always good to have those safety steps in place

00:24:09.720 --> 00:24:12.320
You know, even to this day,

00:24:13.480 --> 00:24:17.740
You know, I'll create a flow and I'll get ahead of myself and I'm like, oh no

00:24:17.740 --> 00:24:20.560
I forgot to turn on Keep Source File, and I'm, you know,

00:24:22.920 --> 00:24:27.220
copying what's missing from the source back to, you know, back to a folder.

00:24:27.220 --> 00:24:36.000
You know, so, I don't believe it. So, you know, even I get into those situations sometimes. So, you know, yeah.

00:24:42.060 --> 00:24:46.500
Nice, nice. No, it looks like you got really far.

00:24:50.040 --> 00:24:50.940
You know

00:24:56.360 --> 00:25:04.520
One of the things that I didn't see enough, that I've seen in the past, is, and, you know, this is across the board:

00:25:05.160 --> 00:25:08.520
You may create a processor group that

00:25:09.960 --> 00:25:11.480
handles the

00:25:11.480 --> 00:25:13.280
picking up and filtering of

00:25:14.360 --> 00:25:14.860
files

00:25:14.860 --> 00:25:21.480
You may have another process group within that, you know, that parent-level process group, that,

00:25:22.360 --> 00:25:24.280
you know handles the

00:25:25.240 --> 00:25:30.400
You know your ETL steps for your CSV and then you may have another process group that

00:25:31.260 --> 00:25:38.400
Handles the JSON and then that way each of those functions can run independently of each other

00:25:38.400 --> 00:25:44.860
You know, keep in mind, I know you're accessing a website;

00:25:45.900 --> 00:25:49.560
It's cumbersome. It's not automated and you're downloading it

00:25:49.560 --> 00:25:54.300
But if you were getting a feed of data just written to a disk for instance with you know

00:25:54.300 --> 00:25:56.820
different data types, formats, those types of things;

00:25:56.820 --> 00:26:03.660
you don't want an error or something else, you know, potentially blocking

00:26:03.660 --> 00:26:08.380
The whole flow so, you know, keep in mind you can bust this up

00:26:08.400 --> 00:26:10.720
into, you know

00:26:10.720 --> 00:26:17.540
smaller functions, so that way, you know, you may be processing JSON and maybe processing CSV;

00:26:18.040 --> 00:26:22.220
CSV could act up, but, you know, JSON will continue to process.

00:26:22.220 --> 00:26:28.960
So just keep that in mind when you're designing your data flows. You can bust this up put it into

00:26:28.960 --> 00:26:34.400
You know different processor groups link those together with your input and output ports

00:26:35.100 --> 00:26:37.080
And you go from there so
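
(A sketch of that decomposition; the group names are illustrative. Each arrow into a group is an input port, each arrow out an output port.)

    Parent process group
      Ingest & Filter   (GetFile -> route by type -> output ports: csv, json)
      CSV ETL           (input port -> convert -> clean -> output port)
      JSON ETL          (input port -> flatten -> clean -> output port)
      Merge & Alert     (input ports -> MergeContent -> wind speed alert)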

00:26:37.820 --> 00:26:39.220
But great job

00:26:39.220 --> 00:26:43.640
Everyone, you know, you got a lot further; everyone got further than I was expecting.

00:26:44.280 --> 00:26:49.040
I knew it would throw a few curveballs, because, you know,

00:26:49.040 --> 00:26:55.440
We were having to do some ETL steps and then that alerting mechanism I knew would kind of trip folks up

00:26:56.200 --> 00:27:01.560
You know, just keep in mind that you can always go back to the documentation

00:27:02.080 --> 00:27:04.080
You can you know the

00:27:05.140 --> 00:27:09.980
description of the documentation in NiFi, you know, should include all of this as well.

00:27:09.980 --> 00:27:12.060
But you know, everything's on the website

00:27:12.060 --> 00:27:16.780
and then, you know, there's a ton of processors for,

00:27:16.780 --> 00:27:19.000
you know, some of these things.

00:27:19.900 --> 00:27:21.620
and then speaking of

00:27:21.620 --> 00:27:26.280
documentation I found I you know, I had mentioned that

00:27:27.980 --> 00:27:33.060
Azure was supporting NiFi more and more, and

00:27:33.060 --> 00:27:36.720
So last night I was looking on this

00:27:37.600 --> 00:27:45.740
I was going over what I was going to you know show today and I ran across a

00:27:45.740 --> 00:27:47.400
New the new Microsoft Azure

00:27:48.380 --> 00:27:49.020
Perfect

00:27:50.680 --> 00:27:53.000
So, you know as I mentioned

00:27:53.000 --> 00:28:00.200
Microsoft is starting to really lean into NiFi. They... you know, I can't confirm nor deny,

00:28:00.200 --> 00:28:04.120
but it will become a potential service within

00:28:05.000 --> 00:28:09.540
And so, you know, they do have a lot of

00:28:09.540 --> 00:28:15.000
documentation on this. I stole, you know, this graphic for the slides I'm about to present.

00:28:15.000 --> 00:28:18.160
But there's a lot of stuff

00:28:18.160 --> 00:28:26.220
that Microsoft's even released, you know, that they're putting out. So I'm gonna include a link to this, and I'm gonna include other links,

00:28:26.220 --> 00:28:31.820
you know, just so you can take this back and have that documentation, you know.

00:28:31.820 --> 00:28:37.900
One of the biggest things I try to you know, let the class know is I'm gonna give you as much information as I can

00:28:37.900 --> 00:28:41.600
This is a quick three-day training. We're not on a server

00:28:41.600 --> 00:28:49.280
You know in a multi-tenancy environment those types of things so, you know, we've got to do the best we can with what tools we have

00:28:50.000 --> 00:28:57.200
But I'll definitely get these links out to everyone. But yeah, you know, in case you didn't know, there is now, you know, some

00:28:58.380 --> 00:28:59.500
additional information

00:29:00.000 --> 00:29:03.680
Specifically on Azure. All right. So that being said

00:29:03.680 --> 00:29:06.060
any other questions about

00:29:08.120 --> 00:29:09.880
the NiFi, NiFi's

00:29:13.420 --> 00:29:14.300
Registry

00:29:14.300 --> 00:29:16.800
those types of things before we go into

00:29:17.340 --> 00:29:18.320
scalability

00:29:18.320 --> 00:29:20.320
multi-tenancy

00:29:20.320 --> 00:29:26.360
and, you know, those types of topics. I'll take some...