WEBVTT
Kind: captions
Language: en
00:00:13.559 --> 00:00:20.140
Hello and welcome to today's tutorial. Today
we will be learning about Aerodynamic Parameter
00:00:20.140 --> 00:00:26.840
Estimation using the Delta method, which I said
I would be discussing. What we have learned
00:00:26.840 --> 00:00:32.050
so far in this section of the lecture is aerodynamic
parameter estimation using the least squares method.
00:00:32.050 --> 00:00:38.190
I have shown you, with real flight data,
how to estimate the parameters using the least
00:00:38.190 --> 00:00:41.680
squares method. Now, before starting today's
lecture,
00:00:41.680 --> 00:00:48.850
I would like to give you references to two
of my favorite books, so you can learn and
00:00:48.850 --> 00:01:01.110
read more about the content. Those reference
books are the following: the first is
00:01:01.110 --> 00:01:39.659
by R. V. Jategaonkar, and the book is
called “Flight Vehicle System Identification:
00:01:39.659 --> 00:02:08.300
A Time Domain Methodology”, published by AIAA.
This is a very good book which you will enjoy
00:02:08.300 --> 00:02:15.860
reading, I hope. The second one is by J. R. Raol, G. Girija
00:02:15.860 --> 00:03:09.299
and J. Singh; the title of the book is “Modelling
and Parameter Estimation of Dynamic Systems”,
00:03:09.299 --> 00:03:19.299
published by IET, okay.
00:03:19.299 --> 00:03:26.459
So if you read these two books and other
sources, you will have a lot more things to
00:03:26.459 --> 00:03:33.900
learn, and these references will also be useful
for today's class, for whatever content
00:03:33.900 --> 00:03:39.329
I will be covering today. So in this part
of the tutorial we will be learning about the
00:03:39.329 --> 00:03:44.830
Delta method: what is the Delta method, and how
do you use it to estimate aerodynamic
00:03:44.830 --> 00:03:48.160
parameters?
I will be discussing that and since I have
00:03:48.160 --> 00:03:52.430
discussed, this is based on artificial
neural networks, so I will be giving you a
00:03:52.430 --> 00:03:57.159
little bit of detail about what an artificial
neural network is and how it works, and then we
00:03:57.159 --> 00:04:04.409
will see examples, maybe the same examples
we used during the least squares estimation,
00:04:04.409 --> 00:04:16.850
and we will learn the process. So I will start
with the Delta method: what is the Delta method?
00:04:16.850 --> 00:04:23.860
The Delta method was proposed by your
course instructor and my supervisor, Professor
00:04:23.860 --> 00:04:32.580
A. K. Ghosh, and a great researcher and scientist,
Dr. Raisinghani. It was developed by Professor
00:04:32.580 --> 00:04:55.430
Ghosh and Dr. Raisinghani almost
two decades back, in 1998.
00:04:55.430 --> 00:05:02.660
So the philosophy of the Delta method works like
this: it derives or estimates your aerodynamic
00:05:02.660 --> 00:05:34.010
derivatives, it estimates aerodynamic derivatives
or parameters, using a feed-forward neural network,
00:05:34.010 --> 00:05:48.289
right. Now you see a new term here, the
feed-forward neural network, so before proceeding
00:05:48.289 --> 00:05:54.310
to the Delta method I would like to discuss
this network first; we will get an understanding
00:05:54.310 --> 00:06:00.320
of it, and then you will appreciate
the Delta method more.
00:06:00.320 --> 00:06:10.070
So let us start with the feed-forward neural
network. I will write FFNN. A Feed-Forward
00:06:10.070 --> 00:06:17.060
Neural Network is a class of neural network
where you have three different layers: an input
00:06:17.060 --> 00:06:23.790
layer, a hidden layer, and an output layer.
The architecture is feed-forward: the
00:06:23.790 --> 00:06:29.139
flow of information goes only one way, in one
direction, so it is called a Feed-Forward Network,
00:06:29.139 --> 00:06:34.790
all right. Basically the whole neural network
concept is based on our biological neurons,
00:06:34.790 --> 00:06:40.070
right.
So as we know our brain has around 100 billion
00:06:40.070 --> 00:06:46.720
cells, and each cell is called a neuron; an
artificial neural network mimics our biological
00:06:46.720 --> 00:07:00.449
neurons. If you look at a biological
neuron, it will be something like this,
00:07:00.449 --> 00:07:13.300
yeah; here you receive the information
through the dendrites, and it passes the information
00:07:13.300 --> 00:07:29.180
to other neurons; a single biological
neuron signals through the axon, which is the output part
00:07:29.180 --> 00:07:37.919
where you pass this information to other neurons,
and then you will have a special
00:07:37.919 --> 00:07:41.550
kind of specialized contact; these are called
synapses, right.
00:07:41.550 --> 00:07:51.190
So it passes the information to the dendrites
through these contacts, the synapses, and you accumulate
00:07:51.190 --> 00:07:57.490
everything; this part here is the
nucleus, and this whole part is called the
00:07:57.490 --> 00:08:04.699
soma, okay. So this is roughly the structure
of your biological neuron.
00:08:04.699 --> 00:08:08.550
Now, how does your artificial neural network work?
00:08:08.550 --> 00:08:16.379
As I said, it tries to mimic this process,
so I will just show you an artificial
00:08:16.379 --> 00:08:46.079
neuron; yeah, you have a bias, this is the
output side, and this is the input side. So maybe
00:08:46.079 --> 00:08:55.240
here you have inputs like x1, x2, x3, and y
is your output. Each input will be
00:08:55.240 --> 00:09:04.600
carrying some weight. I will write the weights
as w1, w2, w3, where weight w1 corresponds to
00:09:04.600 --> 00:09:10.670
the first input, w2 to the second, and w3 to the third.
00:09:10.670 --> 00:09:17.800
It can have any number, n, of inputs, and
there is the bias; so you have two major parameters
00:09:17.800 --> 00:09:28.550
in this artificial neuron. This is a single
neuron, so I will write "artificial neuron",
00:09:28.550 --> 00:09:36.699
okay. Now try to correlate this artificial
neuron with the biological neuron.
00:09:36.699 --> 00:09:39.120
There you receive the information at the dendrites,
00:09:39.120 --> 00:09:42.890
and here you receive the information in the input
layer; so this is called the input layer.
00:09:42.890 --> 00:09:49.310
These are the inputs attached, and all the
information has to pass through certain
00:09:49.310 --> 00:09:57.240
weights, which are equivalent to the synapses, and
then you get the output here, which is
00:09:57.240 --> 00:10:03.000
equivalent to your axon, and then further
processing happens, as I discussed for the soma.
00:10:03.000 --> 00:10:11.860
So how is the soma related to this? Again, this
output: you need to sum it over with the
00:10:11.860 --> 00:10:24.160
weights and process the sum through some nonlinear
function, which is called the Activation
00:10:24.160 --> 00:10:37.160
Function, and then you receive the output of
the neuron. So this is the situation:
00:10:37.160 --> 00:10:47.720
what happens here is you will have
x1 times w1 plus x2 times w2
00:10:47.720 --> 00:10:59.660
plus x3 times w3, if you have three inputs, plus
the bias; let us call the bias b. So now this is
00:10:59.660 --> 00:11:06.000
a summation, actually, so you can represent
it by some symbol; maybe you can call it
00:11:06.000 --> 00:11:08.010
s, for summation.
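The single-neuron computation just described can be sketched in Python (a minimal illustration with made-up input and weight values; the sigmoidal activation used here is the one introduced later in the lecture):

```python
import math

def neuron(x, w, b):
    """Single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoidal activation function."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + b          # s = x1*w1 + x2*w2 + x3*w3 + b
    return (1 - math.exp(-s)) / (1 + math.exp(-s))        # activation, output in (-1, 1)

# three inputs x1, x2, x3 with weights w1, w2, w3 and bias b (illustrative values)
y = neuron([1.0, 2.0, 3.0], [0.1, 0.2, 0.3], b=0.5)
```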
00:11:08.010 --> 00:11:12.730
Now it should pass through your activation
function, a nonlinear function, and the result you can
00:11:12.730 --> 00:11:26.970
call f(s), the output after passing
through the activation function. So now you are
00:11:26.970 --> 00:11:34.310
able to understand what a neuron is in an
artificial neural network, in comparison
00:11:34.310 --> 00:11:39.930
with a biological neuron. Now I will
be talking about the structure of artificial neural
00:11:39.930 --> 00:11:47.300
networks, specifically the feed-forward
neural network I mentioned, and let me also tell you
00:11:47.300 --> 00:11:51.860
that this architecture is not the only architecture
in artificial neural networks.
00:11:51.860 --> 00:12:05.350
You have other architectures also, which I
will name shortly; this one, as you know, is the Feed-Forward
00:12:05.350 --> 00:12:14.120
Neural Network. Let me show it in terms of
structure so you will understand better:
00:12:14.120 --> 00:12:32.060
it has three layers, basically. So, input,
and maybe I will draw one here also. What
00:12:32.060 --> 00:12:46.430
happens is, this layer is called your input
layer, and this is your output layer here.
00:12:46.430 --> 00:12:51.010
In the problem of identification we basically
deal with inputs and outputs.
00:12:51.010 --> 00:13:01.720
So you have the input layer and the output layer, and in
between, at the heart of this network,
00:13:01.720 --> 00:13:18.279
is what is called the hidden layer. Now, this whole
set of neurons is called the input layer,
00:13:18.279 --> 00:13:24.279
this whole thing the hidden layer, and in the output you
have a single neuron here. Suppose now I
00:13:24.279 --> 00:13:29.089
have kept three neurons in the input layer; it means we
have three different inputs, so three inputs
00:13:29.089 --> 00:13:35.550
are there in the input layer. The number of neurons
in the hidden layer depends on the complexity
00:13:35.550 --> 00:13:40.250
of the problem. If it is more complex
you can have more neurons in the hidden
00:13:40.250 --> 00:13:44.709
layer, and the selection of the number of hidden layers
also depends, again, on the complexity of the problem. The
00:13:44.709 --> 00:13:49.930
literature suggests one hidden layer is good
enough to capture the complexity of a problem
00:13:49.930 --> 00:13:51.350
like our aerodynamic application.
00:13:51.350 --> 00:13:57.870
So we have only one hidden layer,
with any number of hidden neurons, and
00:13:57.870 --> 00:14:03.029
then if you have one output you will
have one neuron in the output layer; if you have
00:14:03.029 --> 00:14:10.550
two outputs you will have two neurons in the output layer.
So this is the architecture of the feed-forward network;
00:14:10.550 --> 00:14:16.850
let me complete this. Suppose now you
have three different inputs again, x1, x2,
00:14:16.850 --> 00:14:32.750
and x3; each neuron gets
connected to every neuron in the next layer, like this,
00:14:32.750 --> 00:14:46.300
okay, and information always flows in the forward direction.
Then each hidden neuron will get
00:14:46.300 --> 00:14:55.470
connected to your output neuron, like this.
As I said, neurons are connected with some
00:14:55.470 --> 00:15:02.060
weights; for the weights, suppose I give
the neurons the names a and b, so here it
00:15:02.060 --> 00:15:14.490
will be w1a; or maybe, since this is
from the first input to neuron a1, you can write
00:15:14.490 --> 00:15:35.560
w1a1, and here you can write w1a2, w1a3,
and likewise w2a1, w2a2, and so on. You can assign
00:15:35.560 --> 00:15:42.440
any symbols, it does not matter, but each connection
will have its weight. Further
00:15:42.440 --> 00:15:55.199
on, it will have weights w1b, w2b and
w3b to the output; so let us say the output is of
00:15:55.199 --> 00:15:57.350
course Y, okay.
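The forward pass through this three-input, one-output architecture can be sketched as follows (a toy illustration; the hidden-layer size and all weight and bias values are made up):

```python
import math

def sigmoid(s):
    # activation used later in the lecture: (1 - e^-s) / (1 + e^-s)
    return (1 - math.exp(-s)) / (1 + math.exp(-s))

def ffnn(x, W_hidden, b_hidden, w_out, b_out):
    """Forward pass: input layer -> hidden layer -> single output neuron.
    Information flows in one direction only (feed-forward)."""
    # hidden layer: each hidden neuron sums all inputs with its own weights
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
         for w, b in zip(W_hidden, b_hidden)]
    # output layer: weighted sum of hidden outputs plus its own bias
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out, h)) + b_out)

# 3 inputs, 2 hidden neurons a1 and a2, 1 output neuron b (illustrative weights)
W_hidden = [[0.1, 0.2, 0.3],   # w1a1, w2a1, w3a1
            [0.4, 0.5, 0.6]]   # w1a2, w2a2, w3a2
y = ffnn([1.0, 2.0, 3.0], W_hidden, [0.1, 0.2], [0.7, 0.8], 0.3)  # w1b, w2b, bias
```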
00:15:57.350 --> 00:16:06.850
So this is the architecture of the feed-forward
neural network. Apart from that, you have other
00:16:06.850 --> 00:16:21.199
architectures also. I will just
list the names: there is the RNN, the Recurrent Neural
00:16:21.199 --> 00:16:46.329
Network, and the Radial Basis Function
Neural Network. As you saw, in this architecture
00:16:46.329 --> 00:16:53.820
you have the flow of information in a single direction,
the forward direction; in an RNN you will have
00:16:53.820 --> 00:16:59.160
the flow of information in both directions,
which is a bi-directional flow of information,
00:16:59.160 --> 00:17:08.740
and then there is the RBFNN.
00:17:08.740 --> 00:17:14.380
In this kind of structure you will not have the
usual sigmoidal activation in the hidden layer;
00:17:14.380 --> 00:17:20.480
the hidden layer will have radial basis
functions instead. So that is how they work. I will
00:17:20.480 --> 00:17:25.500
be focusing more on the feed-forward neural network,
so let us try to understand how they
00:17:25.500 --> 00:17:28.060
work. So now this is the architecture.
00:17:28.060 --> 00:17:35.660
So let us take a simple example. Suppose
you have a function
00:17:35.660 --> 00:17:44.050
like y equals x1 times x2 times x3, so this output
is simply multiplying all the inputs,
00:17:44.050 --> 00:17:54.460
x1 times x2 times x3. As I said, this
kind of approach follows black
00:17:54.460 --> 00:18:02.310
box modeling. We know the relationship between
the inputs x and the output y; suppose they are
00:18:02.310 --> 00:18:07.540
multiplied; we have that prior notion,
but now they are in the form of data, and from
00:18:07.540 --> 00:18:11.980
the data we are trying to extract that information
using the neural network modeling.
00:18:11.980 --> 00:18:17.310
The neural network does not bother about whether the
inputs are multiplied or how they are correlated;
00:18:17.310 --> 00:18:24.790
actually, the neural network does not have
to understand this structure. Without understanding
00:18:24.790 --> 00:18:30.270
the structure, it will assign the weights
to all the connections between neurons, and
00:18:30.270 --> 00:18:37.520
of course there will be biases; these
are the biases, okay. It will
00:18:37.520 --> 00:18:42.430
adjust the weights and biases in such a way that
you will be able to map your input to your output.
00:18:42.430 --> 00:18:49.120
So let us say you have inputs like 1,
2, 3; the output will be 6, so this is the
00:18:49.120 --> 00:19:03.150
first set of data. For the second set of
data maybe you have 2, 3, 4, so the second
00:19:03.150 --> 00:19:20.580
output will become 24; for the third you have 2, 5 and 10,
so the third output will be 100, right, yeah.
00:19:20.580 --> 00:19:27.830
So now you see that you have three different
sets of inputs, the first set, second set and third
00:19:27.830 --> 00:19:32.990
set, and you have the corresponding outputs 6, 24,
100; these are the data we have available.
00:19:32.990 --> 00:19:41.200
Now suppose you want the neural network
to give you a model with the help of training;
00:19:41.200 --> 00:19:45.800
to achieve that, it will create a trained network.
It means we are trying to train the network
00:19:45.800 --> 00:19:53.630
with the help of input-output data, so that
it will give you a network where, if
00:19:53.630 --> 00:20:04.590
you give some other numbers, it predicts the output. The whole
idea is: if you give 4, 2 and 3, it should
00:20:04.590 --> 00:20:18.860
give you 24. So now, if you have designed
and trained your network model,
00:20:18.860 --> 00:20:24.950
then for this input it should be able to give
you 24; it should predict like that. So yeah,
00:20:24.950 --> 00:20:31.770
this is the whole idea. Now, as I said,
you have to train the network, and you
00:20:31.770 --> 00:20:37.910
have to assign and update the weights and the
biases such that the network can capture the dynamics,
00:20:37.910 --> 00:20:44.461
or capture the relationship between input
and output. How will you do that? The most
00:20:44.461 --> 00:20:49.960
frequently used method is the back-propagation
algorithm; with the help of the back-propagation
00:20:49.960 --> 00:20:53.180
algorithm you will be able to train your feed
forward neural network.
00:20:53.180 --> 00:20:55.721
So I will just show you briefly how the back
00:20:55.721 --> 00:21:04.010
propagation algorithm works. As I said, now
comes the training of your neural network;
00:21:04.010 --> 00:21:16.580
we will do the training using the back-propagation
algorithm,
00:21:16.580 --> 00:21:58.020
written BPA, okay. It works like this; I will
just write the update rule: w(k+1) = w(k) − η ∂E/∂w.
00:21:58.020 --> 00:22:00.210
Right, so now you see that you are trying to
00:22:00.210 --> 00:22:05.570
update your weight, any weight actually; let
us take this one, w1a1, which we should pick
00:22:05.570 --> 00:22:19.350
up: w1a1(k+1) = w1a1(k) − η ∂E/∂w1a1,
okay. So what is the inference of this
00:22:19.350 --> 00:22:25.330
operation? You are trying to update, which
means you are trying to get the weight at the
00:22:25.330 --> 00:22:30.350
next step with the information of the previous
weight. Initially you will give
00:22:30.350 --> 00:22:33.060
some weights, and then how will you know whether these
weights are correct or not?
00:22:33.060 --> 00:22:39.350
It will update your weights on the basis of
this rule, which is basically based on gradient
00:22:39.350 --> 00:22:45.190
descent. So here you see, if
you have initially estimated some weight, it
00:22:45.190 --> 00:22:54.020
will be w0, so w1 will be w0 − η
times ∂E/∂w. Let me explain this
00:22:54.020 --> 00:23:10.550
term: η is called the learning parameter.
And E is your cost function, which we
00:23:10.550 --> 00:23:26.220
discussed as the squared error function or
cost function. So what is E? Basically, E is the
00:23:26.220 --> 00:23:40.980
squared error summed over the data: E = ½ Σ (k = 1 to N) e²(k).
Now let us point out one thing here, okay:
00:23:40.980 --> 00:23:57.250
suppose you look at E versus w; sometimes
it can be constant in some region, something like that,
00:23:57.250 --> 00:24:01.930
and in this region, you see, you will not be
able to update the weight even though you
00:24:01.930 --> 00:24:09.570
have error; if you look at error versus weight there,
the gradient ∂E/∂w will make the update term zero,
00:24:09.570 --> 00:24:15.180
and at that time you will not be able to update the
weight. So you need to add one more term, which
00:24:15.180 --> 00:24:28.800
is the momentum: the momentum parameter
times (w(k) − w(k−1)), one step back, okay;
00:24:28.800 --> 00:24:42.210
this coefficient is called the momentum parameter.
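A minimal sketch of this update rule, w(k+1) = w(k) − η ∂E/∂w + μ (w(k) − w(k−1)), on a toy one-weight cost function (the cost E(w) = (w − 2)² and the values of η and μ are made up for illustration, not from flight data):

```python
def train_weight(dE_dw, w0, eta=0.1, mu=0.5, steps=100):
    """Gradient descent with momentum for a single weight:
    w(k+1) = w(k) - eta * dE/dw + mu * (w(k) - w(k-1))."""
    w_prev, w = w0, w0
    for _ in range(steps):
        # update using the gradient term and the momentum (previous step) term
        w, w_prev = w - eta * dE_dw(w) + mu * (w - w_prev), w
    return w

# toy cost E(w) = (w - 2)^2, so dE/dw = 2*(w - 2); the minimum is at w = 2
w_star = train_weight(lambda w: 2 * (w - 2), w0=0.0)
```

The momentum term keeps the weight moving through flat regions of E where the gradient alone would stall the update.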
00:24:42.210 --> 00:24:49.120
So the learning parameter decides
the learning rate, how fast or how slowly
00:24:49.120 --> 00:24:56.180
you are making your network learn, and the momentum
parameter helps you improve the performance.
00:24:56.180 --> 00:25:04.020
Now each weight you can write like
that, as I discussed earlier. So now here are
00:25:04.020 --> 00:25:13.050
your inputs; for this hidden layer you
will have weights like w1a1 and so on. For this first
00:25:13.050 --> 00:25:21.310
neuron, I just want to show you one neuron's
expression. For this, let us say the output
00:25:21.310 --> 00:25:35.110
is, suppose, w1a1 x1 + w2a1 x2 + w3a1 x3 plus
a bias; this bias you can name,
00:25:35.110 --> 00:25:45.800
like, ba1, so plus ba1; I am sorry for
messing it up here.
00:25:45.800 --> 00:25:50.890
So this is your expression. Now, this
is the summation actually, the summation of all
00:25:50.890 --> 00:25:57.900
the inputs with all their weights; now it has to pass
through some nonlinear function. The most
00:25:57.900 --> 00:26:11.460
popularly used nonlinear function
is the sigmoidal function. Sigmoid
00:26:11.460 --> 00:26:27.530
functions are basically your activation
functions; here f(yi) = (1 − e^(−yi)) / (1 + e^(−yi)). Suppose
00:26:27.530 --> 00:26:36.780
now in this case you have the summation c1,
and this is f(c1): once it passes through the
00:26:36.780 --> 00:26:42.900
sigmoidal function it will become (1 − e^(−c1)) / (1 + e^(−c1));
and then, similarly, with the help of this,
00:26:42.900 --> 00:26:47.220
c2 will come, c3 will come from
here.
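A quick check of this sigmoidal activation, f(y) = (1 − e^(−y)) / (1 + e^(−y)): this particular form equals tanh(y/2), so it maps any summation into the open interval (−1, 1):

```python
import math

def sigmoid(y):
    # f(y) = (1 - e^(-y)) / (1 + e^(-y)), as written on the board
    return (1 - math.exp(-y)) / (1 + math.exp(-y))

# evaluate at a few summation values; every output lies strictly in (-1, 1)
outputs = [sigmoid(c) for c in (-5.0, -1.0, 0.0, 1.0, 5.0)]
```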
00:26:47.220 --> 00:26:50.150
Again, weights are assigned to the outputs;
00:26:50.150 --> 00:26:57.110
all the outputs coming from your
hidden layer neurons will go to the output layer
00:26:57.110 --> 00:27:07.010
again; you can label these, let us
say, b2, okay. So with that, again you
00:27:07.010 --> 00:27:11.360
will have the summation of all the inputs
with the bias, and you will pass it to the activation
00:27:11.360 --> 00:27:19.220
function, and then it goes through
like this. Now suppose here the first output
00:27:19.220 --> 00:27:27.280
in the first iteration came out as y1; so
now how much is the error? Your error will be
00:27:27.280 --> 00:27:34.910
e = Y − y1, and e² = (Y − y1)².
Then you design your cost function
00:27:34.910 --> 00:27:44.590
with that. Now you see, y1
is a function of these weights, and these weights
00:27:44.590 --> 00:27:50.100
or this output are in turn functions of the
previous weights. So you have to update: the first
00:27:50.100 --> 00:27:55.660
time you will give some information,
like an initial guess; it will go forward, you
00:27:55.660 --> 00:28:02.530
will see the error; with the error it will
update this weight first, and then the weights on
00:28:02.530 --> 00:28:06.680
the back side, so the error is propagating
backwards.
00:28:06.680 --> 00:28:12.490
So you are making the correction of the error
by going backwards; the principle of the back
00:28:12.490 --> 00:28:18.220
propagation algorithm works like that: we
correct the error by propagating it backwards,
00:28:18.220 --> 00:28:22.610
and that is why it is called the back
propagation algorithm. So with the help
00:28:22.610 --> 00:28:27.460
of the understanding of the back-propagation algorithm
and the structure of the FFNN you will get
00:28:27.460 --> 00:28:33.290
a network model; so that is how your
neural network works.
00:28:33.290 --> 00:28:36.010
Next we will go back to the Delta
00:28:36.010 --> 00:28:40.550
method and see how it works. As I just said, the
Delta method estimates your aerodynamic
00:28:40.550 --> 00:28:47.090
parameters with the help of an artificial
neural network model, basically a feed-forward
00:28:47.090 --> 00:28:52.650
neural network model. It
is a very prominent method, you know, and it
00:28:52.650 --> 00:28:58.870
is quite intuitive also. If you look at an aerodynamic
derivative, let us take the example of CLα:
00:28:58.870 --> 00:29:05.180
what is the physical significance of
CLα? It means, with a change
00:29:05.180 --> 00:29:15.610
in α, how is your CL going to change? You write
∂CL/∂α, keeping the other things constant,
00:29:15.610 --> 00:29:21.450
if you have other inputs like δe and q.
By keeping those inputs constant you will
00:29:21.450 --> 00:29:30.140
observe the change in CL because of the change
in α, the angle of attack. So this is the
00:29:30.140 --> 00:29:38.480
basic understanding of the aerodynamic derivative
or parameter CLα. So now, how will we translate
00:29:38.480 --> 00:29:47.050
this understanding into the Delta method? Now suppose
this is the model, the aerodynamic model actually,
00:29:47.050 --> 00:29:58.240
which is trained as an FFNN, a feed-forward neural
network model, as you now know very well.
00:29:58.240 --> 00:30:05.390
Now, you know the structure of the model; suppose
I explain this with the
00:30:05.390 --> 00:30:14.320
same example we used earlier during the
least squares estimation explanation. There we
00:30:14.320 --> 00:30:26.760
had three inputs, okay, and those inputs were
your α, δe and qc/2V, and the outputs were
00:30:26.760 --> 00:30:35.950
your force coefficients, the lift force coefficient
CL and the moment coefficient Cm. So this was basically the structure;
00:30:35.950 --> 00:30:43.760
here we do not have a structured aerodynamic
model, unlike what we used in least squares.
00:30:43.760 --> 00:30:46.640
But here we have the trained network model,
00:30:46.640 --> 00:30:53.880
obtained with the help of a set of input data and a set
of output data. Now, the Delta method,
00:30:53.880 --> 00:31:00.230
how it works, I will just explain now: you
perturb the inputs. First, let us say the first
00:31:00.230 --> 00:31:17.500
input, α, you perturb by Δα. Let
me write it here; this is your NN model, yes,
00:31:17.500 --> 00:31:28.280
so now you have perturbed α by Δα, keeping
δe and the other input qc/2V constant; then you
00:31:28.280 --> 00:31:34.290
will see the changes in the outputs.
I can also write them with some different notation:
00:31:34.290 --> 00:31:43.480
CL+, and then Cm also will see some
change, maybe written Cm+. Now you
00:31:43.480 --> 00:31:53.500
perturb in the other direction, α − Δα;
this time, again through your NN model, here
00:31:53.500 --> 00:32:04.780
you will have α − Δα, keeping the other two inputs,
δe and qc/2V, constant, and this representation you
00:32:04.780 --> 00:32:18.460
can write as CL− and Cm−. So this is
what we have here. Now if you observe the outputs,
00:32:18.460 --> 00:32:30.960
CL+ and here CL−, the changes in the force coefficient,
the lift force coefficient, because we have made a change
00:32:30.960 --> 00:32:39.520
of Δα in the positive direction and Δα in the negative direction,
so the total change is 2Δα. So this ratio
00:32:39.520 --> 00:32:48.660
(CL+ − CL−)/2Δα is nothing but CLα. Now you see
you got this aerodynamic derivative CLα
00:32:48.660 --> 00:32:55.530
with the help of the perturbation method: you have
perturbed α in both directions
00:32:55.530 --> 00:33:02.310
to avoid bias, a positive perturbation
and a negative perturbation. Then you observe
00:33:02.310 --> 00:33:07.970
the changes in CL in both directions, and
if you divide by the total perturbation then
00:33:07.970 --> 00:33:24.290
you will get CLα. Similarly you will get Cmα
as (Cm+ − Cm−)/2Δα. Like this you can get
00:33:24.290 --> 00:33:29.350
all the other derivatives also; for example, suppose you want
to get CLδe.
00:33:29.350 --> 00:33:36.890
Then you perturb δe, with δe + Δδe and
00:33:36.890 --> 00:33:45.130
δe − Δδe, keeping α and qc/2V constant, and you will get
CLδe and Cmδe. Like this, using the Delta method,
00:33:45.130 --> 00:33:53.450
you can estimate all those aerodynamic derivatives
with the trained feed-forward neural network model, using
00:33:53.450 --> 00:33:58.901
this approach. So this is the fundamental
understanding of the Delta method. I will show
00:33:58.901 --> 00:34:08.260
you with examples, maybe with a MATLAB simulation,
how you get the derivatives, and then
00:34:08.260 --> 00:34:10.859
you can have the comparison with the least
squares method.
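The perturbation idea of the Delta method is a central difference taken through the trained model. As a sketch, a known linear lift model stands in for the trained FFNN here, and all coefficient values (CL0, the true CLα, and the other slopes) are made up for illustration:

```python
# Delta method sketch: perturb alpha by +/- d_alpha through the (stand-in) model
# and recover CL_alpha = (CL_plus - CL_minus) / (2 * d_alpha).

CL0, CL_ALPHA_TRUE = 0.2, 5.0   # hypothetical values of a linear lift model

def model(alpha, delta_e, qc_2V):
    """Stand-in for the trained NN model: CL as a function of the three inputs."""
    return CL0 + CL_ALPHA_TRUE * alpha + 0.4 * delta_e + 3.0 * qc_2V

def estimate_CL_alpha(model, alpha, delta_e, qc_2V, d_alpha=1e-3):
    # perturb alpha in both directions while keeping the other inputs constant
    CL_plus = model(alpha + d_alpha, delta_e, qc_2V)
    CL_minus = model(alpha - d_alpha, delta_e, qc_2V)
    return (CL_plus - CL_minus) / (2 * d_alpha)

CL_alpha = estimate_CL_alpha(model, alpha=0.05, delta_e=-0.01, qc_2V=0.002)
```

For this linear stand-in model the central difference recovers the slope exactly; for a trained network it gives the local derivative of the network's input-output map.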
00:34:10.859 --> 00:34:15.159
I discussed that earlier. Also, for
your practice, if you are interested, I
00:34:15.159 --> 00:34:22.649
can show you a MATLAB toolbox which is
called nntool; there you will learn,
00:34:22.649 --> 00:34:29.700
or you can learn, how to model a network and
how to train the network with a very
00:34:29.700 --> 00:34:36.030
simple toolbox. It is very easy to use,
and I can demonstrate it in this tutorial so
00:34:36.030 --> 00:34:41.149
that you can practice on different kinds of
problems; it is not limited only to the aerodynamic
00:34:41.149 --> 00:34:45.960
parameter estimation problem. You can employ
it in other applications also, as per your interest
00:34:45.960 --> 00:34:52.270
and requirements. Thank you so much.