Cost function for LTI system identification
I am currently reading and trying to understand a paper (Kulkarni and Colburn, 2004) that utilizes system identification methods to approximate head-related transfer functions.
The general approach is to
- Compute an autoregressive (all-pole) estimate of the transfer function using the autocorrelation method for linear prediction.
- Use the AR estimate as a starting point to compute a pole-zero-representation of the system transfer function iteratively.
- Evaluate the result of the estimation process on a logarithmic scale (Error in dB).
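As I understand step 1, the autocorrelation method amounts to solving the Yule-Walker normal equations built from the (biased) autocorrelation of the measured impulse response. A minimal NumPy sketch of that step (my own illustration, not code from the paper; the function name is mine):

```python
import numpy as np

def ar_autocorrelation(h, p):
    """Order-p all-pole (AR) model of the impulse response h via the
    autocorrelation method: solve the Yule-Walker normal equations."""
    n = len(h)
    # biased autocorrelation estimates r[0..p]
    r = np.array([np.dot(h[:n - k], h[k:]) for k in range(p + 1)])
    # symmetric Toeplitz system R @ a_tail = -r[1:]
    R = r[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]
    a_tail = np.linalg.solve(R, -r[1:])
    return np.concatenate(([1.0], a_tail))   # A(z) = 1 + a1 z^-1 + ...

# sanity check on a known all-pole system A(z) = 1 - 0.5 z^-1 + 0.25 z^-2
a_true = np.array([1.0, -0.5, 0.25])
h = np.zeros(256)
for n in range(256):                          # impulse response of 1/A(z)
    h[n] = (1.0 if n == 0 else 0.0) \
         - sum(a_true[k] * h[n - k] for k in range(1, 3) if n - k >= 0)
a_est = ar_autocorrelation(h, 2)
```

For a truly all-pole system and a sufficiently long impulse response this recovers the denominator coefficients almost exactly; for an HRTF it only provides the starting point for the pole-zero iteration.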
For the iterative procedure, the authors propose the cost function
$$\hat{C} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|H(e^{j\omega})A(e^{j\omega}) - B(e^{j\omega})\right|^2 \, d\omega,$$
where $H$ is the system transfer function, $A$ is the DTFT of the recursive (denominator) coefficients, and $B$ is the DTFT of the transversal (numerator) coefficients.
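For concreteness, this equation-error cost can be approximated by sampling the DTFTs on a uniform frequency grid (my own sketch; `dtft` and `equation_error_cost` are hypothetical helper names, not from the paper):

```python
import numpy as np

def dtft(c, w):
    """Evaluate sum_k c[k] * e^{-j*w*k} on the frequency grid w."""
    return np.exp(-1j * np.outer(w, np.arange(len(c)))) @ c

def equation_error_cost(H, b, a, w):
    """Riemann-sum approximation of (1/2pi) * int |H*A - B|^2 dw
    for a uniform grid w covering [-pi, pi)."""
    return np.mean(np.abs(H * dtft(a, w) - dtft(b, w)) ** 2)

# if H is exactly B/A, the cost must vanish (up to rounding)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)
a, b = np.array([1.0, -0.5]), np.array([1.0, 0.3])
H = dtft(b, w) / dtft(a, w)
cost = equation_error_cost(H, b, a, w)
```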
I understand this approach originates from this paper (Kalman, 1958).
This cost function is then extended for the iterative process as
$$\hat{C}_i = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|\frac{H(e^{j\omega})A_i(e^{j\omega})}{A_{i-1}(e^{j\omega})} - \frac{B_i(e^{j\omega})}{A_{i-1}(e^{j\omega})}\right|^2 d\omega,$$
where the index $i$ denotes variables corresponding to the $i$th iteration.
The iterative modification originates from this paper (Steiglitz and McBride, 1965).
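As I understand the Steiglitz-McBride idea, each iteration fixes the prefilter $1/A_{i-1}$ and then solves a problem that is linear in the unknown coefficients. A frequency-domain sketch of one such scheme (my own simplified reconstruction, not the authors' code):

```python
import numpy as np

def dtft_matrix(w, n):
    # column k holds e^{-j*w*k}, k = 0..n-1
    return np.exp(-1j * np.outer(w, np.arange(n)))

def iterative_fit(H, w, p, q, n_iter=10):
    """Fit B/A (q zeros, p poles) to H by repeated weighted linear
    least squares, with weight 1/A_{i-1} from the previous iteration."""
    Ea, Eb = dtft_matrix(w, p + 1), dtft_matrix(w, q + 1)
    a = np.r_[1.0, np.zeros(p)]           # A_0 = 1: plain equation error
    for _ in range(n_iter):
        V = 1.0 / (Ea @ a)                # prefilter 1/A_{i-1}
        # residual V*(H*A - B) is linear in x = [a_1..a_p, b_0..b_q]
        M = np.hstack([(V * H)[:, None] * Ea[:, 1:], -V[:, None] * Eb])
        rhs = -V * H
        x = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                            np.r_[rhs.real, rhs.imag], rcond=None)[0]
        a = np.r_[1.0, x[:p]]
        b = x[p:]
    return b, a

# sanity check: recover a known pole-zero system
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)
b_true, a_true = np.array([1.0, 0.3]), np.array([1.0, -0.5, 0.25])
H = (dtft_matrix(w, 2) @ b_true) / (dtft_matrix(w, 3) @ a_true)
b_est, a_est = iterative_fit(H, w, p=2, q=1)
```

When the model orders match the true system, the very first (equation-error) pass already yields a zero residual, so the iteration is stationary from the start.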
In order to find a solution on a decibel scale, a weighting function $W$ is introduced:
$$\hat{C}_i = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|W_i(e^{j\omega})\right|^2 \left|\frac{H(e^{j\omega})A_i(e^{j\omega})}{A_{i-1}(e^{j\omega})} - \frac{B_i(e^{j\omega})}{A_{i-1}(e^{j\omega})}\right|^2 d\omega,$$
where the weighting function is given in the paper as
$$W_i(e^{j\omega}) = \frac{\log\left( H(e^{j\omega})\right) - \log\left( \frac{B_{i-1}(e^{j\omega})}{A_{i-1}(e^{j\omega})}\right)}{H(e^{j\omega}) - \frac{B_{i-1}(e^{j\omega})}{A_{i-1}(e^{j\omega})}}.$$
I understand this weighting function is the logarithmic-scale error between the true and the approximated transfer function, divided by the corresponding linear error.
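One way to see the role of such a weight: if $W_i$ is chosen as the ratio of the logarithmic error to the linear error of the previous estimate, then the weighted squared linear error equals the squared logarithmic error by construction. A numerical sanity check with made-up values (my own illustration):

```python
import numpy as np

# made-up samples of the true response H and the previous model B/A
H  = np.array([1.0 + 0.2j, 0.5 - 0.1j, 2.0 + 0.4j])
Hm = np.array([0.9 + 0.1j, 0.6 - 0.2j, 1.5 + 0.3j])

E_lin = H - Hm
E_log = np.log(H) - np.log(Hm)   # complex log: log-magnitude + phase
W = E_log / E_lin                # weight converting linear to log error

lhs = np.abs(W) ** 2 * np.abs(E_lin) ** 2
rhs = np.abs(E_log) ** 2
```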
However, I have trouble understanding how the iterative cost function is arrived at. I would like to ask the following questions:
- Why is the first cost function $\hat{C}$ preferred over, say, $\frac{1}{2\pi}\int_{-\pi}^{\pi} \left|H(e^{j\omega}) - B(e^{j\omega})/A(e^{j\omega})\right|^2 d\omega$ in the first place? What does it accomplish?
- In the iterative cost function $\hat{C}_i$, what is the purpose of dividing by $A_{i-1}$, the recursive part of the previous transfer function estimate?
- For what reason is a weighting function introduced for logarithmic error minimization, rather than just using its numerator as a cost function directly?
I would really appreciate any help or pointers in the right direction.
linear-systems transfer-function system-identification autoregressive-model
asked by Jonas Schwarz
1 Answer
The chosen cost function is the mean squared error, i.e., the integral over a squared magnitude of the difference between frequency responses. The function
$$E(e^{j\omega})=H(e^{j\omega})-\frac{B(e^{j\omega})}{A(e^{j\omega})}\tag{1}$$
depends on frequency, so you can't minimize it directly, unless you want to minimize it for exactly one frequency $\omega$, which is of course pointless. You can choose several error measures depending on $E(e^{j\omega})$ given in $(1)$. Two common choices are
$$\varepsilon_1=\max_{\omega} W(\omega)\left|E(e^{j\omega})\right|\tag{2}$$
and
$$\varepsilon_2=\int_{0}^{\pi} W(\omega)\left|E(e^{j\omega})\right|^2\,d\omega\tag{3}$$
with some positive weighting function $W(\omega)$. Note that $\varepsilon_1$ and $\varepsilon_2$ given by $(2)$ and $(3)$ do not depend on frequency.
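On sampled frequency responses, both measures are straightforward to evaluate; a small sketch with a made-up error function (my addition, not part of the original answer):

```python
import numpy as np

# made-up error samples E(e^{jw}) and uniform weight on [0, pi]
w = np.linspace(0.0, np.pi, 1024)
E = 0.01 * (1.0 - 0.5 * np.exp(-1j * w))
W = np.ones_like(w)

eps1 = np.max(W * np.abs(E))                        # Chebyshev measure (2)
eps2 = np.sum(W * np.abs(E) ** 2) * (w[1] - w[0])   # Riemann sum for (3)
```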
The authors of the paper you refer to chose to minimize the weighted mean square error given by $(3)$. However, instead of using the linear difference $(1)$, they chose to minimize the average logarithmic difference, i.e., the average error on a dB scale.
The problem with the minimization of $(3)$ is that for IIR filters, it results in a non-linear optimization problem, which is much harder to solve directly, and which might also have locally optimal solutions that are far from the global optimum. The cost function $\hat{C}$ in your question is linear in the filter coefficients. The point of the iteration is to solve a sequence of linear minimization problems (which is simple: just solve a system of linear equations) in order to compute the solution of the originally non-linear optimization problem, the minimization of $(3)$. Note that if convergence is achieved, then $A_{i-1}(e^{j\omega})=A_i(e^{j\omega})$, so the cost function $\hat{C}_i$ is identical to the cost function of the original non-linear problem. Yet, only linear minimization problems are solved in each iteration.
I'm not sure I completely understand your last question, but the weighting function is there to change the problem from minimizing the average squared difference between the frequency responses to minimizing the average squared difference between the logarithms of the magnitude responses. The necessary weighting function is unknown, but if the procedure converges (note that there is no guarantee that it does in all cases), the final weighting function is such that the mean squared logarithmic error is minimized. Note that this latter problem is highly non-linear, whereas the proposed procedure only solves linear subproblems.
answered by Matt L.
Thank you for your response! I now understand why the weighting is utilized. Could you elaborate on what is achieved by dividing by $A_{i-1}$ in the iterative cost function? Does it force convergence? – Jonas Schwarz

Also, I made an error in my first question: I meant to ask about the integral over the difference of the true and estimated transfer functions instead of a frequency-dependent error. I'll edit the question. – Jonas Schwarz