Octave also supports linear least squares minimization. That is,
Octave can find the parameter b such that the model
y = x*b
fits data (x,y) as well as possible, assuming zero-mean
Gaussian noise. If the noise is assumed to be isotropic the problem
can be solved using the ‘\’ or ‘/’ operators, or the ols
function. In the general case where the noise is assumed to be anisotropic,
the gls function is needed.
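As a brief illustration (the data here is made up), an overdetermined system can be fit with the ‘\’ operator:

    % Fit y = b(1) + b(2)*t to illustrative noisy data
    t = (1:5)';
    x = [ones(5,1), t];              % design matrix with intercept column
    y = [2.1; 3.9; 6.2; 8.1; 9.8];   % observations
    b = x \ y;                       % least squares estimate of the parameters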
Ordinary least squares estimation for the multivariate model y = x*b + e with mean (e) = 0 and cov (vec (e)) = kron (s, I), where y is a t by p matrix, x is a t by k matrix, b is a k by p matrix, and e is a t by p matrix.
Each row of y and x is an observation and each column a variable.
The return values beta, sigma, and r are defined as follows.
- beta
- The OLS estimator for b. beta is calculated directly via inv (x'*x) * x' * y if the matrix x'*x is of full rank. Otherwise, beta = pinv (x) * y, where pinv (x) denotes the pseudoinverse of x.
- sigma
- The OLS estimator for the matrix s, sigma = (y-x*beta)' * (y-x*beta) / (t-rank(x)).
- r
- The matrix of OLS residuals, r = y - x*beta.
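For example, ols could be applied to simulated data (the regressors and noise level below are arbitrary):

    t = 50;
    x = [ones(t,1), randn(t,1)];     % t observations, k = 2 regressors
    b_true = [1; 3];                 % hypothetical true parameters
    y = x*b_true + 0.5*randn(t,1);   % zero-mean Gaussian noise added
    [beta, sigma, r] = ols (y, x);   % beta should be close to b_true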
Generalized least squares estimation for the multivariate model y = x*b + e with mean (e) = 0 and cov (vec (e)) = (s^2) o, where y is a t by p matrix, x is a t by k matrix, b is a k by p matrix, e is a t by p matrix, and o is a t*p by t*p matrix.
Each row of y and x is an observation and each column a variable. The return values beta, v, and r are defined as follows.
- beta
- The GLS estimator for b.
- v
- The GLS estimator for s^2.
- r
- The matrix of GLS residuals, r = y - x*beta.
See also: ols.
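A minimal sketch of a gls call, assuming the covariance structure o is known (the diagonal matrix below is chosen purely for illustration, with p = 1):

    t = 20;
    x = [ones(t,1), (1:t)'];
    o = diag (linspace (1, 5, t));   % assumed known t*p by t*p covariance structure
    e = chol (o)' * randn (t, 1);    % noise with covariance proportional to o
    y = x*[2; 0.5] + e;
    [beta, v, r] = gls (y, x, o);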
Minimize norm (c*x - d) subject to x >= 0. c and d must be real. x0 is an optional initial guess for x.
Outputs:
- resnorm
The squared 2-norm of the residual: norm(c*x-d)^2
- residual
The residual: d-c*x
- exitflag
An indicator of convergence. 0 indicates that the iteration count was exceeded, and therefore convergence was not reached; >0 indicates that the algorithm converged. (The algorithm is stable and will converge given enough iterations.)
- output
A structure with two fields:
- "algorithm": The algorithm used ("nnls")
- "iterations": The number of iterations taken.
- lambda
Not implemented.
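This description corresponds to Octave's lsqnonneg function. A small usage sketch with arbitrary example data:

    c = [1 0; 0 1; 1 1];                                  % example matrix
    d = [1; -1; 2];
    [x, resnorm, residual, exitflag] = lsqnonneg (c, d);
    % x contains only nonnegative entries;
    % resnorm equals norm (c*x - d)^2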
Create options struct for optimization functions.
Valid parameters are:
- AutoScaling
- ComplexEqn
- FinDiffType
- FunValCheck When enabled, display an error if the objective function returns a complex value or NaN. Must be set to "on" or "off" [default].
- GradObj When set to "on", the function to be minimized must return a second argument which is the gradient, or first derivative, of the function at the point x. If set to "off" [default], the gradient is computed via finite differences.
- Jacobian When set to "on", the function to be minimized must return a second argument which is the Jacobian, or first derivative, of the function at the point x. If set to "off" [default], the Jacobian is computed via finite differences.
- MaxFunEvals Maximum number of function evaluations before optimization stops. Must be a positive integer.
- MaxIter Maximum number of algorithm iterations before optimization stops. Must be a positive integer.
- OutputFcn A user-defined function executed once per algorithm iteration.
- TolFun Termination criterion for the function output. If the difference in the calculated objective function between one algorithm iteration and the next is less than TolFun, the optimization stops. Must be a positive scalar.
- TolX Termination criterion for the function input. If the difference in x, the current search point, between one algorithm iteration and the next is less than TolX, the optimization stops. Must be a positive scalar.
- TypicalX
- Updating
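These parameters are set with optimset and the resulting struct is passed to an optimization function such as fminsearch; the tolerance and iteration values below are arbitrary choices for illustration:

    opts = optimset ("TolFun", 1e-10, "TolX", 1e-10, "MaxIter", 500);
    f = @(p) (p(1) - 1)^2 + (p(2) + 2)^2;        % simple example objective
    [pmin, fval] = fminsearch (f, [0; 0], opts); % pmin approaches [1; -2]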