# Have you implemented gradient descent including covariance?

Hello dear fellows, I'm implementing gradient descent (in this case ascent) to find the maximum log-likelihood of a complex function by varying its parameters, fitting it to real measured values. The implementation using only the first-order derivative works, but poorly, because some of the parameters are strongly correlated (not independent) and the convergence sometimes gets out of control. So I'm trying to implement it using the covariance as well, via the gradient operator (symbolized by an inverted triangle, ∇). The covariance matrix is done; I'm just not getting the X update for the iteration to work. Currently I'm using NewX = (Identity + alpha*gradient)*X, but I'm not sure this is right. Can anyone help?
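To make the question concrete, here is a minimal sketch of the two updates I described. All names (`X`, `grad`, `C`, `alpha`) are placeholders, and `C` stands in for the covariance-style matrix from my derivation:

```python
import numpy as np

def first_order_step(X, grad, alpha):
    # The plain gradient-ascent update that "works, but poorly":
    # move X a small step along the log-likelihood gradient.
    return X + alpha * grad

def covariance_step(X, C, alpha):
    # The update I am currently trying:
    # NewX = (Identity + alpha*C) @ X,
    # where C is the covariance-style matrix of the parameters.
    I = np.eye(len(X))
    return (I + alpha * C) @ X
```

Is the multiplicative form in `covariance_step` the right way to bring the covariance into the iteration, or should it enter the update differently?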