$p''(X)[H]$. Thus the Hessian of $p$ is matrix positive and, since in the noncommutative setting positive polynomials are sums of squares, we obtain the following theorem.

Theorem 2.8. If $p$ is matrix convex, then its Hessian $p''(x)[h]$ is a sum of squares.

We illustrate this by example in the case $k = 2$.

Example 2.9. The one-variable polynomial $p = x^4$ is not matrix convex.
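The failure of matrix convexity for $p = x^4$ can be checked numerically. The pair of symmetric $2\times 2$ matrices below is our own illustrative choice (not from the text): midpoint matrix convexity would require $(A^4 + B^4)/2 - ((A+B)/2)^4 \succeq 0$, and here that difference has a negative diagonal entry.

```python
import numpy as np

# Illustrative check (our own example matrices) that p(x) = x^4 is not
# matrix convex: midpoint convexity would require
#   (A^4 + B^4)/2 - ((A + B)/2)^4  to be positive semidefinite
# for all symmetric A, B, but it fails for the pair below.
A = np.array([[10.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])

lhs = (np.linalg.matrix_power(A, 4) + np.linalg.matrix_power(B, 4)) / 2
rhs = np.linalg.matrix_power((A + B) / 2, 4)
D = lhs - rhs

print(D[1, 1])                      # -5.25: a negative diagonal entry,
print(np.linalg.eigvalsh(D).min())  # so D has a negative eigenvalue
```

Since a positive semidefinite matrix cannot have a negative diagonal entry, this single evaluation already witnesses the failure of convexity.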

On the other hand, the positive functionals on $\Sigma^2$ separate the points of $\mathbb{R}\langle x, x^*\rangle_k$; see [HMP04] for details. Assume that $p \notin \Sigma^2$ and let $k \geq (d+2)/2$, so that $p \in \mathbb{R}\langle x, x^*\rangle_{2k-2}$. Then there exist a tuple $M$ of operators acting on a Hilbert space $H$ of dimension $N(k)$ and a vector $\xi \in H$ such that $0 \leq \langle p(M, M^*)\xi, \xi\rangle = L(p) < 0$, a contradiction. $\square$

When compared to the commutative framework, this theorem is stronger in the sense that it does not assume strict positivity of $p$ on a well chosen "spectrum".
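The contradiction above follows the standard separation-plus-GNS pattern; the following is a sketch in our own paraphrase of that standard argument, not the text's exact wording:

```latex
% Sketch (standard separation + GNS argument, paraphrased).
% If p \notin \Sigma^2, Hahn--Banach separation yields a linear functional
%   L : \mathbb{R}\langle x, x^*\rangle_{2k-2} \to \mathbb{R}
% with L \geq 0 on \Sigma^2 and L(p) < 0.  The semi-inner product
%   \langle q, r \rangle := L(r^* q)
% and the GNS construction produce a Hilbert space H of dimension N(k),
% a tuple M of multiplication operators, and a vector \xi with
\[
  \langle p(M, M^*)\,\xi, \xi \rangle \;=\; L(p) \;<\; 0,
\]
% while matrix positivity of p forces
% \langle p(M, M^*)\xi, \xi\rangle \geq 0, a contradiction.
```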

Here $r(x) = (J - L_A(x))^{-1}C^*$, and the Hessian of $r(x)$ is then obtained by differentiating this expression twice in $x$. The heuristic argument is that there is an $X \in \mathbb{S}_n(\mathbb{R}^g)$ (with $n$ as large as necessary) close to $0$ and a vector $v$ so that $r(X)v$ has components $z_1, \ldots, z_d \in \mathbb{R}^n$ which are linearly independent. A minimality hypothesis on the descriptor realization allows an argument similar to that of the CHSY-Lemma to prevail, with the conclusion that $\{L_A[H]\, r(X)v : H \in \mathbb{S}_n(\mathbb{R}^g)\}$ has small codimension and, as a consequence, the relevant quadratic form is nearly positive definite.
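The codimension heuristic can be played out numerically. In the sketch below, all data are our own arbitrary choices (random symmetric pencil coefficients $A_j$, a random column $C^*$, $J$ taken to be the identity, and small dimensions $d, g, n$), not taken from the text; it evaluates $r(X)v$ for a tuple $X$ close to $0$ and measures the codimension of the span of $\{L_A[H]\, r(X)v\}$ as $H$ ranges over a basis of $\mathbb{S}_n(\mathbb{R}^g)$, which comes out small.

```python
import numpy as np

rng = np.random.default_rng(0)
d, g, n = 2, 2, 3   # pencil size, number of variables, matrix size (arbitrary)

def sym(M):
    # Symmetrize a square matrix.
    return (M + M.T) / 2

# Hypothetical pencil data: L_A(x) = sum_j A_j x_j with symmetric A_j and a
# column C*; J is taken to be the identity.  All choices are illustrative.
A = [sym(rng.standard_normal((d, d))) for _ in range(g)]
Cstar = rng.standard_normal((d, 1))

def L_A(X):
    # Evaluate the linear pencil at a tuple X of n x n matrices.
    return sum(np.kron(A[j], X[j]) for j in range(g))

# A tuple X in S_n(R^g) close to 0, and a vector v, as in the heuristic.
X = [0.1 * sym(rng.standard_normal((n, n))) for _ in range(g)]
v = rng.standard_normal(n)
rXv = np.linalg.solve(np.eye(d * n) - L_A(X), np.kron(Cstar, np.eye(n))) @ v

# Span of { L_A[H] r(X)v : H in S_n(R^g) }, built from a basis of S_n(R^g).
vectors = []
for j in range(g):
    for a in range(n):
        for b in range(a, n):
            E = np.zeros((n, n))
            E[a, b] = E[b, a] = 1.0
            H = [E if i == j else np.zeros((n, n)) for i in range(g)]
            vectors.append(L_A(H) @ rXv)

span_dim = int(np.linalg.matrix_rank(np.column_stack(vectors)))
codim = d * n - span_dim
print(span_dim, codim)
```

With generic data the span fills all or nearly all of $\mathbb{R}^{dn}$, which is the "small codimension" conclusion the heuristic is after.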