namespace Eigen {

/** \eigenManualPage TutorialMatrixArithmetic Matrix and vector arithmetic

This page aims to provide an overview and some details on how to perform arithmetic
between matrices, vectors and scalars with Eigen.

\eigenAutoToc

\section TutorialArithmeticIntroduction Introduction

Eigen offers matrix/vector arithmetic operations either through overloads of common
C++ arithmetic operators such as +, -, *, or through special methods such as dot(),
cross(), etc. For the Matrix class (matrices and vectors), operators are only
overloaded to support linear-algebraic operations. For example,
\c matrix1 \c * \c matrix2 means matrix-matrix product, and \c vector \c + \c scalar
is just not allowed. If you want to perform all kinds of array operations, not
linear algebra, see the \ref TutorialArrayClass "next page".

\section TutorialArithmeticAddSub Addition and subtraction

The left hand side and right hand side must, of course, have the same numbers of
rows and of columns. They must also have the same \c Scalar type, as Eigen doesn't
do automatic type promotion. The operators at hand here are:
\li binary operator + as in \c a+b
\li binary operator - as in \c a-b
\li unary operator - as in \c -a
\li compound operator += as in \c a+=b
\li compound operator -= as in \c a-=b
\section TutorialArithmeticScalarMulDiv Scalar multiplication and division

Multiplication and division by a scalar is very simple too. The operators at hand
here are:
\li binary operator * as in \c matrix*scalar
\li binary operator * as in \c scalar*matrix
\li binary operator / as in \c matrix/scalar
\li compound operator *= as in \c matrix*=scalar
\li compound operator /= as in \c matrix/=scalar
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_scalar_mul_div.cpp
</td>
<td>
\verbinclude tut_arithmetic_scalar_mul_div.out
</td></tr>
</table>
\section TutorialArithmeticMentionXprTemplates A note about expression templates

This is an advanced topic that we explain on \ref TopicEigenExpressionTemplates
"this page", but it is useful to just mention it now. In Eigen, arithmetic operators
such as \c operator+ don't perform any computation by themselves, they just return
an "expression object" describing the computation to be performed. The actual
computation happens later, when the whole expression is evaluated, typically in
\c operator=. While this might sound heavy, any modern optimizing compiler is able
to optimize away that abstraction and the result is perfectly optimized code. For
example, when you do:
\code
VectorXf a(50), b(50), c(50), d(50);
...
a = 3*b + 4*c + 5*d;
\endcode
Eigen compiles it to just one for loop, so that the arrays are traversed only once.
Simplifying (e.g. ignoring SIMD optimizations), this loop looks like this:
\code
for(int i = 0; i < 50; ++i)
  a[i] = 3*b[i] + 4*c[i] + 5*d[i];
\endcode
Thus, you should not be afraid of using relatively large arithmetic expressions
with Eigen: it only gives Eigen more opportunities for optimization.

\section TutorialArithmeticTranspose Transposition and conjugation

The transpose \f$a^T \f$, conjugate \f$\bar{a} \f$, and adjoint (i.e., conjugate
transpose) \f$a^* \f$ of a matrix or vector \f$a \f$ are obtained by the member
functions \link DenseBase::transpose() transpose()\endlink,
\link MatrixBase::conjugate() conjugate()\endlink, and
\link MatrixBase::adjoint() adjoint()\endlink, respectively.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_transpose_conjugate.cpp
</td>
<td>
\verbinclude tut_arithmetic_transpose_conjugate.out
</td></tr>
</table>
For real matrices, \c conjugate() is a no-operation, and so \c adjoint() is
equivalent to \c transpose().

As for basic arithmetic operators, \c transpose() and \c adjoint() simply return a
proxy object without doing the actual transposition. If you do
\c b \c = \c a.transpose(), then the transpose is evaluated at the same time as the
result is written into \c b. However, there is a complication here. If you do
\c a \c = \c a.transpose(), then Eigen starts writing the result into \c a before
the evaluation of the transpose is finished. Therefore, the instruction
\c a \c = \c a.transpose() does not replace \c a with its transpose, as one would
expect:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_transpose_aliasing.cpp
</td>
<td>
\verbinclude tut_arithmetic_transpose_aliasing.out
</td></tr>
</table>
This is the so-called \ref TopicAliasing "aliasing issue". In "debug mode", i.e., when \ref TopicAssertions "assertions" have not been disabled, such common pitfalls are automatically detected. For \em in-place transposition, as for instance in a = a.transpose(), simply use the \link DenseBase::transposeInPlace() transposeInPlace()\endlink function:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_transpose_inplace.cpp
</td>
<td>
\verbinclude tut_arithmetic_transpose_inplace.out
</td></tr>
</table>
There is also the \link MatrixBase::adjointInPlace() adjointInPlace()\endlink
function for complex matrices.

\section TutorialArithmeticMatrixMul Matrix-matrix and matrix-vector multiplication

Matrix-matrix multiplication is again done with \c operator*. Since vectors are a
special case of matrices, they are implicitly handled there too, so matrix-vector
product is really just a special case of matrix-matrix product, and so is
vector-vector outer product. Thus, all these cases are handled by just two
operators:
\li binary operator * as in \c a*b
\li compound operator *= as in \c a*=b (this multiplies on the right: \c a*=b is
equivalent to a = a*b)
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_matrix_mul.cpp
</td>
<td>
\verbinclude tut_arithmetic_matrix_mul.out
</td></tr>
</table>
For dot and cross products, you need the \link MatrixBase::dot() dot()\endlink and
\link MatrixBase::cross() cross()\endlink methods:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_dot_cross.cpp
</td>
<td>
\verbinclude tut_arithmetic_dot_cross.out
</td></tr>
</table>
Remember that the cross product is only defined for vectors of size 3, while the
dot product works for vectors of any size. When using complex numbers, Eigen's dot
product is conjugate-linear in the first variable and linear in the second variable.

\section TutorialArithmeticRedux Basic arithmetic reduction operations

Eigen also provides some reduction operations to reduce a given matrix or vector to
a single value such as the sum (computed by \link DenseBase::sum() sum()\endlink),
product (\link DenseBase::prod() prod()\endlink), or the maximum
(\link DenseBase::maxCoeff() maxCoeff()\endlink) and minimum
(\link DenseBase::minCoeff() minCoeff()\endlink) of all its coefficients.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_basic.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_basic.out
</td></tr>
</table>
The \em trace of a matrix, as returned by the function
\link MatrixBase::trace() trace()\endlink, is the sum of the diagonal coefficients;
it can also be computed, just as efficiently, as a.diagonal().sum(), as we will see
later on. There also exist variants of the \c minCoeff and \c maxCoeff functions
that return the coordinates of the respective coefficient via pointer arguments:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_minmax.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_minmax.out
</td></tr>
</table>
\section TutorialArithmeticValidity Validity of operations

Eigen checks the validity of the operations that you perform. When possible, it
checks them at compile time, producing compilation errors. These error messages can
be long and ugly, but Eigen writes the important message in
UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
\code
Matrix3f m;
Vector4f v;
v = m*v;      // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
\endcode
Of course, in many cases, for example when checking dynamic sizes, the check cannot
be performed at compile time. Eigen then uses runtime assertions. This means that
the program will abort with an error message when executing an illegal operation if
it is run in "debug mode", and it will probably crash if assertions are turned off.
\code
MatrixXf m(3,3);
VectorXf v(4);
v = m * v;    // Run-time assertion failure here: "invalid matrix product"
\endcode
For more details on this topic, see \ref TopicAssertions "this page".

*/

}