Releases: roboptim/roboptim-core
Release 3.2
Summary
- /!\ API breaking change /!\ : `Problem` and `Solver` are now only templated on the matrix type. See the migration guide in the next section. The impact on end-user code is limited; the biggest changes happen in plugin code.
- `latex`/`dvips` are no longer required for building the documentation. A minimal MathJax will be bundled instead, which explains the larger tarballs.
- Minor fixes:
  - Fix `NumericQuadraticFunction` hessian,
  - Fix some alignment issues on 32-bit systems,
  - Fix the `updateSparseBlock` helper function,
  - Fix `CachedFunction`.
- Clarify usage of the Lagrange multipliers vector λ in the `Result` structure.
- Add a new `SolverCallback` class to be used with the `Multiplexer`.
- Improve printing for several functions.
- Add `jacobian(x)` and `constraintsOutputSize()` methods to `Problem` (see the sketch after this list).
- Add support for precompiled templates of matrix types (Dense/Sparse).
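As an illustration of the two new `Problem` methods, here is a minimal sketch. The `Function` typedefs used below and the exact semantics (Jacobian of the stacked constraints at `x`, total output size of the constraints) are assumptions inferred from the method names; check the `Problem` header before relying on them.

```cpp
// Hedged sketch: pb is an existing Problem<EigenMatrixDense> (as built in the
// migration guide below) and x is a point of the problem's input size.
// Names and semantics are assumptions.
Function::size_type m = pb.constraintsOutputSize (); // total output size of all constraints
Function::matrix_t  J = pb.jacobian (x);             // Jacobian of the stacked constraints at x
```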
Migration from 3.1 to 3.2
- `Problem` and `Solver` were templated on the cost function and constraint types, for instance:

```cpp
// Specify the solver type that will be used
typedef Solver<DifferentiableFunction,
               boost::mpl::vector<LinearFunction, DifferentiableFunction> >
  solver_t;

// Deduce the problem type, which is
// Problem<DifferentiableFunction,
//         boost::mpl::vector<LinearFunction, DifferentiableFunction> >
typedef solver_t::problem_t problem_t;
```

Now, this is handled at runtime, and all that remains is the matrix type used:
```cpp
// Specify the solver type that will be used
typedef Solver<EigenMatrixDense> solver_t;

// Deduce the problem type, which is Problem<EigenMatrixDense>
typedef solver_t::problem_t problem_t;
```

- The following `Problem` constructor is now deprecated:
```cpp
// Instantiate the cost function
CostFunction cost (param);

// Create problem
solver_t::problem_t pb (cost);
```

Instead, use the boost::shared_ptr version:
```cpp
// Instantiate the cost function as a shared pointer
boost::shared_ptr<CostFunction> f (new CostFunction (param));

// Create problem from the shared pointer
solver_t::problem_t pb (f);
```

- Flags are now used to identify a function's "true" type at runtime. For instance:
```cpp
// f is a (shared) pointer to a function, and we want to cast it to a
// LinearFunction if that is possible
LinearFunction* g = 0;
if (f->asType<LinearFunction> ())
{
  g = f->castInto<LinearFunction> ();
}
```

The implementation thus relies on a cheap `static_cast` rather than an expensive `dynamic_cast`. Note that `castInto` accepts a boolean parameter to enable the `asType` check internally, and throws if the cast is invalid.
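The check can also be delegated to `castInto` itself through that boolean parameter. A minimal sketch, assuming the flag is the only argument:

```cpp
// Perform the asType check inside castInto: this throws if f is not actually
// a LinearFunction (sketch based on the description above).
LinearFunction* g = f->castInto<LinearFunction> (true);
```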
Release 3.1
Summary
- `ColMajor`/`RowMajor` support has been improved (cf. #89). The default is back to `ColMajor` since this is Eigen's default mode, but that can be changed with a CMake variable.
- Allocation checking has been improved (cf. #92).
- Multiplots are now available with the `matplotlib` plotting backend (cf. #94).
- Added `vector_t` and `bool` to the solver parameter types. As a consequence, `std::string` parameters should not rely on automatic conversion from `const char*` (cf. 7a0bbb7). Basically:
```cpp
// This will be converted to bool:
parameters["key"].value = "value";

// While this will be a string:
parameters["key"].value = std::string ("value");
```
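For the newly supported parameter types, a minimal sketch follows. The key names are placeholders, and it is assumed here that the parameter value accepts the dense `Function::vector_t` typedef directly:

```cpp
// Hedged sketch: storing the newly supported parameter types.
// Keys are placeholders; vector_t is assumed to be the dense function
// vector typedef (an Eigen vector).
Function::vector_t v (3);
v << 1., 2., 3.;
parameters["example.vector"].value = v;   // vector_t parameter
parameters["example.flag"].value = true;  // bool parameter
```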
parameters["key"].value = std::string ("value");Release 3.0
Summary
New features:
- Lots of new functions (cos, sin, etc.), operators (plus, minus, scalar, map, etc.) and decorators (cached function etc.),
- Callback system (logger, callback multiplexer),
- Simple Matplotlib support for visualization,
- Function pools,
- Set argument names (useful for plotting).
Improvements:
- Support for `Eigen::Ref`: RobOptim functions now accept blocks/segments of Eigen matrices as input (see the sketch after this list),
- Improved `CachedFunction` with an LRU cache,
- Automatic conversion of constraints/cost function types when creating problems,
- Faster forward-difference rule for finite differences.
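As an example of the `Eigen::Ref` support, a function can now be evaluated directly on a column or segment of an Eigen object without copying it into a temporary vector first. A minimal sketch, assuming `f` is a pointer to a RobOptim function of input size 4 and that functions are evaluated through `operator()`:

```cpp
// Hedged sketch: evaluating a function on a matrix block thanks to Eigen::Ref.
Eigen::MatrixXd samples (4, 10);   // each column is one candidate input
samples.setRandom ();

// No temporary copy of the column is needed anymore.
roboptim::Function::result_t res = (*f) (samples.col (0));
```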
Other changes:
- Removed exception specifiers (`void function (...) throw ()`),
- Storage order was changed from Eigen's default (column-major) to row-major. The storage order is available in the `GenericFunctionTraits` (`StorageOrder`),
- Move metaprogramming magic to `roboptim/core/detail/utility.hh`,
- Merge `roboptim/core/visualization/util.hh` with `roboptim/core/util.hh`.
Migration from 2.0 to 3.0
- Remove all the `throw ()` exception specifiers.
- For Eigen::Ref support, a new API is used. For instance, for `argument_t`:

```
argument_t&       --->  argument_ref
const argument_t& --->  const_argument_ref
```

The same goes for `vector_t`, `matrix_t`, `result_t`, `gradient_t`, `jacobian_t` and `hessian_t`.
For instance, the signature of `impl_compute` was:

```cpp
void impl_compute (result_t& result, const argument_t& argument) const throw ();
```

Now, it is:

```cpp
void impl_compute (result_ref result, const_argument_ref argument) const;
```

The reason behind this change is that we now use Eigen::Ref: const references become `const Eigen::Ref<const xxx_t>&`, while references are `Eigen::Ref<xxx_t>&`. That way, we keep signatures simple, and using Eigen::Ref makes it possible to avoid both temporary allocations and extra copies, thus increasing RobOptim's performance. However, note that you SHOULD NOT use these typedefs as return types, since that would return references to temporary objects.
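To make the new signature concrete, here is a minimal sketch of a function written against the 3.0 API. The base class name, header path and constructor signature are assumptions for illustration; adapt them to your actual function type (e.g. a `DifferentiableFunction` would also implement `impl_gradient`):

```cpp
#include <roboptim/core/function.hh>

// Minimal sketch (assumed base class, header and constructor signature):
// a scalar function of two variables using the new Eigen::Ref-based typedefs.
struct SquaredNorm : public roboptim::Function
{
  SquaredNorm ()
    : roboptim::Function (2, 1, "x0^2 + x1^2")
  {}

  // New-style signature: no throw () specifier, result_ref/const_argument_ref.
  void impl_compute (result_ref result, const_argument_ref x) const
  {
    result[0] = x[0] * x[0] + x[1] * x[1];
  }
};
```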
Release 2.0
Version 2.0 of roboptim-core now depends on Eigen for matrix computations by default.
Traits allow users to plug in their own matrix types. Support for Eigen dense and sparse matrices is built-in.
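As a hedged sketch of how the built-in traits are meant to be used: the `EigenMatrixSparse` tag and the `matrix_t` member below are assumptions, named by analogy with the `EigenMatrixDense` tag and the `GenericFunctionTraits` structure mentioned above.

```cpp
// Hedged sketch: the matrix representation is selected through a traits tag.
// matrix_t is assumed to be the matrix typedef exposed by the traits.
typedef roboptim::GenericFunctionTraits<roboptim::EigenMatrixDense>  dense_traits_t;
typedef roboptim::GenericFunctionTraits<roboptim::EigenMatrixSparse> sparse_traits_t;

dense_traits_t::matrix_t  denseJacobian;   // plain Eigen dense matrix
sparse_traits_t::matrix_t sparseJacobian;  // Eigen sparse matrix
```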