swift-nav / albatross
A framework for statistical modelling in C++.
License: MIT License
As @jbangelo suggested, it'd be nice to switch the trait inspection style from something like this:
template <typename X> class is_complete {
  template <typename T, typename = decltype(!sizeof(T))>
  static std::true_type test(int);
  template <typename T> static std::false_type test(...);

public:
  static constexpr bool value = decltype(test<X>(0))::value;
};
to something like this:
template<typename T, typename = void>
struct is_complete : std::false_type { };
template<typename T>
struct is_complete<T, typename std::enable_if<sizeof(T) != 0>::type> : std::true_type { };
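For reference, the second style is self-contained standard C++ and can be checked directly (Defined and Forward below are illustrative types, not part of albatross):

```cpp
#include <type_traits>

// The declaration-plus-specialization style from above: sizeof(T) only
// compiles for complete types, so SFINAE selects the true_type
// specialization exactly when T is complete.
template <typename T, typename = void>
struct is_complete : std::false_type {};

template <typename T>
struct is_complete<T, typename std::enable_if<sizeof(T) != 0>::type>
    : std::true_type {};

struct Defined {};  // complete type
struct Forward;     // forward-declared, never defined

static_assert(is_complete<Defined>::value, "Defined should be complete");
static_assert(!is_complete<Forward>::value, "Forward should be incomplete");
```

One caveat with either style: the result is baked in at the first point of instantiation, so the trait can give a stale answer if the type is only completed later in the translation unit.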
There is an unfortunate edge case which occurs when using a templated _call_impl in a CovarianceFunction along with the Measurement wrapper used in Gaussian process fits. For example, consider the following covariance function:
struct T {};

struct Foo : public CovarianceFunction<Foo> {
  template <typename X, typename Y>
  double _call_impl(const X &, const Y &) {
    return 0.;
  }

  double _call_impl(const T &, const T &) {
    return 1.;
  }
};
When called directly you would expect the following behavior:

Foo foo;
T t;
int x;

foo(t, t); // 1.
foo(x, t); // 0.
foo(x, x); // 0.
Basically, you would expect that a call with types T would get caught by the specialization of _call_impl while everything else would get caught by the templated catch-all. This is true, but the confusing part comes when used in a Gaussian process. Consider this example:
const auto gp = gp_from_covariance(foo);
RegressionDataset<T> some_dataset = get_dataset();
const auto fit_model = gp.fit(some_dataset);
If you inspected the resulting covariance matrix, fit_model.get_fit().train_covariance, you'd find that it consists entirely of zeros! What happens is that the call to .fit() will wrap the input with a Measurement wrapper before evaluating the covariance:
template <typename FeatureType,
typename std::enable_if<
has_call_operator<CovFunc, FeatureType, FeatureType>::value,
int>::type = 0>
CholeskyFit<FeatureType>
_fit_impl(const std::vector<FeatureType> &features,
const MarginalDistribution &targets) const {
const auto measurement_features = as_measurements(features);
Eigen::MatrixXd cov = covariance_function_(measurement_features);
return CholeskyFit<FeatureType>(features, cov, targets);
}
so foo will actually end up being called with types Measurement&lt;T&gt;, not T explicitly. Without the catch-all _call_impl in Foo it would actually behave as expected, because the covariance function is first inspected for a valid call with Measurement&lt;T&gt;, and if one is not found the Measurement&lt;&gt; wrapper is stripped off and the covariance is called with the underlying type.
To fix this behavior you could add another method to Foo that looks like:
template <typename X, typename Y>
double _call_impl(const Measurement<X>&, const Measurement<Y>&) = delete;
to force the compiler to choose the desired method ... but knowing to do so requires pretty intimate knowledge of the inner workings of albatross. Ideally we could find a way to just "do the right thing" (which I think would be to only use a Measurement method if a specialized version has been defined), but an alternative would be to emit a static failure which warns the user of the possibly confusing behavior.
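The mechanism can be reproduced outside albatross with simplified stand-ins (Measurement, Obs, and Foo below are toy versions, not the real types): once the arguments are wrapped, the non-template overload no longer matches and the templated catch-all wins silently.

```cpp
#include <cassert>

// Toy wrapper playing the role of albatross::Measurement.
template <typename FeatureType> struct Measurement {
  FeatureType value;
};

struct Obs {};  // plays the role of T above

struct Foo {
  // Templated catch-all.
  template <typename X, typename Y>
  double _call_impl(const X &, const Y &) const {
    return 0.;
  }
  // Non-template specialization for the concrete feature type.
  double _call_impl(const Obs &, const Obs &) const {
    return 1.;
  }
};
```

Calling with bare Obs arguments picks the non-template overload (exact match beats a template), but Measurement&lt;Obs&gt; arguments only match the catch-all, which is exactly how the all-zeros covariance arises.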
Each CovarianceTerm has a set of parameters, and as you compose terms into a final CovarianceFunction those parameters are aggregated. If however there are duplicate CovarianceTerms, there will be an overlap in parameter names which will almost certainly fail. To get around this, the aggregation step may need to add a suffix to duplicate parameter names and have the corresponding dispatching logic inside set_param.
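One possible shape for that aggregation step, sketched with plain std::maps rather than albatross's actual parameter store (aggregate_params and the _N suffix scheme are hypothetical):

```cpp
#include <cassert>
#include <map>
#include <string>

// Merge two parameter maps; on a name collision, append an increasing
// numeric suffix until the name is unique. set_param would need the
// inverse logic to route "length_scale_2" back to the second term.
std::map<std::string, double>
aggregate_params(const std::map<std::string, double> &lhs,
                 const std::map<std::string, double> &rhs) {
  std::map<std::string, double> out = lhs;
  for (const auto &pair : rhs) {
    std::string name = pair.first;
    int suffix = 1;
    while (out.count(name)) {
      name = pair.first + "_" + std::to_string(++suffix);
    }
    out[name] = pair.second;
  }
  return out;
}
```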
Currently all Gaussian processes in albatross assume the mean is zero. It would be nice to be able to provide an arbitrary function which can remove a priori means. There are a few ways this could happen.
- We could implement a MeanFunction class which functions very much like the CovarianceFunction. The GaussianProcessBase class would then need an additional template parameter which could default to ZeroMean.
- We could hijack the CovarianceFunction to contain a _mean_impl method, which would let us perform very similar logic to _call_impl to provide an optionally defined cov_func.mean(features) method. This would avoid the need for an additional template parameter ... but it could get confusing that the CovarianceFunction acts as a MeanFunction as well.
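A rough sketch of the first option, with illustrative names (ZeroMean, prior_mean, mean_function_) that are not part of albatross's current API:

```cpp
#include <cstddef>
#include <vector>

// Default mean function: zero everywhere.
struct ZeroMean {
  template <typename FeatureType>
  std::vector<double>
  operator()(const std::vector<FeatureType> &features) const {
    return std::vector<double>(features.size(), 0.);
  }
};

// GaussianProcessBase grows a MeanFunc parameter defaulting to ZeroMean,
// so existing code keeps its zero-mean behavior unchanged.
template <typename CovFunc, typename MeanFunc = ZeroMean>
class GaussianProcessBase {
public:
  // The mean would be subtracted from targets before fitting and added
  // back to predictions.
  template <typename FeatureType>
  std::vector<double>
  prior_mean(const std::vector<FeatureType> &features) const {
    return mean_function_(features);
  }

private:
  MeanFunc mean_function_;
};
```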
As it stands there are two different ways of specifying measurement noise. You can pass in the measurement variance through the targets, or include an IndependentNoise covariance function. The IndependentNoise, however, may behave differently from expectations: it will be applied to ALL covariance function computations, not just the training data. So if you make a prediction at some new location you'll find it contains the noise. This aligns more with a nugget (and should perhaps have its name changed). To get behavior closer to expectations there should probably be an additional way to specify measurement noise such that it only gets applied during training, not testing.
The error is reported as:
/home/peddie/albatross/third_party/cereal/include/cereal/external/rapidjson/internal/stack.h:117:13: runtime error: applying non-zero offset 16 to null pointer
Here is a CI job that shows this error.
This issue has been fixed in upstream rapidjson in this commit about 2 years ago. Unfortunately cereal bundles its own version of rapidjson directly in the repo rather than relying on upstream or using a submodule, and what’s more, the bundled version appears to be customized for cereal, meaning it’s not a simple copy-and-paste PR to the cereal project.
The switch to CRTP isn't reflected in the documentation.
[ RUN ] test_core_dataset/DatasetOperatorTester/0.test_multiply_dataset
albatross_unit_tests: albatross/third_party/eigen/Eigen/src/Core/Assign_MKL.h:132: static void Eigen::internal::Assignment<Eigen::Array<double, -1, 1, 0>, Eigen::CwiseUnaryOp<Eigen::internal::scalar_sqrt_op<double>, const Eigen::ArrayWrapper<const Eigen::Matrix<double, -1, 1, 0>>>, Eigen::internal::assign_op<double, double>, Eigen::internal::Dense2Dense>::run(DstXprType &, const Eigen::internal::Assignment<type-parameter-0-0, CwiseUnaryOp<Eigen::internal::scalar_sqrt_op<double>, type-parameter-0-1>, Eigen::internal::assign_op<double, double>, Eigen::internal::Dense2Dense, typename enable_if<vml_assign_traits<DstXprType, SrcXprNested>::EnableVml, void>::type>::SrcXprType &, const assign_op<double, double> &) [DstXprType = Eigen::Array<double, -1, 1, 0>, SrcXprType = Eigen::CwiseUnaryOp<Eigen::internal::scalar_sqrt_op<double>, const Eigen::ArrayWrapper<const Eigen::Matrix<double, -1, 1, 0>>>, Functor = Eigen::internal::assign_op<double, double>, Kind = Eigen::internal::Dense2Dense, EnableIf = void]: Assertion `dst.rows() == src.rows() && dst.cols() == src.cols()' failed.
Aborted
Looks like a mistaken assumption about sizes somewhere. This failure also occurs in:
test_multiply_with_matrix_joint
test_multiply_with_matrix_marginal
test_multiply_with_sparse_matrix_joint
test_multiply_with_sparse_matrix_marginal
test_multiply_with_vector
albatross/tests/test_sparse_gp.cc:405: Failure
Expected: ((direct_pred.covariance - iter_pred.covariance).norm()) < (1e-5), actual: 1.53745e-05 vs 1e-05
[ FAILED ] SparseGaussianProcessTest/0.test_rebase_and_update, where TypeParam = albatross::LeaveOneIntervalOut (0 ms)
Not reassuring to encounter subtle numerical differences like this.
Ransac models have several associated parameters; it'd be nice to set them up as actual model parameters so they can be tuned (or set to FixedPriors to avoid tuning).
In this CI build using the thread sanitizer, we get the following error:
[----------] 10 tests from test_groupby/GroupByTester/0, where TypeParam = albatross::BoolClassMethodGrouper
[ RUN ] test_groupby/GroupByTester/0.test_groupby_access_methods
==================
WARNING: ThreadSanitizer: heap-use-after-free (pid=6243)
Read of size 8 at 0x7b0400005210 by main thread:
#0 std::_Bit_reference::operator bool() const /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:96:17 (albatross_unit_tests+0x68360e) (BuildId: 8d743b0ee98587d31ebbfc53e9fff979f9e97fa7)
#1 albatross::gtest_suite_GroupByTester_::test_groupby_access_methods<albatross::BoolClassMethodGrouper>::TestBody() /home/runner/work/albatross/albatross/tests/test_group_by.cc:231:19 (albatross_unit_tests+0x68360e)
#2 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) <null> (albatross_unit_tests+0xa45d95) (BuildId: 8d743b0ee98587d31ebbfc53e9fff979f9e97fa7)
Previous write of size 8 at 0x7b0400005210 by main thread:
#0 operator delete(void*) <null> (albatross_unit_tests+0x3bd36e) (BuildId: 8d743b0ee98587d31ebbfc53e9fff979f9e97fa7)
#1 std::__new_allocator<unsigned long>::deallocate(unsigned long*, unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/new_allocator.h:158:2 (albatross_unit_tests+0x683606) (BuildId: 8d743b0ee98587d31ebbfc53e9fff979f9e97fa7)
#2 std::allocator_traits<std::allocator<unsigned long> >::deallocate(std::allocator<unsigned long>&, unsigned long*, unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/alloc_traits.h:496:13 (albatross_unit_tests+0x683606)
#3 std::_Bvector_base<std::allocator<bool> >::_M_deallocate() /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:650:6 (albatross_unit_tests+0x683606)
#4 std::_Bvector_base<std::allocator<bool> >::~_Bvector_base() /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:622:15 (albatross_unit_tests+0x683606)
#5 albatross::gtest_suite_GroupByTester_::test_groupby_access_methods<albatross::BoolClassMethodGrouper>::TestBody() /home/runner/work/albatross/albatross/tests/test_group_by.cc:230:26 (albatross_unit_tests+0x683606)
#6 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) <null> (albatross_unit_tests+0xa45d95) (BuildId: 8d743b0ee98587d31ebbfc53e9fff979f9e97fa7)
SUMMARY: ThreadSanitizer: heap-use-after-free /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:96:17 in std::_Bit_reference::operator bool() const
==================
The same error appears in the address sanitizer run:
[----------] 10 tests from test_groupby/GroupByTester/0, where TypeParam = albatross::BoolClassMethodGrouper
[ RUN ] test_groupby/GroupByTester/0.test_groupby_access_methods
==6346==ERROR: AddressSanitizer: heap-use-after-free on address 0x6020019c8af0 at pc 0x55912b884b77 bp 0x7ffd0a545af0 sp 0x7ffd0a545ae8
READ of size 8 at 0x6020019c8af0 thread T0
#0 0x55912b884b76 in std::_Bit_reference::operator bool() const /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:96:17
#1 0x55912bcff9cd in albatross::gtest_suite_GroupByTester_::test_groupby_access_methods<albatross::BoolClassMethodGrouper>::TestBody() /home/runner/work/albatross/albatross/tests/test_group_by.cc:231:19
#2 0x55912c2413e5 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17d13e5) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#3 0x55912c21834c in testing::Test::Run() (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17a834c) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#4 0x55912c219c10 in testing::TestInfo::Run() (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17a9c10) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#5 0x55912c21a511 in testing::TestSuite::Run() (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17aa511) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#6 0x55912c22b64b in testing::internal::UnitTestImpl::RunAllTests() (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17bb64b) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#7 0x55912c242205 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17d2205) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#8 0x55912c22aee1 in testing::UnitTest::Run() (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17baee1) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#9 0x55912c24b1da in main (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17db1da) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#10 0x7f3469968d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f) (BuildId: 69389d485a9793dbe873f0ea2c93e02efaa9aa3d)
#11 0x7f3469968e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f) (BuildId: 69389d485a9793dbe873f0ea2c93e02efaa9aa3d)
#12 0x55912b7bb2f4 in _start (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0xd4b2f4) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
0x6020019c8af0 is located 0 bytes inside of 8-byte region [0x6020019c8af0,0x6020019c8af8)
freed by thread T0 here:
#0 0x55912b87976d in operator delete(void*) (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0xe0976d) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#1 0x55912b88f380 in std::__new_allocator<unsigned long>::deallocate(unsigned long*, unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/new_allocator.h:158:2
#2 0x55912b88f380 in std::allocator_traits<std::allocator<unsigned long> >::deallocate(std::allocator<unsigned long>&, unsigned long*, unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/alloc_traits.h:496:13
#3 0x55912b88f380 in std::_Bvector_base<std::allocator<bool> >::_M_deallocate() /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:650:6
#4 0x55912bcff9a8 in std::_Bvector_base<std::allocator<bool> >::~_Bvector_base() /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:622:15
#5 0x55912bcff9a8 in albatross::gtest_suite_GroupByTester_::test_groupby_access_methods<albatross::BoolClassMethodGrouper>::TestBody() /home/runner/work/albatross/albatross/tests/test_group_by.cc:230:26
#6 0x55912c2413e5 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17d13e5) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
previously allocated by thread T0 here:
#0 0x55912b878f0d in operator new(unsigned long) (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0xe08f0d) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
#1 0x55912b88ffd5 in std::__new_allocator<unsigned long>::allocate(unsigned long, void const*) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/new_allocator.h:137:27
#2 0x55912b88ffd5 in std::allocator_traits<std::allocator<unsigned long> >::allocate(std::allocator<unsigned long>&, unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/alloc_traits.h:464:20
#3 0x55912b890f20 in std::_Bvector_base<std::allocator<bool> >::_M_allocate(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:631:21
#4 0x55912b890f20 in std::vector<bool, std::allocator<bool> >::_M_insert_aux(std::_Bit_iterator, bool) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/vector.tcc:926:29
#5 0x55912b881577 in std::vector<bool, std::allocator<bool> >::push_back(bool) /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:1109:4
#6 0x55912bd05d09 in std::vector<bool, std::allocator<bool> > albatross::map_keys<std::map, bool, albatross::RegressionDataset<int> >(std::map<bool, albatross::RegressionDataset<int> > const&) /home/runner/work/albatross/albatross/include/albatross/src/utils/map_utils.hpp:50:10
#7 0x55912bcff858 in albatross::GroupedBase<bool, albatross::RegressionDataset<int> >::keys() const /home/runner/work/albatross/albatross/include/albatross/src/indexing/group_by.hpp:102:46
#8 0x55912bcff858 in albatross::gtest_suite_GroupByTester_::test_groupby_access_methods<albatross::BoolClassMethodGrouper>::TestBody() /home/runner/work/albatross/albatross/tests/test_group_by.cc:230:39
#9 0x55912c2413e5 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/home/runner/work/albatross/albatross/build/tests/albatross_unit_tests+0x17d13e5) (BuildId: 6b72824ac0b8f5c2cc11778d09d94ef6ed006f7e)
SUMMARY: AddressSanitizer: heap-use-after-free /usr/bin/../lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_bvector.h:96:17 in std::_Bit_reference::operator bool() const
Shadow bytes around the buggy address:
0x0c0480331100: fa fa fd fd fa fa fd fa fa fa fd fd fa fa fd fd
0x0c0480331110: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fd
0x0c0480331120: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fa
0x0c0480331130: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fa
0x0c0480331140: fa fa fd fa fa fa fd fd fa fa fd fd fa fa fd fd
=>0x0c0480331150: fa fa 00 04 fa fa fd fd fa fa 00 04 fa fa[fd]fa
0x0c0480331160: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0480331170: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0480331180: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0480331190: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c04803311a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==6346==ABORTING
That doesn't guarantee it's not a false positive, but it's probably worth looking into. I didn't see anything obvious, but I do notice we use a lot of const int &x
arguments to functions defined in that test suite, which makes me wonder whether we're encountering lifetime issues.
At the moment the definition of an IndexingFunction is pretty loose. They are often used in things like cross validation (and in turn in Ransac) and are expected to be functors which take a vector of features and return a map from a group name (string) to the indices which correspond to that group.
Like the models, index functions need to be capable of dealing with a variety of different feature types which could make the CRTP approach a good option.
template <typename T>
struct IndexingFunc {
  template <typename FeatureType,
            typename std::enable_if<is_valid<T, FeatureType>::value, int>::type = 0>
  FoldIndexer operator()(const std::vector<FeatureType> &features) const {
    return derived()._call_impl(features);
  }
};
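A self-contained sketch of that CRTP shape, using a hypothetical leave-one-out grouper (IndexingFunctionBase and LeaveOneOutIndexer are illustrative names; the SFINAE guard is omitted for brevity):

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

using FoldIndexer = std::map<std::string, std::vector<std::size_t>>;

// CRTP base: the call operator forwards to the derived _call_impl, so a
// family of indexing functors shares one public interface.
template <typename Derived>
struct IndexingFunctionBase {
  template <typename FeatureType>
  FoldIndexer operator()(const std::vector<FeatureType> &features) const {
    return static_cast<const Derived &>(*this)._call_impl(features);
  }
};

// Leave-one-out: each feature becomes its own group.
struct LeaveOneOutIndexer
    : public IndexingFunctionBase<LeaveOneOutIndexer> {
  template <typename FeatureType>
  FoldIndexer _call_impl(const std::vector<FeatureType> &features) const {
    FoldIndexer indexer;
    for (std::size_t i = 0; i < features.size(); ++i) {
      indexer[std::to_string(i)] = {i};
    }
    return indexer;
  }
};
```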
Currently the covariance terms have parameters and those parameters can be aggregated and tuned in optimization routines, but we are restricted to gradient-free optimization routines. It'd be nice to be able to switch to something like L-BFGS.
How best to do this isn't clear, but it could follow the interface for call operators. A CovarianceTerm which supports gradients would be required to include a method,

std::map<std::string, double> gradient(X &x, Y &y) const

which would return a map from parameter name to gradient in the vicinity of the current parameter value for any two supported features x and y.
Alternatively, gradient could return a vector, either std:: or Eigen::, which is assumed to follow the same order as get_params_as_vector. Though subsequent concatenation of these vectors might get confusing.
Summation and other operations on CovarianceTerms would need to be defined to follow the chain rule. In order to decide whether a gradient method is simply not defined or should be assumed zero, we'd have to use trait inspection with logic along the lines of: "if the gradient is not defined but the () operator is, then the gradient must not be provided."
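Deciding whether a gradient method exists could use the standard detection idiom; the sketch below (has_gradient, WithGradient, WithoutGradient) is illustrative and assumes a const-reference signature:

```cpp
#include <map>
#include <string>
#include <type_traits>
#include <utility>

template <typename...> using void_t = void;

// Detect whether CovTerm has gradient(const X&, const Y&) const.
template <typename CovTerm, typename X, typename Y, typename = void>
struct has_gradient : std::false_type {};

template <typename CovTerm, typename X, typename Y>
struct has_gradient<
    CovTerm, X, Y,
    void_t<decltype(std::declval<const CovTerm &>().gradient(
        std::declval<const X &>(), std::declval<const Y &>()))>>
    : std::true_type {};

struct WithGradient {
  std::map<std::string, double> gradient(const double &,
                                         const double &) const {
    return {{"length_scale", 0.5}};
  }
};
struct WithoutGradient {};

static_assert(has_gradient<WithGradient, double, double>::value,
              "gradient should be detected");
static_assert(!has_gradient<WithoutGradient, double, double>::value,
              "absent gradient should not be detected");
```

Combined with the existing has_call_operator-style traits, this would let composed terms treat "no gradient method" as distinct from "gradient is zero".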
When using an overloaded function with the group_by methods, the compiler isn't happy. We get a long string of errors which are relatively easy to identify since this message repeats:
albatross::RegressionDataset<Feature>::group_by(<unresolved overloaded function type>)
but it'll be buried in a long string of template candidate failures which makes it tough to spot.
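A minimal reproduction of the underlying issue, with hypothetical names: an overload set has no single type, so template argument deduction fails; wrapping the call in a lambda (or a static_cast to one signature) resolves it.

```cpp
#include <cassert>

// An overload set: neither function alone is "parity".
int parity(int x) { return x % 2; }
double parity(double x) { return x - static_cast<long>(x); }

// Stand-in for a group_by-style method taking a grouper functor.
template <typename GrouperFunction>
auto group_key(GrouperFunction f, int feature) {
  return f(feature);
}
```

Calling group_key(parity, 3) fails to deduce GrouperFunction (the "unresolved overloaded function type" error), while group_key([](int x) { return parity(x); }, 3) compiles because the lambda forces overload resolution at the call site.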
Ideally we would:
It's often the case that we precompute a bunch of Distributions, but then need some subset of that for use as a function argument. For example we might have:
EvaluationMetric<MarginalDistribution> metric;
std::vector<RegressionDataset<X>> folds;
std::map<std::string, JointDistribution> predictions;
and we'd want to call cross_validated_scores(metric, folds, predictions), but since the metric takes a MarginalDistribution and our predictions are JointDistributions, things don't work out gracefully.
The current approach is to specialize each function that needs to do this sort of conversion (see cross_validated_scores), but ideally we'd be able to set up a set of conversion methods that would let us do:
JointDistribution joint;
MarginalDistribution x = joint;
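A minimal sketch of what such a conversion could look like, using toy distribution types (real albatross distributions hold Eigen vectors and matrices; plain std::vectors stand in here): converting a joint to a marginal keeps the mean and the diagonal of the covariance.

```cpp
#include <cstddef>
#include <vector>

struct MarginalDistribution {
  std::vector<double> mean;
  std::vector<double> variance;  // diagonal only
};

struct JointDistribution {
  std::vector<double> mean;
  std::vector<std::vector<double>> covariance;

  // Implicit conversion: drop the off-diagonal covariance terms.
  operator MarginalDistribution() const {
    MarginalDistribution marginal;
    marginal.mean = mean;
    for (std::size_t i = 0; i < covariance.size(); ++i) {
      marginal.variance.push_back(covariance[i][i]);
    }
    return marginal;
  }
};
```

An implicit conversion operator like this would let cross_validated_scores accept JointDistribution predictions wherever a MarginalDistribution is expected, though an explicit operator (or a free marginal_from_joint function) might be safer against accidental information loss.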
When covariance functions get composed the parameters also get aggregated ... but what happens when two sub covariance functions contain the same parameter?
We can compute the log likelihood of a model given data but it'd be nice to be able to also provide priors over the parameters which would be included in that likelihood.
This could be done at the model level. I.e., a model would have a set of parameters and then a corresponding map from parameter name to prior. It would be up to each specific model whether or not it should include priors.
It could also be done at the ParameterHandlingMixin level. In this case each parameter in a parameter store would be assigned a prior (which might default to none).
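A sketch of the ParameterHandlingMixin-level option, with toy types (GaussianPrior, Parameter, prior_log_likelihood are illustrative names): each parameter optionally carries a prior, and the priors' log densities are summed into the likelihood.

```cpp
#include <cmath>
#include <map>
#include <string>

// A simple Gaussian prior with log density
// log N(x; mu, sigma^2) = -z^2/2 - log(sigma) - log(2*pi)/2.
struct GaussianPrior {
  double mu = 0.;
  double sigma = 1.;
  double log_pdf(double x) const {
    const double z = (x - mu) / sigma;
    return -0.5 * z * z - std::log(sigma) -
           0.5 * std::log(2.0 * 3.141592653589793);
  }
};

// A parameter store entry: a value plus an optional prior (the "none"
// default contributes nothing to the likelihood).
struct Parameter {
  double value = 0.;
  bool has_prior = false;
  GaussianPrior prior;
};

// Sum of prior log densities, to be added to the data log likelihood.
double prior_log_likelihood(const std::map<std::string, Parameter> &params) {
  double ll = 0.;
  for (const auto &pair : params) {
    if (pair.second.has_prior) {
      ll += pair.second.prior.log_pdf(pair.second.value);
    }
  }
  return ll;
}
```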
Current attempts to get clang-format tests to work on Travis only work with clang 3.8.
At the moment the indexing functors (such as LeaveOneOut and LeaveOneGroupOut) are effectively just functions. A common use case however looks like this:
LeaveOneOut loo;
const auto indexer = loo(dataset);
const auto folds = folds_from_indexer(dataset, indexer);
It'd be nice if the syntax were closer to:
LeaveOneOut loo;
const auto folds = loo.get_folds(dataset);
Alternatively, this could be added to the datasets:
LeaveOneOut loo;
const auto folds = dataset.get_folds(loo);
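The second variant could look roughly like this, with simplified stand-in types (Dataset plays the role of RegressionDataset, and get_folds here returns the raw indexer rather than materialized folds):

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

using FoldIndexer = std::map<std::string, std::vector<std::size_t>>;

// Toy leave-one-out functor: one group per feature.
struct LeaveOneOut {
  template <typename FeatureType>
  FoldIndexer operator()(const std::vector<FeatureType> &features) const {
    FoldIndexer indexer;
    for (std::size_t i = 0; i < features.size(); ++i) {
      indexer[std::to_string(i)] = {i};
    }
    return indexer;
  }
};

// Dataset grows a get_folds method that accepts any indexing functor,
// hiding the indexer step from the caller.
template <typename FeatureType>
struct Dataset {
  std::vector<FeatureType> features;

  template <typename IndexingFunction>
  FoldIndexer get_folds(const IndexingFunction &f) const {
    return f(features);  // folds_from_indexer would expand these indices
  }
};
```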
macos / Xcode Version 10.2.1 (10E1001)
In file included from
...../albatross/tests/test_model_metrics.cc:15:
...../albatross/include/albatross/Tune:16:10: fatal error: 'nlopt.hpp' file not found
#include <nlopt.hpp>