Comments (6)
No problem Pietro! I love it when people have the courage to simplify code and then take responsibility for fixing the resulting setbacks. That's way better than someone who is afraid to break anything ;)
from clad.
@PetroZarytskyi This is also related to #904; please take a look.
I am adding another reproducer, which fails with the same errors. Both reproducers could be added as tests, or at least verified somehow, before the corresponding PR is merged.
#include "clad/Differentiator/Differentiator.h"

#include <algorithm>
#include <iostream>
#include <vector>

/// Calculates the binomial coefficient n over k.
/// Equivalent to TMath::Binomial, but inlined.
inline double binomial(int n, int k)
{
   if (k == 0 || n == k)
      return 1;
   int k1 = std::min(k, n - k);
   int k2 = n - k1;
   double fact = k2 + 1;
   for (double i = k1; i > 1.; --i) {
      fact *= (k2 + i) / i;
   }
   return fact;
}
/// The caller needs to make sure that there is at least one coefficient.
inline double bernstein(double x, double xmin, double xmax, double *coefs, int nCoefs)
{
   double xScaled = (x - xmin) / (xmax - xmin); // rescale to [0,1]
   int degree = nCoefs - 1;                     // n+1 polys of degree n

   if (degree == 0) { // constant case: a single coefficient
      return coefs[0];
   } else if (degree == 1) {
      double a0 = coefs[0];      // c0
      double a1 = coefs[1] - a0; // c1 - c0
      return a1 * xScaled + a0;
   } else if (degree == 2) {
      double a0 = coefs[0];            // c0
      double a1 = 2 * (coefs[1] - a0); // 2 * (c1 - c0)
      double a2 = coefs[2] - a1 - a0;  // c0 - 2 * c1 + c2
      return (a2 * xScaled + a1) * xScaled + a0;
   }

   double t = xScaled;
   double s = 1. - xScaled;
   double result = coefs[0] * s;
   for (int i = 1; i < degree; i++) {
      result = (result + t * binomial(degree, i) * coefs[i]) * s;
      t *= xScaled;
   }
   result += t * coefs[degree];
   return result;
}
double wrapper(double *params)
{
   double RooNLLVarNewWeightSum = 0.0;
   double RooNLLVarNewResult = 0.0;
   double t5[] = {params[0], params[1], params[2], params[3]};
   for (int loopIdx0 = 0; loopIdx0 < 69; loopIdx0++) {
      const double t7 = bernstein(2.0, 0.0, 100.0, t5, 4);
      const double t8 = t7 / 4;
      RooNLLVarNewResult += t8;
   }
   return RooNLLVarNewResult;
}
int main()
{
   std::vector<double> parametersVec = {0.25, 0.25, 0.5, 1, 1.5, 2};
   double d_params[] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};

   auto f_dx = clad::gradient(wrapper);
   f_dx.execute(parametersVec.data(), d_params);

   std::cout << "autodiff: " << std::endl;
   for (int i = 0; i < 6; ++i) {
      std::cout << d_params[i] << std::endl;
   }

   auto numDiff = [&](int i) {
      const double eps = 1e-6;
      std::vector<double> p{parametersVec};
      p[i] = parametersVec[i] - eps;
      double nllValDown = wrapper(p.data());
      p[i] = parametersVec[i] + eps;
      double nllValUp = wrapper(p.data());
      return (nllValUp - nllValDown) / (2 * eps);
   };

   std::cout << "num diff: " << std::endl;
   for (int i = 0; i < 6; ++i) {
      std::cout << numDiff(i) << std::endl;
   }
   return 0;
}
Or even smaller:
#include <Math/CladDerivator.h>

#include <iostream>
#include <vector>

double wrapper(double *params)
{
   double out = 0.0;
   for (int i = 0; i < 1; i++) {
      double arr[] = {1.};
      out += arr[0] * params[0];
   }
   return out;
}

#pragma clad ON
void wrapper_req()
{
   clad::gradient(wrapper, "params");
}
#pragma clad OFF

void Repro()
{
   std::vector<double> parametersVec = {0.5};
   std::vector<double> gradientVec(parametersVec.size());

   wrapper(parametersVec.data());
   wrapper_grad(parametersVec.data(), gradientVec.data());

   std::cout << "Clad diff:" << std::endl;
   for (std::size_t i = 0; i < parametersVec.size(); ++i) {
      std::cout << gradientVec[i] << std::endl;
   }

   auto numDiff = [&](int i) {
      const double eps = 1e-6;
      std::vector<double> p{parametersVec};
      p[i] = parametersVec[i] - eps;
      double nllValDown = wrapper(p.data());
      p[i] = parametersVec[i] + eps;
      double nllValUp = wrapper(p.data());
      return (nllValUp - nllValDown) / (2 * eps);
   };

   std::cout << "Num diff:" << std::endl;
   for (std::size_t i = 0; i < parametersVec.size(); ++i) {
      std::cout << numDiff(i) << std::endl;
   }
}
Output:
Clad diff:
1.3488e-321
Num diff:
1
Note that, unfortunately, the regression is exactly in the RooFit classes that were used in the CMS Run 1 Higgs discovery!
Hi everyone. I believe the issue in the generated code is that the value of arr[0] is popped before it is used in the reverse pass:

// forward sweep (inside a loop)
...
clad::push(_t3, arr[0]);
out += clad::back(_t3) * params[0];
...

// reverse sweep (inside a loop)
...
clad::pop(_t3); // <- the value is popped before it is used
_d_params[0] += clad::back(_t3) * _r_d0;
...

The reverse sweep should instead be:

...
_d_params[0] += clad::back(_t3) * _r_d0;
clad::pop(_t3);
...
I will work on this. I also want to apologize that #904 caused problems; it needed a more in-depth review.