Comments (6)

zhouphd commented on June 15, 2024

@sunshineatnoon
I tried to implement the CPU mode, but it still cannot pass the runtest. Hope it helps :)

```cpp
// ------------------------------------------------------------------
// Fast R-CNN
// Copyright (c) 2015 Microsoft
// Licensed under The MIT License [see fast-rcnn/LICENSE for details]
// Written by Ross Girshick
// ------------------------------------------------------------------

#include "caffe/fast_rcnn_layers.hpp"

namespace caffe {

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::LayerSetUp(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  SmoothL1LossParameter loss_param = this->layer_param_.smooth_l1_loss_param();
  sigma2_ = loss_param.sigma() * loss_param.sigma();
  has_weights_ = (bottom.size() >= 3);
  if (has_weights_) {
    CHECK_EQ(bottom.size(), 4) << "If weights are used, must specify both "
        "inside and outside weights";
  }
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Reshape(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  LossLayer<Dtype>::Reshape(bottom, top);
  CHECK_EQ(bottom[0]->channels(), bottom[1]->channels());
  CHECK_EQ(bottom[0]->height(), bottom[1]->height());
  CHECK_EQ(bottom[0]->width(), bottom[1]->width());
  if (has_weights_) {
    CHECK_EQ(bottom[0]->channels(), bottom[2]->channels());
    CHECK_EQ(bottom[0]->height(), bottom[2]->height());
    CHECK_EQ(bottom[0]->width(), bottom[2]->width());
    CHECK_EQ(bottom[0]->channels(), bottom[3]->channels());
    CHECK_EQ(bottom[0]->height(), bottom[3]->height());
    CHECK_EQ(bottom[0]->width(), bottom[3]->width());
  }
  diff_.Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
  errors_.Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
  // vector of ones used to sum the per-element errors via a dot product
  ones_.Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
  for (int i = 0; i < bottom[0]->count(); ++i) {
    ones_.mutable_cpu_data()[i] = Dtype(1);
  }
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  int count = bottom[0]->count();
  caffe_sub(
      count,
      bottom[0]->cpu_data(),
      bottom[1]->cpu_data(),
      diff_.mutable_cpu_data());    // d := b0 - b1
  if (has_weights_) {
    // apply "inside" weights
    caffe_mul(
        count,
        bottom[2]->cpu_data(),
        diff_.cpu_data(),
        diff_.mutable_cpu_data());  // d := w_in * (b0 - b1)
  }

  // f(x) = 0.5 * sigma^2 * x^2    if |x| < 1 / sigma^2
  //        |x| - 0.5 / sigma^2    otherwise
  const Dtype* in = diff_.cpu_data();
  Dtype* out = errors_.mutable_cpu_data();
  for (int index = 0; index < count; ++index) {
    Dtype val = in[index];
    // use fabs, not abs: integer abs() truncates Dtype values to ints
    Dtype abs_val = fabs(val);
    if (abs_val < 1.0 / sigma2_) {
      out[index] = 0.5 * val * val * sigma2_;
    } else {
      out[index] = abs_val - 0.5 / sigma2_;
    }
  }

  if (has_weights_) {
    // apply "outside" weights
    caffe_mul(
        count,
        bottom[3]->cpu_data(),
        errors_.cpu_data(),
        errors_.mutable_cpu_data());  // e := w_out * SmoothL1(w_in * (b0 - b1))
  }

  Dtype loss = caffe_cpu_dot(count, ones_.cpu_data(), errors_.cpu_data());
  top[0]->mutable_cpu_data()[0] = loss / bottom[0]->num();
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  int count = diff_.count();
  // f'(x) = sigma^2 * x    if |x| < 1 / sigma^2
  //         sign(x)        otherwise
  // Write the derivative back into diff_ in place (as the GPU kernel does):
  // the axpby below reads diff_, so writing it into errors_ instead would
  // silently propagate the raw difference and fail the gradient check.
  Dtype* out = diff_.mutable_cpu_data();
  for (int index = 0; index < count; ++index) {
    Dtype val = out[index];
    Dtype abs_val = fabs(val);
    if (abs_val < 1.0 / sigma2_) {
      out[index] = sigma2_ * val;
    } else {
      out[index] = (Dtype(0) < val) - (val < Dtype(0));
    }
  }

  for (int i = 0; i < 2; ++i) {
    if (propagate_down[i]) {
      const Dtype sign = (i == 0) ? 1 : -1;
      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();
      caffe_cpu_axpby(
          count,                           // count
          alpha,                           // alpha
          diff_.cpu_data(),                // x
          Dtype(0),                        // beta
          bottom[i]->mutable_cpu_diff());  // y
      if (has_weights_) {
        // Scale by "inside" weight
        caffe_mul(
            count,
            bottom[2]->cpu_data(),
            bottom[i]->cpu_diff(),
            bottom[i]->mutable_cpu_diff());
        // Scale by "outside" weight
        caffe_mul(
            count,
            bottom[3]->cpu_data(),
            bottom[i]->cpu_diff(),
            bottom[i]->mutable_cpu_diff());
      }
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(SmoothL1LossLayer);
#endif

INSTANTIATE_CLASS(SmoothL1LossLayer);
REGISTER_LAYER_CLASS(SmoothL1Loss);

}  // namespace caffe
```
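To sanity-check the forward/backward math in the layer above outside of Caffe, here is a small NumPy sketch of the same smooth-L1 function and its derivative (function names are mine, not from the repo):

```python
import numpy as np

def smooth_l1_forward(diff, sigma2):
    """Per-element smooth L1: 0.5*sigma2*x^2 if |x| < 1/sigma2, else |x| - 0.5/sigma2."""
    abs_d = np.abs(diff)
    return np.where(abs_d < 1.0 / sigma2,
                    0.5 * sigma2 * diff ** 2,
                    abs_d - 0.5 / sigma2)

def smooth_l1_backward(diff, sigma2):
    """Derivative: sigma2*x in the quadratic region, sign(x) outside it."""
    return np.where(np.abs(diff) < 1.0 / sigma2,
                    sigma2 * diff,
                    np.sign(diff))
```

A finite-difference check of `smooth_l1_backward` against `smooth_l1_forward` (away from the kink at |x| = 1/sigma2) is essentially what Caffe's runtest gradient checker does for this layer.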

from fast-rcnn.

rbgirshick commented on June 15, 2024

train_net.py requires some minor code changes to support CPU training (add a --cpu option to argparser and then handle it appropriately; see demo.py for an example of this). I might add this to the code, but for my workflow CPU-based training is not very useful. Feel free to PR the change (if you do, you should also add a CPU option to test_net.py).
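A minimal sketch of the kind of change described above, in the style of demo.py's argument handling (the flag names and the `set_caffe_mode` helper are my assumptions, not the actual patch):

```python
import argparse

def parse_args(argv=None):
    # Mirrors the argparse style used in fast-rcnn's demo.py.
    parser = argparse.ArgumentParser(description='Train a Fast R-CNN network')
    parser.add_argument('--gpu', dest='gpu_id', type=int, default=0,
                        help='GPU device id to use')
    parser.add_argument('--cpu', dest='cpu_mode', action='store_true',
                        help='use CPU mode (overrides --gpu)')
    return parser.parse_args(argv)

def set_caffe_mode(args, caffe):
    # caffe is passed in so this sketch stays importable without pycaffe.
    if args.cpu_mode:
        caffe.set_mode_cpu()
    else:
        caffe.set_mode_gpu()
        caffe.set_device(args.gpu_id)
```

The same `--cpu` flag would need to be wired into test_net.py as well, as noted above.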

sunshineatnoon commented on June 15, 2024

@ssakhavi Have you tried to train rcnn with cpu?

sunshineatnoon commented on June 15, 2024

@rbgirshick hi~ It seems that SmoothL1LossLayer has not been implemented for the CPU, so even after changing the code in train_net.py, I still cannot train on the CPU.

ericromanenghi commented on June 15, 2024

There are some layers that do not support CPU mode because they are only implemented for GPU.
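One quick way to spot such layers: GPU-only Caffe layers typically leave their CPU methods as `NOT_IMPLEMENTED` stubs, so grepping the layer sources lists the layers that will fail in CPU mode. The sketch below uses a toy directory to illustrate; in a real checkout you would point it at `src/caffe/layers/` (paths and file contents here are hypothetical):

```shell
mkdir -p /tmp/layer_demo
# Fake GPU-only layer: CPU method is a NOT_IMPLEMENTED stub.
cat > /tmp/layer_demo/roi_pooling_layer.cpp <<'EOF'
void Forward_cpu() { NOT_IMPLEMENTED; }
EOF
# Fake layer with a real CPU implementation.
cat > /tmp/layer_demo/relu_layer.cpp <<'EOF'
void Forward_cpu() { /* real CPU implementation */ }
EOF
# List files containing NOT_IMPLEMENTED stubs.
grep -l NOT_IMPLEMENTED /tmp/layer_demo/*.cpp
# -> /tmp/layer_demo/roi_pooling_layer.cpp
```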
