Comments (15)
Found the bug! Thanks a lot for filing the issue. Fixed with PR #1891.
from burn.
Are you using the version of burn from crates.io or the version on main?
Is the model you used hosted somewhere? If not, do you mind if I take a look at it?
@skewballfox I'm using version 0.13.2 from crates.io, and I've been using some torchvision models such as resnet18, efficientnet_v2_s, and mobilenet_v2, each modified for binary classification (see below).
```python
import torch.nn as nn
from torchvision import models

class Net(nn.Module):
    def __init__(self, model='mn'):
        super(Net, self).__init__()
        if model == 'resnet':
            self.backbone = models.resnet18(weights='IMAGENET1K_V1')
            num_features = self.backbone.fc.in_features
            self.backbone.fc = nn.Linear(num_features, 1)
        elif model == "efficientnet":
            self.backbone = models.efficientnet_v2_s(weights='IMAGENET1K_V1')
            num_features = self.backbone.classifier[1].in_features
            self.backbone.classifier[1] = nn.Linear(num_features, 1)
        elif model == "mn":
            self.backbone = models.mobilenet_v2(weights="DEFAULT")
            idx = 1
            num_features = self.backbone.classifier[idx].in_features
            self.backbone.classifier[idx] = nn.Linear(num_features, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.backbone(x)
        x = self.sigmoid(x)
        return x
```
@galenoshea This is a runtime error, right? Does `cargo build` run successfully for the project that generates the error?
If it's not too much trouble and you'd rather not send us the ONNX file, could you try recreating the error with just one of the overloaded models? Trying to narrow down the search a bit.
Cargo builds successfully, and it is a runtime error.
I just tried reproducing and found that resnet18 works, while mobilenet and efficientnet have issues.
I'm using images of size 224, but I've seen similar issues before when using awkward input sizes. Specifically, this happens when, at a given layer, a dimension of odd size is halved (note the error above, `[1, 1, 4, 4]` to `[1, 1, 3, 3]`), which could be related to the size of a previous layer (`[1, 1, 7, 7]`). In this case, the forward pass works fine for all models, but the backward pass might be encountering a similar issue.
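The odd-size halving described above follows from the standard convolution output-size formula, and a quick pure-Python sketch shows why it can trip up a backward pass (the kernel/stride/padding values here are illustrative assumptions, not the exact mobilenet layer):

```python
def conv_out_size(n, kernel=3, stride=2, padding=1):
    # Standard formula: floor((n + 2p - k) / s) + 1
    return (n + 2 * padding - kernel) // stride + 1

# With stride 2, two different input sizes collapse to the same output size:
assert conv_out_size(7) == 4
assert conv_out_size(8) == 4

# So a backward pass that recomputes the input size from the output alone
# can be off by one on odd inputs -- the kind of 4x4-vs-3x3 shape mismatch
# seen in the traceback.
```

Because the mapping is not invertible, the gradient computation has to carry the original input size rather than reconstruct it from the output shape.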
Can you navigate to the specific location pointed to in the traceback (`/home/goshea/.cargo/registry/src/index.crates.io-6f17d22bba15001f/ndarray-0.15.6/src/lib.rs:1529`, or whatever location the latest version points to) and tell me the name of the variable being passed and what function it's being passed to?
I apologize, I'm not sure how to get variable info, but these are the two functions that break when using the two backends.
LibTorch backend, `tensor.rs` line 535:

```rust
/// Copies values from the argument tensor to the input tensor.
pub fn copy_(&mut self, src: &Tensor) {
    self.f_copy_(src).unwrap()
}
```
NdArray backend, `lib.rs` line 1529:

```rust
/// Private Methods
impl<A, S, D> ArrayBase<S, D>
where
    S: Data<Elem = A>,
    D: Dimension,
{
    #[inline]
    fn broadcast_unwrap<E>(&self, dim: E) -> ArrayView<'_, A, E>
    where
        E: Dimension,
    {
        #[cold]
        #[inline(never)]
        fn broadcast_panic<D, E>(from: &D, to: &E) -> !
        where
            D: Dimension,
            E: Dimension,
        {
            panic!(
                "ndarray: could not broadcast array from shape: {:?} to: {:?}",
                from.slice(),
                to.slice()
            )
        }

        match self.broadcast(dim.clone()) {
            Some(it) => it,
            None => broadcast_panic(&self.dim, &dim),
        }
    }
}
```
> I apologize, I'm not sure how to get variable info, but these are the two functions that break when using the two backends.
You're good. I was hoping the traceback pointed to something in the generated model (so we could figure out which step in burn-import needs work), but that's not the case here.
Sounds like it's a bug in the backward pass of one of Burn's ops.
CCing @nathanielsimard, @louisfd, and @laggui, maybe they have some idea.
> Cargo builds successfully, and it is a runtime error.
> I just tried reproducing and found that resnet18 works, while mobilenet and efficientnet have issues.
> I'm using images of size 224, but I've seen similar issues before when using awkward input sizes. Specifically, this happens when, at a given layer, a dimension of odd size is halved (note the error above, `[1, 1, 4, 4]` to `[1, 1, 3, 3]`), which could be related to the size of a previous layer (`[1, 1, 7, 7]`). In this case, the forward pass works fine for all models, but the backward pass might be encountering a similar issue.
Could you share one of the ONNX models so we can try to reproduce this issue?
@laggui GitHub doesn't allow me to drop the file here. Where do I share the model?

> We don't support that file type.
> Try again with GIF, JPEG, JPG, MOV, MP4, PNG, SVG, WEBM, CPUPROFILE, CSV, DMP, DOCX, FODG, FODP, FODS, FODT, GZ, JSON, JSONC, LOG, MD, ODF, ODG, ODP, ODS, ODT, PATCH, PDF, PPTX, TGZ, TXT, XLS, XLSX or ZIP.
You could upload it somewhere (e.g., Google Drive) and share the link.
Or, if it's a torchvision model, you could share the script you used to generate the ONNX model with PyTorch.
/edit: ah right, as pointed out below, GitHub supports the zip format, so you can zip the ONNX file and upload it here.
You need to zip it
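For anyone following along, wrapping the model in a zip archive is a few lines with the Python standard library (the file names are placeholders, and the stand-in payload below is only there so the snippet runs as-is):

```python
import zipfile
from pathlib import Path

model_path = Path("net.onnx")  # placeholder name for the exported model
if not model_path.exists():
    # Stand-in bytes so the example is self-contained; a real run would
    # already have the exported ONNX file on disk.
    model_path.write_bytes(b"\x08\x07")

# GitHub accepts .zip uploads, so wrap the ONNX file in an archive
with zipfile.ZipFile("net.onnx.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(model_path)
```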
Here's the zip; you'll find the function for creating the model above.
I can reproduce the issue with the provided ONNX model on both the ndarray and torch backends.

```
ndarray: could not broadcast array from shape: [1, 1, 4, 4] to: [1, 1, 3, 3]
```

The issue happens in the conv2d backward with groups, specifically this line.
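The panic text above is ndarray enforcing the usual broadcasting rules, under which only axes of size 1 can be stretched, so a size-4 axis can never become size 3. NumPy rejects the same shapes, which makes for a quick illustration (this is just the broadcasting rule, not the Burn code path):

```python
import numpy as np

grad = np.zeros((1, 1, 4, 4))

# Broadcasting can only stretch size-1 axes, so (1, 1, 4, 4) -> (1, 1, 3, 3)
# is rejected, just like ndarray's broadcast_unwrap panics above.
try:
    np.broadcast_to(grad, (1, 1, 3, 3))
except ValueError as e:
    print("rejected:", e)

# Stretching a size-1 axis is fine: (1, 1, 4, 4) -> (2, 1, 4, 4)
ok = np.broadcast_to(grad, (2, 1, 4, 4))
assert ok.shape == (2, 1, 4, 4)
```

So the bug is upstream of the panic: the conv2d backward with groups hands ndarray a gradient whose spatial size no longer matches the expected shape.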