
l4casadi's Introduction


*Due to a bug, Hessians of functions with non-scalar outputs were incorrect prior to version 1.3.0.


Learning 4 CasADi Framework

L4CasADi enables the seamless integration of PyTorch-learned models with CasADi for efficient and potentially hardware-accelerated numerical optimization. The only requirement on the PyTorch model is that it is traceable and differentiable.

Two L4CasADi examples (figures omitted): a collision-free minimum-snap trajectory through a NeRF, and energy-efficient fish navigation in turbulent flow. Colab notebooks are available for both.

arXiv: Learning for CasADi: Data-driven Models in Numerical Optimization

Talk: YouTube


If you use this framework, please cite the following two papers:

@article{salzmann2023neural,
  title={Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms},
  author={Salzmann, Tim and Kaufmann, Elia and Arrizabalaga, Jon and Pavone, Marco and Scaramuzza, Davide and Ryll, Markus},
  journal={IEEE Robotics and Automation Letters},
  doi={10.1109/LRA.2023.3246839},
  year={2023}
}
@inproceedings{salzmann2024l4casadi,
  title={Learning for CasADi: Data-driven Models in Numerical Optimization},
  author={Salzmann, Tim and Arrizabalaga, Jon and Andersson, Joel and Pavone, Marco and Ryll, Markus},
  booktitle={Learning for Dynamics and Control Conference (L4DC)},
  year={2024}
}

Projects using L4CasADi

  • Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms
    Paper | Code
  • Neural Potential Field for Obstacle-Aware Local Motion Planning
    Paper | Video | Code
  • N-MPC for Deep Neural Network-Based Collision Avoidance exploiting Depth Images
    Paper | Code

If your project is using L4CasADi and you would like to be featured here, please reach out.


Installation

Prerequisites

Whether you install from source or via pip, you will need to meet the following requirements:

  • A working build system: CMake and a CMake-compatible C++ compiler.
  • A PyTorch (>=2.0) installation in your Python environment. Verify with:
    python -c "import torch; print(torch.__version__)"

Pip Install (CPU Only)

  • Ensure the CPU version of PyTorch is installed
    pip install "torch>=2.0" --index-url https://download.pytorch.org/whl/cpu
  • Ensure all build dependencies are installed
    setuptools>=68.1
    scikit-build>=0.17
    cmake>=3.27
    ninja>=1.11
  • Run
    pip install l4casadi --no-build-isolation

From Source (CPU Only)

  • Clone the repository
    git clone https://github.com/Tim-Salzmann/l4casadi.git

  • Install all build dependencies
    pip install -r requirements_build.txt

  • Build from source
    pip install . --no-build-isolation

The --no-build-isolation flag is required for L4CasADi to find and link against the installed PyTorch.

GPU (CUDA)

CUDA installation requires nvcc, which is part of the CUDA toolkit and can be installed on Linux via sudo apt-get -y install cuda-toolkit-XX-X (where XX-X is your installed CUDA version, e.g. 12-3). Once the CUDA toolkit is installed, nvcc is commonly found at /usr/local/cuda/bin/nvcc.

Make sure nvcc -V can be executed, then run pip install l4casadi --no-build-isolation, or CUDACXX=<PATH_TO_NVCC> pip install . --no-build-isolation to build from source.

If nvcc is not automatically part of your path, you can specify the nvcc path for L4CasADi, e.g. CUDACXX=<PATH_TO_NVCC> pip install l4casadi --no-build-isolation.


Online Learning and Updating

L4CasADi supports updating the PyTorch model online in the CasADi graph. To use this feature, pass mutable=True when initializing an L4CasADi object. To update the model, call the update function on the L4CasADi object. You can optionally pass an updated model as a parameter. If no model is passed, the reference passed at initialization is assumed to have been updated and will be used.
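
A minimal sketch of this workflow (the torch.nn.Linear model and the in-place weight update are placeholders):

import casadi as cs
import torch
import l4casadi as l4c

model = torch.nn.Linear(4, 2)  # placeholder for your PyTorch model
l4c_model = l4c.L4CasADi(model, mutable=True, model_expects_batch_dim=True)

x_sym = cs.MX.sym('x', 4, 1)
y_sym = l4c_model(x_sym)

# ... modify the weights of `model` in place, e.g. by a gradient step ...
l4c_model.update()  # re-exports the reference passed at initialization
# or pass an updated model explicitly:
# l4c_model.update(updated_model)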


Naive L4CasADi

While L4CasADi was designed with efficiency in mind by internally leveraging torch's C++ interface, it can still incur overhead, which can be disproportionate for small, simple models. L4CasADi therefore additionally provides a NaiveL4CasADiModule, which directly recreates the PyTorch computational graph using CasADi operations and copies the weights, leading to a pure C computational graph without context switches to torch. However, this approach is limited to a small predefined subset of PyTorch operations: only MultiLayerPerceptron models and CPU inference are supported.

The torch framework overhead dominates for networks smaller than three hidden layers of 64 neurons each (or equivalent). For models below this size we recommend the NaiveL4CasADiModule; for larger models the overhead becomes negligible and L4CasADi should be used. For example:

import casadi as cs
import l4casadi as l4c

# MLP recreated with pure CasADi operations (no torch at evaluation time)
naive_mlp = l4c.naive.MultiLayerPerceptron(2, 128, 1, 2, 'Tanh')
l4c_model = l4c.L4CasADi(naive_mlp, model_expects_batch_dim=True)
x_sym = cs.MX.sym('x', 2, 1)
y_sym = l4c_model(x_sym)


Real-time L4CasADi

Real-time L4CasADi (formerly the "Approximated" approach in ML-CasADi) is the underlying framework powering Real-time Neural-MPC. It replaces complex models with local Taylor approximations. For certain optimization procedures (such as MPC with multiple shooting nodes) this can lead to improved optimization times. However, Real-time L4CasADi comes with many restrictions (Python only, no C(++) code generation, ...) and is therefore not a one-to-one replacement for L4CasADi. Rather, it is a complementary framework for certain special use cases.

More information here.

import casadi as cs
import numpy as np
import l4casadi as l4c

l4c_model = l4c.RealTimeL4CasADi(pyTorch_model, approximation_order=1)  # or approximation_order=2
x_sym = cs.MX.sym('x', 2, 1)
y_sym = l4c_model(x_sym)
casadi_func = cs.Function('model_rt_approx',
                          [x_sym, l4c_model.get_sym_params()],
                          [y_sym])

x = np.ones([1, size_in])  # torch needs a batch dimension; size_in is the model's input size
casadi_param = l4c_model.get_params(x)
casadi_out = casadi_func(x.transpose((-2, -1)), casadi_param)  # transpose for the vector representation expected by CasADi


Examples

import casadi as cs
import l4casadi as l4c

# pyTorch_model: any traceable, differentiable torch.nn.Module
l4c_model = l4c.L4CasADi(pyTorch_model, model_expects_batch_dim=True, device='cpu')  # device='cuda' for GPU

x_sym = cs.MX.sym('x', 2, 1)
y_sym = l4c_model(x_sym)
f = cs.Function('y', [x_sym], [y_sym])
df = cs.Function('dy', [x_sym], [cs.jacobian(y_sym, x_sym)])
ddf = cs.Function('ddy', [x_sym], [cs.hessian(y_sym, x_sym)[0]])

x = cs.DM([[0.], [2.]])
print(l4c_model(x))
print(f(x))
print(df(x))
print(ddf(x))

Please note that only casadi.MX symbolic variables are supported as input.

Multi-input multi-output functions can be realized by concatenating the symbolic inputs when passing them to the model and splitting them inside the PyTorch function, as sketched below.
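
A minimal sketch of this pattern, using a hypothetical two-input model (a state of dimension 6 and an action of dimension 3):

import casadi as cs
import torch
import l4casadi as l4c

class TwoInputModel(torch.nn.Module):
    # hypothetical model whose logical inputs arrive as one stacked vector
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(9, 6)

    def forward(self, xu):
        x, u = xu[..., :6], xu[..., 6:]  # split the concatenated input inside the model
        return self.net(torch.cat([x, u], dim=-1))

l4c_model = l4c.L4CasADi(TwoInputModel(), model_expects_batch_dim=True)
x_sym = cs.MX.sym('x', 6, 1)
u_sym = cs.MX.sym('u', 3, 1)
y_sym = l4c_model(cs.vertcat(x_sym, u_sym))  # concatenate the symbolic inputs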

To use GPU (CUDA) simply pass device="cuda" to the L4CasADi constructor.

Further examples:


Acados Integration

An example of how a PyTorch model can be used as a dynamics model in the Acados framework for Model Predictive Control can be found in examples/acados.py.

To use L4CasADi with Acados you will have to set model_external_shared_lib_dir and model_external_shared_lib_name in the AcadosOcp.solver_options accordingly.

ocp.solver_options.model_external_shared_lib_dir = l4c_model.shared_lib_dir
ocp.solver_options.model_external_shared_lib_name = l4c_model.name

From l4casadi/examples/acados.py, lines 156 to 160 at commit 421de6e:

ocp.solver_options.model_external_shared_lib_dir = self.external_shared_lib_dir
if COST == 'LINEAR_LS':
    ocp.solver_options.model_external_shared_lib_name = self.external_shared_lib_name
else:
    ocp.solver_options.model_external_shared_lib_name = self.external_shared_lib_name + ' -l' + l4c_y_expr.name


FYIs

Batch Dimension

If your PyTorch model expects a batch dimension as its first dimension (which most models do), you should pass model_expects_batch_dim=True to the L4CasADi constructor. The MX input to the L4CasADi component is then expected to be a vector of shape [X, 1]. L4CasADi will automatically add a batch dimension of 1 such that the input to the underlying PyTorch model has shape [1, X].
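
A small sketch of the resulting shapes (the ShapeProbe model is only illustrative):

import casadi as cs
import torch
import l4casadi as l4c

class ShapeProbe(torch.nn.Module):
    # toy model that reports the shape torch receives during tracing
    def forward(self, x):
        print(x.shape)  # expected: torch.Size([1, 3])
        return x.sum(-1, keepdim=True)  # matrix output of shape [1, 1]

l4c_model = l4c.L4CasADi(ShapeProbe(), model_expects_batch_dim=True)
x_sym = cs.MX.sym('x', 3, 1)  # CasADi side: vector of shape [3, 1]
y_sym = l4c_model(x_sym)      # torch side receives shape [1, 3]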


Warm Up

Note that PyTorch builds the graph on first execution. Thus, the first call(s) to the CasADi function will be slow. You can warm up the execution graph by calling the generated CasADi function one or multiple times before using it.
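
A short warm-up sketch, reusing the CasADi function f from the Examples section above:

x0 = cs.DM.zeros(2, 1)
for _ in range(3):
    f(x0)  # the first call(s) build the torch graph and are slow; subsequent calls are fast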

l4casadi's People

Contributors

tim-salzmann


l4casadi's Issues

from importlib.resources import files

Hi Tim,
Executing the example files shows the following error even though the requirements are all satisfied when pip3 install . is executed. In the version before the last update, I could run readme.py without any error, but there were some errors when running other files (acados.py and simple_nlp.py). With the last update, I get the following error with all files.

Traceback (most recent call last):
  File "readme.py", line 3, in <module>
    import l4casadi as l4c
  File "/home/vladislav/.local/lib/python3.8/site-packages/l4casadi/__init__.py", line 1, in <module>
    from importlib.resources import files
ImportError: cannot import name 'files' from 'importlib.resources' (/usr/lib/python3.8/importlib/resources.py)

OS. Ubuntu 20.04

Best,
Muhammad

OSError: liblearned_dyn.so: cannot open shared object file: No such file or directory

Environment:

Windows10-22H2-latest
WSL2 - Ubuntu22.04
Pycharm2022.3.1
python3.10 venv with CUDA 11.7

Problem:

I can run examples/readme.py and examples/simple_nlp.py successfully,
but when I run examples/acados.py, the output is:

/home/lty/venv/l4/bin/python /mnt/l/Python/MPC/l4casadi/examples/acados.py 
/mnt/l/Python/MPC/l4casadi/l4casadi/template_generation/bin/t_renderer
Warning: Please note that the following versions of CasADi  are officially supported: 3.5.6 or 3.5.5 or 3.5.4 or 3.5.3 or 3.5.2 or 3.5.1 or 3.4.5 or 3.4.0.
 If there is an incompatibility with the CasADi generated code, please consider changing your CasADi version.
Version 3.6.3 currently in use.
rm -f libacados_ocp_solver_wr.so
rm -f acados_solver_wr.o
cc -fPIC -std=c99   -O2 -I/home/lty/env/acados/include -I/home/lty/env/acados/include/acados -I/home/lty/env/acados/include/blasfeo/include -I/home/lty/env/acados/include/hpipm/include  -c -o acados_solver_wr.o acados_solver_wr.c
cc -fPIC -std=c99   -O2 -I/home/lty/env/acados/include -I/home/lty/env/acados/include/acados -I/home/lty/env/acados/include/blasfeo/include -I/home/lty/env/acados/include/hpipm/include  -c -o wr_model/wr_expl_ode_fun.o wr_model/wr_expl_ode_fun.c
cc -fPIC -std=c99   -O2 -I/home/lty/env/acados/include -I/home/lty/env/acados/include/acados -I/home/lty/env/acados/include/blasfeo/include -I/home/lty/env/acados/include/hpipm/include  -c -o wr_model/wr_expl_vde_forw.o wr_model/wr_expl_vde_forw.c
cc -fPIC -std=c99   -O2 -I/home/lty/env/acados/include -I/home/lty/env/acados/include/acados -I/home/lty/env/acados/include/blasfeo/include -I/home/lty/env/acados/include/hpipm/include  -c -o wr_model/wr_expl_vde_adj.o wr_model/wr_expl_vde_adj.c
cc -shared acados_solver_wr.o wr_model/wr_expl_ode_fun.o wr_model/wr_expl_vde_forw.o wr_model/wr_expl_vde_adj.o -o libacados_ocp_solver_wr.so -L/home/lty/env/acados/lib -lacados -lhpipm -lblasfeo -lm \
-L/mnt/l/Python/MPC/l4casadi/examples/_l4c_generated -llearned_dyn
acados was compiled without OpenMP.
Traceback (most recent call last):
  File "/mnt/l/Python/MPC/l4casadi/examples/acados.py", line 219, in <module>
    run()
  File "/mnt/l/Python/MPC/l4casadi/examples/acados.py", line 187, in run
    external_shared_lib_name=learned_dyn_model.name).solver
  File "/mnt/l/Python/MPC/l4casadi/examples/acados.py", line 83, in solver
    return AcadosOcpSolver(self.ocp())  # 求解器
  File "/home/lty/env/acados/interfaces/acados_template/acados_template/acados_ocp_solver.py", line 960, in __init__
    self.shared_lib = CDLL(self.shared_lib_name)
  File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: liblearned_dyn.so: cannot open shared object file: No such file or directory

end

I think that means there is no "liblearned_dyn.so". Could you tell me how to solve this problem? Thank you!

Real-time L4CasADi in C++

Hi Tim,

When will the C++ version of real-time L4CasADi be online? I am currently troubled by the Python GC mechanism and want to completely rewrite it in C++.

thanks

Integrating L4Casadi Generated C Code into C++ Project with CMake

Hi Tim,

I have generated C code using L4Casadi and would like to integrate it into my existing C++ project. I am currently utilizing CMake as my build system, but I'm unsure about how to properly modify the CMakeLists.txt file to include and link the L4Casadi generated C code.

Can you provide an example of how to modify the CMakeLists.txt file to include the generated C files?

I would really appreciate any guidance or examples. Thank you!

Issue when switching from l4c.L4CasADi to l4c.RealTimeL4CasADi

Hi, Tim

Sorry to bother you again.

I have always been using l4c.L4CasADi(MultiLayerPerceptron()) to convert a torch.Tensor model into a CasADi symbolic one for SQP_RTI NLP solving. To speed up the computation, I want to switch from a second-order Taylor approximation to a first-order one. Therefore, the only change in the code is from 'learned_dyn_model = l4c.L4CasADi(nn, model_expects_batch_dim = True, name = 'learned_dyn')' to 'learned_dyn_model = l4c.RealTimeL4CasADi(nn, approximation_order=1)'.

The error report is as follows:
RuntimeError: Error in Function::Function for 'RealTime_nn_mpc_expl_ode_fun' [MXFunction] at .../casadi/core/function.cpp:249:
.../casadi/core/function_internal.cpp:145: Error calling MXFunction::init for 'RealTime_nn_mpc_expl_ode_fun':
.../casadi/core/mx_function.cpp:406: RealTime_nn_mpc_expl_ode_fun::init: Initialization failed since variables [f_a, df_a, a] are free. These symbols occur in the output expressions but you forgot to declare these as inputs. Set option 'allow_free' to allow free variables.

Would you please offer some solutions or demos for using the RealTimeL4CasADi object for NLP solving?

Best regards,
Hua

propagate_const library is not available on Windows

Hi, I'm trying to use it on Windows 11. It seems that experimental/propagate_const is not available in the current standard library. Is it possible to use it on Windows, e.g. under a conda environment?

One more question: I also tried to run readme.py under a Linux conda environment, but when the code runs to

y_sym = l4c_model(x_sym)

the program dies, saying that something is wrong with the generated .so file.

thank you!

issue with t_renderer

Hi Tim! I had run my implementation with l4casadi and Acados and it was working fine. However, I recently executed pip install . again, and since then I cannot run my implementation as before. I now get the following error when I run my code. I have reinstalled l4casadi but the error stays the same.

  File "/home/vladislav/.local/lib/python3.9/site-packages/l4casadi/template_generation/render.py", line 88, in render_template
    raise Exception(f'Rendering of {in_file} failed!\n\nAttempted to execute OS command:\n{os_cmd}\n\n')
Exception: Rendering of casadi_function.in.cpp failed!

Attempted to execute OS command:
/home/vladislav/.local/lib/python3.9/site-packages/l4casadi/template_generation/bin/t_renderer '/home/vladislav/.local/lib/python3.9/site-packages/l4casadi/template_generation/c_templates_tera/**/*' 'casadi_function.in.cpp' 'y_expr.json' 'y_expr.cpp'

Any suggestions?
Best!
Muhammad

Encountered errors when pip-installing L4CasADi on a linux_aarch64 machine

Hi,
When I try to run "pip install ." to install L4CasADi, it fails during the linking of the CXX shared library libl4casadi.so.

The reported output is as follows:
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
[1/3] Building CXX object CMakeFiles/l4casadi.dir/src/l4casadi.cpp.o
[2/3] Linking CXX shared library libl4casadi.so
FAILED: libl4casadi.so
/usr/bin/ld: /app/ddc/l4casadi/libl4casadi/libtorch/lib/libtorch.so: error adding symbols: file in wrong format
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
ERROR: Failed building wheel for l4casadi
Failed to build l4casadi
ERROR: Could not build wheels for l4casadi, which is required to install pyproject.toml-based projects

Can you provide some solutions?

thanks, sincerely!

Need an example of time series input

Dear Tim Salzmann

Hello! Your project has been of great help to me. I have successfully implemented an MPC controller based on a point2point deep learning model.

Due to the long and uncertain time delay of my controlled system, I need to use a seq2point deep learning prediction model for identification. However, using l4casadi to achieve MPC control based on the seq2point model is very difficult for me, and I couldn't find a related example. Could you please add an example of using seq2point or seq2seq deep learning models to implement MPC?

Thank you again for your project's help to me,
Best wishes,
438749902

I encountered a problem when executing the "pip install l4casadi --no-build-isolation" command. What should I do?

An error occurred while configuring with CMake.
Command:
'C:\ProgramData\Anaconda3\envs\l4casadi\lib\site-packages\cmake\data\bin/cmake.exe' 'C:\Users\user\AppData\Local\Temp\pip-install-csyt4_35\l4casadi_1acbdd3d21dd4f379862581db11111ec\libl4
casadi' -G Ninja '-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\ProgramData\Anaconda3\envs\l4casadi\lib\site-packages\ninja\data\bin\ninja' -D_SKBUILD_FORCE_MSVC=1930 --no-warn-unused-cli '-DCMAKE_INSTALL_PREF
IX:PATH=C:\Users\user\AppData\Local\Temp\pip-install-csyt4_35\l4casadi_1acbdd3d21dd4f379862581db11111ec_skbuild\win-amd64-3.9\cmake-install' -DPYTHON_VERSION_STRING:STRING=3.9.19 -DSKBUILD:INTERN
AL=TRUE '-DCMAKE_MODULE_PATH:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\lib\site-packages\skbuild\resources\cmake' '-DPYTHON_EXECUTABLE:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\python.exe' '-D
PYTHON_INCLUDE_DIR:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\Include' '-DPYTHON_LIBRARY:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\libs\python39.lib' '-DPython_EXECUTABLE:PATH=C:\ProgramData\An
aconda3\envs\l4casadi\python.exe' '-DPython_ROOT_DIR:PATH=C:\ProgramData\Anaconda3\envs\l4casadi' -DPython_FIND_REGISTRY:STRING=NEVER '-DPython_INCLUDE_DIR:PATH=C:\ProgramData\Anaconda3\envs\l4cas
adi\Include' '-DPython_LIBRARY:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\libs\python39.lib' '-DPython_NumPy_INCLUDE_DIRS:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\lib\site-packages\numpy\core
include' '-DPython3_EXECUTABLE:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\python.exe' '-DPython3_ROOT_DIR:PATH=C:\ProgramData\Anaconda3\envs\l4casadi' -DPython3_FIND_REGISTRY:STRING=NEVER '-DPyth
on3_INCLUDE_DIR:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\Include' '-DPython3_LIBRARY:PATH=C:\ProgramData\Anaconda3\envs\l4casadi\libs\python39.lib' '-DPython3_NumPy_INCLUDE_DIRS:PATH=C:\Program
Data\Anaconda3\envs\l4casadi\lib\site-packages\numpy\core\include' '-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\ProgramData\Anaconda3\envs\l4casadi\lib\site-packages\ninja\data\bin\ninja' '-DCMAKE_TORCH_PATH=C:\Users\user\AppData\Roaming\Python\Python39\site-packages\torch' -DCMAKE_BUILD_TYPE:STRING=Release
Source directory:
C:\Users\user\AppData\Local\Temp\pip-install-csyt4_35\l4casadi_1acbdd3d21dd4f379862581db11111ec\libl4casadi
Working directory:
C:\Users\user\AppData\Local\Temp\pip-install-csyt4_35\l4casadi_1acbdd3d21dd4f379862581db11111ec_skbuild\win-amd64-3.9\cmake-build
Please see CMake's output for more information.

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for l4casadi
Failed to build l4casadi
ERROR: Could not build wheels for l4casadi, which is required to install pyproject.toml-based projects

l4c trouble in notebook

Dear colleagues.
The problem of CasADi-torch fusion had been troubling us for a long time, so we highly appreciate your project and will mention you in our papers. Thanks.

The problem:
If you run the following code twice in a notebook, you get the following error:
deep_model = DeepModel(5, 40)
mx_inp = cs.MX.sym('x', deep_model.input_layer.in_features, 1)
l4c.L4CasADi(deep_model, has_batch=True)(mx_inp)

deep_model = DeepModel(10, 40)
mx_inp = cs.MX.sym('x', deep_model.input_layer.in_features, 1)
l4c.L4CasADi(deep_model, has_batch=True)(mx_inp)
Error
File ~/Desktop/Docker/venv/lib/python3.9/site-packages/l4casadi/l4casadi.py:53, in L4CasADi.forward(self, inp)
50 if not self._ready:
51 self.get_ready(inp)
---> 53 out = self._ext_cs_fun(inp) # type: ignore[misc]
55 return out

File ~/Desktop/Docker/venv/lib/python3.9/site-packages/casadi/casadi.py:23339, in Function.call(self, *args, **kwargs)
23336 raise SyntaxError('Function evaluation requires all arguments to be named or none')
23337 if len(args)>0:
23338 # Ordered inputs -> return tuple

23339 ret = self.call(args)
23340 if len(ret)==0:
...

  • The input dimension N-by-M (here 10-by-1)
  • A scalar, i.e. 1-by-1
  • M-by-N if N=1 or M=1 (i.e. a transposed vector)
  • N-by-M1 if K*M1=M for some K (argument repeated horizontally)
  • N-by-P*M, indicating evaluation with multiple arguments (P must be a multiple of 1 for consistency with previous inputs)

I believe l4c somehow retains information about the previous model, but again, it only happens if you run it twice in a notebook.
I found several problems when running l4c.L4CasADi twice in a cell; they all look like l4c captures the model inside.

Nerf trajopt example on cuda

Hey Tim,

I'm trying to get the NeRF trajectory optimization example running on CUDA. I was able to get it running on CPU, and edited lines 171-179 as follows:

# --------------------------------- Load NERF -------------------------------- #
    model = DensityNeRF()
    model_path = os.path.join(os.path.dirname(__file__), "nerf_model.tar")
    model.load_state_dict(
        torch.load(model_path, map_location="cuda")["network_fn_state_dict"],
        strict=False,
    )
    # -------------------------- Create L4CasADi Module -------------------------- #
    l4c_nerf = l4c.L4CasADi(model, device="cuda")

However, when I run this, the second NLP solver seems to hang:

(l4casadi) adam@adam-ubuntu:~/Sandbox/l4casadi/examples/nerf_trajectory_optimization$ python nerf_trajectory_optimization.py 
This is Ipopt version 3.14.11, running with linear solver MUMPS 5.4.1.

Number of nonzeros in equality constraint Jacobian...:       36
Number of nonzeros in inequality constraint Jacobian.:     2700
Number of nonzeros in Lagrangian Hessian.............:       90

Total number of variables............................:       18
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        6
Total number of inequality constraints...............:      300
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:      300
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0161843e+02 1.19e+00 1.00e+02  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  3.5003351e+01 4.43e-08 2.22e+00  -1.0 4.80e+03    -  3.01e-01 1.00e+00f  1
   2  1.0827344e+01 4.02e-08 1.13e+00  -1.0 2.26e+03    -  8.49e-01 1.00e+00f  1
   3  2.7440984e+00 1.55e-09 2.47e-11  -1.0 1.85e+03    -  1.00e+00 1.00e+00f  1
   4  4.0409223e-01 2.04e-08 4.75e-01  -2.5 1.44e+03    -  9.40e-01 1.00e+00f  1
   5  7.3485968e-02 6.77e-10 2.87e-11  -2.5 1.15e+03    -  1.00e+00 1.00e+00f  1
   6  4.1873740e-02 4.36e-10 1.37e-11  -3.8 5.28e+02    -  1.00e+00 1.00e+00f  1
   7  4.0472866e-02 6.24e-11 6.51e-12  -3.8 2.85e+02    -  1.00e+00 1.00e+00f  1
   8  4.0419962e-02 9.64e-12 8.72e-12  -5.7 5.41e+01    -  1.00e+00 1.00e+00f  1
   9  4.0419926e-02 6.06e-13 1.36e-11  -8.6 1.88e+00    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 9

                                   (scaled)                 (unscaled)
Objective...............:   2.9568343777355507e-02    4.0419925943644970e-02
Dual infeasibility......:   1.3623588368538719e-11    1.8623445299792427e-11
Constraint violation....:   6.0595972684041044e-13    6.0595972684041044e-13
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   9.9614675922336054e-09    1.3617326198583336e-08
Overall NLP error.......:   9.9614675922336054e-09    1.3617326198583336e-08


Number of objective function evaluations             = 10
Number of objective gradient evaluations             = 10
Number of equality constraint evaluations            = 10
Number of inequality constraint evaluations          = 10
Number of equality constraint Jacobian evaluations   = 10
Number of inequality constraint Jacobian evaluations = 10
Number of Lagrangian Hessian evaluations             = 9
Total seconds in IPOPT                               = 0.594

EXIT: Optimal Solution Found.
/home/adam/anaconda3/envs/l4casadi/lib/python3.10/site-packages/torch/jit/_check.py:177: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
  warnings.warn(
CUDA is available! Using GPU cuda.
This is Ipopt version 3.14.11, running with linear solver MUMPS 5.4.1.

Number of nonzeros in equality constraint Jacobian...:       36
Number of nonzeros in inequality constraint Jacobian.:     5400
Number of nonzeros in Lagrangian Hessian.............:      171

Any thoughts on what the issue might be? I'm using a conda environment with python 3.10 and torch 2.3.0.

Using realtime l4casadi with acados

Hello Tim,

I am trying to use a neural network model in a real-time MPC framework that is running with acados. What is the recommended way of doing this? A bit of context:

Even when I am using a simple MLP that is wrapped with l4casadi and compiled with acados, the computation time of each MPC step becomes prohibitively expensive to run in real time.

To circumvent this I am trying to use the RealTimeL4CasADi class. However, I am unsure how to get it to work with acados. Do you have a simple use case or example that can help me get started?

Best,
Ioannis

RealTimeL4CasADi Approximation Update

Hi Tim,

Many thanks for this library! I am solving a nonlinear TO problem using CasADi where a PyTorch MLP is used as one of the constraints. Part of the input to the network are the decision variables and the other part comes from a predefined trajectory. I am not satisfied with the solution speed of the L4CasADi model, so I am trying out the Taylor approximations with RealTimeL4CasADi. I am having a hard time understanding from the examples how I can update the approximation every time I solve the problem.

My code structure is as follows:

  1. Construct the RealTimeL4CasADi and CasADi functions using get_sym_params
    x_sym = ca.MX.sym(in_N)
    f_order2 = l4c.realtime.RealTimeL4CasADi(f, approximation_order=2)
    y_sym = f_order2(x_sym)
    f_casadi_function = ca.Function("f", [x_sym, f_order2.get_sym_params()], [y_sym])

  2. Construct the optimization problem once at the beginning of the code (I am using the Opti stack for this):
    for j in range(N):
        network_input_mx = ca.vertcat(mx_param, mx_decision_var)
        constraint = f_casadi_function(network_input_mx, f_order2_params)  # (need to use f_order2.get_params(np.array(?)) first)
        constraints.append(constraint)
    for j in range(N):
        self.opti.subject_to(constraints[j] > min_value)

  3. Each loop of the code:
    while True:

    • Set numerical parameters of the new predefined trajectory and initial guesses for the decision variables
      opti.set_value(mx_param, param_value)
      opti.set_initial(mx_decision_var, decision_var_init)
    • Solve the problem
      opti.solve()

My issue is that it's not clear to me how to update the approximation after the initial construction of the optimization problem in 2. In order to construct the constraint, f_order2_params had to be computed with some numerical value, since f_order2.get_params() expects a numerical numpy array (right?). However, at that point, I don't know the trajectory a priori (since it changes every loop). For now I set it to a zero numpy array, but how can I update the approximation before each loop, when I get a new trajectory, and before each opti.solve()? I think it would make sense to update the approximation based on the input trajectory for the known parts of the network input, and on the initial guess of the decision variable for the variable part of the network input.

Hope that my question and code are clear.

Best regards,
Abdelrahman

Fail to build the project

Hi, Tim,

Thanks for this great work. I was about to try this out; I managed to install most of it but failed when trying to install the library itself with pip install . --no-build-isolation

The error message is as follows. It is also very confusing, as the error message suggests that no CUDA is found, yet it also reports the version of the cuda-toolkit (12.1). The cuda-toolkit was installed via micromamba (a conda substitute), and I tried all sorts of ways to set a custom CUDA location, but none of them helped me get it compiled.

Therefore, I would like to ask if you know how I should get the repo built with a custom CUDA installation location. Which specific files from the CUDA lib does the compilation really need? Thanks!

❯ pip3 install . --no-build-isolation
Processing /export/home/yang/git/l4casadi
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: torch in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from l4casadi==1.3.0) (2.2.1)
Requirement already satisfied: casadi>=3.6 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from l4casadi==1.3.0) (3.6.5)
Requirement already satisfied: jinja2>=3.1 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from l4casadi==1.3.0) (3.1.3)
Requirement already satisfied: numpy in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from casadi>=3.6->l4casadi==1.3.0) (1.26.4)
Requirement already satisfied: MarkupSafe>=2.0 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from jinja2>=3.1->l4casadi==1.3.0) (2.1.5)
Requirement already satisfied: filelock in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (3.13.1)
Requirement already satisfied: typing-extensions>=4.8.0 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (4.10.0)
Requirement already satisfied: sympy in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (1.12)
Requirement already satisfied: networkx in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (3.2.1)
Requirement already satisfied: fsspec in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (2024.2.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (12.1.0.106)
Requirement already satisfied: nvidia-nccl-cu12==2.19.3 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (2.19.3)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (12.1.105)
Requirement already satisfied: triton==2.2.0 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from torch->l4casadi==1.3.0) (2.2.0)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch->l4casadi==1.3.0) (12.3.101)
Requirement already satisfied: mpmath>=0.19 in /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages (from sympy->torch->l4casadi==1.3.0) (1.3.0)
Building wheels for collected packages: l4casadi
  Building wheel for l4casadi (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for l4casadi (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [107 lines of output]
      
      
      --------------------------------------------------------------------------------
      -- Trying 'Ninja' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
        Compatibility with CMake < 3.5 will be removed from a future version of
        CMake.
      
        Update the VERSION argument <min> value or use a ...<max> suffix to tell
        CMake that the project does not need compatibility with older versions.
      
      Not searching for unused variables given on the command line.
      
      -- The C compiler identification is GNU 12.3.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- The CXX compiler identification is GNU 12.3.0
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Configuring done (0.6s)
      -- Generating done (0.0s)
      -- Build files have been written to: /export/home/yang/git/l4casadi/_cmake_test_compile/build
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Ninja' generator - success
      --------------------------------------------------------------------------------
      
      Configuring Project
        Working directory:
          /export/home/yang/git/l4casadi/_skbuild/linux-x86_64-3.9/cmake-build
        Command:
          /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/cmake/data/bin/cmake /export/home/yang/git/l4casadi/libl4casadi -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/home/yang/micromamba/envs/jepa/lib/python3.9/s
ite-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/export/home/yang/git/l4casadi/_skbuild/linux-x86_64-3.9/cmake-install -DPYTHON_VERSION_STRING:STRING=3.9.18 -DSKBUILD:INTERNAL=TRUE -DCM
AKE_MODULE_PATH:PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/home/yang/micromamba/envs/jepa/bin/python3.9 -DPYTHON_INCLUDE_DIR:PATH=/home/yang/microma
mba/envs/jepa/include/python3.9 -DPYTHON_LIBRARY:PATH=/home/yang/micromamba/envs/jepa/lib/libpython3.9.so -DPython_EXECUTABLE:PATH=/home/yang/micromamba/envs/jepa/bin/python3.9 -DPython_ROOT_DIR:PATH=/home/yang/micromamb
a/envs/jepa -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/home/yang/micromamba/envs/jepa/include/python3.9 -DPython_NumPy_INCLUDE_DIRS:PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/num
py/core/include -DPython3_EXECUTABLE:PATH=/home/yang/micromamba/envs/jepa/bin/python3.9 -DPython3_ROOT_DIR:PATH=/home/yang/micromamba/envs/jepa -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/home/yang/m
icromamba/envs/jepa/include/python3.9 -DPython3_NumPy_INCLUDE_DIRS:PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/numpy/core/include -DCMAKE_MAKE_PROGRAM:FILEPATH=/home/yang/micromamba/envs/jepa/lib/pyt
hon3.9/site-packages/ninja/data/bin/ninja -DCMAKE_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ar -DCMAKE_CXX_COMPILER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_C_COMPI
LER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_RANLIB=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ranlib -DCMAKE_CXX_COMPILER_RANLIB=/home/yang/micromamba/envs/jepa/bin/x8
6_64-conda-linux-gnu-gcc-ranlib -DCMAKE_C_COMPILER_RANLIB=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_LINKER=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ld -DCMAKE_STRIP=/
home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-strip -DCMAKE_BUILD_TYPE=Release -DCMAKE_TORCH_PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch -DCMAKE_AR=/home/yang/micromamba/envs/jepa/b
in/x86_64-conda-linux-gnu-ar -DCMAKE_CXX_COMPILER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_C_COMPILER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_RANLI
B=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ranlib -DCMAKE_CXX_COMPILER_RANLIB=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_C_COMPILER_RANLIB=/home/yang/micromamba/envs/j
epa/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_LINKER=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ld -DCMAKE_STRIP=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-strip -DCMAKE_BUILD_TYPE=Rel
ease
      
      CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
        Compatibility with CMake < 3.5 will be removed from a future version of
        CMake.
      
        Update the VERSION argument <min> value or use a ...<max> suffix to tell
        CMake that the project does not need compatibility with older versions.
      
      Not searching for unused variables given on the command line.
      
      -- The C compiler identification is GNU 12.3.0
      -- The CXX compiler identification is GNU 12.3.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Could NOT find CUDA (missing: CUDA_INCLUDE_DIRS) (found version "12.1")
      CMake Warning at /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:31 (message):
        Caffe2: CUDA cannot be found.  Depending on whether you are building Caffe2
        or a Caffe2 dependent library, the next warning / error will give you more
        info.
      Call Stack (most recent call first):
        /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:87 (include)
        /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
        CMakeLists.txt:12 (find_package)
      
      
      CMake Error at /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:91 (message):
        Your installed Caffe2 version uses CUDA but I cannot find the CUDA
        libraries.  Please set the proper CUDA prefixes and / or install CUDA.
      Call Stack (most recent call first):
        /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
        CMakeLists.txt:12 (find_package)
      
      
      -- Configuring incomplete, errors occurred!
      Traceback (most recent call last):
        File "/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/skbuild/setuptools_wrap.py", line 666, in setup
          env = cmkr.configure(
        File "/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/skbuild/cmaker.py", line 357, in configure
          raise SKBuildError(msg)
      
      An error occurred while configuring with CMake.
        Command:
          /home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/cmake/data/bin/cmake /export/home/yang/git/l4casadi/libl4casadi -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/home/yang/micromamba/envs/jepa/lib/python3.9/s
ite-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/export/home/yang/git/l4casadi/_skbuild/linux-x86_64-3.9/cmake-install -DPYTHON_VERSION_STRING:STRING=3.9.18 -DSKBUILD:INTERNAL=TRUE -DCM
AKE_MODULE_PATH:PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/home/yang/micromamba/envs/jepa/bin/python3.9 -DPYTHON_INCLUDE_DIR:PATH=/home/yang/microma
mba/envs/jepa/include/python3.9 -DPYTHON_LIBRARY:PATH=/home/yang/micromamba/envs/jepa/lib/libpython3.9.so -DPython_EXECUTABLE:PATH=/home/yang/micromamba/envs/jepa/bin/python3.9 -DPython_ROOT_DIR:PATH=/home/yang/micromamb
a/envs/jepa -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/home/yang/micromamba/envs/jepa/include/python3.9 -DPython_NumPy_INCLUDE_DIRS:PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/num
py/core/include -DPython3_EXECUTABLE:PATH=/home/yang/micromamba/envs/jepa/bin/python3.9 -DPython3_ROOT_DIR:PATH=/home/yang/micromamba/envs/jepa -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/home/yang/m
icromamba/envs/jepa/include/python3.9 -DPython3_NumPy_INCLUDE_DIRS:PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/numpy/core/include -DCMAKE_MAKE_PROGRAM:FILEPATH=/home/yang/micromamba/envs/jepa/lib/pyt
hon3.9/site-packages/ninja/data/bin/ninja -DCMAKE_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ar -DCMAKE_CXX_COMPILER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_C_COMPI
LER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_RANLIB=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ranlib -DCMAKE_CXX_COMPILER_RANLIB=/home/yang/micromamba/envs/jepa/bin/x8
6_64-conda-linux-gnu-gcc-ranlib -DCMAKE_C_COMPILER_RANLIB=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_LINKER=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ld -DCMAKE_STRIP=/
home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-strip -DCMAKE_BUILD_TYPE=Release -DCMAKE_TORCH_PATH=/home/yang/micromamba/envs/jepa/lib/python3.9/site-packages/torch -DCMAKE_AR=/home/yang/micromamba/envs/jepa/b
in/x86_64-conda-linux-gnu-ar -DCMAKE_CXX_COMPILER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_C_COMPILER_AR=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_RANLI
B=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ranlib -DCMAKE_CXX_COMPILER_RANLIB=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_C_COMPILER_RANLIB=/home/yang/micromamba/envs/j
epa/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_LINKER=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-ld -DCMAKE_STRIP=/home/yang/micromamba/envs/jepa/bin/x86_64-conda-linux-gnu-strip -DCMAKE_BUILD_TYPE=Rel
ease
        Source directory:
          /export/home/yang/git/l4casadi/libl4casadi
        Working directory:
          /export/home/yang/git/l4casadi/_skbuild/linux-x86_64-3.9/cmake-build
      Please see CMake's output for more information.
      
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for l4casadi
Failed to build l4casadi
ERROR: Could not build wheels for l4casadi, which is required to install pyproject.toml-based projects

Error when using a conditional variational autoencoder

Hello !

First of all thank you for the great package.

I would like to bring to your attention an error that I am getting when trying to use l4casadi with a conditional variational autoencoder. The error is the following (when running the forward method of the trained model):

RuntimeError: It appears that you're trying to get value out of a tracing tensor with aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent control flow or similar. It may be possible to trace this with dynamic shapes; try setting tracing_mode='symbolic' in your make_fx call.

Am I doing something wrong, or will l4casadi fundamentally not work for network architectures like the CVAE, which require sampling from a probability distribution?

Best regards,
Ioannis

Unable to integrate the lib.so exported by L4Casadi and Acados into C

Hi, Tim

I constructed the OCP problem using L4CasADi and acados in Python and generated libacados_ocp_solver.so.

Aiming to deploy this SQP_RTI solver in a C environment, I developed my own main.cpp based on the template in interfaces/acados_template/acados_template/c_templates_tera/main.in.c.

All the required libs have previously been confirmed to be linked (the CMakeLists compilation passed), but it reports a "segmentation fault" when running the SQP solving code "status = {model_name}_acados_solve(acados_ocp_capsule);".

Can you provide some solutions for the above issue?

only for pytorch cpu version

Can you give a more detailed introduction to this library? Can it only be used with the CPU version of PyTorch? How does the computation speed of this version compare to the previous approach of PyTorch combined with acados?

Multiple inputs in the forward call

Hi Tim,

I have been working with l4casadi for a couple of months and now I am a bit stuck on the case where my network uses multiple inputs in the forward call. FYI, I am implementing an NN-based MPC with acados.

If I am correct, l4casadi expects a single input argument, as in the acados.py example:
res_model = self.learned_dyn(x)
In my case I would like to call it with:
res_model = self.learned_dyn(a,s) (action and state)
The thing is that in my network's forward method, I wish to modify the inputs before passing them through the network, like this (scaling):

def forward(self, a, s):

        # scale
        a = (a - self.action_mean) / self.action_std
        # scale
        s = (s - self.state_mean) / self.state_std

        # run through network
        x = torch.cat([a, s], dim=1)
        x = self.linear(x)
        x = self.hidden(x)
        x = self.out(x)

        # scale
        x = (x * self.output_std) + self.output_mean
        return x

It works with PyTorch but not when wrapped within l4casadi. I get the error:

L4CasADi.forward() takes 2 positional arguments but 3 were given

with
f_expl = self.learned_dyn(self.inputs,self.state)

Thus, considering a single-argument forward method, I have tried to extract the data:

    def forward(self, s):

        # scale
        action = s[0][0:self.action_size]
        state = s[0][self.action_size:]
        a = (action - self.action_mean) / self.action_std
        # scale
        s = (state - self.state_mean) / self.state_std

        # run through network
        x = torch.cat([a, s])
        x = self.linear(x)
        x = self.hidden(x)
        x = self.out(x)

        # scale
        x = (x * self.output_std) + self.output_mean

        return x

It is correct when called with PyTorch but with l4casadi I get the following error:

L4CasADi requires the model output to be a matrix (2 dimensions) but has
1 dimensions. For models which expects a batch dimension,
the output should be a matrix of [1, d]

  • Is it related to the model_expects_batch_dim parameter? I am not sure I understood the meaning of this parameter.
  • Does extracting the data the way I did in the forward method break some internal functionality of l4casadi or the symbolic abstraction?

Regards,
Bryan

Weird issue with using l4casadi neural network

I am running into an issue where I can't seem to use the PyTorch neural network as the dynamics model for my MPC code.

I reduced my code to the single snippet below, which you can run on your end. The outputs of the state decision variables are much different from what I would expect them to be.

It isn't violating any of my dynamics constraints, and the pytorch network performs as expected outside of this script. (It can approximate the dynamics well).

I tried replacing the dynamics constraints using the network with dynamics constraints using the ground-truth matrices (A, B), which represent the target dynamics I am trying to model. Using these matrices, everything works perfectly! However, putting the network back causes issues, and I can't figure out why the network-wrapped dynamics is causing them.

This is my code snippet below. I pre-trained my model at this point.

This is my first issue, so let me know what I am missing to make this a good issue! Thanks! :)

import casadi   as cs
import l4casadi as l4c
import torch
import torch.nn as nn
import numpy as np

def createGTMatrices():
    T = 1
    mu_GM = 1000
    R = 100

    n = np.sqrt(mu_GM / (R*R*R))
    A = np.zeros((6,6))
    B = np.zeros((6,3))

    S = np.sin(n*T)
    C = np.cos(n*T)
    
    A[0,:] = np.array([4-3*C,0,0,S/n,2*(1-C)/n,0])
    A[1,:] = np.array([6*(S-n*T),1,0,-2*(1-C)/n,(4*S-3*n*T)/n,0])
    A[2,:] = np.array([0,0,C,0,0,S/n])
    A[3,:] = np.array([3*n*S,0,0,C,2*S,0])
    A[4,:] = np.array([-6*n*(1-C),0,0,-2*S,4*C-3,0])
    A[5,:] = np.array([0,0,-n*S,0,0,C])

    B[0,:] = np.array([(1-C)/(n*n),(2*n*T-2*S)/(n*n),0])
    B[1,:] = np.array([-(2*n*T-2*S)/(n*n),(4*(1-C)/(n*n))-(3*T*T/2),0])
    B[2,:] = np.array([0,0,(1-C)/(n*n)])
    B[3,:] = np.array([S/n,2*(1-C)/n, 0])
    B[4,:] = np.array([-2*(1-C)/n,(4*S/n) - (3*T),0])
    B[5,:] = np.array([0,0,S/n])

    return A,B

class SimpleDynamicsNetwork(nn.Module):
    def __init__(self):
        super().__init__() #must call

        #setting up hidden layer

        # control input is 3, state input is 6... that makes 9
        self.layers = []
        self.layers.append(nn.Linear(9,64,bias=True))
        self.layers.append(nn.Linear(64,128,bias=True))
        self.layers.append(nn.Linear(128,256,bias=True))
        self.layers.append(nn.Linear(256,256,bias=True))
        self.layers.append(nn.Linear(256,256,bias=True))
        self.layers.append(nn.Linear(256,128,bias=True))
        self.layers.append(nn.Linear(128,64,bias=True))
        self.layers.append(nn.Linear(64,6,bias=True))
        self.model = nn.Sequential(*self.layers)
    
    def forward(self, inp):
        return self.model(inp.T).T

if __name__ == '__main__':
    device = 'cpu'
    network = SimpleDynamicsNetwork()
    network_load = torch.load('./models/hcw_simple_model.pt').to(device)
    network.load_state_dict(network_load.state_dict())

    A,B = createGTMatrices()

    l4c_model = l4c.L4CasADi(network,has_batch=False,device='cpu')

    Q = np.eye(6)
    R = np.eye(3)

    pos0 = np.ones(3)*-5
    vel0 = np.ones(3)*0.1
    x0_np = np.array([pos0,vel0]).reshape((6,1))
    x0 = cs.MX(x0_np)
    x = cs.MX.sym('x',6,10)
    u = cs.MX.sym('u',3,10)

    #setup objective
    L = cs.sum2(cs.sum1(cs.MX(np.diag(Q))*(x*x)) + cs.sum1(cs.MX(np.diag(R))*(u*u)))

    #setup dynamic constraint
    xs = cs.horzcat(x0,x[:,:-1])

    #setup using neural net
    outs_net = []
    for i in range(xs.shape[1]):
        inp_net = cs.vertcat(xs[:,i], u[:,i])
        out = l4c_model(inp_net)
        outs_net.append(out)
    pred_xs = cs.horzcat(*outs_net)

    #setup using ground truth dynamics
    # pred_xs = A@xs + B@u

    dyn_constr = (pred_xs - x).reshape((-1,1))

    #setup decision variable constraints
    m,n = x.shape
    ubx_x = [cs.inf]*(m*n)
    lbx_x = [-cs.inf]*(m*n)
    m,n = u.shape
    ubx_u = [0.1]*(m*n)
    lbx_u = [-0.1]*(m*n)

    ubx = ubx_x + ubx_u
    lbx = lbx_x + lbx_u

    #setup decision variables
    opt_x = cs.vertcat(x.reshape((-1,1)), u.reshape((-1,1)))

    #solve
    options = {'printLevel': "none"}
    solver = cs.qpsol('solver','qpoases', {'f': L, 'x': opt_x, 'g': dyn_constr}, options)
    solution = solver(lbx=lbx,ubx=ubx,lbg=0,ubg=0)
    print('\n\ndynamics constraint output:', solution['g']) 


    #use solution to get optimized control input and state vars
    decision_solution  = np.array(solution['x'])
    final_x_idx = 6*10 # dim * horizon
    x_sol = decision_solution[:final_x_idx].reshape((-1,6)).T
    u_sol = decision_solution[final_x_idx:].reshape((-1,3)).T

    print('\n\sol output1:')
    print(x_sol[:,0])

    net_x = torch.Tensor(np.vstack((x0_np,u_sol[:,0].reshape((3,1)))))
    out = network(net_x)
    print("torch output1:")
    print(out)

    print('sol output2:')
    print(x_sol[:,1])

    net_x = torch.Tensor(np.vstack((x_sol[:,0].reshape((6,1)),u_sol[:,1].reshape((3,1)))))
    out = network(net_x)
    print('torch output2:')
    print(out)

    print('sol output3:')
    print(x_sol[:,2])

    net_x = torch.Tensor(np.vstack((x_sol[:,1].reshape((6,1)),u_sol[:,2].reshape((3,1)))))
    out = network(net_x)
    print('torch output3:')
    print(out)

This is my output when I am using the neural network

\sol output1:
[-4.90752372 -4.80313523 -4.79756388  0.08495075  0.19375143  0.20483297]
torch output1:
tensor([-4.8533, -4.8541, -4.8475,  0.1944,  0.1907,  0.2049])
sol output2:
[-6.96299353 -4.73210715  0.28172673 -4.65692974 -7.25001734  0.18482127]
torch output2:
tensor([-4.7728, -4.5631, -4.5404,  0.1856,  0.2852,  0.3095])
sol output3:
[-9.1666455   0.34328211 -8.17935721 -6.80793295 -4.70581538 -4.55626854]
torch output3:
tensor([-11.8078, -11.7809,   0.5164,  -5.0308,  -6.8436,   0.2844])

This is my output when I am using the Ground Truth Matrices

\sol output1:
[-4.85330415 -4.85414128 -4.84752104  0.19442135  0.19072214  0.20493251]
torch output1:
tensor([-4.8533, -4.8541, -4.8475,  0.1944,  0.1907,  0.2049])
sol output2:
[-4.60911402 -4.62061115 -4.5902033   0.29497233  0.2752782   0.30966009]
torch output2:
tensor([-4.6091, -4.6206, -4.5902,  0.2950,  0.2753,  0.3097])
sol output3:
[-4.26134973 -4.30576849 -4.22830407  0.40155239  0.35328366  0.41407805]
torch output3:
tensor([-4.2614, -4.3058, -4.2283,  0.4016,  0.3533,  0.4141])

liblearned_dyn.so uses absolute path for .pt file – Allow Relative Path or Input Argument

I've encountered a challenge with liblearned_dyn.so: it appears to open the .pt file via an absolute (global) path. This is a problem when the project is used across different PCs whose directory structures differ.

Possible solutions:
Modify the library to accept the .pt file path as an input argument.
Or implement support for relative paths so the .pt file is located correctly.
Any advice or practical solutions would be greatly appreciated!
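
Until the path can be passed in explicitly, one possible workaround is to regenerate the library on the machine that will run it. A minimal sketch, assuming the absolute .pt path gets baked in when L4CasADi traces and compiles the model (the file name and input dimension below are illustrative):

import casadi as cs
import torch
import l4casadi as l4c

# Load the trained model from a path that exists on *this* machine.
model = torch.load('trained_net.pt', map_location='cpu')
model.eval()

l4c_model = l4c.L4CasADi(model, device='cpu')

# The first call re-traces the model and recompiles the shared library,
# so the embedded absolute paths now point at local files.
x = cs.MX.sym('x', 9, 1)  # illustrative input dimension
y = l4c_model(x)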

Support for Python 3.8

I need to run l4casadi on Python 3.8, since that makes it easier to use with ROS 2 Foxy. Would it be possible to make it backwards compatible with Python 3.8?

Thanks!

Issue with pip install

Hi,

I am having trouble with pip install. I have already checked that I meet all the prerequisites, but when I run
pip install l4casadi --no-build-isolation
I encounter the following error:

      An error occurred while building with CMake.
        Command:
          'C:\Users\marce\miniconda3\envs\testenv2\Lib\site-packages\cmake\data\bin/cmake.exe' --build . --target install --config Release --
        Install target:
          install
        Source directory:
          C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-yifbqia9\l4casadi_98057dd6bd884a00bd724e3b3930b6b3
        Working directory:
          C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-yifbqia9\l4casadi_98057dd6bd884a00bd724e3b3930b6b3\_skbuild\win-amd64-3.12\cmake-build
      Please check the install target is valid and see CMake's output for more information.

I have never used C++ or CMake. If anyone has a suggestion as to what the problem might be, I would be very thankful. Let me know if you need more information.

Thank you in advance,
Marcelo

Size of model inputs and model parameters

Hi Tim!
I am trying to use L4CasADi with a CNN model. The model's input size is ~140k and it has more than a million parameters. I am still running into CUDA memory problems, so I don't know whether L4CasADi can process this model for use with acados. The model works fine without L4CasADi. Do you have any suggestions for fixing this? Is there perhaps a limit on the number of model parameters or inputs?

Best,
Muhammad

The code portability issue for trained networks

Hi Tim,

When I try to port my whole project (an l4casadi-acados solver setup) from one computer to another (computer1 → computer2), I run into problems. The error report is as follows:
terminate called after throwing an instance of 'c10::Error'
what(): open file failed because of errno 2 on fopen: . file path : /computer1/trained_net.pt (absolute path)
A crude workaround is to recreate computer1's folder structure on computer2 and copy the *.pt file into it, but that results in poor portability.
How can the *.pt loading logic be changed from an absolute path to a relative one?

Best
Hu

l4casadi on Jetson Nano

Hi Tim,

Thank you for this great work, it helps a lot.
I wish to run l4casadi (with acados) on a Jetson Nano but as you probably know it does not support pytorch > 1.13 since Jetson Nano is limited to CUDA 10.2. Do you think one can "downgrade" l4casadi to pytorch 1.13 ? I naively replaced the following line to make it run with pytorch 1.13 to see if it would work directly:
from torch.func import vmap, jacrev, hessian replaced by from functorch import vmap, jacrev, hessian
But I got a segmentation fault after running simple_nlp.py example (at line 26).

Does it makes sense to make it work with pytorch 1.13 and it is feasible ?

Thanks in advance,
Bryan
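
The swap Bryan describes could be expressed as a guarded import, though the segmentation fault he reports suggests the incompatibility runs deeper than the import path alone. An untested sketch:

try:
    from torch.func import vmap, jacrev, hessian  # PyTorch >= 2.0
except ImportError:
    from functorch import vmap, jacrev, hessian   # PyTorch 1.13 fallback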

OSError: undefined symbol

Dear colleagues,
I have some questions about the L4CasADi installation.

Environment:
WSL2 - Ubuntu 20.04
Python 3.10 in Anaconda
CUDA 12.2
PyTorch==2.0.0

Problem:
I installed L4CasADi following the Installation instructions,
but when I run examples/readme.py the output is:

Exception has occurred: OSError
  File "/home/wen/ToolBox/L4CasADi/l4casadi/examples/simple_nlp.py", line 2, in <module>
    import l4casadi as l4c
OSError: /home/wen/anaconda3/envs/Pytorch/lib/python3.10/site-packages/l4casadi/lib/libl4casadi.so: undefined symbol: _ZN5torch3jit22optimize_for_inferenceERNS0_6ModuleERKSt6vectorISsSaISsEE

Could you tell me how to solve this problem? Thank you.
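
This undefined-symbol error usually indicates a mismatch between the libtorch that libl4casadi.so was compiled against and the PyTorch that Python imports at runtime. A quick diagnostic sketch before rebuilding in the same environment with pip install l4casadi --no-build-isolation:

import torch

print(torch.__version__)  # should match the PyTorch present when l4casadi was built
print(torch.__file__)     # shows which installation (e.g. which conda env) is imported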

Error when loading a trained net

My model is loaded with

net = torch.load('save.pt')
net.to('cpu')

Here is the model summary:

Sequential(
  (0): LayerNorm((35,), eps=1e-05, elementwise_affine=True)
  (1): Linear(in_features=35, out_features=512, bias=True)
  (2): Softplus(beta=1, threshold=20)
  (3): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
  (4): Dropout(p=0.1, inplace=False)
  (5): Linear(in_features=512, out_features=128, bias=True)
  (6): Softplus(beta=1, threshold=20)
  (7): Dropout(p=0.1, inplace=False)
  (8): Linear(in_features=128, out_features=3, bias=True)
)

When I use l4casadi as below

l4c_model = l4c.L4CasADi(net, has_batch=True)
x = ca.MX.sym('input', 35)
y = l4c_model(x)

the error is below:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[6], line 3
      1 l4c_model = l4c.L4CasADi(net, has_batch=True)
      2 x = ca.MX.sym('input',35)
----> 3 y = l4c_model(x)

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/l4casadi/l4casadi.py:30, in L4CasADi.__call__(self, *args)
     29 def __call__(self, *args):
---> 30     return self.forward(*args)

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/l4casadi/l4casadi.py:42, in L4CasADi.forward(self, inp)
     39         raise ValueError("For batched PyTorch models only vector inputs are allowed.")
     41 if not self._ready:
---> 42     self.get_ready(inp)
     44 out = self._ext_cs_fun(inp)  # type: ignore[misc]
     46 return out

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/l4casadi/l4casadi.py:56, in L4CasADi.get_ready(self, inp)
     53 rows, cols = inp.shape  # type: ignore[attr-defined]
     55 self.maybe_make_generation_dir()
---> 56 self.export_torch_traces(rows, cols)
     57 self.generate_cpp_function_template(rows, cols)
     58 self.compile_cs_function()

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/l4casadi/l4casadi.py:123, in L4CasADi.export_torch_traces(self, rows, cols)
    120 torch.jit.trace(self.model, d_inp).save((out_folder / f'{self.name}_forward.pt').as_posix())
    122 if self.has_batch:
--> 123     torch.jit.trace(
    124         make_fx(vmap(jacrev(self.model)))(d_inp), d_inp).save(
    125         (out_folder / f'{self.name}_jacrev.pt').as_posix())
    127     torch.jit.trace(
    128         make_fx(vmap(hessian(self.model)))(d_inp), d_inp).save(
    129         (out_folder / f'{self.name}_hess.pt').as_posix())
    130 else:

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/jit/_trace.py:794, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs)
    792         else:
    793             raise RuntimeError("example_kwarg_inputs should be a dict")
--> 794     return trace_module(
    795         func,
    796         {"forward": example_inputs},
    797         None,
    798         check_trace,
    799         wrap_check_inputs(check_inputs),
    800         check_tolerance,
    801         strict,
    802         _force_outplace,
    803         _module_class,
    804         example_inputs_is_kwarg=isinstance(example_kwarg_inputs, dict),
    805         _store_inputs=_store_inputs
    806     )
    807 if (
    808     hasattr(func, "__self__")
    809     and isinstance(func.__self__, torch.nn.Module)
    810     and func.__name__ == "forward"
    811 ):
    812     if example_inputs is None:

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/jit/_trace.py:1056, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_inputs_is_kwarg, _store_inputs)
   1054 else:
   1055     example_inputs = make_tuple(example_inputs)
-> 1056     module._c._create_method_from_trace(
   1057         method_name,
   1058         func,
   1059         example_inputs,
   1060         var_lookup_fn,
   1061         strict,
   1062         _force_outplace,
   1063         argument_names,
   1064         _store_inputs
   1065     )
   1067 check_trace_method = module._c._get_method(method_name)
   1069 # Check the trace against new traces created from user-specified inputs

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/fx/graph_module.py:662, in GraphModule.recompile.<locals>.call_wrapped(self, *args, **kwargs)
    661 def call_wrapped(self, *args, **kwargs):
--> 662     return self._wrapped_call(self, *args, **kwargs)

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/fx/graph_module.py:281, in _WrappedCall.__call__(self, obj, *args, **kwargs)
    279     raise e.with_traceback(None)
    280 else:
--> 281     raise e

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/fx/graph_module.py:271, in _WrappedCall.__call__(self, obj, *args, **kwargs)
    269         return self.cls_call(obj, *args, **kwargs)
    270     else:
--> 271         return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
    272 except Exception as e:
    273     assert e.__traceback__

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py:1488, in Module._slow_forward(self, *input, **kwargs)
   1486         recording_scopes = False
   1487 try:
-> 1488     result = self.forward(*input, **kwargs)
   1489 finally:
   1490     if recording_scopes:

File <eval_with_key>.1:85, in forward(self, arg0_1)
     83 clone_1 = torch.ops.aten.clone.default(expand_1, memory_format = torch.contiguous_format);  expand_1 = None
     84 clone_2 = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format);  expand = None
---> 85 native_layer_norm_backward = torch.ops.aten.native_layer_norm_backward.default(mul, clone_2, [512], clone_1, clone, None, None, [True, False, False]);  mul = clone_2 = clone_1 = clone = None
     86 getitem_6 = native_layer_norm_backward[0]
     87 getitem_7 = native_layer_norm_backward[1]

File ~/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/_ops.py:287, in OpOverload.__call__(self, *args, **kwargs)
    286 def __call__(self, *args, **kwargs):
--> 287     return self._op(*args, **kwargs or {})

RuntimeError: Found an unsupported argument type in the JIT tracer. File a bug report.

When I remove the LayerNorm and Dropout layers, no error is raised.
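
Since Dropout is the identity at inference time, one hedged workaround is to switch the model to eval mode before handing it to L4CasADi, and to strip or replace the offending layers if LayerNorm still trips the tracer. A sketch (untested against this exact model):

import torch
import torch.nn as nn

net = torch.load('save.pt', map_location='cpu')
net.eval()  # disables Dropout (and other train-only behavior) during tracing

# Optionally remove the Dropout modules from a Sequential model altogether.
net = nn.Sequential(*[m for m in net if not isinstance(m, nn.Dropout)])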

acados.py library loading issue

Hi Tim,

Many thanks for your help with setting up both acados and l4casadi! I was trying to run the acados.py example with l4casadi, but ran into a library-not-found error regarding liblearned_dyn.dylib. I double-checked that I had set the PATH and DIR for acados in the terminal:

export DYLD_LIBRARY_PATH=/Users/jing/acados/lib
export ACADOS_SOURCE_DIR=/Users/jing/acados

I also ran some OCP examples from the acados package, and they all ran successfully. Moreover, I installed the latest l4casadi version and can successfully run the naive examples.

The error message from running acados.py in the examples folder is as follows:

Traceback (most recent call last):
  File "/Users/jing/l4casadi/examples/acados.py", line 219, in <module>
    run()
  File "/Users/jing/l4casadi/examples/acados.py", line 185, in run
    solver = MPC(model=model.model(), N=N,
  File "/Users/jing/l4casadi/examples/acados.py", line 85, in solver
    return AcadosOcpSolver(self.ocp())
  File "/Users/jing/acados/interfaces/acados_template/acados_template/acados_ocp_solver.py", line 987, in __init__
    self.shared_lib = CDLL(self.shared_lib_name)
  File "/Users/jing/opt/anaconda3/envs/building_mpc/lib/python3.9/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen(/Users/jing/l4casadi/c_generated_code/libacados_ocp_solver_wr.dylib, 0x0006): Library not loaded: liblearned_dyn.dylib
  Referenced from: <7A2E692D-5D49-3F79-B0FC-336276534C04> /Users/jing/l4casadi/c_generated_code/libacados_ocp_solver_wr.dylib
  Reason: tried: '/Users/jing/acados/lib/liblearned_dyn.dylib' (no such file), 'liblearned_dyn.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OSliblearned_dyn.dylib' (no such file), 'liblearned_dyn.dylib' (no such file), '/Users/jing/acados/lib/liblearned_dyn.dylib' (no such file), '/Users/jing/l4casadi/liblearned_dyn.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/jing/l4casadi/liblearned_dyn.dylib' (no such file), '/Users/jing/l4casadi/liblearned_dyn.dylib' (no such file)

Any pointers to help resolve this problem are much appreciated!
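
From the dlopen message, none of the searched directories contain liblearned_dyn.dylib, which l4casadi generates at runtime rather than acados shipping it. A small hedged check to find where the loader is looking before adding the right directory to DYLD_LIBRARY_PATH (this must be exported before launching Python, since macOS reads DYLD variables at process start):

import os

# List where the loader would look and whether the library is actually there.
for d in os.environ.get('DYLD_LIBRARY_PATH', '').split(':'):
    candidate = os.path.join(d, 'liblearned_dyn.dylib')
    print(candidate, '->', 'found' if os.path.exists(candidate) else 'missing')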

Tera_renderer for macOS

The instructions state that I should "compile tera_renderer from source and place the binary in l4casadi/template_generation/bin" for macOS. From my limited understanding, this means:
1. git clone the tera_renderer repo into my home directory.
2. Perform the cargo build as instructed in the tera_renderer GitHub repo.
3. Find the binary produced in step 2 and put it in the specified l4casadi location.

However, for step 3, I looked through the tera_renderer folder and could not find any .exe files (which I think is the binary the README was referring to?). Can you help point me to the right binary? I do see a folder named "bin", but it is empty except for a .gitignore in "/Users/jing/tera_renderer/acados/bin". I also noticed that when I try to run any examples, the terminal still says "Dowloading https://github.com/acados/tera_renderer/releases/download/v0.0.34/t_renderer-v0.0.34-osx", which ends with "Successfully downloaded t_renderer.
Segmentation fault: 11". Does this mean I have successfully run the examples?

Moreover, if any of my steps are wrong, please let me know!

Much thanks!
Jing
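
On the binary question: cargo does not produce .exe files on macOS (that extension is Windows-only); with a standard cargo layout, a release build lands in target/release/. A hedged sketch of step 3, with all paths assumed from the issue and the quoted README instruction, so adjust them to your clone and install locations:

import os
import shutil

# Assumed locations; adjust to where tera_renderer was cloned and where l4casadi lives.
src = os.path.expanduser('~/tera_renderer/target/release/t_renderer')
dst = os.path.expanduser('~/l4casadi/l4casadi/template_generation/bin/t_renderer')

os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.copy2(src, dst)
os.chmod(dst, 0o755)  # the renderer binary must be executable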
