
faasm's Introduction


Faasm is a high-performance stateful serverless runtime.

Faasm provides multi-tenant isolation, yet allows functions to share regions of memory. These shared memory regions give low-latency concurrent access to data, and are synchronised globally to support large-scale parallelism across multiple hosts.

Faasm combines software fault isolation from WebAssembly with standard Linux tooling, to provide security and resource isolation at low cost. Faasm runs functions side-by-side as threads of a single runtime process, with low overheads and fast boot times.

Faasm defines a custom host interface that extends WASI to include function inputs and outputs, chaining functions, managing state, accessing the distributed filesystem, dynamic linking, pthreads, OpenMP and MPI.
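
For illustration, here is a hedged sketch of a C++ function using the input/output part of this host interface. The names faasmGetInputSize, faasmGetInput and faasmSetOutput are assumptions based on the description above and may not match the current header exactly:

// Hedged sketch only: the faasmGetInputSize/faasmGetInput/faasmSetOutput
// names and signatures are assumptions, not verified against faasm/core.h.
#include "faasm/faasm.h"

#include <cstdint>
#include <string>
#include <vector>

int main(int argc, char* argv[])
{
    // Read the raw input bytes passed to this invocation (assumed API)
    long inputSize = faasmGetInputSize();
    std::vector<uint8_t> inputBuf(inputSize, 0);
    faasmGetInput(inputBuf.data(), inputSize);

    // Do some trivial work and return a result as the function's output
    std::string output = "echoed " + std::to_string(inputSize) + " bytes";
    faasmSetOutput(reinterpret_cast<const uint8_t*>(output.data()),
                   output.size());

    return 0;
}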

Our paper from Usenix ATC '20 on Faasm can be found here.

Please see the full documentation for more details on the code and architecture.

Quick start

Update submodules and activate the virtual environment:

git submodule update --init --recursive
source ./bin/workon.sh

Start a Faasm cluster locally using docker compose:

faasmctl deploy.compose

To compile, upload, and invoke a C++ function using this local cluster, you can use the faasm/cpp container:

faasmctl cli.cpp

# Compile the demo function
inv func demo hello

# Upload the demo "hello" function
inv func.upload demo hello

# Invoke the function
inv func.invoke demo hello
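
For reference, the demo hello function compiled above is just a small C++ program. A sketch of its approximate shape, reconstructed from snippets quoted in the issues further down (not the verbatim source):

// Approximate shape of func/demo/hello.cpp, reconstructed from snippets
// quoted elsewhere on this page; not the verbatim source.
#include "faasm/faasm.h"
#include <stdio.h>

int main(int argc, char* argv[])
{
    printf("hello faasm!\n");
    return 0;
}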

For more information on next steps, you can look at the getting started docs.

Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825184 (CloudButton), the UK Engineering and Physical Sciences Research Council (EPSRC) award 1973141, and a gift from Intel Corporation under the TFaaS project.


faasm's Issues

./bin/provision_bench_host.sh [spdlog : Clean up] Fails

The provision script fails on spdlog cleanup. This prevents the playbook from completing.

/usr/local/code/Faasm master*
❯ ./bin/provision_bench_host.sh
/usr/local/code/Faasm/ansible /usr/local/code/Faasm

PLAY [all] ************************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************************
ok: [localhost]

TASK [install python] *************************************************************************************************************************************
[sudo] password for sean: 
changed: [localhost]

TASK [llvm9 : Add LLVM 9 repo] ****************************************************************************************************************************
ok: [localhost]

TASK [llvm9 : Install llvm and other deps] ****************************************************************************************************************
ok: [localhost]

TASK [llvm9 : Install clang and other deps] ***************************************************************************************************************
ok: [localhost]

TASK [llvm9 : Symlinks] ***********************************************************************************************************************************
ok: [localhost] => (item=clang)
ok: [localhost] => (item=clang++)
ok: [localhost] => (item=clang-cpp)
ok: [localhost] => (item=wasm-ld)
ok: [localhost] => (item=llvm-ar)
ok: [localhost] => (item=lld)

TASK [protobuf : System deps for protobuf] ****************************************************************************************************************
ok: [localhost]

TASK [protobuf : Check if protobuf installed] *************************************************************************************************************
ok: [localhost]

TASK [protobuf : Download protobuf source] ****************************************************************************************************************
skipping: [localhost] => (item=curl -O -L https://github.com/google/protobuf/releases/download/v3.6.0/protobuf-cpp-3.6.0.tar.gz) 
skipping: [localhost] => (item=tar xvf protobuf-cpp-3.6.0.tar.gz) 

TASK [protobuf : Build and install] ***********************************************************************************************************************
skipping: [localhost] => (item=./configure --prefix=/usr CC=/usr/bin/clang CPP=/usr/bin/clang-cpp CXX=/usr/bin/clang++) 
skipping: [localhost] => (item=make) 
skipping: [localhost] => (item=make install) 
skipping: [localhost] => (item=ldconfig) 

TASK [protobuf : Clean up] ********************************************************************************************************************************
ok: [localhost] => (item=/tmp/protobuf-cpp-3.6.0.tar.gz)
ok: [localhost] => (item=/tmp/protobuf-cpp-3.6.0)

TASK [perf : Install perf] ********************************************************************************************************************************
ok: [localhost]

TASK [perf : Allow everyone to run perf] ******************************************************************************************************************
ok: [localhost]

TASK [perf : Turn off kptr_restrict] **********************************************************************************************************************
ok: [localhost]

TASK [perf : Add permissions on this session] *************************************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo

changed: [localhost]

TASK [perf : Turn off kptr_restrict in this session] ******************************************************************************************************
changed: [localhost]

TASK [linux : Install deps with apt] **********************************************************************************************************************
ok: [localhost]

TASK [code : Set up directory] ****************************************************************************************************************************
ok: [localhost]

TASK [code : Install system deps] *************************************************************************************************************************
ok: [localhost]

TASK [code : Check out code] ******************************************************************************************************************************
ok: [localhost]

TASK [python : Install Python system deps] ****************************************************************************************************************
ok: [localhost]

TASK [python : Update pip] ********************************************************************************************************************************
changed: [localhost]

TASK [python : Install python deps] ***********************************************************************************************************************
changed: [localhost]

TASK [spdlog : Check if spdlog installed] *****************************************************************************************************************
ok: [localhost]

TASK [spdlog : Download the spdlog release] ***************************************************************************************************************
skipping: [localhost] => (item=wget https://github.com/gabime/spdlog/archive/v1.2.1.tar.gz) 
skipping: [localhost] => (item=tar -xf v1.2.1.tar.gz) 
skipping: [localhost] => (item=mkdir spdlog-1.2.1/build) 

TASK [spdlog : Make and install] **************************************************************************************************************************
skipping: [localhost] => (item=cmake ..) 
skipping: [localhost] => (item=make) 
skipping: [localhost] => (item=make install) 

TASK [spdlog : Clean up] **********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "rmtree failed: [Errno 13] Permission denied: 'example'"}

PLAY RECAP ************************************************************************************************************************************************
localhost                  : ok=22   changed=5    unreachable=0    failed=1    skipped=4    rescued=0    ignored=0   

Looking in the relevant directory, it seems like this is failing because a bunch of subdirectories are owned by root.

/tmp/spdlog-1.2.1/build
❯ ls -al
total 28
drwxr-xr-x  7 sean sean 4096 Dec  2 17:39 .
drwxr-xr-x  3 sean sean 4096 Dec  2 17:39 ..
drwxr-xr-x 35 root root 4096 Dec  2 17:00 CMakeFiles
drwxr-xr-x  3 root root 4096 Dec  2 16:58 Testing
drwxr-xr-x  4 root root 4096 Dec  2 17:00 bench
drwxr-xr-x  4 root root 4096 Dec  2 16:59 example
drwxr-xr-x  4 root root 4096 Dec  2 17:00 tests

I then tried to rerun as root after running sudo su:

This led to a failure in the Check out code task.

TASK [code : Check out code] *****************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to init/update submodules: Submodule 'third-party/WAVM' (https://github.com/Shillaker/WAVM.git) registered for path 'third-party/WAVM'\nSubmodule 'third-party/eigen' (https://github.com/Shillaker/eigen-git-mirror) registered for path 'third-party/eigen'\nSubmodule 'third-party/faasm-clapack' ([email protected]:Shillaker/faasm-clapack.git) registered for path 'third-party/faasm-clapack'\nSubmodule 'third-party/gem3-mapper' (https://github.com/Shillaker/gem3-mapper.git) registered for path 'third-party/gem3-mapper'\nSubmodule 'third-party/libpng' (https://github.com/glennrp/libpng.git) registered for path 'third-party/libpng'\nSubmodule 'third-party/llvm-project' (https://github.com/llvm/llvm-project.git) registered for path 'third-party/llvm-project'\nSubmodule 'third-party/musl' (https://github.com/Shillaker/musl) registered for path 'third-party/musl'\nSubmodule 'third-party/pyfaasm' (https://github.com/Shillaker/pyfaasm.git) registered for path 'third-party/pyfaasm'\nSubmodule 'third-party/pyodide' (http://github.com/Shillaker/pyodide.git) registered for path 'third-party/pyodide'\nSubmodule 'third-party/tensorflow' (https://github.com/Shillaker/tensorflow.git) registered for path 'third-party/tensorflow'\nSubmodule 'third-party/zlib' (https://github.com/madler/zlib.git) registered for path 'third-party/zlib'\nCloning into '/usr/local/code/faasm/third-party/WAVM'...\nCloning into '/usr/local/code/faasm/third-party/eigen'...\nCloning into '/usr/local/code/faasm/third-party/faasm-clapack'...\nHost key verification failed.\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\nfatal: clone of '[email protected]:Shillaker/faasm-clapack.git' into submodule path '/usr/local/code/faasm/third-party/faasm-clapack' failed\nFailed to clone 'third-party/faasm-clapack'. Retry scheduled\nCloning into '/usr/local/code/faasm/third-party/gem3-mapper'...\nCloning into '/usr/local/code/faasm/third-party/libpng'...\nCloning into '/usr/local/code/faasm/third-party/llvm-project'...\nCloning into '/usr/local/code/faasm/third-party/musl'...\nCloning into '/usr/local/code/faasm/third-party/pyfaasm'...\nCloning into '/usr/local/code/faasm/third-party/pyodide'...\nwarning: redirecting to https://github.com/Shillaker/pyodide.git/\nCloning into '/usr/local/code/faasm/third-party/tensorflow'...\nCloning into '/usr/local/code/faasm/third-party/zlib'...\nCloning into '/usr/local/code/faasm/third-party/faasm-clapack'...\nHost key verification failed.\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\nfatal: clone of '[email protected]:Shillaker/faasm-clapack.git' into submodule path '/usr/local/code/faasm/third-party/faasm-clapack' failed\nFailed to clone 'third-party/faasm-clapack' a second time, aborting\n"}

Permission error creating `/usr/local/faasm/` when running `set_up_benchmarks.sh`

I'm not sure if this is supposed to be /usr/local/code/faasm/object/demo, but I used sudo to mkdir and chown /usr/local/faasm to get past this.

[2019-12-03 12:35:13.429] [console] [info] Generating machine code for demo/noop
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
  what():  boost::filesystem::create_directories: Permission denied: "/usr/local/faasm/object/demo"
./bin/set_up_benchmarks.sh: line 30:  7098 Aborted                 (core dumped) ./bin/codegen_func demo noop

`set_up_benchmarks` triggers error around AWSSDK

I've run the Ansible setup scripts on a native Ubuntu box, but after restarting, when I try to run ./bin/set_up_benchmarks.sh, I'm getting errors about the AWSSDK library. My goal is basically just to profile faasm locally rather than on cloud providers, so if you can clarify the commands or environment variables I need to set, that should be sufficient.

/usr/local/code/faasm master*
❯ ./bin/set_up_benchmarks.sh
/usr/local/code/faasm/bench /usr/local/code/faasm
-- Found LLVM 9.0.1
CMake Warning at CMakeLists.txt:79 (find_package):
  By not providing "Findaws-lambda-runtime.cmake" in CMAKE_MODULE_PATH this
  project has asked CMake to find a package configuration file provided by
  "aws-lambda-runtime", but CMake did not find one.

  Could not find a package configuration file provided by
  "aws-lambda-runtime" with any of the following names:

    aws-lambda-runtimeConfig.cmake
    aws-lambda-runtime-config.cmake

  Add the installation prefix of "aws-lambda-runtime" to CMAKE_PREFIX_PATH or
  set "aws-lambda-runtime_DIR" to a directory containing one of the above
  files.  If "aws-lambda-runtime" provides a separate development package or
  SDK, be sure it has been installed.


CMake Error at CMakeLists.txt:82 (find_package):
  By not providing "FindAWSSDK.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "AWSSDK", but
  CMake did not find one.

  Could not find a package configuration file provided by "AWSSDK" with any
  of the following names:

    AWSSDKConfig.cmake
    awssdk-config.cmake

  Add the installation prefix of "AWSSDK" to CMAKE_PREFIX_PATH or set
  "AWSSDK_DIR" to a directory containing one of the above files.  If "AWSSDK"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring incomplete, errors occurred!
See also "/usr/local/code/faasm/bench/CMakeFiles/CMakeOutput.log".

The faasm/toolchain expects a compiler at /usr/local/code/faasm/toolchain/install/bin/clang++

After running ./bin/toolchain.sh and then executing the example from the README, the client errors out because clang is not where it expects it to be.

(faasm) root@docker-desktop:/usr/local/code/faasm# cd func/
(faasm) root@docker-desktop:/usr/local/code/faasm/func# ls
CMakeLists.txt  demo  dynlink  errors  knative_native.py  onnx  polybench  python  run_knative_native.sh  sgd  tf
(faasm) root@docker-desktop:/usr/local/code/faasm/func# cd demo
(faasm) root@docker-desktop:/usr/local/code/faasm/func/demo# inv compile --func=demo/hello
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
System is unknown to cmake, create:
Platform/Wasm to use this system, please send your config file to [email protected] so it can be added to cmake    Your CMakeCache.txt file was copied to CopyOfCMakeCache.txt. Please send that file to [email protected].
CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_C_COMPILER:

    /usr/local/code/faasm/toolchain/install/bin/clang

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
  the compiler, or to the compiler name if it is in the PATH.


CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_CXX_COMPILER:

    /usr/local/code/faasm/toolchain/install/bin/clang++

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!

Adding custom packages to faasm

I want to add a custom package to Faasm so that I can use it in my functions.
I cloned the pyodide repo into the third-party/pyodide folder, as it was empty (is that correct?).

I will use boto3 as the running example.

cd faasm_home
source workon.sh
./bin/pyodide mkpkg boto3 # this creates a meta.yml
./bin/pyodide buildpkg --package_abi=0 packages/boto3/meta.yaml

I don't see any errors yet.
The next step is to modify tasks/python.py.
I take it this file is in faasmcli/faasmcli.
I add it to the dictionary as explained, similar to the other modules but with the proper version.

Next step

inv python.set-up-package

It says the 'inv' command is not found. This is where I am wondering whether this has to be done after running
./bin/cli.sh.

I tried that, and it generated machine code.
On running a simple Python file that uses boto3, it shows: No module named boto3.

I am not sure which step is wrong here, or whether there are some inconsistencies in the documentation.

Unclear that project assumes location at /usr/local/code/faasm

Hi! Very interesting work here! I'm trying to run your docker-compose based quickstart and it seems that the upload service dies nearly immediately. Here are the relevant logs.

upload_1       | Generating object file for Python
upload_1       | [2019-11-18 17:10:47.718] [console] [info] Running codegen for function python/py_func
upload_1       | [2019-11-18 17:10:47.719] [console] [error] Invalid function: python/py_func
upload_1       | [2019-11-18 17:10:47.719] [console] [info] Generating machine code for python/py_func
upload_1       | terminate called after throwing an instance of 'std::runtime_error'
upload_1       |   what():  No fileserver URL set in fileserver mode
upload_1       | /entrypoint.sh: line 16:     7 Aborted                 ./bin/codegen_func python py_func

Glancing at the source, it looks like this might be coming from codegen_func.cpp. Not sure if this helps, but it looks like codegenForFunction is getting run even if the msg contains an invalid function (see the sketch after the snippet below).

void codegenForFunc(const std::string &user, const std::string &func) {
    const std::shared_ptr<spdlog::logger> logger = util::getLogger();

    message::Message msg = util::messageFactory(user, func);
    if (!util::isValidFunction(msg)) {
        logger->error("Invalid function: {}/{}", user, func);
    }

    logger->info("Generating machine code for {}/{}", user, func);
    storage::FileLoader &loader = storage::getFileLoader();
    loader.codegenForFunction(msg);
}
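
A hedged sketch of the early return this seems to suggest, shown as a hypothetical change for illustration rather than the project's actual fix:

// Hypothetical variant: bail out when the function is invalid instead of
// continuing to codegen (illustration only, not the actual fix).
void codegenForFunc(const std::string &user, const std::string &func) {
    const std::shared_ptr<spdlog::logger> logger = util::getLogger();

    message::Message msg = util::messageFactory(user, func);
    if (!util::isValidFunction(msg)) {
        logger->error("Invalid function: {}/{}", user, func);
        return;
    }

    logger->info("Generating machine code for {}/{}", user, func);
    storage::FileLoader &loader = storage::getFileLoader();
    loader.codegenForFunction(msg);
}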

Release toolchain and CPython separately

The toolchain and stdlibs, CPython build and BLAS build are often unchanged between Faasm releases. Accordingly, it no longer makes sense to package them as part of each Faasm release.

We should package and release each repository separately, then pin Faasm releases to the relevant versions of the upstream modules.

Note that the CPython and BLAS builds rely on the toolchain to build, so they will also have to be pinned to a toolchain version.

TODO:

  • Move environment config and Python tools from Faasm into toolchain
  • Move BLAS, libffi and eigen into toolchain repo
  • Update eigen build to use ninja
  • Move Docker environment for toolchain from Faasm into toolchain repo
  • Build and release toolchain Docker image from master
  • Add CI to CPython repo to build and package Python runtime root
  • Update Faasm to use new containerised, independent submodules

Splitting SGX Attestation PR

This is my suggestion on how to split #357, which is currently too large to work with.

PR 1: SGX cluster w/ key manager

  • Define a docker-compose file, something like: docker/docker-compose-sgx.yml to run SGX-Faasm deployments.
  • Add a service for the key manager (as you do in docker-compose-sgx-attesation.yml). Is mongo needed?
  • The file should set the necessary environment variables.
  • Currently, the key manager lives in a separate repo.
  • We should move the dockerfile there as well (docker/keymanager.dockerfile), and add scripts to generate the images and upload them to Docker Hub. See this for an example among others.
  • Note that this may involve one PR in faasm and one in the key manager.

PR 2: Crypto API (#449, #452)

  • In the PR, the invoke_impl method contains all the logic for encrypting and decrypting.
  • At the same time, the (same?) crypto methods are re-defined in CPP files.
  • My suggestion is to add a subdir src/sgx/crypto that defines an external API with encryptMsg and decryptMsg, which interacts with both the Python and C++ code (see the header sketch after this list).
  • Ideally this would also handle all interactions with the key manager.
  • Add tests for the crypto external API (encrypt, decrypt).
  • The idea is that the crypto module is not bound to attestation.
  • Hide the particular algorithms we use to encrypt/decrypt behind generic constants like ENC_KEY_SIZE. The particular algorithms could be part of faasm's configuration.
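
A hedged sketch of what such an external crypto API header could look like. Only the names encryptMsg, decryptMsg and ENC_KEY_SIZE come from the points above; the namespace, types and signatures are assumptions for illustration, not the actual PR code:

// Hypothetical src/sgx/crypto interface sketch; everything beyond the names
// encryptMsg, decryptMsg and ENC_KEY_SIZE is an assumption.
#pragma once

#include <cstddef>
#include <cstdint>
#include <vector>

namespace faasm {
namespace crypto {

// Generic constant so callers do not depend on the particular algorithm
constexpr size_t ENC_KEY_SIZE = 32;

// Encrypt a message with a symmetric key obtained from the key manager
std::vector<uint8_t> encryptMsg(const std::vector<uint8_t>& plainText,
                                const std::vector<uint8_t>& key);

// Decrypt a message previously produced by encryptMsg
std::vector<uint8_t> decryptMsg(const std::vector<uint8_t>& cipherText,
                                const std::vector<uint8_t>& key);

}
}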

PR 3: Wasm parser

  • There's a massive wasm.py file here.
  • AFAICT it parses a WebAssembly file. My first question is: how does this differ from well-maintained solutions like wasm-tools/wasmparser or wasmskd/wasmparser?
  • Do we need to roll our own in-house solution?
  • Whichever option we stick with, I suggest adding it as an ExternalProject and having it live in another repo.

PR 4: Integrate SGX with crypto module

  • This PR aims to integrate the functioning of SGX with PR3.
  • In particular, it should encrypt/decrypt inputs and outputs.
  • If the interaction with the key manager is not too complex, it could be implemented here.
  • Add a configuration variable that allows SGX to run in test mode (i.e. without encryption) so that the current tests pass.
  • Add corresponding tests to run with encrypted inputs.

PR 5: Attestation

  • Attestation should be independent of the way functions run inside enclaves in faasm and of the way we do crypto.
  • Attestation code could live in src/sgx/attestation.
  • Attestation must use DCAP (it is currently a big unknown how hard it will be to integrate DCAP).
  • This PR should only add logic for an enclave to renew its attestation credentials with the key manager upon request.
  • Ideally we could test this method as part of the testing suite in tests/tests/sgx/test_attestation.cpp; we may need to fake the key manager, but it should be alright.

PR 6: Chaining and attested stacks

PR 7: Policies

  • It is still not even clear how we will use policies in the project, so we should leave this for a future PR.

Read a file in python

I have a simple program that tries to read a file in Python:

with open(path, 'r') as f:
   content = f.read()

The problem I am facing is with the path, even though I give the absolute path,
e.g. /usr/local/code/faasm/my_data.json.
[I have added the file to the environment by just copying it, which could be the problem.]

I found only one sample for reading a file in Python (bench_telco.py); however, that file was in the dist-packages folder.

These are the steps I have done so far:

[copy my_data.json to Faasm home folder]

  1. ./bin/cli.sh
  2. cp my_data.json /tmp/
  3. Launch a function to read this file. This results in a FileNotFoundError.

Should I do something similar to what bench_telco.py does?

Instructions for installation on localhost

I ran the Ansible playbook locally in a VM. To do this, I did the following:

  1. I cloned your git repo in /usr/local/code. Given that your repo is named Faasm, this yielded /usr/local/code/Faasm (notice the capital F).
  2. I then changed provision_bench_host.sh to have the following line:
ansible-playbook --connection=local -i inventory/benchmark.yml benchmark.yml
  3. I then ran /usr/local/code/Faasm/bin/provision_bench_host.sh. This generated a distinct directory, /usr/local/code/faasm.

I'm an Ansible novice, and I understand that this tool is typically used for remote systems, but I wonder if it might be useful to support a local installation as above. Just wanted to mention this workflow in case it generates any workflow improvement ideas.

inv compile clang++ not found

Running inv compile --user=polybench --clean yields the following error:

CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_C_COMPILER:

    /usr/local/faasm/toolchain/bin/clang

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
  the compiler, or to the compiler name if it is in the PATH.


CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_CXX_COMPILER:

    /usr/local/faasm/toolchain/bin/clang++

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!
See also "/usr/local/code/faasm/build/func/CMakeFiles/CMakeOutput.log".
See also "/usr/local/code/faasm/build/func/CMakeFiles/CMakeError.log".
Failed to compile

I'm not sure if this is supposed to be /usr/local/code/faasm/toolchain/bin/clang, but /usr/local/faasm/toolchain does not exist.

This is what I see in /usr/local/code/faasm/toolchain/bin/

/usr/local/code/faasm/toolchain master
❯ ls -al
total 44
drwxr-xr-x  3 sean sean 4096 Dec  3 11:57 .
drwxr-xr-x 28 sean sean 4096 Dec  3 16:54 ..
-rw-r--r--  1 sean sean  730 Dec  3 11:57 ClangNativeToolchain.cmake
-rw-r--r--  1 sean sean  827 Dec  3 11:57 env.sh
-rw-r--r--  1 sean sean 1881 Dec  3 11:57 FaasmToolchain.cmake
-rw-r--r--  1 sean sean    9 Dec  3 11:57 .gitignore
-rw-r--r--  1 sean sean    0 Dec  3 11:57 __init__.py
-rw-r--r--  1 sean sean 5264 Dec  3 11:57 Makefile
-rw-r--r--  1 sean sean  982 Dec  3 11:57 Makefile.envs
-rw-r--r--  1 sean sean 1171 Dec  3 11:57 python_env.py
drwxr-xr-x  2 sean sean 4096 Dec  3 11:57 sysroot_extras

Broken Ansible Playbook for Catch2

I am having trouble linking with the Catch2 library. The Ansible playbook is broken because of a broken link, as noted in their repo: catchorg/Catch2#2050.

I am aware that we are moving to a CMake-based integration (I was about to do that myself) in #322.

I then have only one question remaining (hence this issue): if the dependency is downloaded through CMake, isn't the Ansible role obsolete? You seem to have deleted some roles, but not this one.

PS: I am also adding this issue as a known problem until #322 is merged.

Update script to support zsh

Currently, the workon.sh script, invoked using source, relies on the complete keyword, which is not available in zsh.

In order to support completion in zsh, one must prepend:

autoload bashcompinit
bashcompinit

Clang-format newlines at EOF

Enforce a consistent style for newlines at EOF (either with or without, but it should be impossible to pass the check if a file has the wrong ending).

Note that the check that we perform in CI will be with run_clang_format.sh.

Role llvm not defined

When I run ./bin/set_up_benchmarks.sh, I get an error about the llvm role.

I was able to get this to work by changing llvm to llvm9. Considering that you have both llvm8 and llvm9, I'm assuming that there's some sort of implicit branching logic.

Review eigen build and fix install

We are experiencing some issues with the current eigen build, and want to change the installation to run on ninja rather than make. Additionally, it might be a good idea to port the repo from GitLab to GitHub.

To-Do:

  • Migrate the repo from GitLab to GitHub (faasm/eigen).
  • Double check the current build works on 20.04.
  • Change installation script in faasm/faasmcli/faasmcli/tasks/libs.py to use ninja rather than make.

"error: redefinition of 'main'" during compilation of demo/hello

Faasm version: master (0f4460f)

I tried to follow the quick start in the README using the container development environment, and encountered the following errors during inv compile demo hello:

FAILED: demo/CMakeFiles/hello.dir/hello.cpp.obj 
/usr/local/faasm/toolchain/bin/clang++ --target=wasm32-wasi --sysroot=/usr/local/faasm/llvm-sysroot -DWASM_PROF=1 -D__faasm  -O3 -mno-atomics     --sysroot=/usr/local/faasm/llvm-sysroot     -m32     -DANSI     -Xlinker --stack-first      -O3 -DNDEBUG -std=gnu++17 -MD -MT demo/CMakeFiles/hello.dir/hello.cpp.obj -MF demo/CMakeFiles/hello.dir/hello.cpp.obj.d -o demo/CMakeFiles/hello.dir/hello.cpp.obj -c /usr/local/code/faasm/func/demo/hello.cpp
clang-10: warning: -Xlinker --stack-first: 'linker' input unused [-Wunused-command-line-argument]
/usr/local/code/faasm/func/demo/hello.cpp:3:5: error: redefinition of 'main'
int main(int argc, char* argv[])
    ^
/usr/local/faasm/llvm-sysroot/include/faasm/faasm.h:10:5: note: previous definition is here
int main(int argc, char *argv[]) {
    ^
1 error generated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/usr/local/bin/inv", line 8, in <module>
    sys.exit(program.run())
  File "/usr/local/lib/python3.8/dist-packages/invoke/program.py", line 384, in run
    self.execute()
  File "/usr/local/lib/python3.8/dist-packages/invoke/program.py", line 566, in execute
    executor.execute(*self.tasks)
  File "/usr/local/lib/python3.8/dist-packages/invoke/executor.py", line 129, in execute
    result = call.task(*args, **call.kwargs)
  File "/usr/local/lib/python3.8/dist-packages/invoke/tasks.py", line 127, in __call__
    result = self.body(*args, **kwargs)
  File "/usr/local/code/faasm/faasmcli/faasmcli/tasks/compile.py", line 32, in compile
    wasm_cmake(FUNC_DIR, FUNC_BUILD_DIR, target, clean, debug, sgx=sgx)
  File "/usr/local/code/faasm/faasmcli/faasmcli/util/compile.py", line 42, in wasm_cmake
    raise RuntimeError("failed on make for {}".format(target))
RuntimeError: failed on make for hello

Another definition of main is in the file /usr/local/faasm/llvm-sysroot/include/faasm/faasm.h, shown below:

#ifndef _FAASM_H
#define _FAASM_H

#include "faasm/core.h"

// Hooks for handling normal argc/argv
int faasm_argc;
char** faasm_argv;

int main(int argc, char *argv[]) {
    faasm_argc = argc;
    faasm_argv = argv;

    int idx = faasmGetCurrentIdx();
    return exec(idx);
}

#endif

It does seem to be a redefinition in my opinion. Can I just remove main in faasm.h?
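
Based purely on the header quoted above, it looks as though the toolchain at this point expected user code to define exec() (which faasm.h's main() then calls) rather than define main() itself. A hedged sketch of that pattern, reconstructed from the header above and not taken from the actual repo:

// Hypothetical rewrite of the hello function using the exec() entry point
// that the quoted faasm.h appears to expect; reconstructed from the header
// above, not copied from the repo.
#include "faasm/faasm.h"
#include <stdio.h>

// faasm.h defines main(), which stores argc/argv and calls exec(idx)
int exec(int idx)
{
    printf("hello faasm, running at index %d\n", idx);
    return 0;
}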

Should we change the user running within the container?

When moving to a container-based development cycle there's the recurrent issue of the uid:gid mismatch. As a result, with the current toolchain, the user ends up with several root-owned directories. In my opinion, we'd be better off running the container with the same uid:gid as the user invoking it (note that this will also require some changes to cli.dockerfile).

Furthermore, Docker4Mac handles this issue correctly, which would create a certain non-determinism in our builds.

This is, however, a matter to be discussed.

Fix build

The merge to move the org required a premature merge of #295. Fix the test failures that resulted.

Minor fixes on documents

Correct some minor nits in the documentation, and in the process test the new Issue + PR + Project workflow.

No support for function override on upload

  • Version: master (3ea590b)
  • Description: Even if a new wasm file for a function, e.g. demo/hello, is uploaded, the old version of the function is executed.

This can be easily reproduced by first compiling, uploading, and invoking the demo/hello function that comes with the repo, then modifying the hello function a little (e.g. changing the output string) and recompiling and uploading the new function. In this case, the old string is still printed when we invoke demo/hello.

This issue is especially inconvenient for debugging functions. Right now I have to delete all the containers every time I want to upload a new function if it has the same name as a previously-uploaded function.

[DISCUSSION] What should the related development environments look like?

After #322, we now have a smooth containerized development environment for faasm, which I must admit I very much like.

However, the faasm-experiments repo (among others) has been, rightfully, left out of the picture. I agree it should be out, but then we should provide a way to develop it. I am bringing this issue up for faasm-experiments, but it applies similarly to faasm-toolchain or even faabric.

The issue here is to avoid cluttering the current compose files too much, yet still provide a unified development environment. We could use this issue to brainstorm; here are the first suggestions that come to mind.

1. Use git from within the container:

  • This approach would have us pushing/pulling from within the container. It has, however, quite some drawbacks (I really don't like it myself).
  • First of all, we'd have to ship the container image with our git credentials (undesirable) and risk losing untracked changes if the service stops.
  • Along the same lines, persistence is not that bad. With the change from docker-compose run to docker-compose up introduced in #332 (to be merged at the time of writing), services persist even if no terminal is attached to them, so changes would only be lost when the machine shuts down (or the service is manually stopped), i.e. never if you are working remotely.
  • However, in a broader sense, it is contrary to my idea of the containerized working environment.

2. Mount the additional repositories as shared volumes:

  • In the same way that we currently persist changes in the faasm repo, we could mount whichever additional repositories we wanted to use. A few notes on this proposal.
  • Observe that this would imply enforcing a particular directory structure outside faasm for developers, for instance having all faasm-* dirs at the same level.
  • Additionally, this should definitely be left outside the main docker-compose-cli.yml, but (in my opinion) still be tracked in the main repository.
  • My proposal is to use docker-compose override capabilities.
  • We could also provide a switch to enable/disable repos (i.e. mount experiments or not, mount toolchain or not, ...).
  • Unfortunately, this would mean that we'd have to maintain a docker-compose-cli.overrides.yml and include it in the invocations in bin/cli.sh (which might already be a bit cluttered).

I don't have a clear answer for this, so I am open to suggestions and discussion!

RPC Tests Failing in Some Installations

When running the test suite, sometimes RPC tests fail.

RPC error state_size: failed to connect to all addresses

In particular this is located at

/usr/local/code/faasm/tests/utils/worker_utils.cpp:152

Sharing state in python

I was trying to understand how to share data between a created thread and the main function.
I am trying to follow and reuse code from these two examples present in func/python:

  1. func/python/state_test_read.py ==> for sharing data without passing information
  2. func/python/chain.py ==> to create threads and wait for it

Here is the snippet I am using:

from pyfaasm.core import chainThisWithInput, awaitCall, setState, getState, \
    pullState, pushState, getFunctionIdx, getInput
import json

sharedStateSize=1024
inKey='in-key'

def dict_to_bytes(the_dict):
    return json.dumps(the_dict).encode('utf-8')

def bytes_to_dict(the_bytes):
    return json.loads(the_bytes.decode('utf-8'))

def inputHandle(i_data):
    pullState(inKey, sharedStateSize)
    response = getState(inKey, sharedStateSize)
    response = bytes_to_dict(response)
    response['num'] = 25
    setState(inKey, dict_to_bytes(response))
    pushState(inKey)

def init():
    return {'num': 0.0}

def main_func():
    idx = getFunctionIdx()
    print("Got function index {}".format(idx))

    if idx == 0:
        print("Main chaining entry point")
        call_a = chainThisWithInput(1, b'00')

        setState(inKey, dict_to_bytes(init()))
        pushState(inKey)
        res_a = awaitCall(call_a)
        print("Res Code: {}".format(res_a))
        inResponse = getState(inKey, sharedStateSize)
        print(bytes_to_dict(inResponse))

    elif idx == 1:
        inputHandle(getInput())

if __name__ == "__main__":
    main_func()

I get json.decoder.JSONDecodeError: Extra data: line 1 column 67 (char 66)
in both main_func and inputHandle(), where I invoke
bytes_to_dict after getting the state.
[screenshot: Error_4_decode]

When I change the value of sharedStateSize to 512, I see a SIGSEGV instead.
[screenshot: Error_3_dict]

There are a few things I want to accomplish:

  1. Share a dict with the created thread (I am assuming awaitCall creates a thread and executes it)
  2. Update that dict in the created thread
  3. Use that dict in main, and possibly pass it around to more threads.

I am not sure I understand the APIs very well, but the restriction I am imposing here is that
the dict present in "init" may have more key:value pairs, but I won't add more key:value pairs in the thread; I will only update them.

Please let me know if this is the right way, or whether there is a better way.

Unable to upload using 'inv' command

I am currently on the latest commit (master branch).
./bin/cli.sh suggests I am on faasm CLI version 0.1.3.

I am having trouble uploading functions, i.e. with the command
inv upload --py python dict_state

In fact, I face the same issue with all functions, not just Python. The command below gives the same issue:
inv upload demo hello

[screenshot: Error_5_upload]

Steps I did:

  1. git clone Faasm
  2. edit docker-compose to have PYTHON_CODEGEN=on
  3. docker-compose up --scale worker=2
  4. inv upload --py python dict_state

I am not sure what is wrong here. What I can see is that Python 3.6.9 is being used in the ./bin/cli.sh environment.

Format everything

clang-format for C/C++, black for Python

  • Apply clang-format everywhere
  • Add clang-format check to build
  • Apply Black everywhere
  • Add Black check to build
  • Apply clang-format to other non-fork repos in faasm org
  • Add CI check for formatting in non-fork repos in faasm org

"EOFError: Ran out of input" on using shared memory

I am observing a strange behavior while using shared memory in python.

Below is the code snippet: shm.py

from pickle import loads, dumps
from random import randint
from pyfaasm.core import get_state, get_state_size, set_state, chain_this_with_input, await_call

KEY_A = "dict_func1"
KEY_B = "dict_func2"
KEY_C = "dict_func3"
KEY_D = "dict_func4"

#### Utilities for functions needed to read and write shared memory ###########

def get_dict_from_state(key):
    dict_size = get_state_size(key)
    pickled_dict = get_state(key, dict_size)
    return loads(pickled_dict)


def write_dict_to_state(key, dict_in):
    pickled_dict = dumps(dict_in)
    set_state(key, pickled_dict)

###############################################################################

def square_handler(input_bytes):
    input = randint(1,50)
    number = input * input
    response = {
        "from": "square",
        "statusCode": 200,
        "body": {"number":number}
    }
    write_dict_to_state(KEY_A, response)


def inc_handler(input_bytes):
    input = randint(1,50)
    number = input + 1
    response = {
        "from": "inc",
        "statusCode": 200,
        "body": {"number":number}
    }
    write_dict_to_state(KEY_B, response)


def half_handler(input_bytes):
    input = randint(1,50)
    number = input / 2
    response = {
        "from": "half",
        "statusCode": 200,
        "body": {"number":number}
    }
    write_dict_to_state(KEY_C, response)


def mod_handler(input_bytes):
    input = randint(1,50)
    number = input % 2
    response = {
        "from": "mod",
        "statusCode": 200,
        "body": {"number":number}
    }
    write_dict_to_state(KEY_D, response)

###############################################################################

# This is the main entrypoint
def faasm_main():

    # chained call, function 1
    square_id = chain_this_with_input(square_handler, b'')
    await_call(square_id)

    # chained call, function 2
    inc_id = chain_this_with_input(inc_handler, b'')
    await_call(inc_id)

    # chained call, function 3
    half_id = chain_this_with_input(half_handler, b'')
    await_call(half_id)

    # chained call, function 4
    mod_id = chain_this_with_input(mod_handler, b'')
    await_call(mod_id)

    # Load from state again
    for key in [KEY_A, KEY_B, KEY_C, KEY_D]:
    	print(get_dict_from_state(key))

I see inconsistent outputs across multiple runs, as seen in the screenshot; one run is a success and the next is a failure.
[screenshot: Error_shm]

Steps:

docker-compose up
inv upload --py python shm
inv invoke --py python shm

This works fine sometimes; on continuous use of invoke, it fails [using the command below]:

for i in {1..100}; do inv invoke --py python shm >> shm.out; done

faasm-cli: version 0.1.5

docker-compose.yml is changed a bit, particularly worker configuration
MAX_IN_FLIGHT_RATIO=10 # I don't think this should matter
MAX_WORKERS_PER_FUNCTION=10 # I have only 1 worker instance based on Steps

Is there a particular limit that each function has for shared memory that I am hitting?

Reduce Docker image sizes

Images are currently bloated with lots of set-up artifacts. Sharing a base image is useful for convenience but introduces things which are unnecessary elsewhere (like the toolchain). This can be tidied up with multi-stage builds.

poly_bench binary not found

After I run ./bin/build_polybench_native.sh, a lot of things get built, but poly_bench isn't found, and it doesn't seem to have been built based on the logs.

/usr/local/code/faasm master 15s
❯ poly_bench all 5 5
zsh: command not found: poly_bench

Here are the logs:

/usr/local/code/faasm master
❯ ./bin/build_polybench_native.sh
Running release type Release
-- The C compiler identification is Clang 9.0.1
-- The CXX compiler identification is Clang 9.0.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found LLVM 9.0.1
-- The ASM compiler identification is Clang
-- Found assembler: /usr/bin/clang
-- Performing Test CXX_HAS_GSPLIT_DWARF
-- Performing Test CXX_HAS_GSPLIT_DWARF - Success
-- Performing Test LINKER_HAS_WL___GDB_INDEX
-- Performing Test LINKER_HAS_WL___GDB_INDEX - Failed
-- Performing Test CXX_HAS_WSWITCH_ENUM
-- Performing Test CXX_HAS_WSWITCH_ENUM - Success
-- Performing Test CXX_HAS_WSWITCH_DEFAULT
-- Performing Test CXX_HAS_WSWITCH_DEFAULT - Success
-- Performing Test CXX_HAS_WNULL_DEREFERENCE
-- Performing Test CXX_HAS_WNULL_DEREFERENCE - Success
-- Performing Test CXX_HAS_WDUPLICATED_COND
-- Performing Test CXX_HAS_WDUPLICATED_COND - Failed
-- Performing Test CXX_HAS_WDUPLICATED_BRANCHES
-- Performing Test CXX_HAS_WDUPLICATED_BRANCHES - Failed
-- Performing Test CXX_HAS_WLOGICAL_OP
-- Performing Test CXX_HAS_WLOGICAL_OP - Failed
-- Performing Test CXX_HAS_WNON_VIRTUAL_DTOR
-- Performing Test CXX_HAS_WNON_VIRTUAL_DTOR - Success
-- Performing Test CXX_HAS_WRESTRICT
-- Performing Test CXX_HAS_WRESTRICT - Failed
-- Performing Test CXX_HAS_WDOUBLE_PROMOTION
-- Performing Test CXX_HAS_WDOUBLE_PROMOTION - Success
-- Performing Test CXX_HAS_WNO_MISSING_FIELD_INITIALIZERS
-- Performing Test CXX_HAS_WNO_MISSING_FIELD_INITIALIZERS - Success
-- Performing Test CXX_HAS_WNO_UNUSED_PARAMETER
-- Performing Test CXX_HAS_WNO_UNUSED_PARAMETER - Success
You have called ADD_LIBRARY for library libWAVM without any source files. This typically indicates a problem with your CMakeLists.txt file
-- Performing Test C_HAS_WERROR_UNGUARDED_AVAILABILITY_NEW
-- Performing Test C_HAS_WERROR_UNGUARDED_AVAILABILITY_NEW - Success
-- Looking for futimens
-- Looking for futimens - found
-- Looking for utimensat
-- Looking for utimensat - found
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.6m.so (found version "3.6.9") 
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found Protobuf: /usr/lib/libprotobuf.so;-lpthread (found version "3.6.0") 
-- Found Protobuf: /usr/lib/libprotobuf.so;-lpthread;-lpthread (found version "3.6.0") 
-- Protobuf_LIBRARIES=/usr/lib/libprotobuf.so;-lpthread;-lpthread
-- RapidJSON found. Headers: /usr/local/lib/cmake/RapidJSON/../../../include
-- Configuring done
-- Generating done
-- Build files have been written to: /usr/local/code/faasm/build/polybench_native
Scanning dependencies of target poly_ludcmp
Scanning dependencies of target poly_adi
Scanning dependencies of target poly_floyd-warshall
Scanning dependencies of target poly_seidel-2d
Scanning dependencies of target poly_deriche
Scanning dependencies of target poly_jacobi-1d
Scanning dependencies of target poly_gramschmidt
Scanning dependencies of target poly_atax
Scanning dependencies of target poly_covariance
Scanning dependencies of target poly_jacobi-2d
Scanning dependencies of target poly_3mm
Scanning dependencies of target poly_nussinov
Scanning dependencies of target poly_correlation
Scanning dependencies of target poly_trisolv
Scanning dependencies of target poly_bicg
Scanning dependencies of target poly_lu
Scanning dependencies of target poly_doitgen
Scanning dependencies of target poly_heat-3d
Scanning dependencies of target poly_2mm
Scanning dependencies of target poly_mvt
Scanning dependencies of target poly_durbin
Scanning dependencies of target poly_fdtd-2d
Scanning dependencies of target poly_cholesky
[  0%] Building C object func/polybench/CMakeFiles/poly_jacobi-2d.dir/stencils/jacobi-2d/jacobi-2d.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_durbin.dir/linear-algebra/solvers/durbin/durbin.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_3mm.dir/linear-algebra/kernels/3mm/3mm.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_trisolv.dir/linear-algebra/solvers/trisolv/trisolv.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_seidel-2d.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_doitgen.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_covariance.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_cholesky.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_bicg.dir/linear-algebra/kernels/bicg/bicg.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_doitgen.dir/linear-algebra/kernels/doitgen/doitgen.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_atax.dir/linear-algebra/kernels/atax/atax.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_deriche.dir/medley/deriche/deriche.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_ludcmp.dir/linear-algebra/solvers/ludcmp/ludcmp.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_floyd-warshall.dir/medley/floyd-warshall/floyd-warshall.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_heat-3d.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_jacobi-1d.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_trisolv.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_cholesky.dir/linear-algebra/solvers/cholesky/cholesky.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_2mm.dir/linear-algebra/kernels/2mm/2mm.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_seidel-2d.dir/stencils/seidel-2d/seidel-2d.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_3mm.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_durbin.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_floyd-warshall.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_atax.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_ludcmp.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_gramschmidt.dir/linear-algebra/solvers/gramschmidt/gramschmidt.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_jacobi-2d.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_bicg.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_nussinov.dir/medley/nussinov/nussinov.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_covariance.dir/datamining/covariance/covariance.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_jacobi-1d.dir/stencils/jacobi-1d/jacobi-1d.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_mvt.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_fdtd-2d.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_lu.dir/utilities/polybench.c.o
[  0%] Building C object func/polybench/CMakeFiles/poly_fdtd-2d.dir/stencils/fdtd-2d/fdtd-2d.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_2mm.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_adi.dir/stencils/adi/adi.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_lu.dir/linear-algebra/solvers/lu/lu.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_nussinov.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_correlation.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_mvt.dir/linear-algebra/kernels/mvt/mvt.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_deriche.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_correlation.dir/datamining/correlation/correlation.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_gramschmidt.dir/utilities/polybench.c.o
[  9%] Building C object func/polybench/CMakeFiles/poly_heat-3d.dir/stencils/heat-3d/heat-3d.c.o
[ 18%] Building C object func/polybench/CMakeFiles/poly_adi.dir/utilities/polybench.c.o
[ 18%] Linking C executable ../../bin/poly_adi
[ 18%] Linking C executable ../../bin/poly_nussinov
[ 27%] Linking C executable ../../bin/poly_gramschmidt
[ 45%] Linking C executable ../../bin/poly_doitgen
[ 45%] Linking C executable ../../bin/poly_atax
[ 45%] Linking C executable ../../bin/poly_jacobi-2d
[ 45%] Linking C executable ../../bin/poly_bicg
[ 45%] Linking C executable ../../bin/poly_heat-3d
[ 36%] Linking C executable ../../bin/poly_fdtd-2d
[ 36%] Linking C executable ../../bin/poly_durbin
[ 72%] Linking C executable ../../bin/poly_jacobi-1d
[ 90%] Linking C executable ../../bin/poly_cholesky
[ 36%] Linking C executable ../../bin/poly_floyd-warshall
[ 45%] Linking C executable ../../bin/poly_correlation
[ 45%] Linking C executable ../../bin/poly_deriche
[ 54%] Linking C executable ../../bin/poly_covariance
[100%] Linking C executable ../../bin/poly_mvt
[100%] Linking C executable ../../bin/poly_2mm
[ 54%] Linking C executable ../../bin/poly_3mm
[ 63%] Linking C executable ../../bin/poly_lu
[ 90%] Linking C executable ../../bin/poly_seidel-2d
[100%] Linking C executable ../../bin/poly_trisolv
[ 90%] Linking C executable ../../bin/poly_ludcmp
[100%] Built target poly_doitgen
[100%] Built target poly_mvt
[100%] Built target poly_gramschmidt
[100%] Built target poly_fdtd-2d
[100%] Built target poly_deriche
[100%] Built target poly_durbin
[100%] Built target poly_jacobi-1d
[100%] Built target poly_floyd-warshall
[100%] Built target poly_covariance
[100%] Built target poly_bicg
[100%] Built target poly_adi
[100%] Built target poly_seidel-2d
[100%] Built target poly_2mm
[100%] Built target poly_heat-3d
[100%] Built target poly_trisolv
[100%] Built target poly_jacobi-2d
[100%] Built target poly_cholesky
[100%] Built target poly_atax
[100%] Built target poly_nussinov
[100%] Built target poly_correlation
[100%] Built target poly_3mm
[100%] Built target poly_ludcmp
[100%] Built target poly_lu
Scanning dependencies of target polybench_all_funcs
[100%] Built target polybench_all_funcs
/usr/local/code/faasm

Manifest Issue

Hello,

Recently when I tried to start a Faasm cluster by the docker-compose up --scale worker=2 command, I had the following error:

ERROR: manifest for faasm/upload:0.4.3 not found: manifest unknown: manifest unknown

I would truly appreciate it if the issue could be resolved soon.

Best Regards,
Mengmei

No such file or directory ~/faasm/results/runtime-bench-time.csv

When I run inv bench-time, it errors out because this file/directory doesn't exist:

inv bench-time
Traceback (most recent call last):
  File "/usr/local/bin/inv", line 8, in <module>
    sys.exit(program.run())
  File "/usr/local/lib/python3.6/dist-packages/invoke/program.py", line 384, in run
    self.execute()
  File "/usr/local/lib/python3.6/dist-packages/invoke/program.py", line 566, in execute
    executor.execute(*self.tasks)
  File "/usr/local/lib/python3.6/dist-packages/invoke/executor.py", line 129, in execute
    result = call.task(*args, **call.kwargs)
  File "/usr/local/lib/python3.6/dist-packages/invoke/tasks.py", line 127, in __call__
    result = self.body(*args, **kwargs)
  File "/usr/local/code/faasm/tasks/bench_time.py", line 22, in bench_time
    csv_out = open(OUTPUT_FILE, "w")
FileNotFoundError: [Errno 2] No such file or directory: '/home/sean/faasm/results/runtime-bench-time.csv'

I was able to get past this by running the following:

mkdir ~/faasm/results/
touch /home/sean/faasm/results/runtime-bench-time.csv

Building ONNX function from source

I am learning Faasm by rebuilding functions, but I am currently stumped on how to generate the LLVM onnxruntime include directory:

/usr/local/faasm/llvm-sysroot/include/onnxruntime

I presume I need to use clang to emit LLVM IR by compiling the OnnxRuntime source code? Do you maybe have a build script or Makefile that you could share, to help me get started?

How to add custom function for wavm

I want to add a function to the faasm host interface in the wasm runtime,
so I did the following:

My changes

For the faasm project:

In src/wavm/faasm.cpp, add the code below and compile:

WAVM_DEFINE_INTRINSIC_FUNCTION(env, "example", I32, example)
{
    return 1;
}

For the client/cpp project:

In libfaasm/faasm/core.h, add:

int example();

In libfaasm's faasm.imports, add:

example

In func/demo/hello.cpp, change to:

#include "faasm/faasm.h"
#include <stdio.h>

int main(int argc, char* argv[])
{
    printf("example: %d\n", example());

    return 0;
}

and run the command:

inv libfaasm --native && inv libfaasm --native --shared && inv libfaasm
inv dev.cmake && inv dev.cc emulator && inv dev.install emulator 
inv dev.cmake --shared && inv dev.cc emulator --shared && inv dev.install emulator --shared
inv func.compile demo hello && inv func.upload demo hello

Result

But when I run the function that calls example(), I get the following error:

╰─# simple_runner demo hello
05/09/21 02:39:36 [default] (info) Running demo/hello for 1 runs with input ""
05/09/21 02:39:36 [default] (info) Running demo/hello with WAVM
05/09/21 02:39:37 [default] (info) Instantiating module demo/hello  
Assertion failed at /home/yb/code/faasm/build/_deps/wavm_ext-src/Lib/Runtime/Instance.cpp(338): exportedObject && "Trying to export an import without a Runtime::Function (a native function?)"
Call stack:
  /usr/local/lib/libWAVM.so.0!WAVM::Runtime::instantiateModuleInternal(WAVM::Runtime::Compartment*, std::shared_ptr<WAVM::Runtime::Module const> const&, std::vector<WAVM::Runtime::FunctionImportBinding, std::allocator<WAVM::Runtime::FunctionImportBinding> >&&, std::vector<WAVM::Runtime::Table*, std::allocator<WAVM::Runtime::Table*> >&&, std::vector<WAVM::Runtime::Memory*, std::allocator<WAVM::Runtime::Memory*> >&&, std::vector<WAVM::Runtime::Global*, std::allocator<WAVM::Runtime::Global*> >&&, std::vector<WAVM::Runtime::ExceptionType*, std::allocator<WAVM::Runtime::ExceptionType*> >&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::shared_ptr<WAVM::Runtime::ResourceQuota> const&)+14206
  /usr/local/lib/libWAVM.so.0!WAVM::Runtime::instantiateModule(WAVM::Runtime::Compartment*, std::shared_ptr<WAVM::Runtime::Module const> const&, std::vector<WAVM::Runtime::Object*, std::allocator<WAVM::Runtime::Object*> >&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::shared_ptr<WAVM::Runtime::ResourceQuota> const&)+243
  simple_runner+433555
  simple_runner+430311
  simple_runner+766719
  simple_runner+355315
  simple_runner+358038
  /lib/x86_64-linux-gnu/libc.so.6!__libc_start_main+230
  simple_runner+354841
[1]    7962 trace trap (core dumped)  simple_runner demo hello

Am I missing some steps?

My environment:

OS: Ubuntu 18.04 LTS
clang: 10.0.1

The client/cpp library and simple_runner run on a bare-metal machine; the upload server and Redis run in Docker.

Polybench Instructions Clarification (inv upload-all)

The instructions include the following:

# Upload (must have an upload server running)
inv upload-all

Does this mean that we need to have an upload server running before this command? If so, how do we start one?

This is what I see running this:

/usr/local/code/faasm/toolchain master
❯ inv upload-all
Creating config file at /home/sean/faasm/faasm.ini
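One hedged way to get an upload server running locally is to start just the upload service from the docker-compose setup; the service name here is an assumption based on the upload_1 container that appears in the logs of a later issue below:

# Bring up only the upload service (and its declared dependencies) from the compose file
docker-compose up -d upload

# Then retry the upload from the toolchain directory
inv upload-all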

Getting started for newcomers

Hi, I'm a CS Master's student in Padova (Italy) interested in working on a thesis about serverless computing, and I found this project. I have good personal experience with JavaScript and frontend technologies, but I'm not sure where to get started with WebAssembly and this project in particular. Do you have any advice or resources to share?

I'd also love to know if you have a roadmap. I'm currently evaluating the idea of focusing the whole thesis on this cool serverless-on-WASM stack, and I'd hopefully like to help with some development once I get my head around the technologies and the source code.

I opened an issue because others might be interested in contributing; serverless is a hot topic!
Thanks!

Modern CMake

The CMake files in this project and its submodules are not up to scratch with modern CMake practices (as outlined here and here).

A to-do list of low-hanging fruit:

  • Remove all use of global functions like include_directories and link_directories
  • Add PUBLIC and PRIVATE to target properties and linking (using PRIVATE as default)
  • Remove use of source file globbing
  • Look for and avoid direct modification of CMAKE_CXX_FLAGS
  • Add aliases for Faasm/ Faabric libraries with faasm:: and faabric::
  • Remove spurious variables like LIB_FILES and HEADERS, instead list things directly in add_library where possible
  • Make sure we install all dependencies under CMAKE_INSTALL_PREFIX
  • Instead of target_link_directories separately in faabric_lib, add a link to CMAKE_INSTALL_PREFIX/include
  • Consider using a CMake linter like cmake-lint or polysquare-cmake-linter

The sgx directory is a law unto itself and can be excluded from this tidy-up for now.
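For illustration, a minimal sketch of the target-based pattern the list above is asking for; the target name, source files and linked library are hypothetical, not the project's actual ones:

# A self-contained target: sources listed directly, no globbing, no global include_directories
add_library(faasm_util util.cpp include/faasm/util.h)
add_library(faasm::util ALIAS faasm_util)

# Consumers see the public headers; private usage requirements stay private
target_include_directories(faasm_util
    PUBLIC
        $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
        $<INSTALL_INTERFACE:include>
)

# faabric::faabric is assumed to be an imported/alias target provided by Faabric
target_link_libraries(faasm_util PRIVATE faabric::faabric)

# Install relative to CMAKE_INSTALL_PREFIX rather than hard-coded paths
install(TARGETS faasm_util
        LIBRARY DESTINATION lib
        ARCHIVE DESTINATION lib)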

When running memory benchmarks as root, errors due to missing binaries

Setting up the benchmarks seems to place a number of artifacts in ~/faasm/, but the benchmark instructions suggest running as root, and this errors out because the expected binaries aren't in /root/faasm/bench.

To get past this, I just copied ~/faasm/bench to /root/faasm/bench

Fail to run wasm demo in wasmer

@mfournial @Shillaker I like the idea of leveraging wasm as a runtime, working with Knative, to resolve the issues of performance and state. It is pretty straightforward to deploy Faasm on k8s/Knative and run a demo to understand the workflow. When I wanted to dig into Faasm's WAVM integration, I tried to run the demo (hello) with another wasm runtime, wasmer, and it failed with an error message about "wasi_snapshot_preview1".

I opened this issue to understand how to develop wasm programs for Faasm and how to test them standalone, outside Faasm.
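A hedged sketch of testing a function standalone, outside Faasm, is to build a plain WASI version of it with wasi-sdk and run that under wasmer; the wasi-sdk install path below is an assumption:

# Build a plain WASI binary with the wasi-sdk toolchain (its default target is wasm32-wasi)
/opt/wasi-sdk/bin/clang++ -O2 hello.cpp -o hello.wasm

# Run it with wasmer
wasmer run hello.wasm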

Error in matplotlib (3.3.2) requirements

Currently, when installing faasm's python environment, I get the following error:

ERROR: matplotlib 3.3.2 has requirement certifi>=2020.06.20, but you'll have certifi 2019.11.28 which is incompatible.

In order to replicate it, just delete the current venv folder and install the requirements again:

cd <FAASM_ROOT>
rm -r venv/
source workon.sh
pip install -r faasmcli/requirements.txt

Looking around, the issue is known to matplotlib's maintainers; it was raised in matplotlib/matplotlib#18337 and addressed in matplotlib/matplotlib#18636.
The PR is tagged for v3.3, so the fix has not yet made it into a release.

As far as I know, pip should pick up the most up-to-date version once the new release is out.
In the meantime, I don't think the error is fatal, but I wanted to document the issue while it persists and ensure it does not recur once the new release comes around.
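A hedged workaround while waiting for the release is to pin certifi explicitly in faasmcli/requirements.txt, using the minimum version named in the error message above:

certifi>=2020.06.20

This keeps pip's resolver happy without changing the matplotlib pin itself.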

Does Faasm support Mac M1?

Hello, I followed the instructions from the README, but it seems that Faasm does not support my Mac M1. The error log:

upload_1       | 05/15/21 01:06:05 [default] (info) Uploading demo/hello
upload_1       | Target X86 CPU (athlon-xp) does not support SSE 4.1, which WAVM requires for WebAssembly SIMD code.
upload_1       |
upload_1       | qemu: uncaught target signal 6 (Aborted) - core dumped
upload_1       | Aborted
upload_1 exited with code 134

Library not found error

Some checks fail when running any binary linked against pistache, with the following error:

/build/faasm/bin/codegen_shared_obj: error while loading shared libraries: libpistache.so.0: cannot open shared object file: No such file or directory

This can be replicated (sometimes) locally running:

inv dev.tools --clean
inv compile.user mpi
inv codegen.user mpi

Browsing the filesystem, we can indeed find the library installed at /usr/local/lib/libpistache.so.0. Thus, this seems to be a dynamic linker issue; in particular, the loader is using an out-of-date cache. Locally, the issue is solved by updating the linker cache:

sudo ldconfig
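A hedged way to confirm that a stale loader cache is really the culprit, before and after refreshing it:

# Stale cache: no entry, even though the file exists in /usr/local/lib
ldconfig -p | grep pistache

# Rebuild the cache, then check again; libpistache.so.0 should now be listed
sudo ldconfig
ldconfig -p | grep pistache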
