Comments (22)

snnn commented on May 24, 2024

They are slightly different: "float16" vs. "bfloat16".

   check_cxx_compiler_flag(-march=armv8.2-a+bf16 HAS_ARM64_BFLOAT16)
   if(NOT HAS_ARM64_BFLOAT16)
     message(FATAL_ERROR  "The compiler doesn't support BFLOAT16!!!")
   endif()
   check_cxx_compiler_flag(-march=armv8.2-a+fp16 HAS_ARM64_FLOAT16)
   if(NOT HAS_ARM64_FLOAT16)
     message(FATAL_ERROR  "The compiler doesn't support FLOAT16!!!")
   endif()

from onnxruntime.

ChthonicOne commented on May 24, 2024

Yep, it failed on MLAS. Going to start the backup process for the SD card now; it'll probably take about an hour. I'll probably be back tomorrow with more information if things are working in 22.04 Radxa-Ubuntu.

hariharans29 commented on May 24, 2024

Are you able to build the default CPU EP after commenting out that check in CMakeLists.txt? My guess is that on ARM we require bfloat16 support to compile some MLAS kernels, which is why we have that check. Tagging @yufenglee @snnn to comment on the compiler bfloat16 support requirement for compiling the default CPU EP on ARM.

ChthonicOne commented on May 24, 2024

Attempting now. Surprisingly, there are two consecutive checks for the exact same thing in that CMakeLists.txt; no clue why the same thing is checked twice in a row. Might want to simplify that while you are at it.

snnn commented on May 24, 2024

Would you mind upgrading your Ubuntu version from 20.04 to 22.04?

ChthonicOne commented on May 24, 2024

Right now, my project requires that I remain on 20.04. If upgrading is required to make this work, I can first test it on a clone of my SD card, then report to my project manager so they can decide whether we need to step up.

For now, I'm going to wait and see if the project finishes building. If it doesn't, I'll clone the SD card tomorrow and try stepping up then.

ChthonicOne commented on May 24, 2024

An update:

-- Performing Test HAS_ARM64_BFLOAT16
-- Performing Test HAS_ARM64_BFLOAT16 - Success
-- Performing Test HAS_ARM64_FLOAT16
-- Performing Test HAS_ARM64_FLOAT16 - Success

I almost didn't have enough room to perform the do-release-upgrade on these tiny SD cards. So many packages wanted upgrading that one card actually ran out of space, but it was non-critical and I was able to update it afterwards once space had been cleared.

Now, as I said before, I'm not sure that I even need BFLOAT16s at all in my use case. I think there should be a means to disable them in the build scripts for those who don't need them and, for some reason, can't build with a non-IEEE format.
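To illustrate, one way such an opt-out could look in the build scripts, reusing the check quoted above (the option name onnxruntime_DISABLE_BFLOAT16 is hypothetical, not an existing flag; this is a sketch, not a tested patch):

    # Hypothetical option; not an existing onnxruntime build flag.
    option(onnxruntime_DISABLE_BFLOAT16 "Skip the ARM64 bfloat16 compiler check" OFF)
    if(NOT onnxruntime_DISABLE_BFLOAT16)
      check_cxx_compiler_flag(-march=armv8.2-a+bf16 HAS_ARM64_BFLOAT16)
      if(NOT HAS_ARM64_BFLOAT16)
        message(FATAL_ERROR "The compiler doesn't support BFLOAT16!!!")
      endif()
    endif()

Of course, the MLAS kernels that actually use bfloat16 would also have to be compiled out behind the same option for this to build.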

ChthonicOne commented on May 24, 2024

Hrm... I have a problem. I've now run into this error:

In file included from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.cc:6:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h: In member function ‘onnxruntime::common::Status onnxruntime::armnn_ep::Gemm<T>::Compute(onnxruntime::OpKernelContext*) const’:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:63:8: warning: suggest explicit braces to avoid ambiguous ‘else’ [-Wdangling-else]
   63 |     if (X) LOGS_DEFAULT(VERBOSE) << "X " << X->Shape().ToString().c_str();
      |        ^
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:64:8: warning: suggest explicit braces to avoid ambiguous ‘else’ [-Wdangling-else]
   64 |     if (W) LOGS_DEFAULT(VERBOSE) << "W " << W->Shape().ToString().c_str();
      |        ^
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:65:8: warning: suggest explicit braces to avoid ambiguous ‘else’ [-Wdangling-else]
   65 |     if (B) LOGS_DEFAULT(VERBOSE) << "B " << B->Shape().ToString().c_str();
      |        ^
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:109:53: error: no matching function for call to ‘armnn::INetwork::AddFullyConnectedLayer(armnn::FullyConnectedDescriptor&, armnn::ConstTensor&, armnn::Optional<armnn::ConstTensor>, const char [9])’
  109 |         fc_armnn = myNetwork->AddFullyConnectedLayer(fcDescriptor,
      |                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
  110 |                                                      weights,
      |                                                      ~~~~~~~~
  111 |                                                      armnn::Optional<armnn::ConstTensor>(bias),
      |                                                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  112 |                                                      "fc_armnn");
      |                                                      ~~~~~~~~~~~
In file included from /usr/local/include/armnn/ArmNN.hpp:11,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/armnn_common.h:9,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.cc:5:
/usr/local/include/armnn/INetwork.hpp:477:24: note: candidate: ‘armnn::IConnectableLayer* armnn::INetwork::AddFullyConnectedLayer(const armnn::FullyConnectedDescriptor&, const char*)’
  477 |     IConnectableLayer* AddFullyConnectedLayer(const FullyConnectedDescriptor& fullyConnectedDescriptor,
      |                        ^~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/armnn/INetwork.hpp:477:24: note:   candidate expects 2 arguments, 4 provided
In file included from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.cc:6:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:114:53: error: no matching function for call to ‘armnn::INetwork::AddFullyConnectedLayer(armnn::FullyConnectedDescriptor&, armnn::ConstTensor&, armnn::EmptyOptional, const char [9])’
  114 |         fc_armnn = myNetwork->AddFullyConnectedLayer(fcDescriptor,
      |                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
  115 |                                                      weights,
      |                                                      ~~~~~~~~
  116 |                                                      armnn::EmptyOptional(),
      |                                                      ~~~~~~~~~~~~~~~~~~~~~~~
  117 |                                                      "fc_armnn");
      |                                                      ~~~~~~~~~~~
In file included from /usr/local/include/armnn/ArmNN.hpp:11,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/armnn_common.h:9,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.cc:5:
/usr/local/include/armnn/INetwork.hpp:477:24: note: candidate: ‘armnn::IConnectableLayer* armnn::INetwork::AddFullyConnectedLayer(const armnn::FullyConnectedDescriptor&, const char*)’
  477 |     IConnectableLayer* AddFullyConnectedLayer(const FullyConnectedDescriptor& fullyConnectedDescriptor,
      |                        ^~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/armnn/INetwork.hpp:477:24: note:   candidate expects 2 arguments, 4 provided
In file included from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.cc:6:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h: In instantiation of ‘onnxruntime::common::Status onnxruntime::armnn_ep::Gemm<T>::Compute(onnxruntime::OpKernelContext*) const [with T = float]’:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:35:10:   required from here
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:103:23: warning: narrowing conversion of ‘(element_type)(&((const onnxruntime::Tensor*)B)->onnxruntime::Tensor::Shape())->onnxruntime::TensorShape::GetDims().gsl::span<const long int>::operator[](1)’ from ‘element_type’ {aka ‘long int’} to ‘unsigned int’ [-Wnarrowing]
  103 |             biasShape = {B->Shape().GetDims()[1]};
      |             ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:103:23: warning: narrowing conversion of ‘(&((const onnxruntime::Tensor*)B)->onnxruntime::Tensor::Shape())->onnxruntime::TensorShape::GetDims().gsl::span<const long int>::operator[](1)’ from ‘gsl::span<const long int>::element_type’ {aka ‘const long int’} to ‘unsigned int’ [-Wnarrowing]
gmake[2]: *** [CMakeFiles/onnxruntime_providers_armnn.dir/build.make:132: CMakeFiles/onnxruntime_providers_armnn.dir/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.cc.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:2063: CMakeFiles/onnxruntime_providers_armnn.dir/all] Error 2

ChthonicOne commented on May 24, 2024

From INetwork.hpp:

    /// Adds a fully connected layer to the network.
    /// @param fullyConnectedDescriptor - Description of the fully connected layer.
    /// @return - Interface for configuring the layer.
    ///
    /// @note Weights and biases are passed in as inputs. If they are constant tensors you can simply store
    ///       them in a ConstantLayer as seen below. A full example can be found in samples/SimpleSample.cpp.
    ///
    /// @code
    /// // Make sure the IsConstant flag is set on the weightsInfo before passing it to the ConstTensor.
    /// ConstTensor weights(weightsInfo, weightsData);
    ///
    /// // Constant layer that now holds weights data for FullyConnected
    /// IConnectableLayer* const constantWeightsLayer = myNetwork->AddConstantLayer(weights, "weights");
    ///
    /// FullyConnectedDescriptor fullyConnectedDesc;
    /// IConnectableLayer* const fullyConnectedLayer = myNetwork->AddFullyConnectedLayer(fullyConnectedDesc,
    ///                                                                                  "fully connected");
    /// IConnectableLayer* InputLayer = myNetwork->AddInputLayer(0);
    /// InputLayer->GetOutputSlot(0).Connect(fullyConnectedLayer->GetInputSlot(0));
    /// constantWeightsLayer->GetOutputSlot(0).Connect(fullyConnectedLayer->GetInputSlot(1));
    /// @endcode
    IConnectableLayer* AddFullyConnectedLayer(const FullyConnectedDescriptor& fullyConnectedDescriptor,
                                              const char* name = nullptr);

There is no other method "AddFullyConnectedLayer" in class "INetwork", so I don't know what you are trying to access here. I built the most recent armnn repository 2 days ago; the build itself took 2 days.

ChthonicOne commented on May 24, 2024

From the sample code, it looks like you have to store the weights in a constant layer before you make the fully connected layer, then make a second constant layer with the bias, and connect both as inputs to the layer.
I'm going to try this myself in the code and see if it compiles as a work-around.
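Based on that doc comment, a sketch of what the work-around in gemm.h might look like (fcDescriptor, weights, bias, and myNetwork are the names already used in gemm.h; the slot numbers and exact wiring are my assumption until it actually compiles):

    // Assumed fix: weights and bias now enter FullyConnected as inputs via
    // constant layers instead of being passed to AddFullyConnectedLayer.
    fc_armnn = myNetwork->AddFullyConnectedLayer(fcDescriptor, "fc_armnn");

    // Make sure IsConstant is set on the TensorInfo before this point.
    armnn::IConnectableLayer* weightsLayer =
        myNetwork->AddConstantLayer(weights, "fc_weights");
    weightsLayer->GetOutputSlot(0).SetTensorInfo(weights.GetInfo());
    weightsLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(1));

    if (fcDescriptor.m_BiasEnabled) {
      armnn::IConnectableLayer* biasLayer =
          myNetwork->AddConstantLayer(bias, "fc_bias");
      biasLayer->GetOutputSlot(0).SetTensorInfo(bias.GetInfo());
      biasLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(2));
    }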

ChthonicOne commented on May 24, 2024

The code fix for gemm.cc works. I'll post it in a second, but now conv.cc has issues:

In file included from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:9:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:122:66: error: cannot convert ‘std::vector<long int>’ to ‘onnxruntime::TensorShapeVector&’ {aka ‘absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >&’}
  122 |   ORT_RETURN_IF_ERROR(conv_attrs_.ComputeKernelShape(W->Shape(), kernel_shape));
      |                                                                  ^~~~~~~~~~~~
      |                                                                  |
      |                                                                  std::vector<long int>
/home/rock/temp/onnxruntime/include/onnxruntime/core/common/common.h:225:21: note: in definition of macro ‘ORT_RETURN_IF_ERROR_SESSIONID’
  225 |     auto _status = (expr);                                                                                             \
      |                     ^~~~
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:122:3: note: in expansion of macro ‘ORT_RETURN_IF_ERROR’
  122 |   ORT_RETURN_IF_ERROR(conv_attrs_.ComputeKernelShape(W->Shape(), kernel_shape));
      |   ^~~~~~~~~~~~~~~~~~~
In file included from /home/rock/temp/onnxruntime/onnxruntime/core/providers/cpu/nn/conv.h:7,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.h:7,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:14:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/cpu/nn/conv_attributes.h:76:81: note:   initializing argument 2 of ‘onnxruntime::common::Status onnxruntime::ConvAttributes::ComputeKernelShape(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) const’
   76 |   Status ComputeKernelShape(const TensorShape& weight_shape, TensorShapeVector& kernel_shape, bool weight_channels_last = false) const {
      |                                                              ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In file included from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:9:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:140:35: error: ‘const struct onnxruntime::ConvAttributes’ has no member named ‘InferOutputShape’; did you mean ‘InferPadsAndOutputShape’?
  140 |   ORT_RETURN_IF_ERROR(conv_attrs_.InferOutputShape(input_shape, kernel_shape, strides, dilations, pads, Y_dims));
      |                                   ^~~~~~~~~~~~~~~~
/home/rock/temp/onnxruntime/include/onnxruntime/core/common/common.h:225:21: note: in definition of macro ‘ORT_RETURN_IF_ERROR_SESSIONID’
  225 |     auto _status = (expr);                                                                                             \
      |                     ^~~~
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:140:3: note: in expansion of macro ‘ORT_RETURN_IF_ERROR’
  140 |   ORT_RETURN_IF_ERROR(conv_attrs_.InferOutputShape(input_shape, kernel_shape, strides, dilations, pads, Y_dims));
      |   ^~~~~~~~~~~~~~~~~~~
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:161:81: error: could not convert ‘pads’ from ‘onnxruntime::ConvAttributes::ConvPadVector’ {aka ‘absl::lts_20240116::InlinedVector<long int, 10, std::allocator<long int> >’} to ‘std::vector<long int>’
  161 |     armnn::Convolution2dDescriptor convolutionDescriptor = createConvDescriptor(pads, dilations, strides, biasEnabled);
      |                                                                                 ^~~~
      |                                                                                 |
      |                                                                                 onnxruntime::ConvAttributes::ConvPadVector {aka absl::lts_20240116::InlinedVector<long int, 10, std::allocator<long int> >}
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:185:72: error: no matching function for call to ‘armnn::INetwork::AddDepthwiseConvolution2dLayer(armnn::DepthwiseConvolution2dDescriptor&, armnn::ConstTensor&, armnn::Optional<armnn::ConstTensor>, const char [28])’
  185 |           convolution_armnn = myNetwork->AddDepthwiseConvolution2dLayer(depthwiseDescriptor,
      |                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
  186 |                                                                         weights,
      |                                                                         ~~~~~~~~
  187 |                                                                         armnn::Optional<armnn::ConstTensor>(bias),
      |                                                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  188 |                                                                         "depthwise_convolution_armnn");
      |                                                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/armnn/ArmNN.hpp:11,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.h:10,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:14:
/usr/local/include/armnn/INetwork.hpp:417:24: note: candidate: ‘armnn::IConnectableLayer* armnn::INetwork::AddDepthwiseConvolution2dLayer(const armnn::DepthwiseConvolution2dDescriptor&, const char*)’
  417 |     IConnectableLayer* AddDepthwiseConvolution2dLayer(const DepthwiseConvolution2dDescriptor& convolution2dDescriptor,
      |                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/armnn/INetwork.hpp:417:24: note:   candidate expects 2 arguments, 4 provided
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:190:72: error: no matching function for call to ‘armnn::INetwork::AddDepthwiseConvolution2dLayer(armnn::DepthwiseConvolution2dDescriptor&, armnn::ConstTensor&, armnn::EmptyOptional, const char [28])’
  190 |           convolution_armnn = myNetwork->AddDepthwiseConvolution2dLayer(depthwiseDescriptor,
      |                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
  191 |                                                                         weights,
      |                                                                         ~~~~~~~~
  192 |                                                                         armnn::EmptyOptional(),
      |                                                                         ~~~~~~~~~~~~~~~~~~~~~~~
  193 |                                                                         "depthwise_convolution_armnn");
      |                                                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/armnn/ArmNN.hpp:11,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.h:10,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:14:
/usr/local/include/armnn/INetwork.hpp:417:24: note: candidate: ‘armnn::IConnectableLayer* armnn::INetwork::AddDepthwiseConvolution2dLayer(const armnn::DepthwiseConvolution2dDescriptor&, const char*)’
  417 |     IConnectableLayer* AddDepthwiseConvolution2dLayer(const DepthwiseConvolution2dDescriptor& convolution2dDescriptor,
      |                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/armnn/INetwork.hpp:417:24: note:   candidate expects 2 arguments, 4 provided
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:209:61: error: no matching function for call to ‘armnn::INetwork::AddConvolution2dLayer(armnn::Convolution2dDescriptor&, armnn::ConstTensor&, armnn::Optional<armnn::ConstTensor>, const char [18])’
  209 |         convolution_armnn = myNetwork->AddConvolution2dLayer(convolutionDescriptor,
      |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
  210 |                                                              weights,
      |                                                              ~~~~~~~~
  211 |                                                              armnn::Optional<armnn::ConstTensor>(bias),
      |                                                              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  212 |                                                              "convolution_armnn");
      |                                                              ~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/armnn/ArmNN.hpp:11,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.h:10,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:14:
/usr/local/include/armnn/INetwork.hpp:396:24: note: candidate: ‘armnn::IConnectableLayer* armnn::INetwork::AddConvolution2dLayer(const armnn::Convolution2dDescriptor&, const char*)’
  396 |     IConnectableLayer* AddConvolution2dLayer(const Convolution2dDescriptor& convolution2dDescriptor,
      |                        ^~~~~~~~~~~~~~~~~~~~~
/usr/local/include/armnn/INetwork.hpp:396:24: note:   candidate expects 2 arguments, 4 provided
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:214:61: error: no matching function for call to ‘armnn::INetwork::AddConvolution2dLayer(armnn::Convolution2dDescriptor&, armnn::ConstTensor&, armnn::EmptyOptional, const char [18])’
  214 |         convolution_armnn = myNetwork->AddConvolution2dLayer(convolutionDescriptor,
      |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
  215 |                                                              weights,
      |                                                              ~~~~~~~~
  216 |                                                              armnn::EmptyOptional(),
      |                                                              ~~~~~~~~~~~~~~~~~~~~~~~
  217 |                                                              "convolution_armnn");
      |                                                              ~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/armnn/ArmNN.hpp:11,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.h:10,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:14:
/usr/local/include/armnn/INetwork.hpp:396:24: note: candidate: ‘armnn::IConnectableLayer* armnn::INetwork::AddConvolution2dLayer(const armnn::Convolution2dDescriptor&, const char*)’
  396 |     IConnectableLayer* AddConvolution2dLayer(const Convolution2dDescriptor& convolution2dDescriptor,
      |                        ^~~~~~~~~~~~~~~~~~~~~
/usr/local/include/armnn/INetwork.hpp:396:24: note:   candidate expects 2 arguments, 4 provided
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h: In instantiation of ‘onnxruntime::common::Status onnxruntime::armnn_ep::Gemm<T>::Compute(onnxruntime::OpKernelContext*) const [with T = float]’:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:35:10:   required from here
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:106:23: warning: narrowing conversion of ‘(element_type)(&((const onnxruntime::Tensor*)B)->onnxruntime::Tensor::Shape())->onnxruntime::TensorShape::GetDims().gsl::span<const long int>::operator[](1)’ from ‘element_type’ {aka ‘long int’} to ‘unsigned int’ [-Wnarrowing]
  106 |             biasShape = {B->Shape().GetDims()[1]};
      |             ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/math/gemm.h:106:23: warning: narrowing conversion of ‘(&((const onnxruntime::Tensor*)B)->onnxruntime::Tensor::Shape())->onnxruntime::TensorShape::GetDims().gsl::span<const long int>::operator[](1)’ from ‘gsl::span<const long int>::element_type’ {aka ‘const long int’} to ‘unsigned int’ [-Wnarrowing]
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc: In instantiation of ‘onnxruntime::common::Status onnxruntime::armnn_ep::Conv<T>::Compute(onnxruntime::OpKernelContext*) const [with T = float]’:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:93:8:   required from here
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:128:24: error: no matching function for call to ‘std::vector<long int>::vector(const TensorShapeVector&)’
  128 |   std::vector<int64_t> dilations(conv_attrs_.dilations);
      |                        ^~~~~~~~~
In file included from /usr/include/c++/11/vector:67,
                 from /usr/include/c++/11/functional:62,
                 from /usr/include/c++/11/pstl/glue_algorithm_defs.h:13,
                 from /usr/include/c++/11/algorithm:74,
                 from /home/rock/temp/onnxruntime/include/onnxruntime/core/common/common.h:22,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:9:
/usr/include/c++/11/bits/stl_vector.h:653:9: note: candidate: ‘template<class _InputIterator, class> std::vector<_Tp, _Alloc>::vector(_InputIterator, _InputIterator, const allocator_type&) [with _InputIterator = _InputIterator; <template-parameter-2-2> = <template-parameter-1-2>; _Tp = long int; _Alloc = std::allocator<long int>]’
  653 |         vector(_InputIterator __first, _InputIterator __last,
      |         ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:653:9: note:   template argument deduction/substitution failed:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:128:24: note:   candidate expects 3 arguments, 1 provided
  128 |   std::vector<int64_t> dilations(conv_attrs_.dilations);
      |                        ^~~~~~~~~
In file included from /usr/include/c++/11/vector:67,
                 from /usr/include/c++/11/functional:62,
                 from /usr/include/c++/11/pstl/glue_algorithm_defs.h:13,
                 from /usr/include/c++/11/algorithm:74,
                 from /home/rock/temp/onnxruntime/include/onnxruntime/core/common/common.h:22,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:9:
/usr/include/c++/11/bits/stl_vector.h:625:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  625 |       vector(initializer_list<value_type> __l,
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:625:43: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘std::initializer_list<long int>’
  625 |       vector(initializer_list<value_type> __l,
      |              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:607:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  607 |       vector(vector&& __rv, const allocator_type& __m)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:607:7: note:   candidate expects 2 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:589:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&, const allocator_type&, std::false_type) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>; std::false_type = std::integral_constant<bool, false>]’
  589 |       vector(vector&& __rv, const allocator_type& __m, false_type)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:589:7: note:   candidate expects 3 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:585:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&, const allocator_type&, std::true_type) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>; std::true_type = std::integral_constant<bool, true>]’
  585 |       vector(vector&& __rv, const allocator_type& __m, true_type) noexcept
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:585:7: note:   candidate expects 3 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:575:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const std::vector<_Tp, _Alloc>&, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  575 |       vector(const vector& __x, const allocator_type& __a)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:575:7: note:   candidate expects 2 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:572:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&) [with _Tp = long int; _Alloc = std::allocator<long int>]’
  572 |       vector(vector&&) noexcept = default;
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:572:14: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘std::vector<long int>&&’
  572 |       vector(vector&&) noexcept = default;
      |              ^~~~~~~~
/usr/include/c++/11/bits/stl_vector.h:553:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const std::vector<_Tp, _Alloc>&) [with _Tp = long int; _Alloc = std::allocator<long int>]’
  553 |       vector(const vector& __x)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:553:28: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘const std::vector<long int>&’
  553 |       vector(const vector& __x)
      |              ~~~~~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:522:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const value_type&, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::value_type = long int; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  522 |       vector(size_type __n, const value_type& __value,
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:522:7: note:   candidate expects 3 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:510:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  510 |       vector(size_type __n, const allocator_type& __a = allocator_type())
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:510:24: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘std::vector<long int>::size_type’ {aka ‘long unsigned int’}
  510 |       vector(size_type __n, const allocator_type& __a = allocator_type())
      |              ~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:497:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  497 |       vector(const allocator_type& __a) _GLIBCXX_NOEXCEPT
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:497:36: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘const allocator_type&’ {aka ‘const std::allocator<long int>&’}
  497 |       vector(const allocator_type& __a) _GLIBCXX_NOEXCEPT
      |              ~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:487:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector() [with _Tp = long int; _Alloc = std::allocator<long int>]’
  487 |       vector() = default;
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:487:7: note:   candidate expects 0 arguments, 1 provided
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:132:24: error: no matching function for call to ‘std::vector<long int>::vector(const TensorShapeVector&)’
  132 |   std::vector<int64_t> strides(conv_attrs_.strides);
      |                        ^~~~~~~
In file included from /usr/include/c++/11/vector:67,
                 from /usr/include/c++/11/functional:62,
                 from /usr/include/c++/11/pstl/glue_algorithm_defs.h:13,
                 from /usr/include/c++/11/algorithm:74,
                 from /home/rock/temp/onnxruntime/include/onnxruntime/core/common/common.h:22,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:9:
/usr/include/c++/11/bits/stl_vector.h:653:9: note: candidate: ‘template<class _InputIterator, class> std::vector<_Tp, _Alloc>::vector(_InputIterator, _InputIterator, const allocator_type&) [with _InputIterator = _InputIterator; <template-parameter-2-2> = <template-parameter-1-2>; _Tp = long int; _Alloc = std::allocator<long int>]’
  653 |         vector(_InputIterator __first, _InputIterator __last,
      |         ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:653:9: note:   template argument deduction/substitution failed:
/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:132:24: note:   candidate expects 3 arguments, 1 provided
  132 |   std::vector<int64_t> strides(conv_attrs_.strides);
      |                        ^~~~~~~
In file included from /usr/include/c++/11/vector:67,
                 from /usr/include/c++/11/functional:62,
                 from /usr/include/c++/11/pstl/glue_algorithm_defs.h:13,
                 from /usr/include/c++/11/algorithm:74,
                 from /home/rock/temp/onnxruntime/include/onnxruntime/core/common/common.h:22,
                 from /home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc:9:
/usr/include/c++/11/bits/stl_vector.h:625:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  625 |       vector(initializer_list<value_type> __l,
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:625:43: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘std::initializer_list<long int>’
  625 |       vector(initializer_list<value_type> __l,
      |              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:607:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  607 |       vector(vector&& __rv, const allocator_type& __m)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:607:7: note:   candidate expects 2 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:589:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&, const allocator_type&, std::false_type) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>; std::false_type = std::integral_constant<bool, false>]’
  589 |       vector(vector&& __rv, const allocator_type& __m, false_type)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:589:7: note:   candidate expects 3 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:585:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&, const allocator_type&, std::true_type) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>; std::true_type = std::integral_constant<bool, true>]’
  585 |       vector(vector&& __rv, const allocator_type& __m, true_type) noexcept
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:585:7: note:   candidate expects 3 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:575:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const std::vector<_Tp, _Alloc>&, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  575 |       vector(const vector& __x, const allocator_type& __a)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:575:7: note:   candidate expects 2 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:572:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&) [with _Tp = long int; _Alloc = std::allocator<long int>]’
  572 |       vector(vector&&) noexcept = default;
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:572:14: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘std::vector<long int>&&’
  572 |       vector(vector&&) noexcept = default;
      |              ^~~~~~~~
/usr/include/c++/11/bits/stl_vector.h:553:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const std::vector<_Tp, _Alloc>&) [with _Tp = long int; _Alloc = std::allocator<long int>]’
  553 |       vector(const vector& __x)
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:553:28: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘const std::vector<long int>&’
  553 |       vector(const vector& __x)
      |              ~~~~~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:522:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const value_type&, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::value_type = long int; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  522 |       vector(size_type __n, const value_type& __value,
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:522:7: note:   candidate expects 3 arguments, 1 provided
/usr/include/c++/11/bits/stl_vector.h:510:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::size_type = long unsigned int; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  510 |       vector(size_type __n, const allocator_type& __a = allocator_type())
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:510:24: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘std::vector<long int>::size_type’ {aka ‘long unsigned int’}
  510 |       vector(size_type __n, const allocator_type& __a = allocator_type())
      |              ~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:497:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const allocator_type&) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::allocator_type = std::allocator<long int>]’
  497 |       vector(const allocator_type& __a) _GLIBCXX_NOEXCEPT
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:497:36: note:   no known conversion for argument 1 from ‘const TensorShapeVector’ {aka ‘const absl::lts_20240116::InlinedVector<long int, 6, std::allocator<long int> >’} to ‘const allocator_type&’ {aka ‘const std::allocator<long int>&’}
  497 |       vector(const allocator_type& __a) _GLIBCXX_NOEXCEPT
      |              ~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/c++/11/bits/stl_vector.h:487:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector() [with _Tp = long int; _Alloc = std::allocator<long int>]’
  487 |       vector() = default;
      |       ^~~~~~
/usr/include/c++/11/bits/stl_vector.h:487:7: note:   candidate expects 0 arguments, 1 provided
gmake[2]: *** [CMakeFiles/onnxruntime_providers_armnn.dir/build.make:160: CMakeFiles/onnxruntime_providers_armnn.dir/home/rock/temp/onnxruntime/onnxruntime/core/providers/armnn/nn/conv.cc.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....

Looks to be a similar problem. It looks like the ArmNN execution provider code has some API incompatibilities with the newer ArmNN release.
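For what it's worth, the no-matching-constructor error at conv.cc:132 happens because `TensorShapeVector` is now an `absl::InlinedVector<int64_t, 6>`, which does not convert implicitly to `std::vector<int64_t>`. One possible workaround (a sketch, not the project's official fix) is to construct through the iterator range, which works for any container of `int64_t`:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a std::vector<int64_t> from any int64_t container (e.g. the
// absl::InlinedVector behind TensorShapeVector) via its iterator range.
// The container type is templated only so this sketch stays self-contained.
template <typename Container>
std::vector<int64_t> ToInt64Vector(const Container& shape) {
  return std::vector<int64_t>(shape.begin(), shape.end());
}
```

At the failing call site this would read `std::vector<int64_t> strides(conv_attrs_.strides.begin(), conv_attrs_.strides.end());`.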

ChthonicOne commented on May 24, 2024

Ok, the code patch to gemm.h was as follows:

...
      armnn::IConnectableLayer* fc_armnn;

      armnn::TensorInfo weightsInfo(weightShape, armnn::DataType::Float32);
      armnn::ConstTensor weights(weightsInfo, w_data);
      armnn::IConnectableLayer* const constantWeightsLayer = myNetwork->AddConstantLayer(weights, "weights");

      armnn::IConnectableLayer* InputLayer;

      if (fcDescriptor.m_BiasEnabled) {
        armnn::TensorShape biasShape = ArmNNTensorShape(B->Shape());
        if (B->Shape().NumDimensions() == 2) {
          if (B->Shape().GetDims()[0] == 1 && B->Shape().GetDims()[1] > 1) {
            biasShape = {B->Shape().GetDims()[1]};
            LOGS_DEFAULT(VERBOSE) << "Bias reshaped to: {" << B->Shape().GetDims()[1] << "}";
          }
        }
        armnn::TensorInfo biasDesc(biasShape, armnn::DataType::Float32);
        armnn::ConstTensor bias(biasDesc, b_data);
        armnn::IConnectableLayer* const constantBiasLayer = myNetwork->AddConstantLayer(bias, "bias");
        fc_armnn = myNetwork->AddFullyConnectedLayer(fcDescriptor,
                                                     "fc_armnn");
        InputLayer = myNetwork->AddInputLayer(0);
        InputLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(0));
        constantWeightsLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(1));
        constantBiasLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(2));
      } else {
        fc_armnn = myNetwork->AddFullyConnectedLayer(fcDescriptor,
                                                     "fc_armnn");
        InputLayer = myNetwork->AddInputLayer(0);
        InputLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(0));
        constantWeightsLayer->GetOutputSlot(0).Connect(fc_armnn->GetInputSlot(1));
      }

      armnn::IConnectableLayer* OutputLayer = myNetwork->AddOutputLayer(0);

      // Input slot 0 of fc_armnn is already connected inside both branches above.
      fc_armnn->GetOutputSlot(0).Connect(OutputLayer->GetInputSlot(0));
...

Keep in mind that I am working over an SSH connection with command-line input to these files, and my only means of error checking is running your build script again, so my progress is slow. I don't have a proper environment set up to work on your stuff.

Also, it could probably have been done better by creating an inline helper function that emulates the old API you were used to. It's your choice how you ultimately want to do it. I'll continue plugging away unless you have a solution for me before I'm done.

snnn commented on May 24, 2024

It is an armnn specific problem. Sorry I cannot provide further help.

ChthonicOne commented on May 24, 2024

So, what you are saying is that they are the ones that write the ArmNN extension for Onnxruntime?

snnn commented on May 24, 2024

I mean I didn't write the ArmNN extension code. I am not familiar with the code. Sorry I cannot provide help on that.

ChthonicOne commented on May 24, 2024

I see. Since it is not related to the original issue, should I open a new issue for this topic to get the right developer's attention?

ChthonicOne commented on May 24, 2024

An update here: it seems the GPIO ports are not working after the update to 22.04, and the developers of the Radxa Zero have not yet put out a specially compiled kernel for Ubuntu 22.04. This makes upgrading to 22.04 an unsatisfactory solution to the BFLOAT16 issue, at least until they provide a new kernel.

In the interim, could you provide a way to disable BFLOAT16 support so that I can build onnxruntime without it on the Radxa Zero?
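For reference, one hedged sketch of what such an escape hatch could look like around the existing check in CMakeLists.txt. The option name here is hypothetical, not an existing onnxruntime flag, and the maintainers may well prefer a different mechanism:

```cmake
# Hypothetical option: let older toolchains skip the hard BFLOAT16
# requirement instead of aborting the configure step.
option(onnxruntime_REQUIRE_ARM64_BFLOAT16
       "Fail the configure step if the compiler lacks ARM64 bfloat16" ON)

check_cxx_compiler_flag(-march=armv8.2-a+bf16 HAS_ARM64_BFLOAT16)
if(NOT HAS_ARM64_BFLOAT16)
  if(onnxruntime_REQUIRE_ARM64_BFLOAT16)
    message(FATAL_ERROR "The compiler doesn't support BFLOAT16!!!")
  else()
    message(WARNING "BFLOAT16 not supported by the compiler; building without it.")
  endif()
endif()
```

The guarded sources that use bfloat16 intrinsics would of course also need to be excluded when the option is off, so this fragment alone is not sufficient.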

huyunlei commented on May 24, 2024

Ubuntu 18.04 on Jetson Xavier cannot build onnxruntime because of this BFLOAT16 error.

snnn commented on May 24, 2024

@huyunlei, please try the JetPack 6.0 Developer Preview.

snnn commented on May 24, 2024

@ChthonicOne, in your local branch you may revert #17031.

dusty-nv commented on May 24, 2024

@snnn Xavier doesn't support JetPack 6, so people are still on JetPack 5. I suggest you handle this check gracefully in the future.

lin168 commented on May 24, 2024

Maybe you could try upgrading your gcc version.
