
Introduction




Larq is an open-source deep learning library for training neural networks with extremely low precision weights and activations, such as Binarized Neural Networks (BNNs).

Existing deep neural networks use 32, 16, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This prohibits many applications in resource-constrained environments. Larq is the first step towards solving this. It is designed to provide an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs), and is based on the tf.keras interface. Note that efficient inference using a trained BNN requires an optimized inference engine; we provide these for several platforms in Larq Compute Engine.

Larq is part of a family of libraries for BNN development; you can also check out Larq Zoo for pretrained models and Larq Compute Engine for deployment on mobile and edge devices.

Getting Started

To build a QNN, Larq introduces the concepts of quantized layers and quantizers. A quantizer defines how a full-precision input is transformed into a quantized output, along with the pseudo-gradient method used for the backward pass. Each quantized layer accepts an input_quantizer and a kernel_quantizer that describe how to quantize the layer's incoming activations and weights, respectively. If both input_quantizer and kernel_quantizer are None, the layer is equivalent to a full-precision layer.
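Conceptually, the ste_sign quantizer binarizes on the forward pass and uses a clipped identity as its pseudo-gradient on the backward pass. A minimal NumPy sketch of this behaviour (the function names here are illustrative, not part of Larq's API):

```python
import numpy as np

def ste_sign_forward(x):
    # Forward pass: binarize to -1 or +1 (with sign(0) taken as +1).
    return np.where(x >= 0, 1.0, -1.0)

def ste_sign_grad(x, upstream_grad):
    # Backward pass (Straight-Through Estimator): pass the upstream
    # gradient through unchanged where |x| <= 1, and zero it elsewhere.
    return upstream_grad * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(ste_sign_forward(x))                 # [-1. -1.  1.  1.  1.]
print(ste_sign_grad(x, np.ones_like(x)))   # [0. 1. 1. 1. 0.]
```

The sign function itself has zero gradient almost everywhere, so without the pseudo-gradient nothing upstream of the quantizer would ever be updated during training.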

You can define a simple binarized fully-connected Keras model using the Straight-Through Estimator as follows:

import tensorflow as tf
import larq

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)

This model can be trained like any other Keras model, or used with a custom training loop.

Examples

Check out our examples on how to train a Binarized Neural Network in just a few lines of code:

Installation

Before installing Larq, please install:

  • Python version 3.7, 3.8, 3.9, or 3.10
  • TensorFlow version 1.14, 1.15, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, or 2.10:
    pip install tensorflow  # or tensorflow-gpu

You can install Larq with Python's pip package manager:

pip install larq

About

Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.


Issues

Number alignment in columns

As mentioned in comments of #149, tables would be more readable if numbers were aligned using all the known best practices. In practice, this would mean using a fixed-width font for numbers in tables and right-aligning columns with numbers (assuming they all have the same number of decimal places).

Docs browser compatibility issue: logo reloading

[Screen recording: logo removed and re-loaded on each top-nav click]

I know browser compatibility is boring, but on Firefox the logo gets removed and re-loaded on each click of the top nav, as demonstrated by this screen recording (the GIF converter I used dropped several frames, so it looks like the reloading didn't happen for the middle item, but it did).

This actually isn't just the top nav, it happens when clicking on anything in the left-hand nav bar too.

Works as expected on Chromium.

Originally posted by @AdamHillier in larq/larq#418 (comment)

This was a comment on the PR linked above, so the recording is of a preview of the new docs site, but the issue also arises with the existing site.

[LCE] Add End-to-End example

It would be great to have an end-to-end example walking a user through the entire process of building a model with Larq, then converting and deploying it with LCE.

Broken menu on Chrome for Android

When going into the third menu level on docs.larq.dev using Chrome on my phone (Pixel 2), the menu is broken. Specifically, text from higher-level menus is shown alongside (and through) the text of that level. This does not happen with Firefox on my phone or Chrome on desktop.

Below is a screenshot from going to top level > Compute Engine > Build, with Chrome on the left and Firefox on the right:

[Screenshot: broken menu in Chrome (left) vs. correct menu in Firefox (right)]

Similar things happen when going into other sub-menus at this level (e.g. Larq -> User Guides).

[LCE] Benchmarking guide

Add a guide that walks a user through the process of converting and benchmarking a Larq model on an Android phone or Raspberry Pi.

Minor layout bug with docs headers

When the header length is just under the width of the md-content <div>, a double line break incorrectly appears before the content. This can be seen below (the "Can I use Larq only ..." block):

[Screenshot: header followed by an extra line break before the content]

The issue arises because of the "anchor" button, which is invisible unless you hover over it. When hovering, the header looks like this:

[Screenshot: header with the anchor button visible on hover]

When not hovering over it, it overflows onto the next line but is invisible so it looks like an (additional) line break before the content.

Add section about performance claims of quantized/binarized NNs

In openjournals/joss-reviews#1746 @sbrugman raised a very valid point that is currently missing in our guides and documentation:

The documentation and paper make no claims on the comparison of performance between Quantized Neural Networks and their high-precision weight and bias counterparts. At the time of writing, there is clearly a trade-off (although the authors are working hard to close the gap). I suggest being transparent in this difference and possibly opening the option of educating users on when a few percents of accuracy on a specific dataset are essential and when not. This might be especially relevant for people working in the industry.

I think it would be good to add a small paragraph to our FAQ or the guides that briefly explains the efficiency story and the challenges of quantized networks, as well as what theoretical speedups one can expect. In general this always depends on the choice of network and the hardware used to run the models, but I agree that we should be transparent about this.

@jamescook106 @koenhelwegen Do we have some resources we could use to elaborate on our claims on the home page?
