
kalman-and-bayesian-filters-in-python's Introduction

Introductory text for Kalman and Bayesian filters. All code is written in Python, and the book itself is written using Jupyter Notebook so that you can run and modify the code in your browser. What better way to learn?

"Kalman and Bayesian Filters in Python" looks amazing! ... your book is just what I needed - Allen Downey, Professor and O'Reilly author.

Thanks for all your work on publishing your introductory text on Kalman Filtering, as well as the Python Kalman Filtering libraries. We’ve been using it internally to teach some key state estimation concepts to folks and it’s been a huge help. - Sam Rodkey, SpaceX

Start reading online now by clicking the Binder badge below:

Binder


What are Kalman and Bayesian Filters?

Sensors are noisy. The world is full of data and events that we want to measure and track, but we cannot rely on sensors to give us perfect information. The GPS in my car reports altitude. Each time I pass the same point in the road it reports a slightly different altitude. My kitchen scale gives me different readings if I weigh the same object twice.

In simple cases the solution is obvious. If my scale gives slightly different readings I can just take a few readings and average them. Or I can replace it with a more accurate scale. But what do we do when the sensor is very noisy, or the environment makes data collection difficult? We may be trying to track the movement of a low flying aircraft. We may want to create an autopilot for a drone, or ensure that our farm tractor seeded the entire field. I work in computer vision, where I need to track moving objects in images, and the computer vision algorithms produce very noisy and unreliable results.

This book teaches you how to solve these sorts of filtering problems. I use many different algorithms, but they are all based on Bayesian probability. In simple terms Bayesian probability determines what is likely to be true based on past information.

If I asked you the heading of my car at this moment you would have no idea. You could only offer a number between 1° and 360°, and would have a 1 in 360 chance of being right. Now suppose I told you that 2 seconds ago its heading was 243°. In 2 seconds my car could not turn very far, so you could make a far more accurate prediction. You are using past information to more accurately infer information about the present or future.
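To make this concrete, here is a small sketch of how the 2-second-old reading constrains the present heading. This is my own illustration; the 45°/s maximum turn rate is a made-up assumption, not a figure from the book.

```python
# Bound the car's heading after dt seconds, given its last known
# heading. The max_turn_rate of 45 deg/s is an assumed value.
def heading_bounds(last_heading, dt, max_turn_rate=45.0):
    lo = (last_heading - max_turn_rate * dt) % 360
    hi = (last_heading + max_turn_rate * dt) % 360
    return lo, hi

lo, hi = heading_bounds(243.0, dt=2.0)
# The heading must lie between 153 and 333 degrees: a 180 degree
# window instead of a 360 degree one, so the prior halved our
# uncertainty before we took any new measurement.
```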

The world is also noisy. That prediction helps you make a better estimate, but it is also subject to noise. I may have just braked for a dog or swerved around a pothole. Strong winds and ice on the road are external influences on the path of my car. In the control literature we call this noise, though you may not think of it that way.

There is more to Bayesian probability, but you have the main idea. Knowledge is uncertain, and we alter our beliefs based on the strength of the evidence. Kalman and Bayesian filters blend our noisy and limited knowledge of how a system behaves with the noisy and limited sensor readings to produce the best possible estimate of the state of the system. Our principle is to never discard information.
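The blending described above can be sketched in a few lines of Python. This is my own illustration, not code from the book: a precision-weighted average of two noisy scalar estimates, a prediction and a measurement, assuming Gaussian noise with known variances.

```python
# Precision-weighted fusion of two noisy scalar estimates.
# Assumption (mine, for illustration): both are Gaussian with
# known variances var_pred and var_meas.

def fuse(pred, var_pred, meas, var_meas):
    """Blend a prediction and a measurement, weighting each by the
    other's variance. The combined variance is smaller than either
    input's, which is the sense in which no information is discarded."""
    x = (var_meas * pred + var_pred * meas) / (var_pred + var_meas)
    var = 1.0 / (1.0 / var_pred + 1.0 / var_meas)
    return x, var

x, var = fuse(pred=10.0, var_pred=4.0, meas=12.0, var_meas=4.0)
# With equal variances this is a plain average (11.0), and the
# variance is halved (2.0).
```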

Say we are tracking an object and a sensor reports that it suddenly changed direction. Did it really turn, or is the data noisy? It depends. If this is a jet fighter we'd be very inclined to believe the report of a sudden maneuver. If it is a freight train on a straight track we would discount it. We'd further modify our belief depending on how accurate the sensor is. Our beliefs depend on the past and on our knowledge of the system we are tracking and on the characteristics of the sensors.

The Kalman filter was invented by Rudolf Emil Kálmán to solve this sort of problem in a mathematically optimal way. Its first use was on the Apollo missions to the moon, and since then it has been used in an enormous variety of domains. There are Kalman filters in aircraft, on submarines, and on cruise missiles. Wall Street uses them to track the market. They are used in robots, in IoT (Internet of Things) sensors, and in laboratory instruments. Chemical plants use them to control and monitor reactions. They are used to perform medical imaging and to remove noise from cardiac signals. If it involves a sensor and/or time-series data, a Kalman filter or a close relative of the Kalman filter is usually involved.

Motivation

The motivation for this book came out of my desire for a gentle introduction to Kalman filtering. I'm a software engineer who spent almost two decades in the avionics field, and so I had always been 'bumping elbows' with the Kalman filter, but had never implemented one myself. As I moved into solving tracking problems with computer vision the need became urgent. There are classic textbooks in the field, such as Grewal and Andrews' excellent Kalman Filtering. But sitting down and trying to read many of these books is a dismal experience if you do not have the required background. Typically the first few chapters fly through several years of undergraduate math, blithely referring you to textbooks on topics such as Itō calculus, and present an entire semester's worth of statistics in a few brief paragraphs. They are good texts for an upper undergraduate course, and an invaluable reference for researchers and professionals, but the going is truly difficult for the more casual reader. Symbology is introduced without explanation, different texts use different terms and variables for the same concept, and the books are almost devoid of examples or worked problems. I often found myself able to parse the words and comprehend the mathematics of a definition, but had no idea as to what real world phenomena they described. "But what does that mean?" was my repeated thought.

However, as I began to finally understand the Kalman filter I realized the underlying concepts are quite straightforward. A few simple probability rules, plus some intuition about how we integrate disparate knowledge to explain everyday events, are enough to make the core concepts of the Kalman filter accessible. Kalman filters have a reputation for difficulty, but shorn of much of the formal terminology the beauty of the subject and of its math became clear to me, and I fell in love with the topic.

As I began to understand the math and theory, more difficulties presented themselves. A book or paper's author would make some statement of fact and present a graph as proof. Unfortunately, why the statement was true was not clear to me, nor was the method for making that plot obvious. Or maybe I would wonder "is this true if R=0?" Or the author provided pseudocode at such a high level that the implementation was not obvious. Some books offer Matlab code, but I do not have a license to that expensive package. Finally, many books end each chapter with many useful exercises. Exercises which you need to work through if you want to implement Kalman filters for yourself, but exercises with no answers. If you are using the book in a classroom, perhaps this is okay, but it is terrible for the independent reader. I loathe that an author withholds information from me, presumably to avoid 'cheating' by the student in the classroom.

From my point of view none of this is necessary. Certainly if you are designing a Kalman filter for an aircraft or missile you must thoroughly master all of the mathematics and topics in a typical Kalman filter textbook. I just want to track an image on a screen, or write some code for an Arduino project. I want to know how the plots in the book are made, and to choose different parameters than the author chose. I want to run simulations. I want to inject more noise into the signal and see how a filter performs. There are thousands of opportunities for using Kalman filters in everyday code, and yet this fairly straightforward topic is the province of rocket scientists and academics.

I wrote this book to address all of those needs. This is not the book for you if you program navigation computers for Boeing or design radars for Raytheon. Go get an advanced degree at Georgia Tech, UW, or the like, because you'll need it. This book is for the hobbyist, the curious, and the working engineer who needs to filter or smooth data.

This book is interactive. While you can read it online as static content, I urge you to use it as intended. It is written using Jupyter Notebook, which allows me to combine text, math, Python, and Python output in one place. Every plot, every piece of data in this book is generated from Python that is available to you right inside the notebook. Want to double the value of a parameter? Click on the Python cell, change the parameter's value, and click 'Run'. A new plot or printed output will appear in the book.

This book has exercises, but it also has the answers. I trust you. If you just need an answer, go ahead and read the answer. If you want to internalize this knowledge, try to implement the exercise before you read the answer.

This book has supporting libraries for computing statistics, plotting various things related to filters, and for the various filters that we cover. This does require a strong caveat; most of the code is written for didactic purposes. I rarely chose the most efficient solution (which often obscures the intent of the code), and in the first parts of the book I did not concern myself with numerical stability. This is important to understand - Kalman filters in aircraft are carefully designed and implemented to be numerically stable; the naive implementation is not stable in many cases. If you are serious about Kalman filters this book will not be the last book you need. My intention is to introduce you to the concepts and mathematics, and to get you to the point where the textbooks are approachable.

Finally, this book is free. The cost for the books required to learn Kalman filtering is somewhat prohibitive even for a Silicon Valley engineer like myself; I cannot believe they are within the reach of someone in a depressed economy, or a financially struggling student. I have gained so much from free software like Python, and free books like those from Allen B. Downey here. It's time to repay that. So, the book is free, it is hosted on free servers, and it uses only free and open software such as IPython and MathJax to create the book.

Reading Online

The book is written as a collection of Jupyter Notebooks, an interactive, browser-based system that combines text, Python, and math. There are multiple ways to read these online, listed below.

binder

Binder serves interactive notebooks online, so you can run and change the code in your browser without downloading the book or installing Jupyter.

Binder

nbviewer

The website http://nbviewer.org provides a Jupyter Notebook server that renders notebooks stored on GitHub (or elsewhere). The rendering is done in real time when you load the book. You may use this nbviewer link to access my book via nbviewer. If you read my book today, and then I make a change tomorrow, when you go back tomorrow you will see that change. Notebooks are rendered statically - you can read them, but not modify or run the code.

nbviewer seems to lag the checked-in version by a few days, so you might not be reading the most recent content.

GitHub

GitHub is able to render the notebooks directly. The quickest way to view a notebook is to just click on one above. However, it renders the math incorrectly, and I cannot recommend using it if you are doing more than just dipping into the book.

PDF Version

A PDF version of the book is available [here](https://drive.google.com/file/d/0By_SW19c1BfhSVFzNHc0SjduNzg/view?usp=sharing&resourcekey=0-41olC9ht9xE3wQe2zHZ45A)

The PDF will usually lag behind what is on GitHub as I don't update it for every minor check-in.

Downloading and Running the Book

This book is intended to be interactive and I recommend using it in that form. It's a little more effort to set up, but worth it. If you install IPython and some supporting libraries on your computer and then clone this book you will be able to run all of the code in the book yourself. You can perform experiments, see how filters react to different data, see how different filters react to the same data, and so on. I find this sort of immediate feedback both vital and invigorating. You do not have to wonder "what happens if". Try it and see!

The book and supporting software can be downloaded from GitHub by running this command on the command line:

git clone --depth=1 https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python.git
pip install filterpy

Instructions for installation of the IPython ecosystem can be found in the Installation appendix, found here.

Once the software is installed you can navigate to the installation directory and run Jupyter notebook with the command line instruction

jupyter notebook

This will open a browser window showing the contents of the base directory. The book is organized into chapters, each contained within one IPython Notebook (these notebook files have a .ipynb file extension). For example, to read Chapter 2, click on the file 02-Discrete-Bayes.ipynb. Sometimes there are supporting notebooks for doing things like generating animations that are displayed in the chapter. These are not intended to be read by the end user, but of course if you are curious as to how an animation is made go ahead and take a look. You can find these notebooks in the folder named Supporting_Notebooks.

This is admittedly a somewhat cumbersome interface to a book; I am following in the footsteps of several other projects that are somewhat repurposing Jupyter Notebook to generate entire books. I feel the slight annoyances have a huge payoff - instead of having to download a separate code base and run it in an IDE while you try to read a book, all of the code and text is in one place. If you want to alter the code, you may do so and immediately see the effects of your change. If you find a bug, you can make a fix, and push it back to my repository so that everyone in the world benefits. And, of course, you will never encounter a problem I face all the time with traditional books - the book and the code are out of sync with each other, and you are left scratching your head as to which source to trust.

Companion Software

Latest Version

I wrote an open source Bayesian filtering Python library called FilterPy. I have made the project available on PyPI, the Python Package Index. To install from PyPI, at the command line issue the command

pip install filterpy

If you do not have pip, you may follow the instructions here: https://pip.pypa.io/en/latest/installing.html.

All of the filters used in this book, as well as others not in this book, are implemented in my Python library FilterPy, available here. You do not need to download or install this to read the book, but you will likely want to use this library to write your own filters. It includes Kalman filters, Fading Memory filters, H infinity filters, Extended and Unscented filters, least squares filters, and many more. It also includes helper routines that simplify designing the matrices used by some of the filters, and other code such as Kalman based smoothers.
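As a hedged illustration of what these filters do under the hood, here is a minimal linear Kalman filter predict/update cycle written in plain NumPy. This is a didactic sketch with assumed noise values, not FilterPy's API; FilterPy's KalmanFilter class wraps this same cycle for you.

```python
import numpy as np

# Minimal linear Kalman filter, tracking position and velocity from
# position-only measurements. All noise values are assumptions made
# up for this sketch.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measurement function
Q = np.eye(2) * 0.01                    # process noise (assumed)
R = np.array([[1.0]])                   # measurement noise (assumed)

x = np.array([[0.0], [1.0]])            # initial state guess
P = np.eye(2) * 500.0                   # large initial uncertainty

for z in [1.1, 2.0, 2.9]:
    # predict: project the state and its uncertainty forward
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend the prediction with the measurement
    y = np.array([[z]]) - H @ x         # residual
    S = H @ P @ H.T + R                 # residual covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
```

After three measurements the position estimate tracks the data closely and the covariance P has shrunk from its large initial value, which is the whole point of the predict/update cycle.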

FilterPy is hosted on GitHub at https://github.com/rlabbe/filterpy. If you want the bleeding edge release you will want to grab a copy from GitHub, and follow your Python installation's instructions for adding it to the Python search path. This might expose you to some instability since you might not get a tested release, but as a benefit you will also get all of the test scripts used to test the library. You can examine these scripts to see many examples of writing and running filters outside of the Jupyter Notebook environment.

Alternative Way of Running the Book in a Conda Environment

If you have conda or miniconda installed, you can create an environment by

conda env update -f environment.yml

and use

conda activate kf_bf

and

conda deactivate kf_bf

to activate and deactivate the environment.

Issues or Questions

If you have comments or questions, you can open an issue on GitHub so that everyone can read it along with my response; please don't treat it as a channel for bug reports only. Alternatively I've created a gitter room for more informal discussion. Join the chat at https://gitter.im/rlabbe/Kalman-and-Bayesian-Filters-in-Python

License

Creative Commons License
Kalman and Bayesian Filters in Python by Roger R. Labbe is licensed under a Creative Commons Attribution 4.0 International License.

All software in this book, software that supports this book (such as in the code directory) or used in the generation of the book (in the pdf directory) that is contained in this repository is licensed under the following MIT license:

The MIT License (MIT)

Copyright (c) 2015 Roger R. Labbe Jr

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contact

rlabbejr at gmail.com

kalman-and-bayesian-filters-in-python's People

Contributors

benureau, cliansang, dduong42, ernstklrb, esvhd, gjacquenot, gluttton, horaceheaven, jezek, kcamd, mattheard, neonquill, offchan42, pebetouofc, pedrohrpbs, peteryschneider, plevasseur, pquentin, remyleone, rlabbe, robi-y, rummanwaqar, senden9, slayoo, sleepingagain, spacy-doc-bot, staroselskii, tv3141, undefx, wilcobonestroo


kalman-and-bayesian-filters-in-python's Issues

color_cycle warning with matplotlib 1.5

When executing

#format the book
%matplotlib inline
%load_ext autoreload
%autoreload 2  
from __future__ import division, print_function
import sys
sys.path.insert(0,'./code')
from book_format import load_style
load_style()

from any page, I get the following warning:

D:\Chad\Documents\WinPython-64bit-3.4.3.6\python-3.4.3.amd64\lib\site-packages\matplotlib\__init__.py:876: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
  warnings.warn(self.msg_depr % (key, alt_key))

This Python distro is using Matplotlib 1.5.0rc3. I have confirmed that changing the axes.color_cycle line in 538.json to

  "axes.prop_cycle": "cycler('color', ['#6d904f','#013afe', '#202020','#fc4f30','#e5ae38','#A60628','#30a2da','#008080','#7A68A6','#CF4457','#188487','#E24A33'])",

fixes the warning. But I am unsure how to make the json file conditional based on Matplotlib version, so as to be backwards compatible with older distributions.

Unscented Kalman Filter: suggestion to note the requirement for an upper triangular matrix as a root

Recently I bumped into this comment in the filterpy implementation of the sigma point calculation:

If your method returns a triangular matrix it must be upper
triangular. Do not use numpy.linalg.cholesky - for historical
reasons it returns a lower triangular matrix.

In my opinion it would be great to add a note about the requirement for an upper triangular matrix instead of a lower one after the words:

SciPy provides cholesky() method in scipy.linalg.
If your language of choice is Fortran, C, or C++,
libraries such as LAPACK provide this routine.
Matlab provides chol().

P.S.

Can you explain where this requirement comes from?


I've tried both matrix forms. When I use the lower triangular matrix the covariance matrix P grows (after each step my confidence decreases).

Is this expected behaviour?


I tried to find out how to obtain an upper triangular matrix using the Cholesky decomposition. Most sources say that the Cholesky decomposition is:

A = L * L'

The few sources say about another form:

A = U' * U

So it looks like U = L', am I right? Is the upper triangular root matrix just the transpose of the lower triangular root matrix?
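Yes: if A = L L' with L lower triangular, then U = L' is upper triangular and A = U' U, so the upper triangular root is just the transpose of the lower one. A quick NumPy check (my own sketch, not from the book):

```python
import numpy as np

# Verify that the upper triangular Cholesky factor is the transpose
# of the lower one: A = L L' = U' U with U = L'.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)   # lower triangular factor
U = L.T                     # upper triangular factor

assert np.allclose(L @ L.T, A)
assert np.allclose(U.T @ U, A)
```

If you want the upper factor directly, scipy.linalg.cholesky returns it by default (its `lower` parameter defaults to False), whereas numpy.linalg.cholesky always returns the lower factor.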

Duplicated words in chapter 1

Here are a couple of typos I noticed while reading chapter 1:

  • "Now, let's assume we we gained weight."

    "we" is duplicated.

  • "For example, for the 100 kg weight weight our estimate might be 99.327 kg due to sensor errors."

    "weight" is duplicated.

NBFormatError' is not defined

Hi,

I am trying to work with these IPython files, but they don't work.
This error appears after opening any IPython file. Any advice?
thanks in advance, Jaime

cd Kalman-and-Bayesian-Filters-in-Python/
jaimebayes@jaimebayes-OptiPlex-755:~/Kalman-and-Bayesian-Filters-in-Python$ ipython notebook
2015-06-02 13:34:11.222 [NotebookApp] Using existing profile dir: u'/home/jaimebayes/.ipython/profile_default'
2015-06-02 13:34:11.320 [NotebookApp] Using MathJax from CDN: https://cdn.mathjax.org/mathjax/latest/MathJax.js
2015-06-02 13:34:11.519 [NotebookApp] Serving notebooks from local directory: /home/jaimebayes/Kalman-and-Bayesian-Filters-in-Python
2015-06-02 13:34:11.520 [NotebookApp] 0 active kernels
2015-06-02 13:34:11.520 [NotebookApp] The IPython Notebook is running at: http://localhost:8888/
2015-06-02 13:34:11.520 [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

(process:4174): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed
2015-06-02 13:34:27.679 [NotebookApp] WARNING | Unreadable Notebook: /home/jaimebayes/Kalman-and-Bayesian-Filters-in-Python/00_Preface.ipynb global name 'NBFormatError' is not defined
WARNING:tornado.access:400 GET /api/notebooks/00_Preface.ipynb?_=1433270067150 (127.0.0.1) 26.97ms referer=http://localhost:8888/notebooks/00_Preface.ipynb

Is the Update equation in the Univariate kalman filter class correct?

In http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/05_Kalman_Filters/Kalman_Filters.ipynb

Hi,
I am rather enjoying going through your book. (Currently working through and translating bits of it into Julia for my understanding of both Julia and Kalman filters.)
I appreciate it is a work in progress.

at In[89]:

    def update(self, z):
        self.x = (self.P * z + self.x * self.R) / (self.P + self.R)
        self.P = 1. / (1./self.P + 1./self.R)

My instincts are telling me that this should be:

    def update(self, z):
        self.x = (self.R * z + self.x * self.P) / (self.P + self.R)
        self.P = 1. / (1./self.P + 1./self.R)

Because P is the variance (uncertainty) of the state,
and R is the variance of the measurement.

I'm not too certain though.
If I am wrong, perhaps the section could be written to make it more clear why this should be the case.
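A quick numeric check (my own sketch, not from the book) suggests the original code is correct. Precision weighting gives x' = (x/P + z/R)/(1/P + 1/R) = (Pz + Rx)/(P + R), so the measurement z is weighted by the state variance P: the less we trust the prior, the more we follow the measurement. With a very uncertain prior the posterior should follow the measurement almost exactly, and only the book's form does that:

```python
# Sanity check on the two candidate scalar update equations.
# With a very uncertain prior (large P) and an accurate sensor
# (small R), the posterior should follow the measurement z.
P, R = 1e6, 1.0     # prior variance huge, measurement variance small
x, z = 0.0, 10.0

book_form = (P * z + x * R) / (P + R)   # weights z by prior variance P
alt_form  = (R * z + x * P) / (P + R)   # weights z by measurement variance R

# book_form is ~10 (follows the measurement); alt_form is ~0
# (ignores it), so the book's version has the weights right.
```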

Unfinished sentence in chapter 2

At the very end of the Tracking and Control section of chapter 2, the last sentence appears to be truncated:

In other words, when we call predict() we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of

(Maybe the likelihood of distance/destination?)

inconsistent license

The license on the readme (updated Mar 9, 2015) is CC BY, but the license called out in license.html (May 16, 2014) is CC BY-NC-SA. Legally, the BY takes precedence over the BY-NC-SA and is irrevocable, but they should be the same to avoid confusion.

book PDF file missing in latest zip

Hi Roger.
After a long pause on the topic I came back to download the latest version and noticed the book PDF is no longer present in the zip (and in the repository).

edit: I just read in one of the recent commit comments that you actually removed the PDF.
I'm often offline and was relying on the PDF to continue reading...

I get a good number of errors when trying to build the PDF on my laptop... Would you please consider adding back the PDF to the repo?

Kind regards,
Mario

Cannot see equations in Ch. 10

It might be an operating system/browser problem, but I cannot see the equations properly for chapter 10 (I haven't tried other chapters). Attached is a screenshot of what I see using Mozilla Firefox v. 42.0 on Ubuntu 12.04 LTS.

Value for state transition in chapter 6

Hi Roger,
first, it's really great that didactically skilled people like you share their knowledge with dumb people like me in such a nice way. I really learned a lot from your book!

I found some issues I want to address. I don't know if this is the right place to do, but I interpreted from your preface that it is.
Anyway: I implemented my own Kalman filter and it looked quite ugly, so I took the input parameters that you used in chaper 6 when implementing the first full KF. It still didn't look the same and I found that you used for the state transition matrix
F = [[1, 1],
     [0, 1]]
The top right entry is supposed to be the time step, right? So here you use dt=1, and further down, for the calculation of the Q matrix in this example, you use dt=0.1.

Is that a bug or am I missing something? I haven't read past chapter 6 yet, but my feeling is that this is wrong, no? After I changed the state transition in my version to your version, the KF performed like yours.

Cheers
Marko
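On the dt question above: whatever the chapter intends, one way to avoid this class of mismatch is to build F and Q from the same dt so they cannot get out of sync. A sketch (my own, using the standard discrete white-noise Q for a position/velocity state; `var` is an assumed process noise variance):

```python
import numpy as np

# Build F and Q from the same dt so they stay consistent.
# Q is the standard discrete white-noise model for a
# [position, velocity] state with process noise variance `var`.
def transition_matrices(dt, var=1.0):
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    Q = var * np.array([[dt**4 / 4, dt**3 / 2],
                        [dt**3 / 2, dt**2]])
    return F, Q

F, Q = transition_matrices(dt=0.1)
# Both matrices now reflect dt=0.1; changing dt in one place
# changes both, which avoids the bug described in this issue.
```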

Standardize superscript notation

In the Chapter 7 section on iterative least squares for sensor fusion there is inconsistent wording and use of the - and + superscripts. This occurs to a lesser extent in the rest of the book.

Add license and links to PDF

PDF is floating around the web with no obvious link back to the GitHub account, and the Creative Commons license is buried in the preface. I'd like a page directly after the title page giving the license and the links to GitHub.

Code not compiling in chapter 1

Hi there,

In Binder, if I try to run this listing:

import book_plots
from book_plots import interactive_plot
import gh_internal as gh
import matplotlib.pyplot as plt


weights = [158.0, 164.2, 160.3, 159.9, 162.1, 164.6, 
           169.6, 167.4, 166.4, 171.0, 171.2, 172.6]

time_step = 1 # day
scale_factor = 4/10

def predict_using_gain_guess(weight, gain_rate, do_print=True, sim_rate=0): 
    # store the filtered results
    estimates, predictions = [weight], []

    # most filter literature uses 'z' for measurements
    for z in weights: 
        # predict new position
        prediction = weight + gain_rate * time_step

        # update filter 
        weight = prediction + scale_factor * (z - prediction)

        # save
        estimates.append(weight)
        predictions.append(prediction)
        if do_print:
            gh.print_results(estimates, prediction, weight)

    # plot results
    gh.plot_gh_results(weights, estimates, predictions, sim_rate)

initial_guess = 160.
with interactive_plot():
    predict_using_gain_guess(weight=initial_guess, gain_rate=1)   

The following exception is thrown:

ImportError                               Traceback (most recent call last)
<ipython-input-1-1dc46de6cfa4> in <module>()
----> 1 import book_plots
      2 from book_plots import interactive_plot
      3 import gh_internal as gh
      4 import matplotlib.pyplot as plt
      5 

ImportError: No module named 'book_plots'

If I change the import from import book_plots to import code.book_plots it seems to behave better, but then an exception is thrown about filterpy:

/home/main/notebooks/book_format.py in test_filterpy_version()
     45 def test_filterpy_version():
     46 
---> 47     import filterpy
     48     from distutils.version import LooseVersion
     49 

ImportError: No module named 'filterpy'

Firefox cuts off right hand side by 1px

I am noticing that in Firefox Ubuntu the div.inner_cell is a few pixels too short, cutting off the end of the row of words and is visually distracting. I notice that other notebooks (from other books) don't have this issue, even though they seem to share the same .css files! But, your font is different than the linked example, so something must be different when you compile it. I can't tell from a first glance of the HTML what is different though.

Looks like a great book otherwise. Gotta tear into it :)

Possible typo ?

In 06-Multivariate-Kalman-Filters.ipynb you write

Your job as a designer will be to design the state (x,P), the process (x,P), the measurement (z,R), and the unexplained measurement function H. If the system has control inputs, such as a robot, you will also design B and u.

I think you mean .... the process(F,Q) .... ?

Implementing Kalman filter that predict spikes ?

hi,

Is it suitable to use a Kalman filter for predicting spikes?
For example, say I monitor CPU usage where the values go from 0 to 100... will the Kalman filter be "agile" enough to detect and predict when the usage goes above say 80%, with better than a 60-70% hit rate?

Or would you use some other technique?

Use 'process model' earlier in the book

Early in the book I use 'state transition function' everywhere. It is a valid term, but it may not be clear that this is just the process model. I do use 'process noise', so why not make the connection more obvious, especially since I mostly use 'process model' later in the book.

You may want to clarify this:

In /06-Multivariate-Kalman-Filters.ipynb you are explaining ndarray and state

However, NumPy recognizes 1D arrays as vectors, so I can simplify this line to use a 1D array.
x = np.array([10.0, 4.5])
x

Since earlier you explained that x should be an n×1 matrix, having written

x = np.array([[10.0, 4.5]]).T

I assume you want to say that you could write x = np.array([10.0, 4.5]).T (i.e. with a transpose) to mean the same thing as the double-bracket, transposed version.
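One wrinkle worth noting (my own check, plain NumPy, not from the book): `.T` is a no-op on a 1D array, so the two forms really do have different shapes and are not interchangeable via a transpose.

```python
import numpy as np

x_col = np.array([[10.0, 4.5]]).T   # explicit column vector, shape (2, 1)
x_1d = np.array([10.0, 4.5])        # 1D array, shape (2,)

# Transposing a 1D array does nothing, so this is still shape (2,),
# not a (2, 1) column vector:
x_1d_T = np.array([10.0, 4.5]).T
```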

Hacker News followup

I wrote to you on Hacker News (but I know I'll forget to check the answer, so I copy/paste it here):

Sorry about the notebook format change. Did you lose work? Conversion should be handled for you; if not it is a bug, and we can fix it in 3.1.
We would be happy to get more feedback on your writing process and your needs; feel free to directly contact the team (IPython-dev at scipy.org, or an issue on the main IPython repo is fine).
As for the concept of a "book" or collection of notebooks, we are working on that (integration with Sphinx).

If you lose things in migration it's not normal; we can fix that, so please open an issue. We would also love more detailed feedback on your pain points in writing. We are having our bi-annual IPython dev meeting this week, so it is the time when we will sketch the future of IPython for the next 6 to 12 months.

I see you are in the SF Bay Area; we are meeting at UC Berkeley, so I guess you could even come and say hi, if you prefer in-person face-to-face feedback/complaints :-)

Various punctuation and typos

Chapter 2

Chapter 3

Chapter 4

Chapter 5

Discrete Bayes Filter: some examples use a different data domain

The chapter is based on an example of tracking the dog Simon in a hallway. The hallway is described as a one-dimensional array of length 10.

To keep the problem small, we will assume that there are only 10 positions in a single hallway to consider, which we will number 0 to 9, where 1 is to the right of 0, 2 is to the right of 1, and so on.

All sections use this data domain, but two sections use different data: Incorporating Movement Data and Adding Noise to the Prediction.

The Incorporating Movement Data section uses an array of length 4 (though it still talks about Simon in the hallway).

The beginning of the Adding Noise to the Prediction section uses an array of length 8, but later the common data domain, an array of length 10, is used.

This is not an error! All charts match the corresponding Python code and descriptions, but in my opinion this is something of a drawback.

Unscented Kalman Filter: suggestions to improve problem domain description

I have a few suggestions to improve the problem domain description in the "Tracking an Airplane" section of the "Unscented Kalman Filter" chapter. First of all, these improvements are not related to the main topic of your book, so they are not important, and improving the problem domain description might dilute the main idea. You should decide whether to accept or reject them, but I'd like to mention them anyway.

You wrote:

We will track one dimension on the ground and the altitude of the aircraft.

But later you wrote:

By timing how long it takes for the reflected signal to return it can compute the slant distance and bearing to the target.

We can't compute the bearing to the target by timing alone.

The angular determination of the target is determined by the directivity of the antenna.
link

If we measure altitude, then it is more appropriate to use "elevation angle" instead of "bearing".

The elevation angle is the angle between the horizontal plane and the line of sight, measured in the vertical plane.
link

Such radars are also called height finders.

A height finder is a ground based aircraft altitude measuring device.
link

You wrote:

Radars also provide velocity measurements via the Doppler effect, but we will be ignoring this complication for the moment.

The velocity of a flying object can be calculated from radar measurements, but strictly speaking radar can't measure velocity; it can only measure radial velocity.

Only the radial component of the speed is relevant. When the reflector is moving at right angle to the radar beam, it has no relative velocity. Vehicles and weather moving parallel to the radar beam produce the maximum Doppler frequency shift.
link

You wrote:

We compute the (x,y) position of the aircraft from the slant distance and bearing as illustrated by this diagram:

This is a very subjective judgement, but in my opinion using (x,z) instead of (x,y) would be better (by the way, in code snippet 15 you use dz as the delta of altitude).

In your chart ekf_internal.show_radar_chart you used theta for the elevation angle; usually theta is used for the bearing and epsilon for the elevation angle.

You wrote:

A typical radar might update only once every 12 seconds so we will use that for our epoch period.

As far as I know this is true for surveillance radars, but for height finders this time is usually smaller (about 3 seconds).

I hope this is helpful for you!

Unscented Kalman Filter: use of `lambda` and `kappa` is unclear

I have a question about Chapter 10 "Unscented Kalman Filter".

In the paragraph "Implementation of the UKF: Sigma Points", lambda appears under the square root in the sigma point equation. Later in the text kappa is used instead of lambda, and the source code snippet:

U = scipy.linalg.cholesky((n+kappa)*P)

uses kappa too.

The paragraph "Choosing the Sigma Parameters" uses lambda again.

But the description of the filter at the link uses kappa.

So my question is: what is the correct equation for the sigma points? Is this an error, or have I missed something?
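For what it's worth, in Van der Merwe's scaled formulation the two symbols are related by lambda = alpha^2 (n + kappa) - n, so the (n + kappa) inside the square root in Julier's form becomes (n + lambda) in the scaled form. A minimal sketch of the scaled version (my own code, not the book's; NumPy's lower-triangular Cholesky is transposed to match scipy's upper-triangular convention):

```python
import numpy as np

def scaled_sigma_points(x, P, alpha=0.3, kappa=1.0):
    # Van der Merwe: lambda = alpha^2 (n + kappa) - n; the sigma points
    # use sqrt((n + lambda) P) where Julier's form uses (n + kappa).
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    # np.linalg.cholesky returns lower-triangular; transpose to get the
    # upper-triangular square root that scipy.linalg.cholesky would give.
    U = np.linalg.cholesky((n + lam) * P).T
    sigmas = np.empty((2 * n + 1, n))
    sigmas[0] = x
    for i in range(n):
        sigmas[i + 1] = x + U[i]
        sigmas[n + i + 1] = x - U[i]
    return sigmas
```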

Unscented Kalman Filter: constraint for weights of covariances computed by Van der Merwe's algorithm

I have a question about Chapter 10 "Unscented Kalman Filter".

In the paragraph "Implementation of the UKF: Choosing Sigma Points" it is specified that the sum of the weights of the means and of the covariances must equal one.

Later, in the paragraph "Implementation of the UKF: Van der Merwe's Scaled Sigma Point Algorithm: Weight Computation", the equation for computing the weights is given.

When I compute the weights using the algorithm with the suggested parameters (for two dimensions):

  • alpha = 0 ... 1;
  • beta = 2;
  • kappa = 1.

I get a residual against the constraint on the covariance weights: for different values of alpha, the sum of the weights lies between 3 and 4.

Changing the number of dimensions or kappa has no effect. I have found that to reduce the residual, beta must equal -2.

So my questions:

  • do you get similar results (sum of the covariance weights)?
  • if yes, should I always respect the constraint?
  • if yes, should I normalize the weights myself?
  • if no, should subtraction perhaps be used instead of adding beta?
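The arithmetic can be reproduced with a small check (my own sketch of the standard Van der Merwe weight formulas, not the book's code): the mean weights always sum to 1, while the covariance weights sum to 1 + (1 - alpha^2 + beta), which for beta = 2 and alpha in (0, 1] indeed lies between 3 and 4.

```python
import numpy as np

def merwe_weights(n, alpha, beta, kappa):
    # Standard Van der Merwe weights: W_0^m = lam/(n+lam),
    # W_0^c = lam/(n+lam) + 1 - alpha^2 + beta, W_i = 1/(2(n+lam)).
    lam = alpha**2 * (n + kappa) - n
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + 1.0 - alpha**2 + beta
    return Wm, Wc

for alpha in (0.1, 0.5, 1.0):
    Wm, Wc = merwe_weights(n=2, alpha=alpha, beta=2.0, kappa=1.0)
    print(alpha, Wm.sum(), Wc.sum())
# Wm sums to 1; Wc sums to 1 + (1 - alpha^2 + beta), i.e. 3..4 here.
```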

Add discussion of process model order

I'm getting questions that show me that I have not talked about tracking situations and how the order of the filter affects performance. Yes, this belongs in an IMM chapter, but the regular chapters could use this info in shorter form as well.

Error using binder. Can't import filterpy

Hi,

Just tried to use binder to read your book. (Nice work, well done.)

But when trying to run some code, such as the first cell in 01-g-h-filter.ipynb, I get the following error, which implies that filterpy has not been installed, though I note that filterpy is listed in requirements.txt.

Here is the error message I get.


ImportError Traceback (most recent call last)
in ()
2 get_ipython().magic('matplotlib inline')
3 from __future__ import division, print_function
----> 4 from book_format import load_style
5 load_style()

/home/main/notebooks/book_format.py in ()
59 # called when this module is imported at the top of each book
60 # chapter so the reader can see that they need to update FilterPy.
---> 61 test_filterpy_version()
62
63

/home/main/notebooks/book_format.py in test_filterpy_version()
39
40 def test_filterpy_version():
---> 41 import filterpy
42 min_version = [0,0,28]
43 v = filterpy.__version__

ImportError: No module named 'filterpy'

Online Estimate of R(k)


Hi,

Nice work on the book and on providing a python implementation.

I recently came across a paper that discussed estimating the measurement noise covariance matrix in an online fashion as follows:

                 \\(\hat{R}(k) = C_v(k) + H \bar{P}(k) H^T\\)

where

            \\(\bar{P}(k)\\) is the posterior error covariance matrix and \\(C_v(k)\\), at time \\(k\\), is computed from a moving average window of size \\(m\\) over the residuals \\(v\\), as

                      \\(C_v(k) = \frac{1}{m} \sum_{i=1}^{m} v(k-i)\,v(k-i)^T\\)

The thing is, when I implement this, I find my estimates are really smooth, but it takes a long time to reach convergence.

Wondering if you have any idea why this is so. The paper in question is:

J. Wang, “Stochastic modeling for real-time kinematic gps/glonass position,” Navigation, vol. 46, pp. 297–305, 2000.

Looking forward to your reply.

Thank you!
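In case it helps others reading this, here is a minimal sketch of that estimator as I read it (my own code, not from the book or the paper; the class name, window handling, and toy numbers are my assumptions):

```python
import numpy as np
from collections import deque

class OnlineR:
    """Moving-window estimate R_hat(k) = C_v(k) + H P_bar(k) H^T,
    where C_v(k) averages outer products of the last m residuals v."""
    def __init__(self, m):
        self.vv = deque(maxlen=m)   # window of v v^T outer products

    def update(self, v, H, P_bar):
        v = np.atleast_2d(np.asarray(v, dtype=float)).reshape(-1, 1)
        self.vv.append(v @ v.T)
        C_v = sum(self.vv) / len(self.vv)
        return C_v + H @ P_bar @ H.T
```

Note that the window size m trades smoothness against convergence speed, which may be exactly the slow convergence described above: a large m keeps averaging over many old residuals.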

How should the imports from code directory work?

I just cloned the repo and ran jupyter notebook from the root directory. Now I see /tree in my browser and double-click 01-g-h-filter.ipynb. However, when I try to run

import gh_internal as gh
gh.plot_errorbar1()

I get the error below

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-7-dd29f703c030> in <module>()
----> 1 import gh_internal as gh
      2 gh.plot_errorbar1()

ImportError: No module named 'gh_internal'

I'm sure this isn't an issue with the code but with my lack of understanding of how gh_internal.py is imported from the code directory into the notebook.

Also, I notice the same error on mybinder as well: http://app.mybinder.org/1459191046/notebooks/01-g-h-filter.ipynb

How should this be done?
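One workaround sketch (my own; it assumes the helper modules such as gh_internal.py live in the repo's ./code directory, which may differ in your checkout) is to put that directory on the module search path before importing:

```python
import os
import sys

# Assumption: the notebook helper modules live in ./code relative to the
# directory the notebook server was started from; adjust if your layout
# differs.
code_dir = os.path.join(os.getcwd(), 'code')
if code_dir not in sys.path:
    sys.path.insert(0, code_dir)

# import gh_internal as gh   # should now resolve if the path is right
```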

Add Vocabulary appendix

I sort of have this with the notation, but it needs to be expanded. There are so many different terms in the literature for the same thing - dynamic matrix, state transition matrix, it goes on and on. It is just confusing for everyone.

I'm thinking a table with all the different terms for a concept grouped together, along with the typical math symbols for the corresponding matrix/vector, if applicable.

Question: Measurement space in Kalman filter

This is not an issue but a question.

In chapter 6 "Multivariate Kalman Filters" you mention the measurement space. Mostly the reason for using the measurement space is clear, but I have several questions.

  1. First of all, I'd like to know what the opposite of the measurement space is called. I guess it might be called the state space; am I right?

  2. Some members of the equations (the residual, measurement, and covariance) explicitly belong to the measurement space, but others do not. So my question is: which members belong to the measurement space and which do not:

    • State (obviously no).
    • Covariance (yes);
    • State transition function (looks like no);
    • Process noise (looks like yes);
    • Control input (looks like no);
    • Control function (looks like no);
    • Residual (yes);
    • Measurement (yes);
    • Measurement function (?);
    • Measurement noise (looks like yes);
    • System uncertainty (looks like yes);
    • Kalman gain (?!).

    Or in other words, to use your example about measuring temperature: what units (volts, Celsius, volts/Celsius, etc.) must each member have?

  3. This question is a consequence of the previous one. The most obscure thing for me is how we can update the state by adding the previous state estimate (which is not in the measurement space) and the residual (which is). Does the Kalman gain perform a transformation of the residual?

Thanks!

P.S. Sorry if the answers to these questions are already in your book, but I have not found them.
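Not the author, but a toy 1D example (my own numbers: a sensor reading volts for a state in degrees Celsius, with an assumed gain of 2 volts per degree) makes the unit bookkeeping, and the role of the Kalman gain, concrete:

```python
import numpy as np

H = np.array([[2.0]])            # volts/degree: maps state -> measurement space
P = np.array([[1.0]])            # degrees^2: prior covariance (state space)
R = np.array([[0.5]])            # volts^2: measurement noise (measurement space)

S = H @ P @ H.T + R              # volts^2: system uncertainty (measurement space)
K = P @ H.T @ np.linalg.inv(S)   # degrees/volt: the gain converts back

x = np.array([[20.0]])           # degrees: prior state estimate
z = np.array([[44.0]])           # volts: measurement
y = z - H @ x                    # volts: the residual lives in measurement space
x_post = x + K @ y               # degrees: K @ y is back in state units
```

So to question 3: yes, the gain K = P̄ Hᵀ S⁻¹ carries units of state/measurement and performs exactly the transformation of the residual that you suspect.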

Added control input examples to KF

A lot of people are interested in this book due to the robotics/UAV use of the filters. It is hard to serve them when there are no examples of how to account for control inputs.
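A minimal predict-step sketch of what such an example might show (my own toy numbers): the control enters through x' = Fx + Bu.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])       # constant-velocity state transition
B = np.array([[0.5 * dt**2],
              [dt]])             # commanded acceleration enters here
u = np.array([[2.0]])            # control input: accelerate at 2 m/s^2

x = np.array([[0.0],             # position
              [0.0]])            # velocity
x = F @ x + B @ u                # predict with control: x' = F x + B u
```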

Heading: Likely a hidden/observable variable

Chapter five, near the bottom: "the aircraft's state includes things such as heading, engine RPM, weight, color, the first name of the pilot, and so on. We cannot sense these directly using the position sensor so they are not observed."

It seems to me like the aircraft's heading would be an observable, hidden variable like velocity; you could estimate it, and use it to improve your results, the same way you did with velocity.

The g-h Filter: off-by-one error in scale example

According to the description of the example, the weight is supposed to gain 1 lb per day.

I hand generated the weight data to correspond to a true starting weight of 160 lbs, and a weight gain of 1 lb per day. In other words, on day one the true weight is 160 lbs, on the second day the true weight is 161 lbs, and so on.

It means that if the weight equals 160 lb on day 1, it will equal 171 lb on day 12:
1 - 160
2 - 161
3 - 162
4 - 163
5 - 164
6 - 165
7 - 166
8 - 167
9 - 168
10 - 169
11 - 170
12 - 171

But on the chart the weight on day 12 equals 172 lb.

It looks like an off-by-one error.

In my opinion the correct weight values should be:
0 - 160 (today)
1 - 161 (tomorrow)
2 - 162
3 - 163
4 - 164
5 - 165
6 - 166
7 - 167
8 - 168
9 - 169
10 - 170
11 - 171
12 - 172
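The zero-based numbering proposed above can be stated in one line:

```python
# Day 0 is 'today' at 160 lb; the weight grows 1 lb per day.
weights = [160 + day for day in range(13)]
```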

Product of gaussians formula in chapter 3

In chapter 3, under the heading "Product of Gaussians", there's the formula for the product's mean. I think the second term in the numerator uses the wrong variance. The formula is also in chapter 4, and it looks correct there.

Chapter 5 try/except block

I put a try/except block around code in chapter 5 because it depends on code in FilterPy that is not released yet. Remove block once FilterPy is updated.

Finding value for Q in Extended Kalman Filter Chapter should be completed by substituting in values for w

Hi (again),
In http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/09_Extended_Kalman_Filters/Extended_Kalman_Filters.ipynb#Designing-Q
We get some details on how to find Q,
It would be great if this concluded by substituting in values for $w_{alt}$ and $w_{vel}$, to produce the final $Q$.

To my knowledge it goes:
$w_{vel}^2=5$, $w_{alt}^2=10$, and $w_{alt} w_{vel}=0$ (as they are independent and thus uncorrelated)

This gives a final matrix of:

Q=
[ 0.000208333  0.00625  0.0
 0.00625      0.25     0.0
 0.0          0.0      0.5]

However, that produces awful results, so I think I must have got the math wrong.
It would be great to have this section more complete so I can better understand my mistake.

PS: Is it OK if I raise issues like this? If you would prefer not to receive such feedback at this stage, I can be quiet (and come back when you are ready, if you would like :-) )

Example of going from paper to working code

I get questions about papers: how would I implement this paper in your code? I think there would be a lot of pedagogical value in taking a few openly available papers, working through them, and implementing them in FilterPy.

Tie g-h and KF chapter.

With my recent edits the g-h chapter is pretty close to the KF chapter. The KF chapter intro should remind us of what was done in the g-h chapter to set the stage for the rest of the chapter.

Chapter 2.13 Total Probability Theorem

The formula for the total probability theorem is wrong here. The probability of being at position i at time t, P(x_i, t), is given by the sum of the probabilities of being at position j at time t-1, multiplied by the probability of moving from position j to i. Therefore it should be:
P(x_i, t) = sum over all j of P(x_j, t-1) * P(x_i | x_j)
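In code, this total-probability (predict) step looks like the following (my own sketch; the 3-state transition matrix is made up for illustration):

```python
import numpy as np

def predict(prior, transition):
    """prior[j] = P(x_j at t-1); transition[i, j] = P(x_i at t | x_j at t-1).
    Total probability: P(x_i at t) = sum_j transition[i, j] * prior[j]."""
    return transition @ prior

prior = np.array([0.1, 0.7, 0.2])
transition = np.array([[0.8, 0.1, 0.0],   # each column sums to 1: from
                       [0.2, 0.8, 0.1],   # every state j the probability
                       [0.0, 0.1, 0.9]])  # mass must go somewhere
posterior = predict(prior, transition)    # still a valid distribution
```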

Minor issues in chapters 6, 7, 8

While reading I stumbled upon the following minor issues. I didn't want to open a new issue for each of them, so here they are:

  • 06-Multivariate-Kalman-Filters
    • [..]So, if K is large then (1−cK) is small, and P will be made smaller than it was. If K is small, then (1−cK) is large, and P will be made larger than it was. [..]
      • Doesn't P get smaller in both cases, just more so in the first one?
  • 07-Kalman-Filter-Math
    • [..] division is scalar's analogous operation for matrix inversion [..]
      • Reciprocal is the analogous operation; it only resembles division here because of the subsequent multiplication
    • integration routines such as scipy.integrate.ode. These routines are robust, but [end of paragraph]
  • 08-Designing-Kalman-Filters
    • [..] a Kalman filter with R=0.5 as a thin blue line, and a Kalman filter with R=10 as a thick red line [..]
      • On binder at least R=0.5 is a thin green line, and R=10 is a thick blue line
    • Tracking Noisy Data
      author's note: not sure if I want to cover this. If I do, not sure I want to cover this here.
      • Ok, not really a bug. I just hope you do because I would really love that chapter :)

EKF incorrect comment

From emailed report:

In EKF:
def residual(a, b):
    """ compute residual between two measurements containing [range, bearing].
    Bearing is normalized to [0, 360)"""

Testing looks like [-180, 180]:

print('a:', np.rad2deg(residual(np.array([[0], [np.deg2rad(195.0)]]),
                                np.array([[1], [np.deg2rad(355.0)]]))[1]))
print('b:', np.rad2deg(residual(np.array([[0], [np.deg2rad(355.0)]]),
                                np.array([[1], [np.deg2rad(5.0)]]))[1]))
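The behavior the report describes would come from a wrap into [-180°, 180°) rather than [0°, 360°); a sketch of such a residual (my own code, not the book's implementation):

```python
import numpy as np

def residual(a, b):
    """Residual between two measurements [range, bearing] (radians).
    The bearing difference is wrapped into [-pi, pi), matching the
    [-180, 180] behavior observed above, not the docstring's [0, 360)."""
    y = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    y[1] = (y[1] + np.pi) % (2.0 * np.pi) - np.pi
    return y
```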

Non-linear filter example is linear

The Non-linear filter notebook opens with the example,

There can be nonlinearity in the process model. Suppose we wanted to track the motion of a weight on a spring, such as an automobile's suspension. The equation for the motion with $m$ being the mass, $k$ the spring constant, and $c$ the damping force is

$m\frac{d^2x}{dt^2} + c\frac{dx}{dt} +kx = 0$

There is no linear solution for $x(t)$ for this second order differential equation, and therefore we cannot design a Kalman filter using the theory that we have learned.

which is in fact a linear system, described by the state variables x and $\frac{dx}{dt}$.
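To state the point explicitly: defining the state vector as $\mathbf{x} = \begin{bmatrix} x & \dot{x} \end{bmatrix}^T$ turns the second-order ODE into a first-order linear, time-invariant system,

$\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} \mathbf{x}$

so standard linear Kalman filter theory applies once the dynamics are discretized.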

Help with the Unscented Kalman Filter

Hello,

As I am still a beginner when it comes to Kalman filters, I was wondering if someone could help me solve my problem.
At the moment I am trying to estimate the position, velocity and acceleration of an e-puck robot. The robot comes equipped with encoders for the motors' position and an accelerometer giving a 3D acceleration vector in m/s^2.
Here is how I initialized my Unscented Kalman Filter:

    points = sigPts(9, alpha=0.1, beta=2.0, kappa=0.0)
    self._ukf = ukf(dim_x=9, dim_z=5, dt=TIME_STEP, hx=self.measurements, fx=EpuckNode.transition, points=points)
    self._ukf.x = np.array([float(init_pos_x), 0.0, 0.0, float(init_pos_y), 0.0, 0.0, float(init_pos_z), 0.0, 0.0])
    self._ukf.P = np.identity(9, dtype=float)
    self._ukf.Q[0:3, 0:3] = Q_discrete_white_noise(3, dt=TIME_STEP, var=0.02)
    self._ukf.Q[3:6, 3:6] = Q_discrete_white_noise(3, dt=TIME_STEP, var=0.02)
    self._ukf.Q[6:9, 6:9] = Q_discrete_white_noise(3, dt=TIME_STEP, var=0.02)
    self._ukf.R = np.diag([0.2 ** 2, 0.2 ** 2, 0.2 ** 2, 0.03 ** 2, 0.03 ** 2])  # Sensors' sensitivity

This is the transition function:

def transition(state, dt):
    # Define the transition matrix
    f = np.array([
        [1, dt, (dt ** 2)/2, 0, 0, 0, 0, 0, 0],
        [0, 1, dt, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, dt, (dt ** 2)/2, 0, 0, 0],
        [0, 0, 0, 0, 1, dt, 0, 0, 0],
        [0, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 1, dt, (dt ** 2)/2],
        [0, 0, 0, 0, 0, 0, 0, 1, dt],
        [0, 0, 0, 0, 0, 0, 0, 0, 1]
    ], dtype=float)
   # Return the array containing all the transforms
    return np.dot(f, state)

And finally the measurement function:

def measurements(self, measures):
        """
        Define the computation required for the measurements
        :param measures: An array containing: [encoder_L, encoder_R, accel_X, accel_Y]
        :return: 
        """

        # Compute the position and heading
        left_encoder = measures[0]
        right_encoder = measures[3]

        # Compute the number of steps each wheel executed
        left_steps_diff = left_encoder * MOT_STEP_DIST - self._leftStepsPrev    # Expressed in meters.
        right_steps_diff = right_encoder * MOT_STEP_DIST - self._rightStepsPrev  # Expressed in meters.

        # Compute the rotation and step differences
        delta_theta = (right_steps_diff - left_steps_diff) / WHEEL_DISTANCE  # Expressed in radians.
        delta_steps = (right_steps_diff + left_steps_diff) / 2  # Expressed in meters.

        # Extract the robot's position and orientation
        pos_x = self._pos_x + delta_steps*math.cos(self._pos_z + delta_theta/2)  # Expressed in meters.
        pos_y = self._pos_y + delta_steps*math.sin(self._pos_z + delta_theta/2)  # Expressed in meters.
        pos_z = self._pos_z + delta_theta   # Expressed in radians.

        # Filter the measurements obtained from the accelerometer and the encoders

        # Keep track of the number of steps for both wheels
        self._leftStepsPrev = left_encoder * MOT_STEP_DIST  # Expressed in meters.
        self._rightStepsPrev = right_encoder * MOT_STEP_DIST    # Expressed in meters.

        # Return the array containing all the measurements
        return np.array([pos_x, pos_y, pos_z, measures[2], measures[5]])

However each time I try to execute the filter through the following test code:

# Read the sensors' value
left_encoder = self.getLeftEncoder()
right_encoder = self.getRightEncoder()
accels = self._accelerometer.getValues()

# Predict the next state
self._ukf.predict()
self._ukf.update([left_encoder, right_encoder, accels[0], accels[1]])

# Update the robot's state
self._pos_x = self._ukf.x[0]
self._speed_x = self._ukf.x[1]
self._accel_x = self._ukf.x[2]
self._pos_y = self._ukf.x[3]
self._speed_y = self._ukf.x[4]
self._accel_y = self._ukf.x[5]
self._pos_z = self._ukf.x[6]
self._speed_z = self._ukf.x[7]
self._accel_z = self._ukf.x[8]

print(self._ukf.x)

I get an error:

Traceback (most recent call last):
self._ukf.update([left_encoder, right_encoder, accels[0], accels[1]])
File "/usr/local/lib/python2.7/dist-packages/filterpy/kalman/UKF.py", line 341, in update
y = self.residual_z(z, zp) #residual
ValueError: operands could not be broadcast together with shapes (4,) (5,) 

Could you please explain or at least point me in the direction of my mistake?
Thank you very much for your help.

Sincerely.

Chrome is not loading any notebooks

I'm using the latest version of Anaconda (still python 2.7) along with Chrome under Ubuntu but I am not able to load any of the pages when I click on the notebooks. I get an error message "Bad request". Any ideas on how to get around this?

Am I right?

It takes them many minutes to slow down or speed up significantly. So, if I know that the train is at kilometer marker 23km at time t and moving at 60 kph, I can be extremely confident in predicting its position at time t + 1 second. And why is that important? Suppose we can only measure its position with an accuracy of 500 meters. So at t+1 sec the measurement could be anywhere from 22.5 km to 23 km. But the train is moving at 60 kph, which is 16.6 meters/second. So if the next measurement says the position is at 23.4 we know that must be wrong. Even if at time t the engineer slammed on the brakes the train will still be very close to 23.0166 km because a train cannot slow down very much in 1 second.

I think it should be:
"So at t+1 sec the measurement could be anywhere from 22.516 km to 23.484 km. But the train is moving at 60 kph, which is 16.6 meters/second."
since the statement 'with an accuracy of 500 meters' cuts both ways (+ and -).
So the next sentence, 'So if the next measurement says the position is at 23.4 we know that must be wrong.', also needs a fix!
