numenta / htmresearch
Experimental algorithms. Unsupported.
License: GNU Affero General Public License v3.0
SensorimotorExperimentRunner's feedLayers method is optimized for taking sensorimotor sequences. Refactor it to create a separate function that can take a single sensorimotor transition, so that SensorimotorExperimentRunner can be used more easily in an online fashion. This might warrant renaming SensorimotorExperimentRunner as well (maybe to SensorimotorModel).
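A minimal sketch of the proposed refactor, with all names hypothetical (the real layer objects and their compute signatures will differ): feedLayers becomes a thin loop over a single-transition method, so the same model can also be driven online one transition at a time.

```python
class SensorimotorModel(object):
    """Hypothetical sketch: batch feeding reduces to a loop over one
    single-transition method, enabling online use."""

    def __init__(self, layers):
        # layers: objects exposing compute(sensorPattern, motorPattern, learn=...)
        self.layers = layers

    def feedTransition(self, sensorPattern, motorPattern, learn=True):
        # one sensorimotor step: push the (sensor, motor) pair through each layer
        for layer in self.layers:
            layer.compute(sensorPattern, motorPattern, learn=learn)

    def feedLayers(self, sequence, learn=True):
        # batch entry point, now just a loop over feedTransition
        for sensorPattern, motorPattern in sequence:
            self.feedTransition(sensorPattern, motorPattern, learn=learn)
```

With this split, an online caller invokes feedTransition directly as new transitions arrive, while offline experiments keep using feedLayers unchanged.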
Allow running tests in parallel easily.
Complement to numenta/nupic-legacy#1507.
Break the testing phase up with resets, to correctly test stability in the temporal pooler.
To avoid collisions with the classes they are mixed into.
To collect relevant metrics.
The build is broken. The current problem is a numpy version mismatch after installing matplotlib.
Since experiments take a long time to run, it would be nice to be able to store the models and inspect them long after they are finished.
Record how long each test takes for performance monitoring.
Depends on numenta/nupic-legacy#1370.
Subsample and tune thresholds to achieve graceful degradation when running out of resources.
Corresponding issue for numenta/nupic-legacy#1473.
We want to see graphs of how temporal pooling performance (stability and discriminability) varies with either the number of distinct worlds (images) or the number of fixation points.
Add a .travis.yml that installs NuPIC from the same binary that nupic.regression uses (repos.yaml with travis: true).
Have the "learn on one cell" mode be an option as a keyword argument to the constructor of GeneralTemporalMemory.
We want to move the TM code to the new temporal_memory implementation. The goal of this task is to allow us to delete TM_New.py from the repository. Specifically: create a base class that adds distal dendrite learning and the learn-on-one-cell mode to the temporal_memory code, then convert the experiments to use this code.
To make it easy to run many experiments in parallel.
Create sensorimotor experiment (similar to sm_1D_test) using new TM class plus pooling.
Ensure the new temporal memory contains the learn-on-one-cell bug fix. This is reflected in lines 809 to 825 of TM_SM.py:
s = None
if self.learnOnOneCell == True:
  # in learn on one cell mode, always learn on one cell per column
  # unless reset has just been called
  i = self.getSeqLearnCell(c)
  if not i:
    i, s = self.getBestMatchingCell(c, self.activeState['t-1'],
                                    self.distalDendriticInput['t-1'])
    if s is not None and s.isSequenceSegment():
      s.totalActivations += 1  # activationFrequency
      s.lastActiveIteration = self.iterationIdx
    else:
      # if best matching cell does not exist, then get least used cell
      i = self.getLeastUsedCell(c)
Currently, there is a limit to how many worlds/elements can be in a capacity test, due to the pretty-printing of the elements. If we exceed this limit, just print ? instead of symbols, and allow running arbitrarily large tests.
Union pooler:
https://github.com/numenta/nupic.research/tree/master/union_pooling
This would be nice to support as a follow-on:
https://github.com/numenta/nupic.research/blob/master/sensorimotor/sensorimotor/temporal_pooler.py
sm_test_with_pooling is not giving consistently good results on all patterns, or with a larger number of patterns. This task is to debug and fix that.
Implement metrics for sensorimotor inference. For perfect performance two criteria must be met:
Every element in a sensorimotor sequence should get predicted perfectly. This metric would operate on columns. Every sensory column should be predicted and no extra columns should be predicted.
Sensorimotor sequences through distinct static patterns should get completely distinct representations through the course of each sequence. This metric would operate at the level of cells. At every step, the SDR representing context at that point should be different from the SDR representing context at any point in any of the other sequences. We might exclude the very first element in each sequence since that is unpredicted.
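The two criteria above can be sketched as simple set operations over column and cell indices. This is a minimal illustration, not the repository's metric implementation; the function names and the set-based input format are assumptions.

```python
def predictionAccuracy(predictedColumns, activeColumns):
    """Criterion 1, per time step: every sensory column should be
    predicted, and no extra columns should be predicted.

    Both arguments are sets of column indices.
    Returns (numCorrect, numMissing, numExtra); perfect performance is
    (len(activeColumns), 0, 0).
    """
    correct = predictedColumns & activeColumns
    missing = activeColumns - predictedColumns
    extra = predictedColumns - activeColumns
    return len(correct), len(missing), len(extra)


def contextDistinctness(sequenceCellSDRs):
    """Criterion 2, at the cell level: the context SDR at any step of one
    sequence should differ from the context SDR at every step of every
    other sequence.

    sequenceCellSDRs is a list of sequences, each a list of per-step sets
    of active cell indices (drop each sequence's first, unpredicted,
    element before calling). Returns the worst-case overlap between SDRs
    drawn from two different sequences; 0 means completely distinct.
    """
    worst = 0
    for i, seqA in enumerate(sequenceCellSDRs):
        for seqB in sequenceCellSDRs[i + 1:]:
            for sdrA in seqA:
                for sdrB in seqB:
                    worst = max(worst, len(sdrA & sdrB))
    return worst
```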
Switch to using standard NuPIC encoders (Category and SDRCategory) for encoding sensory and motor patterns instead of PatternMachines.
Otherwise it's hard to know which algorithm each metric belongs to.
Investigate this result:
Setting up a new experiment...
Done setting up experiment.
Training (worlds: 2, elements: 12)...
Fed 10 / 578 elements of the sequence in 0.41 seconds.
Fed 20 / 578 elements of the sequence in 0.38 seconds.
Fed 30 / 578 elements of the sequence in 0.41 seconds.
Fed 40 / 578 elements of the sequence in 0.38 seconds.
Fed 50 / 578 elements of the sequence in 0.40 seconds.
Fed 60 / 578 elements of the sequence in 0.38 seconds.
Fed 70 / 578 elements of the sequence in 0.40 seconds.
Fed 80 / 578 elements of the sequence in 0.41 seconds.
Fed 90 / 578 elements of the sequence in 0.38 seconds.
Fed 100 / 578 elements of the sequence in 0.41 seconds.
Fed 110 / 578 elements of the sequence in 0.42 seconds.
Fed 120 / 578 elements of the sequence in 0.44 seconds.
Fed 130 / 578 elements of the sequence in 0.43 seconds.
Fed 140 / 578 elements of the sequence in 0.43 seconds.
Fed 150 / 578 elements of the sequence in 0.57 seconds.
Fed 160 / 578 elements of the sequence in 0.63 seconds.
Fed 170 / 578 elements of the sequence in 0.64 seconds.
Fed 180 / 578 elements of the sequence in 0.62 seconds.
Fed 190 / 578 elements of the sequence in 0.62 seconds.
Fed 200 / 578 elements of the sequence in 0.62 seconds.
Fed 210 / 578 elements of the sequence in 0.62 seconds.
Fed 220 / 578 elements of the sequence in 0.62 seconds.
Fed 230 / 578 elements of the sequence in 0.62 seconds.
Fed 240 / 578 elements of the sequence in 0.63 seconds.
Fed 250 / 578 elements of the sequence in 0.62 seconds.
Fed 260 / 578 elements of the sequence in 0.65 seconds.
Fed 270 / 578 elements of the sequence in 0.65 seconds.
Fed 280 / 578 elements of the sequence in 0.65 seconds.
Fed 290 / 578 elements of the sequence in 0.52 seconds.
Fed 300 / 578 elements of the sequence in 0.43 seconds.
Fed 310 / 578 elements of the sequence in 0.41 seconds.
Fed 320 / 578 elements of the sequence in 0.44 seconds.
Fed 330 / 578 elements of the sequence in 0.41 seconds.
Fed 340 / 578 elements of the sequence in 0.43 seconds.
Fed 350 / 578 elements of the sequence in 0.40 seconds.
Fed 360 / 578 elements of the sequence in 0.43 seconds.
Fed 370 / 578 elements of the sequence in 0.43 seconds.
Fed 380 / 578 elements of the sequence in 0.41 seconds.
Fed 390 / 578 elements of the sequence in 0.43 seconds.
Fed 400 / 578 elements of the sequence in 0.43 seconds.
Fed 410 / 578 elements of the sequence in 0.43 seconds.
Fed 420 / 578 elements of the sequence in 0.43 seconds.
Fed 430 / 578 elements of the sequence in 0.47 seconds.
Fed 440 / 578 elements of the sequence in 0.61 seconds.
Fed 450 / 578 elements of the sequence in 0.64 seconds.
Fed 460 / 578 elements of the sequence in 0.66 seconds.
Fed 470 / 578 elements of the sequence in 0.65 seconds.
Fed 480 / 578 elements of the sequence in 0.65 seconds.
Fed 490 / 578 elements of the sequence in 0.70 seconds.
Fed 500 / 578 elements of the sequence in 0.74 seconds.
Fed 510 / 578 elements of the sequence in 0.68 seconds.
Fed 520 / 578 elements of the sequence in 0.67 seconds.
Fed 530 / 578 elements of the sequence in 0.66 seconds.
Fed 540 / 578 elements of the sequence in 0.67 seconds.
Fed 550 / 578 elements of the sequence in 0.67 seconds.
Fed 560 / 578 elements of the sequence in 0.67 seconds.
Fed 570 / 578 elements of the sequence in 0.67 seconds.
Done training.
+---------------------------------------------------------------+--------+--------+----------+---------------+--------------------+
| Metric | min | max | sum | mean | standard deviation |
+---------------------------------------------------------------+--------+--------+----------+---------------+--------------------+
| [TP] # active cells | 20 | 20 | 11520 | 20.0 | 0.0 |
| [TP] stability confusion | 0 | 40 | 5747664 | 35.0117199873 | 9.52273451876 |
| [TP] distinctness confusion | 110 | 110 | 220 | 110.0 | 0.0 |
| [TP] connections per column (initial) | 1766.0 | 1934.0 | 942963.0 | 1841.72460938 | 31.2957218622 |
| [TP] connections per column (final) | 1768.0 | 2021.0 | 948707.0 | 1852.94335938 | 41.789403127 |
| [TM] # active columns | 20 | 20 | 11480 | 20.0 | 0.0 |
| [TM] # predicted => active columns (correct) | 0 | 20 | 920 | 1.60278745645 | 5.43017693067 |
| [TM] # predicted => inactive columns (extra) | 0 | 0 | 0 | 0.0 | 0.0 |
| [TM] # unpredicted => active columns (bursting) | 0 | 20 | 10560 | 18.3972125436 | 5.43017693067 |
| [TM] # predicted => active cells (correct) | 0 | 20 | 920 | 1.60278745645 | 5.43017693067 |
| [TM] # predicted => inactive cells (extra) | 0 | 0 | 0 | 0.0 | 0.0 |
| [TM] # segments | 0 | 5280 | 1912240 | 3319.86111111 | 1566.42107388 |
| [TM] # synapses | 0 | 211200 | 76489600 | 132794.444444 | 62656.8429554 |
| [TM] # predicted => active cells per column for each sequence | 1 | 1 | 387 | 1.0 | 0.0 |
| [TM] # sequences each predicted => active cells appears in | 1 | 1 | 387 | 1.0 | 0.0 |
+---------------------------------------------------------------+--------+--------+----------+---------------+--------------------+
It's currently using the old TemporalMemoryInspectMixin.
Create simple example using ImageSensor region. This can serve as a base for how to use the Network API with our old image sensor classes.
There may be a way I can remove the AWS secret key if I change the S3 permissions. Need to look into it, because devs have no way of knowing if their changes pass CI on PRs until they are merged into master.
Instead of using (context-free sensor signal + motor signal) as context and disallowing lateral connections, experiment with using (motor signal) as context and allowing lateral connections.
We want to see the distributions of connections over columns, so a plot would also be nice.
Move some of the functionality from AbstractSensorimotorTest into a runner class that is not dependent on unittest. Update sm_1D_test to use this class.
Create ExhaustiveOneDAgent that walks through all possible transitions for a given OneDWorld.
Depends on numenta/nupic.research#51.
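A possible shape for such an agent, sketched under assumptions: the world is taken to expose a numElements count, and the chooseMotorValue/move method names mirror the style of the other agents in this repo but are not the confirmed API.

```python
class ExhaustiveOneDAgent(object):
    """Sketch: deterministically emits every (start position, motor delta)
    transition of a 1D world exactly once per sweep."""

    def __init__(self, world):
        self.world = world
        n = world.numElements  # assumed attribute name
        # every ordered (start, end) pair with start != end, expressed as
        # (start position, motor delta)
        self.transitions = [(start, end - start)
                            for start in range(n)
                            for end in range(n)
                            if end != start]
        self._cursor = 0
        self.position = 0

    def chooseMotorValue(self):
        """Jump to the next transition's start position and return its
        motor delta; wraps around after all n*(n-1) transitions."""
        start, delta = self.transitions[self._cursor]
        self._cursor = (self._cursor + 1) % len(self.transitions)
        self.position = start
        return delta

    def move(self, motorValue):
        self.position += motorValue
        return self.position
```

Because the transition list is enumerated up front, a test can verify coverage simply by checking that n*(n-1) distinct transitions are visited.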
Merge code from abstract_sensorimotor_test.py into SensorimotorExperimentRunner and refactor tests to use it.
Create simple vision experiment where we have a small number of images and small number of fixations. We want to see stable unique representations. This is mostly to test and debug the framework.
Create a spatial pooler mixin class that monitors a few metrics:
The distribution of column activity (are all columns being used equally?). This is very similar to the active duty cycle.
What is the average overlap for each column before inhibition?
Are we getting good SDRs? For two patterns that have N bits of overlap, how many bits of overlap do the SDRs have? This could be represented in an NxM overlap count matrix: OverlapCount[i,j] would be the number of times two patterns that had i bits of input overlap had j bits of SDR overlap. We should see a strong diagonal and a gradual drop-off as you move away from the diagonal.
See comments here: numenta/nupic.research#57
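The overlap count matrix described above can be tallied with a few lines of numpy. This is an illustrative sketch, not the mixin itself; the function name and the sets-of-bit-indices input format are assumptions.

```python
import itertools

import numpy as np


def overlapCountMatrix(inputs, sdrs, numInputBits, numSDRBits):
    """Tally the OverlapCount matrix over all pattern pairs.

    inputs and sdrs are parallel lists of sets of active bit indices
    (sdrs[k] is the spatial pooler output for inputs[k]). The result has
    OverlapCount[i, j] = number of pattern pairs with i bits of input
    overlap and j bits of SDR overlap; a healthy pooler should show a
    strong diagonal ridge with a gradual drop-off.
    """
    counts = np.zeros((numInputBits + 1, numSDRBits + 1), dtype=int)
    for a, b in itertools.combinations(range(len(inputs)), 2):
        i = len(inputs[a] & inputs[b])  # input-space overlap
        j = len(sdrs[a] & sdrs[b])      # SDR-space overlap
        counts[i, j] += 1
    return counts
```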
We currently support only A-Z + a-z + 0-9 as human-friendly representations of world elements. This restricts us to a small world size. Increase this capacity.
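One way to lift the 62-element ceiling: keep the familiar single-character symbols for small indices and fall back to multi-character base-62 labels beyond them. A hypothetical sketch (the function name is not from the repo):

```python
import string

# the 62 symbols currently supported: A-Z, a-z, 0-9
SYMBOLS = string.ascii_uppercase + string.ascii_lowercase + string.digits


def elementLabel(index):
    """Map an element index to a short human-friendly label.

    Indices below 62 keep the existing single-character symbols; larger
    indices get multi-character base-62 labels, so world size is
    effectively unbounded.
    """
    if index < len(SYMBOLS):
        return SYMBOLS[index]
    label = ""
    while index:
        index, remainder = divmod(index, len(SYMBOLS))
        label = SYMBOLS[remainder] + label
    return label
```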
Clean up and update documentation for the TemporalPooler class. Some of the wording in there is a bit old. In particular we should explain exactly why the pooling state helps.
Each Agent should keep track of its own position in its World, instead of the World keeping track of that information.
Implement a simple class and metric for evaluating layer 3 (pooling).
See https://github.com/numenta/nupic.regression/issues/14. Whatever fix is made there needs to be made here as well.
The current TemporalPooler class has a parameter called synPermActiveInactiveDec with the associated rule:
For inactive columns, synapses connected to input bits that are on are decreased by synPermActiveInactiveDec.
This task is to remove this rule, and all associated data structures such as _permanenceDecCache.
When testing with multiple worlds and no shared elements, I'm seeing perfect TP stability in some of them, while seeing imperfect TP stability in others. I would expect all of them to be imperfect in this case. Investigate this.
This is a cleaner way to train the network in phases.