
ratinabox's People

Contributors

adamltyson, alxec, colleenjg, cyhsm, frederikrogge, gsivori, jquinnlee, kushaangupta, mehulrastogi, musicinmybrain, synapticsage, tomgeorge1234, willdecothi


ratinabox's Issues

Modules of grid cells with uniform offsets

A user asked: "If I want to generate 10*4 grid cells with uniform phase offset, 10 cells in each module and 4 modules in total. What is the best way to do that?"

There are three params you care about here: "gridscale", "orientation" and "phase_offset". If these are passed as tuples, RatInABox assumes they specify the parameters of a distribution you wish to sample from (set by the corresponding "_distribution" param; see the docstring for more info). If they are passed as array-like objects, it assumes you want to set them manually from the array (in which case the length of the array should, of course, match "n").

So in your case I would use the "modules" distribution for gridscale (the tuple then gives a list of module gridscales in meters) and set the other two manually. I don't know exactly what you want here, but the code below makes each module slightly rotated from the last and moves the cells in each module uniformly from [0,0] to [2pi, 2pi] phase offset; I'm sure you can generalise.

import numpy as np
from ratinabox import Environment, Agent
from ratinabox.Neurons import GridCells

Env = Environment()
Ag = Agent(Env)
GCs = GridCells(Ag, params={
    "n": 40,
    "gridscale": (0.1, 0.3, 0.6, 1.0),
    "gridscale_distribution": "modules",  # the 40 cells are split evenly between the four modules
    "orientation": ([0]*10 + [0.1]*10 + [0.2]*10 + [0.3]*10),
    "phase_offset": [(1/10)*(i % 10)*np.array([2*np.pi, 2*np.pi]) for i in range(40)],
})

GCs.plot_rate_map(shape=(4,10))

(figure: the resulting 4×10 grid of rate maps)

Make sense?

Error in ratinabox/Agent.py animate_trajectory

Hi Tom,

Probably because of an update in matplotlib, the function animate_trajectory in ratinabox/Agent.py (line 557) throws the error AttributeError: module 'matplotlib' has no attribute 'animation'.

You can replicate the error in matplotlib version 3.6.2:

>>> import matplotlib
>>> matplotlib.__version__
'3.6.2'
>>> matplotlib.animation
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/marco/miniconda3/envs/wp1/lib/python3.9/site-packages/matplotlib/_api/__init__.py", line 224, in __getattr__
    raise AttributeError(
AttributeError: module 'matplotlib' has no attribute 'animation'
>>> from matplotlib import animation
>>> animation
<module 'matplotlib.animation' from '/Users/marco/miniconda3/envs/wp1/lib/python3.9/site-packages/matplotlib/animation.py'>

The simple workaround is to import animation explicitly (from matplotlib import animation); accessing it as an attribute of the top-level package only works if the submodule has already been imported.
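
For reference, a minimal sketch of that workaround applied to the call from the traceback below (only the import and the attribute lookup change; the FuncAnimation arguments are copied verbatim from the traceback):

from matplotlib import animation  # explicit submodule import

anim = animation.FuncAnimation(
    fig,
    animate,
    interval=40,
    frames=int((t_end - t_start) / (40e-3 * speed_up)),
    blit=False,
    fargs=(fig, ax, t_start, t_end, speed_up),
)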

Full error message
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In [9], line 1
----> 1 anim = agent.animate_trajectory(speed_up=10)

File ~/phd/RatInABox/ratinabox/Agent.py:557, in Agent.animate_trajectory(self, t_start, t_end, speed_up)
    554     return
    556 fig, ax = self.plot_trajectory(0, 10 * self.dt)
--> 557 anim = matplotlib.animation.FuncAnimation(
    558     fig,
    559     animate,
    560     interval=40,
    561     frames=int((t_end - t_start) / (40e-3 * speed_up)),
    562     blit=False,
    563     fargs=(fig, ax, t_start, t_end, speed_up),
    564 )
    565 return anim

File ~/miniconda3/envs/wp1/lib/python3.9/site-packages/matplotlib/_api/__init__.py:224, in caching_module_getattr.<locals>.__getattr__(name)
    222 if name in props:
    223     return props[name].__get__(instance)
--> 224 raise AttributeError(
    225     f"module {cls.__module__!r} has no attribute {name!r}")

AttributeError: module 'matplotlib' has no attribute 'animation'

Object vector cells

I intend to add a new class of cells, ObjectVectorCells. Each object vector cell is tuned to a tuple of an object location, a preferred distance and a preferred angle. The activity of the cell will be high when the agent is at the preferred distance, at the preferred angle, away from the object.

Each OVC will have an on-off parameter, meaning they will be able to resemble, for example, LEDs which can be on or off.

Multiplatform support with jax for GPU

Could RIAB support GPUs via jax (import jax.numpy as jnp)?

This should be backward compatible, i.e. users would optionally set a GPU-usage flag; otherwise numpy is used as normal.

GPU would not massively speed up RIAB except for FeedForwardCells and any use-case where synaptic weight matrices into FeedForwardCells are learnt, typically requiring large N_cells x N_cells matrix multiplications. It is likely that non-GPU will continue to satisfy a large majority of use cases.

For now I do not intend to use jax to pre-compile or vectorise RIAB code, a change which would require significant and likely backwards incompatible modifications to the code base.
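
A minimal sketch of what such an opt-in backend flag could look like (entirely hypothetical; RIAB currently has no such flag, and the names here are illustrative):

USE_JAX_GPU = False  # hypothetical user-set flag

if USE_JAX_GPU:
    import jax.numpy as xp
else:
    import numpy as xp

# downstream code uses xp everywhere, so arrays live on the GPU when jax is
# selected and on the CPU otherwise
W = xp.ones((1000, 1000))            # e.g. a synaptic weight matrix into FeedForwardCells
rates = xp.tanh(W @ xp.ones(1000))   # large matmul: the case where GPU actually helps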

Open discussion: Object disappears after Agent passes by

Hi,

I'm working on a project in which the Object is some kind of "dessert" that can be eaten by the Agent. Therefore, the Object should disappear after the agent's trajectory passes over it. Is it possible to realize this in the current code? I know it requires changing the environment, because the object is associated with it.

Thanks a lot!!
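
One possible sketch of this, assuming object positions are stored in Env.objects["objects"] as an (N, 2) array (an assumption about the internals; worth checking against your installed version) and that nothing else holds references to the removed objects:

import numpy as np

EAT_RADIUS = 0.05  # hypothetical "eating" distance in metres

def eat_nearby_objects(Env, Ag):
    """Remove any object the Agent has come within EAT_RADIUS of."""
    objs = np.asarray(Env.objects["objects"])
    if objs.size == 0:
        return
    dists = np.linalg.norm(objs - Ag.pos, axis=1)
    Env.objects["objects"] = objs[dists > EAT_RADIUS]

# called once per time step, after Ag.update()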

Noisy neurons

I will include functionality to add noise to the neuronal firing rates. Currently, if you want noise, your best option is to add it post-hoc to the data obtained from RatInABox.
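
For example, a minimal post-hoc approach on the recorded history (the history["firingrate"] key is as used elsewhere in these issues; neurons stands for any Neurons instance that has been updated in a simulation loop, and the noise scale is arbitrary):

import numpy as np

rates = np.asarray(neurons.history["firingrate"])  # (n_timesteps, n_cells)
noisy_rates = rates + np.random.normal(0, 0.05, size=rates.shape)  # additive Gaussian noise
noisy_rates = np.clip(noisy_rates, 0, None)  # firing rates stay non-negative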

Task env errors

@SynapticSage I'm getting some errors and several hundred identical warnings on the recent tests. Sorry if it's something I did; the last lines of the stack trace are:

tests/test_taskenv.py::test_parallel_api[10-Env0-Ag0-2-possible_goal_positions1-noninteract]
  /opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/_pytest/python.py:198: PytestReturnNotNoneWarning: Expected None, but tests/test_taskenv.py::test_parallel_api[10-Env0-Ag0-2-possible_goal_positions1-noninteract] returned True, which will be an error in a future version of pytest.  Did you mean to use `assert` instead of `return`?
    warnings.warn(

tests/test_taskenv.py::test_parallel_api[10-Env0-Ag0-2-random_2-interact]
  /opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/_pytest/python.py:198: PytestReturnNotNoneWarning: Expected None, but tests/test_taskenv.py::test_parallel_api[10-Env0-Ag0-2-random_2-interact] returned True, which will be an error in a future version of pytest.  Did you mean to use `assert` instead of `return`?
    warnings.warn(

tests/test_taskenv.py::test_parallel_api[10-Env0-Ag0-2-random_2-noninteract]
  /opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/_pytest/python.py:198: PytestReturnNotNoneWarning: Expected None, but tests/test_taskenv.py::test_parallel_api[10-Env0-Ag0-2-random_2-noninteract] returned True, which will be an error in a future version of pytest.  Did you mean to use `assert` instead of `return`?
    warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED test_taskenv.py::test_parallel_api[1-Env0-Ag0-2-random_2-interact] - AssertionError: ['agent_0'] != set()
assert {'agent_0'} == set()
  Extra items in the left set:
  'agent_0'
  Full diff:
  - set()
  + {'agent_0'}
============ 1 failed, 188 passed, 483 warnings in 87.53s (0:01:27) ============
Error: Process completed with exit code 1.

Grid cell initialization does not preserve "n" provided in params dict if you provide low and high for distribution as a list

Hey Tom,

I have been playing with the new GridCells class with the 1.7.0 update and was having issues with the number of grid cells not matching the number I provided at initialization. For example, when I initialized with the following params I would get 2 grid cells, and not 500 as I was hoping:

GC = GridCells(Ag, params={
    "n": 500,
    "gridscale": [0.28, 0.73],
    "gridscale_distribution": "logarithmic",
    "description": "three_shifted_cosines",
    "min_fr": 0,
    "max_fr": 1,
    "name": "GridCells",
})

I was expecting this to work since, at initialization, the logarithmic option is supposed to receive "low" and "high" from a list provided in gridscale. I think it would be good to actually support this for all the distribution options that sample between some floor and ceiling value (uniform, for example). The issue, I think, stems from this line during initialization:
https://github.com/TomGeorge1234/RatInABox/blob/0ab2e66ea94cfb9817b41684f478d43e440bf39d/ratinabox/Neurons.py#L922

Whenever you provide a low and a high rather than a single value, "n" is overwritten as 2 (the length of the list). I have some changes I've made to circumvent the issue and can make a pull request to contribute the change if you'd like!

Best,
Quinn

Speed and place cells

Hi,
I am using ratinabox to model some place cell activity. As we know, place cells are really only active representations of the animal's location when the animal is moving / during theta (of course they also activate during SWRs, etc., but those don't represent the current location). I would like to look at the code that "specifies" this property, but I can't find it. Would you mind directing me to the right place? Many thanks!!

Flexible Manifolds in FOV Cells

Right now, the manifolds in the FOV Cells are essentially uniform, which might not be the case for rats in general.

Having a separate get_manifolds function to return the 4 lists that control the manifolds of the FOVCells might go a long way. This, in theory, could be overridden by any user to define any custom manifold they want.

Proposing two default options for the manifold:

  • Uniform manifold (currently supported)
  • Manifolds inspired by the Hartley model (Hartley et al. (2000), equation (1) and the following sentence): more densely packed cells close to the agent and more loosely packed cells away from the agent.

This will require the following (a sketch follows this list):

  • Changing the way we initialize the FOV cells
    • Calling a new function generate_manifold which accepts
      • FOV_distance, FOV_angles
      • a way to pass manifold parameters
    • and returns the lists tuning_distance, tuning_angles, sigma_distances, sigma_angles
  • Modifying the plotting in the display_manifold function of the FOV class
    • this should now plot the actual shapes of the cells, rather than circles, based on the parameters returned from the generate_manifold function
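
A rough sketch of what such a generate_manifold hook could compute. This is hypothetical, not existing RatInABox code: the function name and output lists come from the proposal above, while the linear widening of cells with distance is my own reading of the Hartley-inspired option (beta=12 echoes the BVC default used elsewhere in these issues):

import numpy as np

def generate_manifold(FoV_distance, FoV_angles, spatial_resolution=0.04,
                      mode="hartley", beta=12.0):
    """Return (tuning_distances, tuning_angles, sigma_distances, sigma_angles)
    describing a field-of-view manifold. mode="uniform" spaces rings of cells
    evenly; mode="hartley" widens cells linearly with distance, so they are
    densely packed near the agent (inspired by Hartley et al. (2000), eq. 1).
    FoV_angles are in degrees, distances in metres."""
    tuning_distances, tuning_angles = [], []
    sigma_distances, sigma_angles = [], []
    d = max(FoV_distance[0], spatial_resolution / 2)
    while d < FoV_distance[1]:
        if mode == "uniform":
            sigma_d = spatial_resolution
        else:  # "hartley": radial width grows with distance from the agent
            sigma_d = spatial_resolution * (1 + d / beta)
        dtheta = sigma_d / max(d, sigma_d)  # arc spacing ~ radial width
        for a in np.arange(np.deg2rad(FoV_angles[0]), np.deg2rad(FoV_angles[1]), dtheta):
            tuning_distances.append(d)
            tuning_angles.append(a)
            sigma_distances.append(sigma_d)
            sigma_angles.append(dtheta)
        d += sigma_d  # next ring sits one cell-width further out
    return (np.array(tuning_distances), np.array(tuning_angles),
            np.array(sigma_distances), np.array(sigma_angles))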

Non-rectangular environments

An upgrade is planned to allow non-rectangular environments in RiaB.

Users would (optionally) pass, at initialisation, coordinates defining a simple polygon which would then make up the boundary of the arena.

Whenever the Agent's position, or the location of a Neuron etc., is selected, it would be guaranteed to fall within this polygon. Plotting of rate maps etc. would also be updated so they still look elegant.

This may require adding in the shapely package as a requirement.
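
A minimal sketch of the kind of containment check shapely enables (shapely API only; none of this is current RatInABox code, and the polygon coordinates are arbitrary):

import numpy as np
from shapely.geometry import Point, Polygon

boundary = Polygon([(0, 0), (1.5, 0), (1.5, 0.5), (0.75, 1.0), (0, 0.5)])

def sample_position_in_polygon(polygon, rng=np.random.default_rng()):
    """Rejection-sample a position guaranteed to fall inside the polygon."""
    minx, miny, maxx, maxy = polygon.bounds
    while True:
        p = rng.uniform((minx, miny), (maxx, maxy))  # point in the bounding box
        if polygon.contains(Point(p)):
            return p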

Minimum required python version currently 3.10

Thanks for your work on this package!

I noticed that the setup file mentions python 3.7 as the minimum version,

https://github.com/TomGeorge1234/RatInABox/blob/23f21b3c23e4a92464625704b04ae5f00a0e16c2/setup.cfg#L28

However, certain features in the code, like the "|" notation for type annotations, were only introduced in python 3.10:

https://github.com/TomGeorge1234/RatInABox/blob/23f21b3c23e4a92464625704b04ae5f00a0e16c2/ratinabox/utils.py#L1079

Cf https://docs.python.org/3.10/library/typing.html


I think it would be good to either:

  • clarify the python version in the setup.cfg, and bump it to >= 3.10, or
  • replace the problematic statements (I would guess there are only very few of them) to make the code compatible with, e.g., at least python 3.8 (see the example below).
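
For illustration, the second option usually amounts to a mechanical rewrite of each annotation (the function name here is made up):

# requires python >= 3.10: the "|" union syntax in annotations
def smooth(data, kernel_width: float | None = None): ...

# equivalent, and compatible with python 3.8+
from typing import Optional

def smooth(data, kernel_width: Optional[float] = None): ...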

Thoughts?

Slow animations

Animations are slow. Long run, the fix for this is to have plotting done in a more structured way so the whole frame doesn't need to be updated on each frame, just the changing components; then we can set blit=True.

Another good solution could be to multithread this: split the time span into segments and render each one on a separate thread.

FieldOfViewNeurons structuring

FieldOfViewNeurons are structured poorly, with the superclass being Neurons when it should really be ObjectVectorCells or BoundaryVectorCells. The current workaround is that the OVCs or BVCs are stored as an attribute called (cheekily) super. This is bad and should be updated to the following:

FieldOfViewNeurons is a parent class. The manifold display function etc. should live in here.
FieldOfViewOVCs and FieldOfViewBVCs are child classes.

Confused about setting multiple spatial goals in the A2C demo

Hello, I am trying to give the agent the option of multiple goals in the arena. When I put in multiple position centers, they show up in the plot, but they only trigger rewards if there is exactly one goal; otherwise there is zero reward, even when the agent reaches a goal while multiple goals are present. I've tried turning on nonsequential mode and using reset_n_goals = 2, and these don't work. No error, just no rewards or goal completions trigger.

For the completion criterion, right now I've extended the timeout and have it quit anywhere from 10-100s of timesteps after a goal, but it's not registering completion either with more than one goal, even when I see the agent travel over both goals.

Also, in goals = [SpatialGoal(env, pos=GOAL_POS, goal_radius=GOAL_RADIUS, reward=reward)], I am confused whether, if I make GOAL_POS a list of np arrays (one [x, y] array per goal position), I have to make GOAL_RADIUS and reward arrays of the same length. I've tried it both ways, keeping the latter two scalar and making them all the same list length; it doesn't work, but it also doesn't complain either way. It would be nice to have multiple goals present in one arena with different sizes and reward magnitudes, if it supports that, but it doesn't seem to work.

"I" not updated in FeedForward Neurons

I just stumbled upon the fact that the "I" key in the inputs attribute of FeedForward Neurons is initialized with zeros when an input is added, but it is never updated thereafter. Only "I_temp" is updated when get_state() is called.

I'm wondering whether it would be useful to update "I" during the update() step (i.e., when get_state() is called with evaluate_at="last").

For example:

inputlayer["I_temp"] = I
if evaluate_at == "last":
    inputlayer["I"] = I

See:

self.inputs[name]["I"] = I

Module names overwritten by classes

I thought I'd mention this as a refactor is now in the works: it's not possible to import the modules, instead of the classes (e.g., Neurons.py instead of the Neurons class). This is due to the * imports in the __init__.py overwriting the module names in the namespace.

For example, on dev:

In [1]: import ratinabox
In [2]: import ratinabox.Neurons as imported_Neurons
In [3]: imported_Neurons
Out[3]: ratinabox.Neurons.Neurons

The same thing happens on v2_beta (tested after PR #58).

In [1]: import ratinabox
In [2]: import ratinabox.neuron.Neurons as imported_Neurons
In [3]: imported_Neurons
Out[3]: ratinabox.neuron.Neurons.Neurons

I haven't found a clean way around this to access the module. This is needed, for example, when you want to reimport a module that is in use, e.g.

In [1]: import ratinabox
In [2]: import ratinabox.neuron.Neurons as imported_Neurons
In [3]: import importlib
In [4]: importlib.reload(imported_Neurons) # this fails, as it is not a module

I don't know what the standard way is to address this. I think in some cases there is the use of underscores and/or lowercase, e.g., _neurons.py or neurons.py, or doing the * import at a different depth.
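
For what it's worth, one standard-library workaround relies on documented Python behaviour rather than anything RatInABox-specific: the real module object still lives in sys.modules even after the attribute is shadowed (assuming the key matches the module path on your branch):

import sys
import importlib

import ratinabox  # the * imports in __init__.py shadow the submodule names

neurons_module = sys.modules["ratinabox.Neurons"]  # the true module object
importlib.reload(neurons_module)  # works, since this is a module, not a class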

Egocentric BVCs, KeyError: 'vel'

Thanks for the amazing toolbox. I was just trying to create some egocentric BVCs and I received the following error:
Code:

BVCs = BoundaryVectorCells(Ag, params={"n": num_neurons, "color": "C3", "reference_frame": "egocentric"})

Error:

File ~usr/local/lib/python3.8/dist-packages/ratinabox/Neurons.py:908, in BoundaryVectorCells.__init__(self, Agent, params)
    906 locs = locs.reshape(-1, locs.shape[-1])
    907 self.cell_fr_norm = np.ones(self.n)
--> 908 self.cell_fr_norm = np.max(self.get_state(evaluate_at=None, pos=locs), axis=1)
    910 if verbose is True:
    911     print(
    912         "BoundaryVectorCells (BVCs) successfully initialised. You can also manually set their orientation preferences (BVCs.tuning_angles, BVCs.sigma_angles), distance preferences (BVCs.tuning_distances, BVCs.sigma_distances)."
    913     )

File ~usr/local/lib/python3.8/dist-packages/ratinabox/Neurons.py:984, in BoundaryVectorCells.get_state(self, evaluate_at, **kwargs)
    982     vel = self.Agent.pos
    983 else:
--> 984     vel = kwargs["vel"]
    985 vel = np.array(vel)
    986 head_direction_angle = utils.get_angle(vel)

KeyError: 'vel'

Agent vector cells

Hi @mehulrastogi, when you get a chance can you push Agent vector cells, as I want to make a figure to showcase them in the paper!

A few comments: these will need to be Agent-selective (not sure how you've done this right now). Agents have names which, I guess, could be used to specify which Agent the cells select for, but I don't think these names are unique. We might need to give Agents a new Agent.number attribute which is guaranteed to be unique amongst all Agents in the same Environment.

AgentVectorCells and FieldOfViewAVCs

Hi,

We're trying the notebooks in the demo folder and found that the last notebook requires the classes AgentVectorCells and FieldOfViewAVCs to be imported. However, we noticed these two are recently added classes and are only available in the dev branch. Can I ask when they will be pushed to the public package? Or can I install ratinabox from the dev branch?

Thanks!
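
(For what it's worth, pip can install straight from a branch, e.g. pip install git+https://github.com/TomGeorge1234/RatInABox.git@dev — assuming the branch is named dev, and with the usual caveat that development branches may be unstable.)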

1D rate map plot is blank due to NaNs

I was running Neurons.plot_rate_map(method="history") with a 1D environment and getting blank rate maps. I realized that in Neurons.py (line 463), if there are any NaNs in the binned data obtained from utils.bin_data_for_histogramming(), then at line 471 they are propagated when you run utils.interpolate_and_smooth(). (In my case, they were propagated through the whole map.) However, NaN values are expected for any position in the map that was never visited by the Agent.

Possible fixes: do you prefer to (1) remove the positions with NaN rate map estimates from x and rate_map before running the interpolation, or (2) interpret NaNs as 0s for the purpose of interpolation? A sketch of both options follows.
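
Both options are short in numpy (illustrative only; x and rate_map are the 1D arrays returned by the binning step, as above):

import numpy as np

# (1) drop never-visited bins before interpolating
valid = ~np.isnan(rate_map)
x_clean, rate_map_clean = x[valid], rate_map[valid]

# (2) treat never-visited bins as silent for the purpose of interpolation
rate_map_filled = np.nan_to_num(rate_map, nan=0.0)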

RatInABox custom Gym environment for RL

Hi,

Your repo is really cool and the visuals are excellent. However, when going through the RL example section, I find that although the example is decent, it still leaves a lot of the skeleton showing (in terms of all the direct stuff you do with the environment and the agent). I would really like to give PPO with an intrinsic curiosity module a go, since your environment is quite interesting; specifically, to see what the rat would do if rewarded only with curiosity vs. the environment reward plus curiosity. It would also be interesting to see if it has rat-like exploration, and what would happen if the reward were as sparse as possible (e.g. a reward only for reaching the reward square).

However, it feels a bit difficult to interact with it and get something like that working immediately, mainly because the environment does not make use (at least in this case: https://github.com/TomGeorge1234/RatInABox/blob/1.x/demos/reinforcement_learning_example.ipynb) of the standard Markov Decision Process abstraction. If you had this abstraction, I think it would be super cool, since it makes it much easier to try out a variety of RL methods quickly and easily on the task, as it cleanly separates the RL part from your environment.

Not sure if this is a useful suggestion, but if it is, I would be super keen to interact with the environment once it has this modification, given how cool and interesting your repo looks.
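
To make the request concrete, here is a rough sketch of the kind of wrapper being asked for. It is hypothetical: the gymnasium API is standard, and Agent.update(drift_velocity=...) appears elsewhere in these issues, but Environment.sample_positions, setting Agent.pos directly, and the sparse goal logic are all assumptions rather than RatInABox's actual RL interface:

import numpy as np
import gymnasium as gym
from ratinabox import Environment, Agent

class RatInABoxGymEnv(gym.Env):
    """Wrap a RatInABox Environment/Agent pair in the gymnasium MDP API."""

    def __init__(self, goal_pos=np.array([0.5, 0.5]), goal_radius=0.1):
        self.riab_env = Environment()
        self.agent = Agent(self.riab_env)
        self.goal_pos, self.goal_radius = goal_pos, goal_radius
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(2,))
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.agent.pos = self.riab_env.sample_positions(n=1)[0]  # assumed helper
        return self.agent.pos.copy(), {}

    def step(self, action):
        # interpret the action as a drift velocity, as in the RL demo
        self.agent.update(drift_velocity=np.asarray(action, dtype=float))
        dist = np.linalg.norm(self.agent.pos - self.goal_pos)
        terminated = bool(dist < self.goal_radius)
        reward = 1.0 if terminated else 0.0  # reward as sparse as possible
        return self.agent.pos.copy(), reward, terminated, False, {}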

Occasionally Ag.update() returns nan velocity and position

Hi Tom,
Please see this example of error I encountered:

import numpy as np
from ratinabox import Environment, Agent

Env = Environment()
Ag = Agent(Env)
Ag.dt = 0.1
print(Ag.pos, Ag.velocity)
# [0.79958237 0.76560547]  [0.51603493 0.49723318]

random_drift_velocity = np.random.random(2)
print(random_drift_velocity)
# [0.51777053 0.88738508]

Ag.update(drift_velocity=random_drift_velocity,
          drift_to_random_strength_ratio=2.0)
print(Ag.pos, Ag.velocity)
# [nan nan]  [nan nan]

RatInABox2.0 - Opening the discussion

I've begun to think about 2.0. The reason is that there are certainly a couple of choices I made early on in development which weren't optimal. Now could be a good time to fix these, as the community is growing but is still small enough that it won't be super disruptive. Also, fixing them will make it easier to maintain RiaB in the long run.

I'm opening this issue to get community thoughts on this. @SynapticSage @colleenjg @jquinnlee @mehulrastogi you're some of the most active users I know fairly well, so I'm tagging you to get your input (if you have any), but anyone can chip in here. Here are my thoughts:

Essential and backwards incompatible changes (do first):

  • Refactoring: As discussed in #58 #55. E.g. it's not nice having all Neurons classes in one .py file.
  • Args not Dicts: It increasingly annoys me that parameters are always handed in as dicts. This is unconventional and has warranted very-well-made but hacky workarounds, e.g. #38 #39
  • Global Environment update(): Given that, now, Environments know about their Agents and Agents know about their Neurons, we could have just one update function in Env which cascades through everything else. Cleaner?
  • Rename dev --> main
  • Environment stores the global clock. This just makes sense imo.
  • Better policy API - I don't love the drift_velocity kwarg. Maybe instead Agents can have a policy() method which returns a drift - this would default to the random motion policy, unifying that too. Just something to consider.

Other essential changes

  • Type hinting: This is a new thing in python which I've been told to consider. Any thoughts?
  • Modularity: Break down some of the larger update() functions, perhaps moving parts into new agent/neuron/env-specific utils scripts.
  • Dynamic environments: Environments can change by adding walls and objects, but we should formalise this with setters which, whenever called, save the "state" of the environment alongside a timestamp to an Env.history dictionary. Then, when plotting / animating the environment, we can pass in a time argument and the correct state can be retrieved and plotted. The state of the environment is only appended to history whenever it changes (e.g. a setter is called).
    • Related to the above, if an Environment changes halfway through a simulation then animations will not support this, since they always replot the last environment, which is both wasteful and possibly wrong. To get around this, whenever you call plot_environment() it can be passed a fig, an ax and a new object which is a list/dict of plot objects, R, which are all matplotlib.Artists already existing on the figure. The environment can store an equivalent list of plot objects, and whenever this changes (e.g. a wall is added or an object is moved etc.) the change is logged; then plotting can (i) get the list of plot objects corresponding to the correct time and (ii) compare it to the passed list; if they aren't equal then replot the env, otherwise don't bother. Something like that.
    • Alternatively (maybe better): Environments have an Env.history dictionary storing the full "state" of the environment (all object locations, walls, boundaries, etc.). Then Env.plot_environment() takes a time argument, finds the state of the environment at that time, and plots that.
  • Plotting: To me at least, the visualisation ability of RiaB is really important, but animations are slow, and I like animating things, so this is annoying. Could improve by being smarter about how we render stuff in matplotlib, e.g. not re-rendering the Environment or trajectories each frame. See #54
    • For example, if this Environment state dictionary was stored inside the figure itself with some kind of hash code, we could just check on each call whether the desired state matches the state that's been plotted. Only replot if they aren't equal.
  • Only pass ax not fig to figure plotting functions. This may throw up some things but likely minor.
  • Break up utils.py into separate ones for the Agent package, Neurons package and Env package and maybe also a misc.
  • Documentation: Would be great to have a sustainable ReadTheDocs page. We should think about how to structure doc strings so they are all uniform. #36
  • Unit testing: I have been pretty sloppy about this but will add loads more.
  • Testing on PRs Run RiaB tests, test doc strings and text styling.
  • Move this to RatInABox/RatInABox not TomGeorge1234/RatInABox
    • RatInABox/RatInABox_RL package containing all the RL stuff (Actor, Critic, ValueNeuron, TDError, TaskEnv etc.)
  • IntermediateNeurons subclass for neurons which aren't "fundamental" but take other neurons as inputs. Current examples are FeedForwardLayer and NeuralNetworkNeurons
  • DynamicNeurons subclass for neurons which aren't static i.e. you can't call Neurons.plot_rate_map() because they actually depend on the past history. Examples include TDErrorNeurons (to be made) or anything with recurrency.
  • SmoothRandomFeatureNeurons: just some spatially tuned but random neurons. Users just provide a length scale. Would be useful for a lot of feature-learning studies. Probably something like a gaussian process underlying these neurons (a sketch follows this list).
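
As a concrete (and entirely hypothetical) sketch of the SmoothRandomFeatureNeurons idea: random Fourier features give smooth, random, spatially tuned maps from a single length-scale parameter, approximating draws from a Gaussian process with an RBF kernel. None of this is existing RatInABox code:

import numpy as np

def smooth_random_rate_maps(positions, n_cells=10, length_scale=0.2,
                            n_features=256, rng=np.random.default_rng(0)):
    """positions: (N, 2) array of locations. Returns an (N, n_cells) array of
    smooth random spatial maps with the given length scale."""
    # frequencies ~ N(0, 1/length_scale^2) approximate an RBF-kernel GP
    # (random Fourier features, Rahimi & Recht 2007)
    omega = rng.normal(0.0, 1.0 / length_scale, size=(2, n_features))
    phases = rng.uniform(0, 2 * np.pi, size=n_features)
    features = np.sqrt(2.0 / n_features) * np.cos(positions @ omega + phases)
    weights = rng.normal(size=(n_features, n_cells))
    return features @ weights  # each column is one smooth random map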

Things to consider

  • Neurons should follow the torch.nn.Module API - this would make the evaluation of complex feedforward graphs, which currently happens in a backwards manner, more efficient. This might require renaming the .get_state() method to .forward(). Need to think more about this.
  • conda: Once all of the above is done it would be nice to publish this on the conda-forge channel.
  • Jax compatibility: Very on the fence about this one. Probably leaning towards not doing it. Would be great to have speed ups, autograd and gpu capacity but it could be just a bit too much / unnecessary / off-putting for non-python geeks (tbh, like me). But if jax is the future I want to consider it. Options include:
    • Don't do it
    • Partial jax to hit a few heavy-lifting utils functions. Q: Does this even work, or would converting to/from jax arrays be inconveniently slow here?
    • Full jax, no numpy: np --> jnp everywhere.
    • Both jax and numpy: users choose which backend. This sounds hard, but I've played around and it probably could be done. Has complications though.

I'm not a software guy, so @SynapticSage @mehulrastogi feel free to give high-level comments about the best way to go forward.

Hide and Seek In A Box?

I just stumbled upon this from your gymnasium compliance tweet.

I was wondering how feasible it would be to combine the field of view + multi-agent + new gym wrapper classes to create a hide-and-seek environment, similar to the mujoco-based works (with bonus points if you could also move walls around), using the tools here?

If not, and if this isn't something you'd have interest in having in here, no worries.

Firingrate norm for BVCs should be unscaled

I noticed that the BVC firingrates weren't scaling correctly if max_fr is not 1. I've traced the source of the issue to self.cell_fr_norm. This value is initialized based on scaled firingrates, but is then applied before scaling when calling get_state(), and thus cancels out the subsequent scaling. See L1543 and L1679 of Neurons.py.

import numpy as np

from ratinabox import Agent, Environment
from ratinabox.Neurons import BoundaryVectorCells

MAX_FR = 1

np.random.seed(10)

env = Environment()
ag = Agent(env)
BVCs = BoundaryVectorCells(ag, params={"min_fr": 0, "max_fr": MAX_FR})

for i in range(500):
    ag.update()
    BVCs.update()

min = np.asarray(BVCs.history["firingrate"]).min()
max = np.asarray(BVCs.history["firingrate"]).max()

print(f"{min:.4f} to {max:.4f}")

gives 0.0000 to 1.0165 (FYI: you can see there's a slight overshoot of the max, which might be for another issue.)

But, if you set MAX_FR = 10, you still get 0.0000 to 1.0165.

I'll create a PR in a moment to propose a solution.

Unnoticed, incorrectly typed out parameter names

Hey @TomGeorge1234 , I was just working with ratinabox and decided to implement a small functionality for myself. I wanted to mention it here in case you'd like to add it / anyone else could use it.

So, as we know, the downside of allowing users to specify a wide range of parameters using a params dictionary when initializing a new object (Agent, Environment, Neurons) is that incorrectly typed parameter names can slip through easily. For example, you might initialize a GridCells object, accidentally using the key fr_max instead of max_fr in your parameters dictionary, and not notice for a while that you failed to set max_fr as intended.

So, I wrote a function that checks whether objects have unexpected attributes, and raises a warning if they do.

import inspect
import warnings

def check_attributes(Obj, check_attrs=None):
    """Checks that the attributes of an object are expected, based on a 
    default initialization. This is useful when passing a dictionary of
    parameters to set attributes, as it could flag incorrectly typed out 
    parameter names.
    
    Args:
        Obj (object): Object to check.
    
    Optional Args:
        check_attrs (dict, optional): Dictionary of attribute names to check. If None, 
        all attributes are checked. Defaults to None.
    """

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        argspec = inspect.getfullargspec(type(Obj).__init__)
    arg_names = argspec.args
    if arg_names[0] != "self":
        raise NotImplementedError("Expected the first argument to be 'self'.")
    required_args = [
        getattr(Obj, arg_name) 
        for arg_name in arg_names[1: -len(argspec.defaults)]
        ]
    
    kwargs = dict()
    if "check_attributes" in arg_names:
        kwargs["check_attributes"] = False

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        base_Obj = type(Obj)(*required_args, **kwargs)
    
    unexpected_attributes = [
        key for key in Obj.__dict__.keys() if key not in base_Obj.__dict__.keys()
    ]

    if check_attrs is not None:
        unexpected_attributes = [
            key for key in unexpected_attributes if key in check_attrs
        ]
    
    if len(unexpected_attributes):
        num = len(unexpected_attributes)
        unexpected_attributes = ", ".join(
            [f"'{attr}'" for attr in unexpected_attributes]
            )
        if hasattr(Obj, "name"):
            object_name = Obj.name
        else:
            object_name = str(Obj)
        warnings.warn(
            f"Found {num} unexpected attribute(s) for {object_name}: "
            f"{unexpected_attributes}"
            )

If called in the init of a class, as follows, the function will flag with a warning any keys in params that would not be attributes of a default object of that class. Specifically, using GridCells as an example:

class GridCells(Neurons):
    ...
    def __init__(self, Agent, params={}, check_attributes=True): # added check_attributes argument
        default_params = {
            ...
        }
        self.Agent = Agent
        default_params.update(params)
        self.params = default_params
        super().__init__(Agent, self.params)
        if check_attributes: # check, if applicable
            utils.check_attributes(self, params.keys())

If you then call GridCells(Agent, params={"fr_max": 2}) you get the following warning:
UserWarning: Found 1 unexpected attribute(s) for GridCells: fr_max

So, this would be useful for any classes where (1) all parameters received are set as attributes, and (2) all acceptable parameters received have default values in the default_params dictionaries at some level of initialization. I believe most if not all of the classes that take a params dict as an input in ratinabox meet these two criteria.

Still, because this is a bit... non-pythonic... I should mention a few potential unintended consequences I can currently foresee

  1. an infinite loop, if calling this from the __init__(), which is avoided as long as the objects this is used with have a check_attributes keyword argument in their __init__(),
  2. spurious printing when the dummy object is created (the function suppresses warnings.warn calls but, of course, can't suppress print calls, and doesn't currently suppress log calls),
  3. this seems unlikely, but I'll mention it in case: a spurious reference to the dummy object could be created while running check_attributes() if, for example, when an Object, like GridCells is initialized, circular references are created (i.e., not only does the GridCells object have a pointer to its Agent (self.Agent), but the Agent then also has a pointer added in return to any associated Neurons (e.g., self.Agent.Neurons.append(self)). As far as I can tell, this kind of circular referencing was not implemented in ratinabox, so I don't believe that my dummy object will create undue clutter in any associated objects.

This got a bit complicated..., but anyway, do let me know if you think it would be useful!

Access to built-in "uniform" function for PlaceCells / Agent velocity initialization

Great work, it's simple and pretty!

I'm trying the 1D version. Here are some small questions/suggestions.

In the PlaceCells class, place_cell_centres uses the "uniform_jitter" method to sample positions.
Although I can pass an array into place_cell_centres, I think it would be easier to also expose the built-in "uniform" method?

When initializing an Agent, if I don't use the params argument but change speed_mean directly, the initial velocity isn't updated (it's still the default value).
However, if I use params, it's correct. Not sure if this is a problem or not.

Ag.speed_mean = 0.20
# Ag.velocity = 0.08

Ag = Agent(Env, params={"speed_mean": 0.20})
# Ag.velocity = 0.20

New Actions

Opening a discussion for expanding the actions allowed within the RiaB framework. The following was suggested by @jeff.

Behavioral experiments require rats/mice to do some actions other than navigation. These range from pressing a lever (/button), licking, eating, rolling a wheel, etc. @TomGeorge1234 I think we should think about expanding the environment to handle these sorts of actions.

I was thinking about a spatial button in a box as an initial proof of concept. Obviously, this is a step toward dynamic environments where agents can do tasks in steps. A simple task might be to press a button to go to the next room to get a reward.

Thoughts?

Head direction flexibility

Egocentric representations (e.g. HeadDirectionCells and egocentric BoundaryVectorCells) require the agent's head direction to calculate their firing rates. Currently the head direction of the Agent is assumed to just be the normalised velocity. This is a little restrictive. It would be better if there were a separate variable which could, in theory, be independent of velocity. Here's what I propose:

A new variable Agent.head_direction: np.ndarray(shape=(2,)). By default Agent.head_direction is updated to equal the (normalised) Agent.velocity at each time step. However, if a new input parameter ("head_direction_smoothing_timescale", defaulting to zero) is non-zero, then the head direction is instead the velocity vector smoothed by an exponential kernel. This should allow head_direction to be less noisy even if the velocity vector is noisy.

In the future users could generalise this further and even have entirely independent dynamics for the head direction vector.
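
A minimal sketch of the proposed update rule (the parameter name comes from the proposal above; the discretised exponential smoothing is my own reading of it, valid for dt much smaller than tau):

import numpy as np

def update_head_direction(head_direction, velocity, dt, tau):
    """tau = head_direction_smoothing_timescale. tau = 0 recovers the current
    behaviour, where head direction is just the normalised velocity."""
    if tau == 0:
        hd = np.array(velocity, dtype=float)
    else:
        alpha = dt / tau  # fraction of the new velocity mixed in per step
        hd = (1 - alpha) * head_direction + alpha * velocity
    norm = np.linalg.norm(hd)
    return hd / norm if norm > 0 else hd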

How far can RiaB go as a platform for linking behavior to neural activity?

My understanding of the current implementation is that the OVC and place cells in RiaB are essentially sensory neurons (activity is controlled by the relationship of the agent to the world, with no intrinsic dynamics, no internal connections and no state), and yet we know these neurons are deep in the brain and deeply interconnected, and we know there are phenomena like remapping and other state-dependent phenomena.

Is RiaB an appropriate platform / foundation for examining these more realistic neural mechanisms? What does the path to there look like?

history not loaded when importing trajectory

Hello!

While recently using this timeless toolbox I have run into an issue when importing trajectories and updating the agent and neurons with the imported data. I first loaded the position data, and then handed these data off to the agent as such:

import joblib

position = joblib.load("position_file")
fps = 30  # frames per second in Hz
Ag.import_trajectory(times=[i / fps for i in range(position.shape[-1])], positions=position, interpolate=False)

Problem is, when I then try to iterate through the updates like this:

T = position.shape[-1]
for i in range(int(T)):
    Ag.update()
    BVC.update()

It kicks the following error:

Traceback (most recent call last):
...
\ratinabox\Agent.py", line 262, in _update_position_along_imported_trajectory
old_time = self.history['t'][-1]
IndexError: list index out of range

I have a work-around that simply adds a zero value first to the history attribute, and the updates then seem to work, but why not just fill the history values or compute this when doing the trajectory import?
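
For anyone hitting the same error, the work-around sketched out (the "t" key appears in the traceback above; the "pos" key is an assumption about the history dict, so check your version):

# seed the Agent's history with one initial sample so history['t'][-1] exists
Ag.history["t"].append(0.0)
Ag.history["pos"].append(Ag.pos.copy())  # assumed key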

Thanks Tom and the team!!

Generalise HeadDirectionCells to N directions

Currently there are always exactly 4 HDCs encoding the N, E, S, W components of the velocity. It would be better to generalise this to N arbitrary or evenly spaced directions; perhaps there could also be an angular_spread parameter determining their specificity.
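
For illustration, one way such a generalisation could look (hypothetical, not existing RatInABox code; the angular_spread parameter is the one suggested above, here mapped onto a von Mises tuning curve):

import numpy as np

def hd_cell_rates(head_direction_angle, n=8, angular_spread=np.pi / 6):
    """Firing rates of n HDCs with preferred directions evenly spaced on the
    circle and von Mises tuning whose width is set by angular_spread."""
    preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)
    kappa = 1.0 / angular_spread**2  # concentration: smaller spread, sharper tuning
    return np.exp(kappa * (np.cos(head_direction_angle - preferred) - 1))  # peak rate 1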

BVC firingrates go over max_fr

As mentioned in #108, BVC firing rates sometimes go over max_fr. I believe this is due to how self.cell_fr_norm is computed from locations uniformly sampled across the environment during initialization.

I've given it a lot of thought, and I don't think there's a straightforward, perfect solution. However, I think I can propose a slight improvement. Increasing the resolution by making dx smaller is not ideal, as memory usage climbs quickly. So, I propose using the uniformly sampled locations, but also adding jittered locations.

For clarity, I've separated the initialization from the init into its own function, but everything here is the same as before, except the lines in if add_jittered:. Basically, in addition to using the uniform locs to estimate max firing, I also jitter each location by a value between -dx/2 and dx/2 in x and y. (Using dx/2 should prevent any points going out of bounds.) These jittered locations are appended to locs, and all of the values are used together to estimate the max firing rate for each neuron. Importantly, this cannot make the estimates worse, as locs still includes the original uniformly sampled locs. It can only improve them, in cases where the jittered locs find higher firing rates near the uniform locs. This does double the memory use, but my tests indicate this is more than compensated for by the improvement in the estimates.

def _set_cell_fr_norm(self, dx=0.04, add_jittered=True):

    locs = self.Agent.Environment.discretise_environment(dx=dx)
    locs = locs.reshape(-1, locs.shape[-1])

    if add_jittered:
        jitter = np.random.uniform(-dx/2, dx/2, locs.shape)
        locs = np.append(locs, locs + jitter, axis=0)

    self.cell_fr_norm = np.ones(self.n) # value for initialization

    # ignores the warning raised during initialisation of the BVCs
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        _cell_fr_norm = np.max(self.get_state(evaluate_at=None, pos=locs), axis=1)
        self.cell_fr_norm = (_cell_fr_norm - self.min_fr) / (self.max_fr - self.min_fr)

From the examples below, you can see that no jitter and dx=0.04 leads to 1.5% of firingrates in this random run being over the max firing rate of 1. Cutting dx in half greatly improves this (down to 0.6% of firingrates above 1), but requires 4x more datapoints and, in my tests, used way too much memory. Using jitter and a larger dx (0.05) requires only a few more points and performs quite well (0.63% of firingrates above max_fr), though the max firingrate is higher than with dx=0.02. I've also included an example with dx=0.04 and jitter for comparison purposes. Of course, this uses 2x more points than no jitter.

(figure: comparison of max-firingrate overshoot across dx and jitter settings)

I've run it several times to make sure the improvement isn't a fluke. It might not be worth implementing, as it's not an actual fix, but I figured I'd propose it just in case.

A different solution would be a two-step estimate: Use locs to identify the top 6 locations with the highest firingrates for each neuron, and then sample uniformly a certain number of points around those.

Package structuring

Long run, we will probably look at changing the package structure of ratinabox so that Environments, Agents and Neurons are modules. Each module should contain its own contribs folder (tidier this way, rather than one global contribs containing loads of stuff). The package structure will eventually look something like this:

├── demos
├── dist
├── figures
└── ratinabox
    ├── README.md
    ├── __init__.py
    ├── Environments
    │   ├── __init__.py
    │   ├── Environment.py
    │   ├── SubEnvironment.py
    │   └── contribs
    ├── Agents
    │   ├── __init__.py
    │   ├── Agent.py
    │   ├── SubAgent.py
    │   └── contribs
    ├── Neurons
    │   ├── __init__.py
    │   ├── Neurons.py
    │   ├── PlaceCells.py
    │   ├── GridCells.py
    │   ├── ...
    │   └── contribs
    ├── data
    └── utils.py

Small bug relating to PR #101

@colleenjg relating to PR #101

Sorry! After giving it the all clear I ran the tests and spotted an extra warning. If I just initialise a population of BVCs

Env = Environment()
Ag = Agent(Env)
BVCs = BoundaryVectorCells(Ag, params={'n': 11})

it warns

UserWarning: Ignoring 'n' parameter value (11) that was passed, and setting number of BoundaryVectorCells neurons to 11, inferred from the cell arrangement parameter.

Sorry, I can dig into this in the morning but it might be quicker for you to solve.

add error checking to parameter settings

Hi there,

I am using the develop version of RatInABox (maybe this won't be an issue in the normal version).

It would be nice to add error checking to parameter settings. For example, in generating trajectories with different tortuosity:

Ag = Agent(Env, params={
    "rotation_velocity_std": 60 * (np.pi / 180)
})

This will not change the tortuosity, since the correct param name is "rotational_velocity_std", not "rotation_velocity_std". However, the code does not throw an error for this.

Error if initializing GridCells with random_gridscales=False

from ratinabox import Environment, Agent
from ratinabox.Neurons import GridCells

env = Environment()
ag = Agent(env)
grid_cells = GridCells(ag, params={"random_gridscales": False})

grid_cells.plot_rate_map()

The code above raised an error:

  File "/Users/pei/.pyenv/versions/3.10.5/lib/python3.10/site-packages/ratinabox/Neurons.py", line 244, in plot_rate_map
    t_end = t_end or t[-1]
IndexError: index -1 is out of bounds for axis 0 with size 0

My quick workaround is:

self.gridscales = np.full(self.n, fill_value=self.gridscales)

BVC tuning distances distribution

Hello! Just writing to suggest a small feature that I hope is useful for the BVC cell type. I have noticed the tuning distance for the tuning_distances attribute in the BVC class is drawn from a Rayleigh distribution by default. Based on some recent comparisons against large population recordings, I am observing better fits in a BVC-to-place model if the BVC tuning distances are drawn from a uniform distribution, and I thought this could be a useful feature to integrate. The solution I have found easy enough is to add the param "max_wall_dist" to the params dictionary, and perhaps a string to determine whether the distances are drawn from a uniform or Rayleigh distribution, and then define the tuning distances from a uniform distribution, e.g.:

default_params = {
    "n": 10,
    "reference_frame": "allocentric",
    "pref_wall_dist": 0.15,
    "max_wall_dist": 0.75,
    "distance_distribution": "uniform",
    "angle_spread_degrees": 11.25,
    "xi": 0.08,
    "beta": 12,
    "dtheta": 2,
    "min_fr": 0,
    "max_fr": 1,
    "name": "BoundaryVectorCells",
}

...

if self.distance_distribution == "uniform":
    self.tuning_distances = np.random.uniform(low=0, high=self.max_wall_dist, size=self.n)
else:
    self.tuning_distances = np.random.rayleigh(scale=self.pref_wall_dist, size=self.n)

Perhaps this would be a useful option to integrate for the BVC class. Many thanks for the amazing toolbox!

FieldOfView Neurons Unexpected Behaviour

Hi Tom,

I am opening an issue to better explain my problem and track progress more easily.

I am experiencing unexpected behaviour with regard to FieldOfView neurons, as I anticipated last week. I ran an experiment which can probably explain my problem more clearly. I think your input would be very valuable because you have a better understanding of how BVCs are coded; I can't wrap my head around it.

I have simulated a horizontal trajectory for 60 seconds at 60 FPS. The trajectory goes from position (0.1, 0.1) to (1.1, 0.1) in a 1.2x1.2 environment.

import numpy as np
from ratinabox import Environment, Agent
from ratinabox.Neurons import FieldOfViewNeurons

whiskers_params = {
    "FoV_distance": [0, 0.4],
    "FoV_angles": [0, 100],
    "spatial_resolution": 0.1,
}

env = Environment(params={'scale':1.2, 'aspect':1})
agent = Agent(env)
whiskers = FieldOfViewNeurons(agent, params=whiskers_params)

agent.import_trajectory(
    times=np.linspace(0, 60, 60*60),
    positions=list(zip(np.linspace(0.1, 1, 60*60), np.linspace(0.1, 0.1, 60*60))),
)

DT = 1./60
t_max = 60

for i in range(int(t_max/DT)):
    agent.update(dt=DT)
    whiskers.update()


These are the receptive fields of the BVCs associated with the FoV cells (whiskers.super.plot_BVC_receptive_field()):
(figure: BVC receptive fields)

I would expect FoV cells with index 7, 8, 16, and 17 - which are the four outermost cells pointing around 0 degrees (which in this case is to the right); see the BVC plot above, they are ordered - to start firing when the agent gets close to the right wall, and to keep firing until the end of the simulation.

On the other hand, these four cells fire when the agent is close to the right wall, but stop firing when the agent is TOO close to the right wall, i.e. at the end of the simulation. See the following plot (x axis: time, y axis: firing rate):
(figure: FoV cell firing rates over time)

I think the problem arises from the BVC cells. It would be great if you have a hint on how to investigate further!

Thank you!
