juliaml / learningstrategies.jl
A generic and modular framework for building custom iterative algorithms in Julia
License: Other
Is there a reason that the hook methods do not resolve to hook(s::LearningStrategy, model) = nothing? I'm finding that I do not always want to include the iteration parameter in hook, and am wondering why the following is not implemented:

hook(s::LearningStrategy, model) = nothing
hook(s::LearningStrategy, model, i) = hook(s, model)
hook(s::LearningStrategy, model, data, i) = hook(s, model, i)
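To illustrate, here is a minimal self-contained sketch of the proposed fallback chain (this re-defines the abstract type locally rather than using the package's actual source; the PrintNorm strategy is hypothetical): a strategy that only defines the two-argument hook still gets called through the four-argument entry point.

```julia
# Sketch of the proposed fallback chain, defined locally for illustration.
abstract type LearningStrategy end

hook(s::LearningStrategy, model) = nothing
hook(s::LearningStrategy, model, i) = hook(s, model)
hook(s::LearningStrategy, model, data, i) = hook(s, model, i)

# Hypothetical strategy that ignores the iteration counter entirely.
struct PrintNorm <: LearningStrategy end
hook(::PrintNorm, model) = sum(abs2, model)  # report a squared norm

model = [3.0, 4.0]
hook(PrintNorm(), model, nothing, 7)  # falls through to hook(::PrintNorm, model) → 25.0
```

Each fallback simply drops one argument, so strategies opt in to only the arity they need while the learner always calls the full four-argument form.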
Hello everyone,
As part of a university group project (University of Warwick), we have been reviewing a lot of packages from the machine learning community of Julia, and have made some reports.
One of the tasks I did was reviewing this package and creating a notebook that helps the user understand the package, implement custom models and strategies, and use the built-in ones.
Here is the current link to the notebook.
https://github.com/dominusmi/warwick-rsg/blob/master/Scouting/LearningStrategies.ipynb
I think it might be a useful add-on to the repository, to help newcomers understand how to use it and such. If you agree, let me know and I'll fork + pull request.
Obviously, any modifications are welcome!
Thank you and have a good day!
I want Verbose{MaxIter} to print something like "Current iteration 002/050". But in the current implementation there isn't any hookable function before update!; any suggestion on how to implement this? I tried to overload update!(model, v::Verbose{MaxIter}, item), but the update! function does not have access to the iteration counter i from enumerate.
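One workaround, sketched below with a self-contained stand-in rather than the package's actual Verbose type, is to keep the iteration counter inside the strategy itself and increment it on each update!, so no hook before update! is needed (VerboseMaxIter is a hypothetical name):

```julia
using Printf

# Local stand-in for the package's abstract type, for a runnable sketch.
abstract type LearningStrategy end

# Hypothetical strategy carrying its own iteration counter.
mutable struct VerboseMaxIter <: LearningStrategy
    maxiter::Int
    i::Int
end
VerboseMaxIter(maxiter) = VerboseMaxIter(maxiter, 0)

function update!(model, s::VerboseMaxIter, item)
    s.i += 1                      # count iterations ourselves
    @printf("Current iteration %03d/%03d\n", s.i, s.maxiter)
end

s = VerboseMaxIter(50)
for item in 1:2
    update!(nothing, s, item)     # prints 001/050, then 002/050
end
```

The cost is a small piece of mutable state per strategy, but it avoids needing the enumerate counter to be threaded through update!.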
I'm grateful to have stumbled across this package's release on the Julia discourse page just as I was starting to think about writing my reinforcement learning code base. The framework is nearly perfect for what I need and significantly more concise and clear than what I was doing before (which was very un-Julia-like).

I was wondering if you had applied some thought to how to deal with generated data. In many reinforcement learning problems, the data the optimization uses changes from iteration to iteration (for example, when we use the current best-performing model to collect new rewards from the environment). One method I thought of to use with LearningStrategies would be to have a MetaStrategy that, in the update! function, collects new data and stores it in scratch space in model. The next MetaStrategy could then look at the data and calculate and apply a gradient to the model parameters.

The issue I see with this is forcing model to be a data store for elements needed between strategies. This could be fine for the most part, as the package seems to use model for this functionality already (i.e. if I want to Tracer the norm of the gradient, the gradient has to be cached in the model). However, some algorithms/MetaStrategies may not need all the caches I'd have to give model. My question is whether there is a better way to accomplish data generation. I'm still not 100% fluent in Julia, but packages like this one really help me see the light. Thank you.
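The pattern described above can be sketched as follows, with a toy model and entirely hypothetical strategy names (this is not the package's API, just a runnable illustration of "collect into scratch space, then consume"):

```julia
# Local stand-ins for a runnable sketch; names are hypothetical.
abstract type LearningStrategy end

mutable struct Model
    θ::Float64
    scratch::Vector{Float64}   # per-iteration generated data lives here
end

struct CollectData <: LearningStrategy end
struct ApplyGradient <: LearningStrategy
    lr::Float64
end

# Pretend the "environment rollouts" depend on the current parameters.
update!(m::Model, ::CollectData, item) = (m.scratch = [m.θ - 1.0, m.θ + 3.0])

# Consume the scratch data: a toy gradient step toward the data mean.
function update!(m::Model, s::ApplyGradient, item)
    g = m.θ - sum(m.scratch) / length(m.scratch)
    m.θ -= s.lr * g
end

# A MetaStrategy-like wrapper runs its sub-strategies in order each iteration,
# so CollectData always refreshes scratch before ApplyGradient reads it.
struct Meta{T<:Tuple} <: LearningStrategy
    strategies::T
end
update!(m::Model, ms::Meta, item) = foreach(s -> update!(m, s, item), ms.strategies)

m = Model(2.0, Float64[])
update!(m, Meta((CollectData(), ApplyGradient(0.1))), nothing)
```

The ordering inside the tuple is what guarantees the consumer sees fresh data; the downside, as noted, is that Model must carry a scratch field even for strategy combinations that never use it.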
Hello!
I've been looking at this package and testing it these last few days, trying to get the hang of it, and I have to say it really offers a very nice framework. While doing this, I was trying to use the Converged strategy, and I kept getting InexactErrors, so after checking whether they were coming from my own code, I decided to take a look at the source.
According to the docstring:

"""
Converged(f; tol = 1e-6, every = 1)
Stop learning when `norm(f(model) - lastf) ≦ tol`.
"""
mutable struct Converged{F <: Function} <: LearningStrategy
f::F # f(model)
tol::Float64 # normdiff tolerance
every::Int # only check every ith iteration
lastval::Vector{Float64}
end
My understanding was therefore that f is a function that takes the model as a parameter and returns a number, presumably some sort of score used to check convergence. The reason I highlight this is that if one looks at the setup! line (the one throwing the InexactError), f(model) is used as the argument to zeros, so it would have to be an integer specifying the size of an array of zeros:

setup!(s::Converged, model, item) = (s.lastval = zeros(s.f(model)); return)

Given my interpretation of f, this does not make sense and obviously throws an InexactError as soon as f returns a float. So to conclude: is this an error, or am I completely wrong about how Converged is meant to be used and what f is meant to be? In case it's a glitch, I wouldn't mind fixing it, since I'm already working with these functions and they're pretty fresh in my head.
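For reference, one possible fix, sketched here on a locally defined copy of the struct rather than as a patch to the package: if f(model) returns the value(s) to be compared across iterations, setup! could allocate lastval to match the length of f(model) instead of passing the value itself to zeros (which is what triggers the InexactError for non-integer returns). This is an assumption about the intended semantics, not a confirmed fix.

```julia
# Local copy of the struct for a runnable sketch.
mutable struct Converged{F<:Function}
    f::F             # f(model)
    tol::Float64     # normdiff tolerance
    every::Int       # only check every ith iteration
    lastval::Vector{Float64}
end
Converged(f; tol = 1e-6, every = 1) = Converged(f, tol, every, Float64[])

# Allocate lastval to the *length* of f(model), not its value.
setup!(s::Converged, model, item) = (s.lastval = zeros(length(s.f(model))); return)

s = Converged(m -> m)           # here f just returns the (vector) model
setup!(s, [1.5, 2.5], nothing)  # no InexactError; lastval is [0.0, 0.0]
```

Since length of a scalar is 1 in Julia, this variant also handles an f that returns a single float.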
It would be good if the underlying storage of Tracer were a DataFrame. Users could then push! several values into the Tracer in a single function call.
I'd like to do this soon... any objections to it?
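The idea can be sketched with a plain Vector of NamedTuples standing in for a DataFrame (DataFrames.jl's push! accepts NamedTuple rows in the same spirit); RowTracer and trace! are hypothetical names, not the package's Tracer API:

```julia
# Minimal stand-in for a DataFrame-backed tracer.
struct RowTracer
    storage::Vector{NamedTuple}
end
RowTracer() = RowTracer(NamedTuple[])

# Record several named values in a single call.
trace!(t::RowTracer, row::NamedTuple) = push!(t.storage, row)

t = RowTracer()
trace!(t, (iter = 1, loss = 0.9, gradnorm = 2.3))
trace!(t, (iter = 2, loss = 0.7, gradnorm = 1.8))
```

A real DataFrame backend would additionally give column-wise access and tabular display for free.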
A common use case for iterative algorithms is the tracking of values through iterations, as implemented by the Tracer strategy. What would be the preferred way to use the containers in ValueHistories.jl for this tracking? I tried using the IterFunction strategy to update a global history object, and also implementing a custom learning strategy. The latter seems cleaner, but access to the history object is trickier since it is contained inside the MetaLearner.
This is also related to JuliaML/ValueHistories.jl#9
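One pattern that avoids digging into the MetaLearner, sketched here with local stand-ins and a plain Vector of Pairs in place of a ValueHistories container (HistoryTracer is a hypothetical name): keep your own reference to the strategy before handing it to the learner, and read its history field afterwards.

```julia
# Local stand-in for a runnable sketch.
abstract type LearningStrategy end

struct HistoryTracer <: LearningStrategy
    history::Vector{Pair{Int,Float64}}   # stand-in for a ValueHistories container
end
HistoryTracer() = HistoryTracer(Pair{Int,Float64}[])

update!(model, s::HistoryTracer, i) = push!(s.history, i => model(i))

tracer = HistoryTracer()      # keep our own reference before wrapping
for i in 1:3
    update!(x -> x / 2, tracer, i)
end
tracer.history                # still accessible after learning
```

Because Julia structs are passed by reference, the copy held by a MetaLearner and the one you kept are the same object, so no accessor into the learner is needed.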
Has every been implemented as intended in the Converged and ConvergedTo strategies?