Comments (16)
Hi @Akwasi-Richie ,
Could you please send your code here, or an image of your error? I don't really know what caused your error; it may not be from my library.
from mealpy.
Hi @thieu1995
I have attached my objective function and the error message. Pardon me, but my coding skills are not top-notch, and I may have written my objective function incorrectly.
Hi @Akwasi-Richie ,
You are so close.
Choose which strategy you want to use to select the best optimizer below. I usually combine both the training and validation sets (case 5 or 6).
In your objective function, you need to return a real value or a list of real values, not the Keras model.
def objective_function(solution):
    ....
    model.compile(....)
    history_object = model.fit(xTrain, yTrain, batch_size=10, validation_split=0.3, epochs=2)
    # Note: model.fit() returns a History object; the metrics live in
    # history_object.history, a dict of per-epoch lists.
    # Pick how you want to select the best optimizer: based on accuracy or loss,
    # on training, validation, or both.
    # 1. Using accuracy of training only
    return history_object.history["accuracy"]      # Your problem is a "maximum" problem
    # 2. Using loss of training only
    return history_object.history["loss"]          # A "minimum" problem ==> redefine the problem dictionary above
    # 3. Using accuracy of validation only
    return history_object.history["val_accuracy"]
    # 4. Using loss of the validation set only
    return history_object.history["val_loss"]
    # 5. Using accuracy of both training and validation sets, with the higher weight on validation.
    #    ==> Redefine the problem dictionary above with the keyword "obj_weight": [0.3, 0.7]
    return [history_object.history["accuracy"], history_object.history["val_accuracy"]]
    # 6. Using loss of both training and validation sets
    return [history_object.history["loss"], history_object.history["val_loss"]]
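For cases 5 and 6, the two returned objectives are combined into a single fitness via a weighted sum, which is what the "obj_weight": [0.3, 0.7] keyword expresses. A minimal stand-alone sketch of that idea in plain Python (an illustration of the weighting, not mealpy's internal code):

```python
# Sketch of weighted-sum scalarization: two objective values are combined
# into one fitness using the weights from "obj_weight". Plain Python for
# illustration only; the library performs this combination internally.

def scalarize(objectives, weights):
    """Combine multiple objective values into a single weighted fitness."""
    if len(objectives) != len(weights):
        raise ValueError("objectives and weights must have the same length")
    return sum(o * w for o, w in zip(objectives, weights))

# Example: training accuracy 0.80, validation accuracy 0.70,
# with the higher weight (0.7) on validation.
fitness = scalarize([0.80, 0.70], [0.3, 0.7])
print(round(fitness, 3))  # 0.80*0.3 + 0.70*0.7 = 0.24 + 0.49 = 0.73
```

This is why the validation-weighted case (5) favors solutions that generalize, rather than ones that merely fit the training set.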
Remove the last line of your code: "history = model.fit...".
The problem you are trying to solve is finding the best optimizer for this neural network.
So the solution is the best optimizer, after you decode it the way you did in the objective_function:
best_optimizer = Optimizer_Encoder.inverse_transform(int(GWO_Model.solution[0][0]))[0]
print(f"{best_optimizer}")
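The decoding step above can also be sketched without an encoder object: the metaheuristic searches over a continuous variable, and the objective function maps it onto a categorical optimizer name. The optimizer list below is illustrative, not necessarily the one in the user's code:

```python
# Hypothetical sketch of decoding a continuous solution value into a
# categorical optimizer choice. The list of names is an assumption for
# illustration; any set of Keras optimizer names would work the same way.
OPTIMIZERS = ["SGD", "RMSprop", "Adam", "Adadelta", "Adagrad", "Adamax", "Nadam"]

def decode_optimizer(value):
    """Map a continuous value from the search space to an optimizer name."""
    idx = int(value)                              # truncate, like int(solution[0])
    idx = max(0, min(idx, len(OPTIMIZERS) - 1))   # clamp to a valid index
    return OPTIMIZERS[idx]

print(decode_optimizer(2.7))   # "Adam"
print(decode_optimizer(99.0))  # out-of-range values clamp to "Nadam"
```

The clamp matters because metaheuristics may propose values slightly outside the declared bounds.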
Thank you for your assistance, @thieu1995.
I will do as you said and give you feedback when done.
Hi @thieu1995
I modified the objective function as you directed but ran into a whole new error. I have attached the new objective function and the error message for your perusal.
Hi @Akwasi-Richie ,
My bad, I forgot that the history object returns a list of values, one per epoch. So you should return the last element of the history list, i.e. the value from the last epoch. For example:
def objective_function(solution):
    ....
    history_object = model.fit(xTrain, yTrain, batch_size=10, validation_split=0.3, epochs=2)
    # 1. Using accuracy of training only
    return history_object.history["accuracy"][-1]
    # 2. Using loss of training only
    return history_object.history["loss"][-1]
    # 3. Using accuracy of validation only
    return history_object.history["val_accuracy"][-1]
    # 4. Using loss of the validation set only
    return history_object.history["val_loss"][-1]
    # 5. Using accuracy of both training and validation sets, with the higher weight on validation.
    return [history_object.history["accuracy"][-1], history_object.history["val_accuracy"][-1]]
    # 6. Using loss of both training and validation sets
    return [history_object.history["loss"][-1], history_object.history["val_loss"][-1]]
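To see why the [-1] matters: Keras records one metric value per epoch, so the history is a dict of lists. A small stand-in dict (made-up numbers, just for illustration) shows the indexing:

```python
# Stand-in for a Keras History.history dict after a 2-epoch fit:
# one value per epoch for each metric.
history = {
    "loss":         [0.95, 0.60],
    "accuracy":     [0.55, 0.72],
    "val_loss":     [0.90, 0.65],
    "val_accuracy": [0.50, 0.68],
}

# Returning the whole list would hand the optimizer one number per epoch;
# [-1] picks only the value from the final epoch.
final_val_acc = history["val_accuracy"][-1]
print(final_val_acc)  # 0.68
```

Without the [-1], the framework receives a list whose length depends on the epoch count, which is what triggered the new error in this thread.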
Hi @thieu1995
Noted, and I really appreciate your patience.
Hi @Akwasi-Richie ,
Is your code working now? If not, I made another video that uses mealpy to optimize hyper-parameters for a neural network.
You can find the video link in the ReadMe.md.
Hi @thieu1995,
I have yet to apply your last suggestion; we had a power outage, but it has been resolved now. I will run the code and get back to you.
As for the video, YouTube notified me when you posted it, but I have yet to watch it. I will do that this evening as well.
Thank you
Hi @thieu1995.
The code is running now, but I am a bit puzzled: I set my epoch to 2, yet it is on its 4th run now. I have attached a snip of the output.
Hi @Akwasi-Richie ,
There are 2 types of epochs in this program. The first is for the neural network itself (look at the model.fit() function), and the second is for the metaheuristic algorithm.
As I said in the video, for a hyper-parameter optimization problem, each time you create a new solution you are training a new neural network. That is why you got the 4th run.
Set verbose = 1 in the model.fit() function and you will see that each network runs for 2 epochs only, then another network is created and runs for 2 epochs again, and so on.
Set verbose = 2 in the model.fit() function and you will see the results of the metaheuristic training only.
Hi @thieu1995
So if I got you right: if I set verbose=1, my model could run an infinite number of times,
but with verbose=2 it will run for the set number of epochs, right??
Hi @Akwasi-Richie,
Have you watched the tutorial yet? Have you tried running a traditional neural network (no hyper-parameter optimization needed, just choose a random configuration)?
Do you know the difference between the epochs (generations/iterations) of a neural network and those of a metaheuristic algorithm?
The verbose keyword belongs to the Keras model (https://keras.io/api/models/model_training_apis/). Please read the verbose explanation; it has nothing to do with the number of times your model will run.
Basically, when you run a traditional neural network without tuning hyper-parameters, it trains 1 neural network architecture.
So it will run until it reaches the epoch count you set in this line of code:
history_object = model.fit(xTrain, yTrain, batch_size=10, validation_split=0.3, epochs=2)
But when you run hyper-parameter tuning using metaheuristics as above, it trains N*Gmax neural network architectures.
Here, N is the population size of your metaheuristic algorithm and Gmax is the maximum number of generations (epochs/iterations). It is the epoch (1000) you set in this line of code:
GWO_model = GWO.BaseGWO(problem, epoch=1000, pop_size=50)
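The N*Gmax count makes the runtime concern in this thread concrete. With the settings quoted above:

```python
# Total number of neural networks trained during hyper-parameter tuning:
# one model.fit() call per candidate solution per generation.
pop_size = 50     # N: population size of the metaheuristic
epoch = 1000      # Gmax: generations of the metaheuristic

total_networks = pop_size * epoch
print(total_networks)  # 50000 separate trainings, each running its own 2 Keras epochs
```

At 30 minutes per metaheuristic epoch, 1000 epochs is clearly impractical, which is why the early-stopping and parallelism suggestions below matter.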
Hi @thieu1995
I get you now. The library works and the code runs, which is a good thing. It was just the execution time that was too long (30 minutes per epoch).
I got it to run using my GPU, and the time increased to an hour and 50 minutes.
I will try setting up my TensorFlow and Keras to run faster and start again. Thank you for your assistance.
I will also be looking forward to hybrid metaheuristics in the mealpy library: GWO and WOA, GWO and PSO kinds of hybrids.
Cheers, buddy
In the Keras part of the code, try to run it on a GPU (https://keras.io/guides/distributed_training/).
And use the early stopping method (https://keras.io/api/callbacks/early_stopping/) to stop your model early and avoid wasting time.
In the mealpy part, you can try running with multi-processing (the default mode is currently sequential). Of course, the results of different modes may vary, as noted in the table in the ReadMe.md file. The code should be:
model.solve(mode='process')  # or mode='thread' --> using multi-threading
Using multi-threading or multi-processing should speed up your epoch runtime.
Again, just like Keras, you can also use the early stopping method in mealpy:
from mealpy.utils.termination import Termination
....
ter_dict = {
    "mode": "ES",
    "quantity": 30,  # after 30 epochs, if the global best doesn't improve then we stop the program
}
....
if __name__ == "__main__":
    ....
    model7 = SMA.BaseSMA(problem_dict1, epoch=100, pop_size=50, pr=0.03, termination=ter_dict)
    model7.solve(mode='process')
    ....
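The "ES" (early stopping) rule shown above can be sketched in plain Python to make the stopping condition concrete. This is an illustration of the idea, not mealpy's internal code, and it assumes a minimization problem:

```python
def should_stop(best_history, patience=30):
    """Return True if the global best has not improved for `patience` epochs.

    best_history: list of the best fitness recorded after each epoch.
    Assumes minimization, so "improved" means "decreased".
    """
    if len(best_history) <= patience:
        return False
    recent = best_history[-patience:]
    # No improvement if nothing in the last `patience` epochs beat the
    # best value recorded before that window.
    return min(recent) >= min(best_history[:-patience])

print(should_stop([5.0, 4.0, 3.0], patience=30))     # False: too few epochs yet
print(should_stop([5.0] + [4.0] * 31, patience=30))  # True: 31 epochs with no improvement
```

With "quantity": 30, a 1000-epoch GWO run that plateaus early would terminate after roughly 30 stagnant generations instead of running to completion.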
For the hybrid algorithms, as I said before:
My goal for this library is to implement all of the original nature-inspired algorithms. There are around 300 original algorithms, and I've done around 90, so I still need to finish more than 200 algorithms before thinking of anything else.
If you have the paper for a hybrid algorithm, you can leave it here; I may spend a little bit of time implementing it.
Otherwise, I will not spend my time on hybrid algorithms.