
Comments (16)

thieu1995 avatar thieu1995 commented on May 18, 2024

Hi @Akwasi-Richie ,

Could you please send your code here, or an image of your error? I don't really know what is causing it. Maybe it is not from my library.

from mealpy.

Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995
I have attached my objective function and the error message. Pardon me, but my coding skills are not top notch and I may have written my objective function wrongly.
obj_function
Error


thieu1995 avatar thieu1995 commented on May 18, 2024

Hi @Akwasi-Richie ,
You are so close.
Choose which of the strategies below you want to use to select the best optimizer. I usually combine both the training and validation sets (case 5 or 6).

In your objective function, you need to return a real value or a list of real values, not the Keras model.
def objective_function(solution):
	....

	model.compile(....)

	history_object = model.fit(xTrain, yTrain, batch_size=10, validation_split=0.3, epochs=2)

	# Pick one strategy below, depending on whether you want to select the best
	# optimizer based on training metrics, validation metrics, or both.
	# Note: model.fit() returns a History object; the metrics live in its .history dict.

	# 1. Using accuracy of training only
	return history_object.history["accuracy"]		# Your problem is a "maximum" problem

	# 2. Using loss of training only
	return history_object.history["loss"]			# Your problem is a "minimum" problem ==> redefine the problem dictionary above

	# 3. Using accuracy of validation only
	return history_object.history["val_accuracy"]

	# 4. Using loss of validation set only
	return history_object.history["val_loss"]

	# 5. Using accuracy of both training and validation sets, with a higher weight for validation.
	# ==> Redefine the problem dictionary above with the keyword "obj_weight": [0.3, 0.7]
	return [history_object.history["accuracy"], history_object.history["val_accuracy"]]

	# 6. Using loss of both training and validation sets
	return [history_object.history["loss"], history_object.history["val_loss"]]

Remove the last line of your code: "history = model.fit...".

The problem you are trying to solve is finding the best optimizer for this neural network.
So the solution is the best optimizer, after you decode it the way you did in the objective_function:

best_optimizer = Optimizer_Encoder.inverse_transform([int(GWO_Model.solution[0][0])])[0]
print(f"{best_optimizer}")
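To make the decoding step concrete, here is a minimal, hypothetical sketch. The `Optimizer_Encoder` above is presumably a scikit-learn `LabelEncoder`; a plain list of assumed optimizer names stands in for it here so the mapping is explicit and runnable:

```python
# Hypothetical stand-in for the thread's Optimizer_Encoder:
# a plain list of (assumed) optimizer names.
OPTIMIZERS = ["Adam", "RMSprop", "SGD", "Adadelta"]  # assumed search space

def decode_optimizer(solution):
    # The first dimension of the solution vector is continuous;
    # truncating it to int gives the index of the chosen optimizer.
    idx = int(solution[0])
    return OPTIMIZERS[idx]

print(decode_optimizer([2.7, 0.1]))  # SGD
```

The metaheuristic searches over continuous values, so the same truncation must be applied both inside the objective function and when decoding the final solution.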


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Thank you for your assistance @thieu1995

will do as said and give you feedback when done.


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995
I modified the objective function as you directed but ran into a whole new error. I have attached the new objective function and the error message for your perusal.
obj_func
Value_Error


thieu1995 avatar thieu1995 commented on May 18, 2024

Hi @Akwasi-Richie ,

My bad, I forgot that the history object returns a list of values, one per epoch. So you should return the last element of each metric list, i.e. the value from the last epoch. For example:

def objective_function(solution):
	....
	history_object = model.fit(xTrain, yTrain, batch_size=10, validation_split=0.3, epochs=2)

	# 1. Using accuracy of training only
	return history_object.history["accuracy"][-1]

	# 2. Using loss of training only
	return history_object.history["loss"][-1]

	# 3. Using accuracy of validation only
	return history_object.history["val_accuracy"][-1]

	# 4. Using loss of validation set only
	return history_object.history["val_loss"][-1]

	# 5. Using accuracy of both training and validation sets, with a higher weight for validation.
	return [history_object.history["accuracy"][-1], history_object.history["val_accuracy"][-1]]

	# 6. Using loss of both training and validation sets
	return [history_object.history["loss"][-1], history_object.history["val_loss"][-1]]
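To see why the `[-1]` indexing is needed: `model.fit()` returns a History object whose `.history` dict maps each metric to one value per epoch. A minimal sketch with made-up numbers:

```python
# Made-up dict shaped like Keras' History.history:
# each metric maps to a list with one value per epoch (2 epochs here).
history = {
    "loss":         [0.92, 0.61],
    "accuracy":     [0.55, 0.71],
    "val_loss":     [0.88, 0.64],
    "val_accuracy": [0.58, 0.69],
}

def fitness_from_history(history):
    # Strategy 5 above: last-epoch training and validation accuracy.
    return [history["accuracy"][-1], history["val_accuracy"][-1]]

print(fitness_from_history(history))  # [0.71, 0.69]
```

Returning the whole list instead of the last element is what triggered the ValueError: the optimizer expects one fitness value per objective, not one per epoch.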


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995
Noted and really appreciate your patience.


thieu1995 avatar thieu1995 commented on May 18, 2024

Hi @Akwasi-Richie ,
Is your code working now? If not, I made another video that uses mealpy to optimize hyper-parameters for a neural network.
You can find the video link in the README.md.


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995,
I have yet to apply the last input you made; we had a power outage, but it has been resolved now. I will run the code and get back to you.

As for the videos, I was notified by YouTube when you posted, but I have yet to watch them. I will do that this evening as well.

Thank you


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995.

The code is running now, but I am a bit puzzled: I set my epoch to 2, yet it is on the 4th run now. I have attached a snip of the output.
GWO_OUTPUT


thieu1995 avatar thieu1995 commented on May 18, 2024

Hi @Akwasi-Richie ,
There are 2 types of epochs in this program. The first is for the neural network itself (look at the model.fit() function) and the second is for the metaheuristic algorithm.

Like I said in the video, for a hyper-parameter optimization problem, each time a new solution is created, a new neural network is trained. That is why you got the 4th run.

Set verbose=1 in the model.fit() function and you will see that each network runs for 2 epochs only, then another network is created and trained for 2 epochs, and so on.

Set verbose=0 in the model.fit() function and you will see the results of the metaheuristic training only.


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995
So if I got you right, if I set verbose=1, then my model could run an infinite number of times,

but verbose=2 will run for the set epochs, right?


thieu1995 avatar thieu1995 commented on May 18, 2024

Hi @Akwasi-Richie,

Have you watched the tutorial yet? Have you tried to run a traditional neural network (no hyper-parameter optimization, just a randomly chosen configuration)?
Do you know the difference between the epochs (generations / iterations) of the neural network and those of the metaheuristic algorithm?

The verbose keyword belongs to the Keras model (https://keras.io/api/models/model_training_apis/). Please read the verbose explanation. It has nothing to do with the number of times your model will run.

Basically, when you run a traditional neural network without tuning hyper-parameters, it trains 1 neural network architecture.
So it runs until it reaches the epoch count you set in this line of code:
history_object = model.fit(xTrain, yTrain, batch_size=10, validation_split=0.3, epochs=2)

But when you run hyper-parameter tuning using metaheuristics like above, it trains N*Gmax neural network architectures.
Here, N is the population size of your metaheuristic algorithm and Gmax is the maximum number of generations (epochs/iterations). That is the epoch (1000) you set in this line of code:
GWO_model = GWO.BaseGWO(problem, epoch=1000, pop_size=50)
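The N*Gmax count can be checked with simple arithmetic, using the numbers from this thread (pop_size=50, epoch=1000, and epochs=2 inside model.fit):

```python
# Each candidate solution in each generation trains one fresh network.
pop_size = 50            # N: population size of the metaheuristic
max_generations = 1000   # Gmax: metaheuristic epochs (the epoch=1000 above)
keras_epochs = 2         # inner epochs of each individual network (epochs=2 above)

networks_trained = pop_size * max_generations
total_keras_epochs = networks_trained * keras_epochs
print(networks_trained, total_keras_epochs)  # 50000 100000
```

So with these settings, 50,000 separate networks are trained, for 100,000 inner Keras epochs in total; this is why each metaheuristic generation feels so slow.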


Akwasi-Richie avatar Akwasi-Richie commented on May 18, 2024

Hi @thieu1995

I get you now. The library works and the code runs, and that is a good thing. It was just the execution time that was too long (30 mins per epoch).
I got it to run on my GPU, and the time increased to an hour and 50 mins.

I will try setting up my TensorFlow and Keras to run faster and start again. Thank you for your assistance.

I will also be looking forward to hybrid metaheuristics in the mealpy library: GWO and WOA, GWO and PSO kinds of hybrids.
Cheers, buddy.


thieu1995 avatar thieu1995 commented on May 18, 2024

@Akwasi-Richie ,

In the Keras part, try running it on a GPU (https://keras.io/guides/distributed_training/).
And use the Early Stopping method (https://keras.io/api/callbacks/early_stopping/) to stop your model early and avoid wasting time.

In the mealpy part, you can try running with multi-processing (the default mode is currently sequential). Of course, the results of different modes may vary, as noted in the table in the ReadMe.md file. The code should be:
model.solve(mode='process')  # or mode='thread' --> using multi-threading

Using multi-threading or multi-processing should speed up your epoch runtime.
Again, just like in Keras, you can also use the early stopping method in mealpy:

from mealpy.utils.termination import Termination
....

ter_dict = {
    "mode": "ES",
    "quantity": 30  # after 30 epochs, if the global best doesn't improve, then we stop the program
}
....

if __name__ == "__main__":
    ....
    model7 = SMA.BaseSMA(problem_dict1, epoch=100, pop_size=50, pr=0.03, termination=ter_dict)
    model7.solve(mode='process')
    ....


thieu1995 avatar thieu1995 commented on May 18, 2024

For the hybrid algorithms, as I said before:
My goal for this library is to implement all of the original nature-inspired algorithms. There are around 300 original algorithms, and I have done around 90. I still need to finish more than 200 algorithms before thinking of anything else.
If you have a paper on a hybrid algorithm, you can leave it here; I may spend a little time implementing it.
Otherwise, I will not spend my time on hybrid algorithms.

