
State-of-the-art Framework 🏗 for Cloud Computing ⛅️ Simulation: a modern, full-featured, easier-to-use, highly extensible 🧩, faster 🚀 and more accurate ☕️ Java 17+ tool for cloud computing research 🎓. Examples: https://github.com/cloudsimplus/cloudsimplus-examples

Home Page: https://cloudsimplus.org

License: GNU General Public License v3.0


1. Overview


CloudSim Plus is a modern, up-to-date, full-featured and fully documented Java 17 simulation framework. It's easy to use and extend, enabling modeling, simulation, and experimentation of Cloud computing infrastructures and application services. It allows developers to focus on the specific system design issues to be investigated, without being concerned with the low-level details of Cloud-based infrastructures and services.

CloudSim Plus is a fork of CloudSim 3, re-engineered primarily to avoid code duplication, promote code reuse and ensure compliance with software engineering principles and recommendations, improving extensibility and accuracy. It is currently the state-of-the-art cloud computing simulation framework.

The efforts dedicated to this project have been recognized by the EU/Brasil Cloud FORUM. A post about CloudSim Plus is available at this page of the Forum, including a White Paper available in the Publications Section.

CloudSim Plus started through a partnership between the Instituto de Telecomunicações (IT, Portugal), the Universidade da Beira Interior (UBI, Portugal) and the Instituto Federal de Educação Ciência e Tecnologia do Tocantins (IFTO, Brazil). It was partially supported by the Portuguese Fundação para a Ciência e a Tecnologia (FCT) and by the Brazilian foundation Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).

Note If you are using CloudSim Plus in your research, please make sure you cite this paper: M. C. Silva Filho, R. L. Oliveira, C. C. Monteiro, P. R. M. Inácio, and M. M. Freire. CloudSim Plus: a Cloud Computing Simulation Framework Pursuing Software Engineering Principles for Improved Modularity, Extensibility and Correctness, in IFIP/IEEE International Symposium on Integrated Network Management, 2017, p. 7.

IMPORTANT

  • Developing and maintaining this project takes a huge effort. Therefore, any kind of contribution is encouraged. Show your support by giving it a star ⭐ using the button at the top of the GitHub page. It takes no time, helps promote the project and keeps it evolving. Thanks in advance 👏.
  • If you don't intend to make changes and contribute back to the project, you shouldn't fork it. Your fork becomes obsolete as the project is updated.
  • If you want to use the framework to develop your own project on top of it, creating a fork is the wrong approach. You aren't supposed to change the framework code to implement your project, but to extend it by creating subclasses. Unless you are planning to contribute your changes back, you'll end up with an incompatible and obsolete version of the framework. The project is constantly evolving and bugfixes are a priority. Your fork with personal changes will miss those updates and performance improvements.

⬆️

2. Main Exclusive Features 🧰

CloudSim Plus provides lots of exclusive features, from the most basic ones to build simple simulations, to advanced features for simulating more realistic cloud scenarios:

  1. It is easier to use. A complete and easy-to-understand simulation scenario can be built in a few lines of code.
  2. Multi-cloud simulations with inter-datacenter VM migrations (#361).
  3. Creation of joint power- and network-aware simulations (#45).
  4. Vertical (#7) and Horizontal VM scaling (#41).
  5. Highly accurate power usage computation (#153).
  6. Built-in computation of CPU utilization history and energy consumption for VMs (and Hosts) (#168).
  7. Virtual Memory and Reduced bandwidth allocation when RAM and BW are oversubscribed. (#170).
  8. Automatically power Hosts on and off according to demand (#128) and support defining a startup and shutdown delay/power consumption (#238).
  9. Parallel execution of simulations in multi-core computers, allowing multiple simulations to be run simultaneously in an isolated way (#38).
  10. Delayed creation of submitted VMs and Cloudlets, enabling simulation of the dynamic arrival of tasks (#11, #23).
  11. Dynamic creation of VMs and Cloudlets at runtime, enabling VMs to be created on-demand (#43).
  12. Listeners to enable simulation configuration, monitoring and data collection.
  13. Create simulations from Google Cluster Data trace files. (#149).
  14. Strongly object-oriented, allowing chained calls such as cloudlet.getVm().getHost().getDatacenter() without even worrying about NullPointerException (#10).
  15. Classes and interfaces for implementing heuristics such as Tabu Search, Simulated Annealing, Ant Colony Systems and so on (example here).
  16. Implementation of the Completely Fair Scheduler used in recent versions of the Linux Kernel (example here) (#58).
  17. Host Fault Injection and Recovery Mechanism to enable injection of random failures into Hosts CPU cores and replication of failed VMs (#81).
  18. Creation of Hosts at Simulation Runtime to enable physical expansion of Datacenter capacity (#124).
  19. Enables the simulation to keep running, waiting for dynamic and even random events such as the arrival of Cloudlets and VMs (#130).
  20. TableBuilder objects that are used in all examples and enable printing simulation results in different formats such as Markdown Table, CSV or HTML.
  21. Colored log messages and filtering of the level of messages to print (#24). If you want to see only messages from the warning level up, call Log.setLevel(ch.qos.logback.classic.Level.WARN);
  22. Enables running the simulation synchronously, making it easier to interact with it and collect data inside a loop, as the simulation goes on. This brings freedom to implement your simulations (#205).
  23. Allows placing a group of VMs into the same Host. (#90).
  24. Enables Broker to try selecting the closest Datacenter to place VMs, according to their time zone. (#212).
  25. Non-Live VM migration from/to public-cloud datacenters (#437).
  26. Support VM startup/shutdown delay and boot overhead (#435)
  27. It outperforms CloudSim 4, as can be seen here.

3. Project's Structure 🏗

CloudSim Plus has a simpler structure that makes it easier to use and understand. It consists of 4 modules, 2 of which are new, as presented below.

CloudSim Plus Modules

  • cloudsimplus (this module): the CloudSim Plus cloud simulation framework API, which is used by all other modules. It is the main and only required module you need to write cloud simulations.
  • cloudsimplus-examples: includes a series of different examples, from minimal simulation scenarios using basic CloudSim Plus features to complex scenarios using workloads from trace files or VM migration examples. This is an excellent starting point for learning how to build cloud simulations using CloudSim Plus.
  • cloudsimplus-testbeds: enables implementation of simulation testbeds in a repeatable manner, allowing a researcher to execute several simulation runs for a given experiment and collect statistical data using a scientific approach.
  • cloudsimplus-benchmarks: a new module used just internally to implement micro benchmarks to assess framework performance.

It also has a better package organization, improving Separation of Concerns (SoC) and making it easy to know where a desired class is and what is inside each package. The figure below presents the new package organization. The dark yellow packages are new in CloudSim Plus and include its exclusive interfaces and classes. The light yellow ones were introduced just to better organize existing CloudSim classes and interfaces.

CloudSim Plus Packages

⬆️

4. Project Requirements

CloudSim Plus is a Java 17 project that uses Maven for build and dependency management. To build and run the project, you need JDK 17+ installed and an up-to-date version of Maven (such as 3.8.6+). Maven usually comes bundled with your IDE; you only need to install it on your operating system if that bundled version is out of date or you want to build the project from the command line. All project dependencies are downloaded automatically by Maven.

5. How to Use CloudSim Plus 👩🏽‍💻

Warning Before trying to use this project, make sure you have JDK 17 installed.

There are 2 ways to use CloudSim Plus:

  • creating your own project and adding CloudSim Plus as a dependency. This way, it will be downloaded directly from Maven Central.
  • downloading the cloudsimplus-examples project and following the instructions there.

Check the sections below if you want to add CloudSim Plus as a dependency to your own Maven or Gradle project. This way you can start building your simulations from scratch.

5.1 Maven

Add the following dependency into the pom.xml file of your own Maven project.

<dependency>
    <groupId>org.cloudsimplus</groupId>
    <artifactId>cloudsimplus</artifactId>
    <!-- Set a specific version or use the latest one -->
    <version>LATEST</version>
</dependency>

5.2 Gradle

Add the following dependency into the build.gradle file of your own Gradle project.

dependencies {
    //Set a specific version or use the latest one
    implementation 'org.cloudsimplus:cloudsimplus:LATEST'
}

⬆️

6. Building CloudSim Plus

CloudSim Plus is a Maven project. The previous section showed that you don't need to download the project sources to understand how the project works or to create your own experiments or tools on top of CloudSim Plus. You can just download the examples project and start your experiments or a new simulation framework from there. Anyway, if you want to build CloudSim Plus, there are two ways:

6.1 Using some IDE

Open the project in your favorite IDE and click the build button: that is it.

6.2 Using a terminal

Open a terminal at the project root directory and type one of the following commands:

on Linux/macOS

./mvnw clean install

on Windows

mvnw.cmd clean install

7. A Minimal but Complete Simulation Example ⚙️

In order to build a simulation scenario, you have to create at least:

  • a datacenter with a list of physical machines (Hosts);
  • a broker that allows submission of VMs and Cloudlets to be executed, on behalf of a given customer, into the cloud infrastructure;
  • a list of customer's virtual machines (VMs);
  • and a list of customer's cloudlets (objects that model resource requirements of different applications).

Due to the simplicity provided by CloudSim Plus, all the code to create a minimal simulation scenario can be as simple as the one presented below. A more complete and reusable example is available here, together with other ones in the cloudsimplus-examples repository.

//Enables just some level of logging.
//Make sure to import org.cloudsimplus.util.Log;
//Log.setLevel(ch.qos.logback.classic.Level.WARN);

//Creates a CloudSimPlus object to initialize the simulation.
var simulation = new CloudSimPlus();

//Creates a Broker that will act on behalf of a cloud user (customer).
var broker0 = new DatacenterBrokerSimple(simulation);

//Host configuration
long ram = 10000; //in Megabytes
long storage = 100000; //in Megabytes
long bw = 100000; //in Megabits/s
        
//Creates one host with a specific list of CPU cores (PEs).
//Uses a PeProvisionerSimple by default to provision PEs for VMs
//Uses ResourceProvisionerSimple by default for RAM and BW provisioning
//Uses VmSchedulerSpaceShared by default for VM scheduling
var host0 = new HostSimple(ram, bw, storage, List.of(new PeSimple(20000)));

//Creates a Datacenter with a list of Hosts.
//Uses a VmAllocationPolicySimple by default to allocate VMs
var dc0 = new DatacenterSimple(simulation, List.of(host0));

//Creates one VM with one CPU core to run applications.
//Uses a CloudletSchedulerTimeShared by default to schedule Cloudlets
var vm0 = new VmSimple(1000, 1);
vm0.setRam(1000).setBw(1000).setSize(1000);

//Creates Cloudlets that represent applications to be run inside a VM.
//Each Cloudlet has a length of 10000 Million Instructions (MI) and requires 1 CPU core.
//The UtilizationModel defines that the Cloudlets use only 50% of any resource all the time.
var utilizationModel = new UtilizationModelDynamic(0.5);
var cloudlet0 = new CloudletSimple(10000, 1, utilizationModel);
var cloudlet1 = new CloudletSimple(10000, 1, utilizationModel);
var cloudletList = List.of(cloudlet0, cloudlet1);

broker0.submitVmList(List.of(vm0));
broker0.submitCloudletList(cloudletList);

/*Starts the simulation and waits for all cloudlets to be executed, automatically
stopping when there are no more events to process.*/
simulation.start();

/*Prints the results when the simulation is over
(you can use your own code here to print what you want from this cloudlet list).*/
new CloudletsTableBuilder(broker0.getCloudletFinishedList()).build();
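The results can also be exported in other formats supported by the TableBuilder classes (see feature 20 above). A minimal sketch, assuming a CsvTable class is available in the tables package of the version you are using (check the current API):

//Prints the same results as CSV instead of the default text table.
//CsvTable is assumed to live alongside CloudletsTableBuilder in the tables package.
new CloudletsTableBuilder(broker0.getCloudletFinishedList(), new CsvTable()).build();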

The results are presented in a structured and clear way to allow better understanding. For example, the image below shows the output for a simulation with two cloudlets (applications).

Simulation Results

7.1 Comparison with CloudSim

A complete, side-by-side comparison between CloudSim and CloudSim Plus Java simulation scenarios is available here.

⬆️

8. Documentation and Help 📘🆘

The project documentation, originally inherited from CloudSim, was entirely updated and extended. You can see the javadoc documentation for classes and their elements directly in your IDE.

The documentation is available online at ReadTheDocs, which includes a FAQ and guides. CloudSim Plus has extended documentation of classes and interfaces and also includes extremely helpful package documentation that can be viewed directly in your IDE or at the link provided above. Such package documentation gives a general overview of the classes used to build a cloud simulation. Also, check the publications section to access published CloudSim Plus papers.

A Google Group forum is available at https://groups.google.com/group/cloudsimplus and you can also use the Discussions page here.

⬆️

9. Consulting and Professional Support 👨🏽‍🏫

If you are doing research on cloud computing simulation and facing challenging issues, I've started to offer my consulting services.

I can help you with different kinds of issues and provide specific features for your simulations, including resource allocation, task scheduling, VM placement and migration, metrics computation, process automation, debugging, results analysis, validation and more.

If you have a CloudSim project and want to migrate to CloudSim Plus to benefit from its extensive documentation, active development and support, exclusive features, great accuracy and performance, the consulting may be a fit for you too.

Get the contact e-mail here.

10. General Features of the Framework 🛠

CloudSim Plus supports modeling and simulation of:

  • large scale Cloud computing data centers;
  • virtualized server hosts, with customizable policies for provisioning host resources to virtual machines;
  • data center network topologies and message-passing applications;
  • federated clouds;
  • user-defined policies for allocation of hosts to virtual machines and policies for allocation of host resources to virtual machines.

⬆️

11. CloudSim Plus Publications 📝

  1. M. C. Silva Filho, R. L. Oliveira, C. C. Monteiro, P. R. M. Inácio, and M. M. Freire. CloudSim Plus: a Cloud Computing Simulation Framework Pursuing Software Engineering Principles for Improved Modularity, Extensibility and Correctness, in IFIP/IEEE International Symposium on Integrated Network Management, 2017, p. 7. If you are using CloudSim Plus in your research, please make sure you cite that paper. You can check the paper presentation here.
  2. White Paper. CloudSim Plus: A Modern Java 17+ Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services. 2016.
  3. R. L. Oliveira. Virtual Machine Allocation in Cloud Computing Environments based on Service Level Agreements (only in Portuguese). Master's Dissertation. University of Beira Interior, 2017 (Supervisor: M. M. Freire).

⬆️

12. Related Projects 🧩

Below is a list of some projects based on CloudSim Plus, which rely on its accuracy, performance, maintainability and extensibility. If you want your project to be listed here, send us a Pull Request. Make sure your project has a descriptive README.

  1. CloudSim Plus Py4j gateway: building CloudSim Plus simulations in Python
  2. PySDNSim: a Python simulation tool for microservice-based SDN using CloudSim Plus as the underlying framework.
  3. RECAP Discrete Event Simulation Framework: an extension for CloudSimPlus
  4. CloudSim Plus Automation: defining CloudSim Plus simulation scenarios into a YAML file.
  5. LEAF: Simulator for modeling Large Energy-Aware Fog computing environments.
  6. EPCSAC: Extensible Platform for Cloud Scheduling Algorithm Comparison.
  7. SatEdgeSim: A Toolkit for Modeling and Simulation of Performance Evaluation in Satellite Edge Computing Environments.

⬆️

13. License ⚖️

This project is licensed under GNU GPLv3, as defined inside CloudSim 3 source files.

⬆️

14. Contributing 🤝

You are welcome to contribute to the project. However, make sure you read the contribution guide before starting. The guide provides information on the different ways you can contribute, such as by requesting a feature, reporting an issue, fixing a bug or providing some new feature.

⬆️


cloudsimplus's Issues

Implement new object-oriented, polymorphic and type-safe message passing mechanism

IMPROVEMENT

CloudSim Plus is a discrete event simulation framework that relies on tagged message passing to notify SimEntity objects when specific events happen.

Each SimEvent is tagged with a constant usually defined in the CloudSimTags class, and the processEvent method on every SimEntity receives the messages and processes the desired ones.

Besides the tag attribute that classifies a message, the SimEvent class has a data attribute that is defined as Object and can store any possible value from any class. Inside the processEvent method, a switch statement over the tag attribute is used to find out what type of event was received. From the tag, one has to guess what type of object is inside the data attribute, requiring an unchecked typecast that may generate a runtime ClassCastException. There is no way to enforce that the data of an event with a given tag is of a required type. Further, when creating or extending a SimEntity, it is difficult to know what kind of object is supposed to be passed as the data attribute of a SimEvent.

Issue #10 drastically improved the message passing mechanism, which usually passed an array of int as the SimEvent.data instead of an actual object such as a Host, Vm or Cloudlet. Using int arrays makes it even more difficult to know which actual object a specific ID in such an array refers to. There is no way to know that without searching throughout the code.

Issue #10 tried to reduce such effort by documenting what type of data a SimEvent object is expected to have when it is tagged with a specific value of the tag attribute. Although this makes it easier to understand and extend the code, it doesn't solve the issues mentioned before.

This design also violates the Open/Closed Principle (OCP): it is not polymorphic and makes the processEvent code large and confusing. Every time a new tag is added, the SimEntity.processEvent method has to be changed to process the new tag.

Detailed information about how the feature should work

Instead of a general processEvent(SimEvent evt), we could use specialized SimEvent child classes such as CloudletEvent, VmEvent, HostEvent, DatacenterBrokerEvent and DatacenterEvent.
The general method would then just call an overloaded version such as processEvent(VmEvent evt) or processEvent(HostEvent evt). This way, the correct version can be called directly, there is no need for switch statements to check the type of the event, and events won't need to be tagged anymore. The CloudSimTags class may be removed. Since objects such as brokers can have different events, the classes can be specialized even further with, for instance, a BrokerShutDownEvent.
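The sketch below illustrates the intended dispatch style. All types here are hypothetical simplifications created only to illustrate the idea, not the current CloudSim Plus API:

//Illustrative sketch only: hypothetical, simplified event types.
interface SimEvent { }
record VmEvent(Vm vm) implements SimEvent { }
record HostEvent(Host host) implements SimEvent { }

class ExampleEntity {
    //The generic method just dispatches to type-safe overloads,
    //removing the switch over tags and the unchecked casts of the data attribute.
    void processEvent(SimEvent evt) {
        if (evt instanceof VmEvent vmEvt) processEvent(vmEvt);
        else if (evt instanceof HostEvent hostEvt) processEvent(hostEvt);
    }
    void processEvent(VmEvent evt)   { /* create or destroy evt.vm() */ }
    void processEvent(HostEvent evt) { /* power evt.host() on or off */ }
}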

An example scenario where this feature should be used

Final users can benefit from this improvement by being allowed to register for notifications about any event that may happen during simulation execution, in order to improve simulation monitoring. Currently, the introduced Listeners allow that, but not for every possible event.

A brief explanation of why you think this feature is useful

It will make it easier and safer to understand and extend the framework, and will make the code compile-time safe, avoiding typecasts that may generate runtime exceptions. It will provide an improved design and less error-prone code.

Allow submission of VMs with a specific delay

FEATURE:

Create an overloaded version of the DatacenterBroker.submitVms method that receives a delay parameter in order to postpone the creation of the submitted VMs inside some Host.

Detailed information about how the feature should work

A Delayable interface was introduced and now the Cloudlet and Vm interfaces extend it.
The interface provides getSubmissionDelay and setSubmissionDelay methods that are used by a DatacenterBroker when a list of VMs or Cloudlets is submitted with a specific delay.
In this case, the broker will set the given delay on each VM or Cloudlet in the list.

The delay of VMs and Cloudlets can also be defined individually by calling setSubmissionDelay on each object. In that case, using the regular submission method, the delay will not be changed and the request to create the objects will follow the delay defined in each one.
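A brief usage sketch of the per-object delay (the objects and values are arbitrary; setSubmissionDelay is the method described above, and vm0, cloudlet0 and broker0 are created as in the minimal example in this README):

//Sketch: postpone VM/Cloudlet creation using the submission delay (values are arbitrary).
vm0.setSubmissionDelay(5);        //request vm0 creation only at simulation time 5
cloudlet0.setSubmissionDelay(10); //submit cloudlet0 to its VM only at time 10
broker0.submitVmList(List.of(vm0));
broker0.submitCloudletList(List.of(cloudlet0));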

Related Issues

Allow child CloudletSchedulerTimeShared classes to apply a preemption mechanism to perform context switch between Cloudlets sharing the same PEs

IMPROVEMENT

The current CloudletSchedulerTimeShared class (inherited from CloudSim) provides an oversimplified time-shared scheduler which considers that every submitted Cloudlet will be immediately added to the execution list, even if the number of Cloudlets is greater than the number of existing PEs.

This is an oversimplification inherited from CloudSim that has a minor effect on how Cloudlets are executed, considering that context switch overhead is not taken into account and would be challenging to implement.

The current CloudletSchedulerTimeShared implementation makes all submitted Cloudlets execute simultaneously, regardless of the number of PEs. This way, if there is 1 PE of 1000 MIPS and 2 Cloudlets requiring 1 PE each, both Cloudlets will run at the same time. However, the PE will be used simultaneously by the 2 Cloudlets, splitting its capacity between them. That gives 500 MIPS for each Cloudlet to use. This way, the scheduler always adds Cloudlets to the execution list and the waiting list is always empty.

In a real computer, it is not possible to run 2 processes on the same processor core at the same time by just splitting the core capacity between them. A CPU core can only be used by a single process at a given time. The oversimplification in CloudletSchedulerTimeShared doesn't have a great impact on the simulation; the results are just a little different from what a more realistic approach would produce. Considering that the 2 Cloudlets in the given example each have 10000 MI of length, this implementation makes both Cloudlets finish at the same time, in the 10th simulation second.

However, in a real time-shared scheduler that does not assign priorities to processes and gives the same time slice to every process, one Cloudlet will in fact finish before the other. As depicted below, Cloudlet 0 will finish in the 4th second, while Cloudlet 1 will finish only in the 5th second:

Time in seconds  | 00 | 01 | 02 | 03 | 04 | 05
Running Cloudlet | C0 | C1 | C0 | C1 | C0 | C1

The issue is that if one needs to implement a new and more realistic time-shared CloudletScheduler, such as the Completely Fair Scheduler used in the Linux Kernel, this behaviour in CloudletSchedulerTimeShared will not allow implementing a subclass that defines a different behaviour.

Detailed information about how the feature should work

Refactoring of CloudletSchedulerTimeShared and its superclass should be done to allow the scheduler implementation to define when a submitted Cloudlet has to go to the waiting list, according to the scheduler's policies.

An example scenario where this feature should be used

This enhancement will enable the implementation of more realistic schedulers, such as the already mentioned Completely Fair Scheduler, which preempts executing tasks in order to give other Cloudlets the opportunity to run.

A brief explanation of why you think this feature is useful

The CloudletSchedulerTimeShared scheduler is quite naive and a more realistic one would enable simulation of more complex and useful scenarios.

Allocation of a Host PE to a VM fails when the host uses a VmSchedulerSpaceShared and there are more VMs than PEs.

ISSUE:

Actual behaviour and expected behavior

Consider a scenario where we have 1 host with 2 PEs and 4 VMs, each requesting these 2 PEs. The paper CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services states in section 3.2 that, when using a VmSchedulerSpaceShared, one VM will execute first and, after it finishes, the other one will start executing. However, the DatacenterBroker fails to allocate a host for 2 of these VMs due to lack of available PEs. Since each of the 2 host PEs has a capacity of 1000 MIPS, even defining that each of the 4 VMs requires just 500 MIPS, the allocation of 2 VMs fails.

As stated in the paper, the allocation should work in the same way that the CloudletSchedulerSpaceShared does. For the given scenario, If:

  • the MIPS capacity requested by each VM is not higher than the capacity of available Host PEs;
  • no VM is requiring more PEs or MIPS than the total Host capacity;

then, even if there are fewer PEs than the total required by all VMs, 2 VMs should be queued to start executing after the first 2 finish.

The other scenarios presented in section 3.2 of the mentioned paper have to be tested too, to ensure they work as expected. Integration tests for these scenarios should be included as well.

If it is a broader problem or you don't know where it happens, provide a minimal simulation example that reproduces the problem

It is a broader problem related to both DatacenterBroker and VmScheduler implementations.
An example showing the problem is available here. You will see in the logs that 2 of the 4 VMs fail and just the other 2 are placed.

Specifications like the version of the project, operating system or workload file used

The problem occurs in the current development version of CloudSim Plus but it comes from prior versions of CloudSim.

Provide a functional implementation of the UtilizationModelDynamic

FEATURE:

The UtilizationModelDynamic class provides a UtilizationModel implementation that arithmetically increases resource utilization over time. However, different progressions such as geometric, logarithmic or exponential would require the creation of a lot of new classes.

The UtilizationModelDynamic should be refactored to allow setting a lambda expression that defines how the resource usage should be incremented.

Detailed information about how the feature should work

The class must be renamed to UtilizationModelDynamic and receive a lambda function as a parameter to enable the researcher to define how the increment should work, without creating a new class for that.

An example scenario where this feature should be used

It will be possible to create simulations where the usage of a given Cloudlet resource can be dynamically defined in different ways, according to the researcher's requirements. For instance, it will be possible to define a resource usage that starts increasing arithmetically but, at a given time, grows exponentially.

A brief explanation of why you think this feature is useful

This will enable simulating different and complex realistic workload models and give developers more options to create their simulations.

Examples using the new UtilizationModelDynamic
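A usage sketch of the lambda-based increment (assuming the setUtilizationUpdateFunction method this feature introduces; check the current API for the exact signature):

//Sketch: CPU usage starts at 10% and grows by 5 percentage points per simulated second.
var cpuModel = new UtilizationModelDynamic(0.1);
cpuModel.setUtilizationUpdateFunction(um -> um.getUtilization() + um.getTimeSpan() * 0.05);
var cloudlet = new CloudletSimple(10000, 1, cpuModel);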

Processing of packets sent and received by Cloudlet Tasks is just performed by a NetworkCloudletSpaceSharedScheduler

FEATURE:

Only the NetworkCloudletSpaceSharedScheduler class provides the code to send and receive packets. The NetworkCloudletTimeSharedScheduler didn't have any code to do that, so it was deleted. This way, the user is obliged to use the first mentioned scheduler and there is no way for him/her to know that intuitively. Using a different scheduler will just raise class cast runtime exceptions without the user understanding why.

Initially it was thought that network tasks had nothing to do with time- or space-sharing, just the execution tasks. But it turns out that they have everything to do with it. Even though they are network tasks, they depend on their Cloudlets executing so that packets can be sent or received.

The updateCloudletProcessing(CloudletExecutionInfo, double) method deals with all kinds of tasks according to the scheduler operation, either a time- or a space-shared scheduler, but following this approach a NetworkCloudletTimeSharedScheduler that really sends and receives network packets would also have to be provided.

The CloudletScheduler interface must be updated to enable any implementing class (such as CloudletSchedulerTimeShared or CloudletSchedulerSpaceShared) to process network packets, without requiring the creation of a specialized NetworkCloudletScheduler.

Detailed information about how the feature should work

Although it would be easier to just include a NetworkCloudletTimeSharedScheduler that really processes packets, this just causes confusion, since the public interface of Vm and NetworkVm doesn't make clear that a specific NetworkCloudletScheduler is required.

Further, providing such NetworkCloudletSchedulers just causes a proliferation of child classes and doesn't favor composition over inheritance, as discussed in #45. Thus, a composition approach should be provided to avoid creating a new child class.

A new independent class must be included and automatically instantiated in the appropriate place and moment to enable the user to use any of the existing regular CloudletSchedulers, such as CloudletSchedulerTimeShared and CloudletSchedulerSpaceShared.

An example scenario where this feature should be used

It will start favouring composition over inheritance, paving the way to provide the features in #45.

A brief explanation of why you think this feature is useful

This will make it easier and more intuitive to create network simulations. The user will not be surprised by runtime errors due to the use of an unexpected CloudletScheduler.

Final Solution

  • Created an independent TaskScheduler class and included an attribute of such a class in the CloudletScheduler. The default value for such an attribute should be a TaskScheduler.NULL object that implements the Null Object Pattern to avoid NullPointerExceptions when the attribute is not set (which will be the case when the simulation is not using the network module).

Updates

The TaskScheduler class was renamed to PacketScheduler.

Examples using this feature

InfoPacket, NetworkPacket and HostPacket must implement a common interface

IMPROVEMENT

Actual behaviour and expected behaviour

Classes InfoPacket, HostPacket and NetworkPacket have common attributes and methods that are duplicated because they don't have a common interface.

A common interface and possibly an abstract class should be included to remove the code duplication. See the already existing Packet interface.

Provide constructors with shorter parameter list for classes that implement interfaces Cloudlet, Vm, Host, Datacenter and DatacenterCharacteristics

IMPROVEMENT

Several basic classes such as the CloudletSimple have very complex constructors with lots of parameters. The CloudletSimple class constructor, for instance, has 9 parameters. Constructors with a shorter parameter list have to be introduced for these classes in order to make them easier and less confusing to instantiate.

Detailed information about how the feature should work

New constructors have to be introduced for such classes and the other constructors would be marked as deprecated for future removal.

Values of required attributes should be defined by the developer using setter methods.
Appropriate default values should be provided for such attributes. For instance, the minimum number of PEs a Cloudlet must have is 1, thus this may be the default value. Attributes of specific interfaces such as UtilizationModel may be initialised with the NULL object from these interfaces.

An example scenario where this feature should be used

The use of constructors with a shorter parameter list and setters to define the object attribute values will make the code to create such objects clearer. Examples will be simpler, since a developer reading them will see each attribute being set individually and can directly understand which value is being given to which attribute. With several parameters in the constructor, as in the case of the CloudletSimple class, it is not obvious what each value being passed means. Sometimes the developer has to resort to the constructor documentation or even its source code to know which value is being set to which parameter.

A brief explanation of why you think this feature is useful

As the number of constructor parameters increases, it becomes more difficult to call the constructor. As new attributes are introduced, new constructors with more parameters tend to be included, worsening the problem. The use of appropriate default values for attributes, such as the number of PEs a Cloudlet requires, reduces the number of setters to be called when the developer wants to use some of these default values.

This feature aligns with clean code practices, which suggest that methods should not have more than 3 parameters. Since using these simpler constructors and deprecating the other ones avoids the introduction of new constructors with more parameters, it makes code maintenance easier.

Class usage is simplified since there is no pre-defined order to call the setters that define attribute values. It is also not required that all attributes be explicitly set, since some of them may use the default value.

Related Issues

Replace Log class with the standard and widely used Logback Library

Replace the CloudSim Log class with Logback as the logging engine and SLF4J as the logging facade.

An example scenario where this feature can be used

Log messages will be printed with different colors depending on the message type, as depicted below.

log-messages-by-type

Using Logback, you can even customize the log by creating a logback.xml file inside the resource directory of your simulation's project. The default CloudSim Plus configuration is defined here.

It also enables filtering simulation logs by different levels (namely ERROR, WARN, INFO, DEBUG and TRACE), giving more fine-grained control over log messages. This way, debugging will be easier, since the developer can show just the type of messages relevant to his/her experiments.

The line of code below shows an example of how to enable a specific level of log messages. It can be called before instantiating the CloudSim object.

//Make sure to import org.cloudsimplus.util.Log;
Log.setLevel(ch.qos.logback.classic.Level.WARN);

A brief explanation of why you think this feature is useful

The developer will be able to select the kind of logs he/she wants to show, using a standard, well-known library. He/she will have more options when exporting the log and could create his/her own categories of log messages.

References

Implementation of failure models and recovery systems

FEATURE

Failures are a constant in distributed environments such as the cloud. Different types of failure can occur. Failures of physical devices can be related to network, disk, cooling systems, power supply and so on. Software failures can occur at different layers: inside physical machines they can occur at the hypervisor; inside both virtual and physical machines they can occur at the operating system (host or guest OS) and installed software or libraries. Thus, a set of classes and interfaces has to be defined to enable the modelling of failures and recovery.

A bibliographic review has to be made to determine the best-known failure model for each kind and level of failure. Works such as the WorkflowSim simulator [1] and FIM-SIM implement some failure features, but the latter doesn't seem to provide any source code.

An initial Host Failure Injection mechanism was introduced by #93, but only for Host PEs.

Detailed information about how the feature should work

Implementations of the interfaces to be introduced have to provide at least one basic failure model well known in the literature for each kind and level of failure. This way, the researcher implementing his/her own simulations will be able to set a failure model for different simulator objects, such as Datacenter, Host, Vm, Cloudlet, Network Switches, Storage devices, SANs, etc.

An example scenario where this feature should be used

Failure models can be used to introduce different failures during the simulation execution, allowing the assessment of how the algorithms being implemented by the researcher (such as VM placement and migration algorithms) react to these failures.

A brief explanation of why you think this feature is useful

Considering that no system is free of failures and that cloud providers must have recovery mechanisms to avoid large service outages, the introduction of failure models allows more realistic simulations and more accurate results.

Allow submission of VMs and Cloudlets to an already existing DatacenterBroker in runtime

FEATURE:

Such a feature will allow the creation of new VMs and Cloudlets after the simulation has already started, without requiring the creation of a new DatacenterBroker. The requirement of creating a new broker to enable dynamic creation of VMs and Cloudlets is not obvious and causes confusion, since calling the submitVmList and submitCloudletList methods of a DatacenterBroker has no effect after the simulation has started.

Detailed information about how the feature should work

The submitVmList and submitCloudletList methods of DatacenterBroker implementations only work before the simulation has started. The whole process of requesting a Datacenter to create VMs and Cloudlets is performed only in the startEntity method (inherited from SimEntity), which is executed just once, when the simulation starts.

When VMs are submitted, the DatacenterBroker has to check whether it was already started and, if so, immediately request the creation of such VMs. When Cloudlets are submitted after the DatacenterBroker has started, the broker has to check if there are VMs waiting to be created. If so, the newly submitted cloudlets have to wait in the queue. If all VMs were already created, the request to create the new cloudlets has to be sent immediately.

An example scenario where this feature should be used

This will allow the creation of VMs and Cloudlets dynamically (during simulation execution) for the same cloud customer (DatacenterBroker), allowing simulation of dynamic workloads (cloudlets arriving dynamically) and on-demand provisioning of new VMs.

A brief explanation of why you think this feature is useful

It is a long-awaited feature and will make the DatacenterBroker submission methods work as expected during simulation execution.
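A sketch of how runtime submission to the same broker could be combined with a clock tick Listener (the time check, Cloudlet parameters and AtomicBoolean guard are arbitrary illustration, reusing the objects from the minimal example in this README):

//Sketch: dynamically create and submit a new Cloudlet to the existing broker at simulation time 10.
//Requires java.util.concurrent.atomic.AtomicBoolean; the guard avoids submitting more than once.
var submitted = new AtomicBoolean(false);
simulation.addOnClockTickListener(info -> {
    if (info.getTime() >= 10 && submitted.compareAndSet(false, true)) {
        var dynamicCloudlet = new CloudletSimple(5000, 1, new UtilizationModelDynamic(0.5));
        broker0.submitCloudletList(List.of(dynamicCloudlet));
    }
});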

Included Examples

Related Issues

AppCloudlet is useless and must be removed

IMPROVEMENT

AppCloudlet class is just being used to store a list of NetworkCloudlets. It doesn't inherit from any CloudSim Plus class and has no behaviour or useful attribute except the mentioned list.

DatacenterBrokers don't even accept the submission of such an AppCloudlet, since it doesn't implement the Cloudlet interface.

This class is useless and it must be removed, updating the Network module and examples accordingly.

CloudletScheduler doesn't consider the Cloudlet UtilizationModel for CPU when updating cloudlets processing

ISSUE:

Consider a scenario with 1 Cloudlet of 10000 MI running inside a VM with 1 PE of 1000 MIPS.

Expected behavior

If the Cloudlet uses a UtilizationModelFull for CPU, it will take 10 seconds to finish. But if it uses another UtilizationModel that defines that the Cloudlet uses just 50% of the CPU capacity all the time, then it must take 20 seconds.

Actual behavior

The CloudletScheduler doesn't even consider the UtilizationModel and makes such a Cloudlet always finish in 10 seconds, whatever the UtilizationModel is.

Specifications like the version of the project, operating system or workload file used

Current development version.

Provide a functional implementation of VmAllocationPolicy to enable dynamically changing the Host selection policy for a Vm

ENHANCEMENT

The VmAllocationPolicySimple class provides a VmAllocationPolicy implementation that selects the Host with the fewest PEs in use to place a VM. The selection policy is implemented by the method boolean allocateHostForVm(Vm vm).

However, to provide a different implementation of the selection policy, the developer is required to create a new class which extends the VmAllocationPolicySimple or its abstract superclass VmAllocationPolicyAbstract.

The VmAllocationPolicyAbstract should be refactored to provide a functional implementation that enables changing the selection policy at runtime, without requiring the creation of a new class. A similar implementation was made for the DatacenterBroker, as can be seen in issues #25 and #28.

The refactoring of other VmAllocationPolicy implementations, such as PowerVmAllocationPolicyMigration, has to be assessed too.
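A hedged sketch of the intended functional style (the setter name and functional interface below are assumptions about the refactored API, shown only to illustrate the idea; simulation and hostList are assumed to be created elsewhere):

//Sketch: change the Host selection policy without subclassing, assuming a setter that
//accepts a BiFunction<VmAllocationPolicy, Vm, Optional<Host>> (the name is an assumption).
//This example policy picks the suitable Host with the most free PEs.
var allocationPolicy = new VmAllocationPolicySimple();
allocationPolicy.setFindHostForVmFunction((policy, vm) ->
    policy.getHostList().stream()
          .filter(host -> host.isSuitableForVm(vm))
          .max(Comparator.comparingLong(Host::getFreePesNumber)));
var dc = new DatacenterSimple(simulation, hostList, allocationPolicy);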

Inappropriately changing Cloudlet length in CloudletSchedulerSpaceShared

Issue

Some CloudletSchedulers, such as the CloudletSchedulerSpaceShared, inappropriately change the length of the Cloudlet, inside the cloudletResume method, to the remaining length to be executed.

The CloudletSchedulerSpaceShared.movePausedCloudletToExecList method also inappropriately changes the length of the Cloudlet.

The getter getCloudletFinishedSoFar() is supposed to provide this data, instead of changing the cloudlet length defined by the user. Further, the length is changed to the remaining length across all PEs, while the cloudlet documentation states that the length is the amount of MI to be executed by each Cloudlet PE, not the sum of lengths across all existing PEs.

Test CloudletSchedulerSpaceSharedTest.testCloudletResume_CloudletLengthNotChangedAfterResumeAndMovingToWaitList was included to confirm the issue.

What should be done is to use the getCloudletFinishedSoFar() getter instead.

If the issue is related to a specific method, provide a test case that fails in order to show the problem

New test cases have to be provided to confirm the problem.

CloudSim OnClockTickListener events are fired before all the events happening at the same time were processed

FEATURE:

The CloudSim class has an addOnClockTickListener method that allows passing a Lambda Expression representing a Listener that will be notified every time the simulation clock changes.

This feature is working fine, but considering that the simulation can generate different consecutive events for the same simulation time (that are processed one at a time), when the simulation clock changes, for instance from 1 to 2, the Listeners are immediately notified.

The issue is that if there are N more events happening at simulation time 2, the Listeners are notified before all such events are processed. This way, if the developer needs to check the simulation state inside his/her listener, he/she won't get an updated simulation state.

Detailed information about how the feature should work

The class should be refactored to check if there are several events happening at the same time and only notify the OnClockTickListeners when the last event for that time is processed.

An example scenario where this feature should be used

This feature may be used to fire some other actions, such as the creation of new VMs or Cloudlets, at a given simulation time when a specific user-defined condition is met. For instance, when the created VMs are overloaded, the creation of new VMs can be requested to balance the load.

A brief explanation of why you think this feature is useful

Such an improvement will increase the accuracy of the simulation state, enabling the Listeners defined by a researcher inside his/her simulations to get the most updated simulation state.

Convert the static methods and attributes in CloudSim class to instance methods and attributes

FEATURE:

The CloudSim class is used just in a static context, since all its methods and attributes are static. This way, the class stores data related to a particular simulation being run, which makes it impossible to run multiple simulations in parallel.

Using CloudSim instances will provide a complete isolation between simulations being executed in parallel.

The use of static methods also makes it very difficult to test classes that depend on the CloudSim class.
Mocking a static method can only be performed using libraries such as PowerMock, which introduces more complexity to test writing.

Detailed information about how the feature should work

The static elements in CloudSim class have to be converted to instance elements and the class has to provide public constructors to be used to initialise the simulation, instead of calling a static init method.

An example scenario where this feature should be used

It will allow running several simulations in parallel, preventing one simulation from interfering with the results of others.

It will also make it much easier to write unit tests for classes that depend on CloudSim, since a simple library such as EasyMock, which is already being used, can create mock objects without a large increase in test complexity.

A brief explanation of why you think this feature is useful

It will take one step further in moving away from CloudSim issues and paving the way to provide a highly extensible and maintainable framework.

It will allow writing more and meaningful tests to increase framework accuracy and provides the base for inclusion of upcoming features.

Included Examples

  • All existing examples now use this new feature. They just instantiate a CloudSim object using a no-args constructor to initialize the simulation. The most basic example is the BasicFirstExample.
  • The ParallelSimulationsExample shows how straightforward it is to run simulations in parallel using Java 8 Streams and Lambda Expressions (see the sketch below).
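A minimal sketch of that idea (scenario-building details are omitted; the point is that each run gets its own simulation instance, shown here with the CloudSimPlus class used in current versions):

//Sketch: each run gets its own simulation instance, so runs can execute in parallel.
//Requires java.util.stream.IntStream; the scenario setup is the same as in the minimal example.
IntStream.range(0, 4).parallel().forEach(run -> {
    var sim = new CloudSimPlus();
    var broker = new DatacenterBrokerSimple(sim);
    //... create the Datacenter, Hosts, VMs and Cloudlets for this run ...
    sim.start();
});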

The update interval of Datacenter doesn't work when there is just 1 cloudlet

Issue

Classes that implement the Datacenter interface have a schedulingInterval attribute that defines the time interval at which the processing of cloudlets is updated, by means of the updateCloudletProcessing() method. This interval can be increased to reduce the number of update messages you receive during the simulation run and to speed up the execution.

However, when there is just 1 cloudlet, the processing of this cloudlet is updated only after the cloudlet finishes, instead of at each defined interval.

Expected behaviour and actual behavior

If you have just 1 cloudlet with a length of 10000 MI, running in a VM with a single-core processor of 1000 MIPS, the cloudlet takes 10 seconds to run. If you set the schedulingInterval of the Datacenter to 1 second, you should receive 10 updates.

However, for this specified scenario, you will receive just one update after the cloudlet finishes.

If it is a broader problem or you don't know where it happens, provide a minimal simulation example that reproduces the problem

You can check the CloudletListenersExample2_ResourceUsageAlongTime.java, changing the NUMBER_OF_CLOUDLETS constant to 1.

Allow adding multiple listeners for the same event of simulation entities such as Vm and Cloudlet

FEATURE:

Instead of defining just a single Listener for a given event of an object such as a Cloudlet, it should be possible to add as many Listeners as desired.

Detailed information about how the feature should work

For each different event of an object, there may be a List attribute instead of just a single Listener.
The listener setter has to be changed to an add method that adds the given listener to the list. A remove method also has to be included to enable removing (unregistering) listeners.

An example scenario where this feature should be used

It will allow researchers to create Listeners for a given event that perform different tasks in different methods. For instance, one onCloudletFinishListener can be set to print results while another one may be used just to collect metrics.
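A usage sketch of that scenario (assuming add methods such as Cloudlet.addOnFinishListener introduced by this feature; exact names may differ, and the finishTimes list is just an illustration):

//Sketch: two independent listeners registered for the same Cloudlet finish event.
//Requires java.util.ArrayList.
var finishTimes = new ArrayList<Double>();
cloudlet0.addOnFinishListener(info ->
    System.out.printf("%s finished at time %.2f%n", info.getCloudlet(), info.getTime()));
cloudlet0.addOnFinishListener(info -> finishTimes.add(info.getTime())); //just collects metrics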

A brief explanation of why you think this feature is useful

Creating different Listeners allows researchers to implement multiple shorter methods for the same event, each one usually with a single responsibility.

This feature will also pave the way to use Listeners as a new compile-time type-safe message passing mechanism, as described in #47

VM placement/migration triggering schemes

FEATURE:

VM placement (initial) and VM migration triggering can be:

  1. threshold-based;
  2. event-driven;
  3. periodic;
  4. or hybrid (using a mix of these mechanisms).

Currently, VM migration is triggered just based on static or dynamic lower and upper thresholds. VM placement is only event-driven, triggered instantaneously when new VMs arrive.

New triggering schemes may be defined to enable a researcher to assess how these mechanisms improve desired goals such as energy efficiency, resource usage and SLA fulfillment.

The new schemes could allow, for instance, to define if the current VM allocation should be re-assessed when a new VM arrives. A VM arrival can be a good opportunity to optimize current VM allocation by migrating VMs.

The following additional concerns may be also defined as a parameter for migration decision making:

  • VM migration overhead;
  • VM migration interference on co-located VMs;
  • VM migration cost x new allocation benefit;
  • VM migration shuffle (defining the order to migrate VMs to reduce bandwidth usage and migration time).

References

Assess the impact to adapt CloudSim classes used in HashMaps to take advantage of Java 8 improvements

FEATURE

Change classes such as Vm, Host and Cloudlet, which are used as HashMap keys, to implement the Comparable interface in order to take advantage of the performance improvements of the HashMap implementation in Java 8.

Detailed information about how the feature should work

See the article https://tamasgyorfi.net/2016/05/01/java-8-hashmaps-keys-and-the-comparable-interface/ for more details.
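A minimal illustration of the kind of change described, using a hypothetical simplified class (the real change would apply to classes such as Vm, Host and Cloudlet):

//Illustrative sketch: a Comparable key lets Java 8 HashMap bins, which are converted to
//balanced trees under heavy collision, order entries and improve worst-case lookups.
class EntityKey implements Comparable<EntityKey> {
    private final long id;
    EntityKey(long id) { this.id = id; }
    @Override public int compareTo(EntityKey other) { return Long.compare(this.id, other.id); }
}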

An example scenario where this feature should be used

Simulations that use thousands or even hundreds of thousands of objects such as Vms, Cloudlets and Hosts, and take several minutes to run, can have their execution time drastically reduced.

A brief explanation of why you think this feature is useful

Taking advantage of the Java 8 HashMap improvements can drastically speed up large simulations.

Change the examples module to create a uber jar with all dependencies included, to enable running these examples directly from a jar without requiring any external dependencies or configurations

FEATURE:

Modify the examples module to create a new uber jar, that is, a jar including all CloudSim Plus dependencies to enable running examples from a single jar file.

Detailed information about how the feature should work

The maven shade plugin can be used to implement this feature. See https://maven.apache.org/plugins/maven-shade-plugin/

An example scenario where this feature should be used

Run CloudSim Plus examples directly from the command line, making it easier to test these simulation examples.

A brief explanation of why you think this feature is useful

It will be easier for new users to try the CloudSim Plus examples without having to open the source code in an IDE. They will be able to run the examples directly from the command line without worrying about dependencies and classpath configuration.

Using this feature

The bootstrap.sh script uses the new examples jar to run any existing example. Run the script without parameters to see the options.

Horizontal VM Scaling (Auto Up/Down Scaling)

Implement horizontal scaling of VMs by creating a VM snapshot to enable starting copies of such a VM from that snapshot when a given SLA metric is violated. This should be an auto scaling feature such as the AWS Auto Scaling.

Included Examples

Detailed information about how the feature should work

The horizontal scaling of VMs has to be performed, at least initially, considering the usage of some VM resource such as RAM, CPU or BW. As each Cloudlet has a UtilizationModel for each of these resources, to compute the total VM usage of a given resource, the utilization of all currently executing Cloudlets just has to be added up.

The horizontal auto-scaling mechanism has to be provided with a predicate that defines a lower utilization threshold (which may be composed of multiple conditions, such as utilization of CPU and RAM) and a predicate that defines an upper utilization threshold, as is done for the PowerVmAllocationPolicyMigration.

The horizontal scaling can act when dynamically created Cloudlets are submitted to the VM (see feature #43). This way, a Load Balancer defined inside the broker should use the upper utilization threshold to define when a new VM has to be created in order to balance the arriving Cloudlets between such VMs.

It has to use the lower utilization threshold to define when a VM has to be destroyed. An under-utilised VM will be destroyed only after all its Cloudlets have finished; the Load Balancer simply won't place new Cloudlets on such a VM. In the current version, if a VM doesn't have Cloudlets anymore, it is only destroyed when all VMs of the same broker finish executing.

The Load Balancer has to be provided with a Supplier that knows how to create new VMs on request. The balancer has to maintain a map with the list of additional VMs that were created to balance the load of a given VM. This way, there is no need to add a new attribute to relate a VM to another one.

When a VM is created from a snapshot, it must start with no running Cloudlets, simulating a cold boot. The Cloudlets that will execute inside such a new VM will be those dynamically created and submitted by the Load Balancer mechanism inside the Broker.
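A hedged configuration sketch of such a mechanism (class and method names follow the HorizontalVmScaling API later provided by the framework; check the current examples for exact signatures, and note that the createVm() supplier and the 70% threshold are arbitrary):

//Sketch: attach a horizontal scaling object to a VM, so an extra VM is requested on overload.
var horizontalScaling = new HorizontalVmScalingSimple();
horizontalScaling.setVmSupplier(() -> createVm()); //how to instantiate an extra VM
horizontalScaling.setOverloadPredicate(vm -> vm.getCpuPercentUtilization() > 0.7); //upper threshold
vm0.setHorizontalScaling(horizontalScaling);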

An example scenario where this feature should be used

It will enable starting new VMs in order to fulfil SLA requirements and balance workload.

A brief explanation of why you think this feature is useful

It will allow evaluating how the SLA is fulfilled under different load balancing policies.

Related Issues

Refactor Builder classes to enable instantiating objects other than the basic ones

IMPROVEMENT

Builder classes currently just instantiate the "Simple" version of Datacenter, Host, Vm, Cloudlet, VmAllocationPolicy. These builders should be refactored to enable instantiating specialised objects such as the Network or Power versions.

Detailed information about how the feature should work

The Builders may use a Supplier to enable passing a lambda that creates specific kinds of elements, such as VmSchedulers and CloudletSchedulers.
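
A hedged sketch of how a builder could receive such a Supplier is shown below, so that any specialised object (e.g. a network- or power-aware one) can be instantiated; the class and method names are illustrative placeholders, not the actual CloudSim Plus API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

/** Illustrative builder that delegates object creation to a Supplier. */
public class HostBuilderSketch<H> {
    private final Supplier<H> hostSupplier; // lambda deciding which Host subclass to create
    private final List<H> createdHosts = new ArrayList<>();

    public HostBuilderSketch(Supplier<H> hostSupplier) {
        this.hostSupplier = hostSupplier;
    }

    public HostBuilderSketch<H> create(int amount) {
        for (int i = 0; i < amount; i++) {
            createdHosts.add(hostSupplier.get()); // may build a Simple, Network or Power version
        }
        return this;
    }

    public List<H> getHosts() {
        return createdHosts;
    }
}
```

For instance, a researcher could pass a lambda that creates a network-enabled Host, and the builder would produce as many of them as requested without knowing the concrete class.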

An example scenario where this feature should be used

It will enable the builders to be used in a general way, for any kind of simulation object a researcher wants to instantiate.

A brief explanation of why you think this feature is useful

The current implementation is limited and has not been refactored since its inception.

VmSchedulerTimeShared doesn't share Host PEs between different VMs when there are fewer Host PEs than required by all VMs

ISSUE

Simulation Scenario

  • 1 Datacenter with 1 Host of 2 PEs of 1000 MIPS (totaling 2000 MIPS)
  • 2 VMs, each requiring 2 PEs of 500 MIPS (1000 MIPS per VM, totaling 4 PEs and 2000 MIPS for all VMs)

Expected behavior

Since the Host has just 2 PEs and the 2 VMs require a total of 4 PEs, using a VmSchedulerTimeShared these VMs must share the same Host PEs and the VM placement should succeed.

Actual behavior

Only one VM is placed inside the Host. The other one fails to be placed due to lack of PEs.

Specifications like the version of the project, operating system or workload file used

Current development version.

Simulation Scenario

Check the VmSchedulerTimeSharedExample.

Dynamically created cloudlets submitted to an existing broker are ignored

Issue

When you dynamically create Cloudlets during the simulation execution and submit them to an existing broker, these new Cloudlets are not executed. The cloudlets only run if you dynamically create a broker and submit the VMs and cloudlets to it.

Expected behaviour and actual behaviour

During the simulation, when new VMs and/or Cloudlets are created and submitted to an existing broker, this broker has to ensure that the cloudlets are executed until the end of the simulation.

Even when the VMs and cloudlets are created and submitted, the existing broker just ignores them and only the cloudlets created before the simulation starts are run.

If it is a wider problem or you don't know where it happens, provide a minimal simulation example that reproduces the problem

The example VmListenersExample3_DynamicVmCreation.java shows that, when a new broker is created, the dynamically created VMs and cloudlets are executed. If the code is changed to use an existing broker instead of a new one, the dynamically created VMs and cloudlets are ignored.

PeProvisioner has completely duplicated code from ResourceProvisionerAbstract and ResourceProvisionerSimple and does not provide a uniform way to allocate resources

IMPROVEMENT:

The PeProvisionerSimple class was an entire copy of previous CloudSim classes such as RamResourceProvisioner. Even after the refactoring performed by CloudSim Plus, the class still completely duplicates code from ResourceProvisionerAbstract and ResourceProvisionerSimple.

The duplicated code should be removed and the class should extend some of the existing classes. That will allow a uniform way to manage allocation of RAM, BW, Storage and Virtual PEs for a VM.

For RAM, BW and Storage, just the ResourceProvisionerSimple class is used to manage such resources. When such a provisioner is created, it is passed a Resource to be managed (an object implementing the ResourceManageable interface, such as the Ram, Bandwidth and RawStorage objects). Changing the PeProvisioner to extend some of these classes will create a single way to manage the allocation of any physical resource to VMs.

The allocatePesListForVm and allocateMipsFromHostPesToGivenVirtualPe methods of the VmSchedulerTimeShared class show that, when the allocation of PEs for a VM is requested, they try to find a set of Host PEs that can be allocated to the VM. This way, for each Host PE found, a given amount of its capacity is allocated to the VM.

The allocatePesListForVm method also has an issue: if a VM requires more than one PE, when calling the allocateMipsFromHostPesToGivenVirtualPe method it may allocate the same physical PE for all requested virtual PEs.

Further investigation must be carried out to understand how different hypervisors, such as Microsoft Hyper-V, VMware ESX, KVM or Xen, handle this kind of allocation.

The VmSchedulerTimeShared has the same oversimplification as the CloudletSchedulerTimeShared, described in #58.

Some useful links:

Such a change is required to close issue #7.

Detailed information about how the feature should work

Each Pe must have its own PeProvisioner, and the provisioner has to maintain a map of VMs and the amount of the PE's capacity that is allocated to each VM. The current implementation uses a List to represent the capacity allocated to a VM. But since different physical PEs (each one with its own PeProvisioner) may be allocated to the same VM, each PeProvisioner just has to track the amount of MIPS from its Pe that is allocated to a VM. It doesn't make sense to have a list of MIPS representing the capacity allocated from a physical Pe to a given VM.
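
A minimal sketch of the proposed tracking is shown below, assuming a generic VM key type; it just illustrates that each provisioner stores a single MIPS amount per VM instead of a list:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of a Pe provisioner that tracks the MIPS allocated from its single PE to each VM. */
public class PeProvisionerSketch<V> {
    private final double peCapacityMips;
    private final Map<V, Double> allocatedMipsByVm = new HashMap<>();

    public PeProvisionerSketch(double peCapacityMips) {
        this.peCapacityMips = peCapacityMips;
    }

    /** Tries to allocate the given MIPS from this PE to the VM, returning false if capacity is exhausted. */
    public boolean allocateMipsForVm(V vm, double mips) {
        double alreadyAllocated = allocatedMipsByVm.values().stream().mapToDouble(Double::doubleValue).sum();
        if (alreadyAllocated + mips > peCapacityMips) {
            return false; // not enough capacity left in this PE
        }
        allocatedMipsByVm.merge(vm, mips, Double::sum);
        return true;
    }

    public double getAllocatedMipsForVm(V vm) {
        return allocatedMipsByVm.getOrDefault(vm, 0.0);
    }
}
```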

getTotalUtilizationOfCpu() method gives incorrect values for the oversimplified CloudletSchedulerTimeShared

ISSUE:

The CloudletSchedulerAbstract.getTotalUtilizationOfCpu method returns incorrect CPU usage values for the CloudletSchedulerTimeShared subclass, due to the oversimplification of the latter, which doesn't perform task preemption and considers that all submitted Cloudlets can execute simultaneously on the same CPU cores, even when there are fewer cores than cloudlets.

For instance, if there is just 1 core and 2 Cloudlets, these 2 cloudlets are executed at the same time (which in practice is impossible), each one using just 50% of the core capacity. That oversimplification "allows" the 2 cloudlets to run at the same time but causes issues when computing the current CPU usage.

Expected behavior

If there are 2 CPU cores and 4 cloudlets that started at the same time and have the same length, each one requiring one core, the current CPU usage at any time must be 100%.

Actual behavior

In the given scenario, the CPU usage is computed as 200% because the oversimplified CloudletSchedulerTimeShared "runs" all the 4 Cloudlets at the same time, even though there are just 2 CPU cores.

If the issue is related to a specific method, provide a test case that fails in order to show the problem

The following CloudletSchedulerTimeSharedTest test cases were introduced to confirm the issue:

  • testGetTotalUtilizationOfCpu_LessCloudletsThanPesHalfUsage
  • testGetTotalUtilizationOfCpu_LessCloudletsThanPesNotFullUsage
  • testGetTotalUtilizationOfCpu_MoreCloudletsThanPes
  • testGetTotalUtilizationOfCpu_OnePeForEachCloudlet

Specifications like the version of the project, operating system or workload file used

The issue applies to the current development version.

Allow submission of cloudlets with a specific delay

Create an overloaded version of the DatacenterBroker.submitCloudlets method to receive a delay parameter in order to postpone the creation of the submitted cloudlets inside some VM.

Detailed information about how the feature should work

An example scenario where this feature should be used

If you are creating a simulation scenario from a workload trace file, where the applications (cloudlets) start executing at specific times, this feature will allow you to define when the applications have to start.

If you are creating a simulation scenario where cloudlets have to arrive dynamically and randomly over time (such as using a Poisson arrival process), you can simply create each cloudlet prior to starting the simulation, defining the delay of each one. Thus, when you submit the cloudlet list to the broker, it can delay the creation of each cloudlet based on the Cloudlet.delay attribute. This way, you can instantiate all the cloudlets you want in advance and start the simulation after that, letting the simulator take care of delaying the cloudlet creation inside some VM.

This eases the process of defining dynamic cloudlet arrival, relieving you from using listeners or threads to perform such a task. The simulation code, in this case, will be simpler. However, if the number of cloudlets is huge, it can cause your simulation to use a lot of memory, since you create all the cloudlets in advance. Thus, this feature has to be used carefully, delaying the submission of small sets of cloudlets at a time. However, currently, without this feature, instantiating all cloudlets in advance is the default way used by the simulator.

Another situation in which this feature may be used is to ensure that a given cloudlet will be executed only a specific time after another cloudlet starts executing, simulating some parallel process that is triggered by the first cloudlet.
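
A hedged sketch of the broker-side logic the proposal implies is shown below; DelayedCloudlet and the method names are illustrative stand-ins, not the framework API:

```java
import java.util.Comparator;
import java.util.List;

/** Illustrative cloudlet holding only the attributes needed for this sketch. */
record DelayedCloudlet(long id, double submissionDelay) { }

/** Sketch of how a broker could postpone cloudlet creation based on a per-cloudlet delay. */
public class DelayedSubmissionSketch {

    /** Schedules each cloudlet to be created inside a VM only after its own delay has elapsed. */
    public static void submit(List<DelayedCloudlet> cloudlets, double currentSimulationTime) {
        cloudlets.stream()
                 .sorted(Comparator.comparingDouble(DelayedCloudlet::submissionDelay))
                 .forEach(c -> scheduleCloudletCreation(c, currentSimulationTime + c.submissionDelay()));
    }

    private static void scheduleCloudletCreation(DelayedCloudlet cloudlet, double time) {
        // In a real broker this would send a simulation event; here it just reports the plan.
        System.out.printf("Cloudlet %d will be created inside a VM at time %.2f%n", cloudlet.id(), time);
    }
}
```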

A brief explanation of why you think this feature is useful

This allows the simulation of dynamic arrival of cloudlets into the Cloud provider infrastructure.

Included Examples

Related Issues

CloudletSchedulerTimeShared is not performing concurrent execution of cloudlets

ISSUE:

After the removal of code duplication for CloudletSchedulerTimeShared, it is working as if there were dedicated PEs for each cloudlet, making two cloudlets that compete for the same PE work as if each one had an individual PE.

The existing tests didn't catch this problem.

Expected behaviour

Consider a scenario where there is 1 Host with 2 PEs of 1000 MIPS, 1 VM that uses all Host PEs, and 4 Cloudlets of 10000 MI, each requiring 1 PE. Using a CloudletSchedulerTimeShared, as there are more Cloudlets than PEs, instead of each cloudlet taking 10 seconds to finish, they have to take 20 seconds.

Actual behaviour

All the cloudlets execute as if each one had a dedicated PE.

If the issue is related to a specific method, provide a test case that fails in order to show the problem

A new Integration Test CloudletSchedulerTimeSharedWithMoreCloudletsThanPEs was introduced to confirm the problem.

Include a new module to provide micro benchmarks for assessment of framework performance

FEATURE:

One of the crucial features of a simulation framework is performance, mainly considering large-scale simulation scenarios. Thus, performance has to be measured in order to assess which parts of the framework need improvement.

In order to allow these assessments to be performed on a continuous basis, automated benchmarks have to be implemented. Accordingly, the project has to provide a new module that makes use of JMH (the Java Microbenchmark Harness framework) to provide a set of automated benchmarks.

Detailed information about how the feature should work

A new maven module that uses JMH has to be included, and new tests (similar to JUnit tests) have to be implemented to assess the performance of specific parts of the simulation framework.
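
A minimal JMH benchmark sketch illustrating the structure such tests could have is shown below; the benchmarked operation (creating a large list of objects) is just a placeholder:

```java
import java.util.ArrayList;
import java.util.List;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;

/** Placeholder micro benchmark: measures how long it takes to instantiate many simple objects. */
@BenchmarkMode(Mode.AverageTime)
public class CreationBenchmarkSketch {

    @Benchmark
    public List<Object> createManyObjects() {
        List<Object> list = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            list.add(new Object()); // in a real benchmark this would create Hosts, VMs or Cloudlets
        }
        return list; // returning the result avoids dead-code elimination by the JIT
    }
}
```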

An example scenario where this feature should be used

JMH will allow detecting CloudSim Plus bottlenecks and improving its performance, mainly for large-scale experiments and heuristic implementations for NP-hard problems (such as mapping Cloudlets to VMs or VMs to Hosts).

A brief explanation of why you think this feature is useful

The use of JMH will allow the automation of benchmarks execution and can be integrated with the Travis continuous integration system in order to execute the benchmarks after every "git push" that triggers a build process.

Final Solution

The cloudsim-plus-benchmarks module was introduced and is used to assess performance of internal CloudSim Plus features.

Provide realistic implementations for VmScheduler

FEATURE:

The VmSchedulerTimeShared is an unrealistic, oversimplified implementation of a time-shared scheduler that does not perform actual Vm preemption.

More realistic implementations, such as those used in actual hypervisors like Xen, should be provided.

This is an issue similar to #58.

Passing IDs instead of entire objects is not an Object-Oriented approach

IMPROVEMENT

The tool is a discrete event simulator that relies on message passing to execute simulation tasks.
However, raw data types are frequently passed inside messages, which is not type-safe and makes it difficult to know exactly what a given method parameter has to receive.

Detailed information about how the feature should work

The SimEntity.send method, which is used by several sub-classes, receives an Object data parameter that usually is an array of entity IDs. The DatacenterSimple.processVmCreate method is an example of this problem, where the parameter is filled with the Datacenter ID, the VM ID and a final integer to represent a boolean value.

Considering that the send method is used to pass several totally different kinds of data to different entities, a specific object containing everything that is needed should be passed instead of an int array. This object shouldn't just store these IDs in different attributes, but instead store the objects that these IDs represent.

The problem of using IDs to create relationships happens in other contexts as well: for instance, to relate a Cloudlet to the Vm that will run it and to the broker that represents the Cloudlet's customer, a set of IDs (vmId and brokerId, respectively) is used instead of Vm and DatacenterBroker attributes.
The use of these IDs also requires passing Integers instead of entire objects in messages during simulation execution.

Some attributes that have to be changed from int to a specific object (the default value has to be set using the Null Object pattern):

  • Cloudlet.userId, Cloudlet.vmId
  • Vm.userId
  • VmScheduler and CloudletScheduler internal attributes
  • Objects of type List or Map where the key is an Integer representing the ID of some object instead of the object itself.
  • Several classes inside the network package, including all the NetworkPacket implementations.

A brief explanation of why you think this feature is useful

The proposal improves the OO design, makes the code type-safe and clearer to understand, and helps avoid runtime cast exceptions.

Passing an entire object doesn't increase the memory footprint, since just a reference to an already instantiated object is passed.

After all, creating true OO relationships using objects instead of IDs allows you to navigate through the entities to get the data you want. For instance, knowing the Datacenter where a cloudlet is running would be possible just by calling cloudlet.getVm().getHost().getDatacenter().
These true OO relationships will allow developers using the simulator to get a lot of information about the execution of their simulations in an intuitive way.
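
The contrast can be sketched with stripped-down stand-in classes (these are not the framework classes, just an illustration of the two approaches):

```java
/** ID-based relationship (current approach): the caller must look up other entities by ID. */
class CloudletWithIds {
    int vmId;
    int brokerId;
}

/** Object-based relationship (proposed approach): entities can be navigated directly. */
class CloudletWithReferences {
    VmRef vm;

    DatacenterRef getDatacenterWhereRunning() {
        return vm.host.datacenter; // e.g. cloudlet.getVm().getHost().getDatacenter()
    }
}

class VmRef { HostRef host; }
class HostRef { DatacenterRef datacenter; }
class DatacenterRef { }
```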

Inappropriately changing Cloudlet length in CloudletScheduler to consider the time spent to transfer Cloudlet files to a VM

Issue

Some CloudletSchedulers, such as the CloudletSchedulerSpaceShared, inappropriately increase the length of the Cloudlet inside the cloudletSubmit method as a means of accounting for the time the cloudlet spends being transferred to the VM.

It is very strange to change the length of the cloudlet, since it is a value defined by the user.
The execution length is one thing; the total execution time is another.

Besides the problem of changing the cloudlet length to simulate the file transfer time, when files are in fact added to the required file list, the transfer time was being doubled because the DatacenterSimple already computes it in the submitCloudletToVm method.

What should be done instead is to delay the start of the cloudlet execution.

If the issue is related to a specific method, provide a test case that fails in order to show the problem

The CheckCloudletStartDelayForTransferRequiredFilesTest integration test confirms the issue.

Enable parallel build of maven CloudSim Plus packages

IMPROVEMENT

Detailed information about how the feature should work

Configure the pom.xml file of the top CloudSim Plus maven project to enable parallel build.

A brief explanation of why you think this feature is useful

It will speed up the build process by using multiple processor cores, each one building a given maven module of the project.

The list of finished cloudlets is always empty using the HostDynamicWorkload class

Issue

The results from the list of finished cloudlets, obtained from broker.getCloudletsFinishedList(), aren't being shown because of some issue in the HostDynamicWorkload class. The mentioned broker method just returns an empty list.

When Host objects are used, broker.getCloudletsFinishedList() works as expected, but the Host usage history doesn't, since the usage history is not available for the Host class, just for HostDynamicWorkload.

Expected behaviour and actual behavior

The broker.getCloudletsFinishedList() should return the list of executed cloudlets when using HostDynamicWorkload hosts, however, it always returns an empty list.

If it is a wider problem or you don't know where it happens, provide a minimal simulation example that reproduces the problem

See the example HostsCpuUsageExample.

Allow the combination of specialised Datacenters, Hosts, VMs and Cloudlets to enable such objects to be both power- and network-aware.

FEATURE

Interfaces such as Datacenter, Host, Vm and Cloudlet have different implementing classes that provide specialised behaviour, such as either network-enabled or power-aware objects. However, using inheritance to implement different kinds of such objects limits the use to just one kind of object in a given simulation. For instance, it is not possible to have a network- and power-aware kind of Host. The only way would be creating a new NetworkPowerHost class that joins the behaviour of both hosts.

Detailed information about how the feature should work

There must be just the basic DatacenterSimple, HostSimple, VmSimple and CloudletSimple classes. The complementary behaviour must be provided using composition instead of inheritance. This way, any object will be allowed to have multiple behaviours, such as networking or power consumption. New behaviours could even be added at runtime when an object is instantiated.

An example scenario where this feature should be used

It would be possible to build a simulation that aims to assess the power consumption of network-enabled objects such as Datacenters, Hosts and VMs.

A brief explanation of why you think this feature is useful

This feature will move CloudSim Plus to the next level, allowing the composition of different behaviours for simulation objects and thus the modelling of even more realistic simulation scenarios.
It will allow defining different behaviour classes, such as Network, Power and so on, that could be mixed together in any way and number.

Examples Included

More Information

CloudSim Plus 2.0.0 Release Notes

Vertical VM Scaling

Implement a mechanism for scaling up and down VM resources during simulation execution.

Included Examples

Examples are available here.

Detailed information about how the feature should work

The VM should have a property to enable vertically scaling RAM, CPU and BW.

This can be tricky in real cloud environments, since both the hypervisor and the guest operating system may have to support hot-adding resources such as RAM and CPU. For instance, it is reported that KVM and Microsoft Hyper-V support such a feature.
However, the guest OS has to provide a so-called "balloon driver" to enable hot addition of memory and CPU (without requiring a VM reboot).

Approaches

There are two different ways to vertically scale a specific resource, by:

  1. resizing the current resource - the current capacity of a resource such as RAM can be dynamically resized. The VM will continue to have the same number of virtual devices (such as virtual memory cards), but with a different capacity. However, doing the same for CPU may not be possible in actual hypervisors. Thus, the best approach for CPU could be the second one below.
  2. adding another virtual device to a VM - for instance, to vertically scale the RAM, a new virtual memory card can be attached to the VM. A new CPU core can be attached as well. This is like performing a horizontal scaling at the resource level (since more CPUs will be made available).

Vertical scaling of RAM, Storage and BW follows the 1st approach, by resizing the resource to scale it up or down.

Vertical scaling of PEs uses the 2nd approach. Since the VM doesn't have a List of PE objects, but just stores the number of PEs and MIPS capacity of each one, the scaling is performed by just increasing or decreasing the numberOfPes attribute to simulate the addition of a virtual CPU core.

The Processor class is currently being used in the Vm class to put together the numberOfPes and the MIPS capacity. Since the Processor implements the ResourceManageable interface, this approach enables detecting under- and overload in a polymorphic way, but the actual scaling of CPU is not implemented polymorphically, due to some design issues.
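
A simplified sketch of the two approaches, applied to a VM that stores only a resource capacity and a number of PEs, is shown below (names are illustrative, not the framework API):

```java
/** Sketch of a VM exposing the two vertical scaling approaches described above. */
public class VerticalScalingSketch {
    private long ramCapacityMb;  // approach 1: the resource capacity is simply resized
    private int numberOfPes;     // approach 2: "virtual devices" (CPU cores) are added or removed

    public VerticalScalingSketch(long ramCapacityMb, int numberOfPes) {
        this.ramCapacityMb = ramCapacityMb;
        this.numberOfPes = numberOfPes;
    }

    /** Approach 1: resize RAM by a scaling factor (positive scales up, negative scales down). */
    public void scaleRam(double factor) {
        ramCapacityMb = (long) (ramCapacityMb * (1 + factor));
    }

    /** Approach 2: scale CPU by attaching or detaching virtual cores. */
    public void scalePes(int coresToAdd) {
        numberOfPes = Math.max(1, numberOfPes + coresToAdd);
    }
}
```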

References

An example scenario where this feature should be used

  • When a VM requires more resources and there are enough resources in the PM that can be assigned to the VM, in order to avoid a VM migration.
  • When a VM requires more resources and the PM doesn't have enough of them, but another co-hosted VM is not using all its allocated resources and could lend them to the requesting VM.

A brief explanation of why you think this feature is useful

It allows physical resources to be used in a more granular and efficient way, reducing resource wastage and VM migrations.

Related Issues

Provide a more realistic time-shared scheduler to overcome the oversimplification of the CloudletSchedulerTimeShared

FEATURE:

The CloudletSchedulerTimeShared doesn't perform process preemption and considers that, even when there are more Cloudlets than virtual PEs, all Cloudlets will execute at the same time. For instance, if there are 2 Cloudlets and just 1 PE, both Cloudlets will run simultaneously, each one using just 50% of the PE capacity. This oversimplification causes some issues, as described in #33.

Thus, a more realistic time-shared scheduler must be provided that really implements process preemption. Such a new CloudletScheduler might implement the Completely Fair Scheduler used in recent Linux Kernels.

Detailed information about how the feature should work

The Completely Fair Scheduler must perform task preemption based on Cloudlet priority, allocating different time slices according to the priority.
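
As an illustration of the idea behind such a scheduler, the self-contained sketch below computes priority-weighted time slices for a set of cloudlets, in the spirit of the Completely Fair Scheduler (it is not the actual CloudletSchedulerCompletelyFair implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Sketch: splits a scheduling period among cloudlets proportionally to their priority (weight). */
public class FairTimeSliceSketch {
    record CloudletInfo(long id, int priority) { }

    /** Returns the time slice (in seconds) each cloudlet gets within the given period. */
    public static Map<Long, Double> timeSlices(List<CloudletInfo> cloudlets, double periodSeconds) {
        double totalWeight = cloudlets.stream().mapToInt(CloudletInfo::priority).sum();
        return cloudlets.stream().collect(Collectors.toMap(
                CloudletInfo::id,
                c -> periodSeconds * c.priority() / totalWeight));
    }

    public static void main(String[] args) {
        var slices = timeSlices(List.of(new CloudletInfo(1, 1), new CloudletInfo(2, 3)), 20);
        System.out.println(slices); // cloudlet 2 (higher priority) gets 15s, cloudlet 1 gets 5s
    }
}
```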

An example scenario where this feature should be used

The new scheduler could be used to assess the efficiency of actual schedulers such as the Linux Scheduler and investigate how such a scheduler could be improved.

A brief explanation of why you think this feature is useful

It will enable creating more realistic simulation scenarios, resulting in more accurate outcomes.

Update

This issue was closed by commit d5768cd, which introduces the CloudletSchedulerCompletelyFair, and it was released in version v0.8-beta.5.

Example using this feature

Improve documentation of SimEvent and CloudSim classes

Improvement

The documentation of several methods and attributes of the SimEvent class is missing.
Regarding the CloudSim class, several parameters just have unhelpful javadoc. For instance, it is common to see a parameter named tag whose javadoc just says "the tag", instead of explaining it.

DatacenterBroker interface has to provide a method to define the policy used to select a VM to run each Cloudlet

FEATURE

The DatacenterBroker interface is not well defined in order to enable implementing classes to define how a VM is selected to place each submitted Cloudlet. This is a problem that came from CloudSim, which doesn't even have a DatacenterBroker interface.

If the goal is to allow third-party developers to implement new DatacenterBrokers, the framework has to provide the necessary base interfaces and classes to allow that. Currently such extensibility is not possible without changing core framework classes.

The protected createCloudletsInVms() method in the DatacenterBrokerSimple class always selects the first VM for each Cloudlet that was not manually bound to a given VM.

Detailed information about how the feature should work

DatacenterBroker interface has to provide one method to be implemented in order to specify the policy used to select a VM for a given Cloudlet. The DatacenterBrokerSimple.createCloudletsInVms() has to be refactored in order to move the VM selection code to that proposed method.

An example scenario where this feature should be used

This feature will allow implementing simulations that can give priorities to cloudlets to be assigned to VMs, according to the policy implemented by a custom DatacenterBroker. Different policies can even assess characteristics of VMs and Cloudlets when assigning a Cloudlet to a VM, trying to get the best mapping possible. A Knapsack Problem approach could be implemented by a DatacenterBroker in order to pack the maximum number of cloudlets into the best VM, or to assess mappings that reduce the overall task completion time for all Cloudlets, reduce the mean task completion time among Cloudlets, select VMs where the Cloudlet resource usage would reach a certain level (in cases where the cloudlet has dynamic UtilizationModels), or several other scenarios.
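
A hedged sketch of the kind of method the interface could expose is shown below, using a function that maps each Cloudlet to a chosen VM; the class, generic types and method names are illustrative placeholders, not the actual API:

```java
import java.util.List;
import java.util.function.BiFunction;

/** Sketch of a broker that delegates the Cloudlet-to-VM selection policy to a pluggable function. */
public class BrokerMappingSketch<C, V> {
    /** The policy: given a cloudlet and the available VMs, select the VM to run it. */
    private BiFunction<C, List<V>, V> cloudletToVmMapper;

    public void setCloudletToVmMapper(BiFunction<C, List<V>, V> mapper) {
        this.cloudletToVmMapper = mapper;
    }

    public V selectVmFor(C cloudlet, List<V> availableVms) {
        return cloudletToVmMapper.apply(cloudlet, availableVms);
    }
}
```

A best-fit policy, for example, would be passed as a lambda that sorts the available VMs by free capacity and returns the tightest fit for the given Cloudlet.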

A brief explanation of why you think this feature is useful

It will enable real extensibility to third-party developers to create custom implementations of a DatacenterBroker.

Examples Available

CloudletToVmMappingBestFit.java

Some cloudlets exec and finish time are being incorrectly computed

ISSUE:

Simulation Scenario

  • 1 Datacenter with 1 Host of 8 PEs of 1000 MIPS
  • 2 VMs with 4 PEs of 1000 MIPS
  • 3 Cloudlets with 4 PEs of 10000 MI length

Expected behavior

Using a CloudletSchedulerSpaceShared for the VMs, 1 Cloudlet will be submitted to one VM and the other 2 will be submitted to the other VM.

One of the 2 Cloudlets that were submitted to the same VM with a CloudletSchedulerSpaceShared has to wait for the first one to finish. Accordingly, the first Cloudlet must finish at the 10th second.
The 2nd of the two Cloudlets running inside the same VM must have the following results:

Cloudlet  VM  Start  Finish  Exec
2         1   10     20      10

Actual behavior

The 2nd of the two Cloudlets running inside the same VM has the incorrect results below:

Cloudlet  VM  Start  Finish  Exec
2         1   10     30      20

Example

The DynamicVmCreationExample.java inside the org.cloudsimplus.migration package of the testbeds project shows the issue.

Specifications like the version of the project, operating system or workload file used

Current development version.

UtilizationModelStochastic just uses the low-quality native Random class

IMPROVEMENT:

The UtilizationModelStochastic class just uses the native Java Random class, which provides low-quality pseudo-random numbers (PRNs). There is an entire set of classes implementing the ContinuousDistribution interface, inherited from CloudSim, that provide better-quality PRNs based on the Apache Commons Math library.

UtilizationModelStochastic should be changed to use ContinuousDistribution and allow the developer to define what implementation to use. A default implementation such as the UniformDistr should be used in case the developer doesn't define one.
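
A sketch of the proposed change is shown below: the model receives any pseudo-random number generator behind a common interface and defaults to a uniform one. The interface and class names mirror those mentioned above, but the bodies are illustrative stand-ins, not the framework code:

```java
import java.util.random.RandomGenerator;

/** Minimal stand-in for the ContinuousDistribution interface mentioned above. */
interface ContinuousDistributionSketch {
    double sample(); // returns the next pseudo-random number
}

/** Sketch of a stochastic utilization model accepting a pluggable PRN generator. */
class UtilizationModelStochasticSketch {
    private final ContinuousDistributionSketch prng;

    /** Default constructor: falls back to a uniform generator, as suggested for UniformDistr. */
    UtilizationModelStochasticSketch() {
        this(RandomGenerator.getDefault()::nextDouble);
    }

    UtilizationModelStochasticSketch(ContinuousDistributionSketch prng) {
        this.prng = prng;
    }

    /** Returns the resource utilization (0..1) for the given simulation time. */
    double getUtilization(double time) {
        return prng.sample();
    }
}
```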

A brief explanation of why you think this feature is useful

PRN generators that are highly uniform and lowly correlated are fundamental to provide unbiased and scientifically valid simulation results.

Introduce clone methods for Vm and Cloudlet

FEATURE:

Introduce clone methods and/or clone constructors for DatacenterCharacteristicsSimple, DatacenterSimple, Host, VmSimple and CloudletSimple to allow creating a new object of such classes as a copy of another given object.

Detailed information about how the feature should work

Main simulation classes such as Datacenter, Host, Vm and Cloudlet should have a clone constructor
that accepts an object from the same class and clones it. For objects that require an explicit ID, the constructor has to receive the ID to be assigned to the new object.

For auxiliary and simpler classes such as VmAllocationPolicy, VmScheduler, CloudletScheduler and UtilizationModel, they should just implement the Cloneable interface. Since any object that implements Cloneable automatically gets a clone method that is protected, it cannot be called from elsewhere. But the method can be redeclared public inside the implementing classes (inside the interface it is not possible).

Challenges

  • Despite the clone method in the Cloneable interface being protected, it can be declared public directly in implementing classes such as VmSimple. However, this method performs a shallow copy instead of a deep copy; thus, it doesn't clone the internal attributes of the object being cloned. Using clone the way it was already used in CloudSim will cause several problems. For instance, a CloudletScheduler has several lists of cloudlets that will not be cloned. This way, the original and the cloned scheduler will have references to the exact same lists.
  • The clone constructor approach also allows just a shallow copy. For instance, considering that an attribute of the CloudletScheduler interface type may have different implementations, inside a Vm clone constructor there is no generic way to call the constructor of the scheduler class used by the Vm, unless using reflection (which may generate several runtime exceptions).
  • Another alternative is using serialization/deserialization.

General guidelines

  • The new clone methods must first call the constructor with fewer parameters to avoid code duplication.

  • The javadoc must explicitly inform that the constructor is a clone constructor and state what non-primitive attributes are cloned. For instance, when cloning a DatacenterCharacteristics, it has to be said that each host in the hostList will be cloned.

  • The object to be cloned, that must be passed to the clone constructor, should be called source for every created constructor, maintaining naming consistency.

  • As every attribute from the given object to be cloned has to be copied to the cloned object, to ensure data consistency, ALWAYS use the setter methods to set the attributes of the cloned object. NEVER assign a value directly to the attribute. For instance, inside the DatacenterSimple clone constructor call this.setSchedulingInterval(source.getSchedulingInterval()) instead of this.schedulingInterval = source.schedulingInterval. Since the setter usually performs some validations, if you assign the value directly, such validations will not be performed. A minimal sketch following these guidelines is shown right after this list.
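
The sketch below follows the guidelines above using a stand-in class (it is not the real HostSimple):

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-in class showing the clone-constructor pattern described in the guidelines above. */
public class HostSketch {
    private long ramMb;
    private List<String> vmNames = new ArrayList<>();

    public HostSketch(long ramMb) {
        setRamMb(ramMb);
    }

    /** Clone constructor: copies every attribute from source, always through setters. */
    public HostSketch(HostSketch source) {
        this(source.getRamMb());                           // reuse the simpler constructor
        setVmNames(new ArrayList<>(source.getVmNames()));  // deep-copy mutable collections
    }

    public long getRamMb() { return ramMb; }
    public void setRamMb(long ramMb) {
        if (ramMb < 0) throw new IllegalArgumentException("RAM cannot be negative");
        this.ramMb = ramMb;
    }
    public List<String> getVmNames() { return vmNames; }
    public void setVmNames(List<String> vmNames) { this.vmNames = vmNames; }
}
```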

DatacenterCharacteristicsSimple clone

It must clone every Host from the hostList of the given DatacenterCharacteristicsSimple object passed to that constructor and add them to the created clone.

DatacenterSimple clone

It must clone the DatacenterCharacteristics from the DatacenterSimple object passed to that constructor. It must also clone every Vm from the vmList of the object passed to that constructor.

For the vmAllocationPolicy attribute, since there may be different VmAllocationPolicy implementations, the VmAllocationPolicy interface must implement Cloneable and provide a clone method that has to be called to clone it and assign the clone to the cloned Datacenter.

HostSimple clone

It must clone the VmScheduler. That interface must implement Cloneable so that the VmScheduler clone is created from the clone method.

VmSimple clone

It must clone the CloudletScheduler. That interface must implement Cloneable so that the CloudletScheduler clone is created from the clone method.

An example scenario where this feature should be used

These clone methods will make it easier to create multiple objects with the same configuration, as is usually done in simulations. For instance, a Datacenter usually has a set of hosts with the same configuration, thus one host can be created and the others cloned from this first one.

A brief explanation of why you think this feature is useful

It will reduce the complexity of creating objects with the same configuration and give one more way to instantiate such similar objects. It will also contribute to upcoming features such as creating a snapshot of a given VM to enable horizontal load balancing, and also starting a copy of a VM in another PM in case of a host failure.

Enable UtilizationModels to define resource usage either in percentage or absolute values

FEATURE:

The current implementations of the UtilizationModel interface impose that the usage of a given Cloudlet resource be defined in percentage values, which is inflexible. Such an interface should provide a way to allow the developer to define the resource usage in either percentage or absolute values.

Detailed information about how the feature should work

A Unit enum with the values {PERCENTAGE, ABSOLUTE} must be defined inside the UtilizationModel interface to define the unit of the resource usage. Implementing classes inherited from the CloudSim project, such as the UtilizationModelFull, must continue to define the values in percentage by default.

Other classes such as the UtilizationModelStochastic and UtilizationModelArithmeticProgression should provide different constructors to enable defining the unit when instantiating such an object.

The documentation should be properly updated to make clear that the getUtilization() method may return a percentage or absolute value, depending on the given Unit.

Methods of classes such as CloudletScheduler and Vm that make use of these UtilizationModels should be properly refactored to check the unit of each UtilizationModel for every Cloudlet resource.

The usage of different units for different resources of the same Cloudlet should be allowed. Even Cloudlets running in the same VM may have different units for their UtilizationModels. When computing the total resource usage, the unit of every UtilizationModel has to be checked in order to convert all values to a single unit (either percentage or absolute) and get, for instance, the total percentage of RAM usage of a given VM.
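
A sketch of the proposed interface change is shown below; the enum and method names follow the description above but are not necessarily the final API:

```java
/** Sketch of a UtilizationModel that can express usage in percentage or absolute values. */
interface UtilizationModelSketch {
    enum Unit { PERCENTAGE, ABSOLUTE }

    Unit getUnit();

    /** Returns the utilization at the given time, expressed in the model's unit. */
    double getUtilization(double time);

    /** Converts the utilization to an absolute value, given the total capacity of the resource. */
    default double getAbsoluteUtilization(double time, double resourceCapacity) {
        return getUnit() == Unit.PERCENTAGE
                ? getUtilization(time) * resourceCapacity
                : getUtilization(time);
    }
}
```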

An example scenario where this feature should be used

This will enable more flexibility when creating simulation scenarios, allowing the developer to define the resource usage according to his/her needs. It will also easily allow creating simulations from trace files where resource utilization values are defined in absolute values.

A brief explanation of why you think this feature is useful

Besides the additional flexibility and the possibility to create more realistic simulations, this feature is key for closing issue #7. The VM vertical scaling mechanism will increase a VM resource, such as RAM, when such a resource is overloaded. For instance, when the VM RAM utilization reaches 70%, the scaler can request the broker to send a request to the Datacenter to check if the Host where the VM is placed has enough resources to scale the VM RAM up. However, consider that the Host has enough resources and the VM RAM is doubled from 1GB to 2GB. If the Cloudlet RAM utilization is defined in percentage, after the up scaling, the Cloudlet will continue to use 70% of the VM RAM, which now will be equal to 1.4GB. Thus, the up scaling will have no effect in reducing the VM overload.

By defining the resource usage in absolute values, this issue will not happen.

Examples using this new feature

Define packages documentation

FEATURE

One of the most common doubts that novice users have is where to find some feature they want to change or extend. Usually they have difficulty finding out which class to start with.

This way, package documentation can be a very useful, broad and intuitive place to understand the responsibilities of the classes in a package.

The package documentation can be a centralised place to provide documentation that currently is duplicated across the classes of a package. It can also provide additional information on how to use such classes in a more general way.

Thus, it can be a perfect starting point for users to study how the simulator is structured and how it works, without having to go through all the classes.

Detailed information about how the feature should work

Use NetBeans to add a package-info.java file for each package.

It has to be assessed which is the best option.
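
For reference, a package-info.java file is just a javadoc comment followed by the package declaration, for example (the package name and text here are illustrative):

```java
/**
 * Provides the classes that implement Cloudlet scheduling policies,
 * defining how a VM's CPU capacity is shared among its running Cloudlets.
 */
package org.cloudbus.cloudsim.schedulers;
```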

An example scenario where this feature should be used

Package documentation can be used as a study guide for novice users,
and even for any other user who wants detailed information about a package.

A brief explanation of why you think this feature is useful

It increases and improves documentation coverage, making it easier for users to understand how the simulator works and avoiding the repetitive questions that are posted every day in forums.

Final Solution

A package-info.java file was added to document each package. The package documentation can be seen directly in any IDE. When accessing the online javadocs at http://cloudsimplus.rtfd.io, the page for each package includes the documentation.

NetworkExamples freeze, packets are sent multiple times and not delivered

ISSUE:

  • Network examples stay in an infinite loop.
  • The packets sent are not delivered and the same packets are sent multiple times.
  • The Switch classes still have a lot of duplicated code.
  • There are lots of redundant maps, such as a map of VMs to Hosts and a map of VMs to Switches. Such data can now be obtained directly from the VM using vm.getHost() and vm.getHost().getEdgeSwitch().

If it is a wider problem or you don't know where it happens, provide a minimal simulation example that reproduces the problem

Examples are provided in the org.cloudsim.examples.network package inside the examples project.

Specifications like the version of the project, operating system or workload file used

Current CloudSim Plus development version.

Sequential and parallel execution of CloudletExecutionTask for a NetworkCloudlet

Each execution task must use just a single core. It may represent a thread (so the name of the class could be changed). However, tasks may be executed in parallel (considering there are multiple cores) and/or sequentially.

This feature has to be included in the class. One proposal is to create an int group attribute. All tasks (not only execution tasks) that have the group equal to zero are executed sequentially (meaning they aren't grouped). Tasks that have the same group have to be executed in parallel, one in each CPU core (PE).

All tasks in a group will be executed together. The next group starts only when all the tasks in the prior group finish (each task can have a different length, so tasks may finish at different times).

The value of the group defines the task execution order. Tasks with a lower group number are executed first. You can have single tasks (that are not grouped) between grouped tasks (defining the order in which such a single task executes) just by assigning a group number to it and making sure not to add other tasks with the same group.
For instance, consider the tasks below, represented by their group number, for a NetworkCloudlet with 4 cores:
0 0 1 1 1 1 2 3 3

There are:

  • 2 ungrouped tasks (0) that will be executed sequentially;
  • 4 tasks of group 1 that will be executed in parallel after all ungrouped tasks;
  • a single task at group 2 that will be executed after group 1;
  • 2 tasks at group 3 to be executed in parallel at the end.

When adding a task to a NetworkCloudlet, the addTask() method has to check if the current number of tasks for the group (which represents parallel tasks) is lower than the number of NetworkCloudlet PEs.
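
A self-contained sketch of the grouping rule described above is shown below, splitting the "0 0 1 1 1 1 2 3 3" sequence into execution stages (the class is a stand-in, not NetworkCloudlet itself, and it assumes tasks are added in execution order):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: splits a task list into sequential stages; tasks sharing a non-zero group run in parallel. */
public class TaskGroupingSketch {

    /**
     * Returns the execution stages, assuming tasks are added in execution order
     * (i.e. tasks of the same group are consecutive, as in the example above).
     */
    public static List<List<Integer>> stages(int[] taskGroups, int cloudletPes) {
        List<List<Integer>> stages = new ArrayList<>();
        for (int i = 0; i < taskGroups.length; ) {
            int group = taskGroups[i];
            List<Integer> stage = new ArrayList<>();
            do {
                stage.add(group);
                i++;
            } while (group != 0 && i < taskGroups.length && taskGroups[i] == group);
            if (stage.size() > cloudletPes) {
                throw new IllegalArgumentException("Group " + group + " has more parallel tasks than PEs");
            }
            stages.add(stage); // ungrouped (0) tasks form single-task stages; grouped tasks run together
        }
        return stages;
    }

    public static void main(String[] args) {
        // For a NetworkCloudlet with 4 PEs, prints: [[0], [0], [1, 1, 1, 1], [2], [3, 3]]
        System.out.println(stages(new int[]{0, 0, 1, 1, 1, 1, 2, 3, 3}, 4));
    }
}
```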

Create documentation site at http://readthedocs.org

FEATURE

Organize and extend the existing documentation and create a site at http://readthedocs.org

A brief explanation of why you think this feature is useful

It will enable version control of the documentation for each project release, enabling the user to select the project version for which he/she wants to see the docs.
It will provide a central point to find documentation, with search capabilities and a structured site.

Update

The documentation is available at http://cloudsimplus.rtfd.io
