
BenchBase


BenchBase (formerly OLTPBench) is a Multi-DBMS SQL Benchmarking Framework via JDBC.

Quickstart

To clone and build BenchBase using the postgres profile,

git clone --depth 1 https://github.com/cmu-db/benchbase.git
cd benchbase
./mvnw clean package -P postgres

This produces artifacts in the target folder, which can be extracted,

cd target
tar xvzf benchbase-postgres.tgz
cd benchbase-postgres

Inside this folder, you can run BenchBase. For example, to execute the tpcc benchmark,

java -jar benchbase.jar -b tpcc -c config/postgres/sample_tpcc_config.xml --create=true --load=true --execute=true

A full list of options can be displayed,

java -jar benchbase.jar -h

Description

Benchmarking is incredibly useful, yet endlessly painful. This benchmark suite is the result of a group of PhDs/post-docs/professors getting together and combining their workloads/frameworks/experiences/efforts. We hope this will save other people's time and provide an extensible platform that can be grown in an open-source fashion.

BenchBase is a multi-threaded load generator. The framework is designed to be able to produce variable rate, variable mixture load against any JDBC-enabled relational database. The framework also provides data collection features, e.g., per-transaction-type latency and throughput logs.

The BenchBase framework includes the following benchmarks: tpcc, tpch, tatp, wikipedia, resourcestresser, twitter, epinions, ycsb, seats, auctionmark, chbenchmark, voter, sibench, noop, smallbank, hyadapt, otmetrics, and templated.

This framework is designed to allow for easy extension. We provide stub code that a contributor can use to include a new benchmark, leveraging all the system features (logging, controlled speed, controlled mixture, etc.).


Usage Guide

How to Build

Run the following command to build the distribution for a given database specified as the profile name (-P). The following profiles are currently supported: postgres, mysql, mariadb, sqlite, cockroachdb, phoenix, and spanner.

./mvnw clean package -P <profile name>

The following files will be placed in the ./target folder:

  • benchbase-<profile name>.tgz
  • benchbase-<profile name>.zip

How to Run

Once you build and unpack the distribution, you can run benchbase just like any other executable jar. The following examples assume you are running from the root of the expanded .zip or .tgz distribution. If you attempt to run benchbase outside of the distribution structure you may encounter a variety of errors including java.lang.NoClassDefFoundError.

To bring up help contents:

java -jar benchbase.jar -h

To execute the tpcc benchmark:

java -jar benchbase.jar -b tpcc -c config/postgres/sample_tpcc_config.xml --create=true --load=true --execute=true

For composite benchmarks like chbenchmark, which require multiple schemas to be created and loaded, you can provide a comma separated list:

java -jar benchbase.jar -b tpcc,chbenchmark -c config/postgres/sample_chbenchmark_config.xml --create=true --load=true --execute=true

The following options are provided:

usage: benchbase
 -b,--bench <arg>               [required] Benchmark class. Currently
                                supported: [tpcc, tpch, tatp, wikipedia,
                                resourcestresser, twitter, epinions, ycsb,
                                seats, auctionmark, chbenchmark, voter,
                                sibench, noop, smallbank, hyadapt,
                                otmetrics, templated]
 -c,--config <arg>              [required] Workload configuration file
    --clear <arg>               Clear all records in the database for this
                                benchmark
    --create <arg>              Initialize the database for this benchmark
 -d,--directory <arg>           Base directory for the result files,
                                default is current directory
    --dialects-export <arg>     Export benchmark SQL to a dialects file
    --execute <arg>             Execute the benchmark workload
 -h,--help                      Print this help
 -im,--interval-monitor <arg>   Throughput Monitoring Interval in
                                milliseconds
 -jh,--json-histograms <arg>    Export histograms to JSON file
    --load <arg>                Load data using the benchmark's data
                                loader
 -s,--sample <arg>              Sampling window

How to Run with Maven

Instead of first building, packaging and extracting before running benchbase, it is possible to execute benchmarks directly against the source code using Maven. Once you have the project cloned you can run any benchmark from the root project directory using the Maven exec:java goal. For example, the following command executes the tpcc benchmark against postgres:

mvn clean compile exec:java -P postgres -Dexec.args="-b tpcc -c config/postgres/sample_tpcc_config.xml --create=true --load=true --execute=true"

This is equivalent to the steps above but eliminates the need to first package and then extract the distribution.

How to Enable Logging

To enable logging, e.g., for the PostgreSQL JDBC driver, add the following JVM property when starting...

-Djava.util.logging.config.file=src/main/resources/logging.properties

To modify the logging level you can update logging.properties and/or log4j.properties.

How to Release

./mvnw -B release:prepare
./mvnw -B release:perform

How to Use with Docker

  • Build or pull a dev image to help building from source:

    ./docker/benchbase/build-dev-image.sh
    ./docker/benchbase/run-dev-image.sh

    or

    docker run -it --rm --pull=always \
      -v /path/to/benchbase-source:/benchbase \
      -v $HOME/.m2:/home/containeruser/.m2 \
      benchbase.azurecr.io/benchbase-dev
  • Build the full image:

    # build an image with all profiles
    ./docker/benchbase/build-full-image.sh
    
    # or if you only want to build some of them
    BENCHBASE_PROFILES='postgres mysql' ./docker/benchbase/build-full-image.sh
  • Run the image for a given profile:

    BENCHBASE_PROFILE='postgres' ./docker/benchbase/run-full-image.sh --help # or other benchbase args as before

    or

    docker run -it --rm --env BENCHBASE_PROFILE='postgres' \
      -v results:/benchbase/results benchbase.azurecr.io/benchbase --help # or other benchbase args as before

See the docker/benchbase/README.md for further details.

GitHub Codespaces and VS Code devcontainer support are also available.

How to Add Support for a New Database

Please see the existing MySQL and PostgreSQL code for an example.


Contributing

We welcome all contributions! Please open a pull request. Common contributions may include:

  • Adding support for a new DBMS.
  • Adding more tests of existing benchmarks.
  • Fixing any bugs or known issues.

Please see the CONTRIBUTING.md for additional notes.

Known Issues

Please use GitHub's issue tracker for all issues.

Credits

BenchBase is the official modernized version of the original OLTPBench.

The original OLTPBench code was largely written by the authors of the original paper, OLTP-Bench: An Extensible Testbed for Benchmarking Relational Databases, D. E. Difallah, A. Pavlo, C. Curino, and P. Cudré-Mauroux. In VLDB 2014. Please see the citation guide below.

A significant portion of the modernization was contributed by Tim Veil @ Cockroach Labs, including but not limited to:

  • Built with and for Java 21.
  • Migration from Ant to Maven.
    • Reorganized project to fit Maven structure.
    • Removed static lib directory and dependencies.
    • Updated required dependencies and removed unused or unwanted dependencies.
    • Moved all non .java files to standard Maven resources directory.
    • Shipped with Maven Wrapper.
  • Improved packaging and versioning.
    • Moved to Calendar Versioning (https://calver.org/).
    • Project is now distributed as a .tgz or .zip with an executable .jar.
    • All code updated to read resources from inside .jar instead of directory.
  • Moved from direct dependence on Log4J to SLF4J.
  • Reorganized and renamed many files for clarity and consistency.
  • Applied countless fixes based on "Static Analysis".
    • JDK migrations (boxing, un-boxing, etc.).
    • Implemented try-with-resources for all java.lang.AutoCloseable instances.
    • Removed calls to printStackTrace() or System.out.println in favor of proper logging.
  • Reformatted code and cleaned up imports.
  • Removed all calls to assert.
  • Removed various forms of dead code and stale configurations.
  • Removed calls to commit() during Loader operations.
  • Refactored Worker and Loader usage of Connection objects and cleaned up transaction handling.
  • Introduced Dependabot to keep Maven dependencies up to date.
  • Simplified output flags by removing most of them, generally leaving the reporting functionality enabled by default.
  • Provided an alternate Catalog that can be populated directly from the configured Benchmark database. The old catalog was proxied through HSQLDB -- this remains an option for DBMSes that may have incomplete catalog support.

Citing This Repository

If you use this repository in an academic paper, please cite this repository:

D. E. Difallah, A. Pavlo, C. Curino, and P. Cudré-Mauroux, "OLTP-Bench: An Extensible Testbed for Benchmarking Relational Databases," PVLDB, vol. 7, iss. 4, pp. 277-288, 2013.

The BibTeX is provided below for convenience.

@article{DifallahPCC13,
  author = {Djellel Eddine Difallah and Andrew Pavlo and Carlo Curino and Philippe Cudr{\'e}-Mauroux},
  title = {OLTP-Bench: An Extensible Testbed for Benchmarking Relational Databases},
  journal = {PVLDB},
  volume = {7},
  number = {4},
  year = {2013},
  pages = {277--288},
  url = {http://www.vldb.org/pvldb/vol7/p277-difallah.pdf},
}


BenchBase Issues

Question about workload access distribution

Hi guys, nice tool. Could you tell me how you deal with access distribution? Is it possible to parameterize the benchmark to use, for example, random or Zipfian distributions, or is it managed internally by benchmark-specific configuration?

Thanks for help.
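For context, a skewed access pattern like the Zipfian distribution asked about above can be sketched as follows. This is an illustrative, stand-alone sampler, not BenchBase's actual implementation (BenchBase's distributions live inside the individual benchmark implementations):

```java
import java.util.Random;

// Illustrative sketch only: a minimal Zipfian sampler of the kind used to
// skew key popularity in benchmarks such as YCSB.
class ZipfSampler {
    private final double[] cdf;   // cumulative probability per rank
    private final Random rng;

    ZipfSampler(int n, double exponent, long seed) {
        double[] w = new double[n];
        double total = 0;
        for (int i = 0; i < n; i++) {
            w[i] = 1.0 / Math.pow(i + 1, exponent);  // rank-based weight
            total += w[i];
        }
        cdf = new double[n];
        double running = 0;
        for (int i = 0; i < n; i++) {
            running += w[i] / total;
            cdf[i] = running;
        }
        rng = new Random(seed);
    }

    // Returns a rank in [0, n); rank 0 is the most popular key.
    int next() {
        double u = rng.nextDouble();
        for (int i = 0; i < cdf.length; i++) {
            if (u <= cdf[i]) {
                return i;
            }
        }
        return cdf.length - 1;
    }
}
```

With exponent 1.0, rank 0 is requested roughly ten times as often as rank 9 over a long run, which is the kind of skew that distinguishes a Zipfian workload from a uniform-random one.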

Stop Benchmark on SQLException for Testing

Right now we will print but ignore any SQLException that occurs during a benchmark. This causes us to incorrectly mark benchmarks as passing in tests when they really did not. We should add a commandline flag that causes the benchmark to fail if any such error occurs.

Standardize SQL PrettyPrinter + Generic DDLs

  1. We should use a prettyprinter for the SQLStmt queries and DDL files. I did this for the TPC-H queries because the old formatting was atrocious. But I was not able to find an online prettyprinter that had the right formatting.

  2. We need to make sure that the generic DDLs do not use DROP TABLE...CASCADE. This is a minor thing that is not supported in SQLite. We should probably codify the rules of what is the minimum SQL features for these DDL files. @timveil already did the right thing and made it so that MySQL is no longer assumed to be the generic DDL.

Dynamically import benchmark as opposed to having to statically compile it in

If I want to create a private benchmark, right now it seems like the only way is to fork benchbase and add the custom benchmark. It would be nice if that code can be maintained separately, and upstream-benchbase can be used directly.

A low priority feature request, but may help the adoption of benchbase a bit more.

Remove references to Ivy/Ant from project

Is it possible to remove ivy from this project? I see that it is used by a Dockerfile and specifically its referenced .deploy directory. If this Dockerfile is useful then at least it should be migrated to use Maven instead of Ant. Seems duplicative to have both ivy and maven listing and managing dependencies. Thoughts?

support comma separated host:port combinations in <url>

Today, valid JDBC URLs that include multiple host:port combinations are not properly handled by BenchBase. An example of a valid yet unsupported URL is jdbc:postgresql://localhost:26257,localhost:26259,localhost:26261/benchbase?sslmode=disable&amp;ApplicationName=seats&amp;reWriteBatchedInserts=true&amp;loadBalanceHosts=true. In this instance the postgres driver should load balance connections across these three instances.

However, because we have enabled the LegacyListDelimiterHandler in the XMLConfiguration object, the url value is split at the comma, generating an invalid URL.

The goal would be to support comma separated hosts in the url parameter and continue to properly support other comma separated values which should be treated as lists.
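One possible shape for that fix, sketched in plain Java with hypothetical names (BenchBase's actual configuration handling goes through Commons Configuration's list-delimiter handlers, not this class): split most values at commas, but exempt designated keys such as url.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the desired behavior, not BenchBase's actual code:
// treat most comma-separated values as lists, but leave designated keys
// (like "url") unsplit so multi-host JDBC URLs survive intact.
class ConfigValueSplitter {
    private static final Set<String> UNSPLIT_KEYS = Set.of("url");

    static List<String> values(String key, String raw) {
        if (UNSPLIT_KEYS.contains(key)) {
            return List.of(raw);  // keep multi-host URLs whole
        }
        return Arrays.asList(raw.split("\\s*,\\s*"));
    }
}
```

A key-based exemption keeps existing comma-separated values (weights, transaction lists) working while letting the url element pass through verbatim.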

Auctionmark shows many SQLExceptions!

Hi there,
I recently installed BenchBase (hoping to get rid of some errors from OLTPBench), but unfortunately I still get errors, though of course different ones than before. In this issue, I focus on AuctionMark on a MySQL DB. I set the scale factor to 100 to make around 25GB of tables instead of 260MB. The creation phase goes OK, but the execution phase shows many WARNINGs with SQLExceptions, which does not look good.
Here is one of the output messages:

[WARN ] 2021-09-15 00:31:25,379 [AuctionMarkWorker<000>]  com.oltpbenchmark.api.Worker doWork - SQLException occurred during [com.oltpbenchmark.benchmarks.auctionmark.procedures.NewItem/07] and will not be retried... sql state [23000], error code [1062].
java.sql.SQLIntegrityConstraintViolationException: Duplicate entry '316659366627087-17827599' for key 'item.PRIMARY'
        at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:117)
        at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1092)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1040)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1350)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdate(ClientPreparedStatement.java:1025)
        at com.oltpbenchmark.benchmarks.auctionmark.procedures.NewItem.run(NewItem.java:172)
        at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkWorker.executeNewItem(AuctionMarkWorker.java:748)
        at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkWorker.executeWork(AuctionMarkWorker.java:371)
        at com.oltpbenchmark.api.Worker.doWork(Worker.java:357)
        at com.oltpbenchmark.api.Worker.run(Worker.java:269)
        at java.base/java.lang.Thread.run(Thread.java:829)

The config file that I run is modified as follows (compared to the default):

<url>jdbc:mysql://127.0.0.1:3306/benchbase?rewriteBatchedStatements=true&amp;allowPublicKeyRetrieval=true&amp;autoReconnect=true&amp;useSSL=false</url>

and scale-factor is changed from "1" to "100". I use a CentOS 8 machine with Java 11 installed.
Thanks in advance

Question: How to send a query to multiple nodes

When benchmarking a database that has multiple nodes accepting queries, such as CockroachDB, how do you expect queries to be distributed to those nodes? I think the simplest solution is to run HAProxy on a machine running BenchBase.

Improve GitHub actions CI workflow

Today the maven action has a build-and-upload job and separate jobs for each supported database. The problem is that each job independently checks out and builds the project, leading to considerable wasted time and overhead. My recommendation is to make all "database" jobs dependent on build-and-upload and force them to download the previously uploaded artifact instead of rebuilding it. I've tested this approach in my original fork and it works well. Thoughts @apavlo or @lmwnshn ?

Error on Creating SEATS benchmark table

Hi again,
Similar to the previous issue I posted, running the "seats" benchmark on a MySQL DB fails even in the "CREATE" phase. The main thing modified in the default config script is the scale factor; I have set it to 100.
After making tables with about 5GB of space, the creation phase failed with the following errors.

[INFO ] 2021-09-15 01:02:43,297 [main]  com.oltpbenchmark.DBWorkload main - Loading data into SEATS database...
Exception in thread "main" java.lang.RuntimeException: Failed to execute threads: Failed to load data files for scaling-sized table 'flight'
        at com.oltpbenchmark.util.ThreadUtil.run(ThreadUtil.java:106)
        at com.oltpbenchmark.util.ThreadUtil.runNewPool(ThreadUtil.java:70)
        at com.oltpbenchmark.api.BenchmarkModule.loadDatabase(BenchmarkModule.java:286)
        at com.oltpbenchmark.DBWorkload.runLoader(DBWorkload.java:598)
        at com.oltpbenchmark.DBWorkload.main(DBWorkload.java:418)
Caused by: java.lang.RuntimeException: Failed to load data files for scaling-sized table 'flight'
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader.loadScalingTable(SEATSLoader.java:497)
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader$8.load(SEATSLoader.java:307)
        at com.oltpbenchmark.api.LoaderThread.run(LoaderThread.java:45)
        at com.oltpbenchmark.util.ThreadUtil$LatchRunnable.run(ThreadUtil.java:139)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: Failed to load table flight
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader.loadTable(SEATSLoader.java:638)
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader.loadScalingTable(SEATSLoader.java:495)
        ... 6 more
Caused by: java.lang.IllegalArgumentException: FlightId value at position 3 is 65538. Max value is 65535

        at com.oltpbenchmark.util.CompositeId.encode(CompositeId.java:60)
        at com.oltpbenchmark.benchmarks.seats.util.FlightId.encode(FlightId.java:86)
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader$FlightIterable.specialValue(SEATSLoader.java:1358)
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader$ScalingDataIterable$1.next(SEATSLoader.java:841)
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader$ScalingDataIterable$1.next(SEATSLoader.java:827)
        at com.oltpbenchmark.benchmarks.seats.SEATSLoader.loadTable(SEATSLoader.java:550)

Any ideas would be welcomed.
Thanks
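The IllegalArgumentException above is characteristic of fixed-width bit-packing: SEATS encodes several id components into one long, and a component that outgrows its field (here, 2^16 - 1 = 65535) overflows at high scale factors. A self-contained sketch of the technique follows; the 16-bit field width is for illustration and does not reflect SEATS' actual CompositeId layout:

```java
// Illustrative bit-packing sketch, not SEATS' actual CompositeId: each
// component gets a fixed number of bits, so any value above 2^BITS - 1
// triggers exactly the kind of range error shown in the trace above.
class PackedId {
    private static final int BITS = 16;
    private static final long MAX = (1L << BITS) - 1;  // 65535

    static long encode(long[] components) {
        long packed = 0;
        for (int i = 0; i < components.length; i++) {
            if (components[i] > MAX) {
                throw new IllegalArgumentException("value at position " + i
                    + " is " + components[i] + ". Max value is " + MAX);
            }
            packed = (packed << BITS) | components[i];
        }
        return packed;
    }

    static long[] decode(long packed, int n) {
        long[] out = new long[n];
        for (int i = n - 1; i >= 0; i--) {
            out[i] = packed & MAX;
            packed >>>= BITS;
        }
        return out;
    }
}
```

Widening the overflowing component's field is the usual fix, at the cost of fewer bits for the other components of the packed id.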

add support and examples for running benchmarks with maven

Would like to be able to run benchmarks directly from Maven without having to first package BenchBase. This would be similar to running the benchmarks directly from an IDE. For example, something like this should be possible to run ycsb against postgres...

 mvn compile exec:java -Dexec.args="-b ycsb -c config/postgres/sample_ycsb_config.xml --create=true --load=true --execute=true" -P postgres

Adding tpcc support for Apache Phoenix

Apache Phoenix is a SQL layer on top of the popular NoSQL database Apache HBase. Apache Phoenix also has transactional support through integration with projects like Apache Omid and Apache Tephra. This tool would be very helpful for running TPC-C benchmarks on Apache Phoenix, so I would like to add support for it.
But Phoenix has UPSERT-only semantics, so the only thing obstructing TPC-C support is the INSERT keyword used in SQLUtil.getInsertSQL to generate INSERT statements. I would like to make changes so that the insert keyword is pluggable.

Already raised PRs oltpbenchmark/oltpbench#366 at OltpBench would like to port here.
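The pluggable-keyword idea can be sketched as follows; the class and method names here are illustrative rather than SQLUtil's actual signature:

```java
// Hedged sketch: generate the row-insertion statement with a
// dialect-supplied verb so Phoenix can use UPSERT where other DBMSs use
// INSERT. Not BenchBase's actual SQLUtil API.
class InsertSqlBuilder {
    static String build(String verb, String table, int columns) {
        StringBuilder sql = new StringBuilder(verb)
            .append(" INTO ").append(table).append(" VALUES (");
        for (int i = 0; i < columns; i++) {
            sql.append(i == 0 ? "?" : ", ?");
        }
        return sql.append(")").toString();
    }
}
```

A dialect would then supply "UPSERT" or "INSERT" as the verb, leaving the rest of the statement-generation logic shared.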

Bug in ThreadUtil causing benchmarks to fail at high scale factor

I've discovered a problem with ThreadUtil that's exposed during high SF runs... my tests were done at 250 SF on the seats benchmark. What I observed is that not all loading threads would be permitted to finish, even though the logic appeared to be written to wait until all loaders had counted down their latches, resulting in a rather confusing set of failures. Here is a snippet of the current logic...

        final long start = System.currentTimeMillis();
        final int num_threads = runnables.size();
        final CountDownLatch latch = new CountDownLatch(num_threads);
        LatchedExceptionHandler handler = new LatchedExceptionHandler(latch);

        if (LOG.isDebugEnabled()) {
            LOG.debug(String.format("Executing %d threads and blocking until they finish", num_threads));
        }
        for (R r : runnables) {
            pool.execute(new LatchRunnable(r, latch, handler));
        }

        pool.shutdown();


        try {
            latch.await();
        } catch (InterruptedException ex) {
            LOG.error("ThreadUtil.run() was interrupted!", ex);
            throw new RuntimeException(ex);
        } finally {
            if (handler.hasError()) {
                String msg = "Failed to execute threads: " + handler.getLastError().getMessage();
                throw new RuntimeException(msg, handler.getLastError());
            }
        }
        if (LOG.isDebugEnabled()) {
            final long stop = System.currentTimeMillis();
            LOG.debug(String.format("Finished executing %d threads [time=%.02fs]",
                    num_threads, (stop - start) / 1000d));
        }

The problem is that pool.shutdown() is called before waiting for the latches to finish. Perhaps this isn't a problem during small SF runs, but as SF increases this causes issues. As described in the javadoc for shutdown(): "This method does not wait for previously submitted tasks to complete execution. Use awaitTermination to do that." Thus loader tasks are potentially killed before they are complete. The fix reorders this logic so that shutdown isn't called until all latches are finished. In addition to this fix, I added some improved logging, wrapped important code in finally blocks, and removed a lot of confusing rethrows of exceptions that were wrapped in unnecessary RuntimeExceptions.
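The reordering described above can be sketched in a self-contained form; this mirrors the described fix, not BenchBase's exact code:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the fix: block on the latch until every task has counted down,
// and only then shut the pool down, so no loader can be cut off early.
class LatchedRunner {
    static void runAll(List<Runnable> tasks, int poolSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        CountDownLatch latch = new CountDownLatch(tasks.size());
        try {
            for (Runnable t : tasks) {
                pool.execute(() -> {
                    try {
                        t.run();
                    } finally {
                        latch.countDown();  // count down even if the task fails
                    }
                });
            }
            latch.await();  // wait for every task to finish...
        } finally {
            pool.shutdown();  // ...and only then shut the pool down
        }
    }
}
```

Placing shutdown() in a finally block after await() guarantees the pool is released while still ensuring no submitted task is abandoned mid-run.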

The project does not build under Java 11.

If you try to build the project, you get an error:

java: error: release version 17 not supported
Module benchbase SDK 11 is not compatible with the source version 17.
Upgrade Module SDK in project settings to 17 or higher. Open project settings.

Plus, there are several places in the codebase (mostly switches) that use features only available in newer Java versions.

We need to either change the Readme:

* Built with and for Java 11.

or make edits to the code.

Add configurable remote warehouse ratio for New Order transactions of TPC-C

I would like to support configurable remote warehouse ratios to evaluate distributed transaction processing performance and am considering how this could be implemented in BenchBase.

I was able to find the part that gets the configuration values from the XML, but my concern is that implementing a configurable remote warehouse ratio here would affect all other workloads.

Do you have any ideas for implementing such a feature?

diff --git a/src/main/java/com/oltpbenchmark/DBWorkload.java b/src/main/java/com/oltpbenchmark/DBWorkload.java
index df41b301..7d560e84 100644
--- a/src/main/java/com/oltpbenchmark/DBWorkload.java
+++ b/src/main/java/com/oltpbenchmark/DBWorkload.java
@@ -210,6 +210,11 @@ public class DBWorkload {
                     postExecutionWait = xmlConfig.getLong(key + "/postExecutionWait");
                 }
 
+                double remoteWarehouseRatio = 1.0;
+                if (xmlConfig.containsKey(key + "/remoteWarehouseRatio")) {
+                    remoteWarehouseRatio = xmlConfig.getDouble(key + "/remoteWarehouseRatio");
+                }
+
                 TransactionType tmpType = bench.initTransactionType(txnName, txnId + txnIdOffset, preExecutionWait, postExecutionWait);
 
                 // Keep a reference for filtering

clarify metrics / collectors

Hey, I've been looking at benchbase, and playing around with it for a bit and of course I've read the \cite{DifallahPCC13} paper.

From that paper, I get the impression that oltpbench, and subsequently benchbase, can gather metrics, such as OS level metrics like disk throughput, disk IOPS (like in Figure 5 in the paper), or DB level metrics, like statistics about internal buffers (like in Figure 6).

When running benchbase, I end up with some files in the results directory, one of which contains some database internal metrics gathered at the end of the benchmark run (In my case, a JSON dump of most of the pg_stat_* views).
Looking at the code a little bit, I've come to the conclusion that there are no other metrics, and, in particular, no metrics per second.

Having more metrics would be very interesting, especially periodical metrics over the runtime of the benchmark, like it is suggested in the paper.
When using an external metrics gatherer care needs to be taken to align the OS/DB metrics with the benchmark metrics (req/sec, latencies) to the same timeframe.

I just want to confirm that I'm not missing the secret dial to turn on more exhaustive metrics before I start gathering metrics with external tools.

Please, for someone searching for this in the future, also tell us whether such metrics gathering is even in the scope of this project, or if the recommendation is to use external tools.
I understand that metrics gathering itself is almost as big a challenge as benchmarking, there are so many different things to consider, everyone wants to see different things and they are all in different places in different operating systems or DBMS.

Detect closed connections and automatically attempt to reconnect them.

The Worker creates a connection here in its constructor.

However, if the connection is closed by the DBMS (e.g., hammer a CPU-limited PostgreSQL read replica with reads to the point of replication replay being blocked), the Worker does not recover and tries to execute on the closed connection.

We would like the closed connection to be reopened. Maybe add a setting for whether this behavior should happen.
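A minimal sketch of the requested behavior, using a stand-in Conn interface so the example is self-contained (in BenchBase this would wrap java.sql.Connection and check isClosed()/isValid() before each transaction):

```java
import java.util.function.Supplier;

// Hypothetical sketch, not BenchBase's Worker: lazily reopen the
// connection whenever the DBMS has dropped it.
class ReconnectingWorker {
    // Stand-in for java.sql.Connection so the sketch is self-contained.
    interface Conn {
        boolean isClosed();
    }

    private final Supplier<Conn> opener;
    private Conn current;

    ReconnectingWorker(Supplier<Conn> opener) {
        this.opener = opener;
        this.current = opener.get();
    }

    // Returns a live connection, reopening if the old one was closed.
    Conn connection() {
        if (current == null || current.isClosed()) {
            current = opener.get();
        }
        return current;
    }
}
```

Whether to reconnect automatically could itself be a configuration setting, as the issue suggests, since silently reopening connections can mask real infrastructure problems during a benchmark run.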

Deadlock WorkloadState/BenchmarkState deadly embrace

When running with a large number of terminals with the wikipedia benchmark I'm regularly seeing the following java deadlock:

Found one Java-level deadlock:
=============================
"main":
  waiting to lock monitor 0x00007f6dc8000be0 (object 0x00000000874f0098, a com.oltpbenchmark.WorkloadState),
  which is held by "WikipediaWorker<053>"

"WikipediaWorker<053>":
  waiting to lock monitor 0x00007f6dcc000be0 (object 0x00000000874f00c8, a com.oltpbenchmark.BenchmarkState),
  which is held by "main"

Java stack information for the threads listed above:
===================================================
"main":
	at com.oltpbenchmark.ThreadBench.runRateLimitedMultiPhase(ThreadBench.java:212)
	- waiting to lock <0x00000000874f0098> (a com.oltpbenchmark.WorkloadState)
	- locked <0x00000000874f00c8> (a com.oltpbenchmark.BenchmarkState)
	at com.oltpbenchmark.ThreadBench.runRateLimitedBenchmark(ThreadBench.java:54)
	at com.oltpbenchmark.DBWorkload.runWorkload(DBWorkload.java:646)
	at com.oltpbenchmark.DBWorkload.main(DBWorkload.java:460)
"WikipediaWorker<053>":
	at com.oltpbenchmark.BenchmarkState.getState(BenchmarkState.java:55)
	- waiting to lock <0x00000000874f00c8> (a com.oltpbenchmark.BenchmarkState)
	at com.oltpbenchmark.WorkloadState.fetchWork(WorkloadState.java:142)
	- locked <0x00000000874f0098> (a com.oltpbenchmark.WorkloadState)
	at com.oltpbenchmark.api.Worker.run(Worker.java:233)
	at java.lang.Thread.run(java.base/Thread.java:833)

Found 1 deadlock.

For reference this is my workload configuration file:

<?xml version="1.0"?>
<parameters>

    <!-- Connection details -->
    <type>MYSQL</type>
    <driver>com.mysql.cj.jdbc.Driver</driver>
    <url>jdbc:mysql://ajm-mysql-2:3306/benchbase?rewriteBatchedStatements=true&amp;sslMode=DISABLED</url>
    <username>admin</username>
    <password></password>
    <isolation>TRANSACTION_REPEATABLE_READ</isolation>
    <batchsize>128</batchsize>

    <!-- Scale factor is the number of wikipages *1000 -->
    <scalefactor>10</scalefactor>

    <!-- The workload -->
    <terminals>96</terminals>
    <works>
        <work>
            <time>60</time>
            <rate>30000</rate>
            <weights>1,1,7,90,1</weights>
        </work>
        <work>
            <time>60</time>
            <rate>35000</rate>
            <weights>1,1,7,90,1</weights>
        </work>
        <work>
            <time>60</time>
            <rate>40000</rate>
            <weights>1,1,7,90,1</weights>
        </work>
    </works>

    <!-- Wikipedia Procedures Declaration -->
    <transactiontypes>
        <transactiontype>
            <name>AddWatchList</name>
        </transactiontype>
        <transactiontype>
            <name>RemoveWatchList</name>
        </transactiontype>
        <transactiontype>
            <name>UpdatePage</name>
        </transactiontype>
        <transactiontype>
            <name>GetPageAnonymous</name>
        </transactiontype>
        <transactiontype>
            <name>GetPageAuthenticated</name>
        </transactiontype>
    </transactiontypes>
</parameters>
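The classic remedy for this kind of deadly embrace is to nest the two monitors in one global order everywhere, instead of main taking BenchmarkState then WorkloadState while workers take the reverse. A generic lock-ordering sketch (not the actual BenchBase fix) looks like this:

```java
// Generic lock-ordering sketch: every caller nests the two monitors in the
// same order, so the A-then-B vs B-then-A embrace cannot occur. A real
// implementation also needs a tie-breaker for equal identity hashes.
class OrderedLocks {
    static void withBoth(Object first, Object second, Runnable body) {
        Object a = System.identityHashCode(first) <= System.identityHashCode(second)
            ? first : second;
        Object b = (a == first) ? second : first;
        synchronized (a) {
            synchronized (b) {
                body.run();
            }
        }
    }
}
```

The alternative is to restructure the code so that no thread ever holds both monitors at once, which is often the cleaner fix when the two states can be updated independently.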

Add Google Spanner support

Would like to begin adding basic Spanner support; specifically for YCSB. This should involve adding an enum to DatabaseType and some example configurations

Question: Are the replicas exactly the same in a distributed environment?

Hi, everybody! I am new to distributed databases, and I have a question to ask everyone about the test benchmarks such as TPCC and YCSB.

In a distributed setting, if a benchmark is used to generate test data, is the data exactly the same for each replica in the cluster? Or will the data be stored in different replicas according to business scenarios, as needed in reality?

Thanks!

ERROR: relation "HTABLE" does not exist - HYADAPT for Postgres

The file dialect-postgres.xml quotes the table name in all the queries... FROM "HTABLE". Because PostgreSQL treats quoted identifiers as case-sensitive, this causes the following error, which is now properly logged due to #95. In my original refactor this dialect was removed; curious why it was added back. Updating the dialect file to use FROM htable resolves the problem. Thoughts @lmwnshn

[WARN ] 2021-12-27 19:25:42,020 [HYADAPTWorker<000>]  com.oltpbenchmark.api.Worker doWork - SQLException occurred during [com.oltpbenchmark.benchmarks.hyadapt.procedures.ReadRecord5/15] and will not be retried... sql state [42P01], error code [0].
org.postgresql.util.PSQLException: ERROR: relation "HTABLE" does not exist
  Position: 1297
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2674)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2364)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:354)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:484)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:404)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:162)
	at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:114)
	at com.oltpbenchmark.benchmarks.hyadapt.procedures.ReadRecord5.run(ReadRecord5.java:43)
	at com.oltpbenchmark.benchmarks.hyadapt.HYADAPTWorker.readRecord5(HYADAPTWorker.java:153)
	at com.oltpbenchmark.benchmarks.hyadapt.HYADAPTWorker.executeWork(HYADAPTWorker.java:66)
	at com.oltpbenchmark.api.Worker.doWork(Worker.java:392)
	at com.oltpbenchmark.api.Worker.run(Worker.java:280)
	at java.base/java.lang.Thread.run(Thread.java:833)
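For reference, the fix amounts to dropping the quotes around the table name in the dialect file. A hypothetical fragment (element names and SQL abbreviated, not copied from the actual dialect file):

```xml
<!-- dialect-postgres.xml, illustrative fragment -->
<statement name="ReadRecord5">
    <!-- was: SELECT ... FROM "HTABLE" ... Quoting forces a case-sensitive
         lookup, which fails because Postgres folds unquoted identifiers to
         lowercase (htable) when the table is created. -->
    SELECT ... FROM htable ...
</statement>
```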

Could not complete Mojo execution...

After running:
git clone --depth 1 https://github.com/cmu-db/benchbase.git
cd benchbase
./mvnw clean package

Returns:
[INFO] Scanning for projects...
[INFO]
[INFO] --------------------< com.oltpbenchmark:benchbase >---------------------
[INFO] Building BenchBase 2021-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ benchbase ---
[INFO]
[INFO] --- git-commit-id-plugin:4.9.10:revision (get-the-git-infos) @ benchbase ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.059 s
[INFO] Finished at: 2021-12-29T11:10:32-05:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal pl.project13.maven:git-commit-id-plugin:4.9.10:revision (get-the-git-infos) on project benchbase: Could not complete Mojo execution...: pl.project13.core.NativeGitProvider$NativeCommandException: Git command exited with invalid status [128]: directory: /root/benchbase, command: git log -1 --pretty=format:%an --no-show-signature HEAD, stdout: ``, stderr: fatal: unrecognized argument: --no-show-signature -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

aborted transactions in SEATS

There are a number of instances in SEATS where a UserAbortException is thrown when simply returning from the method would suffice. This is particularly prevalent in UpdateReservation, DeleteReservation, and UpdateCustomer. This issue tracks the PR that will attempt to reduce the frequency of aborted transactions in the SEATS benchmark.
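A self-contained sketch of the proposed change; UserAbortExceptionStub stands in for BenchBase's UserAbortException, and the actual UPDATE is elided. The point is that a missing row can be handled by returning early rather than aborting the whole transaction:

```java
// Illustrative only; method names are modeled on the issue text, not copied
// from BenchBase's SEATS procedures.
final class ReservationSketch {

    static class UserAbortExceptionStub extends RuntimeException {
        UserAbortExceptionStub(String msg) { super(msg); }
    }

    /** Current behavior: a missing reservation aborts the whole transaction. */
    static boolean updateReservationOld(Long reservationId) {
        if (reservationId == null) {
            throw new UserAbortExceptionStub("no reservation found");
        }
        return true; // would execute the UPDATE here
    }

    /** Proposed behavior: escape the method; no abort is recorded. */
    static boolean updateReservationNew(Long reservationId) {
        if (reservationId == null) {
            return false; // nothing to do
        }
        return true; // would execute the UPDATE here
    }
}
```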

YCSB table creation never ends on Cloud Spanner

I am trying to benchmark Cloud Spanner with YCSB; however, the YCSB table creation never finishes.

$ java -jar benchbase.jar -b ycsb -c config/spanner/ycsb_config.xml --create=true --load=true --execute=true
[DEBUG] 2021-12-17 05:56:26,426 [main]  com.oltpbenchmark.api.StatementDialects getSQLDialectPath - No dialect file in benchmarks/ycsb/dialect-spanner.xml
[DEBUG] 2021-12-17 05:56:26,428 [main]  com.oltpbenchmark.api.StatementDialects load - SKIP - No SQL dialect file was given.
[INFO ] 2021-12-17 05:56:26,440 [main]  com.oltpbenchmark.DBWorkload main - ======================================================================

Benchmark:     YCSB {com.oltpbenchmark.benchmarks.ycsb.YCSBBenchmark}
Configuration: config/spanner/ycsb_config.xml
Type:          SPANNER
Driver:        com.google.cloud.spanner.jdbc.JdbcDriver
URL:           jdbc:cloudspanner:/projects/foo/instances/bar/databases/sira?credentials=/Users/sira/Code/benchbase/config/spanner/sira-personal-project-48ff2575be0b.json
Isolation:     TRANSACTION_SERIALIZABLE
Batch Size:    128
Scale Factor:  1.0
Terminals:     1

[INFO ] 2021-12-17 05:56:26,440 [main]  com.oltpbenchmark.DBWorkload main - ======================================================================
[DEBUG] 2021-12-17 05:56:26,464 [main]  com.oltpbenchmark.DBWorkload main - Using the following transaction types: [com.oltpbenchmark.api.TransactionType$Invalid/00, com.oltpbenchmark.benchmarks.ycsb.procedures.ReadRecord/01, com.oltpbenchmark.benchmarks.ycsb.procedures.InsertRecord/02, com.oltpbenchmark.benchmarks.ycsb.procedures.ScanRecord/03, com.oltpbenchmark.benchmarks.ycsb.procedures.UpdateRecord/04, com.oltpbenchmark.benchmarks.ycsb.procedures.DeleteRecord/05, com.oltpbenchmark.benchmarks.ycsb.procedures.ReadModifyWriteRecord/06]
[DEBUG] 2021-12-17 05:56:26,465 [main]  com.oltpbenchmark.DBWorkload main - Num groupings: 0
[DEBUG] 2021-12-17 05:56:26,474 [main]  com.oltpbenchmark.DBWorkload isBooleanOptionSet - CommandLine has option 'create'. Checking whether set to true
[DEBUG] 2021-12-17 05:56:26,474 [main]  com.oltpbenchmark.DBWorkload isBooleanOptionSet - CommandLine create => true
[INFO ] 2021-12-17 05:56:26,474 [main]  com.oltpbenchmark.DBWorkload main - Creating new YCSB database...
[DEBUG] 2021-12-17 05:56:26,474 [main]  com.oltpbenchmark.DBWorkload runCreator - Creating ycsb Database
[DEBUG] 2021-12-17 05:56:27,802 [main]  com.oltpbenchmark.api.BenchmarkModule createDatabase - Executing script [benchmarks/ycsb/ddl-spanner.sql] for database type [SPANNER]
[DEBUG] 2021-12-17 05:56:27,802 [main]  com.oltpbenchmark.util.ScriptRunner runScript - trying to find file by path benchmarks/ycsb/ddl-spanner.sql
[DEBUG] 2021-12-17 05:56:27,804 [main]  com.oltpbenchmark.util.ScriptRunner runScript - CREATE TABLE usertable (
[DEBUG] 2021-12-17 05:56:27,804 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     ycsb_key INT64,
[DEBUG] 2021-12-17 05:56:27,804 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field1   STRING(100),
[DEBUG] 2021-12-17 05:56:27,804 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field2   STRING(100),
[DEBUG] 2021-12-17 05:56:27,805 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field3   STRING(100),
[DEBUG] 2021-12-17 05:56:27,805 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field4   STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field5   STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field6   STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field7   STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field8   STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field9   STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript -     field10  STRING(100),
[DEBUG] 2021-12-17 05:56:27,806 [main]  com.oltpbenchmark.util.ScriptRunner runScript - ) PRIMARY KEY (ycsb_key);

(never ends...)

I have already checked that the credentials work and that the same table can be created using simple standalone code.

I investigated the source code but couldn't make any progress on this problem.

Random failures during TestAuctionMarkLoader

The following error appears during the Maven test execution that runs as part of the GitHub Actions builds. It appears to be caused by AuctionMarkLoader generating the category_id with a random number generator whose upper bound includes the row count of the category table: int category_id = profile.rng.number(0, (int) this.num_categories);. This sometimes assigns 19459 to category_id, which is not a valid id (the maximum valid category_id is always 19458). I believe the correct code should be int category_id = profile.rng.number(0, ((int) this.num_categories - 1));.

Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 25.61 sec <<< FAILURE!
testLoad(com.oltpbenchmark.benchmarks.auctionmark.TestAuctionMarkLoader)  Time elapsed: 0.217 sec  <<< ERROR!
java.lang.RuntimeException: Failed to execute threads: Unexpected error while generating table data for 'global_attribute_group'
	at com.oltpbenchmark.util.ThreadUtil.run(ThreadUtil.java:106)
	at com.oltpbenchmark.util.ThreadUtil.runNewPool(ThreadUtil.java:70)
	at com.oltpbenchmark.api.BenchmarkModule.loadDatabase(BenchmarkModule.java:291)
	at com.oltpbenchmark.api.AbstractTestLoader.testLoad(AbstractTestLoader.java:67)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at junit.framework.TestCase.runTest(TestCase.java:177)
	at junit.framework.TestCase.runBare(TestCase.java:142)
	at junit.framework.TestResult$1.protect(TestResult.java:122)
	at junit.framework.TestResult.runProtected(TestResult.java:142)
	at junit.framework.TestResult.run(TestResult.java:125)
	at junit.framework.TestCase.run(TestCase.java:130)
	at junit.framework.TestSuite.runTest(TestSuite.java:241)
	at junit.framework.TestSuite.run(TestSuite.java:236)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:90)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.lang.RuntimeException: Unexpected error while generating table data for 'global_attribute_group'
	at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader$AbstractTableGenerator.load(AuctionMarkLoader.java:393)
	at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader$CountdownLoaderThread.load(AuctionMarkLoader.java:152)
	at com.oltpbenchmark.api.LoaderThread.run(LoaderThread.java:45)
	at com.oltpbenchmark.util.ThreadUtil$LatchRunnable.run(ThreadUtil.java:139)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.sql.BatchUpdateException: integrity constraint violation: foreign key no parent; SYS_FK_10140 table: GLOBAL_ATTRIBUTE_GROUP value: 19459
	at org.hsqldb.jdbc.JDBCPreparedStatement.executeBatch(Unknown Source)
	at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader.generateTableData(AuctionMarkLoader.java:245)
	at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader$AbstractTableGenerator.load(AuctionMarkLoader.java:391)
	... 6 more
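The inclusive upper bound described above can be demonstrated in isolation. This is a standalone sketch: the number method here mimics the assumed inclusive semantics of profile.rng.number; it is not BenchBase's implementation.

```java
import java.util.Random;

// Demo of the off-by-one: with an inclusive upper bound, number(0, n) can
// return n itself, which is one past the last valid row id (valid ids are
// 0..n-1).
final class CategoryIdDemo {

    /** Inclusive-bounds random int, mimicking profile.rng.number(min, max). */
    static int number(Random rng, int min, int max) {
        return min + rng.nextInt(max - min + 1);
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int numCategories = 19459; // valid ids: 0..19458
        boolean buggyOverflowed = false;
        int fixedMax = 0;
        for (int i = 0; i < 1_000_000; i++) {
            if (number(rng, 0, numCategories) == numCategories) {
                buggyOverflowed = true; // generated the invalid id 19459
            }
            // the proposed fix caps the bound at numCategories - 1:
            fixedMax = Math.max(fixedMax, number(rng, 0, numCategories - 1));
        }
        System.out.println("buggy bound produced invalid id: " + buggyOverflowed);
        System.out.println("fixed bound max id: " + fixedMax);
    }
}
```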

01106f08-12ea-4900-a144-75a7753acab5 Conflicts with higher priority transaction

This issue occurs while running BenchBase with the tpcc and chbenchmark workloads. Any suggestions would be appreciated.

Test case: 1 terminal, running for 300 seconds.

Error:
Caused by: org.postgresql.util.PSQLException: ERROR: Operation failed. Try again.: [Operation failed. Try again. (yb/docdb/conflict_resolution.cc:73): 01106f08-12ea-4900-a144-75a7753acab5 Conflicts with higher priority transaction: bce07dae-e544-41fa-a4c1-14767cf89a72 (transaction error 3)]

classpath conflicts due to transitive dependencies of spanner and mysql drivers

Both the Spanner JDBC driver and the MySQL driver specify com.google.protobuf:protobuf-java as a dependency. MySQL version 8.0.27 depends on protobuf-java 3.11.4, while Spanner version 2.5.5 depends on 3.19.1. When running Spanner tests, the version required by MySQL is loaded, causing NoSuchMethodError exceptions at runtime.
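One possible workaround (a sketch, not the adopted fix) is to exclude the transitive protobuf-java from the MySQL driver in the POM so that only the newer version required by Spanner reaches the classpath; versions are taken from the issue text:

```xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.27</version>
    <exclusions>
        <!-- let the Spanner driver's protobuf-java 3.19.1 win -->
        <exclusion>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

Alternatively, a <dependencyManagement> entry can pin a single protobuf-java version for the whole build.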

Smallbank SendPayment transaction has two identical accounts

    } else if (procClass.equals(SendPayment.class)) {
        this.generateCustIds(true);
        this.procSendPayment.run(conn, this.custIdsBuffer[0], this.custIdsBuffer[0], SmallBankConstants.PARAM_SEND_PAYMENT_AMOUNT);

Smallbank SendPayment transaction has two identical accounts.

I understand this transaction did not exist in the original SmallBank benchmark and was added in BenchBase, but sending a payment from an account to itself does not seem right.
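A hedged sketch of one possible fix: re-draw the second account id until it differs from the first, so SendPayment moves money between two distinct accounts. Names here are illustrative, not BenchBase's actual code.

```java
import java.util.Random;

// Standalone sketch: generate a (from, to) pair of distinct account ids.
final class DistinctAccounts {

    static long[] generateCustIds(Random rng, long numAccounts) {
        long from = Math.floorMod(rng.nextLong(), numAccounts);
        long to;
        do {
            to = Math.floorMod(rng.nextLong(), numAccounts); // re-draw on collision
        } while (to == from);
        return new long[] { from, to };
    }
}
```

The SendPayment call would then pass custIdsBuffer[0] and custIdsBuffer[1] rather than the same index twice.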

aborted transactions in Wikipedia

If you look at any recent run of Wikipedia you will see that 3 of the procedures fail 100% of the time with "aborted" transactions. Upon further inspection I think this was unintended, but I'd like to confirm.

Aborted Transactions:
com.oltpbenchmark.benchmarks.wikipedia.procedures.UpdatePage/03                  [ 4170] ******
com.oltpbenchmark.benchmarks.wikipedia.procedures.GetPageAnonymous/04            [54069] ********************************************************************************
com.oltpbenchmark.benchmarks.wikipedia.procedures.GetPageAuthenticated/05        [  585] 

The cause appears to be that the runner logic creates new page variables when executing the above procedures instead of referencing pages created previously. In other words, instead of doing lookups against existing data, the code generates keys that don't exist in the database, thus always returning 0 records. The fix is to ensure that the above procedures get a handle on an existing page.

I have a fix completed and tested, but I want to open this issue for history and for discussion in case I'm missing something.

`isRetryable` incorrectly returns true if sqlstate is null

As seen in #92, the isRetryable method incorrectly returns true when the sqlState property is null. This is not a valid retry condition, and it led to legitimate SQL errors being logged at DEBUG and thus hard to find. In the issue referenced above, those errors are not retryable and should have been logged at a more severe level.
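A sketch of the corrected check. The method shape is assumed, not copied from BenchBase; the SQL states shown are the standard serialization-failure (40001) and the Postgres deadlock code (40P01):

```java
import java.sql.SQLException;

// Standalone sketch: a null SQL state means we don't know what went wrong,
// so the safe behavior is to NOT retry and to surface the error.
final class RetryCheck {

    static boolean isRetryable(SQLException ex) {
        String state = ex.getSQLState();
        if (state == null) {
            return false; // unknown error: do not silently retry
        }
        return state.equals("40001") || state.equals("40P01");
    }
}
```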

createdatabase fails when `autocommit` is `false`

When running TPC-C against CockroachDB, the recent change that sets autoCommit to false (i.e., explicit transactions) doesn't play well with dropping and recreating the schema. The following error is generated:

[INFO ] 2021-10-06 10:13:13,113 [main]  com.oltpbenchmark.DBWorkload main - Creating new TPCC database...
Exception in thread "main" java.lang.RuntimeException: Unexpected error when trying to create the tpcc database
	at com.oltpbenchmark.api.BenchmarkModule.createDatabase(BenchmarkModule.java:259)
	at com.oltpbenchmark.api.BenchmarkModule.createDatabase(BenchmarkModule.java:237)
	at com.oltpbenchmark.DBWorkload.runCreator(DBWorkload.java:593)
	at com.oltpbenchmark.DBWorkload.main(DBWorkload.java:389)
Caused by: org.postgresql.util.PSQLException: ERROR: table "warehouse" is being dropped, try again later
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2565)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2297)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:322)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:322)
	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:308)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:284)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:279)
	at com.oltpbenchmark.util.ScriptRunner.runScript(ScriptRunner.java:111)
	at com.oltpbenchmark.util.ScriptRunner.runScript(ScriptRunner.java:67)
	at com.oltpbenchmark.api.BenchmarkModule.createDatabase(BenchmarkModule.java:257)

Setting the autoCommit parameter back to true fixes this problem. I think this change either needs to be reverted or needs to be configurable on a per-database basis. This works: ScriptRunner runner = new ScriptRunner(conn, true, true);

TPC-C schema specifies `DECIMAL` data type but jdbc code uses double

I noticed my change here was reverted, and the comment suggests it's because "noisepage" does not support the use of payUpdateWhse.setBigDecimal(1, BigDecimal.valueOf(paymentAmount));. Unfortunately, I don't believe payUpdateWhse.setDouble(1, paymentAmount); is correct, and using double breaks TPC-C for Cockroach. Thoughts?
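For context, a standalone illustration of why a DECIMAL money column should be bound with setBigDecimal rather than setDouble (this demo does not touch BenchBase or JDBC): repeated double arithmetic drifts, while BigDecimal arithmetic stays exact.

```java
import java.math.BigDecimal;

// Accumulate 0.10 a thousand times both ways. The double total drifts away
// from 100.0 because 0.10 has no exact binary representation; the BigDecimal
// total is exactly 100.00.
final class MoneyDemo {

    static double sumDouble(int n) {
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            total += 0.10; // inexact in binary floating point
        }
        return total;
    }

    static BigDecimal sumDecimal(int n) {
        BigDecimal total = BigDecimal.ZERO;
        BigDecimal step = new BigDecimal("0.10");
        for (int i = 0; i < n; i++) {
            total = total.add(step); // exact decimal arithmetic
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumDouble(1000));  // close to, but not exactly, 100.0
        System.out.println(sumDecimal(1000));
    }
}
```

This is why BigDecimal.valueOf(paymentAmount) round-trips the value through its decimal string form before binding it to a DECIMAL column.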

Glitch phenomenon during testing

Hi guys, excellent tool. Could you tell me why this glitch phenomenon appears during testing? I am executing the TPC-C benchmark on PostgreSQL.

I generate the database data in advance and run the following quick-start command in a loop. I run it on a single computer with the default config.xml.

java -jar benchbase.jar -b tpcc -c config/postgres/sample_tpcc_config.xml --execute=true

<!-- The workload -->
<terminals>1</terminals>
<works>
    <work>
        <time>60</time>
        <rate>10000</rate>
        <weights>45,43,4,4,4</weights>
    </work>
</works>

(Fig. 1: attached image "4-9-result-1")

The red line in Fig1 is the throughput, while the blue line is the system's latency. I ran the TPC-C benchmark without modifying any parameters 60 times. Experimental results show large fluctuations in throughput and latency. I wonder why they happen.

Thanks for help.
